Updates from: 09/11/2024 01:06:06
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Ropc Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-ropc-policy.md
Previously updated : 01/11/2024 Last updated : 09/11/2024 zone_pivot_groups: b2c-policy-type
When using the ROPC flow, consider the following limitations:
## Create a resource owner user flow
-1. Sign in to the [Azure portal](https://portal.azure.com) as the **global administrator** of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com) as the [External ID User Flow Administrator](/entra/identity/role-based-access-control/permissions-reference#external-id-user-flow-administrator) of your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select **Azure AD B2C**.
1. Select **User flows**, and select **New user flow**.
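After the user flow is created, you can exercise the ROPC flow by posting the user's credentials directly to the token endpoint. A minimal curl sketch, assuming a tenant named `contoso`, an ROPC user flow named `B2C_1_ROPC_Auth`, and a registered application ID; every value here is an illustrative placeholder:

```bash
# Request tokens directly with the user's credentials (ROPC grant).
# Tenant name, policy name, client_id, username, and password are placeholders.
curl -X POST "https://contoso.b2clogin.com/contoso.onmicrosoft.com/B2C_1_ROPC_Auth/oauth2/v2.0/token" \
  -d "grant_type=password" \
  -d "client_id=<application-id>" \
  -d "scope=openid <application-id> offline_access" \
  -d "username=user@contoso.com" \
  -d "password=<password>" \
  -d "response_type=token id_token"
```

A successful response returns an access token and an ID token, plus a refresh token when `offline_access` is requested.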
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
Previously updated : 01/11/2024 Last updated : 09/11/2024
The following diagram depicts the components you'll configure in your Microsoft
![Resource group projection](./media/azure-monitor/resource-group-projection.png)
-During this deployment, you'll configure your Azure AD B2C tenant where logs are generated. You'll also configure Microsoft Entra tenant where the Log Analytics workspace will be hosted. The Azure AD B2C accounts used (such as your admin account) should be assigned the [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant. The Microsoft Entra account you'll use to run the deployment must be assigned the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Microsoft Entra subscription. It's also important to make sure you're signed in to the correct directory as you complete each step as described.
+During this deployment, you'll configure your Azure AD B2C tenant where logs are generated. You'll also configure the Microsoft Entra tenant where the Log Analytics workspace will be hosted. The Azure AD B2C accounts used (such as your admin account) should be assigned the [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) role on the Azure AD B2C tenant. The Microsoft Entra account you'll use to run the deployment must be assigned the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Microsoft Entra subscription. It's also important to make sure you're signed in to the correct directory as you complete each step as described.
In summary, you'll use Azure Lighthouse to allow a user or group in your Azure AD B2C tenant to manage a resource group in a subscription associated with a different tenant (the Microsoft Entra tenant). After this authorization is completed, the subscription and log analytics workspace can be selected as a target in the Diagnostic settings in Azure AD B2C.

## Prerequisites

-- An Azure AD B2C account with [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) role on the Azure AD B2C tenant.
+- An Azure AD B2C account with [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) role on the Azure AD B2C tenant.
- A Microsoft Entra account with the [Owner](../role-based-access-control/built-in-roles.md#owner) role in the Microsoft Entra subscription. See how to [Assign a user as an administrator of an Azure subscription](../role-based-access-control/role-assignments-portal-subscription-admin.yml).
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
Title: Billing model for Azure Active Directory B2C
description: Learn about Azure AD B2C's monthly active users (MAU) billing model, how to link an Azure AD B2C tenant to an Azure subscription, and how to select the appropriate premium tier pricing.
Previously updated : 01/11/2024 Last updated : 09/11/2024
#Customer intent: As a business decision maker managing an Azure AD B2C tenant, I want to understand the billing model based on monthly active users (MAU), so that I can determine the cost and pricing structure for my Azure AD B2C tenant.
A monthly active user (MAU) is a unique user that performs an authentication wit
If Azure AD B2C [Go-Local add-on](data-residency.md#go-local-add-on) is available in your country/region, and you enable it, you'll be charged per MAU, which is an added charge to your Azure AD B2C [Premium P1 or P2 pricing](https://azure.microsoft.com/pricing/details/active-directory-b2c/) license. Learn more in [About Local Data Residency add-on](#about-go-local-add-on).
-Also, if you choose to provide higher levels of assurance by using Multi-factor Authentication (MFA) for Voice and SMS, you'll be charged a worldwide flat fee for each MFA attempt that month, whether the sign in is successful or unsuccessful.
+Also, if you choose to provide higher levels of assurance by using multifactor authentication (MFA) for Voice and SMS, you'll be charged a worldwide flat fee for each MFA attempt that month, whether the sign in is successful or unsuccessful.
> [!IMPORTANT]
A subscription linked to an Azure AD B2C tenant can be used for the billing of A
1. Select **Create a resource**, and then, in the **Search services and Marketplace** field, search for and select **Azure Active Directory B2C**.
1. Select **Create**.
1. Select **Link an existing Azure AD B2C Tenant to my Azure subscription**.
-1. Select an **Azure AD B2C Tenant** from the dropdown. Only tenants for which you're a global administrator and that aren't already linked to a subscription are shown. The **Azure AD B2C Resource name** field is populated with the domain name of the Azure AD B2C tenant you select.
+1. Select an **Azure AD B2C Tenant** from the dropdown. Only tenants for which you're a Global Administrator and that aren't already linked to a subscription are shown. The **Azure AD B2C Resource name** field is populated with the domain name of the Azure AD B2C tenant you select.
1. Select an active Azure **Subscription** of which you're an owner.
1. Under **Resource group**, select **Create new**, and then specify the **Resource group location**. The resource group settings here have no impact on your Azure AD B2C tenant location, performance, or billing status.
1. Select **Create**.
Before you start the move, be sure to read the entire article to fully understan
If the source and destination subscriptions are associated with different Microsoft Entra tenants, you can't perform the move via Azure Resource Manager as explained above. However, you can still achieve the same result by unlinking the Azure AD B2C tenant from the source subscription and relinking it to the destination subscription. This method is safe because the only object you delete is the *billing link*, not the Azure AD B2C tenant itself. None of the users, apps, user flows, etc. will be affected.
-1. In the Azure AD B2C directory itself, [invite a guest user](user-overview.md#guest-user) from the destination Microsoft Entra tenant (the one that the destination Azure subscription is linked to) and ensure this user has the **Global administrator** role in Azure AD B2C.
+1. In the Azure AD B2C directory itself, [invite a guest user](user-overview.md#guest-user) from the destination Microsoft Entra tenant (the one that the destination Azure subscription is linked to) and ensure this user has the *Global Administrator* role in Azure AD B2C.
1. Navigate to the *Azure resource* representing Azure AD B2C in your source Azure subscription as explained in the [Manage your Azure AD B2C tenant resources](#manage-your-azure-ad-b2c-tenant-resources) section above. Don't switch to the actual Azure AD B2C tenant.
1. Select the **Delete** button on the **Overview** page. This action *doesn't* delete the related Azure AD B2C tenant's users or applications. It merely removes the billing link from the source subscription.
1. Sign in to the Azure portal with the user account that was added as an administrator in Azure AD B2C in step 1. Then navigate to the destination Azure subscription, which is linked to the destination Microsoft Entra tenant.
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
description: Learn how to add Conditional Access to your Azure AD B2C user flows
Previously updated : 01/11/2024 Last updated : 09/11/2024
To review the result of a Conditional Access event:
## Next steps
-[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
+[Customize the user interface in an Azure AD B2C user flow](customize-ui-with-html.md)
active-directory-b2c Custom Policies Series Store User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md
Previously updated : 06/21/2024 Last updated : 09/11/2024
Follow the steps in [Test the custom policy](custom-policies-series-validate-use
After the policy finishes execution and you receive your ID token, check that the user record has been created:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as at least Privileged Role Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
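If you prefer to verify the record programmatically, a hedged alternative is to query Microsoft Graph for the new account. This sketch assumes you already hold an access token with the `User.Read.All` permission; the display name filter value is illustrative:

```bash
# List user accounts whose display name matches the one collected during sign-up.
curl -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/users?\$filter=startswith(displayName,'Maria')"
```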
active-directory-b2c Extensions App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/extensions-app.md
Previously updated : 01/11/2024 Last updated : 09/11/2024
To restore the app using Microsoft Graph, you must restore both the application
To restore the application object:

1. Browse to [https://developer.microsoft.com/en-us/graph/graph-explorer](https://developer.microsoft.com/en-us/graph/graph-explorer).
-1. Log in to the site as a global administrator for the Azure AD B2C directory that you want to restore the deleted app for. This global administrator must have an email address similar to the following: `username@{yourTenant}.onmicrosoft.com`.
+1. Sign in to the site as an [Application Administrator](/entra/identity/role-based-access-control/permissions-reference#application-administrator) for the Azure AD B2C directory that you want to restore the deleted app for.
1. Issue an HTTP GET against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/microsoft.graph.application`. This operation will list all of the applications that have been deleted within the past 30 days. You can also use the URL `https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.application?$filter=displayName eq 'b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.'` to filter by the app's **displayName** property.
1. Find the application in the list where the name begins with `b2c-extensions-app` and copy its `id` property value.
-1. Issue an HTTP POST against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/{id}/restore`. Replace the `{id}` portion of the URL with the `id` from the previous step.]
+1. Issue an HTTP POST against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/{id}/restore`. Replace the `{id}` portion of the URL with the `id` from the previous step.
To restore the service principal object:

1. Issue an HTTP GET against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/microsoft.graph.servicePrincipal`. This operation will list all of the service principals that have been deleted within the past 30 days. You can also use the URL `https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?$filter=displayName eq 'b2c-extensions-app. Do not modify. Used by AADB2C for storing user data.'` to filter by the app's **displayName** property.
1. Find the service principal in the list where the name begins with `b2c-extensions-app` and copy its `id` property value.
1. Issue an HTTP POST against the URL `https://graph.microsoft.com/v1.0/directory/deleteditems/{id}/restore`. Replace the `{id}` portion of the URL with the `id` from the previous step.
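Outside Graph Explorer, the same restore calls can be issued from any HTTP client. A minimal curl sketch of the steps above, assuming an access token with the `Application.ReadWrite.All` permission; the `<id>` value comes from the GET response:

```bash
# List deleted applications from the past 30 days.
curl -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/directory/deleteditems/microsoft.graph.application"

# Restore the application object by id; repeat with the service principal's id.
curl -X POST -H "Authorization: Bearer <access-token>" -H "Content-Length: 0" \
  "https://graph.microsoft.com/v1.0/directory/deleteditems/<id>/restore"
```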
-You should now be able to [see the restored app](#verifying-that-the-extensions-app-is-present) in the Azure portal.
+You should now be able to [see the restored app](#verifying-that-the-extensions-app-is-present) in the Azure portal.
active-directory-b2c Idp Pass Through User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md
Previously updated : 01/11/2024 Last updated : 09/11/2024 zone_pivot_groups: b2c-policy-type
The following diagram shows how an identity provider token returns to your app:
## Enable the claim
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as the [External ID User Flow Administrator](/entra/identity/role-based-access-control/permissions-reference#external-id-user-flow-administrator) of your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **User flows (policies)**, and then select your user flow. For example, **B2C_1_signupsignin1**.
active-directory-b2c Partner Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-twilio.md
Previously updated : 01/11/2024 Last updated : 09/11/2024
The following components make up the Twilio solution:
Add the policy files to Azure AD B2C:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as the [B2C IEF Policy Administrator](/entra/identity/role-based-access-control/permissions-reference#b2c-ief-policy-administrator) of your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Navigate to **Azure AD B2C** > **Identity Experience Framework** > **Policy Keys**.
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
Previously updated : 03/01/2024 Last updated : 09/11/2024
Take the following actions to help mitigate fraudulent sign-ups.
- Remove country codes that aren't relevant to your organization from the drop-down menu where the user verifies their phone number (this change will apply to future sign-ups):
- 1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD B2C tenant.
+ 1. Sign in to the [Azure portal](https://portal.azure.com) as the [External ID User Flow Administrator](/entra/identity/role-based-access-control/permissions-reference#external-id-user-flow-administrator) of your Azure AD B2C tenant.
   1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
   1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
   1. Select the user flow, and then select **Languages**. Select the language for your organization's geographic location to open the language details panel. (For this example, we'll select **English en** for the United States). Select **Multifactor authentication page**, and then select **Download defaults (en)**.
Take the following actions to help mitigate fraudulent sign-ups.
![Country code drop-down](media/phone-based-mfa/country-code-drop-down.png)
-## Next steps
+## Related content
- Learn about [Identity Protection and Conditional Access for Azure AD B2C](conditional-access-identity-protection-overview.md)
-- Apply [Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
+- Apply [Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
active-directory-b2c Tenant Management Check Tenant Creation Permission https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-check-tenant-creation-permission.md
Previously updated : 06/21/2024 Last updated : 09/11/2024
# Review tenant creation permission in Azure Active Directory B2C
-Anyone who creates an Azure Active Directory B2C (Azure AD B2C) becomes the *Global Administrator* of the tenant. It's a security risk if a non-admin user is allowed to create a tenant.
+It's a security risk if a non-admin user in a tenant is allowed to create a tenant. As a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) in an Azure AD B2C tenant, you can restrict non-admin users from creating tenants.
In this article, you learn how, as an admin, you can restrict tenant creation for non-admins. Also, you learn how, as a non-admin user, you can check whether you have permission to create a tenant.
In this article, you learn how, as an admin, you can restrict tenant creation fo
## Restrict non-admin users from creating Azure AD B2C tenants
-As a *Global Administrator* in an Azure AD B2C tenant, you can restrict non-admin users from creating tenants. To do so, use the following steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to the [Azure portal](https://portal.azure.com) as a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator).
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
Before you create an Azure AD B2C tenant, make sure that you've the permission t
1. Under **Manage**, select **User Settings**.
-1. Under **Default user role permissions**, review your **Restrict non-admin users from creating tenants** setting. If the setting is set to **No**, then contact your administrator to assign the tenant creator role to you. The setting is greyed out if you're not an administrator in the tenant.
+1. Under **Default user role permissions**, review your **Restrict non-admin users from creating tenants** setting. If the setting is set to **No**, then contact your administrator to assign you the [Tenant Creator](/entra/identity/role-based-access-control/permissions-reference#tenant-creator) role. The setting is greyed out if you're not an administrator in the tenant.
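If you administer role assignments at scale, the same grant can be scripted through the Microsoft Graph role management API. A hedged sketch, not the article's documented procedure: the role definition ID must be looked up first, every value is a placeholder, and the token needs the `RoleManagement.ReadWrite.Directory` permission:

```bash
# Look up the Tenant Creator role definition id (placeholder token).
curl -G -H "Authorization: Bearer <access-token>" \
  --data-urlencode "\$filter=displayName eq 'Tenant Creator'" \
  "https://graph.microsoft.com/v1.0/roleManagement/directory/roleDefinitions"

# Assign the role to a user at tenant scope.
curl -X POST -H "Authorization: Bearer <access-token>" -H "Content-Type: application/json" \
  -d '{"@odata.type": "#microsoft.graph.unifiedRoleAssignment", "roleDefinitionId": "<role-definition-id>", "principalId": "<user-object-id>", "directoryScopeId": "/"}' \
  "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments"
```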
-## Next steps
+## Related content
- [Read tenant name and ID](tenant-management-read-tenant-name.md)
-- [Clean up resources and delete tenant](tutorial-delete-tenant.md)
+- [Clean up resources and delete tenant](tutorial-delete-tenant.md)
active-directory-b2c Tenant Management Emergency Access Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-emergency-access-account.md
description: Learn how to manage emergency access accounts in Azure AD B2C tenan
Previously updated : 06/21/2024 Last updated : 09/11/2024
Create two or more emergency access accounts. These accounts should be cloud-onl
Use the following steps to create an emergency access account:
-1. Sign in to the [Azure portal](https://portal.azure.com) as an existing Global Administrator. If you use your Microsoft Entra account, make sure you're using the directory that contains your Azure AD B2C tenant:
+1. Sign in to the [Azure portal](https://portal.azure.com) as an existing [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator). If you use your Microsoft Entra account, make sure you're using the directory that contains your Azure AD B2C tenant:
1. Select the **Directories + subscriptions** icon in the portal toolbar.
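Because these accounts are cloud-only, some teams script their creation rather than click through the portal. A hedged Microsoft Graph sketch, assuming a token with the `User.ReadWrite.All` permission; every value below is a placeholder, and the password should come from a secure generator:

```bash
# Create a cloud-only account in the *.onmicrosoft.com domain (all values are placeholders).
curl -X POST -H "Authorization: Bearer <access-token>" -H "Content-Type: application/json" \
  -d '{
        "accountEnabled": true,
        "displayName": "Emergency access account 1",
        "mailNickname": "emergency1",
        "userPrincipalName": "emergency1@<tenant-name>.onmicrosoft.com",
        "passwordProfile": {
          "password": "<strong-generated-password>",
          "forceChangePasswordNextSignIn": false
        }
      }' \
  "https://graph.microsoft.com/v1.0/users"
```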
active-directory-b2c Tenant Management Manage Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-manage-administrator.md
Title: Manage administrator accounts in Azure Active Directory B2C
-description: Learn how to add an administrator account to your Azure Active Directory B2C tenant. Learn how to invite a guest account as an administrator into your Azure AD B2C tenant.
-
+description: Learn how to add an administrator account to your Azure Active Directory B2C tenant. Learn how to invite a guest account as an administrator into your Azure AD B2C tenant
Previously updated : 06/21/2024 Last updated : 09/11/2024
#Customer intent: As an Azure AD B2C administrator, I want to manage administrator accounts, add new administrators (work and guest accounts), assign roles to user accounts, remove role assignments, delete administrator accounts, and protect administrative accounts with multifactor authentication, so that I can control access and ensure security in my Azure AD B2C tenant.
# Manage administrator accounts in Azure Active Directory B2C
In this article, you learn how to:
To create a new administrative account, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as at least Privileged Role Administrator permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with at least [Privileged Role Administrator](/entra/identity/role-based-access-control/permissions-reference#privileged-role-administrator) permissions.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Manage**, select **Users**.
You can also invite a new guest user to manage your tenant. The guest account is
To invite a user, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as at least Privileged Role Administrator permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with at least [Privileged Role Administrator](/entra/identity/role-based-access-control/permissions-reference#privileged-role-administrator) permissions.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Manage**, select **Users**.
If the guest didn't receive the invitation email, or the invitation expired, you
You can assign a role when you [create a user](#add-an-administrator-work-account) or [invite a guest user](#invite-an-administrator-guest-account). You can add a role, change the role, or remove a role for a user:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as at least Privileged Role Administrator permissions.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with at least [Privileged Role Administrator](/entra/identity/role-based-access-control/permissions-reference#privileged-role-administrator) permissions.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Manage**, select **Users**.
If you need to remove a role assignment from a user, follow these steps:
As part of an auditing process, you typically review which users are assigned to specific roles in the Azure AD B2C directory. Use the following steps to audit which users are currently assigned privileged roles.
-1. Sign in to the [Azure portal](https://portal.azure.com/) as Privileged Role Administrator.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a [Privileged Role Administrator](/entra/identity/role-based-access-control/permissions-reference#privileged-role-administrator).
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Under **Azure services**, select **Azure AD B2C**. Or use the search box to find and select **Azure AD B2C**.
1. Under **Manage**, select **Roles and administrators**.
As part of an auditing process, you typically review which users are assigned to
## Delete an administrator account
-To delete an existing user, you must have a *Global administrator* role assignment. Global admins can delete any user, including other admins. *User administrators* can delete any non-admin user.
+To delete an existing user, you must have a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) role assignment. Global Administrators can delete any user, including other admins. *User Administrators* can delete any non-admin user.
1. In your Azure AD B2C directory, select **Users**, and then select the user you want to delete.
1. Select **Delete**, and then **Yes** to confirm the deletion.
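The same deletion can be performed through Microsoft Graph, which is handy in cleanup scripts. A hedged sketch, assuming a token with the `User.ReadWrite.All` permission and a signed-in role allowed to delete the target user:

```bash
# Delete a user by object id; deleted accounts remain restorable for 30 days.
curl -X DELETE -H "Authorization: Bearer <access-token>" \
  "https://graph.microsoft.com/v1.0/users/<user-object-id>"
```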
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Previously updated : 01/11/2024 Last updated : 09/11/2024
Before you create your Azure AD B2C tenant, you need to take the following consi
## Create an Azure AD B2C tenant

>[!NOTE]
->If you're unable to create Azure AD B2C tenant, [review your user settings page](tenant-management-check-tenant-creation-permission.md) to ensure that tenant creation isn't switched off. If tenant creation is switched on, ask your *Global Administrator* to assign you a **Tenant Creator** role.
+>If you're unable to create an Azure AD B2C tenant, [review your user settings page](tenant-management-check-tenant-creation-permission.md) to ensure that tenant creation isn't switched off. If tenant creation is switched on, ask your [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) to assign you the [Tenant Creator](/entra/identity/role-based-access-control/permissions-reference#tenant-creator) role.
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the Microsoft Entra tenant that contains your subscription:
   1. In the Azure portal toolbar, select the **Directories + subscriptions** icon.
You can link multiple Azure AD B2C tenants to a single Azure subscription for bi
## Activate Azure AD B2C Go-Local add-on
-Azure AD B2C allows you to activate Go-Local add-on on an existing tenant as long as your tenant stores data in a country/region that has local data residence option. To opt-in to Go-Local add-on, use the following steps:
+Azure AD B2C allows you to activate Go-Local add-on on an existing tenant as long as your tenant stores data in a country/region that has a local data residency option. To opt in to Go-Local add-on, use the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
Previously updated : 01/11/2024 Last updated : 09/11/2024
When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, yo
## Identify cleanup tasks
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator) role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select the **Microsoft Entra ID** service.
1. In the left menu, under **Manage**, select **Properties**.
When you've finished the Azure Active Directory B2C (Azure AD B2C) tutorials, yo
If you have the confirmation page open from the previous section, you can use the links in the **Required action** column to open the Azure portal pages where you can remove these resources. Or, you can remove tenant resources from within the Azure AD B2C service using the following steps.
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator). Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, select the **Azure AD B2C** service, or search for and select **Azure AD B2C**.
1. Delete all users *except* the admin account you're currently signed in as:
If you've the confirmation page open from the previous section, you can use the
Once you delete all the tenant resources, you can now delete the tenant itself:
-1. Sign in to the [Azure portal](https://portal.azure.com/) with a global administrator or subscription administrator role. Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as a [Global Administrator](/entra/identity/role-based-access-control/permissions-reference#global-administrator). Use the same work or school account or the same Microsoft account that you used to sign up for Azure.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. In the Azure portal, search for and select the **Microsoft Entra ID** service.
1. If you haven't already granted yourself access management permissions, do the following:
In this article, you learned how to:
> * Delete your tenant resources
> * Delete the tenant
-Next, learn more about getting started with Azure AD B2C [user flows and custom policies](user-flow-overview.md).
+Next, learn more about getting started with Azure AD B2C [user flows and custom policies](user-flow-overview.md).
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
Previously updated : 01/11/2024 Last updated : 09/11/2024 zone_pivot_groups: b2c-policy-type
Azure AD B2C allows you to extend the set of attributes stored on each user acco
## Create a custom attribute
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as at least an [External ID User Flow Attribute Administrator](/entra/identity/role-based-access-control/permissions-reference#external-id-user-flow-attribute-administrator) of your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **User attributes**, and then select **Add**.
Unlike built-in attributes, custom attributes can be removed. The extension attr
Use the following steps to remove a custom attribute from a user flow in your tenant:
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as at least an [External ID User Flow Attribute Administrator](/entra/identity/role-based-access-control/permissions-reference#external-id-user-flow-attribute-administrator) of your Azure AD B2C tenant.
2. Make sure you're using the directory that contains your Azure AD B2C tenant:
   1. Select the **Directories + subscriptions** icon in the portal toolbar.
   1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the Directory name list, and then select **Switch**.
Use the [Microsoft Graph API](microsoft-graph-operations.md#application-extensio
::: zone-end
-
- ## Next steps
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md
## Next steps

> [!div class="nextstepaction"]
-> [Connect to Azure Database for PostgreSQL with Java](/azure/postgresql/connect-java)
+> [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md)
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB](tutorial-java-spring-cosmosdb.md)
> [!div class="nextstepaction"]
> [Set up CI/CD](deploy-continuous-deployment.md)
app-service Tutorial Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md
Examples of using application secrets to connect to a database:
- [Deploy a Python (Django or Flask) web app with PostgreSQL in Azure](tutorial-python-postgresql-app.md)
- [Tutorial: Build a Tomcat web app with Azure App Service on Linux and MySQL](tutorial-java-tomcat-mysql-app.md)
- [Tutorial: Build a Java Spring Boot web app with Azure App Service on Linux and Azure Cosmos DB](tutorial-java-spring-cosmosdb.md)
-- [Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL](tutorial-java-quarkus-postgresql-app.md)

## Next steps
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
- Title: 'Tutorial: Linux Java app with Quarkus and PostgreSQL'
-description: Learn how to get a data-driven Linux Quarkus app working in Azure App Service, with connection to a PostgreSQL running in Azure.
- Previously updated : 05/08/2024
-zone_pivot_groups: app-service-portal-azd
---
-# Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL
-
-This tutorial shows how to build, configure, and deploy a secure [Quarkus](https://quarkus.io) application in Azure App Service that's connected to a PostgreSQL database (using [Azure Database for PostgreSQL](/azure/postgresql/)). Azure App Service is a highly scalable, self-patching, web-hosting service that can easily deploy apps on Windows or Linux. When you're finished, you'll have a Quarkus app running on [Azure App Service on Linux](overview.md).
--
-**To complete this tutorial, you'll need:**
--
-* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java/).
-* Knowledge of Java with [Quarkus](https://quarkus.io) development.
---
-* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/java).
-* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed.
-* Knowledge of Java with [Quarkus](https://quarkus.io) development.
--
-## Skip to the end
-
-You can quickly deploy the sample app in this tutorial and see it running in Azure. Just run the following commands in the [Azure Cloud Shell](https://shell.azure.com), and follow the prompt:
-
-```bash
-mkdir msdocs-quarkus-postgresql-sample-app
-cd msdocs-quarkus-postgresql-sample-app
-azd init --template msdocs-quarkus-postgresql-sample-app
-azd up
-```
-
-## 1. Run the sample
-
-First, you set up a sample data-driven app as a starting point. For your convenience, the sample repository, [Hibernate ORM with Panache and RESTEasy](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app), includes a [dev container](https://docs.github.com/codespaces/setting-up-your-project-for-codespaces/adding-a-dev-container-configuration/introduction-to-dev-containers) configuration. The dev container has everything you need to develop an application, including the database, cache, and all environment variables needed by the sample application. The dev container can run in a [GitHub codespace](https://docs.github.com/en/codespaces/overview), which means you can run the sample on any computer with a web browser.
-
- :::column span="2":::
- **Step 1:** In a new browser window:
- 1. Sign in to your GitHub account.
- 1. Navigate to [https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app/fork](https://github.com/Azure-Samples/msdocs-quarkus-postgresql-sample-app/fork).
- 1. Unselect **Copy the main branch only**. You want all the branches.
- 1. Select **Create fork**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** In the GitHub fork:
- 1. Select **main** > **starter-no-infra** for the starter branch. This branch contains just the sample project and no Azure-related files or configuration.
- 1. Select **Code** > **Create codespace on main**.
- The codespace takes a few minutes to set up.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-2.png" alt-text="A screenshot showing how create a codespace in GitHub." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-2.png":::
- :::column-end:::
- :::column span="2":::
- **Step 3:** In the codespace terminal:
- 1. Run `mvn quarkus:dev`.
- 1. When you see the notification `Your application running on port 8080 is available.`, select **Open in Browser**. If you see a notification with port 5005, skip it.
- You should see the sample application in a new browser tab.
- To stop the Quarkus development server, type `Ctrl`+`C`.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-3.png" alt-text="A screenshot showing how to run the sample application inside the GitHub codespace." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-run-sample-application-3.png":::
- :::column-end:::
-
-For more information on how the Quarkus sample application is created, see Quarkus documentation [Simplified Hibernate ORM with Panache](https://quarkus.io/guides/hibernate-orm-panache) and [Configure data sources in Quarkus](https://quarkus.io/guides/datasource).
-
-Having issues? Check the [Troubleshooting section](#troubleshooting).
--
-## 2. Create App Service and PostgreSQL
-
-First, you create the Azure resources. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. For the creation process, you'll specify:
-
-* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`.
-* The **Region** to run the app physically in the world.
-* The **Runtime stack** for the app. It's where you select the version of Java to use for your app.
-* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app.
-* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application.
-
-Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
-
- :::column span="2":::
- **Step 1:** In the Azure portal:
- 1. Enter "web app database" in the search bar at the top of the Azure portal.
- 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
- You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
- 1. *Resource Group*: Select **Create new** and use a name of **msdocs-quarkus-postgres-tutorial**.
- 1. *Region*: Any Azure region near you.
- 1. *Name*: **msdocs-quarkus-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
- 1. *Runtime stack*: **Java 17**.
- 1. *Java web server stack*: **Java SE (Embedded Web Server)**.
- 1. *Database*: **PostgreSQL - Flexible Server**. The server name and database name are set by default to appropriate values.
- 1. *Hosting plan*: **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
- 1. Select **Review + create**.
- 1. After validation completes, select **Create**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-2.png":::
- :::column-end:::
- :::column span="2":::
- **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
- - **Resource group**: The container for all the created resources.
- - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
- - **App Service**: Represents your app and runs in the App Service plan.
- - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
- - **Azure Database for PostgreSQL flexible server**: Accessible only from within the virtual network. A database and a user are created for you on the server.
- - **Private DNS zone**: Enables DNS resolution of the PostgreSQL server in the virtual network.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-create-app-postgres-3.png":::
- :::column-end:::
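To confirm what the wizard provisioned, one option is to enumerate the resource group from the CLI. A small sketch, assuming the resource group name used in Step 2:

```bash
# List everything the Web App + Database wizard created in the resource group.
az resource list --resource-group msdocs-quarkus-postgres-tutorial --output table
```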
-
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 3. Verify connection settings
-
-The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). In this step, you learn where to find the app settings, and how you can create your own.
-
-App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, you can use [Key Vault references](app-service-key-vault-references.md) instead.
-
- :::column span="2":::
- **Step 1:** In the App Service page, in the left menu, select **Environment variables**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** In the **App settings** tab of the **Environment variables** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. It's injected at runtime as an environment variable.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-get-connection-string-2.png":::
- :::column-end:::
- :::column span="2":::
 **Step 3:** Select **Add application setting**. Name the setting `PORT` and set its value to `8080`, which is the default port of the Quarkus application. Select **Apply**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the PORT app setting in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting.png":::
- :::column-end:::
- :::column span="2":::
 **Step 4:** Select **Apply**, then select **Confirm**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the PORT app setting in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-app-service-app-setting-save.png":::
- :::column-end:::
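If you script your environments, the same `PORT` setting can be applied with the Azure CLI instead of the portal. A hedged sketch with placeholder names:

```bash
# Tell App Service which port the Quarkus app listens on.
az webapp config appsettings set \
  --resource-group msdocs-quarkus-postgres-tutorial \
  --name <app-name> \
  --settings PORT=8080
```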
--
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 4. Deploy sample code
-
-In this step, you'll configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action.
-
-Note the following:
-- Your deployed Java package must be an [Uber-Jar](https://quarkus.io/guides/maven-tooling#uber-jar-maven).
-- For simplicity of the tutorial, you'll disable tests during the deployment process. The GitHub Actions runners don't have access to the PostgreSQL database in Azure, so any integration tests that require database access will fail, as is the case with the Quarkus sample application.
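For a local check before pushing, you can roughly reproduce what the workflow will run. A sketch under the same assumptions (tests skipped, Uber-Jar packaging; the packaging property can also live in *application.properties*, as the tutorial does later):

```bash
# Build the deployable Uber-Jar without running the database-dependent tests.
mvn clean install -DskipTests -Dquarkus.package.type=uber-jar
```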
- :::column span="2":::
- **Step 1:** Back in the App Service page, in the left menu, select **Deployment Center**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-1.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** In the Deployment Center page:
- 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider.
- 1. Sign in to your GitHub account and follow the prompt to authorize Azure.
- 1. In **Organization**, select your account.
- 1. In **Repository**, select **msdocs-quarkus-postgresql-sample-app**.
- 1. In **Branch**, select **starter-no-infra**.
- 1. In **Authentication type**, select **User-assigned identity (Preview)**.
- 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-2.png":::
- :::column-end:::
- :::column span="2":::
- **Step 3:** Back in the GitHub codespace of your sample fork, run `git pull origin starter-no-infra`.
- This pulls the newly committed workflow file into your codespace.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-3.png" alt-text="A screenshot showing git pull inside a GitHub codespace." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-3.png":::
- :::column-end:::
- :::column span="2":::
- **Step 4:**
- 1. Open *src/main/resources/application.properties* in the explorer. Quarkus uses this file to load Java properties.
- 1. Find the commented code (lines 10-11) and uncomment it.
 This code sets the production variable `%prod.quarkus.datasource.jdbc.url` to the app setting that the creation wizard created for you. The `quarkus.package.type` is set to build an Uber-Jar, which you need to run in App Service.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing a GitHub codespace and the application.properties file opened." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-4.png":::
- :::column-end:::
- :::column span="2":::
- **Step 5:**
- 1. Open *.github/workflows/main_msdocs-quarkus-postgres-XYZ.yml* in the explorer. This file was created by the App Service create wizard.
- 1. Under the `Build with Maven` step, change the Maven command to `mvn clean install -DskipTests`.
- `-DskipTests` skips the tests in your Quarkus project, to avoid the GitHub workflow failing prematurely.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-5.png" alt-text="A screenshot showing a GitHub codespace and a GitHub workflow YAML opened." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-5.png":::
- :::column-end:::
- :::column span="2":::
- **Step 6:**
- 1. Select the **Source Control** extension.
- 1. In the textbox, type a commit message like `Configure DB and deployment workflow`.
- 1. Select **Commit**, then confirm with **Yes**.
- 1. Select **Sync changes 1**, then confirm with **OK**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing the changes being committed and pushed to GitHub." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-6.png":::
- :::column-end:::
- :::column span="2":::
- **Step 7:** Back in the Deployment Center page in the Azure portal:
- 1. Select **Logs**. A new deployment run is already started from your committed changes.
- 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing how to open deployment logs in the deployment center." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-7.png":::
- :::column-end:::
- :::column span="2":::
- **Step 8:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-8.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-deploy-sample-code-8.png":::
- :::column-end:::
-
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 5. Browse to the app
-
- :::column span="2":::
- **Step 1:** In the App Service page:
- 1. From the left menu, select **Overview**.
- 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** Add a few fruits to the list.
- Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Quarkus web app with PostgreSQL running in Azure showing a list of fruits." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png":::
- :::column-end:::
-
-## 6. Stream diagnostic logs
-
-Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample application includes standard JBoss logging statements to demonstrate this capability as shown below.
--
- :::column span="2":::
- **Step 1:** In the App Service page:
- 1. From the left menu, select **App Service logs**.
- 1. Under **Application logging**, select **File System**.
- 1. In the top menu, select **Save**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-1.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-stream-diagnostic-logs-2.png":::
- :::column-end:::
-
-Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=java).
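The portal's log stream also has a CLI equivalent, which can be handier while a deployment is in flight. A hedged sketch with placeholder names:

```bash
# Tail the App Service log stream from a terminal.
az webapp log tail --resource-group msdocs-quarkus-postgres-tutorial --name <app-name>
```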
-
-## 7. Clean up resources
-
-When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
-
- :::column span="2":::
- **Step 1:** In the search bar at the top of the Azure portal:
- 1. Enter the resource group name.
- 1. Select the resource group.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-1.png":::
- :::column-end:::
- :::column span="2":::
- **Step 2:** In the resource group page, select **Delete resource group**.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-2.png":::
- :::column-end:::
- :::column span="2":::
- **Step 3:**
- 1. Confirm your deletion by typing the resource group name.
- 1. Select **Delete**.
- 1. Confirm with **Delete** again.
- :::column-end:::
- :::column:::
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-clean-up-resources-3.png":::
- :::column-end:::
---
-## 2. Create Azure resources and deploy a sample app
-
-In this step, you create the Azure resources and deploy a sample app to App Service on Linux. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL.
-
-The dev container already has the [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) (AZD).
-
-1. From the repository root, run `azd init`.
-
- ```bash
- azd init --template javase-app-service-postgresql-infra
- ```
-
-1. When prompted, give the following answers:
-
- |Question |Answer |
- |---|---|
- |Continue initializing an app in '\<your-directory>'? | **Y** |
- |What would you like to do with these files? | **Keep my existing files unchanged** |
- |Enter a new environment name | Type a unique name. The AZD template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. |
-
-1. Sign into Azure by running the `azd auth login` command and following the prompt:
-
- ```bash
- azd auth login
- ```
-
-1. Create the necessary Azure resources and deploy the app code with the `azd up` command. Follow the prompt to select the desired subscription and location for the Azure resources.
-
- ```bash
- azd up
- ```
-
- The `azd up` command takes about 15 minutes to complete (the Redis cache takes the most time). It also compiles and deploys your application code, but you'll modify your code later to work with App Service. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application.
-
- This AZD template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources:
-
- - **Resource group**: The container for all the created resources.
- - **App Service plan**: Defines the compute resources for App Service. A Linux plan in the *B1* tier is created.
- - **App Service**: Represents your app and runs in the App Service plan.
- - **Virtual network**: Integrated with the App Service app and isolates back-end network traffic.
- - **Azure Database for PostgreSQL flexible server**: Accessible only from behind its private endpoint. A database is created for you on the server.
- - **Azure Cache for Redis**: Accessible only from within the virtual network.
- - **Private DNS zones**: Enable DNS resolution of the database server and the Redis cache in the virtual network.
- - **Log Analytics workspace**: Acts as the target container for your app to ship its logs, where you can also query the logs.
- - **Key vault**: Used to keep your database password the same when you redeploy with AZD.
-
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 3. Verify connection strings
-
-The AZD template you use has already generated the connectivity variables as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository.
-
-1. In the AZD output, find the app setting `AZURE_POSTGRESQL_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the AZD output:
-
- <pre>
- App Service app has the following connection strings:
-
- - AZURE_POSTGRESQL_CONNECTIONSTRING
- - AZURE_REDIS_CONNECTIONSTRING
- </pre>
-
- `AZURE_POSTGRESQL_CONNECTIONSTRING` contains the connection string to the PostgreSQL database in Azure. You need to use it in your code later.
-
-1. For your convenience, the AZD template shows you the direct link to the app's app settings page. Find the link and open it in a new browser tab. Later, you'll add an app setting using AZD instead of in the portal. If you prefer the command line, see the sketch after this list.
-
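-If you'd rather check the settings from the command line, the Azure CLI can list them. This is a minimal sketch, assuming you're signed in with `az login`; `<app-name>` and `<group-name>` are placeholders for your own resource names:
-
-```bash
-# Confirm the PostgreSQL connection string app setting exists
-# (this prints only the setting's name, not the secret value)
-az webapp config appsettings list \
-  --name <app-name> \
-  --resource-group <group-name> \
-  --query "[?name=='AZURE_POSTGRESQL_CONNECTIONSTRING'].name" \
-  --output tsv
-```
-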
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 4. Modify sample code and redeploy
-
-1. Back in the GitHub codespace of your sample fork, open *infra/resources.bicep*.
-
-1. Find the `appSettings` resource and uncomment the property `PORT: '8080'`. When you're done, your `appSettings` resource should look like the following code:
-
- ```Bicep
- resource appSettings 'config' = {
- name: 'appsettings'
- properties: {
- PORT: '8080'
- }
- }
- ```
-
-1. From the explorer, open *src/main/resources/application.properties*.
-
-1. Find the commented code (lines 10-11) and uncomment it.
-
- ```
- %prod.quarkus.datasource.jdbc.url=${AZURE_POSTGRESQL_CONNECTIONSTRING}
- quarkus.package.type=uber-jar
- ```
-
- This code sets the production variable `%prod.quarkus.datasource.jdbc.url` to the app setting that the AZD template created for you. The `quarkus.package.type` setting builds an uber-jar, which you need to run in App Service. (To verify the packaging locally, see the sketch after this list.)
-
-1. Back in the codespace terminal, run `azd up`.
-
- ```bash
- azd up
- ```
-
- > [!TIP]
- > `azd up` actually runs `azd package`, `azd provision`, and `azd deploy` in sequence. `azd provision` applies the changes you made in *infra/resources.bicep* to your Azure resources, and `azd deploy` uploads the built Jar file.
- >
- > To find out how the Jar file is packaged, you can run `azd package --debug` by itself.
-
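-If you want to confirm the uber-jar packaging on your own machine, you can build it with Maven. This is a minimal sketch, assuming your fork includes the Maven wrapper (if not, substitute your own `mvn` installation):
-
-```bash
-# Build the app; with quarkus.package.type=uber-jar, Quarkus emits a single
-# runnable *-runner.jar under target/
-./mvnw package -DskipTests
-ls target/*-runner.jar
-```
-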
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 5. Browse to the app
-
-1. In the AZD output, find the URL of your app and navigate to it in the browser. (If you've since cleared the terminal, see the sketch after this list.) The URL looks like this in the AZD output:
-
- <pre>
- Deploying services (azd deploy)
-
- (✓) Done: Deploying service web
- - Endpoint: https://&lt;app-name>.azurewebsites.net/
- </pre>
-
-2. Add a few fruits to the list.
-
- :::image type="content" source="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Quarkus web app with PostgreSQL running in Azure showing fruits." lightbox="./media/tutorial-java-quarkus-postgresql-app/azure-portal-browse-app-2.png":::
-
- Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
-
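-If you've since cleared the terminal and lost the URL, you can print the AZD environment's stored values again. A minimal sketch, run from the repository root (the exact keys in the output vary by template):
-
-```bash
-# Print the current AZD environment's stored values (.env) for this project
-azd env get-values
-```
-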
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 6. Stream diagnostic logs
-
-Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
-
-The sample application includes standard JBoss logging statements to demonstrate this capability as shown below.
--
-In the AZD output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the AZD output:
-
-<pre>
-Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/&lt;subscription-guid>/resourceGroups/&lt;group-name>/providers/Microsoft.Web/sites/&lt;app-name>/logStream
-</pre>
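-
-If you prefer the terminal to the portal, you can stream the same logs with the Azure CLI. A minimal sketch; `<app-name>` and `<group-name>` are placeholders for your own resource names:
-
-```bash
-# Stream App Service console logs to the local terminal (Ctrl+C to stop)
-az webapp log tail --name <app-name> --resource-group <group-name>
-```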
-
-Learn more about logging in Java apps in the series on [Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python and Java applications](../azure-monitor/app/opentelemetry-enable.md?tabs=java).
-
-Having issues? Check the [Troubleshooting section](#troubleshooting).
-
-## 7. Clean up resources
-
-To delete all Azure resources in the current deployment environment, run `azd down` and follow the prompts.
-
-```bash
-azd down
-```
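-
-To skip the confirmation prompt, or to also permanently delete soft-delete-enabled resources such as the key vault, `azd down` accepts flags for both. A minimal sketch:
-
-```bash
-# --force skips the confirmation prompt;
-# --purge permanently deletes soft-delete-enabled resources instead of leaving them recoverable
-azd down --force --purge
-```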
--
-## Troubleshooting
-
-#### I see the error log "ERROR [org.acm.hib.orm.pan.ent.FruitEntityResource] (vert.x-eventloop-thread-0) Failed to handle request: jakarta.ws.rs.NotFoundException: HTTP 404 Not Found".
-
-This is a Vert.x error (see [Quarkus Reactive Architecture](https://quarkus.io/guides/quarkus-reactive-architecture)), indicating that the client requested an unknown path. This error happens on every app startup because App Service verifies that the app starts by sending a `GET` request to `/robots933456.txt`.
-
-#### The app failed to start and shows the following error in log: "Model classes are defined for the default persistence unit \<default> but configured datasource \<default> not found: the default EntityManagerFactory will not be created."
-
-This Quarkus error is most likely because the app can't connect to the Azure database. Make sure that the app setting `AZURE_POSTGRESQL_CONNECTIONSTRING` hasn't been changed, and that *application.properties* is using the app setting properly.
-
-## Frequently asked questions
-
-- [How much does this setup cost?](#how-much-does-this-setup-cost)
-- [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools)
-- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions)
-- [What if I want to run tests with PostgreSQL during the GitHub workflow?](#what-if-i-want-to-run-tests-with-postgresql-during-the-github-workflow)
-
-#### How much does this setup cost?
-
-Pricing for the created resources is as follows:
-
-- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/).
-- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
-- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/).
-- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/).
-#### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?
-
-- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal (see the sketch after this list).
-- To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM in one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network.
-- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network.
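-
-For the `psql` option, here's a minimal sketch of such a session from the app's SSH terminal; the server, database, and user names are placeholders rather than values generated by this tutorial:
-
-```bash
-# Connect to the flexible server's private endpoint from inside the virtual network
-psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=<database-name> user=<user-name> sslmode=require"
-```
-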
-#### How does local app development work with GitHub Actions?
-
-Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example:
-
-```terminal
-git add .
-git commit -m "<some-message>"
-git push origin main
-```
-
-#### What if I want to run tests with PostgreSQL during the GitHub workflow?
-
-The default Quarkus sample application includes tests with database connectivity. To avoid connection errors, you added the `-DskipTests` property. If you want, you can run the tests against a PostgreSQL service container. For example, in the automatically generated workflow file in your GitHub fork (*.github/workflows/main_cephalin-quarkus.yml*), make the following changes:
-
-1. Add YAML code for the PostgreSQL container to the `build` job, as shown in the following snippet.
-
- ```yml
- ...
- jobs:
-   build:
-     runs-on: ubuntu-latest
-
-     # BEGIN CODE ADDITION
-     container: ubuntu
-
-     services:
-       # Hostname for the PostgreSQL container
-       postgresdb:
-         image: postgres
-         env:
-           POSTGRES_PASSWORD: postgres
-           POSTGRES_USER: postgres
-           POSTGRES_DB: postgres
-         # Set health checks to wait until postgres has started
-         options: >-
-           --health-cmd pg_isready
-           --health-interval 10s
-           --health-timeout 5s
-           --health-retries 5
-     # END CODE ADDITION
-
-     steps:
-       - uses: actions/checkout@v4
- ...
- ```
-
- `container: ubuntu` tells GitHub to run the `build` job in a container, and the `services` key defines the PostgreSQL service container that the job reaches by hostname. This way, the connection string in your dev environment, `jdbc:postgresql://postgresdb:5432/postgres`, works as-is when the workflow runs. For more information about PostgreSQL connectivity in GitHub Actions, see [Creating PostgreSQL service containers](https://docs.github.com/en/actions/using-containerized-services/creating-postgresql-service-containers).
-
-1. In the `Build with Maven` step, remove `-DskipTests`. For example:
-
- ```yml
- - name: Build with Maven
- run: mvn clean install
- ```
-
-## Next steps
-
-- [Azure for Java Developers](/java/azure/)
-- [Quarkus](https://quarkus.io)
-- [Getting Started with Quarkus](https://quarkus.io/get-started/)
-
-Learn more about running Java apps on App Service in the developer guide.
-
-> [!div class="nextstepaction"]
-> [Configure a Java app in Azure App Service](configure-language-java-deploy-run.md?pivots=platform-linux)
-
-Learn how to secure your app with a custom domain and certificate.
-
-> [!div class="nextstepaction"]
-> [Secure with custom domain and certificate](tutorial-secure-domain-certificate.md)
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
The creation wizard generated the connectivity string for you already as an [app
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 5. Deploy sample code
+## 4. Deploy sample code
In this step, you configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository kicks off the build and deploy action.
Like the Tomcat convention, if you want to deploy to the root context of Tomcat,
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 6. Browse to the app
+## 5. Browse to the app
:::row::: :::column span="2":::
Having issues? Check the [Troubleshooting section](#troubleshooting).
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 7. Stream diagnostic logs
+## 6. Stream diagnostic logs
Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample application includes standard Log4j logging statements to demonstrate this capability, as shown in the following snippet:
Learn more about logging in Java apps in the series on [Enable Azure Monitor Ope
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 8. Clean up resources
+## 7. Clean up resources
When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group.
The AZD template you use generated the connectivity variables for you already as
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 5. Browse to the app
+## 4. Browse to the app
1. In the AZD output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output:
Having issues? Check the [Troubleshooting section](#troubleshooting).
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 6. Stream diagnostic logs
+## 5. Stream diagnostic logs
Azure App Service can capture console logs to help you diagnose issues with your application. For convenience, the AZD template already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) and is [shipping the logs to a Log Analytics workspace](troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
Learn more about logging in Java apps in the series on [Enable Azure Monitor Ope
Having issues? Check the [Troubleshooting section](#troubleshooting).
-## 7. Clean up resources
+## 6. Clean up resources
To delete all Azure resources in the current deployment environment, run `azd down` and follow the prompts.
automation Move Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/move-account.md
Title: Move your Azure Automation account to another subscription
description: This article tells how to move your Automation account to another subscription. Previously updated : 05/26/2023- Last updated : 09/10/2024+
automation Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/private-link-security.md
Title: Use Azure Private Link to securely connect networks to Azure Automation description: Use Azure Private Link to securely connect networks to Azure Automation- Previously updated : 12/15/2022+ Last updated : 09/10/2024 # Use Azure Private Link to securely connect networks to Azure Automation
automation Region Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/region-mappings.md
Title: Supported regions for linked Log Analytics workspace description: This article describes the supported region mappings between an Automation account and a Log Analytics workspace as it relates to certain features of Azure Automation. Previously updated : 08/30/2024 Last updated : 09/10/2024 -+
automation Manage Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runbooks.md
Title: Manage runbooks in Azure Automation
description: This article tells how to manage runbooks in Azure Automation. Previously updated : 12/20/2023- Last updated : 09/10/2024+
automation Manage Runtime Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runtime-environment.md
description: This article tells how to manage runbooks in Runtime environment an
Last updated 07/24/2024-+
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate from a Run As account to Managed identities
description: This article describes how to migrate from a Run As account to managed identities in Azure Automation. Previously updated : 10/03/2023- Last updated : 09/10/2024+
automation Python 3 Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-3-packages.md
Title: Manage Python 3 packages in Azure Automation
description: This article tells how to manage Python 3 packages in Azure Automation. Previously updated : 10/16/2023- Last updated : 09/10/2024+
automation Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md
Title: Manage Python 2 packages in Azure Automation
description: This article tells how to manage Python 2 packages in Azure Automation. Previously updated : 07/23/2024- Last updated : 09/10/2024+
automation Remove User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/remove-user-assigned-identity.md
description: This article explains how to remove a user-assigned managed identit
Previously updated : 10/26/2021- Last updated : 09/10/2024+
automation Runbook Input Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md
Title: Configure runbook input parameters in Azure Automation
description: This article tells how to configure runbook input parameters, which allow data to be passed to a runbook when it's started. Previously updated : 08/18/2023- Last updated : 09/10/2024+
automation Runtime Environment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runtime-environment-overview.md
Title: Runtime environment (preview) in Azure Automation
description: This article provides an overview on Runtime environment in Azure Automation. Previously updated : 07/17/2024- Last updated : 09/10/2024+
automation Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/certificates.md
Title: Manage certificates in Azure Automation
description: This article tells how to work with certificates for access by runbooks and DSC configurations. Previously updated : 05/26/2023- Last updated : 09/10/2024+
automation Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/credentials.md
description: This article tells how to create credential assets and use them in
Previously updated : 05/26/2023- Last updated : 09/10/2024+
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md
Title: Manage modules in Azure Automation
description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 09/08/2024- Last updated : 09/10/2024+
automation Schedules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/schedules.md
Title: Manage schedules in Azure Automation
description: This article tells how to create and work with a schedule in Azure Automation. Previously updated : 03/29/2021- Last updated : 09/10/2024+
automation Source Control Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/source-control-integration.md
Title: Use source control integration in Azure Automation
description: This article tells you how to synchronize Azure Automation source control with other repositories. Previously updated : 05/15/2024- Last updated : 09/10/2024+
automation Start Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/start-runbooks.md
Title: Start a runbook in Azure Automation
description: This article tells how to start a runbook in Azure Automation. Previously updated : 04/28/2021- Last updated : 09/09/2024+
automation Tutorial Configure Servers Desired State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/tutorial-configure-servers-desired-state.md
Title: Configure machines to a desired state in Azure Automation
description: This article tells how to configure machines to a desired state using Azure Automation State Configuration. - Previously updated : 04/15/2021+ Last updated : 09/10/2024
automation Configure Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/configure-groups.md
Title: Use dynamic groups with Azure Automation Update Management
description: This article tells how to use dynamic groups with Azure Automation Update Management. Previously updated : 09/05/2024- Last updated : 09/10/2024+
automation Enable From Automation Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-automation-account.md
Title: Enable Azure Automation Update Management from Automation account
description: This article tells how to enable Update Management from an Automation account. Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Enable From Runbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-runbook.md
Title: Enable Azure Automation Update Management from runbook
description: This article tells how to enable Update Management from a runbook. - Previously updated : 08/30/2024+ Last updated : 09/10/2024
automation Enable From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/enable-from-vm.md
Title: Enable Azure Automation Update Management for an Azure VM
description: This article tells how to enable Update Management for an Azure VM. Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
description: This article tells how to use Update Management to manage updates a
- Previously updated : 08/30/2024+ Last updated : 09/10/2024
automation Mecmintegration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/mecmintegration.md
Title: Integrate Azure Automation Update Management with Microsoft Configuration
description: This article tells how to configure Microsoft Configuration Manager with Update Management to deploy software updates to manager clients. Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
description: This article describes the supported Windows and Linux operating sy
Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
description: This article provides an overview of the Update Management feature
Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Plan Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md
description: This article describes the considerations and decisions to be made
Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Pre Post Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/pre-post-scripts.md
Title: Manage pre-scripts and post-scripts in your Update Management deployment
description: This article tells how to configure and manage pre-scripts and post-scripts for update deployments. Previously updated : 08/30/2024- Last updated : 09/10/2024+
automation Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/query-logs.md
Title: Query Azure Automation Update Management logs
description: This article tells how to query the logs for Update Management in your Log Analytics workspace. Previously updated : 07/15/2024- Last updated : 09/10/2024+
automation Remove Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/remove-feature.md
Title: Remove Azure Automation Update Management feature description: This article tells how to stop using Update Management and unlink an Automation account from the Log Analytics workspace. Previously updated : 08/30/2024- Last updated : 09/10/2024+
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
For more information, see [Try Azure AI Video Indexer enabled by Arc](/azure/azu
For more information, see [What is Edge Storage Accelerator?](../edge-storage-accelerator/overview.md).
+## Connected registry on Arc-enabled Kubernetes
+
+- **Supported distributions**: Connected registry for Arc-enabled Kubernetes clusters.
+- **Supported Azure regions**: All regions where Azure Arc-enabled Kubernetes is available.
+
+The connected registry extension for Azure Arc enables you to sync container images between your Azure Container Registry (ACR) and your on-premises Azure Arc-enabled Kubernetes cluster. The extension is deployed to the local or remote cluster and uses a synchronization schedule and window to sync images between the on-premises connected registry and the cloud ACR registry.
+
+For more information, see [Connected Registry for Arc-enabled Kubernetes clusters](../../container-registry/quickstart-connected-registry-arc-cli.md).
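+
+As a rough sketch of how such a cluster extension is installed with the Azure CLI `k8s-extension` commands (the extension type and parameter values shown are assumptions for illustration; follow the linked quickstart for the authoritative steps):
+
+```bash
+# Install the connected registry extension on an Arc-enabled cluster (sketch;
+# the --extension-type value is an assumption, and required configuration
+# settings from the quickstart are omitted here)
+az k8s-extension create \
+  --cluster-name <arc-cluster-name> \
+  --cluster-type connectedClusters \
+  --resource-group <resource-group> \
+  --name connected-registry \
+  --extension-type Microsoft.ContainerRegistry.ConnectedRegistry
+```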
+
## Next steps

- Read more about [cluster extensions for Azure Arc-enabled Kubernetes](conceptual-extensions.md).
azure-functions Durable Functions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md
async def main(req: func.HttpRequest, client) -> func.HttpResponse:
::: zone-end ::: zone pivot="powershell"
-> [!NOTE]
-> Durable entities are currently not supported in PowerShell.
- ::: zone-end ::: zone pivot="java"
-> [!NOTE]
-> Durable entities are currently not supported in Java.
- ::: zone-end Entity functions are available in [Durable Functions 2.0](durable-functions-versions.md) and above for C#, JavaScript, and Python.
azure-functions Functions Bindings Event Grid Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md
For explanations of the common and event-specific properties, see [Event propert
## Next steps
-* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-functions-eventgrid-extension/issues)
+* If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues)
* [Dispatch an Event Grid event](./functions-bindings-event-grid-output.md) [EventGridEvent]: /dotnet/api/microsoft.azure.eventgrid.models.eventgridevent
azure-functions Functions Bindings Expressions Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-expressions-patterns.md
The binding expression `DateTime` resolves to `DateTime.UtcNow`. The following b
In C# and other .NET languages, you can use an imperative binding pattern, as opposed to the declarative bindings in *function.json* and attributes. Imperative binding is useful when binding parameters need to be computed at runtime rather than design time. To learn more, see the [C# developer reference](functions-dotnet-class-library.md#binding-at-runtime) or the [C# script developer reference](functions-reference-csharp.md#binding-at-runtime).
-## Next steps
-> [!div class="nextstepaction"]
-> [Using the Azure Function return value](./functions-bindings-return-value.md)
+## Related content
+
+- [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md)
azure-functions Functions Bindings Return Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-return-value.md
- Title: Using return value from an Azure Function
-description: Learn to manage return values for Azure Functions
-
-# ms.devlang: csharp, fsharp, java, javascript, powershell, python
- Previously updated : 07/25/2023
-zone_pivot_groups: programming-languages-set-functions-lang-workers
--
-# Using the Azure Function return value
-
-This article explains how return values work inside a function. In languages that have a return value, you can bind a function [output binding](./functions-triggers-bindings.md#binding-direction) to the return value.
--
-Set the `name` property in *function.json* to `$return`. If there are multiple output bindings, use the return value for only one of them.
---
-How return values are used depends on the C# mode you're using in your function app:
-
-# [Isolated worker model](#tab/isolated-process)
-
-See [Output bindings in the .NET worker guide](./dotnet-isolated-process-guide.md#output-bindings) for details and examples.
-
-# [In-process model](#tab/in-process)
--
-In a C# class library, apply the output binding attribute to the method return value. In C# and C# script, alternative ways to send data to an output binding are `out` parameters and [collector objects](functions-reference-csharp.md#writing-multiple-output-values).
-
-Here's C# code that uses the return value for an output binding, followed by an async example:
-
-```cs
-[FunctionName("QueueTrigger")]
-[return: Blob("output-container/{id}")]
-public static string Run([QueueTrigger("inputqueue")]WorkItem input, ILogger log)
-{
- string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
- log.LogInformation($"C# script processed queue message. Item={json}");
- return json;
-}
-```
-
-```cs
-[FunctionName("QueueTrigger")]
-[return: Blob("output-container/{id}")]
-public static Task<string> Run([QueueTrigger("inputqueue")]WorkItem input, ILogger log)
-{
- string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
- log.LogInformation($"C# script processed queue message. Item={json}");
- return Task.FromResult(json);
-}
-```
-----
-Here's the output binding in the *function.json* file:
-
-```json
-{
- "name": "$return",
- "type": "blob",
- "direction": "out",
- "path": "output-container/{id}"
-}
-```
-
-Here's the JavaScript code:
-
-```javascript
-module.exports = function (context, input) {
- var json = JSON.stringify(input);
- context.log('Node.js script processed queue message', json);
- return json;
-}
-```
----
-Here's the output binding in the *function.json* file:
-
-```json
-{
- "name": "Response",
- "type": "blob",
- "direction": "out",
- "path": "output-container/{blobname}"
-}
-```
-
-Here's the PowerShell code that uses the return value for an http output binding:
-
-```powershell
-Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
- StatusCode = [HttpStatusCode]::OK
- Body = $blobname
- })
-```
---
-Here's the output binding in the *function.json* file:
-
-```json
-{
- "name": "$return",
- "type": "blob",
- "direction": "out",
- "path": "output-container/{id}"
-}
-```
-Here's the Python code:
-
-```python
-def main(input: azure.functions.InputStream) -> str:
- return json.dumps({
- 'name': input.name,
- 'length': input.length,
- 'content': input.read().decode('utf-8')
- })
-```
----
-Apply the output binding annotation to the function method. If there are multiple output bindings, use the return value for only one of them.
--
-Here's Java code that uses the return value for an output binding:
-
-```java
-@FunctionName("QueueTrigger")
-@StorageAccount("AzureWebJobsStorage")
-@BlobOutput(name = "output", path = "output-container/{id}")
-public static String run(
- @QueueTrigger(name = "input", queueName = "inputqueue") WorkItem input,
- final ExecutionContext context
-) {
- String json = String.format("{ \"id\": \"%s\" }", input.id);
- context.getLogger().info("Java processed queue message. Item=" + json);
- return json;
-}
-```
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Handle Azure Functions binding errors](./functions-bindings-errors.md)
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
You can also build your app with ReadyToRun from the command line. For more info
## Supported types for bindings
-Each binding has its own supported types; for instance, a blob trigger attribute can be applied to a string parameter, a POCO parameter, a `CloudBlockBlob` parameter, or any of several other supported types. The [binding reference article for blob bindings](functions-bindings-storage-blob-trigger.md#usage) lists all supported parameter types. For more information, see [Triggers and bindings](functions-triggers-bindings.md) and the [binding reference docs for each binding type](functions-triggers-bindings.md#next-steps).
+Each binding has its own supported types; for instance, a blob trigger attribute can be applied to a string parameter, a POCO parameter, a `CloudBlockBlob` parameter, or any of several other supported types. The [binding reference article for blob bindings](functions-bindings-storage-blob-trigger.md#usage) lists all supported parameter types. For more information, see [Triggers and bindings](functions-triggers-bindings.md) and the [binding reference docs for each binding type](functions-triggers-bindings.md#related-content).
[!INCLUDE [HTTP client best practices](../../includes/functions-http-client-best-practices.md)] ## Binding to method return value
-You can use a method return value for an output binding, by applying the attribute to the method return value. For examples, see [Triggers and bindings](./functions-bindings-return-value.md).
+You can use a method return value for an output binding, by applying the attribute to the method return value. For examples, see [Triggers and bindings](./functions-triggers-bindings.md).
Use the return value only if a successful function execution always results in a return value to pass to the output binding. Otherwise, use `ICollector` or `IAsyncCollector`, as shown in the following section.
azure-functions Functions Reference Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md
The `#r` statement is explained [later in this article](#referencing-external-as
## Supported types for bindings
-Each binding has its own supported types; for instance, a blob trigger can be used with a string parameter, a POCO parameter, a `CloudBlockBlob` parameter, or any of several other supported types. The [binding reference article for blob bindings](functions-bindings-storage-blob-trigger.md#usage) lists all supported parameter types for blob triggers. For more information, see [Triggers and bindings](functions-triggers-bindings.md) and the [binding reference docs for each binding type](functions-triggers-bindings.md#next-steps).
+Each binding has its own supported types; for instance, a blob trigger can be used with a string parameter, a POCO parameter, a `CloudBlockBlob` parameter, or any of several other supported types. The [binding reference article for blob bindings](functions-bindings-storage-blob-trigger.md#usage) lists all supported parameter types for blob triggers. For more information, see [Triggers and bindings](functions-triggers-bindings.md) and the [binding reference docs for each binding type](functions-triggers-bindings.md#related-content).
[!INCLUDE [HTTP client best practices](../../includes/functions-http-client-best-practices.md)]
azure-functions Functions Triggers Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md
Title: Triggers and bindings in Azure Functions description: Learn to use triggers and bindings to connect your Azure Function to online events and cloud-based services. Previously updated : 08/14/2023 Last updated : 09/06/2024 zone_pivot_groups: programming-languages-set-functions
zone_pivot_groups: programming-languages-set-functions
In this article, you learn the high-level concepts surrounding functions triggers and bindings.
-Triggers cause a function to run. A trigger defines how a function is invoked and a function must have exactly one trigger. Triggers have associated data, which is often provided as the payload of the function.
+Triggers cause a function to run. A trigger defines how a function is invoked and a function must have exactly one trigger. Triggers can also pass data into your function, as you would with method calls.
-Binding to a function is a way of declaratively connecting another resource to the function; bindings may be connected as *input bindings*, *output bindings*, or both. Data from bindings is provided to the function as parameters.
+Binding to a function is a way of declaratively connecting your functions to other resources; bindings either pass data into your function (an *input binding*) or enable you to write data out from your function (an *output binding*) using *binding parameters*. Your function trigger is essentially a special type of input binding.
-You can mix and match different bindings to suit your needs. Bindings are optional and a function might have one or multiple input and/or output bindings.
+You can mix and match different bindings to suit your function's specific scenario. Bindings are optional and a function might have one or multiple input and/or output bindings.
Triggers and bindings let you avoid hardcoding access to other services. Your function receives data (for example, the content of a queue message) in function parameters. You send data (for example, to create a queue message) by using the return value of the function.
Consider the following examples of how you could implement different functions.
| Example scenario | Trigger | Input binding | Output binding |
|-|-|-|-|
| A new queue message arrives which runs a function to write to another queue. | Queue<sup>*</sup> | *None* | Queue<sup>*</sup> |
-|A scheduled job reads Blob Storage contents and creates a new Azure Cosmos DB document. | Timer | Blob Storage | Azure Cosmos DB |
-|The Event Grid is used to read an image from Blob Storage and a document from Azure Cosmos DB to send an email. | Event Grid | Blob Storage and Azure Cosmos DB | SendGrid |
-| A webhook that uses Microsoft Graph to update an Excel sheet. | HTTP | *None* | Microsoft Graph |
+| A scheduled job reads Blob Storage contents and creates a new Azure Cosmos DB document. | Timer | Blob Storage | Azure Cosmos DB |
+| The Event Grid is used to read an image from Blob Storage and a document from Azure Cosmos DB to send an email. | Event Grid | Blob Storage and Azure Cosmos DB | SendGrid |
<sup>\*</sup> Represents different queues
-These examples aren't meant to be exhaustive, but are provided to illustrate how you can use triggers and bindings together.
+These examples aren't meant to be exhaustive, but are provided to illustrate how you can use triggers and bindings together. For a more comprehensive set of scenarios, see [Azure Functions scenarios](functions-scenarios.md).
-### Trigger and binding definitions
+>[!TIP]
+>Functions doesn't require you to use input and output bindings to connect to Azure services. You can always create an Azure SDK client in your code and use it instead for your data transfers. For more information, see [Connect to services](functions-reference.md#connect-to-services).
-Triggers and bindings are defined differently depending on the development language.
+## Trigger and binding definitions
-| Language | Triggers and bindings are configured by... |
-|-|--|
-| C# class library | &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;decorating methods and parameters with C# attributes |
-| Java | &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;decorating methods and parameters with Java annotations |
-| JavaScript/PowerShell/Python/TypeScript | &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;updating [function.json](./functions-reference.md) ([schema](http://json.schemastore.org/function)) |
+Triggers and bindings are defined differently depending on the development language. Make sure to select your language at the [top](#top) of the article.
-For languages that rely on function.json, the portal provides a UI for adding bindings in the **Integration** tab. You can also edit the file directly in the portal in the **Code + test** tab of your function. Visual Studio Code lets you easily [add a binding to a function.json file](functions-develop-vs-code.md?tabs=nodejs#add-a-function-to-your-project) by following a convenient set of prompts.
+Bindings can be either input or output bindings. Not all services support both input and output bindings. See your specific binding extension for [specific bindings code examples](#bindings-code-examples).
-In .NET and Java, the parameter type defines the data type for input data. For instance, use `string` to bind to the text of a queue trigger, a byte array to read as binary, and a custom type to de-serialize to an object. Since .NET class library functions and Java functions don't rely on *function.json* for binding definitions, they can't be created and edited in the portal. C# portal editing is based on C# script, which uses *function.json* instead of attributes.
+This example shows an HTTP triggered function with an output binding that writes a message to an Azure Storage queue.
-To learn more about how to add bindings to existing functions, see [Connect functions to Azure services using bindings](add-bindings-existing-function.md).
+For C# class library functions, triggers and bindings are configured by decorating methods and parameters with C# attributes, where the specific attribute applied might depend on the C# runtime model:
-For languages that are dynamically typed such as JavaScript, use the `dataType` property in the *function.json* file. For example, to read the content of an HTTP request in binary format, set `dataType` to `binary`:
+### [Isolated worker model](#tab/isolated-process)
+
+The HTTP trigger (`HttpTrigger`) is defined on the `Run` method for a function named `HttpExample` that returns a `MultiResponse` object:
++
+This example shows the `MultiResponse` object definition which both returns an `HttpResponse` to the HTTP request and also writes a message to a storage queue using a `QueueOutput` binding:
++
+For more information, see the [C# isolated worker model guide](dotnet-isolated-process-guide.md#methods-recognized-as-functions).
+
+### [In-process model](#tab/in-process)
+
+The HTTP trigger (`HttpTrigger`) is defined on the `Run` method for a function named `HttpExample` that writes to a storage queue defined by the `Queue` and `StorageAccount` attributes on the `msg` parameter:
++
+For more information, see the [C# in-process model guide](functions-dotnet-class-library.md#methods-recognized-as-functions).
+++
+Legacy C# Script functions use a function.json definition file. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md).
+For Java functions, triggers and bindings are configured by annotating specific methods and parameters. This HTTP trigger (`@HttpTrigger`) is defined on the `run` method for a function named `HttpTriggerQueueOutput`, which writes to a storage queue defined by the `@QueueOutput` annotation on the `message` parameter:
++
+For more information, see the [Java developer guide](functions-reference-java.md#triggers-and-annotations).
+The way that triggers and binding are defined for Node.js functions depends on the specific version of Node.js for Functions:
+
+### [v4](#tab/node-v4)
+
+In Node.js for Functions version 4, you configure triggers and bindings using objects exported from the `@azure/functions` module. For more information, see the [Node.js developer guide](functions-reference-node.md?pivots=nodejs-model-v4#inputs-and-outputs).
+
+### [v3](#tab/node-v3)
+
+In Node.js for Functions version 3, you configure triggers and bindings in a function-specific `function.json` file in the same folder as your code. For more information, see the [Node.js developer guide](functions-reference-node.md?pivots=nodejs-model-v3#inputs-and-outputs).
+++
+This example is an HTTP triggered function that creates a queue item for each HTTP request received.
+
+### [v4](#tab/node-v4)
+
+The `http` method on the exported `app` object defines an HTTP trigger, and the `storageQueue` method on `output` defines an output binding on this trigger.
++
+### [v3](#tab/node-v3)
+
+This example `function.json` file defines the HTTP trigger function that returns an HTTP response and writes to a storage queue.
```json
{
- "dataType": "binary",
- "type": "httpTrigger",
- "name": "req",
- "direction": "in"
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "function",
+ "name": "input"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "queue",
+ "direction": "out",
+ "name": "myQueueItem",
+ "queueName": "outqueue",
+ "connection": "MyStorageConnectionAppSetting"
+ }
+ ]
}
```
-Other options for `dataType` are `stream` and `string`.
++
+### [v4](#tab/node-v4)
+
+The `http` method on the exported `app` object defines an HTTP trigger, and the `storageQueue` method on `output` defines an output binding on this trigger.
+
-## Binding direction
+### [v3](#tab/node-v3)
+
+This example `function.json` file defines the HTTP trigger function that returns an HTTP response and writes to a storage queue.
+
+```json
+{
+ "bindings": [
+ {
+ "type": "httpTrigger",
+ "direction": "in",
+ "authLevel": "function",
+ "name": "input"
+ },
+ {
+ "type": "http",
+ "direction": "out",
+ "name": "$return"
+ },
+ {
+ "type": "queue",
+ "direction": "out",
+ "name": "myQueueItem",
+ "queueName": "outqueue",
+ "connection": "MyStorageConnectionAppSetting"
+ }
+ ]
+}
+```
-All triggers and bindings have a `direction` property in the [function.json](./functions-reference.md) file:
+This example `function.json` file defines the function:
-- For triggers, the direction is always `in`
-- Input and output bindings use `in` and `out`
-- Some bindings support a special direction `inout`. If you use `inout`, only the **Advanced editor** is available via the **Integrate** tab in the portal.
-When you use [attributes in a class library](functions-dotnet-class-library.md) to configure triggers and bindings, the direction is provided in an attribute constructor or inferred from the parameter type.
+For more information, see the [PowerShell developer guide](functions-reference-powershell.md#bindings).
+The way that the function is defined depends on the version of Python for Functions:
+### [v2](#tab/python-v2)
+
+In Python for Functions version 2, you define the function directly in code using decorators.
+++
+### [v1](#tab/python-v1)
+
+In Python for Functions version 1, this example `function.json` file defines an HTTP trigger function that returns an HTTP response and writes to a storage queue.
++++
## Add bindings to a function

You can connect your function to other services by using input or output bindings. Add a binding by adding its specific definitions to your function. To learn how, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md).
Specific binding extension versions are only supported while the underlying serv
## Bindings code examples
-Use the following table to find examples of specific binding types that show you how to work with bindings in your functions. First, choose the language tab that corresponds to your project.
+Use the following table to find more examples of specific binding types that show you how to work with bindings in your functions. First, choose the language tab that corresponds to your project.
[!INCLUDE [functions-bindings-code-example-chooser](../../includes/functions-bindings-code-example-chooser.md)]
Use the following table to find examples of specific binding types that show you
You can create custom input and output bindings. Bindings must be authored in .NET, but can be consumed from any supported language. For more information about creating custom bindings, see [Creating custom input and output bindings](https://github.com/Azure/azure-webjobs-sdk/wiki/Creating-custom-input-and-output-bindings).
-## Resources
+## Related content
+
- [Binding expressions and patterns](./functions-bindings-expressions-patterns.md)
-- [Using the Azure Function return value](./functions-bindings-return-value.md)
- [How to register a binding expression](./functions-bindings-register.md)
- Testing:
  - [Strategies for testing your code in Azure Functions](functions-test-a-function.md)
  - [Manually run a non HTTP-triggered function](functions-manually-run-non-http.md)
- [Handling binding errors](./functions-bindings-errors.md)
-
-## Next steps
-> [!div class="nextstepaction"]
-> [Register Azure Functions binding extensions](./functions-bindings-register.md)
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Title: 'Tutorial: Migrate a web app from Bing Maps | Microsoft Azure Maps'
-description: Tutorial on how to migrate a web app from Bing Maps to Microsoft Azure Maps.
+ Title: 'Migrate a web app from Bing Maps | Microsoft Azure Maps'
+description: How to migrate a web app from Bing Maps to Microsoft Azure Maps.
Previously updated : 10/28/2021- Last updated : 09/09/2024+
-# Tutorial: Migrate a web app from Bing Maps
+# Migrate a web app from Bing Maps
-Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery for display in your web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. This tutorial demonstrates how to:
+Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery for display in your web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. This article demonstrates how to:
> [!div class="checklist"] >
In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc
</html>
```

> [!TIP]
> In Azure Maps, the drawing tools provide multiple ways that users can draw shapes. For example, when drawing a polygon the user can click to add each point, or hold the left mouse button down and drag the mouse to draw a path. This can be modified using the `interactionType` option of the `DrawingManager`.
azure-netapp-files Cross Region Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-introduction.md
Previously updated : 09/05/2024 Last updated : 09/10/2024
Azure NetApp Files volume replication is supported between various [Azure region
| Qatar/Europe | Qatar Central | West Europe |
| North America | East US | East US 2 |
| North America | East US 2 | West US 2 |
+| North America | East US 2 | West US 3 |
| North America | North Central US | East US 2 |
| North America | South Central US | East US |
| North America | South Central US | East US 2 |
azure-netapp-files Faq Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-performance.md
Previously updated : 08/20/2024 Last updated : 09/10/2024 # Performance FAQs for Azure NetApp Files
This article answers frequently asked questions (FAQs) about Azure NetApp Files
## What should I do to optimize or tune Azure NetApp Files performance? You can take the following actions per the performance requirements: -- Ensure that the Virtual Machine is sized appropriately.
+- Ensure the virtual machine (VM) is sized appropriately.
- Enable Accelerated Networking for the VM. - Select the desired service level and size for the capacity pool. - Create a volume with the desired quota size for the capacity and performance.
-There is no need to set accelerated networking for the network interface cards (NICs) in the dedicated subnet of Azure NetApp Files. [Accelerated networking](../virtual-network/virtual-machine-network-throughput.md) is a capability that only applies to Azure virtual machines. Azure NetApp Files NICs are optimized by design.
+There is no need to set accelerated networking for the network interface cards (NICs) in the dedicated subnet of Azure NetApp Files. [Accelerated networking](../virtual-network/virtual-machine-network-throughput.md) is a capability that only applies to Azure VMs. Azure NetApp Files NICs are optimized by design.
## How do I monitor Azure NetApp Files volume performance

Azure NetApp Files volume performance can be monitored through [available metrics](azure-netapp-files-metrics.md).
-## How do I convert throughput-based service levels of Azure NetApp Files to IOPS?
+## How do I convert throughput-based service levels of Azure NetApp Files to input/output operations per second (IOPS)?
-You can convert MB/s to IOPS by using the following formula:
+You can convert megabytes per second (MBps) to IOPS with this formula:
`IOPS = (MBps Throughput / KB per IO) * 1024`
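For example, a volume driving 64 MBps at an 8-KB I/O size converts to `(64 / 8) * 1024 = 8,192 IOPS`.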
azure-signalr Concept Service Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-service-mode.md
description: An overview of service modes in Azure SignalR Service.
Previously updated : 09/01/2022 Last updated : 08/30/2024 # Service mode in Azure SignalR Service
-Service mode is an important concept in Azure SignalR Service. SignalR Service currently supports three service modes: *Default*, *Serverless*, and *Classic*. Your SignalR Service resource will behave differently in each mode. In this article, you'll learn how to choose the right service mode based on your scenario.
+Service mode is an important concept in Azure SignalR Service. SignalR Service currently supports three service modes: *Default*, *Serverless*, and *Classic*. Your SignalR Service resource behaves differently in each mode. In this article, you learn how to choose the right service mode based on your scenario.
## Setting the service mode
-You'll be asked to specify a service mode when you create a new SignalR resource in the Azure portal.
+You're asked to specify a service mode when you create a new SignalR resource in the Azure portal.
:::image type="content" source="media/concept-service-mode/create.png" alt-text="Azure portal - Choose service mode when creating a SignalR Service":::
Use `az signalr create` and `az signalr update` to set or change the service mode.
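
For example, the following is a minimal sketch, assuming placeholder resource names, of setting the mode at creation time and changing it later:

```azurecli
# Create a SignalR resource in Serverless mode (names are placeholders).
az signalr create --name MySignalR --resource-group MyResourceGroup \
    --sku Standard_S1 --service-mode Serverless

# Switch the existing resource to Default mode.
az signalr update --name MySignalR --resource-group MyResourceGroup \
    --service-mode Default
```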
## Default mode
-As the name implies, *Default* mode is the default service mode for SignalR Service. In Default mode, your application works as a typical [ASP.NET Core SignalR](/aspnet/core/signalr/introduction) or ASP.NET SignalR (deprecated) application. You have a web server application that hosts a hub, called a *hub server*, and clients have full duplex communication with the hub server. The difference between ASP.NET Core SignalR and Azure SignalR Service is instead of connecting client and hub server directly, client and server both connect to SignalR Service and use the service as a proxy. The following diagram shows the typical application structure in Default mode.
+As the name implies, *Default* mode is the default service mode for SignalR Service. In Default mode, your application works as a typical [ASP.NET Core SignalR](/aspnet/core/signalr/introduction) or ASP.NET SignalR (deprecated) application. You have a web server application that hosts a hub, called a *hub server*, and clients have full duplex communication with the hub server. The difference between ASP.NET Core SignalR and Azure SignalR Service is: With ASP.NET Core SignalR, the client connects directly to the hub server. With Azure SignalR Service, both the client and the hub server connect to SignalR Service and use the service as a proxy. The following diagram shows the typical application structure in Default mode.
:::image type="content" source="media/concept-service-mode/default.png" alt-text="Application structure in Default mode":::
Default mode is usually the right choice when you have a SignalR application tha
### Connection routing in Default mode
-In Default mode, there are WebSocket connections between hub server and SignalR Service called *server connections*. These connections are used to transfer messages between a server and client. When a new client is connected, SignalR Service will route the client to one hub server (assume you've more than one server) through existing server connections. The client connection will stick to the same hub server during its lifetime. This property is referred to as *connection stickiness*. When the client sends messages, they always go to the same hub server. With stickiness behavior, you can safely maintain some states for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case where data packets go to different servers.
+In Default mode, there are WebSocket connections between hub server and SignalR Service called *server connections*. These connections are used to transfer messages between a server and client. When a new client is connected, SignalR Service routes the client to one hub server (assume you have more than one server) through existing server connections. The client connection sticks to the same hub server during its lifetime. This property is referred to as *connection stickiness*. When the client sends messages, they always go to the same hub server. With stickiness behavior, you can safely maintain some states for individual connections on your hub server. For example, if you want to stream something between server and client, you don't need to consider the case where data packets go to different servers.
> [!IMPORTANT] > In Default mode a client cannot connect without a hub server being connected to the service first. If all your hub servers are disconnected due to network interruption or server reboot, your client connections will get an error telling you no server is connected. It's your responsibility to make sure there is always at least one hub server connected to SignalR service. For example, you can design your application with multiple hub servers, and then make sure they won't all go offline at the same time.
-The default routing model also means when a hub server goes offline, the connections routed to that server will be dropped. You should expect connections to drop when your hub server is offline for maintenance, and handle reconnection to minimize the effects on your application.
+The default routing model also means when a hub server goes offline, the connections routed to that server are dropped. You should expect connections to drop when your hub server is offline for maintenance, and handle reconnection to minimize the effects on your application.
> [!NOTE] > In Default mode you can also use REST API, management SDK, and function binding to directly send messages to a client if you don't want to go through a hub server. In Default mode client connections are still handled by hub servers and upstream endpoints won't work in that mode. ## Serverless mode
-Unlike Default mode, Serverless mode doesn't require a hub server to be running, which is why this mode is named "serverless." SignalR Service is responsible for maintaining client connections. There's no guarantee of connection stickiness and HTTP requests may be less efficient than WebSockets connections.
+Unlike Default mode, Serverless mode doesn't require a hub server to be running, which is why this mode is named "serverless." SignalR Service is responsible for maintaining client connections. There's no guarantee of connection stickiness, and HTTP requests might be less efficient than WebSocket connections.
Serverless mode works with Azure Functions to provide real time messaging capability. Clients work with [SignalR Service bindings for Azure Functions](../azure-functions/functions-bindings-signalr-service.md), called *function binding*, to send messages as an output binding.
-Because there's no server connection, if you try to use a server SDK to establish a server connection you'll get an error. SignalR Service will reject server connection attempts in Serverless mode.
+Because there's no server connection, if you try to use a server SDK to establish a server connection you get an error. SignalR Service rejects server connection attempts in Serverless mode.
Serverless mode doesn't have connection stickiness, but you can still have a server-side application push messages to clients. There are two ways to push messages to clients in Serverless mode:
Serverless mode doesn't have connection stickiness, but you can still have a ser
> [!NOTE] > Both REST API and WebSockets are supported in SignalR service [management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). If you're using a language other than .NET, you can also manually invoke the REST APIs following this [specification](https://github.com/Azure/azure-signalr/blob/dev/docs/rest-api.md).
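
As an illustration, a broadcast through the data-plane REST API looks roughly like the following sketch; the endpoint, hub name, and token are placeholders, and the token must be a JWT signed with the service access key as described in the specification linked above:

```bash
# Hypothetical endpoint, hub, and token; see the REST API specification for details.
curl -X POST "https://mysignalr.service.signalr.net/api/v1/hubs/chat" \
  -H "Authorization: Bearer <jwt-signed-with-access-key>" \
  -H "Content-Type: application/json" \
  -d '{"target": "newMessage", "arguments": ["Hello from the REST API"]}'
```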
-It's also possible for your server application to receive messages and connection events from clients. SignalR Service will deliver messages and connection events to pre-configured endpoints (called *upstream endpoints*) using web hooks. Upstream endpoints can only be configured in Serverless mode. For more information, see [Upstream endpoints](concept-upstream.md).
+It's also possible for your server application to receive messages and connection events from clients. SignalR Service delivers messages and connection events to preconfigured endpoints (called *upstream endpoints*) using web hooks. Upstream endpoints can only be configured in Serverless mode. For more information, see [Upstream endpoints](concept-upstream.md).
The following diagram shows how Serverless mode works.
> [!NOTE] > Classic mode is mainly for backward compatibility for applications created before the Default and Serverless modes were introduced. Don't use Classic mode except as a last resort. Use Default or Serverless for new applications, based on your scenario. You should consider redesigning existing applications to eliminate the need for Classic mode.
-Classic is a mixed mode of Default and Serverless modes. In Classic mode, connection type is decided by whether there's a hub server connected when the client connection is established. If there's a hub server, the client connection will be routed to a hub server. If a hub server isn't available, the client connection will be made in a limited serverless mode where client-to-server messages can't be delivered to a hub server. Classic mode serverless connections don't support some features such as upstream endpoints.
+Classic is a mixed mode of Default and Serverless modes. In Classic mode, connection type is decided by whether there's a hub server connected when the client connection is established. If there's a hub server, the client connection is routed to a hub server. If a hub server isn't available, the client connection is made in a limited serverless mode where client-to-server messages can't be delivered to a hub server. Classic mode serverless connections don't support some features such as upstream endpoints.
-If all your hub servers are offline for any reason, connections will be made in Serverless mode. It's your responsibility to ensure that at least one hub server is always available.
+If all your hub servers are offline for any reason, connections are made in Serverless mode. It's your responsibility to ensure that at least one hub server is always available.
## Choose the right service mode
Now you should understand the differences between service modes and know how to
- Choose Default mode if you're already familiar with how SignalR library works and want to move from a self-hosted SignalR to use Azure SignalR Service. Default mode works exactly the same way as self-hosted SignalR, and you can use the same programming model in SignalR library. SignalR Service acts as a proxy between clients and hub servers. -- Choose Serverless mode if you're creating a new application and don't want to maintain hub server and server connections. Serverless mode works together with Azure Functions so that you don't need to maintain any server at all. You can still have full duplex communications with REST API, management SDK, or function binding + upstream endpoint, but the programming model will be different than SignalR library.
+- Choose Serverless mode if you're creating a new application and don't want to maintain hub server and server connections. Serverless mode works together with Azure Functions so that you don't need to maintain any server at all. You can still have full duplex communications with REST API, management SDK, or function binding + upstream endpoint, but the programming model is different than SignalR library.
- Choose Default mode if you have *both* hub servers to serve client connections and a backend application to directly push messages to clients. The key difference between Default and Serverless mode is whether you have hub servers and how client connections are routed. REST API/management SDK/function binding can be used in both modes. -- If you really have a mixed scenario, you should consider separating use cases into multiple SignalR Service instances with service mode set according to use. An example of a mixed scenario that requires Classic mode is where you have two different hubs on the same SignalR resource. One hub is used as a traditional SignalR hub and the other hub is used with Azure Functions. This example should be split into two resources, with one instance in Default mode and one in Serverless mode.
+- If you have a mixed scenario, consider separating the use cases into multiple SignalR Service instances with the service mode set according to use. An example of a mixed scenario that requires Classic mode is when you have two different hubs on the same SignalR resource. One hub is used as a traditional SignalR hub and the other hub is used with Azure Functions. This example should be split into two resources, with one instance in Default mode and one in Serverless mode.
## Next steps
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-messages-and-connections.md
description: An overview of key concepts about messages and connections in Azure
Previously updated : 03/23/2023 Last updated : 09/03/2024 # Messages and connections in Azure SignalR Service
Large messages do negatively affect messaging performance. Use smaller messages
## How messages are counted for billing
-Messages sent into the service are inbound messages and messages sent out of the service are outbound messages. Only outbound messages from Azure SignalR Service are counted for billing. Ping messages between clients and servers are ignored.
+Messages sent into the service are inbound messages and messages sent out of the service are outbound messages. Only outbound messages from Azure SignalR Service are counted for billing. Ping messages between clients and servers are ignored.
Messages larger than 2 KB are counted as multiple messages of 2 KB each. The message count chart in the Azure portal is updated every 100 messages per hub.
For example, imagine you have one application server, and three clients:
* When *client A* sends a 1 KB inbound message to *client B*, without going through app server, the message is a free inbound message. The message routed from service to *client B* is billed as an outbound message.
-* If you have three clients and one application server, when one client sends a 4-KB message for the server broadcast to all clients, the billed message count is eight:
+* When one client sends a 4-KB message for the server to broadcast to all clients, and there are three clients and one application server, the billed message count is eight:
* One message from the service to the application server. * Three messages from the service to the clients. Each message is counted as two 2-KB messages.
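
  In other words, four routed messages (one to the application server plus three to the clients), each billed as two 2-KB messages, gives 4 * 2 = 8.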
A live trace connection isn't counted as a client connection or as a server conn
ASP.NET SignalR calculates server connections in a different way. It includes one default hub in addition to hubs that you define. By default, each application server needs five more initial server connections. The initial connection count for the default hub stays consistent with other hubs.
-The service and the application server keep syncing connection status and making adjustments to server connections to get better performance and service stability. So you may see changes in the number of server connections in your running service.
+The service and the application server keep syncing connection status and making adjustments to server connections to get better performance and service stability. So you might see changes in the number of server connections in your running service.
## Related resources
azure-signalr Signalr Quickstart Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-quickstart-rest-api.md
description: Learn how to use REST API with Azure SignalR Service following samp
Previously updated : 11/13/2019 Last updated : 09/03/2024
Having issues? Try the [troubleshooting guide](signalr-howto-troubleshoot-guide.
## Clone the sample application
-While the service is being deployed, let's switch to prepare the code. Clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git), set the SignalR Service connection string, and run the application locally.
+While the service is being deployed, let's get the code ready. First, clone the [sample app from GitHub](https://github.com/aspnet/AzureSignalR-samples.git). Next, set the SignalR Service connection string to the app. Finally, run the application locally.
1. Open a git terminal window. Change to a folder where you want to clone the sample project.
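
   The clone itself is a single command; the repository URL comes from the link above:

   ```bash
   git clone https://github.com/aspnet/AzureSignalR-samples.git
   cd AzureSignalR-samples
   ```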
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
In this scenario, the primary site is an Azure VMware Solution private cloud in
## Install Zerto on Azure VMware Solution
-To deploy Zerto on Azure VMware Solution, follow these [instructions](https://help.zerto.com/bundle/Install.AVS.HTML/page/Prerequisites_Zerto_AVS.htm).
+To deploy Zerto on Azure VMware Solution, follow these [instructions](/azure/azure-vmware/deploy-zerto-disaster-recovery#install-zerto-on-azure-vmware-solution).
## FAQs
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases back up errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 07/30/2024 Last updated : 09/10/2024 - # Troubleshoot backup of SAP HANA databases on Azure
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
| **Error message** | `Backup log chain is broken` | | | |
-| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade terminates an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but catalog isn't yet updated with success in HANA system.</li><li>Backup failed from Azure Backup perspective, but success from the perspective of HANA - the log backup/catalog destination might have been updated from Backint-to-file system, or the Backint executable might have been changed.</li></ul> |
-| **Recommended action** | To resolve this issue, Azure Backup triggers an auto-heal Full backup. While this auto-heal backup is in progress, all log backups are triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**. Once the auto-heal Full backup is complete, logs and all other backups start working as expected.<br>If you don't see an auto-heal full backup triggered or any successful backup (Full/Differential/ Incremental) in 24 hours, contact Microsoft support.</br> |
+| **Possible causes** | HANA LSN Log chain break can be triggered for various reasons, including:<ul><li>Azure Storage call failure to commit backup.</li><li>The Tenant DB is offline.</li><li>Extension upgrade has terminated an in-progress Backup job.</li><li>Unable to connect to Azure Storage during backup.</li><li>SAP HANA has rolled back a transaction in the backup process.</li><li>A backup is complete, but the catalog isn't yet updated with success in the HANA system.</li><li>Backup failed from the Azure Backup perspective but succeeded from the HANA perspective: the log backup/catalog destination might have been updated from Backint-to-file system, or the Backint executable might have been changed.</li></ul> |
+| **Recommended action** | To resolve this issue, Azure Backup triggers an autoheal Full backup. While this autoheal backup is in progress, all log backups triggered by HANA fail with **OperationCancelledBecauseConflictingAutohealOperationRunningUserError**. Once the autoheal Full backup is complete, logs and all other backups start working as expected.<br>If you don't see an autoheal full backup triggered or any successful backup (Full/Differential/Incremental) in 24 hours, contact Microsoft support.</br> |
### UserErrorSDCtoMDCUpgradeDetected
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
|**Error message** | `The source and target systems for restore are incompatible.` | ||| |**Possible causes** | The restore flow fails with this error when the source and target HANA databases, and systems are incompatible. |
-|Recommended action | Ensure that your restore scenario isn't in the following list of possible incompatible restores:<br> **Case 1:** SYSTEMDB can't be renamed during restore.<br>**Case 2:** Source ΓÇö SDC and target ΓÇö MDC: The source database can't be restored as SYSTEMDB or tenant DB on the target. <br> **Case 3:** Source ΓÇö MDC and target ΓÇö SDC: The source database (SYSTEMDB or tenant DB) can't be restored to the target.<br>To learn more, see the note **1642148** in the [SAP support launchpad](https://launchpad.support.sap.com). |
+|Recommended action | Ensure that your restore scenario isn't in the following list of possible incompatible restores:<br> **Case 1:** SYSTEMDB can't be renamed during restore.<br>**Case 2:** Source - SDC and target - MDC: The source database can't be restored as SYSTEMDB or tenant DB on the target. <br> **Case 3:** Source - MDC and target - SDC: The source database (SYSTEMDB or tenant DB) can't be restored to the target.<br>To learn more, see the note **1642148** in the [SAP support launchpad](https://launchpad.support.sap.com). |
### UserErrorHANAPODoesNotExist
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
**Error message** | `Azure Backup does not have enough privileges to carry out Backup and Restore operations.` - |
-**Possible causes** | Backup user (AZUREWLBACKUPHANAUSER) created by the preregistration script doesn't have one or more of the following roles assigned:<ul><li>For MDC, DATABASE ADMIN and BACKUP ADMIN (for HANA 2.0 SPS05 and later) create new databases during restore.</li><li>For SDC, BACKUP ADMIN creates new databases during restore.</li><li>CATALOG READ to read the backup catalog.</li><li>SAP_INTERNAL_HANA_SUPPORT to access a few private tables. Only required for SDC and MDC versions prior to HANA 2.0 SPS04 Rev 46. It's not required for HANA 2.0 SPS04 Rev 46 and later. This is because we're getting the required information from public tables now with the fix from HANA team.</li></ul>
+**Possible causes** | Backup user (AZUREWLBACKUPHANAUSER) created by the pre-registration script doesn't have one or more of the following roles assigned:<ul><li>For MDC, DATABASE ADMIN and BACKUP ADMIN (for HANA 2.0 SPS05 and later) create new databases during restore.</li><li>For SDC, BACKUP ADMIN creates new databases during restore.</li><li>CATALOG READ to read the backup catalog.</li><li>SAP_INTERNAL_HANA_SUPPORT to access a few private tables. Only required for SDC and MDC versions prior to HANA 2.0 SPS04 Rev 46. It's not required for HANA 2.0 SPS04 Rev 46 and later. This is because we're getting the required information from public tables now with the fix from HANA team.</li></ul>
**Recommended action** | To resolve the issue, add the required roles and permissions manually to the Backup user (AZUREWLBACKUPHANAUSER). Or, you can download and run the preregistration script on the [SAP HANA instance](https://aka.ms/scriptforpermsonhana). ### UserErrorDatabaseUserPasswordExpired **Error message** | `Database/Backup user's password expired.` -- | --
-**Possible causes** | The Database/Backup user created by the preregistration script doesn't set expiry for the password. However, if it was altered, you may see this error.
+**Possible causes** | The Database/Backup user created by the pre-registration script doesn't have a password expiry set. However, if the password was altered, you might see this error.
**Recommended action** | Download and run the [pre-registration script](https://aka.ms/scriptforpermsonhana) on the SAP HANA instance to resolve the issue. ### UserErrorInconsistentSSFS
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
**Error message** | `Pre-registration script not run.` | --
-**Possible causes** | The SAP HANA preregistration script to set up the environment hasn't been run.
+**Possible causes** | The SAP HANA pre-registration script to set up the environment hasn't been run.
**Recommended action** | Download and run the [pre-registration script](https://aka.ms/scriptforpermsonhana) on the SAP HANA instance.
See the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [What
**Error message** | `Operation is blocked as the vault has reached its maximum limit for such operations permitted in a span of 24 hours.` | --
-**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state. In fact, Azure Backup service will retry the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it'll trigger the configure protection jobs for each of the protected items and sometimes may hit the maximum limit permissible for such operations per day.
+**Possible causes** | This error appears when you've reached the maximum permissible limit for an operation in a span of 24 hours. It usually appears when there are at-scale operations, such as modifying a policy or enabling auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state; the Azure Backup service retries the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it will trigger configure protection jobs for each of the protected items and sometimes might hit the maximum limit permissible for such operations per day.
**Recommended action** | Azure Backup service will automatically retry this operation after 24 hours. ### UserErrorInvalidBackint
Note the following points:
### Multiple Container Database (MDC) restore
-In multiple container databases for HANA, the standard configuration is SYSTEMDB + 1 or more Tenant DBs. Restore of an entire SAP HANA instance restores both SYSTEMDB and Tenant DBs. One restores SYSTEMDB first and then proceeds for Tenant DB. System DB essentially means to override the system information on the selected target. This restore also overrides the BackInt related information in the target instance. So after the system DB is restored to a target instance, run the preregistration script again. Only then the subsequent tenant DB restores will succeed.
+In multiple container databases for HANA, the standard configuration is SYSTEMDB + 1 or more Tenant DBs. Restoring an entire SAP HANA instance restores both SYSTEMDB and the Tenant DBs: SYSTEMDB is restored first, followed by the Tenant DBs. Restoring the system DB essentially overrides the system information on the selected target, including the BackInt-related information in the target instance. So, after the system DB is restored to a target instance, run the pre-registration script again. Only then will the subsequent tenant DB restores succeed.
## Back up a replicated VM
This scenario could include two possible cases. Learn how to back up the replica
1. The new VM created has the same name, and is in the same resource group and subscription as the deleted VM. - The extension is already present on the VM, but isn't visible to any of the services
- - Run the preregistration script
+ - Run the pre-registration script
- Re-register the extension for the same machine in the Azure portal (**Backup** -> **View details** -> Select the relevant Azure VM -> Re-register) - The already existing backed up databases (from the deleted VM) should then start successfully being backed up
This scenario could include two possible cases. Learn how to back up the replica
If so, then follow these steps: - The extension is already present on the VM, but isn't visible to any of the services
- - Run the preregistration script
+ - Run the pre-registration script
- If you discover and protect the new databases, you start seeing duplicate active databases in the portal. To avoid this, [Stop protection with retain data](sap-hana-db-manage.md#stop-protection-for-an-sap-hana-database) for the old databases. Then continue with the remaining steps. - Discover the databases - Enable backups on these databases
The original VM was replicated using Azure Site Recovery or Azure VM backup. The
Follow these steps to enable backups on the new VM: - The extension is already present on the VM, but not visible to any of the services-- Run the preregistration script. Based on the SID of the new VM, two scenarios can arise:
- - The original VM and the new VM have the same SID. The preregistration script runs successfully.
- - The original VM and the new VM have different SIDs. The preregistration script fails. Contact support to get help in this scenario.
+- Run the pre-registration script. Based on the SID of the new VM, two scenarios can arise:
+ - The original VM and the new VM have the same SID. The pre-registration script runs successfully.
+ - The original VM and the new VM have different SIDs. The pre-registration script fails. Contact Microsoft support to get help in this scenario.
- Discover the databases that you want to back up - Enable backups on these databases
These symptoms might arise for one or more of the following reasons:
In the preceding scenarios, we recommend that you trigger a re-register operation on the VM.
+## Back up SAP HANA database logs
+
+### Log backup isn't triggered despite the full backup's success
+
+**Possible cause**: The SAP HANA configuration values that control log backup are set incorrectly.
+
+**Recommended action**: Ensure that the following values for SAP HANA configuration are set correctly (a sketch for setting them follows the list):
+
+- `enable_auto_log_backup`: Yes
+- `log_backup_using_backint`: True
+- `catalog_backup_using_backint`: True
+- `log_mode`: normal
+- `log_backup_timeout_s`: Same as Azure portal's log backup policy (frequency is in seconds).
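+
+As an illustration, these parameters can be set through SQL on the HANA system. The following is a minimal sketch, assuming instance `00`, SYSTEMDB access, and that both parameters live in the `persistence` section of `global.ini`; verify the section names for your HANA revision before running it:
+
+```bash
+# Hypothetical instance number and credentials; adjust for your system.
+hdbsql -i 00 -d SYSTEMDB -u SYSTEM -p '<password>' \
+  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'enable_auto_log_backup') = 'yes', ('persistence', 'log_backup_timeout_s') = '900' WITH RECONFIGURE"
+```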
+ ## Next step - Review the [frequently asked questions](./sap-hana-faq-backup-azure-vm.yml) about the backup of SAP HANA databases on Azure VMs.
backup Backup Azure Sql Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md
Title: Back up SQL server databases in Azure VMs using Azure Backup via CLI description: Learn how to use CLI to back up SQL server databases in Azure VMs in the Recovery Services vault. Previously updated : 08/11/2022 Last updated : 09/10/2024
See the [currently supported scenarios](sql-support-matrix.md) for SQL in Azure
A Recovery Services vault is a logical container that stores the backup data for each protected resource, such as Azure VMs or workloads running on Azure VMs - for example, SQL or HANA databases. When the backup job for a protected resource runs, it creates a recovery point inside the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
-Create a Recovery Services vault with the [az backup vault create](/cli/azure/backup/vault#az-backup-vault-create) command. Use the resource group and location as that of the VM you want to protect. Learn how to create a VM using Azure CLI with [this VM quickstart](/azure/virtual-machines/linux/quick-create-cli).
+Create a Recovery Services vault with the [az backup vault create](/cli/azure/backup/vault#az-backup-vault-create) command. Use the resource group and location as that of the VM you want to protect. Learn how to create a [Windows VM](/azure/virtual-machines/windows/quick-create-cli) and a [Linux VM](/azure/virtual-machines/linux/quick-create-cli) using Azure CLI.
For this article, we'll use:
backup Backup Azure Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-database.md
Add **NT AUTHORITY\SYSTEM** and **NT Service\AzureWLBackupPluginSvc** logins to
7. Select OK. 8. Repeat the same sequence of steps (1-7 above) to add NT Service\AzureWLBackupPluginSvc login to the SQL Server instance. If the login already exists, make sure it has the sysadmin server role and under Status it has Grant the Permission to connect to database engine and Login as Enabled.
-9. After granting permission, **Rediscover DBs** in the portal: Vault **->** Backup Infrastructure **->** Workload in Azure VM:
+9. After granting permission, **Rediscover DBs** in the portal: Vault **->** Manage **->** Backup Infrastructure **->** Workload in Azure VM:
![Rediscover DBs in Azure portal](media/backup-azure-sql-database/sql-rediscover-dbs.png)
backup Selective Disk Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/selective-disk-backup-restore.md
Title: Selective disk backup and restore for Azure virtual machines description: In this article, learn about selective disk backup and restore using the Azure virtual machine backup solution. Previously updated : 10/16/2023 Last updated : 08/21/2024
az backup protection enable-for-vm --resource-group {resourcegroup} --vault-name
### Backup only OS disk during modify protection with Azure CLI ```azurecli
-az backup protection update-for-vm --resource-group {resourcegroup} --vault-name {vaultname} -c {vmname} -i {vmname} --backup-management-type AzureIaasVM --exclude-all-data-disks
+az backup protection update-for-vm --vault-name MyVault --resource-group MyResourceGroup --container-name MyContainer --item-name MyItem --disk-list-setting exclude --diskslist 1
``` ### Restore disks with Azure CLI
container-registry Connected Registry Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/connected-registry-glossary.md
+
+ Title: "Glossary for connected registry with Azure Arc"
+description: "Learn the terms and definitions for the connected registry extension with Azure Arc for a seamless extension deployment."
++++ Last updated : 06/18/2024
+#customer intent: As a customer, I want to understand the terms and definitions for the connected registry extension with Azure Arc for a successful deployment.
+++
+# Glossary for Connected registry with Azure Arc
+
+This glossary provides terms and definitions for the connected registry extension with Azure Arc for a seamless extension deployment.
+
+## Glossary of terms
+
+### Auto-upgrade-version
+
+- **Definition:** Automatically upgrade the version of the extension instance.
+- **Accepted Values:** `true`, `false`
+- **Default Value:** `false`
+- **Note:** [Azure Connected Machine agent](/azure/azure-arc/servers/agent-overview) manages the upgrade process and automatic rollback.
+
+### Bring Your Own Certificate (BYOC)
+
+- **Definition:** Allows customers to use their own certificate management service.
+- **Accepted Values:** Kubernetes Secret or Public Certificate + Private Key pair
+- **Note:** Customer must specify.
+
+### Cert-manager.enabled
+
+- **Definition:** Enables cert-manager service for use with the connected registry, handling the TLS certificate management lifecycle.
+- **Accepted Values:** `true`, `false`
+- **Default Value:** `true`
+- **Note:** Customers can either use the cert-manager service provided at deployment or bring their own (it must already be installed).
+
+### Cert-manager.install
+
+- **Definition:** Installs the cert-manager tool as part of the extension deployment.
+- **Accepted Values:** `true`, `false`
+- **Default Value:** `true`
+- **Note:** Must be set to `false` if a customer is using their own cert-manager service.
+
+### Child Registry
+
+- **Description:** A registry that synchronizes with its parent (top-level) registry. The modes of the parent and child registries must match to ensure compatibility.
+
+### Client Token
+
+- **Definition:** Manages client access to a connected registry, allowing for actions on one or more repositories.
+- **Accepted Values:** Token name
+- **Note:** After creating a token, configure the connected registry to accept it using the `az acr connected-registry update` command, as in the sketch following this entry.
+
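+A minimal sketch of attaching an existing client token, assuming placeholder registry and token names:
+
+```azurecli
+# Hypothetical names; the client token must already exist in the cloud registry.
+az acr connected-registry update --registry myacrregistry \
+    --name myconnectedregistry \
+    --add-client-tokens myclienttoken
+```
+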
+### Cloud Registry
+
+- **Description:** The ACR registry from which the connected registry syncs artifacts.
+
+### Cluster-name
+
+- **Definition:** The name of the Arc cluster for which the extension is deployed.
+- **Accepted Values:** Alphanumerical value
+
+### Cluster-type
+
+- **Definition:** Specifies the type of Arc cluster for the extension deployment.
+- **Accepted Values:** `connectedCluster`
+- **Default Value:** `connectedCluster`
+
+### Single configuration value (--config)
+
+- **Definition:** The configuration parameters and values for deploying the connected registry extension on the Arc Kubernetes cluster.
+- **Accepted Values:** Alphanumerical value
+
+### Connection String
+
+- **Value Type:** Alphanumerical
+- **Customer Action:** Must generate and specify
+- **Description:** The connection string contains authorization details necessary for the connected registry to securely connect and sync data with the cloud registry using Shared Key authorization. It includes the connected registry name, sync token name, sync token password, parent gateway endpoint, and parent endpoint protocol.
+
+### Connected Registry
+
+- **Description:** The on-premises or remote registry replica that facilitates local access to containerized workloads synchronized from the ACR registry.
+
+### Data-endpoint-enabled
+
+- **Definition:** Enables a [dedicated data endpoint](/azure/container-registry/container-registry-dedicated-data-endpoints) for client firewall configuration.
+- **Accepted Values:** `true`, `false`
+- **Default Value:** `false`
+- **Note:** Must be enabled for a successful creation of a connected registry.
+
+### Extension-type
+
+- **Definition:** Specifies the extension provider unique name for the extension deployment.
+- **Accepted Values:** `Microsoft.ContainerRegistry.ConnectedRegistry`
+- **Default Value:** `Microsoft.ContainerRegistry.ConnectedRegistry`
+
+### Kubernetes Secret
+
+- **Definition:** A Kubernetes managed secret for securely accessing data across pods within a cluster.
+- **Accepted Values:** Secret name
+- **Note:** Customer must specify.
+
+### Message TTL (Time To Live)
+
+- **Value Type:** Numerical
+- **Default Value/Behavior:** Every two days
+- **Description:** Message TTL defines the duration sync messages are retained in the cloud. This value isn't applicable when the sync schedule is continuous.
+
+### Modes
+
+- **Accepted Values:** `ReadOnly` and `ReadWrite`
+- **Default Value/Behavior:** `ReadOnly`
+- **Description:** Defines the operational permissions for client access to the connected registry. In `ReadOnly` mode, clients can only pull (read) artifacts, which is also suitable for nested scenarios. In `ReadWrite` mode, clients can pull (read) and push (write) artifacts, which is ideal for local development environments.
+
+### Parent Registry
+
+- **Description:** The primary registry that synchronizes with its child connected registries. A single parent registry can have multiple child registries connected to it. In a nested scenario, there can be multiple layers of registries within the hierarchy.
+
+### Protected Settings File (--config-protected-file)
+
+- **Definition:** The file containing the connection string for deploying the connected registry extension on the Kubernetes cluster. This file would also include the Kubernetes Secret or Public Cert + Private Key values pair for BYOC scenarios.
+- **Accepted Values:** Alphanumerical value
+- **Note:** Customer must specify.
+
+### Public Certificate + Private Key
+
+- **Value Type:** Alphanumerical base64-encoded
+- **Customer Action:** Must specify
+- **Description:** The public key certificate comprises a pair of keys: a public key available to anyone for identity verification of the certificate holder, and a private key, a unique secret key.
+
+### Pvc.storageClassName
+
+- **Definition:** Specifies the storage class in use on the cluster.
+- **Accepted Values:** `standard`, `azurefile`
+
+### Pvc.storageRequest
+
+- **Definition:** Specifies the storage size that the connected registry claims in the cluster.
+- **Accepted Values:** Alphanumerical value (for example, "500Gi")
+- **Default Value:** `500Gi`
+
+### Service.ClusterIP
+
+- **Definition:** The IP address within the Kubernetes service cluster IP range.
+- **Accepted Values:** IPv4 or IPv6 format
+- **Note:** Customer must specify. An IP address outside the range results in a failed extension deployment.
+
+### Sync Token
+
+- **Definition:** A token used by each connected registry to authenticate with its immediate parent for content synchronization and updates.
+- **Accepted Values:** Token name
+- **Action:** Customer action required.
+
+### Synchronization Schedule
+
+- **Value Type:** Numerical
+- **Default Value/Behavior:** Every minute
+- **Description:** The synchronization schedule, set using a cron expression, determines the cadence for when the registry syncs with its parent. A sketch of updating the schedule follows this entry.
+
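+A minimal sketch, assuming placeholder names, of setting a five-minute sync cadence with a cron expression:
+
+```azurecli
+# "*/5 * * * *" runs the sync every five minutes.
+az acr connected-registry update --registry myacrregistry \
+    --name myconnectedregistry \
+    --sync-schedule "*/5 * * * *"
+```
+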
+### Synchronization Window
+
+- **Value Type:** Alphanumerical
+- **Default Value/Behavior:** Hourly
+- **Description:** The synchronization window specifies the sync duration. This parameter is disregarded if the sync schedule is continuous.
+
+### TrustDistribution.enabled
+
+- **Definition:** Trust distribution refers to the process of securely distributing trust between the connected registry and all client nodes within a Kubernetes cluster. When enabled, all nodes are configured with trust distribution.
+- **Accepted Values:** `true`, `false`
+- **Note:** Customer must choose `true` or `false`.
+
+### TrustDistribution.useNodeSelector
+
+- **Definition:** By default, the trust distribution daemonsets, which are responsible for configuring the container runtime environment (containerd), will run on all nodes in the cluster. However, with this setting enabled, trust distribution is limited to only those nodes that have been specifically labeled with `containerd-configured-by: connected-registry`.
+- **Accepted Values:** `true`, `false`
+- **Label:** `containerd-configured-by=connected-registry`
+- **Command to specify nodes for trust distribution:** `kubectl label node/[node name] containerd-configured-by=connected-registry`
++
+### Registry Hierarchy
+
+- **Description:** The structure of connected registries, where each connected registry is linked to a parent registry. The top parent in this hierarchy is the ACR registry.
container-registry Pull Images From Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/pull-images-from-connected-registry.md
ms.devlang: azurecli
-# Pull images from a connected registry on IoT Edge device
+# Pull images from a connected registry on IoT Edge device (To be deprecated)
To pull images from a [connected registry](intro-connected-registry.md), configure a [client token](overview-connected-registry-access.md#client-tokens) and pass the token credentials to access registry content.
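
For instance, a minimal sketch, assuming a connected registry reachable at `192.168.0.100` (plus a port, if one is configured) and an existing client token named `mytoken`, of logging in and pulling an image:

```bash
# Hypothetical address and credentials; use your connected registry's login server and client token.
docker login 192.168.0.100 --username mytoken --password <token-password>
docker pull 192.168.0.100/hello-world:latest
```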
container-registry Quickstart Connected Registry Arc Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-arc-cli.md
+
+ Title: "Quickstart: Deploying the connected registry Arc extension"
+description: "Learn how to deploy the Connected Registry Arc Extension CLI UX with secure-by-default settings for efficient and secure container workload operations."
++++ Last updated : 05/09/2024
+ai-usage: ai-assisted
+
+#customer intent: As a user, I want to learn how to deploy the connected registry Arc extension using the CLI UX with secure-by-default settings, such as using HTTPS, Read Only, Trust Distribution, and Cert Manager service, so that I can ensure the secure and efficient operation of my container workloads.
++
+# Quickstart: Deploy the connected registry Arc extension (preview)
+
+In this quickstart, you learn how to deploy the Connected registry Arc extension using the CLI UX with secure-by-default settings to ensure robust security and operational integrity.
+
+The connected registry is a pivotal tool for edge customers, enabling efficient management and access to containerized workloads, whether on-premises or at remote sites. By integrating with Azure Arc, the service ensures a seamless and unified lifecycle management experience for Kubernetes-based containerized workloads. Deploying the connected registry Arc extension on Arc-enabled Kubernetes clusters simplifies the management and access of these workloads.
+
+## Prerequisites
+
+* Set up the [Azure CLI][Install Azure CLI] to connect to Azure and Kubernetes.
+
+* Create or use an existing Azure Container Registry (ACR) by following the [quickstart][create-acr].
+
+* Set up the firewall access and communication between the ACR and the connected registry by enabling the [dedicated data endpoints.][dedicated data endpoints]
+
+* Create or use an existing Azure Kubernetes Service (AKS) cluster by following the [tutorial][tutorial-aks-cluster].
+
+* Set up the connection between the Kubernetes cluster and Azure Arc by following the [quickstart][quickstart-connect-cluster].
+
+* Use the [k8s-extension][k8s-extension] command to manage Kubernetes extensions.
+
+ ```azurecli
+ az extension add --name k8s-extension
+ ```
+* Register the required [Azure resource providers][azure-resource-provider-requirements] in your subscription and use Azure Arc-enabled Kubernetes:
+
+ ```azurecli
+ az provider register --namespace Microsoft.Kubernetes
+ az provider register --namespace Microsoft.KubernetesConfiguration
+ az provider register --namespace Microsoft.ExtendedLocation
+ ```
+ An Azure resource provider is a set of REST operations that enable functionality for a specific Azure service.
+
+* Create a repository in the ACR registry to synchronize with the connected registry:
+
+ ```azurecli
+ az acr import --name myacrregistry --source mcr.microsoft.com/mcr/hello-world:latest --image hello-world:latest
+ ```
+
+ The `hello-world` repository is created in the ACR registry `myacrregistry` to synchronize with the Connected registry.
++
+## Deploy the connected registry Arc extension with secure-by-default settings
+
+Once the prerequisites and necessary conditions and components are in place, follow the streamlined approach to securely deploy a connected registry extension on an Arc-enabled Kubernetes cluster using the following settings. These settings define the following configuration with HTTPS, Read Only, Trust Distribution, and Cert Manager service. Follow the steps for a successful deployment:
+
+1. [Create the connected registry.](#create-the-connected-registry-and-synchronize-with-acr)
+2. [Deploy the connected registry Arc extension.](#deploy-the-connected-registry-arc-extension-on-the-arc-enabled-kubernetes-cluster)
+3. [Verify the connected registry extension deployment.](#verify-the-connected-registry-extension-deployment)
+4. [Deploy a pod that uses image from connected registry.](#deploy-a-pod-that-uses-an-image-from-connected-registry)
++
+### Create the connected registry and synchronize with ACR
+
+Creating the connected registry to synchronize with ACR is the foundational step for deploying the connected registry Arc extension.
+
+1. Create the connected registry, which synchronizes with the ACR registry:
+
+ To create a connected registry `myconnectedregistry` that synchronizes with the ACR registry `myacrregistry` in the resource group `myresourcegroup` and the repository `hello-world`, you can run the [az acr connected-registry create][az-acr-connected-registry-create] command:
+
+ ```azurecli
+ az acr connected-registry create --registry myacrregistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --repository "hello-world"
+ ```
+
+- The [az acr connected-registry create][az-acr-connected-registry-create] command creates the connected registry with the specified repository.
+- The [az acr connected-registry create][az-acr-connected-registry-create] command overwrites actions if the sync scope map named `myscopemap` exists and overwrites properties if the sync token named `mysynctoken` exists.
+- The [az acr connected-registry create][az-acr-connected-registry-create] command validates a dedicated data endpoint during the creation of the connected registry and provides a command to enable the dedicated data endpoint on the ACR registry.
+
+### Deploy the connected registry Arc extension on the Arc-enabled Kubernetes cluster
+
+By deploying the connected Registry Arc extension, you can synchronize container images and other Open Container Initiative (OCI) artifacts with your ACR registry. The deployment helps speed-up access to registry artifacts and enables the building of advanced scenarios. The extension deployment ensures secure trust distribution between the connected registry and all client nodes within the cluster, and installs the cert-manager service for Transport Layer Security (TLS) encryption.
+
+1. Generate the Connection String and Protected Settings JSON File
+
+    For secure deployment of the connected registry extension, generate the connection string (including a new password and the transport protocol) and create the `protected-settings-extension.json` file required for the extension deployment with the [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command:
+
+```bash
+ cat << EOF > protected-settings-extension.json
+ {
+ "connectionString": "$(az acr connected-registry get-settings \
+ --name myconnectedregistry \
+ --registry myacrregistry \
+ --parent-protocol https \
+ --generate-password 1 \
+ --query ACR_REGISTRY_CONNECTION_STRING --output tsv --yes)"
+ }
+ EOF
+```
+
+```azurepowershell
+ echo "{\"connectionString\":\"$(az acr connected-registry get-settings \
+ --name myconnectedregistry \
+ --registry myacrregistry \
+ --parent-protocol https \
+ --generate-password 1 \
+ --query ACR_REGISTRY_CONNECTION_STRING \
+ --output tsv \
+  --yes | tr -d '\r')\" }" > protected-settings-extension.json
+```
+
+>[!NOTE]
+> The cat and echo commands create the `protected-settings-extension.json` file with the connection string details, injecting the contents of the connection string into the `protected-settings-extension.json` file, a necessary step for the extension deployment. The [az acr connected-registry get-settings][az-acr-connected-registry-get-settings] command generates the connection string, including the creation of a new password and the specification of the transport protocol.
+
+2. Deploy the connected registry extension
+
+ Deploy the connected registry extension with the specified configuration details using the [az k8s-extension create][az-k8s-extension-create] command:
+
+ ```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config-protected-file protected-settings-extension.json
+ ```
+
+- The [az k8s-extension create][az-k8s-extension-create] command deploys the connected registry extension on the Kubernetes cluster with the provided configuration parameters and protected settings file.
+- It ensures secure trust distribution between the connected registry and all client nodes within the cluster, and installs the cert-manager service for Transport Layer Security (TLS) encryption.
+- The clusterIP must be from the AKS cluster subnet IP range. The `service.clusterIP` parameter specifies the IP address of the connected registry service within the cluster. Set `service.clusterIP` within the range of valid service IPs for the Kubernetes cluster: it must fall within the designated service IP range defined during the cluster's initial configuration, typically found in the cluster's networking settings, and must not already be in use by another service. A sketch for checking the range follows this list.
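+
+A minimal sketch, assuming kubectl access, of discovering the service IP range before picking a value for `service.clusterIP`; on managed clusters where the API server isn't visible to kubectl, read the range from the cluster's networking settings instead:
+
+```bash
+# Read the service CIDR from the API server arguments (works where the API server is dumped).
+kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
+
+# List the IPs already claimed by services so you can pick an unused address.
+kubectl get services --all-namespaces
+```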
++
+### Verify the connected registry extension deployment
+
+To verify the deployment of the connected registry extension on the Arc-enabled Kubernetes cluster, follow these steps:
+
+1. Verify the deployment status
+
+ Run the [az k8s-extension show][az-k8s-extension-show] command to check the deployment status of the connected registry extension:
+
+ ```azurecli
+ az k8s-extension show --name myconnectedregistry \
+ --cluster-name myarck8scluster \
+ --resource-group myresourcegroup \
+ --cluster-type connectedClusters
+ ```
+
+ **Example Output**
+
+ ```output
+ {
+ "aksAssignedIdentity": null,
+ "autoUpgradeMinorVersion": true,
+ "configurationProtectedSettings": {
+ "connectionString": ""
+ },
+ "configurationSettings": {
+ "pvc.storageClassName": "standard",
+ "pvc.storageRequest": "250Gi",
+ "service.clusterIP": "[your service cluster ip]"
+ },
+ "currentVersion": "0.11.0",
+ "customLocationSettings": null,
+ "errorInfo": null,
+ "extensionType": "microsoft.containerregistry.connectedregistry",
+ "id": "/subscriptions/[your subscription id]/resourceGroups/[your resource group name]/providers/Microsoft.Kubernetes/connectedClusters/[your arc cluster name]/providers/Microsoft.KubernetesConfiguration/extensions/[your extension name]",
+ "identity": {
+ "principalId": "[identity principal id]",
+ "tenantId": null,
+ "type": "SystemAssigned"
+ },
+ "isSystemExtension": false,
+ "name": "[your extension name]",
+ "packageUri": null,
+ "plan": null,
+ "provisioningState": "Succeeded",
+ "releaseTrain": "preview",
+ "resourceGroup": "[your resource group]",
+ "scope": {
+ "cluster": {
+ "releaseNamespace": "connected-registry"
+ },
+ "namespace": null
+ },
+ "statuses": [],
+ "systemData": {
+ "createdAt": "2024-07-12T18:17:51.364427+00:00",
+ "createdBy": null,
+ "createdByType": null,
+ "lastModifiedAt": "2024-07-12T18:22:42.156799+00:00",
+ "lastModifiedBy": null,
+ "lastModifiedByType": null
+ },
+ "type": "Microsoft.KubernetesConfiguration/extensions",
+ "version": null
+ }
+ ```
+
+2. Verify the connected registry status and state
+
+ For each connected registry, you can view the status and state of the connected registry using the [az acr connected-registry list][az-acr-connected-registry-list] command:
+
+ ```azurecli
+ az acr connected-registry list --registry myacrregistry \
+ --output table
+ ```
+
+**Example Output**
+
+```console
+ | NAME | MODE | CONNECTION STATE | PARENT | LOGIN SERVER | LAST SYNC(UTC) |
+  |---------------------|-----------|------------------|---------------|--------------------------|---------------------|
+ | myconnectedregistry | ReadWrite | online | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 |
+ | myreadonlyacr | ReadOnly | offline | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 |
+```
+
+3. Verify the specific connected registry details
+
+ For details on a specific connected registry, use [az acr connected-registry show][az-acr-connected-registry-show] command:
+
+ ```azurecli
+ az acr connected-registry show --registry myacrregistry \
+ --name myreadonlyacr \
+ --output table
+ ```
+
+**Example Output**
+
+```console
+ | NAME | MODE | CONNECTION STATE | PARENT | LOGIN SERVER | LAST SYNC(UTC) | SYNC SCHEDULE | SYNC WINDOW |
+  |---------------------|-----------|------------------|---------------|--------------------------|---------------------|---------------|-------------------|
+ | myconnectedregistry | ReadWrite | online | myacrregistry | myacrregistry.azurecr.io | 2024-05-09 12:00:00 | 0 0 * * * | 00:00:00-23:59:59 |
+```
+
+- The [az k8s-extension show][az-k8s-extension-show] command verifies the state of the extension deployment.
+- The command also provides details on the connected registry's connection status, last sync, sync window, sync schedule, and more.
+
+### Deploy a pod that uses an image from connected registry
+
+To deploy a pod that uses an image from the connected registry within the cluster, the operation must be performed from within a cluster node. Follow these steps:
+
+1. Create a secret in the cluster to authenticate with the connected registry:
+
+Run the [kubectl create secret docker-registry][kubectl-create-secret-docker-registry] command to create a secret in the cluster to authenticate with the Connected registry:
+
+```bash
+kubectl create secret docker-registry regcred --docker-server=192.100.100.1 --docker-username=mytoken --docker-password=mypassword
+ ```
+
+2. Deploy the pod that uses the desired image from the connected registry, using the connected registry's `service.clusterIP` address `192.100.100.1` and the image name `hello-world` with tag `latest`:
+
+ ```bash
+ kubectl apply -f - <<EOF
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: hello-world-deployment
+ labels:
+ app: hello-world
+ spec:
+ selector:
+ matchLabels:
+ app: hello-world
+ replicas: 1
+ template:
+ metadata:
+ labels:
+ app: hello-world
+ spec:
+ imagePullSecrets:
+ - name: regcred
+ containers:
+ - name: hello-world
+ image: 192.100.100.1/hello-world:latest
+ EOF
+ ```
+
+## Clean up resources
+
+By deleting the deployed connected registry extension, you remove the corresponding connected registry pods and configuration settings.
+
+1. Delete the connected registry extension
+
+ Run the [az k8s-extension delete][az-k8s-extension-delete] command to delete the connected registry extension:
+
+ ```azurecli
+ az k8s-extension delete --name myconnectedregistry \
+ --cluster-name myarcakscluster \
+ --resource-group myresourcegroup \
+ --cluster-type connectedClusters
+ ```
+
+2. Delete the connected registry
+
+   By deleting the connected registry, you remove the connected registry cloud instance and its configuration details.
+
+   Run the [az acr connected-registry delete][az-acr-connected-registry-delete] command to delete the connected registry:
+
+ ```azurecli
+ az acr connected-registry delete --registry myacrregistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup
+ ```
+
+## Next steps
+
+- [Known issues: Connected registry Arc Extension](troubleshoot-connected-registry-arc.md)
++
+<!-- LINKS - internal -->
+[create-acr]: container-registry-get-started-azure-cli.md
+[dedicated data endpoints]: container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[k8s-extension]: /cli/azure/k8s-extension
+[azure-resource-provider-requirements]: /azure/azure-arc/kubernetes/system-requirements#azure-resource-provider-requirements
+[quickstart-connect-cluster]: /azure/azure-arc/kubernetes/quickstart-connect-cluster
+[tutorial-aks-cluster]: /azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli
+[az-acr-connected-registry-create]: /cli/azure/acr/connected-registry#az-acr-connected-registry-create
+[az-acr-connected-registry-get-settings]: /cli/azure/acr/connected-registry#az-acr-connected-registry-get-settings
+[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create
+[az-k8s-extension-show]: /cli/azure/k8s-extension#az-k8s-extension-show
+[az-acr-connected-registry-list]: /cli/azure/acr/connected-registry#az-acr-connected-registry-list
+[az-acr-connected-registry-show]: /cli/azure/acr/connected-registry#az-acr-connected-registry-show
+[az-k8s-extension-delete]: /cli/azure/k8s-extension#az-k8s-extension-delete
+[az-acr-connected-registry-delete]: /cli/azure/acr/connected-registry#az-acr-connected-registry-delete
+[kubectl-create-secret-docker-registry]: https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/kubectl_create_secret_docker-registry/
container-registry Quickstart Connected Registry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-cli.md
ms.devlang: azurecli
+# customer intent: To create a connected registry resource in Azure using the Azure CLI.
-# Quickstart: Create a connected registry using the Azure CLI
+# Quickstart: Create a connected registry using the Azure CLI (To be deprecated)
In this quickstart, you use the Azure CLI to create a [connected registry](intro-connected-registry.md) resource in Azure. The connected registry feature of Azure Container Registry allows you to deploy a registry remotely or on your premises and synchronize images and other artifacts with the cloud registry.
container-registry Quickstart Connected Registry Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-portal.md
-# Quickstart: Create a connected registry using the Azure portal
+# Quickstart: Create a connected registry using the Azure portal (To be deprecated)
In this quickstart, you use the Azure portal to create a [connected registry](intro-connected-registry.md) resource in Azure. The connected registry feature of Azure Container Registry allows you to deploy a registry remotely or on your premises and synchronize images and other artifacts with the cloud registry.
container-registry Quickstart Deploy Connected Registry Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-deploy-connected-registry-iot-edge-cli.md
ms.devlang: azurecli
+#customer intent: To deploy a connected registry resource to an Azure IoT Edge device using the Azure CLI.
-# Quickstart: Deploy a connected registry to an IoT Edge device
+# Quickstart: Deploy a connected registry to an IoT Edge device (To be deprecated)
In this quickstart, you use the Azure CLI to deploy a [connected registry](intro-connected-registry.md) as a module on an Azure IoT Edge device. The IoT Edge device can access the parent Azure container registry in the cloud.
container-registry Troubleshoot Connected Registry Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/troubleshoot-connected-registry-arc.md
+
+ Title: "Known issues: Connected Registry Arc Extension"
+description: "Learn how to troubleshoot the most common problems for a Connected Registry Arc Extension and resolve issues with ease."
++++ Last updated : 05/09/2024
+#customer intent: As a customer, I want to understand the common issues with the connected registry Arc extension and how to troubleshoot them.
++
+# Troubleshoot connected registry extension
+
+This article discusses some common error messages that you may receive when you install or update the connected registry extension for Arc-enabled Kubernetes clusters.
+
+## How is the connected registry extension installed?
+
+The connected registry extension is released as a Helm chart and installed by Helm V3. All components of the connected registry extension are installed in the _connected-registry_ namespace. You can use the following commands to check the extension status.
+
+```bash
+# get the extension status
+az k8s-extension show --name <extension-name> --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters
+# check status of all pods of connected registry extension
+kubectl get pod -n connected-registry
+# get events of the extension
+kubectl get events -n connected-registry --sort-by='.lastTimestamp'
+```
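+
+Because the extension is released as a Helm chart, you can also confirm that the Helm release exists. This is a minimal sketch and assumes the Helm v3 CLI is installed on a machine with cluster access:
+
+```bash
+# list Helm releases across all namespaces and filter for the connected registry chart
+helm list --all-namespaces | grep -i connected-registry
+```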
+
+## Common errors
+
+### Error: can't reuse a name that is still in use
+
+This error means the extension name you specified is already in use. Choose a different extension name.
+
+### Error: unable to create new content in namespace _connected-registry_ because it's being terminated
+
+This error happens when an uninstallation operation isn't finished and another installation operation is triggered. You can run the `az k8s-extension show` command to check the provisioning status of the extension and make sure it has been uninstalled before taking other actions.
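+
+For example, here's a sketch that extracts only the provisioning state, using the standard `--query` and `--output` Azure CLI arguments (the placeholder names are illustrative):
+
+```bash
+# print only the extension's provisioning state (for example, Succeeded or Deleting)
+az k8s-extension show --name <extension-name> \
+  --cluster-name <my-cluster-name> \
+  --resource-group <my-resource-group-name> \
+  --cluster-type connectedClusters \
+  --query provisioningState --output tsv
+```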
+
+### Error: failed in download the Chart path not found
+
+This error happens when you specify the wrong extension version. You need to make sure the specified version exists. If you want to use the latest version, you don't need to specify `--version`.
+
+## Common Scenarios
+
+### Scenario 1: Installation fails but doesn't show an error message
+
+If the extension doesn't generate an error message when you create or update it, you can inspect where the creation failed by running the `az k8s-extension list` command:
+
+```bash
+az k8s-extension list \
+--resource-group <my-resource-group-name> \
+--cluster-name <my-cluster-name> \
+--cluster-type connectedClusters
+```
+
+**Solution:** Restart the cluster, register the service provider, or delete and reinstall connected registry
+
+To fix this issue, try the following methods:
+
+- Restart your Arc Kubernetes cluster.
+
+- Register the KubernetesConfiguration service provider (see the sketch after this list).
+
+- Force delete and reinstall the connected registry extension.
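+
+For the service provider registration, the following sketch uses the standard `az provider` commands; registration is idempotent, so it's safe to rerun:
+
+```bash
+# register the resource provider that backs cluster extensions
+az provider register --namespace Microsoft.KubernetesConfiguration
+# confirm the registration state (expect "Registered")
+az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState --output tsv
+```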
+
+### Scenario 2: Targeted connected registry version doesn't exist
+
+When you try to install the connected registry extension to target a specific version, you receive an error message that states that the connected registry version doesn't exist.
+
+**Solution:** Install again for a supported connected registry version
+
+Try again to install the extension. Make sure that you use a supported version of connected registry.
+
+## Common issues
+
+### Issue: Extension creation stuck in running state
+
+**Possibility 1:** Issue with Persistent Volume Claim (PVC)
+
+- Check status of connected registry PVC
+```bash
+kubectl get pvc -n connected-registry -o yaml connected-registry-pvc
+```
+
+The value of _phase_ under _status_ should be _Bound_. If it doesn't change from _Pending_, delete the extension.
+
+- Check whether the desired storage class is in your list of storage classes:
+
+```bash
+kubectl get storageclass --all-namespaces
+```
+
+- If not, recreate the extension and add
+
+```bash
+--config pvc.storageClassName="standard"
+```
+
+- Alternatively, it could be an issue with not having enough space for the PVC. Recreate the extension with the parameter
+
+```bash
+--config pvc.storageRequest="250Gi"
+```
+
+**Possibility 2:** Incorrect connection string
+
+- Check the logs for the connected registry Pod:
+
+```bash
+kubectl get pod -n connected-registry
+```
+
+- Copy the name of the connected registry pod (for example, "connected-registry-8d886cf7f-w4prp") and paste it into the following command:
+
+```bash
+kubectl logs -n connected-registry connected-registry-8d886cf7f-w4prp
+```
+
+- If you see the following error message, the connected registry's connection string is incorrect:
+
+```console
+Response: '{"errors":[{"code":"UNAUTHORIZED","message":"Incorrect Password","detail":"Please visit https://aka.ms/acr#UNAUTHORIZED for more information."}]}'
+```
+
+- Ensure that a _protected-settings-extension.json_ file has been created
+
+```bash
+cat protected-settings-extension.json
+```
+
+- If needed, regenerate _protected-settings-extension.json_
+
+```bash
+cat << EOF > protected-settings-extension.json
+{
+"connectionString": "$(az acr connected-registry get-settings \
+--name myconnectedregistry \
+--registry myacrregistry \
+--parent-protocol https \
+--generate-password 1 \
+--query ACR_REGISTRY_CONNECTION_STRING --output tsv --yes)"
+}
+EOF
+```
+
+- Update the extension to include the new connection string
+
+```bash
+az k8s-extension update \
+--cluster-name <myarck8scluster> \
+--cluster-type connectedClusters \
+--name <myconnectedregistry> \
+-g <myresourcegroup> \
+--config-protected-file protected-settings-extension.json
+```
+
+### Issue: Extension created, but the connected registry is not in an 'Online' state
+
+**Possibility 1:** Previous connected registry has not been deactivated
+
+This scenario commonly happens when a previous connected registry extension has been deleted and a new one has been created for the same connected registry.
+
+- Check the logs for the connected registry Pod:
+
+```bash
+kubectl get pod -n connected-registry
+```
+
+- Copy the name of the connected registry pod (for example, "connected-registry-xxxxxxxxx-xxxxx") and paste it into the following command:
+
+```bash
+kubectl logs -n connected-registry connected-registry-xxxxxxxxx-xxxxx
+```
+
+- If you see the following error message, the connected registry needs to be deactivated:
+
+`Response: '{"errors":[{"code":"ALREADY_ACTIVATED","message":"Failed to activate the connected registry as it is already activated by another instance. Only one instance is supported at any time.","detail":"Please visit https://aka.ms/acr#ALREADY_ACTIVATED for more information."}]}'`
+
+- Run the following command to deactivate:
+
+```azurecli
+az acr connected-registry deactivate -n <myconnectedregistry> -r <mycontainerregistry>
+```
+
+After a few minutes, the connected registry pod should be recreated, and the error should disappear.
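+
+To watch the pod get recreated after deactivation, a quick sketch with `kubectl` follows; the `-w` flag streams updates until you interrupt it:
+
+```bash
+kubectl get pods -n connected-registry -w
+```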
+
+## Enable logging
+
+- Run the `az acr connected-registry update` command to update the connected registry with the debug log level:
+
+```azurecli
+az acr connected-registry update --registry myacrregistry --name myconnectedregistry --log-level debug
+```
+
+- The following log levels can be applied to aid in troubleshooting:
+
+ - **Debug** provides detailed information for debugging purposes.
+
+ - **Information** provides general informational messages about normal operation.
+
+ - **Warning** indicates potential problems that aren't yet errors but might become one if no action is taken.
+
+ - **Error** logs errors that prevent an operation from completing.
+
+ - **None** turns off logging, so no log messages are written.
+
+- Adjust the log level as needed to troubleshoot the issue.
+
+Two separate log-level settings control the verbosity of logs when you debug issues with a connected registry:
+
+The connected registry log level is specific to the connected registry's operations and determines the severity of messages that the connected registry handles. This setting is used to manage the logging behavior of the connected registry itself.
+
+**--log-level** sets the log level on the instance. The log level determines the severity of messages that the logger handles. By setting the log level, you can filter out messages that are below a certain severity. For example, if you set the log level to "warning", the logger handles warning, error, and critical messages, but it ignores information and debug messages.
+
+The Azure CLI log level controls the verbosity of the output messages during the operation of the Azure CLI. The Azure CLI (az) provides several verbosity options for log levels, which can be adjusted to control the amount of output information during its operation:
+
+**--verbose** increases the verbosity of the logs. It provides more detailed information than the default setting, which can be useful for identifying issues.
+
+**--debug** enables full debug logs. Debug logs provide the most detailed information, including all the information provided at the "verbose" level plus more details intended for diagnosing problems.
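+
+The two settings compose independently. As a sketch (the registry names are placeholders), the following lowers the connected registry's own log level while raising the Azure CLI's verbosity:
+
+```bash
+# the connected registry handles warnings and above; the az CLI prints full debug output
+az acr connected-registry update --registry myacrregistry \
+--name myconnectedregistry \
+--log-level warning \
+--debug
+```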
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Deploying the Connected Registry Arc Extension](quickstart-connected-registry-arc-cli.md)
+> [Glossary of terms](connected-registry-glossary.md)
container-registry Tutorial Connected Registry Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-arc.md
+
+ Title: "Secure and deploy connected registry Arc extension"
+description: "Learn to secure the connected registry Arc extension deployment with HTTPS, TLS, optional no TLS, BYOC certificate, and trust distribution."
++++ Last updated : 06/17/2024+
+#customer intent: Learn how to secure and deploy the connected registry extension with HTTPS, TLS encryption, and upgrades/rollbacks.
+++
+# Tutorial: Secure deployment methods for the connected registry extension
+
+These tutorials cover various deployment scenarios for the connected registry extension in an Arc-enabled Kubernetes cluster. Once the connected registry extension is installed, you can synchronize images from your cloud registry to on-premises or remote locations.
+
+Before you dive in, take a moment to learn how [Arc-enabled Kubernetes][Arc-enabled Kubernetes] works conceptually.
+
+The connected registry can be securely deployed using various encryption methods. To ensure a successful deployment, follow the quickstart guide to review prerequisites and other pertinent information. By default, the connected registry is configured with HTTPS, ReadOnly mode, Trust Distribution, and the Cert Manager service. You can add more customizations and dependencies as needed, depending on your scenario.
+
+### What is the Cert Manager service?
+
+The connected registry cert manager is a service that manages TLS certificates for the connected registry extension in an Azure Arc-enabled Kubernetes cluster. It ensures secure communication between the connected registry and other components by handling the creation, renewal, and distribution of certificates. This service can be installed as part of the connected registry deployment, or you can use an existing cert manager if it's already installed on your cluster.
+
+[Cert-Manager][cert-manager] is an open-source Kubernetes add-on that automates the management and issuance of TLS certificates from various sources. It manages the lifecycle of certificates issued by CA pools created using CA Service, ensuring they are valid and renewed before they expire.
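+
+If you plan to reuse an existing cert-manager, you can first confirm that it's running. This is a minimal sketch and assumes cert-manager was installed into its default `cert-manager` namespace:
+
+```bash
+# the cert-manager, cainjector, and webhook pods should all be Running
+kubectl get pods -n cert-manager
+```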
+
+### What is trust distribution?
+
+Connected registry trust distribution refers to the process of securely distributing trust between the connected registry service and Kubernetes clients within a cluster. This is achieved by using a Certificate Authority (CA), such as cert-manager, to sign TLS certificates, which are then distributed to both the registry service and the clients. This ensures that all entities can securely authenticate each other, maintaining a secure and trusted environment within the Kubernetes cluster.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Follow the [quickstart][quickstart] to securely deploy the connected registry extension.
+
+## Deploy connected registry extension using your preinstalled cert-manager
+
+In this tutorial, we demonstrate how to use a preinstalled cert-manager service on the cluster. This setup gives you control over certificate management, enabling you to deploy the connected registry extension with encryption by following the steps provided:
+
+Run the [az-k8s-extension-create][az-k8s-extension-create] command in the [quickstart][quickstart] and set the `cert-manager.enabled=true` and `cert-manager.install=false` parameters to indicate that a cert-manager service is already installed on the cluster and should be used:
+
+```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config cert-manager.enabled=true \
+ --config cert-manager.install=false \
+ --config-protected-file protected-settings-extension.json
+```
+
+## Deploy connected registry extension using bring your own certificate (BYOC)
+
+In this tutorial, we demonstrate how to use your own certificate (BYOC) on the cluster. BYOC allows you to use your own public certificate and private key pair, giving you control over certificate management. This setup enables you to deploy the connected registry extension with encryption by following the provided steps:
+
+>[!NOTE]
+>BYOC is applicable for customers who bring their own certificate that is already trusted by their Kubernetes nodes. It is not recommended to manually update the nodes to trust the certificates.
+
+Follow the [quickstart][quickstart] and add the public certificate and private key string variable + value pair.
+
+1. Create a self-signed SSL certificate with the connected registry service IP as the Subject Alternative Name (SAN)
+
+```bash
+ mkdir /certs
+```
+
+```bash
+openssl req -newkey rsa:4096 -nodes -sha256 -keyout /certs/mycert.key -x509 -days 365 -out /certs/mycert.crt -addext "subjectAltName = IP:<service IP>"
+```
+
+2. Get base64 encoded strings of these cert files
+
+```bash
+export TLS_CRT=$(cat /certs/mycert.crt | base64 -w0)
+export TLS_KEY=$(cat /certs/mycert.key | base64 -w0)
+```
+
+3. Protected settings file example with secret in JSON format:
+
+> [!NOTE]
+> The public certificate and private key pair must be encoded in base64 format and added to the protected settings file.
+
+```json
+ {
+ "connectionString": "[connection string here]",
+ "tls.crt": $TLS_CRT,
+ "tls.key": $TLS_KEY,
+ "tls.cacrt": $TLS_CRT
+ }
+```
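+
+One way to produce this file with the variables actually expanded is a heredoc, mirroring the secret-creation pattern used later in this article. This is a sketch; it assumes `TLS_CRT` and `TLS_KEY` were exported in step 2, and you still substitute your own connection string:
+
+```bash
+cat << EOF > protected-settings-extension.json
+{
+  "connectionString": "[connection string here]",
+  "tls.crt": "$TLS_CRT",
+  "tls.key": "$TLS_KEY",
+  "tls.cacrt": "$TLS_CRT"
+}
+EOF
+```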
+
+4. Now you can deploy the connected registry extension with HTTPS (TLS encryption) using your public certificate and private key pair by setting `cert-manager.enabled=false` and `cert-manager.install=false`. With these parameters, cert-manager isn't installed or enabled, because the public certificate and private key pair is used for encryption instead.
+
+5. Run the [az-k8s-extension-create][az-k8s-extension-create] command for deployment after the protected settings file is edited:
+
+ ```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config cert-manager.enabled=false \
+ --config cert-manager.install=false \
+ --config-protected-file protected-settings-extension.json
+ ```
+
+## Deploy connected registry with Kubernetes secret management
+
+In this tutorial, we demonstrate how to use a [Kubernetes secret][Kubernetes secret] on your cluster. A Kubernetes secret allows you to securely manage authorized access between pods within the cluster. This setup enables you to deploy the connected registry extension with encryption by following the provided steps:
+
+Follow the [quickstart][quickstart] and add the Kubernetes TLS secret string variable + value pair.
+
+1. Create a self-signed SSL certificate with the connected registry service IP as the Subject Alternative Name (SAN)
+
+```bash
+mkdir /certs
+```
+
+```bash
+openssl req -newkey rsa:4096 -nodes -sha256 -keyout /certs/mycert.key -x509 -days 365 -out /certs/mycert.crt -addext "subjectAltName = IP:<service IP>"
+```
+
+2. Get base64 encoded strings of these cert files
+
+```bash
+export TLS_CRT=$(cat /certs/mycert.crt | base64 -w0)
+export TLS_KEY=$(cat /certs/mycert.key | base64 -w0)
+```
+
+3. Create k8s secret
+
+```bash
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Secret
+metadata:
+ name: k8secret
+type: kubernetes.io/tls
+data:
+ ca.crt: $TLS_CRT
+ tls.crt: $TLS_CRT
+ tls.key: $TLS_KEY
+EOF
+```
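+
+You can verify that the secret was created with the expected keys (`ca.crt`, `tls.crt`, and `tls.key`) before referencing it from the extension:
+
+```bash
+# show the secret's type and data keys without printing the key material
+kubectl describe secret k8secret
+```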
+
+4. Protected settings file example with secret in JSON format:
+
+ ```json
+ {
+ "connectionString": "[connection string here]",
+ "tls.secret": ΓÇ£k8secretΓÇ¥
+ }
+ ```
+
+Now you can deploy the connected registry extension with HTTPS (TLS encryption) using Kubernetes secret management by setting `cert-manager.enabled=false` and `cert-manager.install=false`. With these parameters, cert-manager isn't installed or enabled, because the Kubernetes secret is used for encryption instead.
+
+5. Run the [az-k8s-extension-create][az-k8s-extension-create] command for deployment after the protected settings file is edited:
+
+ ```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config cert-manager.enabled=false \
+ --config cert-manager.install=false \
+ --config-protected-file protected-settings-extension.json
+ ```
+
+## Deploy the connected registry using your own trust distribution and disable the connected registry's default trust distribution
+
+In this tutorial, we demonstrate how to configure trust distribution on the cluster. While using your own Kubernetes secret or public certificate and private key pairs, you can deploy the connected registry extension with TLS encryption, use your own trust distribution, and reject the connected registry's default trust distribution. This setup enables you to deploy the connected registry extension with encryption by following the provided steps:
+
+1. Follow the [quickstart][quickstart] to add either the Kubernetes secret or public certificate, and private key variable + value pairs in the protected settings file in JSON format.
+
+2. Run the [az-k8s-extension-create][az-k8s-extension-create] command in the [quickstart][quickstart] and set the `trustDistribution.enabled=false` and `trustDistribution.skipNodeSelector=false` parameters to reject the connected registry's default trust distribution:
+
+ ```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config trustDistribution.enabled=false \
+ --config trustDistribution.skipNodeSelector=false \
+ --config cert-manager.enabled=false \
+ --config cert-manager.install=false \
+ --config-protected-file <JSON file path>
+ ```
+
+With these parameters, cert-manager isn't installed or enabled. Additionally, the connected registry's default trust distribution isn't enforced. Instead, you use the cluster-provided trust distribution to establish trust between the connected registry and the client nodes.
+
+## Clean up resources
+
+By deleting the deployed Connected registry extension, you remove the corresponding Connected registry pods and configuration settings.
+
+1. Run the [az-k8s-extension-delete][az-k8s-extension-delete] command to delete the Connected registry extension:
+
+ ```azurecli
+ az k8s-extension delete --name myconnectedregistry \
+ --cluster-name myarcakscluster \
+ --resource-group myresourcegroup \
+ --cluster-type connectedClusters
+ ```
+
+2. Run the [az acr connected-registry delete][az-acr-connected-registry-delete] command to delete the Connected registry:
+
+ ```azurecli
+ az acr connected-registry delete --registry myacrregistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup
+ ```
+
+By deleting the Connected registry extension and the Connected registry, you remove all the associated resources and configurations.
+
+## Next steps
+
+- [Enable Connected registry with Azure Arc CLI][quickstart]
+- [Upgrade Connected registry with Azure Arc](tutorial-connected-registry-upgrade.md)
+- [Sync Connected registry with Azure Arc in a scheduled window](tutorial-connected-registry-sync.md)
+- [Troubleshoot Connected registry with Azure Arc](troubleshoot-connected-registry-arc.md)
+- [Glossary of terms](connected-registry-glossary.md)
+
+<!-- LINKS - internal -->
+[create-acr]: container-registry-get-started-azure-cli.md
+[dedicated data endpoints]: container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints
+[Install Azure CLI]: /cli/azure/install-azure-cli
+[k8s-extension]: /cli/azure/k8s-extension
+[azure-resource-provider-requirements]: /azure/azure-arc/kubernetes/system-requirements#azure-resource-provider-requirements
+[quickstart-connect-cluster]: /azure/azure-arc/kubernetes/quickstart-connect-cluster
+[tutorial-aks-cluster]: /azure/aks/tutorial-kubernetes-deploy-cluster?tabs=azure-cli
+[quickstart]: quickstart-connected-registry-arc-cli.md
+[Arc-enabled Kubernetes]: /azure/azure-arc/kubernetes/overview
+[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create
+[az-k8s-extension-delete]: /cli/azure/k8s-extension#az-k8s-extension-delete
+[az-acr-connected-registry-delete]: /cli/azure/acr/connected-registry#az-acr-connected-registry-delete
+<!-- LINKS - external -->
+[cert-manager]: https://cert-manager.io/
+[Kubernetes secret]: https://kubernetes.io/docs/concepts/configuration/secret/
container-registry Tutorial Connected Registry Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-sync.md
+
+ Title: "Connected registry synchronization scheduling"
+description: "Sync the Connected registry extension with Azure Arc synchronization schedule and window."
++++ Last updated : 06/17/2024+
+#customer intent: Learn how to sync the connected registry extension using a synchronization schedule and window.
++
+# Configuring the connected registry sync schedule and window
+
+In this tutorial, you'll learn how to configure synchronization for a connected registry. The process includes updating the connected registry extension with a synchronization schedule and window.
+
+You'll be guided on how to update the synchronization schedule using Azure CLI commands. This tutorial covers setting up the connected registry to sync continuously every minute or to sync once a day.
+
+The commands use CRON expressions to define the sync schedule and the ISO 8601 duration format for the sync window. Remember to replace the placeholders with your actual registry names when executing the commands.
+
+## Prerequisites
+
+To complete this tutorial, you need the following resources:
+
+* Follow the [quickstart][quickstart] as needed.
+
+## Update the connected registry to sync every day at noon
+
+Run the [az acr connected-registry update][az-acr-connected-registry-update] command to update the connected registry synchronization schedule so it connects and syncs once a day at noon, with a sync window of 4 hours.
+
+For example, the following command configures the connected registry `myconnectedregistry` to sync daily at 12:00 UTC and sets the synchronization window to 4 hours (PT4H), the duration for which the connected registry syncs with the parent ACR `myacrregistry` after the sync initiates.
+
+```azurecli
+az acr connected-registry update --registry myacrregistry \
+--name myconnectedregistry \
+--sync-schedule "0 12 * * *" \
+--sync-window PT4H
+```
+
+The configuration syncs the connected registry daily at noon UTC for 4 hours.
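+
+To confirm that the new schedule and window took effect, you can read the connected registry back; the table output includes the SYNC SCHEDULE and SYNC WINDOW columns:
+
+```bash
+az acr connected-registry show --registry myacrregistry \
+--name myconnectedregistry \
+--output table
+```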
+
+## Update the connected registry to sync continuously every minute
+
+Run the [az acr connected-registry update][az-acr-connected-registry-update] command to update the connected registry synchronization to connect and sync continuously every minute.
+
+For example, the following command configures the connected registry `myconnectedregistry` to schedule sync every minute with the cloud registry.
+
+```azurecli
+az acr connected-registry update --registry myacrregistry \
+--name myconnectedregistry \
+--sync-schedule "* * * * *"
+```
+
+The configuration syncs the connected registry with the cloud registry every minute.
+
+## Next steps
+
+- [Enable Connected registry with Azure Arc CLI][quickstart]
+- [Deploy the Connected registry Arc extension](tutorial-connected-registry-arc.md)
+- [Upgrade Connected registry with Azure Arc](tutorial-connected-registry-upgrade.md)
+- [Troubleshoot Connected registry with Azure Arc](troubleshoot-connected-registry-arc.md)
+- [Glossary of terms](connected-registry-glossary.md)
+
+<!-- LINKS - internal -->
+[az-acr-connected-registry-update]: /cli/azure/acr/connected-registry#az-acr-connected-registry-update
+[quickstart]: quickstart-connected-registry-arc-cli.md
container-registry Tutorial Connected Registry Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-connected-registry-upgrade.md
+
+ Title: "Upgrade and roll back connected registry Arc extension version"
+description: "Upgrade and roll back the connected registry Arc extension version. Learn how to upgrade and roll back the connected registry extension version in this tutorial."
++++ Last updated : 06/17/2024+
+#customer intent: Learn how to upgrade and roll back the connected registry Arc extension.
++
+# Upgrade and roll back the connected registry extension version
+
+In this tutorial, you learn how to upgrade and roll back the connected registry extension version.
+
+## Prerequisites
+
+To complete this tutorial, you need the following resources:
+
+* Follow the [quickstart][quickstart] as needed.
+
+## Deploy the connected registry extension with auto upgrade enabled
+
+Follow the [quickstart][quickstart] to edit the [az-k8s-extension-create][az-k8s-extension-create] command and include the `--auto-upgrade-minor-version true` parameter. This parameter automatically upgrades the extension to the latest version whenever a new version is available.
+
+```azurecli
+ az k8s-extension create --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config-protected-file protected-settings-extension.json \
+ --auto-upgrade-minor-version true
+```
+
+## Deploy the connected registry extension with auto roll back enabled
+
+> [!IMPORTANT]
+> When a customer pins to a specific version, the extension doesn't auto-rollback. Auto-rollback only occurs if the `--auto-upgrade-minor-version` flag is set to true.
+
+Follow the [quickstart][quickstart] to edit the [az-k8s-extension-update][az-k8s-extension-update] command and add `--version` with your desired version. This example uses version 0.6.0. This parameter updates the extension version to the desired pinned version.
+
+```azurecli
+ az k8s-extension update --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --extension-type Microsoft.ContainerRegistry.ConnectedRegistry \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --config-protected-file <JSON file path> \
+ --auto-upgrade-minor-version true \
+ --version 0.6.0
+```
+
+## Deploy the connected registry extension using manual upgrade steps
+
+Follow the [quickstart][quickstart] to edit the [az-k8s-extension-update][az-k8s-extension-update] command and add `--version` with your desired version. This example uses version 0.6.1. This parameter upgrades the extension version to 0.6.1.
+
+```azurecli
+ az k8s-extension update --cluster-name myarck8scluster \
+ --cluster-type connectedClusters \
+ --name myconnectedregistry \
+ --resource-group myresourcegroup \
+ --config service.clusterIP=192.100.100.1 \
+ --auto-upgrade-minor-version false \
+ --version 0.6.1
+```
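+
+After the update completes, you can confirm which version is installed. This sketch reuses the tutorial's placeholder names:
+
+```bash
+# print the installed extension version (expect 0.6.1 after this upgrade)
+az k8s-extension show --cluster-name myarck8scluster \
+--cluster-type connectedClusters \
+--name myconnectedregistry \
+--resource-group myresourcegroup \
+--query version --output tsv
+```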
+
+## Next steps
+
+In this tutorial, you learned how to upgrade the Connected registry extension with Azure Arc.
+
+- [Enable Connected registry with Azure Arc CLI][quickstart]
+- [Deploy the Connected registry Arc extension](tutorial-connected-registry-arc.md)
+- [Sync Connected registry with Azure Arc](tutorial-connected-registry-sync.md)
+- [Troubleshoot Connected registry with Azure Arc](troubleshoot-connected-registry-arc.md)
+- [Glossary of terms](connected-registry-glossary.md)
+
+[quickstart]: quickstart-connected-registry-arc-cli.md
+[az-k8s-extension-create]: /cli/azure/k8s-extension#az-k8s-extension-create
+[az-k8s-extension-update]: /cli/azure/k8s-extension#az-k8s-extension-update
cost-management-billing Understand Cost Mgt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md
description: This article helps you better understand data included in Cost Management. It also explains how frequently data is processed, collected, shown, and closed. Previously updated : 08/12/2024 Last updated : 09/10/2024
Historical data for credit-based and pay-in-advance offers might not match your
For example, you get invoiced on January 5 for a service consumed in the month of December. It has a price of $86 per unit. On January 1, the unit price changed to $100. When you view your estimated charges in Cost Management, you see that your cost is the result of your consumed quantity * $100 (not $86, as shown in your invoice). >[!NOTE]
->The price change might result in a a price decrease, not only an increase, as explained in this example.
+>The price change might result in a price decrease, not only an increase, as explained in this example.
Historical data shown for the following offers might not match exactly with your invoice.
cost-management-billing Reservation Discount Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-discount-databricks.md
Title: How an Azure Databricks prepurchase discount is applied
-description: Learn how an Azure Databricks prepurchase discount applies to your usage. You can use Databricks prepurchased units at any time during the purchase term.
+ Title: How an Azure Databricks pre-purchase discount is applied
+description: Learn how an Azure Databricks pre-purchase discount applies to your usage. You can use Databricks prepurchased units at any time during the purchase term.
Previously updated : 05/07/2024 Last updated : 09/10/2024
-# How Azure Databricks prepurchase discount is applied
+# How Azure Databricks pre-purchase discount is applied
You can use prepurchased Azure Databricks commit units (DBCU) at any time during the purchase term. Any Azure Databricks usage is deducted from the prepurchased DBCUs automatically.
The prepurchase discount applies only to Azure Databricks unit (DBU) usage. Othe
## Prepurchase discount application
-Databricks prepurchase applies to all Databricks workloads and tiers. You can think of the prepurchase as a pool of prepaid Databricks commit units. Usage is deducted from the pool, regardless of the workload or tier. Usage is deducted in the following ratios:
-
-| Workload | DBU application ratio - Standard tier | DBU application ratio - Premium tier |
-| | | |
-| All-purpose compute | 0.4 | 0.55 |
-| Jobs compute | 0.15 | 0.30 |
-| Jobs light compute | 0.07 | 0.22 |
-| SQL compute | N/A | 0.22 |
-| SQL Pro compute | N/A | 0.55 |
-| Serverless SQL | N/A | 0.70 |
-| Serverless real-time inference | N/A | 0.082 |
-| Model training | N/A | 0.65 |
-| Delta Live Tables | NA | 0.30 (core), 0.38 (pro), 0.54 (advanced) |
-| All Purpose Photon | NA | 0.55 |
-
-For example, when All-purpose compute ΓÇô Standard Tier capacity gets consumed, the prepurchased Databricks commit units get deducted by 0.4 units. When Jobs light compute ΓÇô Standard Tier capacity gets used, the prepurchased Databricks commit unit gets deducted by 0.07 units.
+Databricks pre-purchase applies to all Databricks workloads and tiers. You can think of the prepurchase as a pool of prepaid Databricks commit units.
>[!NOTE]
-> Enabling Photon increases the DBU count.
+> Usage is deducted from the pool, regardless of the workload or tier. Usage is deducted at various rates, depending on the workload and tier. For more information and a complete list of rates, see the [Databricks pricing page](https://azure.microsoft.com/pricing/details/databricks/).
## Determine plan use
cost-management-billing Synapse Analytics Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md
Previously updated : 04/15/2024 Last updated : 09/10/2024
You can't split or merge a Synapse Pre-Purchase Plan. For more information about
Cancel and exchange isn't supported for Synapse Pre-Purchase Plans. All purchases are final.
-## Next steps
+## Related content
To learn more about Azure Reservations, see the following articles: - [What are Azure Reservations?](save-compute-costs-reservations.md) - [Manage Azure Reservations](manage-reserved-vm-instance.md) - [Understand Azure Reservations discount](understand-reservation-charges.md)-- [Understand reservation usage for your Pay-As-You-Go subscription](understand-reserved-instance-usage.md)
+- [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md)
- [Understand reservation usage for your Enterprise enrollment](understand-reserved-instance-usage-ea.md)
+- [Buy a reservation](prepare-buy-reservation.md)
data-factory Connector Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-rest.md
Previously updated : 02/26/2024 Last updated : 08/29/2024
Set the **authenticationType** property to **AadServicePrincipal**. In addition
| Property | Description | Required | |: |: |: | | servicePrincipalId | Specify the Microsoft Entra application's client ID. | Yes |
-| servicePrincipalKey | Specify the Microsoft Entra application's key. Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes |
+| servicePrincipalCredentialType | Specify the credential type to use for service principal authentication. Allowed values are `ServicePrincipalKey` and `ServicePrincipalCert`. | No |
+| ***For ServicePrincipalKey*** | | |
+| servicePrincipalKey | Specify the Microsoft Entra application's key. Mark this field as a **SecureString** to store it securely in Data Factory, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+| ***For ServicePrincipalCert*** | | |
+| servicePrincipalEmbeddedCert | Specify the base64 encoded certificate of your application registered in Microsoft Entra ID, and ensure the certificate content type is **PKCS #12**. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). Go to this [section](#save-the-service-principal-certificate-in-azure-key-vault) to learn how to save the certificate in Azure Key Vault. | No |
+| servicePrincipalEmbeddedCertPassword | Specify the password of your certificate if your certificate is secured with a password. Mark this field as a **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | No |
+| | | |
| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the top-right corner of the Azure portal. | Yes | | aadResourceId | Specify the Microsoft Entra resource you are requesting for authorization, for example, `https://management.core.windows.net`.| Yes | | azureCloudType | For Service Principal authentication, specify the type of Azure cloud environment to which your Microsoft Entra application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory's cloud environment is used. | No |
-**Example**
+**Example 1: Using service principal key authentication**
```json {
Set the **authenticationType** property to **AadServicePrincipal**. In addition
"url": "<REST endpoint e.g. https://www.example.com/>", "authenticationType": "AadServicePrincipal", "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalKey",
"servicePrincipalKey": { "value": "<service principal key>", "type": "SecureString"
Set the **authenticationType** property to **AadServicePrincipal**. In addition
} } ```+
+**Example 2: Using service principal certificate authentication**
+
+```json
+{
+ "name": "RESTLinkedService",
+ "properties": {
+ "type": "RestService",
+ "typeProperties": {
+ "url": "<REST endpoint e.g. https://www.example.com/>",
+ "authenticationType": "AadServicePrincipal",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalCert",
+ "servicePrincipalEmbeddedCert": {
+ "type": "SecureString",
+ "value": "<the base64 encoded certificate of your application registered in Microsoft Entra ID>"
+ },
+ "servicePrincipalEmbeddedCertPassword": {
+ "type": "SecureString",
+ "value": "<password of your certificate>"
+ },
+ "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+ "aadResourceId": "<Azure AD resource URL e.g. https://management.core.windows.net>"
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+#### Save the service principal certificate in Azure Key Vault
+
+You have two options to save the service principal certificate in Azure Key Vault:
+
+- **Option 1**
+
+ 1. Convert the service principal certificate to a base64 string (see the sketch after these options). Learn more from this [article](https://blog.tekspace.io/convert-certificate-from-pfx-to-base64-with-powershell/).
+
+
+ 2. Save the base64 string as a secret in Azure Key Vault.
+
+ :::image type="content" source="media/connector-rest/secrets.png" alt-text="Screenshot of secrets.":::
+
+ :::image type="content" source="media/connector-rest/secret-value.png" alt-text="Screenshot of secret value.":::
+
+- **Option 2**
+
+ If you can't download the certificate from Azure Key Vault, you can use this [template](https://supportability.visualstudio.com/256c8350-cb4b-49c9-ac6e-a012aeb312d1/_apis/git/repositories/da6cf5d9-0dc5-4ba9-a5e2-6e6a93adf93c/Items?path=/AzureDataFactory/.attachments/ConvertCertToBase64StringInAKVPipeline-47f8e507-e7ef-4343-a73b-733b9a7f8e4e.zip&download=false&resolveLfs=true&%24format=octetStream&api-version=5.0-preview.1&sanitize=true&includeContentMetadata=true&versionDescriptor.version=master) to save the converted service principal certificate as a secret in Azure Key Vault.
+
+ :::image type="content" source="media/connector-rest/template-pipeline.png" alt-text="Screenshot of template pipeline to save service principal certificate as a secret in AKV.":::
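+
+As a minimal sketch of the conversion in Option 1 on a Linux shell (the file name `mycert.pfx` is illustrative; the linked article shows a PowerShell approach):
+
+```bash
+# encode the PKCS #12 certificate as a single-line base64 string
+base64 -w0 mycert.pfx > mycert-base64.txt
+```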
+
### Use OAuth2 Client Credential authentication Set the **authenticationType** property to **OAuth2ClientCredential**. In addition to the generic properties that are described in the preceding section, specify the following properties:
data-factory Connector Troubleshoot Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-rest.md
description: Learn how to troubleshoot issues with the REST connector in Azure D
Previously updated : 10/20/2023 Last updated : 08/29/2024
This article provides suggestions to troubleshoot common problems with the REST
Tools like **Fiddler** are recommended for the preceding case.
+## The service principal certificate in Azure Key Vault is not correct
+
+- **Message**: `"Failed to create certificate from certificate raw data and password. Cannot find the requested object."`
+- **Cause**: Only a base64-encoded service principal certificate is supported for REST connector service principal certificate authentication.
+- **Recommendation**: Follow this [section](connector-rest.md#save-the-service-principal-certificate-in-azure-key-vault) to save the service principal certificate in Azure Key Vault correctly.
+ ## Related content For more troubleshooting help, try these resources:
databox Data Box System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md
- Previously updated : 09/09/2024+ Last updated : 10/21/2022
This article describes important system requirements for your Microsoft Azure Da
The system requirements include:
-* **Software requirements:** For hosts that connect to the Data Box, describes supported operating systems, file transfer protocols, storage accounts, storage types, and browsers for the local web UI.
-* **Networking requirements:** For the Data Box, describes requirements for network connections and ports for best operation of the Data Box.
+* **Software requirements** for hosts that connect to Data Box.<br>
+They describe supported operating systems, file transfer protocols, storage accounts, storage types, and browsers for the local web UI.
+* **Networking requirements** for the Data Box device.<br>
+They describe network connections and ports used for optimal Data Box device operation.
## Software requirements
The software requirements include supported operating systems, file transfer pro
[!INCLUDE [data-box-supported-file-systems-clients](../../includes/data-box-supported-file-systems-clients.md)] > [!IMPORTANT]
-> Connection to Data Box shares is not supported via REST for export orders.
-> Transporting data from on-premises NFS clients into Data Box using NFSv4 is supported. However, to copy data from Data Box to Azure, Data Box supports only REST-based transport. Azure file share with NFSv4.1 does not support REST for data access/transfers.
+> Connection to Data Box shares is not supported via REST for export orders.
+>
+> You can transport your data to Data Box from on-premises Network File System (NFS) clients by using NFSv4. However, when copying data from Data Box to Azure, Data Box supports REST-based transport only. Azure file shares with NFSv4.1 don't support REST for data access or transfer.
+ ### Supported storage accounts > [!Note]
Your datacenter needs to have high-speed network. We strongly recommend you have
### Port requirements
-The following table lists the ports that need to be opened in your firewall to allow for SMB or NFS traffic. In this table, *In* (*inbound*) refers to the direction from which incoming client requests access to your device. *Out* (or *outbound*) refers to the direction in which your Data Box device sends data externally, beyond the deployment. For example, data might be outbound to the Internet.
+The following table lists the ports that need to be opened in your firewall to allow for Server Message Block (SMB) or Network File System (NFS) traffic. In this table, *In* (*inbound*) refers to the direction from which incoming client requests access to your device. *Out* (or *outbound*) refers to the direction in which your Data Box device sends data externally, beyond the deployment. For example, data might be outbound to the Internet.
[!INCLUDE [data-box-port-requirements](../../includes/data-box-port-requirements.md)]
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Azure Data Manager for Energy has now been upgraded with the supported set of se
### Syncing Reference Values We are releasing a Limited Preview for syncing Reference Values with your Azure Data Manager for Energy data partitions. Note that this feature is currently only available for newly created Azure Data Manager for Energy after feature enablement for your Azure subscription. Learn more about [Reference Values on Azure Data Manager for Energy](concepts-reference-data-values.md).
+### CNAME DNS Record Fix
+Previously, each ADME resource had an incorrect privatelink DNS record by default, causing inaccessibility issues for some SLB apps. This release resolves the issue for both new and existing instances, ensuring correct and secure configuration of private endpoints. For more details, see [How to setup private links](how-to-set-up-private-links.md).
+ ## June 2024 ### Azure Data Manager for Energy Developer Tier Price Update
event-hubs Send And Receive Events Using Data Generator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/send-and-receive-events-using-data-generator.md
Last updated 06/07/2024
In this quickstart, you learn how to send and receive events by using Azure Event Hubs Data Generator.
+> [!IMPORTANT]
+> The Data generator preview feature is deprecated and has been **replaced with the [Event Hubs Data Explorer](event-hubs-data-explorer.md)**. Use the Event Hubs Data Explorer to send events to and receive events from an Event Hubs namespace using the portal.
+>
+ ## Prerequisites If you're new to Event Hubs, see the [Event Hubs overview](event-hubs-about.md) before you go through this quickstart.
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/changes/get-resource-changes.md
When a resource is created, updated, or deleted, a new change resource (`Microso
"properties.provisioningState": { "newValue": "Succeeded", "previousValue": "Updating",
- "changeCategory": "System",
- "propertyChangeType": "Update",
"isTruncated": "true" }, "tags.key1": {
governance Supported Tables Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/reference/supported-tables-resources.md
For sample queries for this table, see [Resource Graph sample queries for adviso
- microsoft.chaos/targets - microsoft.chaos/targets/capabilities
+## computeresources
+
+- microsoft.compute/virtualmachinescalesets/virtualmachines
+- microsoft.compute/virtualmachinescalesets/virtualmachines/networkinterfaces
+- microsoft.compute/virtualmachinescalesets/virtualmachines/networkinterfaces/ipconfigurations/publicipaddresses
+ ## desktopvirtualizationresources - microsoft.desktopvirtualization/hostpools/sessionhosts
For sample queries for this table, see [Resource Graph sample queries for resour
- microsoft.compute/virtualmachines/runcommands - Microsoft.Compute/virtualMachineScaleSets (Virtual machine scale sets) - Sample query: [Get virtual machine scale set capacity and size](../samples/samples-by-category.md#get-virtual-machine-scale-set-capacity-and-size)-- microsoft.compute/virtualmachinescalesets/virtualmachines-- microsoft.compute/virtualmachinescalesets/virtualmachines/networkinterfaces - microsoft.compute/virtualmachinescalesets/virtualmachines/networkinterfaces/ipconfigurations/publicipaddresses - Microsoft.ConfidentialLedger/ledgers (Confidential Ledgers) - Microsoft.Confluent/organizations (Confluent organizations)
hdinsight Hdinsight Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-component-versioning.md
Title: Open-source components and versions - Azure HDInsight
description: Learn about the open-source components and versions in Azure HDInsight. Previously updated : 10/25/2023 Last updated : 09/10/2024 # Azure HDInsight versions
Support defined as a time period that an HDInsight version supported by Microsof
- **Standard support** - **Basic support**
-### For EOL versions (Spark 2.4 clusters):
+### For EOL versions:
| Action | Till Jul 2024 | After Jul 2024 | After Sep 2024 |
| ------ | ------------- | -------------- | -------------- |
hdinsight Hdinsight Ranger 5 1 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-ranger-5-1-migration.md
+
+ Title: Upgrade to Apache Ranger in Azure HDInsight
+description: Learn how to upgrade to Apache Ranger in Azure HDInsight
++ Last updated : 09/10/2024++
+# Upgrade to Apache Ranger in Azure HDInsight
+
+HDInsight 5.1 has Apache Ranger version 2.3.0, which is a major version upgrade from 1.2.0 in HDI 4.1. [Ranger 2.3.0](https://cwiki.apache.org/confluence/display/RANGER/Apache+Ranger+2.3.0+-+Release+Notes) has multiple improvements, features, and DB schema changes.
+
+## Behavioral changes
+
+Hive Ranger permissions - In the 5.1 stack for Hive, default Hive Ranger policies have been added that allow all users to:
+
+* Create a database.
+* Have all privileges on default database tables and columns.
+
+This is different from the 4.0 stack, where these default policies aren't present.
+
+This change was introduced in OSS (open-source software) Ranger: [Create Default Policies for Hive Databases - default, Information_schema](https://issues.apache.org/jira/browse/RANGER-2539).
+
+Ranger User Interface in HDInsight 4.0 and earlier versions:
++
+Ranger User Interface in HDInsight 5.1:
++
+> [!NOTE]
+> The default policy **all databases** has public group access enabled by default from HDInsight 5.1.
+
+### What does this mean for customers onboarding to 5.1?
+
+Customers onboarding to 5.1 will see that new users added to the cluster (via LDAP sync through AADS) and internal cluster users have privileges to create a new database, plus read/write privileges on default database tables and columns.
+
+This behavior is different from 4.0 clusters. If you need to disallow this behavior and keep the default permissions the same as in 4.0, you need to:
+
+* Disable the **all-databases** policy in the Ranger UI, or edit the **all-database** policy to remove the **public** group from the policy.
+* Remove the **public** group from the **default database tables columns** policy in the Ranger UI.
++
+The Ranger UI is available by navigating to the Ranger component and selecting **Ranger UI** on the right side.
+
+### User Interface differences
+
+* The Ranger admin URL has a new UI and look and feel. There's an option to switch to the classic Ranger 1.2.0 UI as well.
+
+* The root service for Hive is renamed to Hadoop SQL.
+
+* Hive/Hadoop SQL also has new capabilities for adding roles under Ranger.
+
+## Migration method recommendations
+
+As a migration path to HDInsight 5.1, migrating Ranger policies between clusters is recommended only through the Ranger import/export options.
+
+> [!NOTE]
+> Reuse of the HDInsight 4.1 Ranger database in HDInsight 5.1 Ranger service configurations isn't recommended. The Ranger service would fail to restart with the following exception due to differences in the DB schema.
+
+```
+2023-11-01 12:47:20,295 [JISQL] /usr/lib/jvm/lib/mssql-jdbc-7.4.1.jre8.jar:/usr/hdp/current/ranger-admin/jisql/lib/\* org.apache.util.sql.Jisql -user ranger -p '\*\*\*\*\*\*\*\*' -driver mssql -cstring jdbc:sqlserver://xxx\;databaseName=ranger -noheader -trim -c \; -query "delete from x\_db\_version\_h where version = '040' and active = 'N' and updated\_by=xxx.com';"
+2023-11-01 12:47:21,095 [E] 040-modify-unique-constraint-on-policy-table.sql import failed!
+```
+
+## Migration steps
+
+Follow these steps to export the Ranger policies from the old cluster and import them into the new one:
+
+1. Go to the Ranger page on the old HDInsight 4.0 cluster and select **Export**.
+
+1. Save the file.
+
+1. On the new 5.1 cluster, open Ranger and import the file that you saved in step 2.
+
+1. Map the services appropriately and set the override flag.
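As a sketch, the same export and import can be scripted with the Apache Ranger admin REST API instead of the UI. The endpoint paths below come from open-source Ranger; the cluster names, credentials, and query parameters are placeholders to verify against your Ranger version:

```bash
# Export all policies from the old cluster's Ranger admin to a local file:
curl -u admin:'<password>' -o ranger-policies.json \
  "https://OLD-CLUSTER.azurehdinsight.net/ranger/service/plugins/policies/exportJson"

# Import the policies into the new cluster, overriding existing policies
# (the REST equivalent of setting the override flag in the UI):
curl -u admin:'<password>' -X POST \
  -F 'file=@ranger-policies.json' \
  "https://NEW-CLUSTER.azurehdinsight.net/ranger/service/plugins/policies/importPoliciesFromFile?isOverride=true"
```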
hdinsight Apache Kafka Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-get-started.md
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
* **To create a topic**, use the following command in the SSH connection:

    ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 3 --partitions 8 --topic test --zookeeper $KAFKAZKHOSTS
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create --replication-factor 3 --partitions 8 --topic test --bootstrap-server $KAFKABROKERS
```
- This command connects to Zookeeper using the host information stored in `$KAFKAZKHOSTS`. It then creates an Apache Kafka topic named **test**.
+ This command connects to the broker by using the host information stored in `$KAFKABROKERS`. It then creates an Apache Kafka topic named **test**.
* Data stored in this topic is partitioned across eight partitions.
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
* **To list topics**, use the following command:

    ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper $KAFKAZKHOSTS
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --bootstrap-server $KAFKABROKERS
    ```

    This command lists the topics available on the Apache Kafka cluster.
Kafka stores streams of data in *topics*. You can use the `kafka-topics.sh` util
* **To delete a topic**, use the following command:

    ```bash
- /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --delete --topic topicname --zookeeper $KAFKAZKHOSTS
+ /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --delete --topic topicname --bootstrap-server $KAFKABROKERS
    ```

    This command deletes the topic named `topicname`.
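To sanity-check a topic end to end, you can produce and consume a few messages with the console tools that ship alongside `kafka-topics.sh`. This is a sketch; it assumes `$KAFKABROKERS` is already set and a Kafka version whose console tools accept `--bootstrap-server`:

```bash
# Write test records to the topic (type a few lines, then Ctrl+C):
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --bootstrap-server $KAFKABROKERS --topic test

# Read the records back from the beginning of the topic (Ctrl+C to stop):
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
    --bootstrap-server $KAFKABROKERS --topic test --from-beginning
```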
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Title: IoT Edge supported platforms
description: Azure IoT Edge supported operating systems, runtimes, and container engines. Previously updated : 05/01/2024 Last updated : 09/04/2024
Modules built as Linux containers can be deployed to either Linux or Windows dev
[IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
+
+| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
+| - | -- | - | -- | -- |
+| [Debian 11](https://www.debian.org/releases/bullseye/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) |
+| [Red Hat Enterprise Linux 9](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
+| [Red Hat Enterprise Linux 8](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | | [May 2029](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
+| [Ubuntu Server 22.04](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) | [June 2027](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Server 20.04](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | [April 2025](https://wiki.ubuntu.com/Releases) |
+| [Windows 10/11](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows 10/11 + AMD64](./media/support/green-check.png) | | | See the [prerequisites](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
+| [Windows Server 2019/2022](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | | See the [prerequisites](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
+++

| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
| - | -- | - | -- | -- |
+| [Debian 12](https://www.debian.org/releases/bookworm/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2028](https://wiki.debian.org/LTS) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) |
| [Red Hat Enterprise Linux 9](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
| [Red Hat Enterprise Linux 8](https://access.redhat.com/articles/3078) | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | | [May 2029](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
+| [Ubuntu Server 24.04](https://wiki.ubuntu.com/NobleNumbat/ReleaseNotes) | ![Ubuntu Server 24.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 24.04 + ARM64](./media/support/green-check.png) | [June 2029](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Server 22.04](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | ![Ubuntu Server 22.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 22.04 + ARM64](./media/support/green-check.png) | [June 2027](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Server 20.04](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) | [April 2025](https://wiki.ubuntu.com/Releases) |
+| [Ubuntu Core <sup>1</sup>](https://snapcraft.io/azure-iot-edge) | ![Ubuntu Core + AMD64](./media/support/green-check.png) | | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) |
| [Windows 10/11](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows 10/11 + AMD64](./media/support/green-check.png) | | | See the [prerequisites](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
| [Windows Server 2019/2022](iot-edge-for-linux-on-windows.md#prerequisites) | ![Windows Server 2019/2022 + AMD64](./media/support/green-check.png) | | | See the [prerequisites](iot-edge-for-linux-on-windows.md#prerequisites) for supported Windows OS versions. |
+<sup>1</sup> Ubuntu Core is fully supported but the automated testing of Snaps currently happens on Ubuntu 22.04 Server LTS.
++

> [!NOTE]
> When a *Tier 1* operating system reaches its end of standard support date, it's removed from the *Tier 1* supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
The systems listed in the following table are considered compatible with Azure I
| [Ubuntu Server 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Core <sup>3</sup>](https://snapcraft.io/azure-iot-edge) | ![Ubuntu Core + AMD64](./media/support/green-check.png) | | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | |
-| [Yocto (Kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
+| [Yocto (kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
| Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) | [June 2024](https://wiki.debian.org/LTS) |

<sup>1</sup> With the release of 1.3, there are new system calls that cause crashes in Debian 10. To see the workaround, view the [Known issue: Debian 10 (Buster) on ARMv7](https://github.com/Azure/azure-iotedge/releases) section of the 1.3 release notes for details.
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
| - | -- | - | -- | -- |
-| [Debian 11 ](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) |
+| [Debian 12](https://www.debian.org/releases/bookworm/) | ![Debian 12 + AMD64](./media/support/green-check.png) | | ![Debian 12 + ARM64](./media/support/green-check.png) | [June 2028](https://wiki.debian.org/LTS) |
+| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | |
| [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) | |
+| [Ubuntu Server 24.04 <sup>1</sup>](https://wiki.ubuntu.com/NobleNumbat/ReleaseNotes) | | ![Ubuntu 24.04 + ARM32v7](./media/support/green-check.png) | | [June 2029](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Server 22.04 <sup>1</sup>](https://wiki.ubuntu.com/JammyJellyfish/ReleaseNotes) | | ![Ubuntu 22.04 + ARM32v7](./media/support/green-check.png) | | [June 2027](https://wiki.ubuntu.com/Releases) |
| [Ubuntu Server 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | | [April 2025](https://wiki.ubuntu.com/Releases) |
-| [Ubuntu Core <sup>2</sup>](https://snapcraft.io/azure-iot-edge) | ![Ubuntu Core + AMD64](./media/support/green-check.png) | | ![Ubuntu Core + ARM64](./media/support/green-check.png) | [April 2027](https://ubuntu.com/about/release-cycle) |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | |
-| [Yocto (Kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
-
+| [Yocto (scarthgap)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2028](https://wiki.yoctoproject.org/wiki/Releases) |
+| [Yocto (kirkstone)](https://www.yoctoproject.org/)<br>For Yocto issues, open a [GitHub issue](https://github.com/Azure/meta-iotedge/issues) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | [April 2026](https://wiki.yoctoproject.org/wiki/Releases) |
<sup>1</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
-<sup>2</sup> Ubuntu Core is fully supported but the automated testing of Snaps currently happens on Ubuntu 22.04 Server LTS.
-
::: moniker-end

> [!NOTE]
iot-hub Device Management Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-python.md
You're now ready to run the device code and the service code that initiates a re
The following shows the device response to the reboot direct method:
- ![Simulated device app output](./media/iot-hub-python-python-device-management-get-started/device.png)
+ ![Screenshot that shows the output of the simulated device app after receiving reboot direct method.](./media/device-management-python/device.png)
The following shows the service calling the reboot direct method and polling the device twin for status:
- ![Trigger reboot service output](./media/iot-hub-python-python-device-management-get-started/service.png)
+ ![Screenshot that shows the output of the service app after sending reboot direct method.](./media/device-management-python/service.png)
[!INCLUDE [iot-hub-dm-followup](../../includes/iot-hub-dm-followup.md)]
iot-operations Howto Configure Dataflow Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-dataflow-endpoint.md
spec:
  endpointType: mqtt
  authentication:
    method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings:{
+ systemAssignedManagedIdentitySettings:
audience: "https://eventgrid.azure.net"
- }
  mqttSettings:
    host: example.westeurope-1.ts.eventgrid.azure.net:8883
    tls:
spec:
  endpointType: kafka
  authentication:
    method: systemAssignedManagedIdentity
- systemAssignedManagedIdentitySettings: {
+ systemAssignedManagedIdentitySettings:
audience: "https://eventgrid.azure.net"
- }
  kafkaSettings:
    host: <NAMESPACE>.servicebus.windows.net:9093
    tls:
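Assembled from the fragments above, here's a minimal end-to-end sketch of an MQTT dataflow endpoint applied with kubectl. The `apiVersion`, `kind`, resource name, namespace, and `tls.mode` value are assumptions to check against the CRDs deployed in your cluster:

```bash
kubectl apply -f - <<'EOF'
apiVersion: connectivity.iotoperations.azure.com/v1beta1   # assumed group/version
kind: DataflowEndpoint
metadata:
  name: eventgrid-mqtt                # placeholder name
  namespace: azure-iot-operations     # assumed namespace
spec:
  endpointType: mqtt
  authentication:
    method: systemAssignedManagedIdentity
    systemAssignedManagedIdentitySettings:
      audience: "https://eventgrid.azure.net"
  mqttSettings:
    host: example.westeurope-1.ts.eventgrid.azure.net:8883
    tls:
      mode: Enabled                   # assumed field value
EOF
```

The Kafka variant swaps in `endpointType: kafka` and a `kafkaSettings` block with the Event Hubs-compatible host shown above.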
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/egress-only.md
This configuration provides outbound NAT for an internal load balancer scenario,
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).

## Create internal load balancer
notification-hubs Notification Hubs Push Notification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-overview.md
Title: What is Azure Notification Hubs? description: Learn how to add push notification capabilities with Azure Notification Hubs. - - ms.assetid: fcfb0ce8-0e19-4fa8-b777-6b9f9cdda178 multiple
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
Oracle Database@Azure is available in the following locations. Oracle Database@A
|-|:-:|:--:|
|East US |&check; | &check;|
|Germany West Central | &check;|&check; |
-|France Central |&check; | |
+|France Central |&check; | &check;|
|UK South |&check; |&check; |
|Canada Central |&check; |&check; |
|Australia East |&check; |&check; |
security Backup Plan To Protect Against Ransomware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/backup-plan-to-protect-against-ransomware.md
After a ransomware attack or an incident response simulation, take the following
1. Identify lessons learned where the process didn't work well (and opportunities to simplify, accelerate, or otherwise improve the process)
2. Perform root cause analysis on the biggest challenges (at enough detail to ensure solutions address the right problem — considering people, process, and technology)
-3. Investigate and remediate the original breach (engage the [Microsoft Detection and Response Team (DART)](https://www.microsoft.com/security/blog/2019/03/25/dart-the-microsoft-cybersecurity-team-we-hope-you-never-meet/) to help)
+1. Investigate and remediate the original breach (engage the [Microsoft Incident Response team (formerly DART)](https://www.microsoft.com/security/blog/2019/03/25/dart-the-microsoft-cybersecurity-team-we-hope-you-never-meet/) to help)
4. Update your backup and restore strategy based on lessons learned and opportunities — prioritizing based on highest impact and quickest implementation steps first

## Next steps
-In this article, you learned how to improve your backup and restore plan to protect against ransomware. For best practices on deploying ransomware protection, see Rapidly protect against ransomware and extortion.
+For best practices on deploying ransomware protection, see Rapidly protect against ransomware and extortion.
Key industry information:
Microsoft Defender XDR:
- [Find ransomware with advanced hunting](/microsoft-365/security/defender/advanced-hunting-find-ransomware)
-Microsoft Security team blog posts:
-
-- [Becoming resilient by understanding cybersecurity risks: Part 4, navigating current threats (May 2021)](https://www.microsoft.com/security/blog/2021/05/26/becoming-resilient-by-understanding-cybersecurity-risks-part-4-navigating-current-threats/). See the Ransomware section
-- [Human-operated ransomware attacks: A preventable disaster (March 2020)](https://www.microsoft.com/security/blog/2020/03/05/human-operated-ransomware-attacks-a-preventable-disaster/). Includes attack chain analysis of actual human-operated ransomware attacks
-- [Ransomware response — to pay or not to pay? (December 2019)](https://www.microsoft.com/security/blog/2019/12/16/ransomware-response-to-pay-or-not-to-pay/)
-- [Norsk Hydro responds to ransomware attack with transparency (December 2019)](https://www.microsoft.com/security/blog/2019/12/17/norsk-hydro-ransomware-attack-transparency/)
sentinel Add Entity To Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-entity-to-threat-intelligence.md
Last updated 3/14/2024
appliesto: - Microsoft Sentinel in the Azure portal
-#Customer intent: As a security analyst, I want to quickly add relevant threat intelligence from my investigation for myself and others so I don't lose important information.
+#Customer intent: As a security analyst, I want to quickly add relevant threat intelligence from my investigation for myself and others so that I don't lose important information.
# Add entities to threat intelligence in Microsoft Sentinel During an investigation, you examine entities and their context as an important part of understanding the scope and nature of an incident. When you discover an entity as a malicious domain name, URL, file, or IP address in the incident, it should be labeled and tracked as an indicator of compromise (IOC) in your threat intelligence.
-For example, you discover an IP address performing port scans across your network, or functioning as a command and control node, sending and/or receiving transmissions from large numbers of nodes in your network.
+For example, you might discover an IP address that performs port scans across your network or functions as a command and control node by sending and/or receiving transmissions from large numbers of nodes in your network.
-Microsoft Sentinel allows you to flag these types of entities right from within your incident investigation, and add it to your threat intelligence. You are able to view the added indicators both in **Logs** and **Threat Intelligence**, and use them across your Microsoft Sentinel workspace.
+With Microsoft Sentinel, you can flag these types of entities from within your incident investigation and add them to your threat intelligence. You can view the added indicators in **Logs** and **Threat Intelligence** and use them across your Microsoft Sentinel workspace.
## Add an entity to your threat intelligence
-The new [incident details page](investigate-incidents.md) gives you another way to add entities to threat intelligence, in addition to the investigation graph. Both ways are shown below.
+The [Incident details page](investigate-incidents.md) and the investigation graph give you two ways to add entities to threat intelligence.
# [Incident details page](#tab/incidents)
-1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+1. On the Microsoft Sentinel menu, select **Incidents** from the **Threat management** section.
-1. Select an incident to investigate. In the incident details panel, select **View full details** to open the incident details page.
+1. Select an incident to investigate. On the **Incident details** pane, select **View full details** to open the **Incident details** page.
- :::image type="content" source="media/add-entity-to-threat-intelligence/incident-details-overview.png" alt-text="Screenshot of incident details page." lightbox="media/add-entity-to-threat-intelligence/incident-details-overview.png":::
+1. On the **Entities** pane, find the entity that you want to add as a threat indicator. (You can filter the list or enter a search string to help you locate it.)
-1. Find the entity from the **Entities** widget that you want to add as a threat indicator. (You can filter the list or enter a search string to help you locate it.)
+ :::image type="content" source="media/add-entity-to-threat-intelligence/incident-details-overview.png" alt-text="Screenshot that shows the Incident details page." lightbox="media/add-entity-to-threat-intelligence/incident-details-overview.png":::
1. Select the three dots to the right of the entity, and select **Add to TI** from the pop-up menu.
- Only the following types of entities can be added as threat indicators:
+ Add only the following types of entities as threat indicators:
+
+ - Domain name
+ - IP address (IPv4 and IPv6)
+ - URL
+ - File (hash)
- :::image type="content" source="media/add-entity-to-threat-intelligence/entity-actions-from-overview.png" alt-text="Screenshot of adding an entity to threat intelligence.":::
+ :::image type="content" source="media/add-entity-to-threat-intelligence/entity-actions-from-overview.png" alt-text="Screenshot that shows adding an entity to threat intelligence.":::
# [Investigation graph](#tab/cases)
-The [investigation graph](investigate-cases.md) is a visual, intuitive tool that presents connections and patterns and enables your analysts to ask the right questions and follow leads. You can use it to add entities to your threat intelligence indicator lists, making them available across your workspace.
+The [investigation graph](investigate-cases.md) is a visual, intuitive tool that presents connections and patterns and enables your analysts to ask the right questions and follow leads. Use it to add entities to your threat intelligence indicator lists by making them available across your workspace.
+
+1. On the Microsoft Sentinel menu, select **Incidents** from the **Threat management** section.
-1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+1. Select an incident to investigate. On the **Incident details** pane, select **Actions**, and choose **Investigate** from the pop-up menu to open the investigation graph.
-1. Select an incident to investigate. In the incident details panel, select the **Actions** button and choose **Investigate** from the pop-up menu. This will open the investigation graph.
+ :::image type="content" source="media/add-entity-to-threat-intelligence/select-incident-to-investigate.png" alt-text="Screenshot that shows selecting an incident from the list to investigate.":::
- :::image type="content" source="media/add-entity-to-threat-intelligence/select-incident-to-investigate.png" alt-text="Screenshot of selecting incident from queue to investigate.":::
+1. Select the entity from the graph that you want to add as a threat indicator. On the side pane that opens, select **Add to TI**.
-1. Select the entity from the graph that you want to add as a threat indicator. A side panel will open on the right. Select **Add to TI**.
+ Only add the following types of entities as threat indicators:
- Only the following types of entities can be added as threat indicators:
   - Domain name
   - IP address (IPv4 and IPv6)
   - URL
   - File (hash)
- :::image type="content" source="media/add-entity-to-threat-intelligence/add-entity-to-ti.png" alt-text="Screenshot of adding entity to threat intelligence.":::
+ :::image type="content" source="media/add-entity-to-threat-intelligence/add-entity-to-ti.png" alt-text="Screenshot that shows adding an entity to threat intelligence.":::
-Whichever of the two interfaces you choose, you will end up here:
+Whichever of the two interfaces you choose, you end up here.
-1. The **New indicator** side panel will open. The following fields will be populated automatically:
+1. The **New indicator** side pane opens. The following fields are populated automatically:
- - **Type**
- - The type of indicator represented by the entity you're adding.
- Drop-down with possible values: *ipv4-addr*, *ipv6-addr*, *URL*, *file*, *domain-name*
- - Required; automatically populated based on the **entity type**.
+ - **Types**
+ - The type of indicator represented by the entity you're adding.
+ - Dropdown list with possible values: `ipv4-addr`, `ipv6-addr`, `URL`, `file`, and `domain-name`.
+ - Required. Automatically populated based on the *entity type*.
   - **Value**
     - The name of this field changes dynamically to the selected indicator type.
     - The value of the indicator itself.
- - Required; automatically populated by the **entity value**.
+ - Required. Automatically populated by the *entity value*.
- - **Tags**
+ - **Tags**
- Free-text tags you can add to the indicator.
- - Optional; automatically populated by the **incident ID**. You can add others.
+ - Optional. Automatically populated by the *incident ID*. You can add others.
- **Name**
- - Name of the indicator&mdash;this is what will be displayed in your list of indicators.
- - Optional; automatically populated by the **incident name.**
+ - Name of the indicator. This name is what appears in your list of indicators.
+ - Optional. Automatically populated by the *incident name*.
   - **Created by**
     - Creator of the indicator.
- - Optional; automatically populated by the user logged into Microsoft Sentinel.
+ - Optional. Automatically populated by the user signed in to Microsoft Sentinel.
Fill in the remaining fields accordingly.
- - **Threat type**
+ - **Threat types**
- The threat type represented by the indicator.
- - Optional; free text.
+ - Optional. Free text.
   - **Description**
     - Description of the indicator.
- - Optional; free text.
+ - Optional. Free text.
- **Revoked**
- - Revoked status of the indicator. Mark checkbox to revoke the indicator, clear checkbox to make it active.
- - Optional; boolean.
+ - Revoked status of the indicator. Select the checkbox to revoke the indicator. Clear the checkbox to make it active.
+ - Optional. Boolean.
- **Confidence**
- - Score reflecting confidence in the correctness of the data, by percent.
- - Optional; integer, 1-100
+ - Score that reflects confidence in the correctness of the data, by percent.
+ - Optional. Integer, 1-100.
- - **Kill chain**
- - Phases in the [*Lockheed Martin Cyber Kill Chain*](https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html#OVERVIEW) to which the indicator corresponds.
- - Optional; free text
+ - **Kill chains**
+ - Phases in the [Lockheed Martin Cyber Kill Chain](https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html#OVERVIEW) to which the indicator corresponds.
+ - Optional. Free text.
   - **Valid from**
     - The time from which this indicator is considered valid.
- - Required; date/time
+ - Required. Date/time.
   - **Valid until**
     - The time at which this indicator should no longer be considered valid.
- - Optional; date/time
+ - Optional. Date/time.
- :::image type="content" source="media/add-entity-to-threat-intelligence/new-indicator-panel.png" alt-text="Screenshot of entering information in new threat indicator panel.":::
+ :::image type="content" source="media/add-entity-to-threat-intelligence/new-indicator-panel.png" alt-text="Screenshot that shows entering information in the new threat indicator pane.":::
-1. When all the fields are filled in to your satisfaction, select **Apply**. You'll see a confirmation message in the upper-right-hand corner that your indicator was created.
+1. When all the fields are filled in to your satisfaction, select **Apply**. A message appears in the upper-right corner to confirm that your indicator was created.
-1. The entity will be added as a threat indicator in your workspace. You can find it [in the list of indicators in the **Threat intelligence** page](work-with-threat-indicators.md#find-and-view-your-indicators-in-the-threat-intelligence-page), and also [in the *ThreatIntelligenceIndicators* table in **Logs**](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
+1. The entity is added as a threat indicator in your workspace. You can find it [in the list of indicators on the Threat intelligence page](work-with-threat-indicators.md#find-and-view-your-indicators-on-the-threat-intelligence-page). You can also find it [in the ThreatIntelligenceIndicators table in Logs](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
## Related content
-In this article, you learned how to add entities to your threat indicator lists. For more information, see:
+In this article, you learned how to add entities to your threat indicator lists. For more information, see the following articles:
- [Investigate incidents with Microsoft Sentinel](investigate-incidents.md)
- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
sentinel Connect Common Event Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-common-event-format.md
If you're not seeing any data, see the [CEF troubleshooting](./troubleshooting-c
By default, the Log Analytics agent populates the *TimeGenerated* field in the schema with the time the agent received the event from the Syslog daemon. As a result, the time at which the event was generated on the source system is not recorded in Microsoft Sentinel.
-You can, however, run the following command, which will download and run the `TimeGenerated.py` script. This script configures the Log Analytics agent to populate the *TimeGenerated* field with the event's original time on its source system, instead of the time it was received by the agent.
+You can, however, run the following command, which will download and run the `TimeGenerated.py` script. This script configures the Log Analytics agent to populate the *TimeGenerated* field with the event's original time on its source system, instead of the time it was received by the agent. In the following command, replace `{WORKSPACE_ID}` with your own workspace ID.
```bash
-wget -O TimeGenerated.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/TimeGenerated.py && python TimeGenerated.py {ws_id}
+wget -O TimeGenerated.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/TimeGenerated.py && python TimeGenerated.py {WORKSPACE_ID}
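
# Sketch (not part of the original command above): one way to look up the
# workspace ID (the Log Analytics "customerId") with the Azure CLI; the
# resource group and workspace names below are placeholders.
az monitor log-analytics workspace show \
  --resource-group MyResourceGroup \
  --workspace-name MyWorkspace \
  --query customerId -o tsv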
```

## Next steps
sentinel Connect Google Cloud Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-google-cloud-platform.md
With the **GCP Pub/Sub** connectors, based on our [Codeless Connector Platform (
- The **Google Cloud Platform (GCP) Security Command Center connector** collects findings from Google Security Command Center, a robust security and risk management platform for Google Cloud. Analysts can view these findings to gain insights into the organization's security posture, including asset inventory and discovery, detections of vulnerabilities and threats, and risk mitigation and remediation.
-> [!IMPORTANT]
-> The GCP Pub/Sub connectors are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
## Prerequisites

Before you begin, verify that you have the following:
Follow the instructions in the Google Cloud documentation to [**configure Pub/Su
1. Select **Data connectors**, and in the search bar, type *GCP Pub/Sub Audit Logs*.
-1. Select the **GCP Pub/Sub Audit Logs (Preview)** connector.
+1. Select the **GCP Pub/Sub Audit Logs** connector.
1. In the details pane, select **Open connector page**.
Follow the instructions in the Google Cloud documentation to [**configure Pub/Su
1. Select **Data connectors**, and in the search bar, type *Google Security Command Center*.
-1. Select the **Google Security Command Center (Preview)** connector.
+1. Select the **Google Security Command Center** connector.
1. In the details pane, select **Open connector page**.
sentinel Connect Threat Intelligence Taxii https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-threat-intelligence-taxii.md
Title: Connect to STIX/TAXII threat intelligence feeds
-description: Learn about how to connect Microsoft Sentinel to industry-standard threat intelligence feeds to import threat indicators.
+description: Learn how to connect Microsoft Sentinel to industry-standard threat intelligence feeds to import threat indicators.
Last updated 3/14/2024
appliesto:
- Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
-#customer intent: As a SOC admin, I want to connect Microsoft Sentinel to a STIX/TAXII feed to ingest threat intelligence, so I can generate alerts incidents.
+#customer intent: As a SOC admin, I want to connect Microsoft Sentinel to a STIX/TAXII feed to ingest threat intelligence so that I can generate alerts and incidents.
# Connect Microsoft Sentinel to STIX/TAXII threat intelligence feeds
-The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization receives threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), you can use the **Threat Intelligence - TAXII data connector** to bring your threat indicators into Microsoft Sentinel. This connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers.
+The most widely adopted industry standard for the transmission of threat intelligence is a [combination of the STIX data format and the TAXII protocol](https://oasis-open.github.io/cti-documentation/). If your organization receives threat indicators from solutions that support the current STIX/TAXII version (2.0 or 2.1), you can use the Threat Intelligence - TAXII data connector to bring your threat indicators into Microsoft Sentinel. This connector enables a built-in TAXII client in Microsoft Sentinel to import threat intelligence from TAXII 2.x servers.
-To import STIX formatted threat indicators to Microsoft Sentinel from a TAXII server, you must get the TAXII server API Root and Collection ID, and then enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel.
+To import STIX-formatted threat indicators to Microsoft Sentinel from a TAXII server, you must get the TAXII server API root and collection ID. Then you enable the Threat Intelligence - TAXII data connector in Microsoft Sentinel.
-Learn more about [Threat Intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [TAXII threat intelligence feeds](threat-intelligence-integration.md#taxii-threat-intelligence-feeds) that can be integrated with Microsoft Sentinel.
+Learn more about [threat intelligence](understand-threat-intelligence.md) in Microsoft Sentinel, and specifically about the [TAXII threat intelligence feeds](threat-intelligence-integration.md#taxii-threat-intelligence-feeds) that you can integrate with Microsoft Sentinel.
[!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)]
+For more information, see [Connect your threat intelligence platform (TIP) to Microsoft Sentinel](connect-threat-intelligence-tip.md).
+ [!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)]
-**See also**: [Connect your threat intelligence platform (TIP) to Microsoft Sentinel](connect-threat-intelligence-tip.md)
+## Prerequisites
-## Prerequisites
-- In order to install, update and delete standalone content or solutions in content hub, you need the **Microsoft Sentinel Contributor** role at the resource group level.
+- To install, update, and delete standalone content or solutions in the **Content hub**, you need the Microsoft Sentinel Contributor role at the resource group level.
- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
-- You must have a TAXII 2.0 or TAXII 2.1 **API Root URI** and **Collection ID**.
+- You must have a TAXII 2.0 or TAXII 2.1 API root URI and collection ID.
-## Get the TAXII server API Root and Collection ID
+## Get the TAXII server API root and collection ID
-TAXII 2.x servers advertise API Roots, which are URLs that host Collections of threat intelligence. You can usually find the API Root and the Collection ID in the documentation pages of the threat intelligence provider hosting the TAXII server.
+TAXII 2.x servers advertise API roots, which are URLs that host collections of threat intelligence. You can usually find the API root and the collection ID in the documentation pages of the threat intelligence provider that hosts the TAXII server.
> [!NOTE]
-> In some cases, the provider will only advertise a URL called a Discovery Endpoint. You can use the [cURL](https://en.wikipedia.org/wiki/CURL) utility to browse the discovery endpoint and request the API Root.
+> In some cases, the provider only advertises a URL called a discovery endpoint. You can use the [cURL](https://en.wikipedia.org/wiki/CURL) utility to browse the discovery endpoint and request the API root.
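For example, here's a sketch using cURL against a hypothetical TAXII 2.1 server; the URL and credentials are placeholders, and the `Accept` header assumes TAXII 2.1 (use `version=2.0` for a TAXII 2.0 server):

```bash
# Query the discovery endpoint for the advertised API roots:
curl -u 'user:password' \
  -H 'Accept: application/taxii+json;version=2.1' \
  'https://taxii.example.com/taxii2/'

# List the collections (and their IDs) under one of the returned API roots:
curl -u 'user:password' \
  -H 'Accept: application/taxii+json;version=2.1' \
  'https://taxii.example.com/api-root-1/collections/'
```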
## Install the Threat Intelligence solution in Microsoft Sentinel

To import threat indicators into Microsoft Sentinel from a TAXII server, follow these steps:
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**. <br>For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**.
+
+ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Content management** > **Content hub**.
1. Find and select the **Threat Intelligence** solution.
To import threat indicators into Microsoft Sentinel from a TAXII server, follow
For more information about how to manage the solution components, see [Discover and deploy out-of-the-box content](sentinel-solutions-deploy.md).
-## Enable the Threat intelligence - TAXII data connector
+## Enable the Threat Intelligence - TAXII data connector
-1. To configure the TAXII data connector, select the **Data connectors** menu.
+1. To configure the TAXII data connector, select the **Data connectors** menu.
-1. Find and select the **Threat Intelligence - TAXII** data connector > **Open connector page** button.
+1. Find and select the **Threat Intelligence - TAXII** data connector, and then select **Open connector page**.
- :::image type="content" source="media/connect-threat-intelligence-taxii/taxii-data-connector-config.png" alt-text="Screenshot displaying the data connectors page with the TAXII data connector listed." lightbox="media/connect-threat-intelligence-taxii/taxii-data-connector-config.png":::
+ :::image type="content" source="media/connect-threat-intelligence-taxii/taxii-data-connector-config.png" alt-text="Screenshot that shows the Data connectors page with the TAXII data connector listed." lightbox="media/connect-threat-intelligence-taxii/taxii-data-connector-config.png":::
-1. Enter a **friendly name** for this TAXII server Collection, the **API Root URL**, the **Collection ID**, a **Username** (if required), and a **Password** (if required), and choose the group of indicators and the polling frequency you want. Select the **Add** button.
+1. Enter a name for this TAXII server collection in the **Friendly name** text box. Fill in the text boxes for **API root URL**, **Collection ID**, **Username** (if necessary), and **Password** (if necessary). Choose the group of indicators and the polling frequency you want. Select **Add**.
- :::image type="content" source="media/connect-threat-intelligence-taxii/threat-intel-configure-taxii-servers.png" alt-text="Configure TAXII servers":::
+ :::image type="content" source="media/connect-threat-intelligence-taxii/threat-intel-configure-taxii-servers.png" alt-text="Screenshot that shows configuring TAXII servers.":::
-You should receive confirmation that a connection to the TAXII server was established successfully, and you may repeat the last step above as many times as you want, to connect to multiple Collections from one or more TAXII servers.
-
-Within a few minutes, threat indicators should begin flowing into this Microsoft Sentinel workspace. You can find the new indicators in the **Threat intelligence** blade, accessible from the Microsoft Sentinel navigation menu.
+You should receive confirmation that a connection to the TAXII server was established successfully. Repeat the last step as many times as you want to connect to multiple collections from one or more TAXII servers.
+Within a few minutes, threat indicators should begin flowing into this Microsoft Sentinel workspace. Find the new indicators on the **Threat intelligence** pane. You can access it from the Microsoft Sentinel menu.
-## IP allow listing for the Microsoft Sentinel TAXII client
+## IP allowlisting for the Microsoft Sentinel TAXII client
Some TAXII servers, like FS-ISAC, have a requirement to keep the IP addresses of the Microsoft Sentinel TAXII client on the allowlist. Most TAXII servers don't have this requirement.
-When relevant, the following IP addresses are those to include in your allowlist:
-
+When relevant, the following IP addresses are the addresses to include in your allowlist:
:::row:::
   :::column span="":::
When relevant, the following IP addresses are those to include in your allowlist
   :::column-end:::
:::row-end:::
-
## Related content
-In this document, you learned how to connect Microsoft Sentinel to threat intelligence feeds using the TAXII protocol. To learn more about Microsoft Sentinel, see the following articles.
+In this article, you learned how to connect Microsoft Sentinel to threat intelligence feeds by using the TAXII protocol. To learn more about Microsoft Sentinel, see the following articles:
- Learn how to [get visibility into your data and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
sentinel Create Codeless Connector Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector-legacy.md
This section provides metadata in the data connector UI under the **Description*
| **resourceProvider** | [resourceProvider](#resourceprovider) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel **Prerequisites** section as: <br>**Workspace: read and write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
| **tenant** | array of ENUM values<br>Example:<br><br>`"tenant": [`<br>`"GlobalAdmin",`<br>`"SecurityAdmin"`<br>`]`<br> | Defines the required permissions, as one or more of the following values: `"GlobalAdmin"`, `"SecurityAdmin"`, `"SecurityReader"`, `"InformationProtection"` <br><br>Example: displays the **tenant** value in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
+> [!IMPORTANT]
+> Microsoft recommends that you use roles with the fewest permissions. This helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+>
+
#### resourceProvider

|sub array value |Type |Description |
sentinel Data Connector Ui Definitions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connector-ui-definitions-reference.md
Provide either one query for all of the data connector's data types, or a differ
| **resourceProvider** | [resourceProvider](#resourceprovider) | Describes any prerequisites for your Azure resource. <br><br>Example: The **resourceProvider** value displays in Microsoft Sentinel **Prerequisites** section as: <br>**Workspace: read and write permission is required.**<br>**Keys: read permissions to shared keys for the workspace are required.**|
| **tenant** | array of ENUM values<br>Example:<br><br>`"tenant": [`<br>`"GlobalAdmin",`<br>`"SecurityAdmin"`<br>`]`<br> | Defines the required permissions, as one or more of the following values: `"GlobalAdmin"`, `"SecurityAdmin"`, `"SecurityReader"`, `"InformationProtection"` <br><br>Example: displays the **tenant** value in Microsoft Sentinel as: **Tenant Permissions: Requires `Global Administrator` or `Security Administrator` on the workspace's tenant**|
+> [!IMPORTANT]
+> Microsoft recommends that you use roles with the fewest permissions. This helps improve security for your organization. Global Administrator is a highly privileged role that should be limited to emergency scenarios when you can't use an existing role.
+
#### resourceProvider

|sub array value |Type |Description |
For more examples of the `connectorUiConfig` review [other CCP data connectors](
    }
  }
}
-```
+```
sentinel Indicators Bulk File Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/indicators-bulk-file-import.md
appliesto:
- Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
-#Customer intent: As a security analyst, I want to bulk import indicators from common file types to my threat intelligence (TI), so I can more effectively share TI during an investigation.
+#Customer intent: As a security analyst, I want to bulk import indicators from common file types to my threat intelligence so that I can more effectively share TI during an investigation.
# Add indicators in bulk to Microsoft Sentinel threat intelligence from a CSV or JSON file
-In this how-to guide, you'll add indicators from a CSV or JSON file into Microsoft Sentinel threat intelligence. A lot of threat intelligence sharing still happens across emails and other informal channels during an ongoing investigation. The ability to import indicators directly into Microsoft Sentinel threat intelligence allows you to quickly socialize emerging threats for your team and make them available to power other analytics such as producing security alerts, incidents, and automated responses.
+In this article, you add indicators from a CSV or JSON file into Microsoft Sentinel threat intelligence. Threat intelligence sharing still happens across emails and other informal channels during an ongoing investigation. Importing indicators directly into Microsoft Sentinel threat intelligence lets you quickly relay emerging threats to your team and make them available to power other analytics, such as producing security alerts, incidents, and automated responses.
> [!IMPORTANT]
-> This feature is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> This feature is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for more legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>

[!INCLUDE [unified-soc-preview-without-alert](includes/unified-soc-preview-without-alert.md)]

## Prerequisites

-- You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
-
+You must have read and write permissions to the Microsoft Sentinel workspace to store your threat indicators.
## Select an import template for your indicators
-Add multiple indicators to your threat intelligence with a specially crafted CSV or JSON file. Download the file templates to get familiar with the fields and how they map to the data you have. Review the required fields for each template type to validate your data before importing.
+Add multiple indicators to your threat intelligence with a specially crafted CSV or JSON file. Download the file templates to get familiar with the fields and how they map to the data you have. Review the required fields for each template type to validate your data before you import it.
+
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
+ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
1. Select **Import** > **Import using a file**.

#### [Azure portal](#tab/azure-portal)
- :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-fixed.png" alt-text="Screenshot of the menu options to import indicators using a file menu." lightbox="media/indicators-bulk-file-import/import-using-file-menu-fixed.png":::
+ :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-fixed.png" alt-text="Screenshot that shows the menu options to import indicators by using a file menu." lightbox="media/indicators-bulk-file-import/import-using-file-menu-fixed.png":::
#### [Defender portal](#tab/defender-portal)
- :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png" alt-text="Screenshot of the menu options to import indicators using a file menu from the Defender portal." lightbox="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png":::
+ :::image type="content" source="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png" alt-text="Screenshot that shows the menu options to import indicators by using a file menu from the Defender portal." lightbox="media/indicators-bulk-file-import/import-using-file-menu-defender-portal.png":::
-1. Choose CSV or JSON from the **File Format** drop down menu.
+1. On the **File format** dropdown menu, select **CSV** or **JSON**.
- :::image type="content" source="media/indicators-bulk-file-import/format-select-and-download.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source.":::
+ :::image type="content" source="media/indicators-bulk-file-import/format-select-and-download.png" alt-text="Screenshot that shows the dropdown menu to upload a CSV or JSON file, choose a template to download, and specify a source.":::
-1. Select the **Download template** link once you've chosen a bulk upload template.
+1. After you choose a bulk upload template, select the **Download template** link.
-1. Consider grouping your indicators by source since each file upload requires one.
-
-The templates provide all the fields you need to create a single valid indicator, including required fields and validation parameters. Replicate that structure to populate additional indicators in one file. For more information on the templates, see [Understand the import templates](indicators-bulk-file-import.md#understand-the-import-templates).
+1. Consider grouping your indicators by source because each file upload requires one.
+The templates provide all the fields you need to create a single valid indicator, including required fields and validation parameters. Replicate that structure to populate more indicators in one file. For more information on the templates, see [Understand the import templates](indicators-bulk-file-import.md#understand-the-import-templates).
## Upload the indicator file

1. Change the file name from the template default, but keep the file extension as .csv or .json. When you create a unique file name, it's easier to monitor your imports from the **Manage file imports** pane.
-1. Drag your indicators file to the **Upload a file** section or browse for the file using the link.
+1. Drag your indicators file to the **Upload a file** section, or browse for the file by using the link.
1. Enter a source for the indicators in the **Source** text box. This value is stamped on all the indicators included in that file. View this property as the `SourceSystem` field. The source is also displayed in the **Manage file imports** pane. For more information, see [Work with threat indicators](work-with-threat-indicators.md#find-and-view-your-indicators-in-logs).
-1. Choose how you want Microsoft Sentinel to handle invalid indicator entries by selecting one of the radio buttons at the bottom of the **Import using a file** pane.
+1. Choose how you want Microsoft Sentinel to handle invalid indicator entries by selecting one of the buttons at the bottom of the **Import using a file** pane:
+
+ - Import only the valid indicators and leave aside any invalid indicators from the file.
+ - Don't import any indicators if a single indicator in the file is invalid.
- :::image type="content" source="media/indicators-bulk-file-import/upload-file-pane.png" alt-text="Screenshot of the menu flyout to upload a CSV or JSON file, choose a template to download, and specify a source highlighting the Import button.":::
-
-1. Select the **Import** button.
+ :::image type="content" source="media/indicators-bulk-file-import/upload-file-pane.png" alt-text="Screenshot that shows the dropdown menu to upload a CSV or JSON file, choose a template, and specify a source highlighting the Import button.":::
+1. Select **Import**.
## Manage file imports
Monitor your imports and view error reports for partially imported or failed imp
1. Select **Import** > **Manage file imports**.
- :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports.png" alt-text="Screenshot of the menu option to manage file imports.":::
-
-1. Review the status of imported files and the number of invalid indicator entries. The valid indicator count is updated once the file is processed. Wait for the import to complete to get the updated count of valid indicators.
+ :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports.png" alt-text="Screenshot that shows the menu option to manage file imports.":::
- :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports-pane.png" alt-text="Screenshot of the manage file imports pane with example ingestion data. The columns show sorted by imported number with various sources.":::
+1. Review the status of imported files and the number of invalid indicator entries. The valid indicator count is updated after the file is processed. Wait for the import to finish to get the updated count of valid indicators.
-1. View and sort imports by selecting **Source**, indicator file **Name**, the number **Imported**, the **Total** number of indicators in each file, or the **Created** date.
+ :::image type="content" source="media/indicators-bulk-file-import/manage-file-imports-pane.png" alt-text="Screenshot that shows the Manage file imports pane with example ingestion data. The columns show sorted by imported number with various sources.":::
-1. Select the preview of the error file or download the error file containing the errors about invalid indicators.
+1. View and sort imports by selecting **Source**, the indicator file **Name**, the number **Imported**, the **Total** number of indicators in each file, or the **Created** date.
-Microsoft Sentinel maintains the status of the file import for 30 days. The actual file and the associated error file are maintained in the system for 24 hours. After 24 hours the file and the error file are deleted, but any ingested indicators continue to show in Threat Intelligence.
+1. Select the preview of the error file or download the error file that contains the errors about invalid indicators.
+Microsoft Sentinel maintains the status of the file import for 30 days. The actual file and the associated error file are maintained in the system for 24 hours. After 24 hours, the file and the error file are deleted, but any ingested indicators continue to show in threat intelligence.
## Understand the import templates
-Review each template to ensure your indicators are imported successfully. Be sure to reference the instructions in the template file and the following supplemental guidance.
+Review each template to ensure that your indicators are imported successfully. Be sure to reference the instructions in the template file and the following supplemental guidance.
-### CSV template structure
+### CSV template structure
-1. Choose between the **File indicators** or **All other indicator types** option from the **Indicator type** drop down menu when you select **CSV**.
+1. On the **Indicator type** dropdown menu, select **CSV**. Then choose between the **File indicators** or **All other indicator types** options.
- The CSV template needs multiple columns to accommodate the file indicator type because file indicators can have multiple hash types like MD5, SHA256, and more. All other indicator types like IP addresses only require the observable type and the observable value.
+ The CSV template needs multiple columns to accommodate the file indicator type because file indicators can have multiple hash types like MD5 and SHA256. All other indicator types like IP addresses only require the observable type and the observable value.
1. The column headings for the CSV **All other indicator types** template include fields such as `threatTypes`, single or multiple `tags`, `confidence`, and `tlpLevel`. Traffic Light Protocol (TLP) is a sensitivity designation to help make decisions on threat intelligence sharing.
-1. Only the `validFrom`, `observableType` and `observableValue` fields are required.
+1. Only the `validFrom`, `observableType`, and `observableValue` fields are required.
1. Delete the entire first row from the template to remove the comments before upload.
-1. Keep in mind the max file size for a CSV file import is 50MB.
+ The maximum file size for a CSV file import is 50 MB.
-Here's an example domain-name indicator using the CSV template.
+Here's an example domain-name indicator that uses the CSV template:
```CSV threatTypes,tags,name,description,confidence,revoked,validFrom,validUntil,tlpLevel,severity,observableType,observableValue
Phishing,"demo, csv",MDTI article - Franken-Phish domainname,Entity appears in M
### JSON template structure
-1. There is only one JSON template for all indicator types. The JSON template is based on STIX 2.1 format.
+1. There's only one JSON template for all indicator types. The JSON template is based on the STIX 2.1 format.
-1. The `pattern` element supports indicator types of: file, ipv4-addr, ipv6-addr, domain-name, url, user-account, email-addr, and windows-registry-key types.
+1. The `pattern` element supports indicator types of `file`, `ipv4-addr`, `ipv6-addr`, `domain-name`, `url`, `user-account`, `email-addr`, and `windows-registry-key`.
1. Remove the template comments before upload.
-1. Close the last indicator in the array using the `}` without a comma.
+1. Close the last indicator in the array by using the `}` without a comma.
-1. Keep in mind the max file size for a JSON file import is 250MB.
+ The maximum file size for a JSON file import is 250 MB.
-Here's an example ipv4-addr indicator using the JSON template.
+Here's an example `ipv4-addr` indicator that uses the JSON template:
```json [
Here's an example ipv4-addr indicator using the JSON template.
## Related content
-This article has shown you how to manually bolster your threat intelligence by importing indicators gathered in flat files. Check out these links to learn how indicators power other analytics in Microsoft Sentinel.
+In this article, you learned how to manually bolster your threat intelligence by importing indicators gathered in flat files. To learn more about how indicators power other analytics in Microsoft Sentinel, see the following articles:
+ - [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md) - [Threat indicators for cyber threat intelligence in Microsoft Sentinel](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) - [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md)
sentinel Understand Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/understand-threat-intelligence.md
Because Microsoft Sentinel workbooks are based on Azure Monitor workbooks, exten
There's also a rich resource for [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks), where you can download more templates and contribute your own templates.
-For more information on using and customizing the **Threat Intelligence** workbook, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#workbooks-provide-insights-about-your-threat-intelligence).
+For more information on using and customizing the **Threat Intelligence** workbook, see [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md#gain-insights-about-your-threat-intelligence-with-workbooks).
## Related content
sentinel Use Matching Analytics To Detect Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/use-matching-analytics-to-detect-threats.md
Title: Use matching analytics to detect threats
-description: This article explains how to detect threats with Microsoft generated threat intelligence in Microsoft Sentinel.
+description: This article explains how to detect threats with Microsoft-generated threat intelligence in Microsoft Sentinel.
Last updated 3/14/2024
appliesto:
- Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
-#Customer intent: As a SOC analyst, I want to match my security data with Microsoft threat intelligence so I can generate high fidelity alerts and incidents.
+#Customer intent: As an SOC analyst, I want to match my security data with Microsoft threat intelligence so that I can generate high-fidelity alerts and incidents.
# Use matching analytics to detect threats
-Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Defender Threat Intelligence Analytics** rule. This built-in rule in Microsoft Sentinel matches indicators with Common Event Format (CEF) logs, Windows DNS events with domain and IPv4 threat indicators, syslog data, and more.
+Take advantage of threat intelligence produced by Microsoft to generate high-fidelity alerts and incidents with the **Microsoft Defender Threat Intelligence Analytics** rule. This built-in rule in Microsoft Sentinel matches indicators with Common Event Format (CEF) logs, Windows DNS events with domain and IPv4 threat indicators, syslog data, and more.
> [!IMPORTANT]
-> Matching analytics is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> Matching analytics is currently in preview. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for more legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
> ## Prerequisites
-In order to produce high fidelity alerts and incidents, one or more of the supported data connectors must be installed, but a premium MDTI license is not required. Install the appropriate solutions from the content hub to connect these data sources.
+You must install one or more of the supported data connectors to produce high-fidelity alerts and incidents. A premium Microsoft Defender Threat Intelligence license isn't required. Install the appropriate solutions from the **Content hub** to connect these data sources:
- - Common Event Format (CEF)
- - DNS (Preview)
+ - Common Event Format
+ - DNS (preview)
- Syslog - Office activity logs - Azure activity logs
- :::image type="content" source="media/use-matching-analytics-to-detect-threats/data-sources.png" alt-text="A screenshot showing the Microsoft Defender Threat Intelligence Analytics rule data source connections.":::
+ :::image type="content" source="media/use-matching-analytics-to-detect-threats/data-sources.png" alt-text="A screenshot that shows the Microsoft Defender Threat Intelligence Analytics rule data source connections.":::
- For example, depending on your data source you might use the following solutions and data connectors.
+ For example, depending on your data source, you might use the following solutions and data connectors:
|Solution |Data connector | |||
- |[Common Event Format solution for Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-commoneventformat?tab=Overview) | [Common Event Format (CEF) connector for Microsoft Sentinel](data-connectors/common-event-format-cef.md)|
+ |[Common Event Format solution for Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-commoneventformat?tab=Overview) | [Common Event Format connector for Microsoft Sentinel](data-connectors/common-event-format-cef.md)|
|[Windows Server DNS](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-dns?tab=Overview) |[DNS connector for Microsoft Sentinel](data-connectors/dns.md) | |[Syslog solution for Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-syslog?tab=Overview) |[Syslog connector for Microsoft Sentinel](data-connectors/syslog.md) | |[Microsoft 365 solution for Sentinel](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-office365?tab=Overview) | [Office 365 connector for Microsoft Sentinel](data-connectors/office-365.md) |
In order to produce high fidelity alerts and incidents, one or more of the suppo
Matching analytics is configured when you enable the **Microsoft Defender Threat Intelligence Analytics** rule.
-1. Click the **Analytics** menu from the **Configuration** section.
+1. Under the **Configuration** section, select the **Analytics** menu.
-1. Select the **Rule templates** menu tab.
+1. Select the **Rule templates** tab.
-1. In the search window type *threat intelligence*.
+1. In the search window, enter **threat intelligence**.
1. Select the **Microsoft Defender Threat Intelligence Analytics** rule template.
-1. Click **Create rule**. The rule details are read only, and the default status of the rule is enabled.
+1. Select **Create rule**. The rule details are read only, and the default status of the rule is enabled.
-1. Click **Review** > **Create**.
-
+1. Select **Review** > **Create**.
## Data sources and indicators
-Microsoft Defender Threat Intelligence (MDTI) Analytics matches your logs with domain, IP and URL indicators in the following way:
--- **CEF** logs ingested into the Log Analytics **CommonSecurityLog** table match URL and domain indicators if populated in the `RequestURL` field, and IPv4 indicators in the `DestinationIP` field.--- Windows **DNS** logs where event `SubType == "LookupQuery"` ingested into the **DnsEvents** table match domain indicators populated in the `Name` field, and IPv4 indicators in the `IPAddresses` field.--- **Syslog** events where `Facility == "cron"` ingested into the **Syslog** table match domain and IPv4 indicators directly from the `SyslogMessage` field. --- **Office activity logs** ingested into the **OfficeActivity** table match IPv4 indicators directly from the `ClientIP` field.--- **Azure activity logs** ingested into the **AzureActivity** table match IPv4 indicators directly from the `CallerIpAddress` field.
+Microsoft Defender Threat Intelligence Analytics matches your logs with domain, IP, and URL indicators in the following ways (an illustrative query sketch follows this list):
+- **CEF logs** ingested into the Log Analytics `CommonSecurityLog` table match URL and domain indicators if populated in the `RequestURL` field, and IPv4 indicators in the `DestinationIP` field.
+- **Windows DNS logs** with `SubType == "LookupQuery"` ingested into the `DnsEvents` table match domain indicators populated in the `Name` field, and IPv4 indicators in the `IPAddresses` field.
+- **Syslog events** with `Facility == "cron"` ingested into the `Syslog` table match domain and IPv4 indicators directly from the `SyslogMessage` field.
+- **Office activity logs** ingested into the `OfficeActivity` table match IPv4 indicators directly from the `ClientIP` field.
+- **Azure activity logs** ingested into the `AzureActivity` table match IPv4 indicators directly from the `CallerIpAddress` field.
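As an illustration of this matching logic only (it isn't part of the built-in rule), here's a minimal C# sketch that runs a similar join yourself by using the Azure Monitor Query client library. The `Azure.Monitor.Query` package, the workspace ID, and the choice of columns are assumptions for the example:

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;

// Hypothetical placeholder; use your own Log Analytics workspace ID.
string workspaceId = "<workspace-id>";

// Illustrative KQL: join IPv4 threat indicators against CEF logs, mirroring
// the DestinationIP matching described in the list above.
string query = @"
ThreatIntelligenceIndicator
| where isnotempty(NetworkIP)
| join kind=innerunique (
    CommonSecurityLog
    | where isnotempty(DestinationIP)
) on $left.NetworkIP == $right.DestinationIP
| summarize Matches = count() by NetworkIP";

var client = new LogsQueryClient(new DefaultAzureCredential());
var result = await client.QueryWorkspaceAsync(
    workspaceId, query, new QueryTimeRange(TimeSpan.FromDays(1)));

foreach (var row in result.Value.Table.Rows)
{
    Console.WriteLine($"{row["NetworkIP"]}: {row["Matches"]} matches");
}
```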
## Triage an incident generated by matching analytics
If Microsoft's analytics finds a match, any alerts generated are grouped into in
Use the following steps to triage through the incidents generated by the **Microsoft Defender Threat Intelligence Analytics** rule:
-1. In the Microsoft Sentinel workspace where you've enabled the **Microsoft Defender Threat Intelligence Analytics** rule, select **Incidents** and search for **Microsoft Defender Threat Intelligence Analytics**.
+1. In the Microsoft Sentinel workspace where you enabled the **Microsoft Defender Threat Intelligence Analytics** rule, select **Incidents**, and search for **Microsoft Defender Threat Intelligence Analytics**.
- Any incidents found are shown in the grid.
+ Any incidents that are found appear in the grid.
1. Select **View full details** to view entities and other details about the incident, such as specific alerts.
- For example:
+ Here's an example.
:::image type="content" source="media/use-matching-analytics-to-detect-threats/matching-analytics.png" alt-text="Screenshot of incident generated by matching analytics with details pane.":::
-1. Observe the severity assigned to the alerts and the incident. Depending on how the indicator is matched, an appropriate severity is assigned to an alert from `Informational` to `High`. For example, if the indicator is matched with firewall logs that have allowed the traffic, a high severity alert is generated. If the same indicator was matched with firewall logs that blocked the traffic, the alert generated would be low or medium.
+1. Observe the severity assigned to the alerts and the incident. Depending on how the indicator is matched, an appropriate severity is assigned to an alert from `Informational` to `High`. For example, if the indicator is matched with firewall logs that allowed the traffic, a high-severity alert is generated. If the same indicator was matched with firewall logs that blocked the traffic, the generated alert is low or medium severity.
Alerts are then grouped on a per-observable basis of the indicator. For example, all alerts generated in a 24-hour time period that match the `contoso.com` domain are grouped into a single incident with a severity assigned based on the highest alert severity.
-1. Observe the indicator details. When a match is found, the indicator is published to the Log Analytics **ThreatIntelligenceIndicators** table, and displayed in the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Defender Threat Intelligence Analytics**.
+1. Observe the indicator information. When a match is found, the indicator is published to the Log Analytics `ThreatIntelligenceIndicators` table, and it appears on the **Threat Intelligence** page. For any indicators published from this rule, the source is defined as **Microsoft Defender Threat Intelligence Analytics**.
-For example, in the **ThreatIntelligenceIndicators** table:
+Here's an example of the `ThreatIntelligenceIndicators` table.
-In the **Threat Intelligence** page:
+Here's an example of the **Threat Intelligence** page.
## Get more context from Microsoft Defender Threat Intelligence
-Along with high fidelity alerts and incidents, some MDTI indicators include a link to a reference article in the MDTI community portal.
+Along with high-fidelity alerts and incidents, some Microsoft Defender Threat Intelligence indicators include a link to a reference article in the Microsoft Defender Threat Intelligence community portal.
-For more information, see the [MDTI portal](https://ti.defender.microsoft.com) and [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti)
+For more information, see [What is Microsoft Defender Threat Intelligence?](/defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti).
## Related content
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## September 2024
+- [Google Cloud Platform data connectors are now generally available (GA)](#google-cloud-platform-data-connectors-are-now-generally-available-ga)
- [Microsoft Sentinel now generally available (GA) in Azure Israel Central](#microsoft-sentinel-now-generally-available-ga-in-azure-israel-central)
+### Google Cloud Platform data connectors are now generally available (GA)
+
+Microsoft Sentinel's [Google Cloud Platform (GCP) data connectors](connect-google-cloud-platform.md), based on our [Codeless Connector Platform (CCP)](create-codeless-connector.md), are now **generally available**. With these connectors, you can ingest logs from your GCP environment by using the GCP [Pub/Sub capability](https://cloud.google.com/pubsub/docs/overview):
+
+- The **Google Cloud Platform (GCP) Pub/Sub Audit Logs connector** collects audit trails of access to GCP resources. Analysts can monitor these logs to track resource access attempts and detect potential threats across the GCP environment.
+
+- The **Google Cloud Platform (GCP) Security Command Center connector** collects findings from Google Security Command Center, a robust security and risk management platform for Google Cloud. Analysts can view these findings to gain insights into the organization's security posture, including asset inventory and discovery, detections of vulnerabilities and threats, and risk mitigation and remediation.
+
+For more information on these connectors, see [Ingest Google Cloud Platform log data into Microsoft Sentinel](connect-google-cloud-platform.md).
+ ### Microsoft Sentinel now generally available (GA) in Azure Israel Central Microsoft Sentinel is now available in the *Israel Central* Azure region, with the same feature set as all other Azure Commercial regions.
sentinel Work With Threat Indicators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/work-with-threat-indicators.md
appliesto:
- Microsoft Sentinel in the Azure portal - Microsoft Sentinel in the Microsoft Defender portal
-#customer intent: As a security analyst, I want to use threat intelligence so I can power my threat detections.
+#customer intent: As a security analyst, I want to use threat intelligence so that I can power my threat detections.
# Work with threat indicators in Microsoft Sentinel
-Integrate threat intelligence (TI) into Microsoft Sentinel through the following activities:
--- **Import threat intelligence** into Microsoft Sentinel by enabling **data connectors** to various TI [platforms](connect-threat-intelligence-tip.md) and [feeds](connect-threat-intelligence-taxii.md).--- **View and manage** the imported threat intelligence in **Logs** and in the Microsoft Sentinel **Threat Intelligence** page.--- **Detect threats** and generate security alerts and incidents using the built-in **Analytics** rule templates based on your imported threat intelligence.
+Integrate threat intelligence into Microsoft Sentinel through the following activities:
+- **Import threat intelligence** into Microsoft Sentinel by enabling *data connectors* to various threat intelligence [platforms](connect-threat-intelligence-tip.md) and [feeds](connect-threat-intelligence-taxii.md).
+- **View and manage** the imported threat intelligence in **Logs** and on the Microsoft Sentinel **Threat intelligence** page.
+- **Detect threats** and generate security alerts and incidents by using the built-in **Analytics** rule templates based on your imported threat intelligence.
- **Visualize key information** about your imported threat intelligence in Microsoft Sentinel with the **Threat Intelligence workbook**. [!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)] ## View your threat indicators in Microsoft Sentinel
-### Find and view your indicators in the Threat intelligence page
+Learn how to work with threat intelligence indicators throughout Microsoft Sentinel.
-This procedure describes how to view and manage your indicators in the **Threat intelligence** page, accessible from the main Microsoft Sentinel menu. Use the **Threat intelligence** page to sort, filter, and search your imported threat indicators without writing a Log Analytics query.
+### Find and view your indicators on the Threat intelligence page
-**To view your threat intelligence indicators in the Threat intelligence page**:
+This procedure describes how to view and manage your indicators on the **Threat intelligence** page, which you can access from the main Microsoft Sentinel menu. Use the **Threat intelligence** page to sort, filter, and search your imported threat indicators without writing a Log Analytics query.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
+To view your threat intelligence indicators on the **Threat intelligence** page:
-1. From the grid, select the indicator for which you want to view more details. The indicator's details appear on the right, showing information such as confidence levels, tags, threat types, and more.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.
-1. Microsoft Sentinel only displays the most current version of indicators in this view. For more information on how indicators are updated, see [Understand threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators).
+ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
-1. IP and domain name indicators are enriched with extra GeoLocation and WhoIs data, providing more context for investigations where the selected indicator is found.
+1. From the grid, select the indicator for which you want to view more information. The indicator's information includes confidence levels, tags, and threat types.
-For example:
+Microsoft Sentinel only displays the most current version of indicators in this view. For more information on how indicators are updated, see [Understand threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators).
+IP and domain name indicators are enriched with extra `GeoLocation` and `WhoIs` data. This data provides more context for investigations where the selected indicator is found.
-> [!IMPORTANT]
-> GeoLocation and WhoIs enrichment is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Here's an example.
+
+> [!IMPORTANT]
+> `GeoLocation` and `WhoIs` enrichment is currently in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include more legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
### Find and view your indicators in Logs
-This procedure describes how to view your imported threat indicators in the Microsoft Sentinel **Logs** area, together with other Microsoft Sentinel event data, regardless of the source feed or the connector used.
+This procedure describes how to view your imported threat indicators in the Microsoft Sentinel **Logs** area, together with other Microsoft Sentinel event data, regardless of the source feed or the connector that you used.
+
+Imported threat indicators are listed in the Microsoft Sentinel `ThreatIntelligenceIndicator` table. This table is the basis for threat intelligence queries run elsewhere in Microsoft Sentinel, such as in **Analytics** or **Workbooks**.
-Imported threat indicators are listed in the **Microsoft Sentinel > ThreatIntelligenceIndicator** table, which is the basis for threat intelligence queries run elsewhere in Microsoft Sentinel, such as in **Analytics** or **Workbooks**.
+To view your threat intelligence indicators in **Logs**:
-**To view your threat intelligence indicators in Logs**:
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Logs**.
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **General**, select **Logs**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Investigation & response** > **Hunting** > **Advanced hunting**.
+ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Investigation & response** > **Hunting** > **Advanced hunting**.
-1. The **ThreatIntelligenceIndicator** table is located under the **Microsoft Sentinel** group.
+1. The `ThreatIntelligenceIndicator` table is located under the **Microsoft Sentinel** group.
-1. Select the **Preview data** icon (the eye) next to the table name and select the **See in query editor** button to run a query that will show records from this table.
+1. Select the **Preview data** icon (the eye) next to the table name. Select **See in query editor** to run a query that shows records from this table.
- Your results should look similar to the sample threat indicator shown in this screenshot:
+ Your results should look similar to the sample threat indicator shown here.
- :::image type="content" source="media/work-with-threat-indicators/ti-table-results.png" alt-text="Screenshot shows sample ThreatIntelligenceIndicator table results with the details expanded." lightbox="media/work-with-threat-indicators/ti-table-results.png":::
+ :::image type="content" source="media/work-with-threat-indicators/ti-table-results.png" alt-text="Screenshot that shows sample ThreatIntelligenceIndicator table results with the details expanded." lightbox="media/work-with-threat-indicators/ti-table-results.png":::
## Create and tag indicators
-The **Threat intelligence** page also allows you to create threat indicators directly within the Microsoft Sentinel interface, and perform two of the most common threat intelligence administrative tasks: indicator tagging and creating new indicators related to security investigations.
+Use the **Threat Intelligence** page to create threat indicators directly within the Microsoft Sentinel interface and perform two common threat intelligence administrative tasks: indicator tagging and creating new indicators related to security investigations.
### Create a new indicator
-1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.<br> For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
+1. For Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Threat management**, select **Threat intelligence**.
-1. Select the **Add new** button from the menu bar at the top of the page.
+ For Microsoft Sentinel in the [Defender portal](https://security.microsoft.com/), select **Microsoft Sentinel** > **Threat management** > **Threat intelligence**.
- :::image type="content" source="media/work-with-threat-indicators/threat-intel-add-new-indicator.png" alt-text="Add a new threat indicator" lightbox="media/work-with-threat-indicators/threat-intel-add-new-indicator.png":::
+1. On the menu bar at the top of the page, select **Add new**.
-1. Choose the indicator type, then complete the form on the **New indicator** panel. The required fields are marked with a red asterisk (*).
+ :::image type="content" source="media/work-with-threat-indicators/threat-intel-add-new-indicator.png" alt-text="Screenshot that shows adding a new threat indicator." lightbox="media/work-with-threat-indicators/threat-intel-add-new-indicator.png":::
-1. Select **Apply**. The indicator is added to the indicators list, and is also sent to the *ThreatIntelligenceIndicator* table in **Logs**.
+1. Choose the indicator type, and then fill in the form on the **New indicator** pane. The required fields are marked with an asterisk (*).
+
+1. Select **Apply**. The indicator is added to the indicators list and is also sent to the `ThreatIntelligenceIndicator` table in **Logs**.
### Tag and edit threat indicators
-Tagging threat indicators is an easy way to group them together to make them easier to find. Typically, you might apply tags to an indicator related to a particular incident, or representing threats from a particular known actor or well-known attack campaign. Once you search for the indicators you want to work with, tag them individually, or multi-select indicators and tag them all at once with one or more tags. Since tagging is free-form, a recommended practice is to create standard naming conventions for threat indicator tags.
+Tagging threat indicators is an easy way to group them together to make them easier to find. Typically, you might apply tags to an indicator related to a particular incident, or if the indicator represents threats from a particular known actor or well-known attack campaign. After you search for the indicators you want to work with, tag them individually, or multiselect indicators and tag them all at once with one or more tags. Because tagging is free-form, we recommend that you create standard naming conventions for threat indicator tags.
-Microsoft Sentinel also allows you to edit indicators, whether they've been created directly in Microsoft Sentinel, or come from partner sources, like TIP and TAXII servers. For indicators created in Microsoft Sentinel, all fields are editable. For indicators coming from partner sources, only specific fields are editable, including tags, *Expiration date*, *Confidence*, and *Revoked*. Either way, keep in mind only the latest version of the indicator is displayed in the **Threat Intelligence** page view. For more information on how indicators are updated, see [Understand threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators).
+With Microsoft Sentinel, you can also edit indicators, whether they were created directly in Microsoft Sentinel or come from partner sources, like TIP and TAXII servers. For indicators created in Microsoft Sentinel, all fields are editable. For indicators that come from partner sources, only specific fields are editable, including tags, **Expiration date**, **Confidence**, and **Revoked**. Either way, only the latest version of the indicator appears on the **Threat Intelligence** page. For more information on how indicators are updated, see [Understand threat intelligence](understand-threat-intelligence.md#view-and-manage-your-threat-indicators).
-## Workbooks provide insights about your threat intelligence
+## Gain insights about your threat intelligence with workbooks
Use a purpose-built Microsoft Sentinel workbook to visualize key information about your threat intelligence in Microsoft Sentinel, and customize the workbook according to your business needs. Here's how to find the threat intelligence workbook provided in Microsoft Sentinel, and an example of how to make edits to the workbook to customize it.
- 1. From the [Azure portal](https://portal.azure.com/), navigate to the **Microsoft Sentinel** service.
+ 1. From the [Azure portal](https://portal.azure.com/), go to **Microsoft Sentinel**.
 - 1. Choose the **workspace** to which you've imported threat indicators using either threat intelligence data connector.
+ 1. Choose the workspace to which you imported threat indicators by using either threat intelligence data connector.
- 1. Select **Workbooks** from the **Threat management** section of the Microsoft Sentinel menu.
+ 1. Under the **Threat management** section of the Microsoft Sentinel menu, select **Workbooks**.
- 1. Find the workbook titled **Threat Intelligence** and verify you have data in the **ThreatIntelligenceIndicator** table as shown below.
+ 1. Find the workbook titled **Threat Intelligence**. Verify that you have data in the `ThreatIntelligenceIndicator` table.
- :::image type="content" source="media/work-with-threat-indicators/threat-intel-verify-data.png" alt-text="Verify data":::
+ :::image type="content" source="media/work-with-threat-indicators/threat-intel-verify-data.png" alt-text="Screenshot that shows verifying that you have data.":::
- 1. Select the **Save** button and choose an Azure location to store the workbook. This step is required if you are going to modify the workbook in any way and save your changes.
+ 1. Select **Save**, and choose an Azure location in which to store the workbook. This step is required if you intend to modify the workbook in any way and save your changes.
- 1. Now select the **View saved workbook** button to open the workbook for viewing and editing.
+ 1. Now select **View saved workbook** to open the workbook for viewing and editing.
- 1. You should now see the default charts provided by the template. To modify a chart, select the **Edit** button at the top of the page to enter editing mode for the workbook.
+ 1. You should now see the default charts provided by the template. To modify a chart, select **Edit** at the top of the page to start the editing mode for the workbook.
1. Add a new chart of threat indicators by threat type. Scroll to the bottom of the page and select **Add Query**.
Here's how to find the threat intelligence workbook provided in Microsoft Sentin
| summarize count() by ThreatType ```
-1. In the **Visualization** drop-down, select **Bar chart**.
+1. On the **Visualization** dropdown menu, select **Bar chart**.
+
+1. Select **Done editing**, and view the new chart for your workbook.
-1. Select the **Done editing** button. You've created a new chart for your workbook.
+ :::image type="content" source="media/work-with-threat-indicators/threat-intel-bar-chart.png" alt-text="Screenshot that shows a bar chart for the workbook.":::
- :::image type="content" source="media/work-with-threat-indicators/threat-intel-bar-chart.png" alt-text="Bar chart":::
+Workbooks provide powerful interactive dashboards that give you insights into all aspects of Microsoft Sentinel. You can do many tasks with workbooks, and the provided templates are a great starting point. Customize the templates or create new dashboards by combining many data sources so that you can visualize your data in unique ways.
-Workbooks provide powerful interactive dashboards that give you insights into all aspects of Microsoft Sentinel. There is a whole lot you can do with workbooks, and while the provided templates are a great starting point, you will likely want to dive in and customize these templates, or create new dashboards combining many different data sources so to visualize your data in unique ways. Since Microsoft Sentinel workbooks are based on Azure Monitor workbooks, there is already extensive documentation available, and many more templates. A great place to start is this article on how to [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
+Microsoft Sentinel workbooks are based on Azure Monitor workbooks, so extensive documentation and many more templates are available. For more information, see [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
-There is also a rich community of [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks) to download more templates and contribute your own templates.
+There's also a rich resource for [Azure Monitor workbooks on GitHub](https://github.com/microsoft/Application-Insights-Workbooks), where you can download more templates and contribute your own templates.
## Related content
-In this article, you learned all the ways to work with threat intelligence indicators throughout Microsoft Sentinel. For more about threat intelligence in Microsoft Sentinel, see the following articles:
+In this article, you learned how to work with threat intelligence indicators throughout Microsoft Sentinel. For more about threat intelligence in Microsoft Sentinel, see the following articles:
- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md). - Connect Microsoft Sentinel to [STIX/TAXII threat intelligence feeds](./connect-threat-intelligence-taxii.md).
service-connector How To Use Service Connector In Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-use-service-connector-in-aks.md
If there's an error during the extension installation, and the error message in
`Operation returned an invalid status code: Conflict`. **Reason:**
-This error usually occurs when attempting to create a service connection while the AKS (Azure Kubernetes Service) cluster is in an updating state. The service connection update conflicts with the ongoing update.
+This error usually occurs when attempting to create a service connection while the AKS (Azure Kubernetes Service) cluster is in an updating state. The service connection update conflicts with the ongoing update. It can also happen when your subscription isn't registered for the `Microsoft.KubernetesConfiguration` resource provider.
**Mitigation:**
-Ensure your cluster is in a "Succeeded" state before retrying the creation. It resolves most errors related to conflicts.
+- Run the following command to make sure your subscription is registered for the `Microsoft.KubernetesConfiguration` resource provider.
+
+ ```azurecli
+ az provider register -n Microsoft.KubernetesConfiguration
+ ```
+- Ensure your cluster is in a "Succeeded" state and retry the creation.
+ #### Timeout
Check the permissions on the Azure resources specified in the error message. Obt
Service Connector requires the subscription to be registered for `Microsoft.KubernetesConfiguration`, which is the resource provider for [Azure Arc-enabled Kubernetes cluster extensions](../azure-arc/kubernetes/extensions.md). **Mitigation:**
-To resolve errors related to resource provider registration, follow this [tutorial](../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
+Register the `Microsoft.KubernetesConfiguration` resource provider by running the following command. For more information on resource provider registration errors, see this [tutorial](../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
+
+```azurecli
+az provider register -n Microsoft.KubernetesConfiguration
+```
#### Other issues
service-connector Quickstart Cli Aks Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-aks-connection.md
This quickstart shows you how to connect Azure Kubernetes Service (AKS) to other
## Initial set-up
-1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector resource provider.
+1. If you're using Service Connector for the first time, start by running the command [az provider register](/cli/azure/provider#az-provider-register) to register the Service Connector and Kubernetes Configuration resource providers.
```azurecli az provider register -n Microsoft.ServiceLinker ```
+ ```azurecli
+ az provider register -n Microsoft.KubernetesConfiguration
+ ```
> [!TIP]
- > You can check if the resource provider has already been registered by running the command `az provider show -n "Microsoft.ServiceLinker" --query registrationState`. If the output is `Registered`, then Service Connector has already been registered.
+ > You can check if these resource providers have already been registered by running the commands `az provider show -n "Microsoft.ServiceLinker" --query registrationState` and `az provider show -n "Microsoft.KubernetesConfiguration" --query registrationState`.
1. Optionally, use the Azure CLI command to get a list of supported target services for AKS cluster.
Go to the following tutorials to start connecting AKS cluster to Azure services
> [Tutorial: Connect to Azure Key Vault using CSI driver](./tutorial-python-aks-keyvault-csi-driver.md) > [!div class="nextstepaction"]
-> [Tutorial: Connect to Azure Storage using workload identity](./tutorial-python-aks-storage-workload-identity.md)
+> [Tutorial: Connect to Azure Storage using workload identity](./tutorial-python-aks-storage-workload-identity.md)
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
Title: 'Troubleshoot failover to Azure failures | Microsoft Docs' description: This article describes ways to troubleshoot common errors in failing over to Azure - Previously updated : 03/07/2024 Last updated : 09/10/2024 # Troubleshoot errors when failing over VMware VM or physical machine to Azure
To resolve the issue:
Manually create the Master target in the vCenter that manages your source machine. The datastore will be available after the next vCenter discovery and refresh fabric operations.
-> [!Note]
->
+> [!NOTE]
> The discovery and refresh fabric operations can take up to 30 minutes to complete. ## Linux Master Target registration with CS fails with a TLS error 35
To resolve the issue:
## Next steps+ - Troubleshoot [RDP connection to Windows VM](/troubleshoot/azure/virtual-machines/troubleshoot-rdp-connection) - Troubleshoot [SSH connection to Linux VM](/troubleshoot/azure/virtual-machines/detailed-troubleshoot-ssh-connection)
site-recovery Vmware Azure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture.md
Title: VMware VM disaster recovery architecture in Azure Site Recovery - Classic
description: This article provides an overview of components and architecture used when setting up disaster recovery of on-premises VMware VMs to Azure with Azure Site Recovery - Classic Previously updated : 12/15/2023 Last updated : 09/10/2024
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
This article guides you through the steps necessary to successfully deploy a Sto
## Prerequisites -- A capable Windows Hyper-V or VMware host on which to run the agent VM.<br/> See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
+1. The following Storage Mover endpoints must be reachable over HTTPS:
+- `mcr.microsoft.com`
+- `<region>.agentgateway.prd.azsm.azure.com`
+- `evhns-sm-ur-prd-<region>.servicebus.windows.net`
+
+2. A capable Windows Hyper-V or VMware host on which to run the agent VM.<br/> See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
> [!NOTE] > At present, Windows Hyper-V and VMware are the only supported virtualization environments for your agent VM. Other virtualization environments have not been tested and are not supported.
storage Data Lake Storage Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-dotnet.md
description: Use .NET to manage access control lists (ACL) in storage accounts t
Previously updated : 02/07/2023 Last updated : 09/06/2024
ACL inheritance is already available for new child items that are created under
## Prerequisites -- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).--- A storage account that has hierarchical namespace (HNS) enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one.-
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/).
+- Azure storage account that has hierarchical namespace (HNS) enabled. Follow [these instructions](create-data-lake-storage-account.md) to create one.
- Azure CLI version `2.6.0` or higher.- - One of the following security permissions:- - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription.- - Owning user of the target container or directory to which you plan to apply ACL settings. To set ACLs recursively, this includes all child items in the target container or directory.- - Storage account key. ## Set up your project
-To get started, install the [Azure.Storage.Files.DataLake](https://www.nuget.org/packages/Azure.Storage.Files.DataLake/) NuGet package.
+This section shows you how to set up a project to work with the Azure Storage Data Lake client library.
+
+### Install packages
-1. Open a command window (For example: Windows PowerShell).
+From your project directory, install packages for the Azure Storage Data Lake and Azure Identity client libraries using the `dotnet add package` command. The Azure.Identity package is needed for passwordless connections to Azure services.
-2. From your project directory, install the Azure.Storage.Files.DataLake preview package by using the `dotnet add package` command.
+```dotnetcli
+dotnet add package Azure.Storage.Files.DataLake
+dotnet add package Azure.Identity
+```
- ```console
- dotnet add package Azure.Storage.Files.DataLake -v 12.6.0 -s https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-net/nuget/v3/index.json
- ```
+### Add `using` directives
- Then, add these using statements to the top of your code file.
+Add these `using` directives to the top of your code file:
- ```csharp
- using Azure;
- using Azure.Core;
- using Azure.Storage;
- using Azure.Storage.Files.DataLake;
- using Azure.Storage.Files.DataLake.Models;
- using System.Collections.Generic;
- using System.Threading.Tasks;
- ```
+```csharp
+using Azure;
+using Azure.Core;
+using Azure.Storage;
+using Azure.Storage.Files.DataLake;
+using Azure.Storage.Files.DataLake.Models;
+using System.Collections.Generic;
+using System.Threading.Tasks;
+```
## Connect to the account
-To use the snippets in this article, you'll need to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account.
+To run the code examples in this article, you need to create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize the client object with Microsoft Entra ID credentials or with an account key.
-<a name='connect-by-using-azure-active-directory-ad'></a>
+### [Microsoft Entra ID (recommended)](#tab/entra-id)
-### Connect by using Microsoft Entra ID
+You can use the [Azure identity client library for .NET](/dotnet/api/overview/azure/identity-readme) to authenticate your application with Microsoft Entra ID.
> [!NOTE] > If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md).
-You can use the [Azure identity client library for .NET](/dotnet/api/overview/azure/identity-readme) to authenticate your application with Microsoft Entra ID.
-
-After you install the package, add this using statement to the top of your code file.
+First, assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-```csharp
-using Azure.Identity;
-```
-
-First, you'll have to assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-
-|Role|ACL setting capability|
-|--|--|
-|[Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)|All directories and files in the account.|
-|[Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|Only directories and files owned by the security principal.|
+| Role | ACL setting capability |
+| | |
+| [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | All directories and files in the account. |
+| [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) | Only directories and files owned by the security principal. |
Next, create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance and pass in a new instance of the [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) class.
Next, create a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.
To learn more about using **DefaultAzureCredential** to authorize access to data, see [How to authenticate .NET applications with Azure services](/dotnet/azure/sdk/authentication#defaultazurecredential).
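Here's a minimal sketch of that client creation, assuming a placeholder account name (the article's own sample may differ):

```csharp
using System;
using Azure.Identity;
using Azure.Storage.Files.DataLake;

// Placeholder account name; DefaultAzureCredential resolves your signed-in
// identity (Azure CLI, Visual Studio, managed identity, and so on).
string accountName = "<storage-account-name>";

var serviceClient = new DataLakeServiceClient(
    new Uri($"https://{accountName}.dfs.core.windows.net"),
    new DefaultAzureCredential());
```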
-### Connect by using an account key
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/dotnet/api/azure.storage.files.datalake.datalakeserviceclient) instance that is authorized with the account key.
You can authorize access to data using your account access keys (Shared Key). Th
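Here's a minimal sketch of the Shared Key pattern. The account name and key are placeholders, and this isn't the article's own sample; prefer Microsoft Entra ID where possible:

```csharp
using System;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

// Placeholders; never hard-code real keys in source code.
string accountName = "<storage-account-name>";
string accountKey = "<storage-account-key>";

var sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);

var serviceClient = new DataLakeServiceClient(
    new Uri($"https://{accountName}.dfs.core.windows.net"),
    sharedKeyCredential);
```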
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Set ACLs When you *set* an ACL, you **replace** the entire ACL including all of its entries. If you want to change the permission level of a security principal or add a new security principal to the ACL without affecting other existing entries, you should *update* the ACL instead. To update an ACL instead of replace it, see the [Update ACLs](#update-acls) section of this article.
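For orientation, here's a minimal sketch of a set operation on a directory. It isn't the article's own sample; the container and directory names are placeholders, and `serviceClient` comes from the connect step earlier:

```csharp
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

DataLakeDirectoryClient directoryClient = serviceClient
    .GetFileSystemClient("<container-name>")
    .GetDirectoryClient("<directory-name>");

// Setting replaces the entire ACL, so provide the complete list of entries.
IList<PathAccessControlItem> accessControlList =
    PathAccessControlExtensions.ParseAccessControlList(
        "user::rwx,group::r-x,other::r--");

await directoryClient.SetAccessControlListAsync(accessControlList);
```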
This example updates the root ACL of a container by replacing the ACL entry for
### Update ACLs recursively
-To update an ACL recursively, create a new ACL object with the ACL entry that you want to update, and then use that object in update ACL operation. Do not get the existing ACL, just provide ACL entries to be updated.
+To update an ACL recursively, create a new ACL object with the ACL entry that you want to update, and then use that object in the update ACL operation. Don't get the existing ACL; just provide the ACL entries to be updated.
Update an ACL recursively by calling the **DataLakeDirectoryClient.UpdateAccessControlRecursiveAsync** method. Pass this method a [List](/dotnet/api/system.collections.generic.list-1) of [PathAccessControlItem](/dotnet/api/azure.storage.files.datalake.models.pathaccesscontrolitem). Each [PathAccessControlItem](/dotnet/api/azure.storage.files.datalake.models.pathaccesscontrolitem) defines an ACL entry.
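Here's a minimal sketch of that call, building on the `serviceClient` and `using` directives from earlier; the names and object ID are placeholders:

```csharp
using System.Collections.Generic;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

DataLakeDirectoryClient directoryClient = serviceClient
    .GetFileSystemClient("<container-name>")
    .GetDirectoryClient("<directory-name>");

// Only the entries listed here change; all other existing ACL entries are kept.
var accessControlListUpdate = new List<PathAccessControlItem>
{
    new PathAccessControlItem(
        AccessControlType.User,
        RolePermissions.Read | RolePermissions.Execute,
        entityId: "<object-id-of-security-principal>")
};

await directoryClient.UpdateAccessControlRecursiveAsync(accessControlListUpdate);
```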
This example updates the root ACL of a container by replacing the ACL entry for
### Remove ACL entries recursively
-To remove ACL entries recursively, create a new ACL object for ACL entry to be removed, and then use that object in remove ACL operation. Do not get the existing ACL, just provide the ACL entries to be removed.
+To remove ACL entries recursively, create a new ACL object for the ACL entry to be removed, and then use that object in the remove ACL operation. Don't get the existing ACL; just provide the ACL entries to be removed.
Remove ACL entries by calling the **DataLakeDirectoryClient.RemoveAccessControlRecursiveAsync** method. Pass this method a [List](/dotnet/api/system.collections.generic.list-1) of [RemovePathAccessControlItem](/dotnet/api/azure.storage.files.datalake.models.removepathaccesscontrolitem). Each [RemovePathAccessControlItem](/dotnet/api/azure.storage.files.datalake.models.removepathaccesscontrolitem) defines an ACL entry to remove.
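A matching sketch for the remove operation follows. Note that remove entries carry no permissions, only the entry type and entity ID (placeholders assumed, and `serviceClient` comes from the connect step earlier):

```csharp
using System.Collections.Generic;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

DataLakeDirectoryClient directoryClient = serviceClient
    .GetFileSystemClient("<container-name>")
    .GetDirectoryClient("<directory-name>");

// A remove entry identifies which ACL entry to delete; no permissions needed.
var entriesToRemove = new List<RemovePathAccessControlItem>
{
    new RemovePathAccessControlItem(
        AccessControlType.User,
        entityId: "<object-id-of-security-principal>")
};

await directoryClient.RemoveAccessControlRecursiveAsync(entriesToRemove);
```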
storage Data Lake Storage Acl Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-java.md
Previously updated : 02/07/2023 Last updated : 09/06/2024 ms.devlang: java
ACL inheritance is already available for new child items that are created under
## Prerequisites -- An Azure subscription. See [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/).
+- Azure storage account that has hierarchical namespace (HNS) enabled. Follow [these instructions](create-data-lake-storage-account.md) to create one.
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.
+- [Apache Maven](https://maven.apache.org/download.cgi) is used for project management in this example.
+- Azure CLI version `2.6.0` or higher.
+- One of the following security permissions:
+ - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription.
+ - Owning user of the target container or directory to which you plan to apply ACL settings. To set ACLs recursively, this includes all child items in the target container or directory.
+ - Storage account key.
-- A storage account that has hierarchical namespace (HNS) enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one.
+## Set up your project
-- Azure CLI version `2.6.0` or higher.
+> [!NOTE]
+> This article uses the Maven build tool to build and run the sample code. Other build tools, such as Gradle, also work with the Azure SDK for Java.
-- One of the following security permissions:
+Use Maven to create a new console app, or open an existing project. Follow these steps to install packages and add the necessary `import` directives.
- - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription..
+### Install packages
- - Owning user of the target container or directory to which you plan to apply ACL settings. To set ACLs recursively, this includes all child items in the target container or directory.
+Open the `pom.xml` file in your text editor. Install the packages by [including the BOM file](#include-the-bom-file), or [including a direct dependency](#include-a-direct-dependency).
- - Storage account key.
+#### Include the BOM file
+
+Add **azure-sdk-bom** to take a dependency on the latest version of the library. In the following snippet, replace the `{bom_version_to_target}` placeholder with the version number. Using **azure-sdk-bom** keeps you from having to specify the version of each individual dependency. To learn more about the BOM, see the [Azure SDK BOM README](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/boms/azure-sdk-bom/README.md).
-## Set up your project
+```xml
+<dependencyManagement>
+ <dependencies>
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-sdk-bom</artifactId>
+ <version>{bom_version_to_target}</version>
+ <type>pom</type>
+ <scope>import</scope>
+ </dependency>
+ </dependencies>
+</dependencyManagement>
+```
+
+Add the following dependency elements to the group of dependencies. The **azure-identity** dependency is needed for passwordless connections to Azure services.
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-file-datalake</artifactId>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-common</artifactId>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+</dependency>
+```
-To get started, open [this page](https://search.maven.org/artifact/com.azure/azure-storage-file-datalake) and find the latest version of the Java library. Then, open the *pom.xml* file in your text editor. Add a dependency element that references that version.
+#### Include a direct dependency
+
+To take a dependency on a particular version of the library, add the direct dependency to your project:
+
+```xml
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-file-datalake</artifactId>
+ <version>{package_version_to_target}</version>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-storage-common</artifactId>
+ <version>{package_version_to_target}</version>
+</dependency>
+<dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>{package_version_to_target}</version>
+</dependency>
+```
-If you plan to authenticate your client application by using Microsoft Entra ID, then add a dependency to the Azure Secret Client Library. See [Adding the Secret Client Library package to your project](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity#adding-the-package-to-your-project).
+### Include import directives
-Next, add these imports statements to your code file.
+Add the necessary `import` directives. In this example, we add the following directives in the *App.java* file:
```java
import com.azure.storage.common.StorageSharedKeyCredential;
import com.azure.storage.file.datalake.options.PathSetAccessControlRecursiveOptions;
```
## Connect to the account
-To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
-
-<a name='connect-by-using-azure-active-directory-azure-ad'></a>
+To run the code examples in this article, you need to create a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance that represents the storage account. You can authorize the client object with Microsoft Entra ID credentials or with an account key.
-### Connect by using Microsoft Entra ID
+### [Microsoft Entra ID (recommended)](#tab/entra-id)
You can use the [Azure identity client library for Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/identity/azure-identity) to authenticate your application with Microsoft Entra ID. First, you'll have to assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-|Role|ACL setting capability|
-|--|--|
-|[Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)|All directories and files in the account.|
-|[Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|Only directories and files owned by the security principal.|
+| Role | ACL setting capability |
+| --- | --- |
+| [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | All directories and files in the account. |
+| [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) | Only directories and files owned by the security principal. |
Next, create a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance and pass in a new instance of the [DefaultAzureCredential](/java/api/com.azure.identity.defaultazurecredential) class. :::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/Java-v12/src/main/java/com/datalake/manage/Authorize_DataLake.java" id="Snippet_AuthorizeWithAzureAD":::
-To learn more about using **DefaultAzureCredential** to authorize access to data, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme).
+To learn more about using `DefaultAzureCredential` to authorize access to data, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme).
-### Connect by using an account key
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/java/api/com.azure.storage.file.datalake.datalakeserviceclient) instance that is authorized with the account key.
You can authorize access to data using your account access keys (Shared Key). Th
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Set ACLs When you *set* an ACL, you **replace** the entire ACL including all of its entries. If you want to change the permission level of a security principal or add a new security principal to the ACL without affecting other existing entries, you should *update* the ACL instead. To update an ACL instead of replace it, see the [Update ACLs](#update-acls) section of this article.
storage Data Lake Storage Acl Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-javascript.md
Previously updated : 02/07/2023 Last updated : 09/06/2024 ms.devlang: javascript
This article shows you how to use Node.js to get, set, and update the access con
## Prerequisites -- An Azure subscription. For more information, see [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).--- A storage account that has hierarchical namespace (HNS) enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one.-
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/).
+- Azure storage account that has hierarchical namespace (HNS) enabled. Follow [these instructions](create-data-lake-storage-account.md) to create one.
+- [Node.js LTS](https://nodejs.org/)
- Azure CLI version `2.6.0` or higher.- - One of the following security permissions:-
- - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription..
-
+ - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription.
- Owning user of the target container or directory to which you plan to apply ACL settings. To set ACLs recursively, this includes all child items in the target container or directory.-
- - Storage account key..
+ - Storage account key.
## Set up your project
-Install Data Lake client library for JavaScript by opening a terminal window, and then typing the following command.
+This section walks you through preparing a project to work with the Azure Data Lake Storage client library for JavaScript.
-```javascript
+### Install packages
+
+Install packages for the Azure Data Lake Storage and Azure Identity client libraries using the `npm install` command. The **@azure/identity** package is needed for passwordless connections to Azure services.
+
+```bash
npm install @azure/storage-file-datalake
+npm install @azure/identity
```
-Import the `storage-file-datalake` package by placing this statement at the top of your code file.
+### Load modules
+
+Add the following code at the top of your file to load the required modules:
```javascript
const {
    AzureStorageDataLake,
    DataLakeServiceClient,
    StorageSharedKeyCredential
} = require("@azure/storage-file-datalake");

const { DefaultAzureCredential } = require('@azure/identity');
```

## Connect to the account
-To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
+To run the code examples in this article, you need to create a [DataLakeServiceClient](/javascript/api/@azure/storage-file-datalake/datalakeserviceclient) instance that represents the storage account. You can authorize the client object with Microsoft Entra ID credentials or with an account key.
<a name='connect-by-using-azure-active-directory-ad'></a>
-### Connect by using Microsoft Entra ID
+### [Microsoft Entra ID (recommended)](#tab/entra-id)
+
+You can use the [Azure identity client library for JavaScript](https://www.npmjs.com/package/@azure/identity) to authenticate your application with Microsoft Entra ID.
> [!NOTE] > If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md).
-You can use the [Azure identity client library for JS](https://www.npmjs.com/package/@azure/identity) to authenticate your application with Microsoft Entra ID.
- First, you'll have to assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-|Role|ACL setting capability|
-|--|--|
-|[Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)|All directories and files in the account.|
-|[Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|Only directories and files owned by the security principal.|
+| Role | ACL setting capability |
+| --- | --- |
+| [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | All directories and files in the account. |
+| [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) | Only directories and files owned by the security principal. |
Next, create a [DataLakeServiceClient](/javascript/api/@azure/storage-file-datalake/datalakeserviceclient) instance and pass in a new instance of the [DefaultAzureCredential](/javascript/api/@azure/identity/defaultazurecredential) class.
function GetDataLakeServiceClientAD(accountName) {
} ```
-To learn more about using **DefaultAzureCredential** to authorize access to data, see [Overview: Authenticate JavaScript apps to Azure using the Azure SDK](/azure/developer/javascript/sdk/authentication/overview).
+To learn more about using `DefaultAzureCredential` to authorize access to data, see [Overview: Authenticate JavaScript apps to Azure using the Azure SDK](/azure/developer/javascript/sdk/authentication/overview).
-### Connect by using an account key
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/javascript/api/@azure/storage-file-datalake/datalakeserviceclient) instance that is authorized with the account key.
function GetDataLakeServiceClient(accountName, accountKey) {
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Get and set a directory ACL This example gets and then sets the ACL of a directory named `my-directory`. This example gives the owning user read, write, and execute permissions, gives the owning group only read and execute permissions, and gives all others read access.
storage Data Lake Storage Acl Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-python.md
Previously updated : 02/07/2023 Last updated : 09/06/2024 ms.devlang: python
ACL inheritance is already available for new child items that are created under
## Prerequisites -- An Azure subscription. For more information, see [Get Azure free trial](https://azure.microsoft.com/pricing/free-trial/).--- A storage account that has hierarchical namespace (HNS) enabled. Follow [these](create-data-lake-storage-account.md) instructions to create one.-
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/).
+- Azure storage account that has hierarchical namespace (HNS) enabled. Follow [these instructions](create-data-lake-storage-account.md) to create one.
+- [Python](https://www.python.org/downloads/) 3.8+
- Azure CLI version `2.6.0` or higher.- - One of the following security permissions:- - A provisioned Microsoft Entra ID [security principal](../../role-based-access-control/overview.md#security-principal) that has been assigned the [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) role, scoped to the target container, storage account, parent resource group, or subscription.- - Owning user of the target container or directory to which you plan to apply ACL settings. To set ACLs recursively, this includes all child items in the target container or directory.- - Storage account key. ## Set up your project
-Install the Azure Data Lake Storage client library for Python by using [pip](https://pypi.org/project/pip/).
+This section walks you through preparing a project to work with the Azure Data Lake Storage client library for Python.
-```
-pip install azure-storage-file-datalake
+From your project directory, install packages for the Azure Data Lake Storage and Azure Identity client libraries using the `pip install` command. The **azure-identity** package is needed for passwordless connections to Azure services.
+
+```console
+pip install azure-storage-file-datalake azure-identity
```
-Add these import statements to the top of your code file.
+Then open your code file and add the necessary import statements. In this example, we add the following to our *.py* file:
```python
-from azure.storage.filedatalake import DataLakeServiceClient
from azure.identity import DefaultAzureCredential
+from azure.storage.filedatalake import DataLakeServiceClient
``` ## Connect to the account
-To use the snippets in this article, you'll need to create a **DataLakeServiceClient** instance that represents the storage account.
+To run the code examples in this article, you need to create a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that represents the storage account. You can authorize the client object with Microsoft Entra ID credentials or with an account key.
-<a name='connect-by-using-azure-active-directory-ad'></a>
+### [Microsoft Entra ID (recommended)](#tab/entra-id)
-### Connect by using Microsoft Entra ID
+You can use the [Azure identity client library for Python](https://pypi.org/project/azure-identity/) to authenticate your application with Microsoft Entra ID.
> [!NOTE] > If you're using Microsoft Entra ID to authorize access, then make sure that your security principal has been assigned the [Storage Blob Data Owner role](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner). To learn more about how ACL permissions are applied and the effects of changing them, see [Access control model in Azure Data Lake Storage](./data-lake-storage-access-control-model.md).
-You can use the [Azure identity client library for Python](https://pypi.org/project/azure-identity/) to authenticate your application with Microsoft Entra ID.
+First, assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-First, you'll have to assign one of the following [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) roles to your security principal:
-
-|Role|ACL setting capability|
-|--|--|
-|[Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)|All directories and files in the account.|
-|[Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)|Only directories and files owned by the security principal.|
+| Role | ACL setting capability |
+| --- | --- |
+| [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner) | All directories and files in the account. |
+| [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) | Only directories and files owned by the security principal. |
Next, create a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance and pass in a new instance of the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class.
Next, create a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/a
To learn more about using `DefaultAzureCredential` to authorize access to data, see [Overview: Authenticate Python apps to Azure using the Azure SDK](/azure/developer/python/sdk/authentication-overview).
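As a minimal sketch (the account name is a placeholder), creating the client with `DefaultAzureCredential` can look like this:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

def get_service_client_token_credential(account_name: str) -> DataLakeServiceClient:
    # DefaultAzureCredential tries the standard credential chain:
    # environment variables, managed identity, developer sign-in, and so on.
    account_url = f"https://{account_name}.dfs.core.windows.net"
    return DataLakeServiceClient(account_url, credential=DefaultAzureCredential())
```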
-### Connect by using an account key
+### [Account key](#tab/account-key)
You can authorize access to data using your account access keys (Shared Key). This example creates a [DataLakeServiceClient](/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakeserviceclient) instance that is authorized with the account key.
You can authorize access to data using your account access keys (Shared Key). Th
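A corresponding sketch for Shared Key authorization (the account name and key are placeholders):

```python
from azure.storage.filedatalake import DataLakeServiceClient

def get_service_client_account_key(account_name: str, account_key: str) -> DataLakeServiceClient:
    # The account key string can be passed directly as the credential.
    account_url = f"https://{account_name}.dfs.core.windows.net"
    return DataLakeServiceClient(account_url, credential=account_key)
```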
[!INCLUDE [storage-shared-key-caution](../../../includes/storage-shared-key-caution.md)] ++ ## Set ACLs When you *set* an ACL, you **replace** the entire ACL including all of its entries. If you want to change the permission level of a security principal or add a new security principal to the ACL without affecting other existing entries, you should *update* the ACL instead. To update an ACL instead of replace it, see the [Update ACLs](#update-acls-recursively) section of this article.
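As a sketch of what *set* means in practice, the following call replaces every entry on a directory's ACL, so the owning user, owning group, and other entries must all be supplied (assumes `directory_client` is an authorized `DataLakeDirectoryClient`):

```python
# Setting an ACL replaces ALL existing entries, so include owner, group,
# and other entries even if only one of them is changing.
acl = "user::rwx,group::r-x,other::r--"
directory_client.set_access_control(acl=acl)
```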
You can remove one or more ACL entries. To remove ACL entries recursively, creat
Remove ACL entries by calling the **DataLakeDirectoryClient.remove_access_control_recursive** method. If you want to remove a **default** ACL entry, then add the string `default:` to the beginning of the ACL entry string.
-This example removes an ACL entry from the ACL of the directory named `my-parent-directory`. This method accepts a boolean parameter named `is_default_scope` that specifies whether to remove the entry from the default ACL. if that parameter is `True`, the updated ACL entry is preceded with the string `default:`.
+This example removes an ACL entry from the ACL of the directory named `my-parent-directory`. This method accepts a boolean parameter named `is_default_scope` that specifies whether to remove the entry from the default ACL. If that parameter is `True`, the updated ACL entry is preceded with the string `default:`.
```python def remove_permission_recursively(is_default_scope):
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
To copy files changed before or after the AzCopy job has started, AzCopy provide
Copy a subset of files modified on or after the given date and time (in ISO8601 format) in a container by using the `include-after` flag.
-`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]?[SAS]" "https://[dstaccount].blob.core.windows.net/[containername]?[SAS]" --include-after='2020-08-19T15:04:00Z''"`
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]?[SAS]" "https://[dstaccount].blob.core.windows.net/[containername]?[SAS]" --include-after="2020-08-19T15:04:00Z"`
Copy a subset of files modified on or before the given date and time (in ISO8601 format) in a container by using the `include-before` flag.
-`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]?[SAS]" "https://[dstaccount].blob.core.windows.net/[containername]?[SAS]" --include-before='2020-08-19T15:04:00Z'"`
+`azcopy cp "https://[srcaccount].blob.core.windows.net/[containername]?[SAS]" "https://[dstaccount].blob.core.windows.net/[containername]?[SAS]" --include-before="2020-08-19T15:04:00Z"`
## Options
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/client-side-encryption.md
Previously updated : 07/11/2022 Last updated : 09/10/2024
Due to a security vulnerability discovered in the Queue Storage client library's
- If you need to use client-side encryption, then migrate your applications from client-side encryption v1 to client-side encryption v2.
-The following table summarizes the steps you'll need to take if you choose to migrate your applications to client-side encryption v2:
+The following table summarizes the steps you need to take if you choose to migrate your applications to client-side encryption v2:
| Client-side encryption status | Recommended actions | |||
The following table shows which versions of the client libraries for .NET and Py
| **Client-side encryption v2 and v1** | [Versions 12.11.0 and later](https://www.nuget.org/packages/Azure.Storage.Queues) | [Versions 12.4.0 and later](https://pypi.org/project/azure-storage-queue) | | **Client-side encryption v1 only** | Versions 12.10.0 and earlier | Versions 12.3.0 and earlier |
-If your application is using client-side encryption with an earlier version of the .NET or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you are migrating your code.
+If your application is using client-side encryption with an earlier version of the .NET or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you're migrating your code.
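For the Python client library, opting in to client-side encryption v2 amounts to configuring the client before sending messages. The following is a hedged sketch: `kek` and `connection_string` are assumed to be defined elsewhere, and the key encryption key object is expected to expose the key-wrapping interface (`wrap_key`, `unwrap_key`, `get_key_wrap_algorithm`, `get_kid`).

```python
from azure.storage.queue import QueueClient

queue_client = QueueClient.from_connection_string(connection_string, "my-queue")

# Envelope encryption: the library generates a random CEK per message and
# wraps it with your key encryption key (KEK).
queue_client.key_encryption_key = kek       # assumption: KEK object defined elsewhere
queue_client.encryption_version = "2.0"     # opt in to client-side encryption v2

queue_client.send_message("sensitive payload")  # encrypted before leaving the client
```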
## How client-side encryption works
Decryption via the envelope technique works as follows:
Since queue messages can be of any format, the client library defines a custom format that includes the Initialization Vector (IV) and the encrypted content encryption key (CEK) in the message text.
-During encryption, the client library generates a random IV of 16 bytes along with a random CEK of 32 bytes and performs envelope encryption of the queue message text using this information. The wrapped CEK and some additional encryption metadata are then added to the encrypted queue message. This modified message (shown below) is stored on the service.
+During encryption, the client library generates a random IV of 16 bytes along with a random CEK of 32 bytes and performs envelope encryption of the queue message text using this information. The wrapped CEK and some additional encryption metadata are then added to the encrypted queue message. This modified message is stored on the service.
```xml <MessageText>{"EncryptedMessageContents":"6kOu8Rq1C3+M1QO4alKLmWthWXSmHV3mEfxBAgP9QGTU++MKn2uPq3t2UjF1DO6w","EncryptionData":{…}}</MessageText> ```
-During decryption, the wrapped key is extracted from the queue message and unwrapped. The IV is also extracted from the queue message and used along with the unwrapped key to decrypt the queue message data. Encryption metadata is small (under 500 bytes), so while it does count toward the 64KB limit for a queue message, the impact should be manageable. The encrypted message is Base64-encoded, as shown in the above snippet, which will also expand the size of the message being sent.
+During decryption, the wrapped key is extracted from the queue message and unwrapped. The IV is also extracted from the queue message and used along with the unwrapped key to decrypt the queue message data. Encryption metadata is small (under 500 bytes), so while it does count toward the 64 KB limit for a queue message, the impact should be manageable. The encrypted message is Base64-encoded, as shown in the above snippet, which expands the size of the message being sent.
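As a rough back-of-the-envelope sketch of that size impact (the 500-byte metadata figure comes from the paragraph above; the fixed ciphertext overhead is an assumption that varies by encryption version):

```python
import math

def estimated_message_size(plaintext_bytes: int, metadata_bytes: int = 500) -> int:
    ciphertext_bytes = plaintext_bytes + 16             # assumed IV/padding overhead
    base64_bytes = math.ceil(ciphertext_bytes / 3) * 4  # Base64 expands by ~4/3
    return base64_bytes + metadata_bytes

# A 40 KB plaintext message still fits comfortably under the 64 KB limit.
print(estimated_message_size(40 * 1024))  # 55136
```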
-Due to the short-lived nature of messages in the queue, decrypting and reencrypting queue messages after updating to client-side encryption v2 should not be necessary. Any less secure messages will be rotated in the course of normal queue consumption.
+Due to the short-lived nature of messages in the queue, decrypting and reencrypting queue messages after updating to client-side encryption v2 shouldn't be necessary. Any less secure messages are rotated in the course of normal queue consumption.
## Client-side encryption and performance
storage Queues Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-examples.md
Title: Example Azure role assignment conditions for Queue Storage
+ Title: Examples for Azure role assignment conditions for Queue Storage
-description: Example Azure role assignment conditions for Queue Storage.
+description: Example role assignment conditions and Azure attribute-based access control (Azure ABAC) for Azure Queue Storage.
This condition allows users to peek or clear messages in a queue named **sample-
![Diagram of condition showing peek and clear access to named queue.](./media/queues-auth-abac-examples/peek-clear-messages-named-queue.png)
-The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions - the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs below to view the examples for your preferred portal editor.
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** tab and the **Code editor** tabs in this article to view the examples for your preferred portal editor.
# [Portal: Visual editor](#tab/portal-visual-editor)
Use the values in the following table to build the expression portion of the con
| Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) | | Value | {queueName} |
-The following image shows the condition after the settings have been entered into the Azure portal. Note that you must group expressions to ensure correct evaluation.
+The following image shows the condition after the settings are entered into the Azure portal. You must group expressions to ensure correct evaluation.
:::image type="content" source="./media/queues-auth-abac-examples/peek-clear-messages-portal.png" alt-text="Screenshot of condition editor in Azure portal showing peek or clear access to messages in a named queue." lightbox="./media/queues-auth-abac-examples/peek-clear-messages-portal.png":::
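For reference, a condition of this shape appears in the code editor in roughly the following form; treat the action names and the attribute key as assumptions to verify against the condition the portal generates (`sample-queue` is the example queue name):

```
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/read'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/delete'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/queueServices/queues:name]
  StringEquals 'sample-queue'
 )
)
```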
Use the values in the following table to build the expression portion of the con
| Operator | [DateTimeGreaterThan](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) | | Value | `2023-05-01T13:00:00.000Z` |
-The following image shows the condition after the settings have been entered into the Azure portal. Note that you must group expressions to ensure correct evaluation.
+The following image shows the condition after the settings are entered into the Azure portal. You must group expressions to ensure correct evaluation.
:::image type="content" source="./media/queues-auth-abac-examples/environment-utcnow-queue-peek-portal.png" alt-text="Screenshot of the condition editor in the Azure portal showing peek access allowed after a specific date and time." lightbox="./media/queues-auth-abac-examples/environment-utcnow-queue-peek-portal.png":::
Use the values in the following table to build the expression portion of the con
| Operator | [StringEqualsIgnoreCase](../../role-based-access-control/conditions-format.md#stringequals) | | Value | `/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/sample-vnet/subnets/default` |
-The following image shows the condition after the settings have been entered into the Azure portal. Note that you must group expressions to ensure correct evaluation.
+The following image shows the condition after the settings are entered into the Azure portal. You must group expressions to ensure correct evaluation.
:::image type="content" source="./media/queues-auth-abac-examples/environment-subnet-queue-put-update-portal.png" alt-text="Screenshot of the condition editor in the Azure portal showing read access to specific queues allowed from a specific subnet." lightbox="./media/queues-auth-abac-examples/environment-subnet-queue-put-update-portal.png":::
virtual-desktop Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-properties.md
To learn how to use this property, see [Configure Media Transfer Protocol and Pi
- *Empty*: Don't redirect any drives. - `*`: Redirect all drives, including drives that are connected later. - `DynamicDrives`: Redirect any drives that are connected later.
- - `drivestoredirect:s:C\:;E\:;`: Redirect the specified drive letters for one or more drives, such as this example.
+ - `drivestoredirect:s:C:\;E:\;`: Redirect the specified drive letters for one or more drives, such as this example.
- **Default value**: `*` - **Applies to**: - Azure Virtual Desktop
virtual-desktop Store Fslogix Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/store-fslogix-profile.md
The following tables compare the storage solutions Azure Storage offers for Azur
|Platform service|Yes, Azure-native solution|Yes, Azure-native solution|No, self-managed| |Regional availability|All regions|[Select regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=netapp&regions=all&rar=true)|All regions| |Redundancy|Locally redundant/zone-redundant/geo-redundant/geo-zone-redundant|Locally redundant/zone-redundant [with cross-zone replication](../azure-netapp-files/cross-zone-replication-introduction.md)/geo-redundant [with cross-region replication](../azure-netapp-files/cross-region-replication-introduction.md)|Locally redundant/zone-redundant/geo-redundant|
-|Tiers and performance| Standard (Transaction optimized)<br>Premium<br>Up to max 100K IOPS per share with 10 GBps per share at about 3-ms latency|Standard<br>Premium<br>Ultra<br>Up to max 460K IOPS per volume with 4.5 GBps per volume at about 1 ms latency. For IOPS and performance details, see [Azure NetApp Files performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) and [the FAQ](../azure-netapp-files/faq-performance.md#how-do-i-convert-throughput-based-service-levels-of-azure-netapp-files-to-iops).|Standard HDD: up to 500 IOPS per-disk limits<br>Standard SSD: up to 4k IOPS per-disk limits<br>Premium SSD: up to 20k IOPS per-disk limits<br>We recommend Premium disks for Storage Spaces Direct|
+|Tiers and performance| Standard (Transaction optimized)<br>Premium<br>Up to max 100K IOPS per share with 10 GBps per share at about 3-ms latency|Standard<br>Premium<br>Ultra<br>Up to max 460K IOPS per volume with 4.5 GBps per volume at about 1 ms latency. For IOPS and performance details, see [Azure NetApp Files performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) and [the FAQ](../azure-netapp-files/faq-performance.md#how-do-i-convert-throughput-based-service-levels-of-azure-netapp-files-to-inputoutput-operations-per-second-iops).|Standard HDD: up to 500 IOPS per-disk limits<br>Standard SSD: up to 4k IOPS per-disk limits<br>Premium SSD: up to 20k IOPS per-disk limits<br>We recommend Premium disks for Storage Spaces Direct|
|Capacity|100 TiB per share, Up to 5 PiB per general purpose account |100 TiB per volume, up to 12.5 PiB per NetApp account|Maximum 32 TiB per disk| |Required infrastructure|Minimum share size 1 GiB|Minimum capacity pool 2 TiB, min volume size 100 GiB|Two VMs on Azure IaaS (+ Cloud Witness) or at least three VMs without and costs for disks| |Protocols|SMB 3.0/2.1, NFSv4.1 (preview), REST|[NFSv3, NFSv4.1](../azure-netapp-files/azure-netapp-files-create-volumes.md), [SMB 3.x/2.x](../azure-netapp-files/azure-netapp-files-create-volumes-smb.md), [dual-protocol](../azure-netapp-files/create-volumes-dual-protocol.md)|NFSv3, NFSv4.1, SMB 3.1|
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
zone_pivot_groups: azure-virtual-desktop-windows-clients Previously updated : 08/27/2024 Last updated : 09/09/2024 # What's new in the Remote Desktop client for Windows
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
In Azure, virtual network peering and connected groups are two methods of establ
In a connected group, all virtual networks are connected without individual peering relationships. For example, if three virtual networks are part of the same connected group, connectivity is enabled between each virtual network without the need for individual peering relationships.
+### When managing virtual networks that currently use VNet peering, does this result in paying VNet peering charges twice with Azure Virtual Network Manager?
+
+There's no double charge for peering. Azure Virtual Network Manager respects all previously created VNet peerings and migrates those connections. All peering resources, whether created inside or outside of a virtual network manager, incur a single peering charge.
++ ### Can I create exceptions to security admin rules? Normally, security admin rules are defined to block traffic across virtual networks. However, there are times when certain virtual networks and their resources need to allow traffic for management or other processes. For these scenarios, you can [create exceptions](./concept-enforcement.md#network-traffic-enforcement-and-exceptions-with-security-admin-rules) where necessary. [Learn how to block high-risk ports with exceptions](how-to-block-high-risk-ports.md) for these scenarios.
virtual-network-manager How To Configure Cross Tenant Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-portal.md
Title: Configure a cross-tenant connection in Azure Virtual Network Manager Preview - Portal
+ Title: Configure a cross-tenant connection in Azure Virtual Network Manager - Portal
description: Learn how to create cross-tenant connections in Azure Virtual Network Manager to support virtual networks across subscriptions and management groups in different tenants.
# Customer intent: As a cloud admin, I need to manage multiple tenants from a single network manager so that I can easily manage all network resources governed by Azure Virtual Network Manager.
-# Configure a cross-tenant connection in Azure Virtual Network Manager Preview - portal
+# Configure a cross-tenant connection in Azure Virtual Network Manager - portal
In this article, you'll learn how to create [cross-tenant connections](concept-cross-tenant.md) in Azure Virtual Network Manager by using the Azure portal. Cross-tenant support allows organizations to use a central network manager for managing virtual networks across tenants and subscriptions.