Updates from: 10/25/2022 01:07:27
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
The smart lockout feature uses many factors to determine when an account should
- Passwords such as 12456! and 1234567! (or newAccount1234 and newaccount1234) are so similar that the algorithm interprets them as human error and counts them as a single try.
- Larger variations in pattern, such as 12456! and ABCD2!, are counted as separate tries.
-When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://password-generator.net/`.
+When testing the smart lockout feature, use a distinctive pattern for each password you enter. Consider using password generation web apps, such as `https://password-gen.com/`.
When the smart lockout threshold is reached, you'll see the following message while the account is locked: **Your account is temporarily locked to prevent unauthorized use. Try again later**. The error messages can be [localized](localization-string-ids.md#sign-up-or-sign-in-error-messages).
active-directory Concept Authentication Default Enablement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md
+
+ Title: Protecting authentication methods in Azure Active Directory
+description: Learn about authentication features that may be enabled by default in Azure Active Directory
+++++ Last updated : 10/19/2022+++++++
+# Customer intent: As an identity administrator, I want to encourage users to understand how default protection can improve our security posture.
+
+# Protecting authentication methods in Azure Active Directory
+
+Azure Active Directory (Azure AD) adds and improves security features to better protect customers against increasing attacks. As new attack vectors become known, Azure AD may respond by enabling protection by default to help customers stay ahead of emerging security threats.
+
+For example, in response to increasing MFA fatigue attacks, Microsoft recommended ways for customers to [defend users](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/defend-your-users-from-mfa-fatigue-attacks/ba-p/2365677). One recommendation to prevent users from accidental multifactor authentication (MFA) approvals is to enable [number matching](how-to-mfa-number-match.md). As a result, default behavior for number matching will be explicitly **Enabled** for all Microsoft Authenticator users.
+
+There are two ways for protection of a security feature to be enabled by default:
+
+- After a security feature is released, customers can use the Azure portal or Graph API to test and roll out the change on their own schedule. To help defend against new attack vectors, Azure AD may enable protection of a security feature by default for all tenants on a certain date, and there won't be an option to disable protection. Microsoft schedules default protection far in advance to give customers time to prepare for the change. Customers can't opt out if Microsoft schedules protection by default.
+- Protection can be **Microsoft managed**, which means Azure AD can enable or disable protection based upon the current landscape of security threats. Customers can choose whether to allow Microsoft to manage the protection. They can change from **Microsoft managed** to explicitly make the protection **Enabled** or **Disabled** at any time.
+
+>[!NOTE]
+>Only a critical security feature will have protection enabled by default.
+
+## Default protection enabled by Azure AD
+
+Number matching is a good example of protection for an authentication method that is currently optional for push notifications in Microsoft Authenticator in all tenants. Customers could choose to enable number matching for push notifications in Microsoft Authenticator for users and groups, or they could leave it disabled. Number matching is already the default behavior for passwordless notifications in Microsoft Authenticator, and users can't opt out.
+
+As MFA fatigue attacks rise, number matching becomes more critical to sign-in security. As a result, Microsoft will change the default behavior for push notifications in Microsoft Authenticator.
+
+>[!NOTE]
+>Number matching will begin to be enabled for all users of Microsoft Authenticator starting February 27, 2023.
+
+<!-- Add link to Mayur Blog post here -->
+
+## Microsoft managed settings
+
+In addition to configuring Authentication methods policy settings to be either **Enabled** or **Disabled**, IT admins can configure some settings in the Authentication methods policy to be **Microsoft managed**. A setting that is configured as **Microsoft managed** allows Azure AD to enable or disable the setting.
+
+The option to let Azure AD manage the setting is a convenient way for an organization to allow Microsoft to enable or disable a feature by default. Organizations can more easily improve their security posture by trusting Microsoft to manage when a feature should be enabled by default. By configuring a setting as **Microsoft managed** (named *default* in Graph APIs), IT admins can trust Microsoft to enable a security feature they haven't explicitly disabled.
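To illustrate how the portal value maps to Graph, here's a minimal Python sketch (using `requests`) that reads the Microsoft Authenticator configuration from the beta Graph endpoint and prints the state of each feature setting; a state of `default` corresponds to **Microsoft managed** in the portal. The endpoint path and the `Policy.Read.All` permission noted in the comment reflect the beta schema as we understand it and may change.

```python
import requests

token = "<access-token>"  # assumed to carry the Policy.Read.All (or equivalent) permission

# Read the Microsoft Authenticator authentication method configuration (beta endpoint).
resp = requests.get(
    "https://graph.microsoft.com/beta/authenticationMethodsPolicy/"
    "authenticationMethodConfigurations/microsoftAuthenticator",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# A state of "default" means the setting is Microsoft managed;
# "enabled" or "disabled" means an admin has set it explicitly.
for name, setting in resp.json().get("featureSettings", {}).items():
    print(f"{name}: {setting.get('state')}")
```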
+
+For example, an admin can enable [location and application name](how-to-mfa-number-match.md) in push notifications to give users more context when they approve MFA requests with Microsoft Authenticator. The additional context can also be explicitly disabled, or set as **Microsoft managed**. Today, the **Microsoft managed** configuration for location and application name is **Disabled**, which effectively disables the option for any environment where an admin chooses to let Azure AD manage the setting.
+
+As the security threat landscape changes over time, Microsoft may change the **Microsoft managed** configuration for location and application name to **Enabled**. For customers who want to rely upon Microsoft to improve their security posture, setting security features to **Microsoft managed** is an easy way to stay ahead of security threats. They can trust Microsoft to determine the best way to configure security settings based on the current threat landscape.
+
+The following table lists each setting that can be set to Microsoft managed and whether that setting is enabled or disabled by default.
+
+| Setting | Configuration |
+|---------|---------------|
+| [Registration campaign](how-to-mfa-registration-campaign.md) | Disabled |
+| [Location in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
+| [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
+
+As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/).
+
+## Next steps
+
+[Authentication methods in Azure Active Directory - Microsoft Authenticator](concept-authentication-authenticator-app.md)
+
active-directory How To Mfa Microsoft Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-microsoft-managed.md
- Title: Use Microsoft managed settings for the Authentication Methods Policy - Azure Active Directory
-description: Learn how to use Microsoft managed settings for Microsoft Authenticator
----- Previously updated : 02/22/2022-------
-# Customer intent: As an identity administrator, I want to encourage users to use the Microsoft Authenticator app in Azure AD to improve and secure user sign-in events.
-
-# How to use Microsoft managed settings - Authentication Methods Policy
-
-<!what API>
-
-In addition to configuring Authentication Methods Policy settings to be either **Enabled** or **Disabled**, IT admins can configure some settings to be **Microsoft managed**. A setting that is configured as **Microsoft managed** allows Azure AD to enable or disable the setting.
-
-The option to let Azure AD manage the setting is a convenient way for an organization to allow Microsoft to enable or disable a feature by default. Organizations can more easily improve their security posture by trusting Microsoft to manage when a feature should be enabled by default. By configuring a setting as **Microsoft managed** (named *default* in Graph APIs), IT admins can trust Microsoft to enable a security feature they have not explicitly disabled.
-
-## Settings that can be Microsoft managed
-
-The following table list each setting that can be set to Microsoft managed and whether that setting is enabled or disabled by default.
-
-| Setting | Configuration |
-|||
-| [Registration campaign](how-to-mfa-registration-campaign.md) | Disabled |
-| [Number match](how-to-mfa-number-match.md) | Disabled |
-| [Additional context in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled |
-
-## Next steps
-
-[Authentication methods in Azure Active Directory - Microsoft Authenticator app](concept-authentication-authenticator-app.md)
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
This topic covers how to enable number matching in Microsoft Authenticator push notifications to improve user sign-in security. >[!NOTE]
->Number matching is a key security upgrade to traditional second factor notifications in the Authenticator app that will be enabled for all users of the Microsoft Authenticator app starting February 28, 2023.<br>
+>Number matching is a key security upgrade to traditional second factor notifications in Microsoft Authenticator that will begin to be enabled by default for all users starting February 27, 2023.<br>
>We highly recommend enabling number matching in the near-term for improved sign-in security. ## Prerequisites
Number matching is available for the following scenarios. When enabled, all scen
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
-Number matching is available for sign in for Azure Government. It is available for combined registration two weeks after General Availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
+Number matching is available for sign-in for Azure Government. It's available for combined registration two weeks after General Availability. Number matching isn't supported for Apple Watch notifications. Apple Watch users need to use their phone to approve notifications when number matching is enabled.
### Multifactor authentication
https://graph.microsoft.com/beta/authenticationMethodsPolicy/authenticationMetho
In **featureSettings**, you'll need to change the **numberMatchingRequiredState** from **default** to **enabled**.
-The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we will use **any**, but if you don't want to allow passwordless, use **push**.
+The value of Authentication Mode can be either **any** or **push**, depending on whether or not you also want to enable passwordless phone sign-in. In these examples, we'll use **any**, but if you don't want to allow passwordless, use **push**.
>[!NOTE] >For passwordless users, enabling or disabling number matching has no impact because it's already part of the passwordless experience.
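For reference, a hedged Python sketch of that Graph call: it patches `featureSettings.numberMatchingRequiredState` from `default` to `enabled` for all users. The property names and the `all_users` include target follow the beta schema described above, the access token is assumed to carry the `Policy.ReadWrite.AuthenticationMethod` permission, and the Authentication Mode setting mentioned earlier isn't shown here.

```python
import requests

token = "<access-token>"  # assumed to have Policy.ReadWrite.AuthenticationMethod

body = {
    "@odata.type": "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration",
    "featureSettings": {
        "numberMatchingRequiredState": {
            "state": "enabled",  # was "default" (Microsoft managed)
            "includeTarget": {"targetType": "group", "id": "all_users"},
        }
    },
}

resp = requests.patch(
    "https://graph.microsoft.com/beta/authenticationMethodsPolicy/"
    "authenticationMethodConfigurations/microsoftAuthenticator",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
```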
To enable number matching in the Azure AD portal, complete the following steps:
:::image type="content" border="true" source="./media/how-to-mfa-number-match/number-match.png" alt-text="Screenshot of how to enable number matching.":::
-## FAQ
++
+## FAQs
+
+### When will my tenant see number matching if I don't use the Azure portal or Graph API to roll out the change?
+
+Number match will be enabled for all users of the Microsoft Authenticator app after February 27, 2023. Relevant services will begin deploying these changes after February 27, 2023, and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
### Can I opt out of number matching?
Yes, currently you can disable number matching. We highly recommend that you ena
### What about my Apple Watch?
-Apple Watch will remain unsupported for number matching. We recommend you uninstall the Microsoft Authenticator Apple Watch app because you will have to approve notifications on your phone.
+Apple Watch will remain unsupported for number matching. We recommend you uninstall the Microsoft Authenticator Apple Watch app because you have to approve notifications on your phone.
### What happens if a user runs an older version of Microsoft Authenticator? If a user is running an older version of Microsoft Authenticator that doesn't support number matching, authentication won't work if number matching is enabled. Users need to upgrade to the latest version of Microsoft Authenticator to use it for sign-in. + ## Next steps [Authentication methods in Azure Active Directory](concept-authentication-authenticator-app.md)
active-directory Scenario Web Api Call Api Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-overview.md
Previously updated : 03/03/2021 Last updated : 10/24/2022 -+ #Customer intent: As an application developer, I want to know how to write a web API that calls web APIs by using the Microsoft identity platform.
This scenario, in which a protected web API calls other web APIs, builds on [Sce
- A web, desktop, mobile, or single-page application client (not represented in the accompanying diagram) calls a protected web API and provides a JSON Web Token (JWT) bearer token in its "Authorization" HTTP header. - The protected web API validates the token and uses the Microsoft Authentication Library (MSAL) `AcquireTokenOnBehalfOf` method to request another token from Azure Active Directory (Azure AD) so that the protected web API can call a second web API, or downstream web API, on behalf of the user. `AcquireTokenOnBehalfOf` refreshes the token when needed.
-![Diagram of a web API calling a web API](media/scenarios/web-api.svg)
+
+![Diagram of a web app calling a web API.](media/scenarios/web-api.svg)
## Specifics
-The app registration part that's related to API permissions is classical. The app configuration involves using the OAuth 2.0 On-Behalf-Of flow to use the JWT bearer token for obtaining a second token for a downstream API. The second token in this case is added to the token cache, where it's available in the web API's controllers. This second token can be used to acquire an access token silently to call downstream APIs whenever required.
+The app registration part that's related to API permissions is classical. The app configuration involves using the [OAuth 2.0 On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) to use the JWT bearer token for obtaining a second token for a downstream API. The second token is added to the token cache, where it's available in the web API's controllers. This second token can be used to acquire an access token silently to call downstream APIs whenever required.
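As a rough illustration of the flow (not the article's own sample), here's a minimal MSAL Python sketch: the protected web API exchanges the incoming bearer token for a downstream token with `acquire_token_on_behalf_of`, and MSAL caches the result so later calls can be served silently. The client ID, secret, tenant, and downstream scope are placeholders.

```python
import msal

# Confidential client representing the protected web API (placeholder values).
app = msal.ConfidentialClientApplication(
    client_id="<web-api-client-id>",
    client_credential="<client-secret>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

def get_downstream_token(incoming_bearer_token: str) -> str:
    # Exchange the caller's JWT for a token to the downstream API (OAuth 2.0 On-Behalf-Of).
    result = app.acquire_token_on_behalf_of(
        user_assertion=incoming_bearer_token,
        scopes=["api://<downstream-api>/.default"],  # placeholder scope
    )
    if "access_token" not in result:
        raise RuntimeError(result.get("error_description", "OBO token request failed"))
    return result["access_token"]
```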
## Next steps
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 05/13/2022 Last updated : 10/24/2022
This article describes how to set up federation with any organization whose iden
> >- We no longer support an allowlist of IdPs for new SAML/WS-Fed IdP federations. When you're setting up a new external federation, refer to [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records). >- In the SAML request sent by Azure AD for external federations, the Issuer URL is a tenanted endpoint. For any new federations, we recommend that all our partners set the audience of the SAML or WS-Fed based IdP to a tenanted endpoint. Refer to the [SAML 2.0](#required-saml-20-attributes-and-claims) and [WS-Fed](#required-ws-fed-attributes-and-claims) required attributes and claims sections below. Any existing federations configured with the global endpoint will continue to work, but new federations will stop working if your external IdP is expecting a global issuer URL in the SAML request.
-> - Currently, you can add only one domain to your external federation. We're actively working on allowing additional domains.
+> - We've removed the single domain limitation. You can now associate multiple domains with an individual federation configuration.
> - We've removed the limitation that required the authentication URL domain to match the target domain or be from an allowed IdP. For details, see [Step 1: Determine if the partner needs to update their DNS text records](#step-1-determine-if-the-partner-needs-to-update-their-dns-text-records). ## When is a guest user authenticated with SAML/WS-Fed IdP federation?
You can also give guest users a direct link to an application or resource by inc
## Frequently asked questions
-### Can I set up SAML/WS-Fed IdP federation with Azure AD verified domains?
+**Can I set up SAML/WS-Fed IdP federation with Azure AD verified domains?**
+ No, we block SAML/WS-Fed IdP federation for Azure AD verified domains in favor of native Azure AD managed domain capabilities. If you try to set up SAML/WS-Fed IdP federation with a domain that is DNS-verified in Azure AD, you'll see an error.
-### Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists?
+**Can I set up SAML/WS-Fed IdP federation with a domain for which an unmanaged (email-verified) tenant exists?**
+
Yes, you can set up SAML/WS-Fed IdP federation with domains that aren't DNS-verified in Azure AD, including unmanaged (email-verified or "viral") Azure AD tenants. Such tenants are created when a user redeems a B2B invitation or performs self-service sign-up for Azure AD using a domain that doesn't currently exist. If the domain hasn't been verified and the tenant hasn't undergone an [admin takeover](../enterprise-users/domains-admin-takeover.md), you can set up federation with that domain.
-### How many federation relationships can I create?
+**How many federation relationships can I create?**
+ Currently, a maximum of 1,000 federation relationships is supported. This limit includes both [internal federations](/powershell/module/msonline/set-msoldomainfederationsettings) and SAML/WS-Fed IdP federations.
-### Can I set up federation with multiple domains from the same tenant?
-We don't currently support SAML/WS-Fed IdP federation with multiple domains from the same tenant.
+**Can I set up federation with multiple domains from the same tenant?**
+
+Yes, we now support SAML/WS-Fed IdP federation with multiple domains from the same tenant.
+
+**Do I need to renew the signing certificate when it expires?**
-### Do I need to renew the signing certificate when it expires?
If you specify the metadata URL in the IdP settings, Azure AD will automatically renew the signing certificate when it expires. However, if the certificate is rotated for any reason before the expiration time, or if you don't provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
-### If SAML/WS-Fed IdP federation and email one-time passcode authentication are both enabled, which method takes precedence?
+**If SAML/WS-Fed IdP federation and email one-time passcode authentication are both enabled, which method takes precedence?**
+ When SAML/WS-Fed IdP federation is established with a partner organization, it takes precedence over email one-time passcode authentication for new guest users from that organization. If a guest user redeemed an invitation using one-time passcode authentication before you set up SAML/WS-Fed IdP federation, they'll continue to use one-time passcode authentication.
-### Does SAML/WS-Fed IdP federation address sign-in issues due to a partially synced tenancy?
+**Does SAML/WS-Fed IdP federation address sign-in issues due to a partially synced tenancy?**
No, the [email one-time passcode](one-time-passcode.md) feature should be used in this scenario. A "partially synced tenancy" refers to a partner Azure AD tenant where on-premises user identities aren't fully synced to the cloud. A guest whose identity doesn't yet exist in the cloud but who tries to redeem your B2B invitation won't be able to sign in. The one-time passcode feature would allow this guest to sign in. The SAML/WS-Fed IdP federation feature addresses scenarios where the guest has their own IdP-managed organizational account, but the organization has no Azure AD presence at all.
-### Once SAML/WS-Fed IdP federation is configured with an organization, does each guest need to be sent and redeem an individual invitation?
+**Once SAML/WS-Fed IdP federation is configured with an organization, does each guest need to be sent and redeem an individual invitation?**
Setting up SAML/WS-Fed IdP federation doesn't change the authentication method for guest users who have already redeemed an invitation from you. You can update a guest user's authentication method by [resetting their redemption status](reset-redemption-status.md).
-### Is there a way to send a signed request to the SAML identity provider?
+**Is there a way to send a signed request to the SAML identity provider?**
+ Currently, the Azure AD SAML/WS-Fed federation feature doesn't support sending a signed authentication token to the SAML identity provider. ## Step 1: Determine if the partner needs to update their DNS text records
Next, you'll configure federation with the IdP configured in step 1 in Azure AD.
4. On the **New SAML/WS-Fed IdP** page, enter the following: - **Display name** - Enter a name to help you identify the partner's IdP. - **Identity provider protocol** - Select **SAML** or **WS-Fed**.
- - **Domain name of federating IdP** - Enter your partner's IdP target domain name for federation. Currently, one domain name is supported, but we're working on allowing more.
+ - **Domain name of federating IdP** - Enter your partner's IdP target domain name for federation. During this initial configuration, enter just one domain name. You'll be able to add more domains later.
![Screenshot showing the new SAML or WS-Fed IdP page.](media/direct-federation/new-saml-wsfed-idp-parse.png)
Next, you'll configure federation with the IdP configured in step 1 in Azure AD.
> [!NOTE] > Metadata URL is optional, however we strongly recommend it. If you provide the metadata URL, Azure AD can automatically renew the signing certificate when it expires. If the certificate is rotated for any reason before the expiration time or if you do not provide a metadata URL, Azure AD will be unable to renew it. In this case, you'll need to update the signing certificate manually.
-6. Select **Save**.
+6. Select **Save**. The identity provider is added to the **SAML/WS-Fed identity providers** list.
+
+ ![Screenshot showing the SAML/WS-Fed identity provider list with the new entry.](media/direct-federation/new-saml-wsfed-idp-list.png)
+
+7. (Optional) To add more domain names to this federating identity provider:
+
+ a. Select the link in the **Domains** column.
+
+ ![Screenshot showing the link for adding domains to the SAML/WS-Fed identity provider.](media/direct-federation/new-saml-wsfed-idp-add-domain.png)
+ b. Next to **Domain name of federating IdP**, type the domain name, and then select **Add**. Repeat for each domain you want to add. When you're finished, select **Done**.
+
+ ![Screenshot showing the Add button in the domain details pane.](media/direct-federation/add-domain.png)
+
### To configure federation using the Microsoft Graph API You can use the Microsoft Graph API [samlOrWsFedExternalDomainFederation](/graph/api/resources/samlorwsfedexternaldomainfederation?view=graph-rest-beta&preserve-view=true) resource type to set up federation with an identity provider that supports either the SAML or WS-Fed protocol.
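For orientation, a hedged Python sketch of that Graph call: it posts a new SAML federation configuration with a single domain. The endpoint path and property names follow the beta `samlOrWsFedExternalDomainFederation` resource type as we understand it, and every value (display name, issuer, sign-in URL, certificate, domain) is a placeholder, so check the linked Graph reference before using it.

```python
import requests

token = "<access-token>"  # assumed to have the permissions listed in the Graph reference

federation = {
    "@odata.type": "#microsoft.graph.samlOrWsFedExternalDomainFederation",
    "displayName": "Fabrikam IdP",                      # placeholder
    "issuerUri": "https://idp.fabrikam.com/issuer",     # placeholder
    "passiveSignInUri": "https://idp.fabrikam.com/signin",
    "preferredAuthenticationProtocol": "saml",
    "signingCertificate": "<base64-encoded-certificate>",
    "domains": [
        {"@odata.type": "#microsoft.graph.externalDomainName", "id": "fabrikam.com"}
    ],
}

resp = requests.post(
    "https://graph.microsoft.com/beta/directory/federationConfigurations/"
    "graph.samlOrWsFedExternalDomainFederation",
    headers={"Authorization": f"Bearer {token}"},
    json=federation,
)
resp.raise_for_status()
```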
Now test your federation setup by inviting a new B2B guest user. For details, se
On the **All identity providers** page, you can view the list of SAML/WS-Fed identity providers you've configured and their certificate expiration dates. From this list, you can renew certificates and modify other configuration details.
-![Screenshot showing an identity provider in the SAML WS-Fed list](media/direct-federation/saml-ws-fed-identity-provider-list.png)
+![Screenshot showing an identity provider in the SAML WS-Fed list](media/direct-federation/new-saml-wsfed-idp-list-multi.png)
1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**. 1. Select **External Identities**.
On the **All identity providers** page, you can view the list of SAML/WS-Fed ide
![Screenshot of the IDP configuration details.](media/direct-federation/modify-configuration.png)
-1. To view the domain for the IdP, select the link in the **Domains** column to view the partner's target domain name for federation.
- > [!NOTE]
- > If you need to update the partner's domain, you'll need to [delete the configuration](#how-do-i-remove-federation) and reconfigure federation with the identity provider using the new domain.
+1. To edit the domains associated with the partner, select the link in the **Domains** column. In the domain details pane:
+
+ - To add a domain, type the domain name next to **Domain name of federating IdP**, and then select **Add**. Repeat for each domain you want to add.
+ - To delete a domain, select the delete icon next to the domain.
+ - When you're finished, select **Done**.
- ![Screenshot of the domain configuration page](media/direct-federation/view-domain.png)
+ ![Screenshot of the domain configuration page](media/direct-federation/edit-domains.png)
+
+ > [!NOTE]
+ > To remove federation with the partner, delete all but one of the domains and follow the steps in the [next section](#how-do-i-remove-federation).
## How do I remove federation?
To remove a configuration for an IdP in the Azure AD portal:
1. Select **All identity providers**. 1. Under **SAML/WS-Fed identity providers**, scroll to the identity provider in the list or use the search box. 1. Select the link in the **Domains** column to view the IdP's domain details.
-1. Select **Delete Configuration**.
+2. Delete all but one of the domains in the **Domain name** list.
+3. Select **Delete Configuration**, and then select **Done**.
![Screenshot of deleting a configuration.](media/direct-federation/delete-configuration.png)
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
Previously updated : 08/22/2022 Last updated : 10/24/2022
For B2B collaboration with other Azure AD organizations, you should also review
- **Member users and users assigned to specific admin roles can invite guest users including guests with member permissions**: To allow member users and users who have specific administrator roles to invite guests, select this radio button. - **Only users assigned to specific admin roles can invite guest users**: To allow only those users with administrator roles to invite guests, select this radio button. The administrator roles include [Global Administrator](../roles/permissions-reference.md#global-administrator), [User Administrator](../roles/permissions-reference.md#user-administrator), and [Guest Inviter](../roles/permissions-reference.md#guest-inviter). - **No one in the organization can invite guest users including admins (most restrictive)**: To deny everyone in the organization from inviting guests, select this radio button.
- > [!NOTE]
- > If **Members can invite** is set to **No** and **Admins and users in the guest inviter role can invite** is set to **Yes**, users in the **Guest Inviter** role will still be able to invite guests.
1. Under **Enable guest self-service sign up via user flows**, select **Yes** if you want to be able to create user flows that let users sign up for apps. For more information about this setting, see [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
To access these details, go to the Azure AD sign-in logs, select a sign-in, and
**Service category:** Privileged Identity Management **Product capability:** Privileged Identity Management
-Along with the public preview of attributed based access control for specific Azure RBAC role, you can also add ABAC conditions inside Privileged Identity Management for your eligible assignments. [Learn more](../../role-based-access-control/conditions-overview.md#conditions-and-privileged-identity-management-pim).
+Along with the public preview of attribute-based access control (ABAC) for specific Azure roles, you can also add ABAC conditions inside Privileged Identity Management for your eligible assignments. [Learn more](../../role-based-access-control/conditions-overview.md#conditions-and-azure-ad-pim).
active-directory Entitlement Management Access Package Incompatible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md
Follow these steps to view the list of other access packages that have indicated
1. Select on **Incompatible With**.
-## Identifying users who already have incompatible access to another access package
+## Identifying users who already have incompatible access to another access package (Preview)
+
+If you've configured incompatible access settings on an access package that already has users assigned to it, then you can download a list of those users who have that additional access. Those users who also have an assignment to the incompatible access package won't be able to re-request access.
+
+**Prerequisite role**: Global administrator, Identity Governance administrator, User administrator, Catalog owner or Access package manager
+
+Follow these steps to view the list of users who have assignments to two access packages.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory**, and then select **Identity Governance**.
+
+1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible.
+
+1. In the left menu, select **Separation of duties**.
+
+1. In the table, if there's a non-zero value in the **Additional access** column for the second access package, that indicates one or more users have assignments.
+
+ ![Screenshot of an access package marked as incompatible with existing access assignments.](./media/entitlement-management-access-package-incompatible/incompatible-ap.png)
+
+1. Select that count to view the list of incompatible assignments.
+
+1. If you wish, you can select the **Download** button to save that list of assignments as a CSV file.
+
+## Identifying users who will have incompatible access to another access package
If you're configuring incompatible access settings on an access package that already has users assigned to it, then any of those users who also have an assignment to the incompatible access package or groups won't be able to re-request access.
active-directory Lifecycle Workflows Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-deployment.md
Title: Plan a Lifecycle Workflow deployment
description: Planning guide for a successful Lifecycle Workflow deployment in Azure AD. documentationCenter: ''--++ editor:
na
Last updated 04/16/2021-+
- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. - **Centralize** your workflow process so you can easily create and manage workflows all in one location. - **Troubleshoot** workflow scenarios with the Workflow history and Audit logs with minimal effort.-- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is minimalized.
+- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is lowered.
- **Reduce** or remove manual tasks that were done in the past with automated Lifecycle Workflows - **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps
For more information on deployment plans, see [Azure AD deployment plans](../fun
>[!Note] >Be aware that if your license expires, any workflows that you have created will stop working. >
->Workflows that are in progress when a license expires will continue to exectue, but no new ones will be processed.
+>Workflows that are in progress when a license expires will continue to execute, but no new ones will be processed.
### Plan the Lifecycle Workflow deployment project
active-directory F5 Bigip Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-bigip-deployment-guide.md
A BIG-IP system can also be managed via its underlying SSH environment, which is
- [Azure Bastion service](../../bastion/bastion-overview.md): Allows fast and secure connections to any VM within a vNET, from any location -- Connect directly via an SSH client like PuTTY through the JIT approach
+- Connect directly via an SSH client like PowerShell through the JIT approach
- Serial Console: Offered at the bottom of the Support and troubleshooting section of VMs menu in the portal. It doesn't support file transfers.
active-directory Pim Resource Roles Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md
# Assign Azure resource roles in Privileged Identity Management
-With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, can manage the built-in Azure resource roles, and custom roles, including (but not limited to):
+With Azure AD Privileged Identity Management (Azure AD PIM), part of Microsoft Entra, you can manage the built-in Azure resource roles and custom roles, including (but not limited to):
- Owner - User Access Administrator
Privileged Identity Management support both built-in and custom Azure roles. For
## Role assignment conditions
-You can use the Azure attribute-based access control (Azure ABAC) preview to place resource conditions on eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using Azure attribute-based access control conditions in PIM enables you not only to limit a userΓÇÖs role permissions to a resource using fine-grained conditions, but also to use PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
+You can use Azure attribute-based access control (Azure ABAC) to add conditions on eligible role assignments using Azure AD PIM for Azure resources. With Azure AD PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using conditions in Azure AD PIM enables you not only to limit a user's role permissions to a resource using fine-grained conditions, but also to use Azure AD PIM to secure the role assignment with a time-bound setting, approval workflow, audit trail, and so on.
>[!Note] >When a role is assigned, the assignment:
->- Can't be assign for a duration of less than five minutes
+>- Can't be assigned for a duration of less than five minutes
>- Can't be removed within five minutes of it being assigned
+Currently, the following built-in roles can have conditions added:
+
+- [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor)
+- [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner)
+- [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader)
+
+For more information, see [What is Azure attribute-based access control (Azure ABAC)](../../role-based-access-control/conditions-overview.md).
+ ## Assign a role Follow these steps to make a user eligible for an Azure resource role.
Follow these steps to make a user eligible for an Azure resource role.
![Screenshot of add assignments settings pane.](./media/pim-resource-roles-assign-roles/resources-membership-settings-type.png)
- Privileged Identity Management for Azure resources provides two distinct assignment types:
+ Azure AD PIM for Azure resources provides two distinct assignment types:
- - **Eligible** assignments require the member of the role to perform an action to use the role. Actions might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers.
+ - **Eligible** assignments require the member to activate the role before using it. An administrator may require the role member to perform certain actions before role activation, which might include performing a multi-factor authentication (MFA) check, providing a business justification, or requesting approval from designated approvers.
- - **Active** assignments don't require the member to perform any action to use the role. Members assigned as active have the privileges assigned ready to use.
+ - **Active** assignments don't require the member to activate the role before use. Members assigned as active have the privileges assigned ready to use. This type of assignment is also available to customers who don't use Azure AD PIM.
1. To specify a specific assignment duration, change the start and end dates and times.
Follow these steps to update or remove an existing role assignment.
:::image type="content" source="./media/pim-resource-roles-assign-roles/resources-update-remove.png" alt-text="Screenshot demonstrates how to update or remove role assignment." lightbox="./media/pim-resource-roles-assign-roles/resources-update-remove.png":::
-1. To add or update a condition to refine Azure resource access, select **Add** or **View/Edit** in the **Condition** column for the role assignment. Currently, the Storage Blob Data Owner, Storage Blob Data Reader, and the Blob Storage Blob Data Contributor roles in Privileged Identity Management are the only two roles supported as part of the [Azure attribute-based access control public preview](../../role-based-access-control/conditions-overview.md).
+1. To add or update a condition to refine Azure resource access, select **Add** or **View/Edit** in the **Condition** column for the role assignment. Currently, the Storage Blob Data Owner, Storage Blob Data Reader, and Storage Blob Data Contributor roles in Azure AD PIM are the only roles that can have conditions added.
1. Select **Add expression** or **Delete** to update the expression. You can also select **Add condition** to add a new condition to your role.
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
Title: Concepts - Sustainable software engineering in Azure Kubernetes Services
description: Learn about sustainable software engineering in Azure Kubernetes Service (AKS). Previously updated : 03/29/2021 Last updated : 10/21/2022
-# Sustainable software engineering principles in Azure Kubernetes Service (AKS)
+# Sustainable software engineering practices in Azure Kubernetes Service (AKS)
-The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint in every aspect of your application. [The Principles of Sustainable Software Engineering][principles-sse] has an overview of the principles of sustainable software engineering.
+The sustainable software engineering principles are a set of competencies to help you define, build, and run sustainable applications. The overall goal is to reduce the carbon footprint in every aspect of your application. The Azure Well-Architected Framework guidance for sustainability aligns with [The Principles of Sustainable Software Engineering](https://principles.green/) from the [Green Software Foundation](https://greensoftware.foundation/) and provides an overview of the principles of sustainable software engineering.
-Sustainable software engineering is a shift in priorities and focus. In many cases, the way most software is designed and run highlights fast performance and low latency. Meanwhile, sustainable software engineering focuses on reducing as much carbon emission as possible. Consider:
+Sustainable software engineering is a shift in priorities and focus. In many cases, the way most software is designed and run highlights fast performance and low latency. Meanwhile, sustainable software engineering focuses on reducing as much carbon emission as possible. Consider the following:
-* Applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network travel.
-* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
+* Applying sustainable software engineering principles can give you faster performance or lower latency, such as by lowering total network traversal.
+* Reducing carbon emissions may cause slower performance or increased latency, such as delaying low-priority workloads.
-Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
+The guidance found in this article is focused on Azure Kubernetes Service (AKS) clusters and workloads you're building or operating on Azure and includes design and configuration checklists, recommended design, and configuration options. Before applying sustainable software engineering principles to your application, review the priorities, needs, and trade-offs of your application.
-## Measure and optimize
+## Prerequisites
-To lower the carbon footprint of your AKS clusters, you need to understand how your cluster's resources are being used. [Azure Monitor][azure-monitor] provides details on your cluster's resource usage, such as memory and CPU usage. This data informs your decision to reduce the carbon footprint of your cluster and observes the effect of your changes.
+* Understanding the Well-Architected Framework sustainability guidance can help you produce a high quality, stable, and efficient cloud architecture. We recommend that you start by reading more about [sustainable workloads](/azure/architecture/framework/sustainability/sustainability-get-started) and reviewing your workload using the [Microsoft Azure Well-Architected Review](https://aka.ms/assessments) assessment.
+* Having clearly defined business requirements is crucial when building applications, as they might have a direct impact on both cluster and workload architectures and configurations. When building or updating existing applications, review the Well-Architected Framework sustainability design areas, alongside your application's holistic lifecycle.
-You can also install the [Microsoft Sustainability Calculator][sustainability-calculator] to see the carbon footprint of all your Azure resources.
+## Understanding the shared responsibility model
-## Increase resource utilization
+Sustainability, just like security, is a shared responsibility between the cloud provider and the customer or partner designing and deploying AKS clusters on the platform. Deploying AKS doesn't automatically make a deployment sustainable, even if the [data centers are optimized for sustainability](https://infrastructuremap.microsoft.com/fact-sheets). Applications that aren't optimized may still emit more carbon than necessary.
-One approach to lowering your carbon footprint is to reduce your idle time. Reducing your idle time involves increasing the utilization of your compute resources. For example:
-1. You had four nodes in your cluster, each running at 50% capacity. So, all four of your nodes have 50% unused capacity remaining idle.
-1. You reduced your cluster to three nodes, each running at 67% capacity with the same workload. You would have successfully decreased your unused capacity to 33% on each node and increased your utilization.
+Learn more about the [shared responsibility model for sustainability](/azure/architecture/framework/sustainability/sustainability-design-methodology#a-shared-responsibility).
-> [!IMPORTANT]
-> When considering changing the resources in your cluster, verify your [system pools][system-pools] have enough resources to maintain the stability of your cluster's core system components. **Never** reduce your cluster's resources to the point where your cluster may become unstable.
+## Design principles
-After reviewing your cluster's utilization, consider using the features offered by [multiple node pools][multiple-node-pools]:
+**[Carbon Efficiency](https://learn.greensoftware.foundation/practitioner/carbon-efficiency)**: Emit the least amount of carbon possible.
-* Node sizing
+A carbon-efficient cloud application is one that's optimized, and the starting point is cost optimization.
- Use [node sizing][node-sizing] to define node pools with specific CPU and memory profiles, allowing you to tailor your nodes to your workload needs. By sizing your nodes to your workload needs, you can run a few nodes at higher utilization.
+**[Energy Efficiency](https://learn.greensoftware.foundation/practitioner/energy-efficiency/)**: Use the least amount of energy possible.
-* Cluster scaling
+One way to increase energy efficiency is to run the application on as few servers as possible, with the servers running at the highest utilization rate, which also increases hardware efficiency.
- Configure how your cluster [scales][scale]. Use the [horizontal pod autoscaler][scale-horizontal] and the [cluster autoscaler][scale-auto] to scale your cluster automatically based on your configuration. Control how your cluster scales to keep all your nodes running at a high utilization while staying in sync with changes to your cluster's workload.
+**[Hardware Efficiency](https://learn.greensoftware.foundation/practitioner/hardware-efficiency)**: Use the least amount of embodied carbon possible.
-* Spot pools
+There are two main approaches to hardware efficiency:
- For cases where a workload is tolerant to sudden interruptions or terminations, you can use [spot pools][spot-pools]. Spot pools take advantage of idle capacity within Azure. For example, spot pools may work well for batch jobs or development environments.
+* For end-user devices, it's extending the lifespan of the hardware.
+* For cloud computing, it's increasing the utilization of the resource.
-> [!NOTE]
->Increasing utilization can also reduce excess nodes, which reduces the energy consumed by [resource reservations on each node][resource-reservations].
+**[Carbon Awareness](https://learn.greensoftware.foundation/practitioner/carbon-awareness)**: Do more when the electricity is cleaner and do less when the electricity is dirtier.
-Finally, review the CPU and memory *requests* and *limits* in the Kubernetes manifests of your applications.
-* As you lower memory and CPU values, more memory and CPU are available to the cluster to run other workloads.
-* As you run more workloads with lower CPU and memory, your cluster becomes more densely allocated, which increases your utilization.
+Being carbon aware means responding to shifts in carbon intensity by increasing or decreasing your demand.
-When reducing the CPU and memory for your applications, your applications' behavior may become degraded or unstable if you set CPU and memory values too low. Before changing the CPU and memory *requests* and *limits*, run some benchmarking tests to verify if the values are set appropriately. Never reduce these values to the point of application instability.
+## Design patterns and practices
-## Reduce network travel
+We recommend careful consideration of these design patterns for building a sustainable workload on Azure Kubernetes Service, before reviewing the detailed recommendations in each of the design areas.
-By reducing requests and responses travel distance to and from your cluster, you can reduce carbon emissions and electricity consumption by networking devices. After reviewing your network traffic, consider creating clusters [in regions][regions] closer to the source of your network traffic. You can use [Azure Traffic Manager][azure-traffic-manager] to route traffic to the closest cluster and [proximity placement groups][proiximity-placement-groups] and reduce the distance between Azure resources.
+| Design pattern | Applies to workload | Applies to cluster |
+| --- | --- | --- |
+| [Design for independent scaling of logical components](#design-for-independent-scaling-of-logical-components) | ✔️ | |
+| [Design for event-driven scaling](#design-for-event-driven-scaling) | ✔️ | |
+| [Aim for stateless design](#aim-for-stateless-design) | ✔️ | |
+| [Enable cluster and node auto-updates](#enable-cluster-and-node-auto-updates) | | ✔️ |
+| [Install supported add-ons and extensions](#install-supported-add-ons-and-extensions) | ✔️ | ✔️ |
+| [Containerize your workload where applicable](#containerize-your-workload-where-applicable) | ✔️ | |
+| [Use spot node pools when possible](#use-spot-node-pools-when-possible) | | ✔️ |
+| [Match the scalability needs and utilize auto-scaling and bursting capabilities](#match-the-scalability-needs-and-utilize-auto-scaling-and-bursting-capabilities) | | ✔️ |
+| [Turn off workloads and node pools outside of business hours](#turn-off-workloads-and-node-pools-outside-of-business-hours) | ✔️ | ✔️ |
+| [Delete unused resources](#delete-unused-resources) | ✔️ | ✔️ |
+| [Tag your resources](#tag-your-resources) | ✔️ | ✔️ |
+| [Optimize storage utilization](#optimize-storage-utilization) | ✔️ | ✔️ |
+| [Choose a region that is closest to users](#choose-a-region-that-is-closest-to-users) | | ✔️ |
+| [Reduce network traversal between nodes](#reduce-network-traversal-between-nodes) | | ✔️ |
+| [Evaluate using a service mesh](#evaluate-using-a-service-mesh) | | ✔️ |
+| [Optimize log collection](#optimize-log-collection) | ✔️ | ✔️ |
+| [Cache static data](#cache-static-data) | ✔️ | ✔️ |
+| [Evaluate whether to use TLS termination](#evaluate-whether-to-use-tls-termination) | ✔️ | ✔️ |
+| [Use cloud native network security tools and controls](#use-cloud-native-network-security-tools-and-controls) | ✔️ | ✔️ |
+| [Scan for vulnerabilities](#scan-for-vulnerabilities) | ✔️ | ✔️ |
-> [!IMPORTANT]
-> When considering making changes to your cluster's networking, never reduce network travel at the cost of meeting workload requirements. For example, while using [availability zones][availability-zones] causes more network travel on your cluster, availability zones may be necessary to handle workload requirements.
+## Application design
-## Demand shaping
+Explore this section to learn more about how to optimize your applications for a more sustainable application design.
-Where possible, consider shifting demand for your cluster's resources to times or regions where you can use excess capacity. For example, consider:
-* Changing the time or region for a batch job to run.
-* Using [spot pools][spot-pools].
-* Refactoring your application to use a queue to defer running workloads that don't need immediate processing.
+### Design for independent scaling of logical components
+
+A microservice architecture may reduce the compute resources required, as it allows for independent scaling of its logical components and ensures they're scaled according to demand.
+
+* Consider using [Dapr Framework](https://dapr.io/) or [other CNCF projects](/azure/architecture/example-scenario/apps/build-cncf-incubated-graduated-projects-aks) to help you separate your application functionality into different microservices, to allow independent scaling of its logical components.
+
+### Design for event-driven scaling
+
+Scaling your workload based on relevant business metrics such as HTTP requests, queue length, and cloud events can help reduce its resource utilization, and therefore its carbon emissions.
+
+* Use [Keda](https://keda.sh/) when building event-driven applications to allow scaling down to zero when there is no demand.
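As an illustration only, here's a sketch of a KEDA `ScaledObject` created with the official Kubernetes Python client so a queue-driven deployment can scale to zero when there's no demand. The deployment name, queue name, and Azure Service Bus trigger metadata are assumptions; check the KEDA scaler documentation for the exact fields of the trigger you use.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes your kubectl context points at an AKS cluster with KEDA installed

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "orders-processor", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "orders-processor"},  # hypothetical Deployment
        "minReplicaCount": 0,                            # scale to zero when the queue is empty
        "maxReplicaCount": 10,
        "triggers": [{
            "type": "azure-servicebus",                  # illustrative trigger; metadata fields may differ
            "metadata": {
                "queueName": "orders",
                "messageCount": "50",
                "connectionFromEnv": "SERVICEBUS_CONNECTION",
            },
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace="default",
    plural="scaledobjects", body=scaled_object,
)
```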
+
+### Aim for stateless design
+
+Removing state from your design reduces the in-memory or on-disk data required by the workload to function.
+
+* Consider [stateless design](/azure/aks/operator-best-practices-multi-region#remove-service-state-from-inside-containers) to reduce unnecessary network load, data processing, and compute resources.
+
+## Application platform
+
+Explore this section to learn how to make better informed platform-related decisions around sustainability.
+
+### Enable cluster and node auto-updates
+
+An up-to-date cluster avoids unnecessary performance issues and ensures you benefit from the latest performance improvements and compute optimizations.
+
+* Enable [cluster auto-upgrade](/azure/aks/auto-upgrade-cluster) and [apply security updates to nodes automatically using GitHub Actions](/azure/aks/node-upgrade-github-actions), to ensure your cluster has the latest improvements.
+
+### Install supported add-ons and extensions
+
+Add-ons and extensions covered by the [AKS support policy](/azure/aks/support-policies) provide additional and supported functionality to your cluster while allowing you to benefit from the latest performance improvements and energy optimizations throughout your cluster lifecycle.
+
+* Ensure you install [Keda](/azure/aks/integrations#available-add-ons) as an add-on and [GitOps & Dapr](/azure/aks/cluster-extensions?tabs=azure-cli#currently-available-extensions) as extensions.
+
+### Containerize your workload where applicable
+
+Containers reduce unnecessary resource allocation and make better use of the resources deployed, as they allow for bin packing and require fewer compute resources than virtual machines.
+
+* Use [Draft](/azure/aks/draft) to simplify application containerization by generating Dockerfiles and Kubernetes manifests.
+
+### Use spot node pools when possible
+
+Spot nodes use Spot VMs and are great for workloads that can handle interruptions, early terminations, or evictions such as batch processing jobs and development and testing environments.
+
+* Use [spot node pools](/azure/aks/spot-node-pool) to take advantage of unused capacity in Azure at a significant cost saving for a more sustainable platform design for your [interruptible workloads](/azure/architecture/guide/spot/spot-eviction).
+
+### Match the scalability needs and utilize auto-scaling and bursting capabilities
+
+An oversized cluster doesn't maximize utilization of compute resources and can lead to wasted energy. Separate your applications into different node pools to allow for cluster right-sizing and independent scaling according to the application requirements. As you run out of capacity in your AKS cluster, grow from AKS to Azure Container Instances (ACI) to scale out additional pods to serverless nodes and ensure your workload uses all the allocated resources efficiently.
+
+* Size your cluster to match the scalability needs of your application and [use cluster autoscaler](/azure/aks/cluster-autoscaler) in combination with [virtual nodes](/azure/aks/virtual-nodes) to rapidly scale and maximize compute resource utilization. Additionally, [enforce resource quotas](/azure/aks/operator-best-practices-scheduler#enforce-resource-quotas) at the namespace level and [scale user node pools to 0](/azure/aks/scale-cluster?tabs=azure-cli#scale-user-node-pools-to-0) when there is no demand.
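To make the resource-quota point concrete, here's a minimal sketch using the Kubernetes Python client; the namespace name and quota values are arbitrary examples and should be sized to your own workloads.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes your kubectl context points at the AKS cluster

# Cap the total CPU and memory that workloads in the "team-a" namespace can request.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(hard={
        "requests.cpu": "4",
        "requests.memory": "8Gi",
        "limits.cpu": "8",
        "limits.memory": "16Gi",
    }),
)

client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```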
+
+### Turn off workloads and node pools outside of business hours
+
+Workloads may not need to run continuously and could be turned off to reduce energy waste and, in turn, carbon emissions. You can completely turn off (stop) the node pools in your AKS cluster, allowing you to also save on compute costs.
+
+* Use [node pool stop/start](/azure/aks/start-stop-nodepools) to turn off your node pools outside of business hours, and the [KEDA CRON scaler](https://keda.sh/docs/2.7/scalers/cron/) to scale down your workloads (pods) based on time.
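+
+For example, a minimal Azure CLI sketch that stops a user node pool at the end of the business day and starts it again the next morning could look like the following; the resource group, cluster, and node pool names are placeholders, and the time-based trigger (for example, an automation job) isn't shown.
+
+```azurecli
+# Stop a user node pool outside of business hours (placeholder names).
+az aks nodepool stop \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --nodepool-name userpool
+
+# Start it again when the business day begins.
+az aks nodepool start \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --nodepool-name userpool
+```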
+
+## Operational procedures
+
+Explore this section to set up your environment for measuring and continuously improving your workloads' cost and carbon efficiency.
+
+### Delete unused resources
+
+Unused resources, such as unreferenced images and storage resources, should be identified and deleted because they have a direct impact on hardware and energy efficiency. Treat identifying and deleting unused resources as a process, rather than a point-in-time activity, to ensure continuous energy optimization.
+
+* Use [Azure Advisor](/azure/advisor/advisor-cost-recommendations) to identify unused resources and [ImageCleaner](/azure/aks/image-cleaner?tabs=azure-cli) to clean up stale images and remove an area of risk in your cluster.
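+
+One way to surface cleanup candidates from the command line is to query Azure Advisor cost recommendations, as in the sketch below; ImageCleaner configuration isn't shown here.
+
+```azurecli
+# List Azure Advisor cost recommendations, which include underused or idle resources.
+az advisor recommendation list \
+  --category Cost \
+  --output table
+```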
+
+### Tag your resources
+
+Getting the right information and insights at the right time is important for producing reports about performance and resource utilization.
+
+* Set [Azure tags on your cluster](/azure/aks/use-tags) to enable monitoring of your workloads.
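+
+For illustration, the following Azure CLI sketch applies tags to an existing cluster. The names and tag values are placeholders, and be aware that updating tags this way can replace the cluster's existing tags.
+
+```azurecli
+# Apply tags to an existing cluster (placeholder names and values).
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --tags team=web env=production
+```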
+
+## Storage
+
+Explore this section to learn how to design a more sustainable data storage architecture and optimize existing deployments.
+
+### Optimize storage utilization
+
+Data retrieval and data storage operations can have a significant impact on both energy and hardware efficiency. Designing solutions with the correct data access pattern can reduce energy consumption and embodied carbon.
+
+* Understand the needs of your application to [choose the appropriate storage](/azure/aks/operator-best-practices-storage#choose-the-appropriate-storage-type) and define it using [storage classes](/azure/aks/operator-best-practices-storage#create-and-use-storage-classes-to-define-application-needs) to avoid storage underutilization. Additionally, consider [provisioning volumes dynamically](/azure/aks/operator-best-practices-storage#dynamically-provision-volumes) to automatically scale the number of storage resources.
+
+## Network and connectivity
+
+Explore this section to learn how to enhance and optimize network efficiency to reduce unnecessary carbon emissions.
+
+### Choose a region that is closest to users
+
+The distance from a data center to the users has a significant impact on energy consumption and carbon emissions. Shortening the distance a network packet travels improves both your energy and carbon efficiency.
+
+* Review your application requirements and [Azure geographies](https://azure.microsoft.com/explore/global-infrastructure/geographies/#overview) to choose the region closest to where the majority of the network packets are going.
+
+### Reduce network traversal between nodes
+
+Placing nodes in a single region or a single availability zone reduces the physical distance between the instances. However, for business-critical workloads, you need to ensure your cluster is spread across multiple availability zones, which may result in more network traversal and an increase in your carbon footprint.
+
+* Consider deploying your nodes within a [proximity placement group](/azure/virtual-machines/co-location) to reduce network traversal by ensuring your compute resources are physically located close to each other. For critical workloads, configure [proximity placement groups with availability zones](/azure/aks/reduce-latency-ppg#configure-proximity-placement-groups-with-availability-zones).
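+
+As a hedged sketch, the following Azure CLI commands create a proximity placement group and add a node pool into it; all resource names and the region are placeholders.
+
+```azurecli
+# Create a proximity placement group (placeholder names and region).
+az ppg create \
+  --resource-group myResourceGroup \
+  --name myPPG \
+  --location eastus \
+  --type Standard
+
+# Look up its resource ID and add a node pool that's placed in the group.
+PPG_ID=$(az ppg show --resource-group myResourceGroup --name myPPG --query id --output tsv)
+az aks nodepool add \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --name ppgpool \
+  --ppg $PPG_ID
+```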
+
+### Evaluate using a service mesh
+
+A service mesh deploys additional containers for communication, typically in a [sidecar pattern](/azure/architecture/patterns/sidecar), to provide more operational capabilities, which leads to an increase in CPU usage and network traffic. Nevertheless, it allows you to decouple your application from these capabilities by moving them out of the application layer and down to the infrastructure layer.
+
+* Carefully consider the increase in CPU usage and network traffic generated by [service mesh](/azure/aks/servicemesh-about) communication components before making the decision to use one.
+
+### Optimize log collection
+
+Sending and storing all logs from all possible sources (workloads, services, diagnostics, and platform activity) can considerably increase storage and network traffic, which leads to higher costs and carbon emissions.
+
+* Make sure you are collecting and retaining only the log data necessary to support your requirements. [Configure data collection rules for your AKS workloads](/azure/azure-monitor/containers/container-insights-agent-config#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+
+### Cache static data
+
+Using a content delivery network (CDN) is a sustainable approach to optimizing network traffic because it reduces data movement across the network. It minimizes latency by storing frequently read static data closer to users, and it helps reduce network traffic and server load.
+
+* Ensure you [follow best practices](/azure/architecture/best-practices/cdn) for CDN and consider using [Azure CDN](/azure/cdn/cdn-how-caching-works?toc=%2Fazure%2Ffrontdoor%2FTOC.json) to lower the consumed bandwidth and keep costs down.
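+
+As a minimal sketch, the following Azure CLI commands create a CDN profile and an endpoint in front of an existing origin; the resource names and the origin hostname are placeholders.
+
+```azurecli
+# Create a CDN profile and an endpoint that caches content from a placeholder origin.
+az cdn profile create \
+  --resource-group myResourceGroup \
+  --name myCdnProfile \
+  --sku Standard_Microsoft
+
+az cdn endpoint create \
+  --resource-group myResourceGroup \
+  --profile-name myCdnProfile \
+  --name myCdnEndpoint \
+  --origin www.contoso.com
+```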
+
+## Security
+
+Explore this section to learn more about the recommendations leading to a sustainable, right-sized security posture.
+
+### Evaluate whether to use TLS termination
+
+Transport Layer Security (TLS) ensures that all data passed between the web server and web browsers remains private and encrypted. However, terminating and re-establishing TLS increases CPU utilization and might be unnecessary in certain architectures. A balanced level of security can offer a more sustainable and energy-efficient workload, while a higher level of security may increase the compute resource requirements.
+
+* Review the information on TLS termination when using [Application Gateway](/azure/application-gateway/ssl-overview) or [Azure Front Door](/azure/application-gateway/ssl-overview). Consider whether you can terminate TLS at your border gateway and continue with non-TLS traffic to your workload load balancer and onwards to your workload.
+
+### Use cloud native network security tools and controls
+
+Azure Front Door and Application Gateway help manage traffic from web applications, while Azure Web Application Firewall provides protection against OWASP top 10 attacks and load shedding of bad bots. Using these capabilities helps remove unnecessary data transmission and reduces the burden on the cloud infrastructure, with lower bandwidth and fewer infrastructure requirements.
+
+* Use [Application Gateway Ingress Controller (AGIC) in AKS](/azure/architecture/example-scenario/aks-agic/aks-agic) to filter and offload traffic at the network edge, keeping it from reaching your origin, to reduce energy consumption and carbon emissions.
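+
+For illustration, the AGIC add-on can be enabled with the Azure CLI as sketched below; the resource names and subnet CIDR are placeholders, and this form deploys a new Application Gateway for the add-on.
+
+```azurecli
+# Enable the Application Gateway Ingress Controller add-on (placeholder names; deploys a new Application Gateway).
+az aks enable-addons \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --addons ingress-appgw \
+  --appgw-name myApplicationGateway \
+  --appgw-subnet-cidr "10.225.0.0/16"
+```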
+
+### Scan for vulnerabilities
+
+Many attacks on cloud infrastructure seek to misuse deployed resources for the attacker's direct gain, leading to an unnecessary spike in usage and cost. Vulnerability scanning tools help minimize the window of opportunity for attackers and mitigate any potential malicious usage of resources.
+
+* Follow recommendations from [Microsoft Defender for Cloud](/security/benchmark/azure/security-control-vulnerability-management) and run automated vulnerability scanning tools such as [Defender for Containers](/azure/defender-for-cloud/defender-for-containers-va-acr) to avoid unnecessary resource usage by identifying vulnerabilities in your images and minimizing the window of opportunity for attackers.
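+
+As a minimal sketch, Defender for Containers can be enabled on an existing cluster with the Azure CLI; the resource names are placeholders.
+
+```azurecli
+# Enable the Defender profile on an existing cluster (placeholder names).
+az aks update \
+  --resource-group myResourceGroup \
+  --name myAKSCluster \
+  --enable-defender
+```
+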
## Next steps
-Learn more about the features of AKS mentioned in this article:
-
-* [Multiple node pools][multiple-node-pools]
-* [Node sizing][node-sizing]
-* [Scaling a cluster][scale]
-* [Horizontal pod autoscaler][scale-horizontal]
-* [Cluster autoscaler][scale-auto]
-* [Spot pools][spot-pools]
-* [System pools][system-pools]
-* [Resource reservations][resource-reservations]
-* [Proximity placement groups][proiximity-placement-groups]
-* [Availability Zones][availability-zones]
-
-[availability-zones]: availability-zones.md
-[azure-monitor]: ../azure-monitor/containers/container-insights-overview.md
-[azure-traffic-manager]: ../traffic-manager/traffic-manager-overview.md
-[proiximity-placement-groups]: reduce-latency-ppg.md
-[regions]: faq.md#which-azure-regions-currently-provide-aks
-[resource-reservations]: concepts-clusters-workloads.md#resource-reservations
-[scale]: concepts-scale.md
-[scale-auto]: concepts-scale.md#cluster-autoscaler
-[scale-horizontal]: concepts-scale.md#horizontal-pod-autoscaler
-[spot-pools]: spot-node-pool.md
-[multiple-node-pools]: use-multiple-node-pools.md
-[node-sizing]: use-multiple-node-pools.md#specify-a-vm-size-for-a-node-pool
-[sustainability-calculator]: https://azure.microsoft.com/blog/microsoft-sustainability-calculator-helps-enterprises-analyze-the-carbon-emissions-of-their-it-infrastructure/
-[system-pools]: use-system-pools.md
-[principles-sse]: /training/modules/sustainable-software-engineering-overview/
+> [!div class="nextstepaction"]
+> [Azure Well-Architected Framework review of AKS](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service)
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
spec:
After you've created the Kubernetes secret, you can reference it by setting an environment variable in your pod, as shown in the following example code: > [!NOTE]
-> The example here demonstrates access to a secret through env variables and through volume/volumeMount. This is for illustrative purposes. These two methods can exist independently from the other.
+> The example here demonstrates access to a secret through environment variables and through volume/volumeMount. This is for illustrative purposes; a typical application would use one method or the other. However, be aware that in order for a secret to be available through environment variables, it first must be mounted by at least one pod.
```yml kind: Pod
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To scale AKS clusters beyond 1000 nodes, you need to request a node limit quota
To increase the node limit beyond 1000, you must have the following pre-requisites: - An existing AKS cluster that needs the node limit increase. This cluster shouldn't be deleted as that will remove the limit increase. - Uptime SLA enabled on your cluster.
+- Clusters should use Kubernetes version 1.23 or above.
> [!NOTE] > It may take up to a week to enable your clusters with the larger node limit.
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
The `validate-jwt` policy enforces existence and validity of a JSON web token (J
```xml <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
- <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/.well-known/openid-configuration" />
+ <openid-config url="https://login.microsoftonline.com/contoso.onmicrosoft.com/v2.0/.well-known/openid-configuration" />
<audiences> <audience>25eef6e4-c905-4a07-8eb4-0d08d5df8b3f</audience> </audiences>
This example shows how to use the [Validate JWT](api-management-access-restricti
| require-scheme | The name of the token scheme, e.g. "Bearer". When this attribute is set, the policy will ensure that specified scheme is present in the Authorization header value. | No | N/A | | require-signed-tokens | Boolean. Specifies whether a token is required to be signed. | No | true | | separator | String. Specifies a separator (e.g. ",") to be used for extracting a set of values from a multi-valued claim. | No | N/A |
-| url | Open ID configuration endpoint URL from where OpenID configuration metadata can be obtained. The response should be according to specs as defined at URL:`https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. For Azure Active Directory use the following URL: `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` substituting your directory tenant name, e.g. `contoso.onmicrosoft.com`. | Yes | N/A |
+| url | Open ID configuration endpoint URL from where OpenID configuration metadata can be obtained. The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Azure Active Directory use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/>- (v2 multitenant) `https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | Yes | N/A |
| output-token-variable-name | String. Name of context variable that will receive token value as an object of type [`Jwt`](api-management-policy-expressions.md) upon successful token validation | No | N/A | ### Usage
app-service Configure Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md
For your custom Windows image, you must choose the right [parent image (base ima
It takes some time to download a parent image during app start-up. However, you can reduce start-up time by using one of the following parent images that are already cached in Azure App Service: -- [mcr.microsoft.com/windows/servercore](https://hub.docker.com/_/microsoft-windows-servercore):20H2-- [mcr.microsoft.com/windows/servercore](https://hub.docker.com/_/microsoft-windows-servercore):ltsc2019-- [mcr.microsoft.com/dotnet/framework/aspnet](https://hub.docker.com/_/microsoft-dotnet-framework-aspnet/):4.8-windowsservercore-20H2-- [mcr.microsoft.com/dotnet/framework/aspnet](https://hub.docker.com/_/microsoft-dotnet-framework-aspnet/):4.8-windowsservercore-ltsc2019-- [mcr.microsoft.com/dotnet/runtime](https://hub.docker.com/_/microsoft-dotnet-runtime/):5.0-nanoserver-20H2-- [mcr.microsoft.com/dotnet/runtime](https://hub.docker.com/_/microsoft-dotnet-runtime/):5.0-nanoserver-1809-- [mcr.microsoft.com/dotnet/aspnet](https://hub.docker.com/_/microsoft-dotnet-aspnet/):5.0-nanoserver-20H2-- [mcr.microsoft.com/dotnet/aspnet](https://hub.docker.com/_/microsoft-dotnet-aspnet/):5.0-nanoserver-1809-- [mcr.microsoft.com/dotnet/runtime](https://hub.docker.com/_/microsoft-dotnet-runtime/):3.1-nanoserver-20H2-- [mcr.microsoft.com/dotnet/runtime](https://hub.docker.com/_/microsoft-dotnet-runtime/):3.1-nanoserver-1809-- [mcr.microsoft.com/dotnet/aspnet](https://hub.docker.com/_/microsoft-dotnet-aspnet/):3.1-nanoserver-20H2-- [mcr.microsoft.com/dotnet/aspnet](https://hub.docker.com/_/microsoft-dotnet-aspnet/):3.1-nanoserver-1809
+- [https://mcr.microsoft.com/windows/servercore:ltsc2022](https://mcr.microsoft.com/windows/servercore:ltsc2022)
+- [https://mcr.microsoft.com/windows/servercore:ltsc2019](https://mcr.microsoft.com/windows/servercore:ltsc2019)
+- [https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022](https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2022)
+- [https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019](https://mcr.microsoft.com/dotnet/framework/aspnet:4.8-windowsservercore-ltsc2019)
+- [https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-ltsc2022)
+- [https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-1809](https://mcr.microsoft.com/dotnet/runtime:3.1-nanoserver-1809)
+- [https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-ltsc2022)
+- [https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-1809](https://mcr.microsoft.com/dotnet/runtime:6.0-nanoserver-1809)
+- [https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-ltsc2022)
+- [https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-1809](https://mcr.microsoft.com/dotnet/aspnet:3.1-nanoserver-1809)
+- [https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-ltsc2022](https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-ltsc2022)
+- [https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809](https://mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-1809)
::: zone-end
app-service Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/firewall-integration.md
The steps to lock down egress from your existing ASE with Azure Firewall are:
![Add NTP service tag network rule][6]
-1. Create a route table with the management addresses from [App Service Environment management addresses]( ./management-addresses.md) with a next hop of Internet. The route table entries are required to avoid asymmetric routing problems. Add routes for the IP address dependencies noted below in the IP address dependencies with a next hop of Internet. Add a Virtual Appliance route to your route table for 0.0.0.0/0 with the next hop being your Azure Firewall private IP address.
+1. Create a route table with the management addresses from [App Service Environment management addresses](./management-addresses.md) or the AppServiceManagement service tag, with a next hop of Internet. The route table entries are required to avoid asymmetric routing problems. Add routes for the IP address dependencies noted below with a next hop of Internet. Add a Virtual Appliance route to your route table for 0.0.0.0/0 with the next hop being your Azure Firewall private IP address.
![Creating a route table][4]
app-service Forced Tunnel Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/forced-tunnel-support.md
After you configure the ASE subnet to ignore all BGP routes, your apps will no l
To route all outbound traffic from your ASE, except that which goes to Azure SQL and Azure Storage, perform the following steps:
-1. Create a route table and assign it to your ASE subnet. Find the addresses that match your region here [App Service Environment management addresses][management]. Create routes for those addresses with a next hop of internet. These routes are needed because the App Service Environment inbound management traffic must reply from the same address it was sent to.
+1. Create a route table and assign it to your ASE subnet. Find the addresses that match your region in [App Service Environment management addresses][management]. Create routes for those addresses, or use the AppServiceManagement service tag, with a next hop of internet. These routes are needed because the App Service Environment inbound management traffic must reply from the same address it was sent to.
2. Enable Service Endpoints with Azure SQL and Azure Storage with your ASE subnet. After this step is completed, you can then configure your VNet with forced tunneling.
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
ms.assetid: a7738a24-89ef-43d3-bff1-77f43d5a3952 Previously updated : 11/10/2021 Last updated : 10/24/2022
## Summary
-The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that runs in your Azure Virtual Network (VNet). While the ASE does run in your VNet, it must still be accessible from a number of dedicated IP addresses that are used by the Azure App Service to manage the service. In the case of an ASE, the management traffic traverses the user-controlled network. If this traffic is blocked or misrouted, the ASE will become suspended. For details on the ASE networking dependencies, read [Networking considerations and the App Service Environment][networking]. For general information on the ASE, you can start with [Introduction to the App Service Environment][intro].
+The App Service Environment (ASE) is a single tenant deployment of the Azure App Service that runs in your Azure Virtual Network. While the ASE does run in your virtual network, it must still be accessible from a number of dedicated IP addresses that are used by the Azure App Service to manage the service. In the case of an ASE, the management traffic traverses the user-controlled network. If this traffic is blocked or misrouted, the ASE will become suspended. For details on the ASE networking dependencies, read [Networking considerations and the App Service Environment][networking]. For general information on the ASE, you can start with [Introduction to the App Service Environment][intro].
All ASEs have a public VIP that management traffic comes into. The incoming management traffic from these addresses comes in to ports 454 and 455 on the public VIP of your ASE. This document lists the App Service source addresses for management traffic to the ASE. These addresses are also in the IP Service Tag named AppServiceManagement.
-The addresses noted below can be configured in a route table to avoid asymmetric routing problems with the management traffic. Routes act on traffic at the IP level and do not have an awareness of traffic direction or that the traffic is a part of a TCP reply message. If the reply address for a TCP request is different than the address it was sent to, you have an asymmetric routing problem. To avoid asymmetric routing problems with your ASE management traffic, you need to ensure that replies are sent back from the same address they were sent to. For details on how to configure your ASE to operate in an environment where outbound traffic is sent on premises, read [Configure your ASE with forced tunneling][forcedtunnel]
+The addresses noted below can be configured in a route table to avoid asymmetric routing problems with the management traffic. The addresses are also maintained in a service tag, which can be used as well. Routes act on traffic at the IP level and do not have an awareness of traffic direction or that the traffic is a part of a TCP reply message. If the reply address for a TCP request is different than the address it was sent to, you have an asymmetric routing problem. To avoid asymmetric routing problems with your ASE management traffic, you need to ensure that replies are sent back from the same address they were sent to. For details on how to configure your ASE to operate in an environment where outbound traffic is sent on premises, read [Configure your ASE with forced tunneling][forcedtunnel].
## List of management addresses ##
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Title: Back up an app
description: Learn how to restore backups of your apps in Azure App Service or configure custom backups. Customize backups by including the linked database. ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Previously updated : 09/19/2022 Last updated : 10/24/2022
Backup and restore are supported in **Basic**, **Standard**, **Premium**, and **
> [!NOTE] > Support in App Service environments (ASE) V2 and V3 is in preview. For App Service environments: >
-> - Backups can be restored to a target app within the ASE itself, not in another ASE.
-> - Backups can be restored to a target app in another App Service plan in the ASE.
+> - Automatic backups can be restored to a target app within the ASE itself, not in another ASE.
+> - Custom backups can be restored to a target app in another ASE, such as from a V2 ASE to a V3 ASE.
> - Backups can be restored to target app of the same OS platform as the source app. ## Automatic vs custom backups
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
Premium plan hosting provides the following benefits to your functions:
* Avoid cold starts with warm instances. * Virtual network connectivity.
-* Unlimited execution duration, with 60 minutes guaranteed.
+* Supports [longer runtime durations](#longer-run-duration).
* [Choice of Premium instance sizes](#available-instance-skus). * More predictable pricing, compared with the Consumption plan. * High-density app allocation for plans with multiple function apps.
To learn more about how scaling works, see [Event-driven scaling in Azure Functi
## Longer run duration
-Azure Functions in a Consumption plan are limited to 10 minutes for a single execution. In the Premium plan, the run duration defaults to 30 minutes to prevent runaway executions. However, you can [modify the host.json configuration](./functions-host-json.md#functiontimeout) to make the duration unbounded for Premium plan apps. When set to an unbounded duration, your function app is guaranteed to run for at least 60 minutes.
+Functions in a Consumption plan are limited to 10 minutes for a single execution. In the Premium plan, the run duration defaults to 30 minutes to prevent runaway executions. However, you can [modify the host.json configuration](./functions-host-json.md#functiontimeout) to make the duration unbounded for Premium plan apps, with the following limitations:
+
++ Platform upgrades can trigger a managed shutdown and halt the function execution.
++ Platform outages can cause an unhandled shutdown and halt the function execution.
++ There's an idle timer that stops the worker after 60 minutes with no new executions.
++ [Scale-in behavior](event-driven-scaling.md#scale-in-behaviors) can cause worker shutdown after 60 minutes.
++ [Slot swaps](functions-deployment-slots.md) can terminate executions on the source and target slots during the swap.

## Migration
For example, a JavaScript function app is constrained by the default memory limi
And for plans with more than 4GB memory, ensure the Bitness Platform Setting is set to `64 Bit` under [General Settings](../app-service/configure-common.md#configure-general-settings).
-## Region max scale out
+## Region max scale-out
Below are the currently supported maximum scale-out values for a single plan in each region and OS configuration.
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
### [Log alert](#tab/log) To create a log alert rule that monitors count of system event errors:+ ```azurecli az monitor scheduled-query create -g {ResourceGroup} -n {nameofthealert} --scopes {vm_id} --condition "count \'union Event, Syslog | where TimeGenerated > a(1h) | where EventLevelName == \"Error\" or SeverityLevel== \"err\"\' > 2" --description {descriptionofthealert} ```
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
- [az monitor activity-log alert action-group](/cli/azure/monitor/activity-log/alert/action-group): Add an action group to the activity log alert rule. - ## Create a new alert rule using PowerShell - To create a metric alert rule using PowerShell, use this cmdlet: [Add-AzMetricAlertRuleV2](/powershell/module/az.monitor/add-azmetricalertrulev2)
+- To create a log alert rule using PowerShell, use this cmdlet: [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule)
- To create an activity log alert rule using PowerShell, use this cmdlet: [Set-AzActivityLogAlert](/powershell/module/az.monitor/set-azactivitylogalert) ## Create an activity log alert rule from the Activity log pane
The *sampleActivityLogAlert.parameters.json* file contains the values provided f
## Changes to log alert rule creation experience
-If you're creating a new log alert rule, please note that current alert rule wizard is a little different from the earlier experience:
+If you're creating a new log alert rule, note that the current alert rule wizard is a little different from the earlier experience:
- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action: - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
To enable recommended alert rules:
## Manage metric alert rules with the Azure CLI
-This section describes how to do manage metric alert rules using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
+This section describes how to manage metric alert rules using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.-
-You can use commands with ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing metric alerts.
-
-```azurecli
-az monitor metrics alert --help
-```
-
-### View all the metric alerts in a resource group
-
-```azurecli
-az monitor metrics alert list -g {ResourceGroup}
-```
-
-### See the details of a particular metric alert rule
-
-Use the name or the resource ID of the rule in the following commands:
-
-```azurecli
-az monitor metrics alert show -g {ResourceGroup} -n {AlertRuleName}
-```
-
-```azurecli
-az monitor metrics alert show --ids {RuleResourceId}
-```
-
-### Disable a metric alert rule
-
-```azurecli
-az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false
-```
-
-### Delete a metric alert rule
-
-```azurecli
-az monitor metrics alert delete -g {ResourceGroup} -n {AlertRuleName}
-```
+1. Use the options of the `az monitor metrics alert` CLI command shown in this table:
+
+
+ |What you want to do|CLI command |
+ |||
+ |View all the metric alerts in a resource group|`az monitor metrics alert list -g {ResourceGroup}`|
+ |See the details of a metric alert rule|`az monitor metrics alert show -g {ResourceGroup} -n {AlertRuleName}`|
+ | |`az monitor metrics alert show --ids {RuleResourceId}`|
+ |Disable a metric alert rule|`az monitor metrics alert update -g {ResourceGroup} -n {AlertRuleName} --enabled false`|
+ |Delete a metric alert rule|`az monitor metrics alert delete -g {ResourceGroup} -n {AlertRuleName}`|
+ |Learn more about the command|`az monitor metrics alert --help`|
## Manage metric alert rules with PowerShell
This section describes how to manage log alerts using the cross-platform [Azure
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**.-
-You can use commands with ``--help`` option to learn more about the command and how to use it. For example, the following command shows you the list of commands available for creating, viewing, and managing log alerts.
-
-```azurecli
-az monitor scheduled-query --help
-```
-
-### View all the log alert rules in a resource group
-
-```azurecli
-az monitor scheduled-query list -g {ResourceGroup}
-```
-
-### See the details of a log alert rule
-
-Use the name or the resource ID of the rule in the following command:
-
-```azurecli
-az monitor scheduled-query show -g {ResourceGroup} -n {AlertRuleName}
-```
-```azurecli
-az monitor scheduled-query show --ids {RuleResourceId}
-```
-
-### Disable a log alert rule
-
-```azurecli
-az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --disabled true
-```
-
-### Delete a log alert rule
-
-```azurecli
-az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}
-```
+1. Use the options of the `az monitor scheduled-query` CLI command shown in this table:
+
+
+ |What you want to do|CLI command |
+ |||
+ |View all the log alert rules in a resource group|`az monitor scheduled-query list -g {ResourceGroup}`|
+ |See the details of a log alert rule|`az monitor scheduled-query show -g {ResourceGroup} -n {AlertRuleName}`|
+ | |`az monitor scheduled-query show --ids {RuleResourceId}`|
+ |Disable a log alert rule|`az monitor scheduled-query update -g {ResourceGroup} -n {AlertRuleName} --disabled true`|
+ |Delete a log alert rule|`az monitor scheduled-query delete -g {ResourceGroup} -n {AlertRuleName}`|
+ |Learn more about the command|`az monitor scheduled-query --help`|
### Manage log alert rules using the Azure Resource Manager CLI with [templates](./alerts-log-create-templates.md)
az deployment group create \
A 201 response is returned on successful creation. 200 is returned on successful updates.
+## Manage log alert rules with PowerShell
+
+Log alert rules have this dedicated PowerShell cmdlet:
+- [New-AzScheduledQueryRule](/powershell/module/az.monitor/new-azscheduledqueryrule): Creates a new log alert rule or updates an existing log alert rule.
## Manage activity log alert rules using PowerShell Activity log alerts have these dedicated PowerShell cmdlets:
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
This table can help you decide when to use what type of alert. For more detailed
|Alert Type |When to Use |Pricing Information| ||||
-|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed, so metric alerts are less expensive than log alerts. If the data you want to monitor is available in metric data, using metric alerts is recommended.|Each metrics alert rule is charged based on the number of time-series that are monitored. |
-|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts. Log alerts are more expensive than metric alerts.|Each Log Alert rule is billed based the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost will also depend on the number of time series created by the dimensions resulting from your query. |
+|Metric alert|Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Metric data is stored in the system already pre-computed. We recommend using metric alerts if the data you want to monitor is available in metric data.|Each metrics alert rule is charged based on the number of time-series that are monitored. |
+|Log alert|Log alerts allow you to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of KQL for data manipulation using log alerts.|Each Log Alert rule is billed based on the interval at which the log query is evaluated (more frequent query evaluation results in a higher cost). Additionally, for Log Alerts configured for [at scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
|Activity Log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource, for example, a restart, a shutdown, or the creation or deletion of a resource.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts (preview)| Prometheus alerts are primarily used for alerting on performance and health of Kubernetes clusters (including AKS). The alert rules are based on PromQL, which is an open source query language. | There is no charge for Prometheus alerts during the preview period. | ## Metric alerts
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
In Java, many dependency calls can be automatically tracked by using the
You use this call if you want to track calls that the automated tracking doesn't catch. To turn off the standard dependency-tracking module in C#, edit [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) and delete the reference to `DependencyCollector.DependencyTrackingTelemetryModule`. For Java, see
-[Suppressing specific autocollected telemetry](./java-standalone-config.md#suppressing-specific-auto-collected-telemetry).
+[Suppressing specific autocollected telemetry](./java-standalone-config.md#suppress-specific-auto-collected-telemetry).
### Dependencies in Log Analytics
azure-monitor Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md
You can set up availability tests for any HTTP or HTTPS endpoint that's accessib
There are four types of availability tests:
-* [URL ping test (classic)](monitor-web-app-availability.md): You can create this simple test through the portal to validate whether an endpoint is responding and measure performance associated with that response. You can also set custom success criteria coupled with more advanced features, like parsing dependent requests and allowing for retries.
-* [Standard test](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes SSL certificate validity, proactive lifetime check, HTTP request verb (for example `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
+* [URL ping test (classic)](monitor-web-app-availability.md): You can create this test through the Azure portal to validate whether an endpoint is responding and measure performance associated with that response. You can also set custom success criteria coupled with more advanced features, like parsing dependent requests and allowing for retries.
+* [Standard test](availability-standard-tests.md): This single request test is similar to the URL ping test. It includes TLS/SSL certificate validity, proactive lifetime check, HTTP request verb (for example, `GET`, `HEAD`, or `POST`), custom headers, and custom data associated with your HTTP request.
* [Multi-step web test (classic)](availability-multistep.md): You can play back this recording of a sequence of web requests to test more complex scenarios. Multi-step web tests are created in Visual Studio Enterprise and uploaded to the portal, where you can run them. * [Custom TrackAvailability test](availability-azure-functions.md): If you decide to create a custom application to run availability tests, you can use the [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) method to send the results to Application Insights.
There are four types of availability tests:
You can create up to 100 availability tests per Application Insights resource. > [!NOTE]
-> Availability tests are stored encrypted, according to [Microsoft Azure Data Encryption at rest](../../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services) policies.
+> Availability tests are stored encrypted, according to [Azure data encryption at rest](../../security/fundamentals/encryption-atrest.md#encryption-at-rest-in-microsoft-cloud-services) policies.
## Troubleshooting
See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/ap
* [Availability alerts](availability-alerts.md) * [URL tests](monitor-web-app-availability.md)
-* [Standard Tests](availability-standard-tests.md)
+* [Standard tests](availability-standard-tests.md)
* [Multi-step web tests](availability-multistep.md) * [Create and run custom availability tests using Azure Functions](availability-azure-functions.md)
-* [Web Tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
+* [Web tests Azure Resource Manager template](/azure/templates/microsoft.insights/webtests?tabs=json)
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
# Private testing
-If you want to use availability tests on internal servers that run behind a firewall, there are two possible solutions: public ping test enablement and disconnected/no ingress scenarios.
+If you want to use availability tests on internal servers that run behind a firewall, there are two possible solutions: public availability test enablement and disconnected/no ingress scenarios.
-## Public ping test enablement
+## Public availability test enablement
> [!NOTE] > If you donΓÇÖt want to allow any ingress to your environment, then use the method in the [Disconnected or no ingress scenarios](#disconnected-or-no-ingress-scenarios) section.
If you want to use availability tests on internal servers that run behind a fire
Configure your firewall to permit incoming requests from our service. -- [Service tags](../../virtual-network/service-tags-overview.md) are a simple way to enable Azure services without having to authorize individual IPs or maintain an up-to-date list. Service tags can be used across Azure Firewall and Network Security Groups to allow our service access. **ApplicationInsightsAvailability** is the Service tag dedicated to our ping testing service.
+- [Service tags](../../virtual-network/service-tags-overview.md) are a simple way to enable Azure services without having to authorize individual IPs or maintain an up-to-date list. Service tags can be used across Azure Firewall and Network Security Groups to allow our service access. **ApplicationInsightsAvailability** is the Service tag dedicated to our ping testing service, covering both URL ping tests and Standard availability tests.
1. If you are using [Azure Network Security Groups](../../virtual-network/network-security-groups-overview.md), go to your Network Security group resource and select **inbound security rules** under *Settings* then select **Add**. :::image type="content" source="media/availability-private-test/add.png" alt-text="Screenshot of the inbound security rules tab in the network security group resource.":::
azure-monitor Custom Operations Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md
Title: Track custom operations with Azure Application Insights .NET SDK
-description: Tracking custom operations with Azure Application Insights .NET SDK
+ Title: Track custom operations with Application Insights .NET SDK
+description: Learn how to track custom operations with the Application Insights .NET SDK.
ms.devlang: csharp
# Track custom operations with Application Insights .NET SDK
-Azure Application Insights SDKs automatically track incoming HTTP requests and calls to dependent services, such as HTTP requests and SQL queries. Tracking and correlation of requests and dependencies give you visibility into the whole application's responsiveness and reliability across all microservices that combine this application.
+Application Insights SDKs automatically track incoming HTTP requests and calls to dependent services, such as HTTP requests and SQL queries. Tracking and correlation of requests and dependencies give you visibility into the whole application's responsiveness and reliability across all microservices that combine this application.
-There is a class of application patterns that can't be supported generically. Proper monitoring of such patterns requires manual code instrumentation. This article covers a few patterns that might require manual instrumentation, such as custom queue processing and running long-running background tasks.
+There's a class of application patterns that can't be supported generically. Proper monitoring of such patterns requires manual code instrumentation. This article covers a few patterns that might require manual instrumentation, such as custom queue processing and running long-running background tasks.
-This document provides guidance on how to track custom operations with the Application Insights SDK. This documentation is relevant for:
+This article provides guidance on how to track custom operations with the Application Insights SDK. This documentation is relevant for:
- Application Insights for .NET (also known as Base SDK) version 2.4+. - Application Insights for web applications (running ASP.NET) version 2.4+. - Application Insights for ASP.NET Core version 2.1+. ## Overview
-An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a context of execution like user name, properties, and result. If operation A was initiated by operation B, then operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations. For more information on operations and telemetry correlation, see [Azure Application Insights telemetry correlation](correlation.md).
+
+An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a context of execution like user name, properties, and result. If operation A was initiated by operation B, then operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations. For more information on operations and telemetry correlation, see [Application Insights telemetry correlation](correlation.md).
In the Application Insights .NET SDK, the operation is described by the abstract class [OperationTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/Extensibility/Implementation/OperationTelemetry.cs) and its descendants [RequestTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/DataContracts/RequestTelemetry.cs) and [DependencyTelemetry](https://github.com/microsoft/ApplicationInsights-dotnet/blob/7633ae849edc826a8547745b6bf9f3174715d4bd/BASE/src/Microsoft.ApplicationInsights/DataContracts/DependencyTelemetry.cs).
-## Incoming operations tracking
-The Application Insights web SDK automatically collects HTTP requests for ASP.NET applications that run in an IIS pipeline and all ASP.NET Core applications. There are community-supported solutions for other platforms and frameworks. However, if the application isn't supported by any of the standard or community-supported solutions, you can instrument it manually.
+## Incoming operations tracking
+
+The Application Insights web SDK automatically collects HTTP requests for ASP.NET applications that run in an IIS pipeline and all ASP.NET Core applications. There are community-supported solutions for other platforms and frameworks. If the application isn't supported by any of the standard or community-supported solutions, you can instrument it manually.
-Another example that requires custom tracking is the worker that receives items from the queue. For some queues, the call to add a message to this queue is tracked as a dependency. However, the high-level operation that describes message processing is not automatically collected.
+Another example that requires custom tracking is the worker that receives items from the queue. For some queues, the call to add a message to this queue is tracked as a dependency. The high-level operation that describes message processing isn't automatically collected.
Let's see how such operations could be tracked. On a high level, the task is to create `RequestTelemetry` and set known properties. After the operation is finished, you track the telemetry. The following example demonstrates this task. ### HTTP request in Owin self-hosted app+ In this example, trace context is propagated according to the [HTTP Protocol for Correlation](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md). You should expect to receive headers that are described there. ```csharp public class ApplicationInsightsMiddleware : OwinMiddleware {
- // you may create a new TelemetryConfiguration instance, reuse one you already have
+ // You may create a new TelemetryConfiguration instance, reuse one you already have,
// or fetch the instance created by Application Insights SDK. private readonly TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault(); private readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);
public class ApplicationInsightsMiddleware : OwinMiddleware
} ```
-The HTTP Protocol for Correlation also declares the `Correlation-Context` header. However, it's omitted here for simplicity.
+The HTTP Protocol for Correlation also declares the `Correlation-Context` header. It's omitted here for simplicity.
## Queue instrumentation
-While there are [W3C Trace Context](https://www.w3.org/TR/trace-context/) and [HTTP Protocol for Correlation](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md) to pass correlation details with HTTP request, every queue protocol has to define how the same details are passed along the queue message. Some queue protocols (such as AMQP) allow passing additional metadata and some others (such Azure Storage Queue) require the context to be encoded into the message payload.
+
+The [W3C Trace Context](https://www.w3.org/TR/trace-context/) and [HTTP Protocol for Correlation](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/HttpCorrelationProtocol.md) pass correlation details with HTTP requests, but every queue protocol has to define how the same details are passed along the queue message. Some queue protocols, such as AMQP, allow passing more metadata. Other protocols, such as Azure Storage Queue, require the context to be encoded into the message payload.
> [!NOTE]
-> * **Cross-component tracing is not supported for queues yet** With HTTP, if your producer and consumer send telemetry to different Application Insights resources, Transaction Diagnostics Experience and Application Map show transactions and map end-to-end. In case of queues this is not supported yet.
+> Cross-component tracing isn't supported for queues yet.
+>
+> With HTTP, if your producer and consumer send telemetry to different Application Insights resources, transaction diagnostics experience and Application Map show transactions and map end-to-end. In the case of queues, this capability isn't supported yet.
-### Service Bus Queue
-Refer to [Distributed tracing and correlation through Service Bus messaging](../../service-bus-messaging/service-bus-end-to-end-tracing.md#distributed-tracing-and-correlation-through-service-bus-messaging) for tracing information.
+### Service Bus queue
+
+For tracing information, see [Distributed tracing and correlation through Azure Service Bus messaging](../../service-bus-messaging/service-bus-end-to-end-tracing.md#distributed-tracing-and-correlation-through-service-bus-messaging).
> [!IMPORTANT] > The WindowsAzure.ServiceBus and Microsoft.Azure.ServiceBus packages are deprecated. ### Azure Storage queue
-The following example shows how to track the [Azure Storage queue](../../storage/queues/storage-dotnet-how-to-use-queues.md) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
-The Storage queue has an HTTP API. All calls to the queue are tracked by the Application Insights Dependency Collector for HTTP requests.
-It is configured by default on ASP.NET and ASP.NET Core applications, with other kinds of applicaiton, you can refer to [console applications documentation](./console.md)
+The following example shows how to track the [Azure Storage queue](../../storage/queues/storage-dotnet-how-to-use-queues.md) operations and correlate telemetry between the producer, the consumer, and Azure Storage.
+
+The Storage queue has an HTTP API. All calls to the queue are tracked by the Application Insights Dependency Collector for HTTP requests. It's configured by default on ASP.NET and ASP.NET Core applications. With other kinds of applications, see the [Console applications documentation](./console.md).
You also might want to correlate the Application Insights operation ID with the Storage request ID. For information on how to set and get a Storage request client and a server request ID, see [Monitor, diagnose, and troubleshoot Azure Storage](../../storage/common/storage-monitoring-diagnosing-troubleshooting.md#end-to-end-tracing). #### Enqueue
-Because Storage queues support the HTTP API, all operations with the queue are automatically tracked by Application Insights. In many cases, this instrumentation should be enough. However, to correlate traces on the consumer side with producer traces, you must pass some correlation context similarly to how we do it in the HTTP Protocol for Correlation.
+
+Because Storage queues support the HTTP API, all operations with the queue are automatically tracked by Application Insights. In many cases, this instrumentation should be enough. To correlate traces on the consumer side with producer traces, you must pass some correlation context similarly to how we do it in the HTTP Protocol for Correlation.
This example shows how to track the `Enqueue` operation. You can: - **Correlate retries (if any)**: They all have one common parent that's the `Enqueue` operation. Otherwise, they're tracked as children of the incoming request. If there are multiple logical requests to the queue, it might be difficult to find which call resulted in retries. - **Correlate Storage logs (if and when needed)**: They're correlated with Application Insights telemetry.
-The `Enqueue` operation is the child of a parent operation (for example, an incoming HTTP request). The HTTP dependency call is the child of the `Enqueue` operation and the grandchild of the incoming request:
+The `Enqueue` operation is the child of a parent operation. An example is an incoming HTTP request. The HTTP dependency call is the child of the `Enqueue` operation and the grandchild of the incoming request.
```csharp public async Task Enqueue(CloudQueue queue, string message)
To reduce the amount of telemetry your application reports or if you don't want
- Create (and start) a new `Activity` instead of starting the Application Insights operation. You do *not* need to assign any properties on it except the operation name. - Serialize `yourActivity.Id` into the message payload instead of `operation.Telemetry.Id`. You can also use `Activity.Current.Id`. - #### Dequeue
-Similarly to `Enqueue`, an actual HTTP request to the Storage queue is automatically tracked by Application Insights. However, the `Enqueue` operation presumably happens in the parent context, such as an incoming request context. Application Insights SDKs automatically correlate such an operation (and its HTTP part) with the parent request and other telemetry reported in the same scope.
-The `Dequeue` operation is tricky. The Application Insights SDK automatically tracks HTTP requests. However, it doesn't know the correlation context until the message is parsed. It's not possible to correlate the HTTP request to get the message with the rest of the telemetry especially when more than one message is received.
+Similarly to `Enqueue`, an actual HTTP request to the Storage queue is automatically tracked by Application Insights. The `Enqueue` operation presumably happens in the parent context, such as an incoming request context. Application Insights SDKs automatically correlate such an operation, and its HTTP part, with the parent request and other telemetry reported in the same scope.
+
+The `Dequeue` operation is tricky. The Application Insights SDK automatically tracks HTTP requests. But it doesn't know the correlation context until the message is parsed. It's not possible to correlate the HTTP request to get the message with the rest of the telemetry, especially when more than one message is received.
```csharp public async Task<MessagePayload> Dequeue(CloudQueue queue)
public async Task<MessagePayload> Dequeue(CloudQueue queue)
#### Process
-In the following example, an incoming message is tracked in a manner similarly to incoming HTTP request:
+In the following example, an incoming message is tracked in a manner similar to an incoming HTTP request:
```csharp public async Task Process(MessagePayload message)
When you instrument message deletion, make sure you set the operation (correlati
- Start the `Activity`. - Track dequeue, process, and delete operations by using `Start/StopOperation` helpers. Do it from the same asynchronous control flow (execution context). In this way, they're correlated properly. - Stop the `Activity`.-- Use `Start/StopOperation`, or call `Track` telemetry manually.
+- Use `Start/StopOperation` or call `Track` telemetry manually.
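A rough sketch of that flow follows, assuming the `Microsoft.ApplicationInsights` `TelemetryClient` and using `Task.Delay` as a stand-in for the real queue calls; it's an illustration of the pattern, not the library's prescribed implementation:

```csharp
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public async Task DequeueProcessDeleteAsync(TelemetryClient telemetryClient)
{
    // Start an Activity that carries the correlation context for this message.
    var activity = new Activity("Queue.ProcessMessage");
    activity.Start();

    try
    {
        // Track each step from the same asynchronous control flow so the
        // dequeue, process, and delete operations are correlated properly.
        var dequeueOperation = telemetryClient.StartOperation<DependencyTelemetry>("Dequeue");
        await Task.Delay(10); // placeholder for the real dequeue call
        telemetryClient.StopOperation(dequeueOperation);

        var processOperation = telemetryClient.StartOperation<DependencyTelemetry>("Process");
        await Task.Delay(10); // placeholder for the real message processing
        telemetryClient.StopOperation(processOperation);

        var deleteOperation = telemetryClient.StartOperation<DependencyTelemetry>("Delete");
        await Task.Delay(10); // placeholder for the real delete call
        telemetryClient.StopOperation(deleteOperation);
    }
    finally
    {
        activity.Stop();
    }
}
```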
-### Dependency Types
+### Dependency types
-Application Insights uses dependency type to customize UI experiences. For queues it recognizes following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./transaction-diagnostics.md):
-- `Azure queue` for Azure Storage Queues
+Application Insights uses dependency type to customize UI experiences. For queues, it recognizes the following types of `DependencyTelemetry` that improve [Transaction diagnostics experience](./transaction-diagnostics.md):
+
+- `Azure queue` for Azure Storage queues
- `Azure Event Hubs` for Azure Event Hubs - `Azure Service Bus` for Azure Service Bus ### Batch processing+ With some queues, you can dequeue multiple messages with one request. Processing such messages is presumably independent, and the messages belong to different logical operations. It's not possible to correlate the `Dequeue` operation to a particular message being processed. Each message should be processed in its own asynchronous control flow. For more information, see the [Outgoing dependencies tracking](#outgoing-dependencies-tracking) section. ## Long-running background tasks
-Some applications start long-running operations that might be caused by user requests. From the tracing/instrumentation perspective, it's not different from request or dependency instrumentation:
+Some applications start long-running operations that might be caused by user requests. From the tracing/instrumentation perspective, it's not different from request or dependency instrumentation:
```csharp async Task BackgroundTask()
In this example, `telemetryClient.StartOperation` creates `DependencyTelemetry`
When the task starts from the background thread that doesn't have any operation (`Activity`) associated with it, `BackgroundTask` doesn't have any parent. However, it can have nested operations. All telemetry items reported from the task are correlated to the `DependencyTelemetry` created in `BackgroundTask`. ## Outgoing dependencies tracking+ You can track your own dependency kind or an operation that's not supported by Application Insights. The `Enqueue` method in the Service Bus queue or the Storage queue can serve as examples for such custom tracking. The general approach for custom dependency tracking is to: -- Call the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties (start time stamp, duration).
+- Call the `TelemetryClient.StartOperation` (extension) method that fills the `DependencyTelemetry` properties that are needed for correlation and some other properties, like the start time stamp and duration.
- Set other custom properties on the `DependencyTelemetry`, such as the name and any other context you need. - Make a dependency call and wait for it. - Stop the operation with `StopOperation` when it's finished.
public async Task RunMyTaskAsync()
} ```
-Disposing operation causes operation to be stopped, so you may do it instead of calling `StopOperation`.
+Disposing of an operation stops it, so you can dispose of the operation instead of calling `StopOperation`.
-*Warning*: in some cases unhanded exception may [prevent](/dotnet/csharp/language-reference/keywords/try-finally) `finally` to be called so operations may not be tracked.
+> [!WARNING]
+> In some cases, an unhandled exception might [prevent](/dotnet/csharp/language-reference/keywords/try-finally) `finally` from being called, so operations might not be tracked.
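For example, here's a minimal sketch that relies on disposal instead of an explicit `StopOperation` call. It assumes a configured `TelemetryClient`, and the `Task.Delay` call stands in for a real dependency call. Note that `using` compiles down to `try`/`finally`, so the caveat about unhandled exceptions applies here too:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public async Task RunTaskWithDisposeAsync(TelemetryClient telemetryClient)
{
    // Disposing the operation holder stops and tracks the operation,
    // so no explicit StopOperation call is needed.
    using (var operation = telemetryClient.StartOperation<DependencyTelemetry>("MyDependency"))
    {
        try
        {
            await Task.Delay(100); // placeholder for the real dependency call
        }
        catch (Exception)
        {
            // Mark the dependency as failed; disposal still tracks it.
            operation.Telemetry.Success = false;
            throw;
        }
    }
}
```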
### Parallel operations processing and tracking
-`StopOperation` only stops the operation that was started. If the current running operation doesn't match the one you want to stop, `StopOperation` does nothing. This situation might happen if you start multiple operations in parallel in the same execution context:
+Calling `StopOperation` only stops the operation that was started. If the current running operation doesn't match the one you want to stop, `StopOperation` does nothing. This situation might happen if you start multiple operations in parallel in the same execution context.
```csharp var firstOperation = telemetryClient.StartOperation<DependencyTelemetry>("task 1");
telemetryClient.StopOperation(firstOperation);
await secondTask; ```
-Make sure you always call `StartOperation` and process operation in the same **async** method to isolate operations running in parallel. If operation is synchronous (or not async), wrap process and track with `Task.Run`:
+Make sure you always call `StartOperation` and process the operation in the same **async** method to isolate operations running in parallel. If the operation is synchronous (or not async), wrap the processing and tracking with `Task.Run`:
```csharp public void RunMyTask(string name)
public async Task RunAllTasks()
} ```
-## ApplicationInsights operations vs System.Diagnostics.Activity
-`System.Diagnostics.Activity` represents the distributed tracing context and is used by frameworks and libraries to create and propagate context inside and outside of the process and correlate telemetry items. Activity works together with `System.Diagnostics.DiagnosticSource` - the notification mechanism between the framework/library to notify about interesting events (incoming or outgoing requests, exceptions, etc).
+## ApplicationInsights operations vs. System.Diagnostics.Activity
+
+`System.Diagnostics.Activity` represents the distributed tracing context and is used by frameworks and libraries to create and propagate context inside and outside of the process and correlate telemetry items. `Activity` works together with `System.Diagnostics.DiagnosticSource`, the notification mechanism that frameworks and libraries use to signal interesting events, like incoming or outgoing requests and exceptions.
-Activities are first-class citizens in Application Insights and automatic dependency and request collection relies heavily on them along with `DiagnosticSource` events. If you create Activity in your application - it would not result in Application Insights telemetry being created. Application Insights needs to receive DiagnosticSource events and know the events names and payloads to translate Activity into telemetry.
+Activities are first-class citizens in Application Insights. Automatic dependency and request collection rely heavily on them along with `DiagnosticSource` events. If you create an `Activity` in your application, it doesn't result in Application Insights telemetry being created. Application Insights needs to receive `DiagnosticSource` events and know the event names and payloads to translate the `Activity` into telemetry.
-Each Application Insights operation (request or dependency) involves `Activity` - when `StartOperation` is called, it creates Activity underneath. `StartOperation` is the recommended way to track request or dependency telemetries manually and ensure everything is correlated.
+Each Application Insights operation (request or dependency) involves an `Activity`. When `StartOperation` is called, it creates an `Activity` underneath. `StartOperation` is the recommended way to track request or dependency telemetry manually and ensure everything is correlated.
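As a minimal sketch (assuming a configured `TelemetryClient`), the following example shows `Activity.Current` being populated inside a `StartOperation` scope and telemetry being correlated with it:

```csharp
using System.Diagnostics;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public void ShowOperationActivity(TelemetryClient telemetryClient)
{
    using (telemetryClient.StartOperation<DependencyTelemetry>("MyOperation"))
    {
        // StartOperation created and started an Activity, so Activity.Current
        // is populated here and carries the correlation (trace) context.
        Activity current = Activity.Current;
        telemetryClient.TrackTrace($"Operation: {current?.OperationName}, id: {current?.Id}");

        // Telemetry tracked inside this scope is correlated with the operation.
        telemetryClient.TrackEvent("WorkCompleted");
    }
}
```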
## Next steps - Learn the basics of [telemetry correlation](correlation.md) in Application Insights.-- Check out how correlated data powers [Transaction Diagnostics Experience](./transaction-diagnostics.md) and [Application Map](./app-map.md).
+- Check out how correlated data powers [transaction diagnostics experience](./transaction-diagnostics.md) and [Application Map](./app-map.md).
- See the [data model](./data-model.md) for Application Insights types and data model. - Report custom [events and metrics](./api-custom-events-metrics.md) to Application Insights. - Check out standard [configuration](configuration-with-applicationinsights-config.md#telemetry-initializers-aspnet) for context properties collection. - Check the [System.Diagnostics.Activity User Guide](https://github.com/dotnet/runtime/blob/master/src/libraries/System.Diagnostics.DiagnosticSource/src/ActivityUserGuide.md) to see how we correlate telemetry.-
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java
-description: How to configure Azure Monitor Application Insights for Java
+description: This article shows you how to configure Azure Monitor Application Insights for Java.
Previously updated : 11/04/2020 Last updated : 10/24/2022 ms.devlang: java
-# Configuration options - Azure Monitor Application Insights for Java
+# Configuration options: Azure Monitor Application Insights for Java
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] ## Connection string and role name
-Connection string and role name are the most common settings needed to get started:
+Connection string and role name are the most common settings you need to get started:
```json {
Connection string and role name are the most common settings needed to get start
} ```
-The connection string is required, and the role name is important anytime you are sending data
-from different applications to the same Application Insights resource.
+The connection string is required. The role name is important anytime you're sending data from different applications to the same Application Insights resource.
-You will find more details and additional configuration options below.
+You'll find more information and configuration options in the following sections.
## Configuration file path By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.2.jar`.
-You can specify your own configuration file path using either
+You can specify your own configuration file path by using one of these two options:
-* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable, or
+* `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable
* `applicationinsights.configuration.file` Java system property If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
-Alternatively, instead of using a configuration file, you can specify the entire _content_ of the json configuration
-via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
+Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
## Connection string
-Connection string is required. You can find your connection string in your Application Insights resource:
+The connection string is required. You can find your connection string in your Application Insights resource.
```json
Connection string is required. You can find your connection string in your Appli
} ```
-You can also set the connection string using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`
-(which will then take precedence over connection string specified in the json configuration).
+You can also set the connection string by using the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`. It then takes precedence over the connection string specified in the JSON configuration.
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.2.jar` is located.
```json {
If you specify a relative path, it will be resolved relative to the directory wh
The file should contain only the connection string and nothing else.
-Not setting the connection string will disable the Java agent.
+Not setting the connection string disables the Java agent.
-If you have multiple applications deployed in the same JVM and want them to send telemetry to different instrumentation
-keys, see [Instrumentation key overrides (preview)](#instrumentation-key-overrides-preview).
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different instrumentation keys, see [Instrumentation key overrides (preview)](#instrumentation-key-overrides-preview).
## Cloud role name
-Cloud role name is used to label the component on the application map.
+The cloud role name is used to label the component on the application map.
If you want to set the cloud role name:
If you want to set the cloud role name:
} ```
-If cloud role name is not set, the Application Insights resource's name will be used to label the component on the application map.
+If the cloud role name isn't set, the Application Insights resource's name is used to label the component on the application map.
-You can also set the cloud role name using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`
-(which will then take precedence over cloud role name specified in the json configuration).
+You can also set the cloud role name by using the environment variable `APPLICATIONINSIGHTS_ROLE_NAME`. It then takes precedence over the cloud role name specified in the JSON configuration.
-Or you can set the cloud role name using the Java system property `applicationinsights.role.name`
-(which will also take precedence over cloud role name specified in the json configuration).
+Or you can set the cloud role name by using the Java system property `applicationinsights.role.name`. It also takes precedence over the cloud role name specified in the JSON configuration.
-If you have multiple applications deployed in the same JVM and want them to send telemetry to different cloud role
-names, see [Cloud role name overrides (preview)](#cloud-role-name-overrides-preview).
+If you have multiple applications deployed in the same JVM and want them to send telemetry to different cloud role names, see [Cloud role name overrides (preview)](#cloud-role-name-overrides-preview).
## Cloud role instance
-Cloud role instance defaults to the machine name.
+The cloud role instance defaults to the machine name.
If you want to set the cloud role instance to something other than the machine name:
If you want to set the cloud role instance to something different rather than th
} ```
-You can also set the cloud role instance using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE`
-(which will then take precedence over cloud role instance specified in the json configuration).
+You can also set the cloud role instance by using the environment variable `APPLICATIONINSIGHTS_ROLE_INSTANCE`. It then takes precedence over the cloud role instance specified in the JSON configuration.
-Or you can set the cloud role instance using the Java system property `applicationinsights.role.instance`
-(which will also take precedence over cloud role instance specified in the json configuration).
+Or you can set the cloud role instance by using the Java system property `applicationinsights.role.instance`.
+It also takes precedence over the cloud role instance specified in the JSON configuration.
## Sampling
Or you can set the cloud role instance using the Java system property `applicati
> Sampling can be a great way to reduce the cost of Application Insights. Make sure to set up your sampling > configuration appropriately for your use case.
-Sampling is request-based, meaning if a request is captured (sampled), then so are its dependencies, logs and
-exceptions.
+Sampling is request based, which means that if a request is captured (sampled), so are its dependencies, logs, and exceptions.
-Furthermore, sampling is trace ID based, to help ensure consistent sampling decisions across different services.
+Sampling is also based on trace ID to help ensure consistent sampling decisions across different services.
-### Rate-Limited Sampling
+### Rate-limited sampling
-Starting from 3.4.2, rate-limited sampling is available, and is now the default.
+Starting from 3.4.2, rate-limited sampling is available and is now the default.
If no sampling has been configured, the default is now rate-limited sampling configured to capture at most (approximately) 5 requests per second, along with all the dependencies and logs on those requests.
-This replaces the prior default which was to capture all requests.
-If you still wish to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the
-sampling percentage to 100.
+This configuration replaces the prior default, which was to capture all requests. If you still want to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the sampling percentage to 100.
> [!NOTE]
-> The rate-limited sampling is approximate, because internally it must adapt a "fixed" sampling percentage over
-> time in order to emit accurate item counts on each telemetry record. Internally, the rate-limited sampling is
-> tuned to adapt quickly (0.1 seconds) to new application loads, so you should not see it exceed the configured rate by
-> much, or for very long.
+> The rate-limited sampling is approximate because internally it must adapt a "fixed" sampling percentage over time to emit accurate item counts on each telemetry record. Internally, the rate-limited sampling is
+> tuned to adapt quickly (0.1 seconds) to new application loads. For this reason, you shouldn't see it exceed the configured rate by much, or for very long.
-Here is an example how to set the sampling to capture at most (approximately) 1 request per second:
+This example shows how to set the sampling to capture at most (approximately) 1 request per second:
```json {
Here is an example how to set the sampling to capture at most (approximately) 1
} ```
-Note that `requestsPerSecond` can be a decimal, so you can configure it to capture less than one request per second if you wish.
-For example, a value of `0.5` means capture at most 1 request every 2 seconds.
+Note that `requestsPerSecond` can be a decimal, so you can configure it to capture less than 1 request per second if you want. For example, a value of `0.5` means capture at most 1 request every 2 seconds.
-You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_REQUESTS_PER_SECOND`
-(which will then take precedence over rate limit specified in the json configuration).
+You can also set this rate limit by using the environment variable `APPLICATIONINSIGHTS_SAMPLING_REQUESTS_PER_SECOND`. It then takes precedence over the rate limit specified in the JSON configuration.
-### Fixed-Percentage Sampling
+### Fixed-percentage sampling
-Here is an example how to set the sampling to capture approximately a third of all requests:
+This example shows how to set the sampling to capture approximately a third of all requests:
```json {
Here is an example how to set the sampling to capture approximately a third of a
} ```
-You can also set the sampling percentage using the environment variable `APPLICATIONINSIGHTS_SAMPLING_PERCENTAGE`
-(which will then take precedence over sampling percentage specified in the json configuration).
+You can also set the sampling percentage by using the environment variable `APPLICATIONINSIGHTS_SAMPLING_PERCENTAGE`. It then takes precedence over the sampling percentage specified in the JSON configuration.
> [!NOTE]
-> For the sampling percentage, choose a percentage that is close to 100/N where N is an integer.
-> Currently sampling doesn't support other values.
+> For the sampling percentage, choose a percentage that's close to 100/N, where N is an integer. Currently, sampling doesn't support other values.
## Sampling overrides (preview) This feature is in preview, starting from 3.0.3.
-Sampling overrides allow you to override the [default sampling percentage](#sampling), for example:
-* Set the sampling percentage to 0 (or some small value) for noisy health checks.
-* Set the sampling percentage to 0 (or some small value) for noisy dependency calls.
-* Set the sampling percentage to 100 for an important request type (e.g. `/login`)
- even though you have the default sampling configured to something lower.
+Sampling overrides allow you to override the [default sampling percentage](#sampling). For example, you can:
+
+* Set the sampling percentage to 0, or some small value, for noisy health checks.
+* Set the sampling percentage to 0, or some small value, for noisy dependency calls.
+* Set the sampling percentage to 100 for an important request type, such as `/login`, even though you have the default sampling configured to something lower.
-For more information, check out the [sampling overrides](./java-standalone-sampling-overrides.md) documentation.
+For more information, see the [Sampling overrides](./java-standalone-sampling-overrides.md) documentation.
## JMX metrics
-If you want to collect some additional JMX metrics:
+If you want to collect some other JMX metrics:
```json {
If you want to collect some additional JMX metrics:
} ```
-`name` is the metric name that will be assigned to this JMX metric (can be anything).
+In the preceding configuration example:
-`objectName` is the [Object Name](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html)
-of the JMX MBean that you want to collect.
+* `name` is the metric name that will be assigned to this JMX metric (can be anything).
+* `objectName` is the [Object Name](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html) of the JMX MBean that you want to collect.
+* `attribute` is the attribute name inside of the JMX MBean that you want to collect.
-`attribute` is the attribute name inside of the JMX MBean that you want to collect.
-
-Numeric and boolean JMX metric values are supported. Boolean JMX metrics are mapped to `0` for false, and `1` for true.
+Numeric and Boolean JMX metric values are supported. Boolean JMX metrics are mapped to `0` for false and `1` for true.
## Custom dimensions
-If you want to add custom dimensions to all of your telemetry:
+If you want to add custom dimensions to all your telemetry:
```json {
If you want to add custom dimensions to all of your telemetry:
} ```
-`${...}` can be used to read the value from specified environment variable at startup.
+You can use `${...}` to read the value from the specified environment variable at startup.
> [!NOTE]
-> Starting from version 3.0.2, if you add a custom dimension named `service.version`, the value will be stored
-> in the `application_Version` column in the Application Insights Logs table instead of as a custom dimension.
+> Starting from version 3.0.2, if you add a custom dimension named `service.version`, the value is stored in the `application_Version` column in the Application Insights Logs table instead of as a custom dimension.
## Inherited attribute (preview)
-Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry, and have it inherited by dependency telemetry that follows:
+Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry and have it inherited by dependency telemetry that follows:
```json {
Starting from version 3.2.0, if you want to set a custom dimension programmatica
This feature is in preview, starting from 3.4.2.
-Connection string overrides allow you to override the [default connection string](#connection-string), for example:
-* Set one connection string for one http path prefix `/myapp1`.
-* Set another connection string for another http path prefix `/myapp2/`.
+Connection string overrides allow you to override the [default connection string](#connection-string). For example, you can:
+
+* Set one connection string for one HTTP path prefix `/myapp1`.
+* Set another connection string for another HTTP path prefix `/myapp2/`.
```json {
Connection string overrides allow you to override the [default connection string
This feature is in preview, starting from 3.2.3.
-Instrumentation key overrides allow you to override the [default instrumentation key](#connection-string), for example:
-* Set one instrumentation key for one http path prefix `/myapp1`.
-* Set another instrumentation key for another http path prefix `/myapp2/`.
+Instrumentation key overrides allow you to override the [default instrumentation key](#connection-string). For example, you can:
+
+* Set one instrumentation key for one HTTP path prefix `/myapp1`.
+* Set another instrumentation key for another HTTP path prefix `/myapp2/`.
```json {
Instrumentation key overrides allow you to override the [default instrumentation
This feature is in preview, starting from 3.3.0.
-Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name), for example:
-* Set one cloud role name for one http path prefix `/myapp1`.
-* Set another cloud role name for another http path prefix `/myapp2/`.
+Cloud role name overrides allow you to override the [default cloud role name](#cloud-role-name). For example, you can:
+
+* Set one cloud role name for one HTTP path prefix `/myapp1`.
+* Set another cloud role name for another HTTP path prefix `/myapp2/`.
```json {
Cloud role name overrides allow you to override the [default cloud role name](#c
## Autocollect InProc dependencies (preview)
-Starting from version 3.2.0, if you want to capture controller "InProc" dependencies, please use the following configuration:
+Starting from version 3.2.0, if you want to capture controller "InProc" dependencies, use the following configuration:
```json {
Starting from version 3.2.0, if you want to capture controller "InProc" dependen
## Telemetry processors (preview)
-It allows you to configure rules that will be applied to request, dependency and trace telemetry, for example:
- * Mask sensitive data
- * Conditionally add custom dimensions
+You can use telemetry processors to configure rules that will be applied to request, dependency, and trace telemetry. For example, you can:
+
+ * Mask sensitive data.
+ * Conditionally add custom dimensions.
* Update the span name, which is used to aggregate similar telemetry in the Azure portal. * Drop specific span attributes to control ingestion costs.
-For more information, check out the [telemetry processor](./java-standalone-telemetry-processors.md) documentation.
+For more information, see the [Telemetry processor](./java-standalone-telemetry-processors.md) documentation.
> [!NOTE]
-> If you are looking to drop specific (whole) spans for controlling ingestion cost,
-> see [sampling overrides](./java-standalone-sampling-overrides.md).
+> If you want to drop specific (whole) spans for controlling ingestion cost, see [Sampling overrides](./java-standalone-sampling-overrides.md).
## Auto-collected logging
-Log4j, Logback, JBoss Logging, and java.util.logging are auto-instrumented,
-and logging performed via these logging frameworks is auto-collected.
+Log4j, Logback, JBoss Logging, and java.util.logging are auto-instrumented. Logging performed via these logging frameworks is auto-collected.
+
+Logging is only captured if it:
-Logging is only captured if it first meets the level that is configured for the logging framework,
-and second, also meets the level that is configured for Application Insights.
+* Meets the level that's configured for the logging framework.
+* Also meets the level that's configured for Application Insights.
-For example, if your logging framework is configured to log `WARN` (and above) from package `com.example`,
-and Application Insights is configured to capture `INFO` (and above),
-then Application Insights will only capture `WARN` (and above) from package `com.example`.
+For example, if your logging framework is configured to log `WARN` (and above) from the package `com.example`,
+and Application Insights is configured to capture `INFO` (and above), Application Insights will only capture `WARN` (and above) from the package `com.example`.
The default level configured for Application Insights is `INFO`. If you want to change this level:
The default level configured for Application Insights is `INFO`. If you want to
} ```
-You can also set the level using the environment variable `APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL`
-(which will then take precedence over level specified in the json configuration).
+You can also set the level by using the environment variable `APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL`. It then takes precedence over the level specified in the JSON configuration.
-These are the valid `level` values that you can specify in the `applicationinsights.json` file, and how they correspond to logging levels in different logging frameworks:
+The following table shows the valid `level` values that you can specify in the `applicationinsights.json` file and how they correspond to logging levels in different logging frameworks.
-| level | Log4j | Logback | JBoss | JUL |
+| Level | Log4j | Logback | JBoss | JUL |
|-|--||--|| | OFF | OFF | OFF | OFF | OFF | | FATAL | FATAL | ERROR | FATAL | SEVERE |
These are the valid `level` values that you can specify in the `applicationinsig
| ALL | ALL | ALL | ALL | ALL | > [!NOTE]
-> If an exception object is passed to the logger, then the log message (and exception object details)
-> will show up in the Azure portal under the `exceptions` table instead of the `traces` table.
-> If you want to see the log messages across both the `traces` and `exceptions` tables,
-> you can write a Logs (Kusto) query to union across them, e.g.
+> If an exception object is passed to the logger, the log message (and exception object details) will show up in the Azure portal under the `exceptions` table instead of the `traces` table. If you want to see the log messages across both the `traces` and `exceptions` tables, you can write a Logs (Kusto) query to union across them. For example:
> > ``` > union traces, (exceptions | extend message = outerMessage)
You can enable the `Marker` property for Logback and Log4j 2:
This feature is in preview, starting from 3.4.2.
-### Code properties for Logback (preview)
-
-You can enable code properties (_FileName_, _ClassName_, _MethodName_, _LineNumber_) for Logback:
+You can enable code properties, such as `FileName`, `ClassName`, `MethodName`, and `LineNumber`, for Logback:
```json {
This feature is in preview, starting from 3.4.2.
### LoggingLevel
-Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
+Starting from version 3.3.0, `LoggingLevel` isn't captured by default as part of the Traces custom dimension because that data is already captured in the `SeverityLevel` field.
-If needed, you can re-enable the previous behavior:
+If needed, you can temporarily re-enable the previous behavior:
```json {
If needed, you can re-enable the previous behavior:
} ```
-We will remove this configuration option in 4.0.0.
- ## Auto-collected Micrometer metrics (including Spring Boot Actuator metrics)
-If your application uses [Micrometer](https://micrometer.io),
-then metrics that are sent to the Micrometer global registry are auto-collected.
+If your application uses [Micrometer](https://micrometer.io), metrics that are sent to the Micrometer global registry are auto-collected.
-Also, if your application uses
-[Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html),
-then metrics configured by Spring Boot Actuator are also auto-collected.
+Also, if your application uses [Spring Boot Actuator](https://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-features.html), metrics configured by Spring Boot Actuator are also auto-collected.
-To disable auto-collection of Micrometer metrics (including Spring Boot Actuator metrics):
+To disable auto-collection of Micrometer metrics and Spring Boot Actuator metrics:
> [!NOTE]
-> Custom metrics are billed separately and may generate additional costs. Make sure to check the detailed [pricing information](https://azure.microsoft.com/pricing/details/monitor/). To disable the Micrometer and Spring Actuator metrics, add the below configuration to your config file.
+> Custom metrics are billed separately and might generate extra costs. Make sure to check the [Pricing information](https://azure.microsoft.com/pricing/details/monitor/). To disable the Micrometer and Spring Boot Actuator metrics, add the following configuration to your config file.
```json {
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
## JDBC query masking
-Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data.
+Literal values in JDBC queries are masked by default to avoid accidentally capturing sensitive data.
-Starting from 3.4.2, this behavior can be disabled if desired, e.g.
+Starting from 3.4.2, this behavior can be disabled. For example:
```json {
Starting from 3.4.2, this behavior can be disabled if desired, e.g.
## Mongo query masking
-Literal values in Mongo queries are masked by default in order to avoid accidentally capturing sensitive data.
+Literal values in Mongo queries are masked by default to avoid accidentally capturing sensitive data.
-Starting from 3.4.2, this behavior can be disabled if desired, e.g.
+Starting from 3.4.2, this behavior can be disabled. For example:
```json {
Starting from version 3.3.0, you can capture request and response headers on you
} ```
-The header names are case-insensitive.
+The header names are case insensitive.
-The examples above will be captured under property names `http.request.header.my_header_a` and
+The preceding examples will be captured under the property names `http.request.header.my_header_a` and
`http.response.header.my_header_b`. Similarly, you can capture request and response headers on your client (dependency) telemetry:
Similarly, you can capture request and response headers on your client (dependen
} ```
-Again, the header names are case-insensitive, and the examples above will be captured under property names
+Again, the header names are case insensitive. The preceding examples will be captured under the property names
`http.request.header.my_header_c` and `http.response.header.my_header_d`.
-## Http server 4xx response codes
+## HTTP server 4xx response codes
-By default, http server requests that result in 4xx response codes are captured as errors.
+By default, HTTP server requests that result in 4xx response codes are captured as errors.
-Starting from version 3.3.0, you can change this behavior to capture them as success if you prefer:
+Starting from version 3.3.0, you can change this behavior to capture them as success:
```json {
Starting from version 3.3.0, you can change this behavior to capture them as suc
} ```
-## Suppressing specific auto-collected telemetry
+## Suppress specific auto-collected telemetry
-Starting from version 3.0.3, specific auto-collected telemetry can be suppressed using these configuration options:
+Starting from version 3.0.3, specific auto-collected telemetry can be suppressed by using these configuration options:
```json {
You can also suppress these instrumentations by setting these environment variab
* `APPLICATIONINSIGHTS_INSTRUMENTATION_REDIS_ENABLED` * `APPLICATIONINSIGHTS_INSTRUMENTATION_SPRING_SCHEDULING_ENABLED`
-(which will then take precedence over enabled specified in the json configuration).
+These environment variables then take precedence over the `enabled` settings specified in the JSON configuration.
> [!NOTE]
-> If you are looking for more fine-grained control, e.g. to suppress some redis calls but not all redis calls,
-> see [sampling overrides](./java-standalone-sampling-overrides.md).
+> If you're looking for more fine-grained control, for example, to suppress some Redis calls but not all Redis calls, see [Sampling overrides](./java-standalone-sampling-overrides.md).
## Preview instrumentations
-Starting from version 3.2.0, the following preview instrumentations can be enabled:
+Starting from version 3.2.0, you can enable the following preview instrumentations:
``` {
Starting from version 3.2.0, the following preview instrumentations can be enabl
} } ```+ > [!NOTE]
-> Akka instrumentation is available starting from version 3.2.2
-> Vertx HTTP Library instrumentation is available starting from version 3.3.0
+> Akka instrumentation is available starting from version 3.2.2. Vertx HTTP Library instrumentation is available starting from version 3.3.0.
## Metric interval
Starting from version 3.0.3, you can change this interval:
} ```
-The setting applies to all of these metrics:
+The setting applies to the following metrics:
-* Default performance counters, e.g. CPU and Memory
-* Default custom metrics, e.g. Garbage collection timing
-* Configured JMX metrics ([see above](#jmx-metrics))
-* Micrometer metrics ([see above](#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics))
+* **Default performance counters**: For example, CPU and memory
+* **Default custom metrics**: For example, garbage collection timing
+* **Configured JMX metrics**: [See the JMX metric section](#jmx-metrics)
+* **Micrometer metrics**: [See the Auto-collected Micrometer metrics section](#auto-collected-micrometer-metrics-including-spring-boot-actuator-metrics)
## Heartbeat
-By default, Application Insights Java 3.x sends a heartbeat metric once every 15 minutes.
-If you are using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
+By default, Application Insights Java 3.x sends a heartbeat metric once every 15 minutes. If you're using the heartbeat metric to trigger alerts, you can increase the frequency of this heartbeat:
```json {
If you are using the heartbeat metric to trigger alerts, you can increase the fr
``` > [!NOTE]
-> You cannot increase the interval to longer than 15 minutes,
-> because the heartbeat data is also used to track Application Insights usage.
+> You can't increase the interval to longer than 15 minutes because the heartbeat data is also used to track Application Insights usage.
## Authentication (preview)+ > [!NOTE]
-> Authentication feature is available starting from version 3.2.0
+> The authentication feature is available starting from version 3.2.0.
-It allows you to configure agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory Authentication.
-For more information, check out the [Authentication](./azure-ad-authentication.md) documentation.
+You can use authentication to configure the agent to generate [token credentials](/java/api/overview/azure/identity-readme#credentials) that are required for Azure Active Directory authentication.
+For more information, see the [Authentication](./azure-ad-authentication.md) documentation.
-## HTTP Proxy
+## HTTP proxy
-If your application is behind a firewall and cannot connect directly to Application Insights
-(see [IP addresses used by Application Insights](./ip-addresses.md)),
-you can configure Application Insights Java 3.x to use an HTTP proxy:
+If your application is behind a firewall and can't connect directly to Application Insights (see [IP addresses used by Application Insights](./ip-addresses.md)), you can configure Application Insights Java 3.x to use an HTTP proxy:
```json {
you can configure Application Insights Java 3.x to use an HTTP proxy:
} ```
-Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties
-if those are set (and `http.nonProxyHosts` if needed).
+Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if they're set, and `http.nonProxyHosts`, if needed.
## Recovery from ingestion failures
-When sending telemetry to the Application Insights service fails, Application Insights Java 3.x will store the telemetry
-to disk and continue retrying from disk.
+When sending telemetry to the Application Insights service fails, Application Insights Java 3.x stores the telemetry to disk and continues retrying from disk.
-The default limit for disk persistence is 50 Mb. If you have high telemetry volume, or need to be able to recover from
-longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
+The default limit for disk persistence is 50 MB. If you have high telemetry volume or need to be able to recover from longer network or ingestion service outages, you can increase this limit starting from version 3.3.0:
```json {
longer network or ingestion service outages, you can increase this limit startin
## Self-diagnostics
-"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
-
-This functionality can be helpful for spotting and diagnosing issues with Application Insights itself.
+"Self-diagnostics" refers to internal logging from Application Insights Java 3.x. This functionality can be helpful for spotting and diagnosing issues with Application Insights itself.
By default, Application Insights Java 3.x logs at level `INFO` to both the file `applicationinsights.log` and the console, corresponding to this configuration:
and the console, corresponding to this configuration:
} ```
-`destination` can be one of `file`, `console` or `file+console`.
-
-`level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`.
+In the preceding configuration example:
-`path` can be an absolute or relative path. Relative paths are resolved against the directory where
+* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`.
+* `path` can be an absolute or relative path. Relative paths are resolved against the directory where
`applicationinsights-agent-3.4.2.jar` is located.
-`maxSizeMb` is the max size of the log file before it rolls over.
-
-`maxHistory` is the number of rolled over log files that are retained (in addition to the current log file).
-
-Starting from version 3.0.2, you can also set the self-diagnostics `level` using the environment variable
-`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`
-(which will then take precedence over self-diagnostics level specified in the json configuration).
+Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable
+`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
-And starting from version 3.0.3, you can also set the self-diagnostics file location using the environment variable
-`APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH`
-(which will then take precedence over self-diagnostics file path specified in the json configuration).
+Starting from version 3.0.3, you can also set the self-diagnostics file location by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_FILE_PATH`. It then takes precedence over the self-diagnostics file path specified in the JSON configuration.
## An example
-This is just an example to show what a configuration file looks like with multiple components.
-Please configure specific options based on your needs.
+This example shows what a configuration file looks like with multiple components. Configure specific options based on your needs.
```json {
azure-monitor Javascript React Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-react-plugin.md
Title: React plugin for Application Insights JavaScript SDK
-description: How to install and use React plugin for Application Insights JavaScript SDK.
+ Title: React plug-in for Application Insights JavaScript SDK
+description: Learn how to install and use the React plug-in for the Application Insights JavaScript SDK.
ibiza
ms.devlang: javascript
-# React plugin for Application Insights JavaScript SDK
+# React plug-in for Application Insights JavaScript SDK
-React plugin for the Application Insights JavaScript SDK, enables:
+The React plug-in for the Application Insights JavaScript SDK enables:
-- Tracking of route changes-- React components usage statistics
+- Tracking of route changes.
+- React components usage statistics.
-## Getting started
+## Get started
-Install npm package:
+Install the npm package:
```bash
class MyComponent extends React.Component {
... }
-// withAITracking takes 4 parameters ( reactPlugin, Component, ComponentName, className)
-// the first two are required and the other two are optional.
+// withAITracking takes 4 parameters (reactPlugin, Component, ComponentName, className).
+// The first two are required and the other two are optional.
export default withAITracking(reactPlugin, MyComponent); ```
-For `react-router v6` or other scenarios where router history is not exposed, appInsights config `enableAutoRouteTracking` can be used to auto track router changes:
+For `react-router v6` or other scenarios where router history isn't exposed, you can use the Application Insights configuration `enableAutoRouteTracking` to auto-track router changes:
```javascript var reactPlugin = new ReactPlugin();
appInsights.loadAppInsights();
| Name | Default | Description | |||-|
-| history | null | React router history. For more information, see the [react-router package documentation](https://reactrouter.com/web/api/history). To learn how to access the history object outside of components, see the [React-router FAQ](https://github.com/ReactTraining/react-router/blob/master/FAQ.md#how-do-i-access-the-history-object-outside-of-components) |
+| history | null | React router history. For more information, see the [React router package documentation](https://reactrouter.com/web/api/history). To learn how to access the history object outside of components, see the [React router FAQ](https://github.com/ReactTraining/react-router/blob/master/FAQ.md#how-do-i-access-the-history-object-outside-of-components). |
### React components usage tracking To instrument various React components usage tracking, apply the `withAITracking` higher-order component function.
-It will measure time from the `ComponentDidMount` event through the `ComponentWillUnmount` event. However, in order to make this more accurate, it will subtract the time in which the user was idle `React Component Engaged Time = ComponentWillUnmount timestamp - ComponentDidMount timestamp - idle time`.
+It measures time from the `ComponentDidMount` event through the `ComponentWillUnmount` event. To make the result more accurate, it subtracts the time in which the user was idle by using `React Component Engaged Time = ComponentWillUnmount timestamp - ComponentDidMount timestamp - idle time`.
-To see this metric in the Azure portal you need to navigate to the Application Insights resource, select the "Metrics" tab and configure the empty charts to display custom metric name "React Component Engaged Time (seconds)", select aggregation (sum, avg, etc.) of your metric and apply split be "Component Name".
+To see this metric in the Azure portal, go to the Application Insights resource and select the **Metrics** tab. Configure the empty charts to display the custom metric name `React Component Engaged Time (seconds)`. Select the aggregation (for example, sum or avg) of your metric and split by `Component Name`.
-![Screenshot of chart that displays the custom metric "React Component Engaged Time (seconds)" split by "Component Name"](./media/javascript-react-plugin/chart.png)
+![Screenshot that shows a chart that displays the custom metric "React Component Engaged Time (seconds)" split by "Component Name"](./media/javascript-react-plugin/chart.png)
-You can also run custom queries to divide Application Insights data to generate report and visualizations as per your requirements. In the Azure portal navigate to the Application Insights resource, select "Analytics" from the top menu of the Overview tab and run your query.
+You can also run custom queries to divide Application Insights data to generate reports and visualizations as per your requirements. In the Azure portal, go to the Application Insights resource, select **Analytics** from the **Overview** tab, and run your query.
-![Screenshot of custom metric query results.](./media/javascript-react-plugin/query.png)
+![Screenshot that shows custom metric query results.](./media/javascript-react-plugin/query.png)
> [!NOTE]
-> It can take up to 10 minutes for new custom metrics to appear in the Azure Portal.
+> It can take up to 10 minutes for new custom metrics to appear in the Azure portal.
-## Using React Hooks
+## Use React Hooks
-[React Hooks](https://reactjs.org/docs/hooks-reference.html) are an approach to state and life-cycle management in a React application without relying on class-based React components. The Application Insights React plugin provides a number of Hooks integrations that operate in a similar way to the higher-order component approach.
+[React Hooks](https://reactjs.org/docs/hooks-reference.html) are an approach to state and lifecycle management in a React application without relying on class-based React components. The Application Insights React plug-in provides several Hooks integrations that operate in a similar way to the higher-order component approach.
-### Using React Context
+### Use React Context
-The React Hooks for Application Insights are designed to use [React Context](https://reactjs.org/docs/context.html) as a containing aspect for it. To use Context, initialize Application Insights as above, and then import the Context object:
+The React Hooks for Application Insights are designed to use [React Context](https://reactjs.org/docs/context.html) as a container for them. To use Context, initialize Application Insights, and then import the Context object:
```javascript import React from "react";
const App = () => {
}; ```
-This Context Provider will make Application Insights available as a `useContext` Hook within all children components of it.
+This Context Provider makes Application Insights available as a `useContext` Hook within all children components of it:
```javascript import React from "react";
const MyComponent = () => {
export default MyComponent; ```
-### `useTrackMetric`
+### useTrackMetric
-The `useTrackMetric` Hook replicates the functionality of the `withAITracking` higher-order component, without adding an additional component to the component structure. The Hook takes two arguments, first is the Application Insights instance (which can be obtained from the `useAppInsightsContext` Hook), and an identifier for the component for tracking (such as its name).
+The `useTrackMetric` Hook replicates the functionality of the `withAITracking` higher-order component, without adding another component to the component structure. The Hook takes two arguments. The first is the Application Insights instance, which can be obtained from the `useAppInsightsContext` Hook. The second is an identifier for the component you want to track, such as its name.
```javascript import React from "react";
const MyComponent = () => {
export default MyComponent; ```
-It will operate like the higher-order component, but respond to Hooks life-cycle events, rather than a component life-cycle. The Hook needs to be explicitly provided to user events if there is a need to run on particular interactions.
+It operates like the higher-order component, but it responds to Hooks lifecycle events rather than a component lifecycle. The Hook needs to be explicitly provided to user events if there's a need to run on particular interactions.
-### `useTrackEvent`
+### useTrackEvent
-The `useTrackEvent` Hook is used to track any custom event that an application may need to track, such as a button click or other API call. It takes four arguments:
-- Application Insights instance (which can be obtained from the `useAppInsightsContext` Hook).
+The `useTrackEvent` Hook is used to track any custom event that an application might need to track, such as a button click or other API call. It takes four arguments:
+
+- Application Insights instance, which can be obtained from the `useAppInsightsContext` Hook.
- Name for the event.-- Event data object that encapsulates the changes that has to be tracked.-- skipFirstRun (optional) flag to skip calling the `trackEvent` call on initialization. Default value is set to `true` to mimic more closely the way the non-hook version works. With `useEffect` hooks, the effect is triggered on each value update _including_ the initial setting of the value, thereby starting the tracking too early causing potentially unwanted events to be tracked.
+- Event data object that encapsulates the changes that have to be tracked.
+- skipFirstRun (optional) flag to skip the `trackEvent` call on initialization. The default value is set to `true` to mimic more closely the way the non-Hook version works. With `useEffect` Hooks, the effect is triggered on each value update _including_ the initial setting of the value. As a result, tracking starts too early, which causes potentially unwanted events to be tracked.
```javascript import React, { useState, useEffect } from "react";
const MyComponent = () => {
export default MyComponent; ```
-When the Hook is used, a data payload can be provided to it to add additional data to the event when it is stored in Application Insights.
+When the Hook is used, a data payload can be provided to it to add more data to the event when it's stored in Application Insights.
-## React Error Boundaries
+## React error boundaries
-[Error Boundaries](https://reactjs.org/docs/error-boundaries.html) provide a way to gracefully handle an exception when it occurs within a React application, and when such error occurs it's likely that the exception needs to be logged. The React Plugin for Application Insights provides an Error Boundary component that will automatically log the error when it occurs.
+[Error boundaries](https://reactjs.org/docs/error-boundaries.html) provide a way to gracefully handle an exception when it occurs within a React application. When such an error occurs, it's likely that the exception needs to be logged. The React plug-in for Application Insights provides an error boundary component that automatically logs the error when it occurs.
```javascript import React from "react";
const App = () => {
}; ```
-The `AppInsightsErrorBoundary` requires two props to be passed to it, the `ReactPlugin` instance created for the application and a component to be rendered when an error occurs. When an unhandled error occurs, `trackException` is called with the information provided to the Error Boundary and the `onError` component is displayed.
+The `AppInsightsErrorBoundary` requires two props to be passed to it. They're the `ReactPlugin` instance created for the application and a component to be rendered when an error occurs. When an unhandled error occurs, `trackException` is called with the information provided to the error boundary, and the `onError` component appears.
-## Enable Correlation
+## Enable correlation
Correlation generates and sends data that enables distributed tracing and powers the [application map](../app/app-map.md), [end-to-end transaction view](../app/app-map.md#go-to-details), and other diagnostic tools.
-In JavaScript correlation is turned off by default in order to minimize the telemetry we send by default. To enable correlation please reference [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
+In JavaScript, correlation is turned off by default to minimize the telemetry we send. To enable correlation, see the [JavaScript client-side correlation documentation](./javascript.md#enable-distributed-tracing).
### Route tracking
-The React Plugin automatically tracks route changes and collects other React specific telemetry.
+The React plug-in automatically tracks route changes and collects other React-specific telemetry.
> [!NOTE]
-> `enableAutoRouteTracking` should be set to `false` if it set to true then when the route changes duplicate PageViews may be sent.
+> `enableAutoRouteTracking` should be set to `false`. If it's set to `true`, then when the route changes, duplicate `PageViews` can be sent.
-For `react-router v6` or other scenarios where router history is not exposed, you can add `enableAutoRouteTracking: true` to your [setup configuration](#basic-usage).
+For `react-router v6` or other scenarios where router history isn't exposed, you can add `enableAutoRouteTracking: true` to your [setup configuration](#basic-usage).
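As a sketch of where the flag goes (assuming the initialization module from the basic setup; the connection string is a placeholder):

```javascript
// Sketch of a setup configuration with automatic route tracking enabled.
// The connection string is a placeholder value.
import { ApplicationInsights } from "@microsoft/applicationinsights-web";
import { ReactPlugin } from "@microsoft/applicationinsights-react-js";

const reactPlugin = new ReactPlugin();
const appInsights = new ApplicationInsights({
  config: {
    connectionString: "<your-connection-string>",
    extensions: [reactPlugin],
    enableAutoRouteTracking: true, // track route changes when router history isn't exposed
  },
});
appInsights.loadAppInsights();

export { reactPlugin, appInsights };
```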
### PageView
-If a custom `PageView` duration is not provided, `PageView` duration defaults to a value of 0.
+If a custom `PageView` duration isn't provided, `PageView` duration defaults to a value of `0`.
## Sample app
Check out the [Application Insights React demo](https://github.com/Azure-Samples
## Next steps - To learn more about the JavaScript SDK, see the [Application Insights JavaScript SDK documentation](javascript.md).-- To learn about the Kusto query language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md).
+- To learn about the Kusto Query Language and querying data in Log Analytics, see the [Log query overview](../../azure-monitor/logs/log-query-overview.md).
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
To provide feedback:
- To review the source code, see the [Azure Monitor Exporter GitHub repository](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter). - To install the NuGet package, check for updates, or view release notes, see the [Azure Monitor Exporter NuGet Package](https://www.nuget.org/packages/Azure.Monitor.OpenTelemetry.Exporter/) page.-- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Tracing.Customization).
+- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/monitor/Azure.Monitor.OpenTelemetry.Exporter/tests/Azure.Monitor.OpenTelemetry.Exporter.Demo).
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry .NET GitHub repository](https://github.com/open-telemetry/opentelemetry-dotnet). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).
azure-monitor Tutorial App Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-app-dashboards.md
To complete this tutorial:
- Enable the [Application Insights SDK](../app/asp-net.md). > [!NOTE]
-> Required permissions for working with dashboards are discussed in the article on [understanding access control for dashboards](../../azure-portal/azure-portal-dashboard-share-access.md#understanding-access-control-for-dashboards).
+> Required permissions for working with dashboards are discussed in the article on [understanding access control for dashboards](../../azure-portal/azure-portal-dashboard-share-access.md).
## Sign in to Azure
azure-monitor Tutorial Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/tutorial-performance.md
Title: Diagnose performance issues using Azure Application Insights | Microsoft Docs
-description: Tutorial to find and diagnose performance issues in your application using Azure Application Insights.
+ Title: Diagnose performance issues using Application Insights | Microsoft Docs
+description: Tutorial to find and diagnose performance issues in your application by using Application Insights.
Last updated 06/15/2020
-# Find and diagnose performance issues with Azure Application Insights
+# Find and diagnose performance issues with Application Insights
-Azure Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that may be occurring or to identify improvements to the application that would most impact users. This tutorial takes you through the process of analyzing the performance of both the server components of your application and the perspective of the client. You learn how to:
+Application Insights collects telemetry from your application to help analyze its operation and performance. You can use this information to identify problems that might be occurring or to identify improvements to the application that would most affect users. This tutorial takes you through the process of analyzing the performance of your application from both the server side and the client's perspective.
-> [!div class="checklist"]
-> * Identify the performance of server-side operations
-> * Analyze server operations to determine the root cause of slow performance
-> * Identify slowest client-side operations
-> * Analyze details of page views using query language
+You learn how to:
+> [!div class="checklist"]
+> * Identify the performance of server-side operations.
+> * Analyze server operations to determine the root cause of slow performance.
+> * Identify the slowest client-side operations.
+> * Analyze details of page views by using query language.
## Prerequisites
To complete this tutorial:
- Deploy a .NET application to Azure and [enable the Application Insights SDK](../app/asp-net.md). - [Enable the Application Insights profiler](../app/profiler.md) for your application.
-## Log in to Azure
-Log in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
+## Sign in to Azure
+
+Sign in to the [Azure portal](https://portal.azure.com).
## Identify slow server operations
-Application Insights collects performance details for the different operations in your application. By identifying those operations with the longest duration, you can diagnose potential problems or best target your ongoing development to improve the overall performance of the application.
-1. Select **Application Insights** and then select your subscription.
-1. To open the **Performance** panel either select **Performance** under the **Investigate** menu or click the **Server Response Time** graph.
+Application Insights collects performance details for the different operations in your application. By identifying the operations with the longest duration, you can diagnose potential problems or target your ongoing development to improve the overall performance of the application.
+
+1. Select **Application Insights** and then select your subscription.
+1. To open the **Performance** panel, either select **Performance** under the **Investigate** menu or select the **Server response time** graph.
- ![Performance](media/tutorial-performance/1-overview.png)
+ ![Screenshot that shows the Performance view.](media/tutorial-performance/1-overview.png)
-2. The **Performance** panel shows the count and average duration of each operation for the application. You can use this information to identify those operations that most impact users. In this example, the **GET Customers/Details** and **GET Home/Index** are likely candidates to investigate because of their relatively high duration and number of calls. Other operations may have a higher duration but were rarely called, so the effect of their improvement would be minimal.
+1. The **Performance** screen shows the count and average duration of each operation for the application. You can use this information to identify those operations that affect users the most. In this example, the **GET Customers/Details** and **GET Home/Index** are likely candidates to investigate because of their relatively high duration and number of calls. Other operations might have a higher duration but were rarely called, so the effect of their improvement would be minimal.
- ![Performance server panel](media/tutorial-performance/2-server-operations.png)
+ ![Screenshot that shows the Performance server panel.](media/tutorial-performance/2-server-operations.png)
-3. The graph currently shows the average duration of the selected operations over time. You can switch to the 95th percentile to find the performance issues. Add the operations that you're interested in by pinning them to the graph. This shows that there are some peaks worth investigating. Isolate this further by reducing the time window of the graph.
+1. The graph currently shows the average duration of the selected operations over time. You can switch to the 95th percentile to find the performance issues. Add the operations you're interested in by pinning them to the graph. The graph shows that there are some peaks worth investigating. To isolate them further, reduce the time window of the graph.
- ![Pin operations](media/tutorial-performance/3-server-operations-95th.png)
+ ![Screenshot that shows Pin operations.](media/tutorial-performance/3-server-operations-95th.png)
-4. The performance panel on the right shows distribution of durations for different requests for the selected operation. Reduce the window to start around the 95th percentile. The "Top 3 dependencies" insights card, can tell you at a glance that the external dependencies are likely contributing to the slow transactions. Click on the button with number of samples to see a list of the samples. You can then select any sample to see transaction details.
+1. The performance panel on the right shows distribution of durations for different requests for the selected operation. Reduce the window to start around the 95th percentile. The **Top 3 Dependencies** insights card can tell you at a glance that the external dependencies are likely contributing to the slow transactions. Select the button with the number of samples to see a list of the samples. Then select any sample to see transaction details.
-5. You can see at a glance that the call to Fabrikamaccount Azure Table is contributing most to the total duration of the transaction. You can also see that an exception caused it to fail. You can click on any item in the list to see its details on the right side. [Learn more about the transaction diagnostics experience](../app/transaction-diagnostics.md)
+1. You can see at a glance that the call to the Fabrikamaccount Azure Table contributes most to the total duration of the transaction. You can also see that an exception caused it to fail. Select any item in the list to see its details on the right side. [Learn more about the transaction diagnostics experience](../app/transaction-diagnostics.md).
- ![Operation end-to-end details](media/tutorial-performance/4-end-to-end.png)
-
+ ![Screenshot that shows Operation end-to-end transaction details.](media/tutorial-performance/4-end-to-end.png)
-6. The [**Profiler**](../app/profiler-overview.md) helps get further with code level diagnostics by showing the actual code that ran for the operation and the time required for each step. Some operations may not have a trace since the profiler runs periodically. Over time, more operations should have traces. To start the profiler for the operation, click **Profiler traces**.
-5. The trace shows the individual events for each operation so you can diagnose the root cause for the duration of the overall operation. Click one of the top examples, which have the longest duration.
-6. Click **Hot Path** to highlight the specific path of events that most contribute to the total duration of the operation. In this example, you can see that the slowest call is from *FabrikamFiberAzureStorage.GetStorageTableData* method. The part that takes most time is the *CloudTable.CreateIfNotExist* method. If this line of code is executed every time the function gets called, unnecessary network call and CPU resource will be consumed. The best way to fix your code is to put this line in some startup method that only executes once.
+1. The [Profiler](../app/profiler-overview.md) takes the analysis further with code-level diagnostics by showing the actual code that ran for the operation and the time required for each step. Some operations might not have a trace because the Profiler runs periodically. Over time, more operations should have traces. To start the Profiler for the operation, select **Profiler traces**.
+1. The trace shows the individual events for each operation so that you can diagnose the root cause for the duration of the overall operation. Select one of the top examples that has the longest duration.
+1. Select **Hot path** to highlight the specific path of events that contribute the most to the total duration of the operation. In this example, you can see that the slowest call is from the `FabrikamFiberAzureStorage.GetStorageTableData` method. The part that takes the most time is the `CloudTable.CreateIfNotExist` method. If this line of code is executed every time the function gets called, unnecessary network calls and CPU resources are consumed. The best way to fix your code is to move this line into a startup method that executes only once.
- ![Profiler details](media/tutorial-performance/5-hot-path.png)
+ ![Screenshot that shows Profiler details.](media/tutorial-performance/5-hot-path.png)
-7. The **Performance Tip** at the top of the screen supports the assessment that the excessive duration is due to waiting. Click the **waiting** link for documentation on interpreting the different types of events.
+1. The **Performance Tip** at the top of the screen supports the assessment that the excessive duration is because of waiting. Select the **waiting** link for documentation on interpreting the different types of events.
- ![Performance tip](media/tutorial-performance/6-perf-tip.png)
+ ![Screenshot that shows a Performance Tip.](media/tutorial-performance/6-perf-tip.png)
-8. For further analysis, you can click **Download Trace** to download the trace. You can view this data using [PerfView](https://github.com/Microsoft/perfview#perfview-overview).
+1. For further analysis, select **Download Trace** to download the trace. You can view this data by using [PerfView](https://github.com/Microsoft/perfview#perfview-overview).
## Use logs data for server
- Logs provides a rich query language that allows you to analyze all data collected by Application Insights. You can use this to perform deep analysis on request and performance data.
-1. Return to the operation detail panel and click ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png)**View in Logs (Analytics)**
+ Logs provides a rich query language for analyzing all the data that Application Insights collects. You can use it to perform deep analysis on request and performance data.
-2. Logs opens with a query for each of the views in the panel. You can run these queries as they are or modify them for your requirements. The first query shows the duration for this operation over time.
+1. Return to the operation detail panel and select ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png)**View in Logs (Analytics)**.
- ![logs query](media/tutorial-performance/7-request-time-logs.png)
+1. The **Logs** screen opens with a query for each of the views in the panel. You can run these queries as they are or modify them for your requirements. The first query shows the duration for this operation over time.
+ ![Screenshot that shows a logs query.](media/tutorial-performance/7-request-time-logs.png)
## Identify slow client operations
-In addition to identifying server processes to optimize, Application Insights can analyze the perspective of client browsers. This can help you identify potential improvements to client components and even identify issues with different browsers or different locations.
-1. Select **Browser** under **Investigate** then click **Browser Performance** or select **Performance** under **Investigate** and switch to the **Browser** tab by clicking the server/browser toggle button in the top right to open the browser performance summary. This provides a visual summary of various telemetries of your application from the perspective of the browser.
+In addition to identifying server processes to optimize, Application Insights can analyze the perspective of client browsers. This information can help you identify potential improvements to client components and even identify issues with different browsers or different locations.
- ![Browser summary](media/tutorial-performance/8-browser.png)
+1. Select **Browser** under **Investigate** and then select **Browser Performance**. Alternatively, select **Performance** under **Investigate** and switch to the **Browser** tab by selecting the **Server/Browser** toggle button in the upper-right corner to open the browser performance summary. This view provides a visual summary of various telemetries of your application from the perspective of the browser.
-2. Select on one of the operation names then click the blue samples button in the bottom right and select an operation. This will bring up the end-to-end transaction details and on the right side you can view the **Page View Properties**. This allows you to view details of the client requesting the page including the type of browser and its location. This information can assist you in determining whether there are performance issues related to particular types of clients.
+ ![Screenshot that shows the Browser summary.](media/tutorial-performance/8-browser.png)
- ![Page view](media/tutorial-performance/9-page-view-properties.png)
+1. Select one of the operation names, select the **Samples** button at the bottom right, and then select an operation. End-to-end transaction details open on the right side, where you can view the **Page View Properties**. You can view details of the client requesting the page, including the type of browser and its location. This information can assist you in determining whether there are performance issues related to particular types of clients.
+
+ ![Screenshot that shows Page View Properties.](media/tutorial-performance/9-page-view-properties.png)
## Use logs data for client
-Like the data collected for server performance, Application Insights makes all client data available for deep analysis using Logs.
-1. Return to the browser summary and click ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png) **View in Logs (Analytics)**
+Like the data collected for server performance, Application Insights makes all client data available for deep analysis by using logs.
+
+1. Return to the browser summary and select ![Logs icon](media/tutorial-performance/app-viewinlogs-icon.png) **View in Logs (Analytics)**.
-2. Logs opens with a query for each of the views in the panel. The first query shows the duration for different page views over time.
+1. The **Logs** screen opens with a query for each of the views in the panel. The first query shows the duration for different page views over time.
- ![Logs query](media/tutorial-performance/10-page-view-logs.png)
+ ![Screenshot that shows the Logs screen.](media/tutorial-performance/10-page-view-logs.png)
## Next steps
-Now that you've learned how to identify run-time exceptions, advance to the next tutorial to learn how to create alerts in response to failures.
+
+Now that you've learned how to identify and diagnose performance issues, proceed to the next tutorial to learn how to create alerts in response to failures.
> [!div class="nextstepaction"] > [Alert on application health](./tutorial-alert.md)-
azure-monitor Kql Machine Learning Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/kql-machine-learning-azure-monitor.md
+
+ Title: Detect and analyze anomalies with KQL in Azure Monitor
+description: Learn how to use KQL machine learning tools for time series analysis and anomaly detection in Azure Monitor Log Analytics.
+++++ Last updated : 07/01/2022+
+# Customer intent: As a data analyst, I want to use the native machine learning capabilities of Azure Monitor Logs to gain insights from my log data without having to export data outside of Azure Monitor.
+++
+# Tutorial: Detect and analyze anomalies using KQL machine learning capabilities in Azure Monitor
+
+The Kusto Query Language (KQL) includes machine learning operators, functions, and plugins for time series analysis, anomaly detection, forecasting, and root cause analysis. Use these KQL capabilities to perform advanced data analysis in Azure Monitor without the overhead of exporting data to external machine learning tools.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a time series
+> * Identify anomalies in a time series
+> * Tweak anomaly detection settings to refine results
+> * Analyze the root cause of anomalies
+
+> [!NOTE]
+> This tutorial provides links to a Log Analytics demo environment in which you can run the KQL query examples. However, you can implement the same KQL queries and principles in all [Azure Monitor tools that use KQL](log-query-overview.md).
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A workspace with log data.
+## Create a time series
+
+Use the KQL `make-series` operator to create a time series.
+
+Let's create a time series based on logs in the [Usage table](/azure/azure-monitor/reference/tables/usage), which holds information about how much data each table in a workspace ingests every hour, including billable and non-billable data.
+
+This query uses `make-series` to chart the total amount of billable data ingested by each table in the workspace every day, over the past 21 days:
+
+<a href="https://portal.azure.com#@ec7cb332-9a0a-4569-835a-ce7658e8444e/blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade/resourceId/%2FDemo/source/LogsBlade.AnalyticsShareLinkToQuery/q/H4sIAAAAAAAAA6VSu04DMRDs8xWrVHdSyKsEpQggQQoKUPiAvfNecorPF%252By1IiMKfoPf40tYO5cEBaWiHY9nd2ZWE4NjtMx1QzCD6UTdwGgEyzXtcVDIBG0FLEgiObI1uQGUrTdcmxUUWG6gsm2TOKW3lsz%252BX0%252BLPBnViY9P2gL%252BXzl%252Bqiwm7W7vx3YnkkwGuAWHzVZT5GPv1eGKDtMZC8F39P35ZQnQoA7vMq%252F3Abs1CbIU4QcyZGWSgoJ4R6KYpUDaSmHIcNVmx9zyfDg8e%252BtM53meZkZ3Fo1sULU2mXnzZMNAtFe1MdErMkym1%252BMxzJ8OoVS1ddFukBVVjOwCT2NHq80pzDTu6Gjhbmutk%252B3ZDPpsPfXjZgtTaq%252BkBqMDFAdKTOwgZsl5LUdCLGINbuhqXxPMS%252FaoU64z55vs2aO0xiEHRRXGP9K4CJ%252Blmeq8nGTq7UKW8kDbX60XAe5l02XYpmbvLMkE9%252FeedO1Sj2FvjCNfzMgxKbKJWq7jqYvGS8Jc59rFEPDE%252BAERMgnLLgMAAA%253D%253D" target="_blank">Click to run query</a>
+
+```kusto
+let starttime = 21d; // The start date of the time series, counting back from the current date
+let endtime = 0d; // The end date of the time series, counting back from the current date
+let timeframe = 1d; // How often to sample data
+Usage // The table we're analyzing
+| where TimeGenerated between (startofday(ago(starttime))..startofday(ago(endtime))) // Time range for the query, beginning at 12:00 AM of the first day and ending at 12:00 AM of the last day in the time range
+| where IsBillable == "true" // Include only billable data in the result set
+| make-series ActualUsage=sum(Quantity) default = 0 on TimeGenerated from startofday(ago(starttime)) to startofday(ago(endtime)) step timeframe by DataType // Creates the time series, listed by data type
+| render timechart // Renders results in a timechart
+```
+
+In the resulting chart, you can clearly see some anomalies - for example, in the `AzureDiagnostics` and `SecurityEvent` data types:
++
+Next, we'll use a KQL function to list all of the anomalies in a time series.
+
+> [!NOTE]
+> For more information about `make-series` syntax and usage, see [make-series operator](/azure/data-explorer/kusto/query/make-seriesoperator).
+
+## Find anomalies in a time series
+
+The `series_decompose_anomalies()` function takes a series of values as input and extracts anomalies.
+
+Let's give the result set of our time series query as input to the `series_decompose_anomalies()` function:
+
+<a href="https://portal.azure.com#@ec7cb332-9a0a-4569-835a-ce7658e8444e/blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade/resourceId/%2FDemo/source/LogsBlade.AnalyticsShareLinkToQuery/q/H4sIAAAAAAAAA61Uu3LUQBDM%252FRWDI6lKfoZQFxjsAgcEYBO75rQj3eLVrtiH70QR8Bv8Hl%252FC7Ejn09mYiExa9fRMd8%252FKUIQQ0ceoO4IFnJ%252BpN3ByAjf5DBRGgsZ5iCsCQQTymkIFtUs2atvCEut7aLzrBFMn78mOhQeGucmqifl0JL6y6j%252FQ5qLGoxBPE39wa3BNJAvRQcCuN5TxePAlYEsZcZu74ZLP1%252FT75y9PgBbN8J37HfyA9Yr45JaJ35Mlz50ULCmuiRkLscg1CocCW1c8OlaWx8dPvk2Ky7KUnlmdR9vuBH9L5IeKuVttbdaKEc7OX5%252BewsVHViCYRvuQ5Q48osomvoAzOMG03Zkp7R4VXYe32hiRvVjAYfSJDvNk17Y2SVEAZ80Ayy0mW7Zl8xSS4f2gyGwd3tPRmBNc1DGhEWMXIXXFp4QcWxxKUNRgruG8mfiJnZLny1ZKcC%252BYyR%252Bon8W%252BHOCSJ70deon2nSfuEJ4vlNFBghxGYZHxrIU2vCequLCuQyO48XG4qZ2nCq42PdVcJwpLFjPS3SmqXde7QHe4LS1mXkjiQhHG3DbRYx3zy4TmvQ48jhv9dSn2KeYs5%252BZmrx%252BOaNNnihl7tifP75pCucRZldUTf2cAfhfjtsoy8fP6ueq%252F0e%252F5MAMYZ1sReyVTjr6j97yItSQhjv%252FDtPJxPXfjvco7k0k%252FU0zesmvGANfpqB9I%252FLTUorwoetD85BgkO0XTnJDyoMzde%252FeVT%252Fb9qWZmVnvS9oyodmsxX7FLarTlMdcrXa%252F4R2VSZ8VTL%252BPm2ILjfyYLx2Uo5oz5WoRaloMRYbpXQaAjDIJEwPcuI6f77rxiB237B35S8SykBQAA" target="_blank">Click to run query</a>
+
+```kusto
+let starttime = 21d; // Start date for the time series, counting back from the current date
+let endtime = 0d; // End date for the time series, counting back from the current date
+let timeframe = 1d; // How often to sample data
+Usage // The table we're analyzing
+| where TimeGenerated between (startofday(ago(starttime))..startofday(ago(endtime))) // Time range for the query, beginning at 12:00 AM of the first day and ending at 12:00 AM of the last day in the time range
+| where IsBillable == "true" // Includes only billable data in the result set
+| make-series ActualUsage=sum(Quantity) default = 0 on TimeGenerated from startofday(ago(starttime)) to startofday(ago(endtime)) step timeframe by DataType // Creates the time series, listed by data type
+| extend(Anomalies, AnomalyScore, ExpectedUsage) = series_decompose_anomalies(ActualUsage) // Scores and extracts anomalies based on the output of make-series
+| mv-expand ActualUsage to typeof(double), TimeGenerated to typeof(datetime), Anomalies to typeof(double),AnomalyScore to typeof(double), ExpectedUsage to typeof(long) // Expands the array created by series_decompose_anomalies()
+| where Anomalies != 0 // Returns all positive and negative deviations from expected usage
+| project TimeGenerated,ActualUsage,ExpectedUsage,AnomalyScore,Anomalies,DataType // Defines which columns to return
+| sort by abs(AnomalyScore) desc // Sorts results by anomaly score in descending order
+```
+
+This query returns all usage anomalies for all tables in the last three weeks:
++
+Looking at the query results, you can see that the function:
+
+- Calculates an expected daily usage for each table.
+- Compares actual daily usage to expected usage.
+- Assigns an anomaly score to each data point, indicating the extent of the deviation of actual usage from expected usage.
+- Identifies positive (`1`) and negative (`-1`) anomalies in each table.
+
+> [!NOTE]
+> For more information about `series_decompose_anomalies()` syntax and usage, see [series_decompose_anomalies()](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction).
+
+## Tweak anomaly detection settings to refine results
+
+It's good practice to review initial query results and make tweaks to the query, if necessary. Outliers in input data can affect the function's learning, and you might need to adjust the function's anomaly detection settings to get more accurate results.
+
+Filter the results of the `series_decompose_anomalies()` query for anomalies in the `AzureDiagnostics` data type:
++
+The results show two anomalies on June 14 and June 15. Compare these results with the chart from our first `make-series` query, where you can see other anomalies on May 27 and 28:
++
+The difference in results occurs because the `series_decompose_anomalies()` function scores anomalies relative to the expected usage value, which the function calculates based on the full range of values in the input series.
+
+To get more refined results from the function, exclude the usage on June 15 - which is an outlier compared to the other values in the series - from the function's learning process.
+
+The [syntax of the `series_decompose_anomalies()` function](/azure/data-explorer/kusto/query/series-decompose-anomaliesfunction) is:
+
+`series_decompose_anomalies (Series[Threshold,Seasonality,Trend,Test_points,AD_method,Seasonality_threshold])`
+
+`Test_points` specifies the number of points at the end of the series to exclude from the learning (regression) process.
+
+To exclude the last data point, set `Test_points` to `1`:
+
+<a href="https://portal.azure.com#@ec7cb332-9a0a-4569-835a-ce7658e8444e/blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade/resourceId/%2FDemo/source/LogsBlade.AnalyticsShareLinkToQuery/q/H4sIAAAAAAAAA61Uy27UQBC85yuaXGJLzmMjcQHtISgR5IAiIJyjXk%252FbazKeMfPYXSMO%252FAa%252Fx5fQ0%252FbuegPhxM0eV1dXV%252FVYUwAf0IXQtARzuJyp13B%252BDp%252FSGSgMBJV1EJYEgvDkGvIFlDaa0JgaFlg%252BQuVsK5gyOkdmKDzSzE1GjcwXA%252FGNUf%252BBNhVVDoV4VPzOrsFWgQwECx7bTlPC49FnjzUlxH3qhgs%252BX9OvHz8dARrU%252FTfud%252FQd1kvik3smfkuGHHdSsKCwJmbMxCJbKewzrG22cyzPz86efBsnzvNceqbpHJp6P%252FDXSK4vmLtujEmzYoDZ5auLC7h6zxMIpmqcT%252BP2LFElE5%252FBaRxhjdmbKe12E936N43WMvZ8DsfBRTpOym5NqaMiD9boHhZbTLJsy%252BbIR837QYHZWnyk0yEnuCpDRC3Gzn1ssw8RObbQ56CowlTDeTPxEzslz%252BetlOCeMZM%252FUDeJfdHDNSu977sh2rvrO9ZIG85fZVfGtqhloYbH%252FlNpHRVws%252BmoZCWiPGeRwzwPikrbdtbTA25Ls8mMxezsZXE6K05wVZ8UMwlWGP0QzyY4LEN6GYt5fT3PawcbbQxdDCmyiYcFl6UAUrC7JFeoI23dH71O1q9OadOlVhNRya3A49sqUzZydHnxxO4JgN%252FFx60hifjP%252BqlZf6M%252FsG8C0NbUYsqNqPQiH53jvSwdDTep%252F5fX%252BW5b9%252FJepBVKpB8pRGfYXa2B65rQrEh8N1SjvChaNfxkGSQrRqNOiEkoc3fOfuGTQ3%252BKacIHox0YUey3abpx11Q1hmWul0255P%252BWjq0RT53ITbF5y79QHhwXPpsyplviS1kiRvjxmnmBDjDwEgEvQkKO1986xQ6a%252BjfngHTtswUAAA%253D%253D" target="_blank">Click to run query</a>
+
+```kusto
+let starttime = 21d; // Start date for the time series, counting back from the current date
+let endtime = 0d; // End date for the time series, counting back from the current date
+let timeframe = 1d; // How often to sample data
+Usage // The table we're analyzing
+| where TimeGenerated between (startofday(ago(starttime))..startofday(ago(endtime))) // Time range for the query, beginning at 12:00 AM of the first day and ending at 12:00 AM of the last day in the time range
+| where IsBillable == "true" // Includes only billable data in the result set
+| make-series ActualUsage=sum(Quantity) default = 0 on TimeGenerated from startofday(ago(starttime)) to startofday(ago(endtime)) step timeframe by DataType // Creates the time series, listed by data type
+| extend(Anomalies, AnomalyScore, ExpectedUsage) = series_decompose_anomalies(ActualUsage,1.5,-1,'avg',1) // Scores and extracts anomalies based on the output of make-series, excluding the last value in the series - the Threshold, Seasonality, and Trend input values are the default values for the function
+| mv-expand ActualUsage to typeof(double), TimeGenerated to typeof(datetime), Anomalies to typeof(double),AnomalyScore to typeof(double), ExpectedUsage to typeof(long) // Expands the array created by series_decompose_anomalies()
+| where Anomalies != 0 // Returns all positive and negative deviations from expected usage
+| project TimeGenerated,ActualUsage,ExpectedUsage,AnomalyScore,Anomalies,DataType // Defines which columns to return
+| sort by abs(AnomalyScore) desc // Sorts results by anomaly score in descending order
+```
+
+Filter the results for the `AzureDiagnostics` data type:
++
+All of the anomalies in the chart from our first `make-series` query now appear in the result set.
+
+## Analyze the root cause of anomalies
+
+Comparing expected values to anomalous values helps you understand the cause of the differences between the two sets.
+
+The KQL `diffpatterns()` plugin compares two data sets of the same structure and finds patterns that characterize differences between the two data sets.
+
+This query compares `AzureDiagnostics` usage on June 15, the extreme outlier in our example, with the table usage on other days:
+
+<a href="https://portal.azure.com#@ec7cb332-9a0a-4569-835a-ce7658e8444e/blade/Microsoft_Azure_Monitoring_Logs/DemoLogsBlade/resourceId/%2FDemo/source/LogsBlade.AnalyticsShareLinkToQuery/q/H4sIAAAAAAAAA61SwY7UMAw9M19hzWVbqbvLckVzGDGIIxJwX3kTtw2bJiVxKEUc%252BAf%252BkC%252FBaTu77SBu9JLGdp793rMlhsgYmE1HcIBXd%252Fo13N7CxxwDjUxQ%252BwDcEkwVkYKhWIHyybFxDTygeoQ6%252BG6qUSkEcvPDnRVscnpBfjkDv3X6P8Ci8x3a8ZSBDlM4w9yj1sWVxvGqur6roMNHuj%252FniomlryVbYOOLZbBSvhVhXwvYmI%252Fcduky4VcwtEa1YOKUshgZ6mTtVG%252FcM5WArqEc8SkAfcOutwTF6BModBCot6iktBWgwXALCLEnZWqjoMWgr5XXpDety93xewp0Mtg4H9mo%252BGL3Q6BZOMBxo4Sp6zXRTzLQO3IUJKtLOBzWwlWwXz3ey%252FW9kAj5Evdl1uSodZSprUo2A4g7NnUuRyxtOp%252FFib01PAsUKCYruyVmGcceePCZDOZIhN8%252Ff20mR2Hy3F3YDfJPsJkfHogHIgeXVj7tb1ne3PzT5kzoRLVxFC%252FNOq%252Fil0RhlOZ98J9J8ZbhB4riqFi3wplZz7IIqhfWnILL7nxFmzIzLZb0yEzBxWIDuJb7wotp2De%252B61FkhBRRhvTur52cF2hWuxGPwlK69PsDddtehdwDAAA%253D" target="_blank">Click to run query</a>
+
+```kusto
+let starttime = 21d; // Start date for the time series, counting back from the current date
+let endtime = 0d; // End date for the time series, counting back from the current date
+let anomalyDate = datetime_add('day',-1, make_datetime(startofday(ago(endtime)))); // Start of day of the anomaly date, which is the last full day in the time range in our example (you can replace this with a specific hard-coded anomaly date)
+AzureDiagnostics
+| extend AnomalyDate = iff(startofday(TimeGenerated) == anomalyDate, "AnomalyDate", "OtherDates") // Adds calculated column called AnomalyDate, which splits the result set into two data sets - AnomalyDate and OtherDates
+| where TimeGenerated between (startofday(ago(starttime))..startofday(ago(endtime))) // Defines the time range for the query
+| project AnomalyDate, Resource // Defines which columns to return
+| evaluate diffpatterns(AnomalyDate, "OtherDates", "AnomalyDate") // Compares usage on the anomaly date with the regular usage pattern
+```
+
+The query identifies each entry in the table as occurring on *AnomalyDate* (June 15) or *OtherDates*. The `diffpatterns()` plugin then splits these data sets - called A (*OtherDates* in our example) and B (*AnomalyDate* in our example) - and returns a few patterns that contribute to the differences in the two sets:
++
+Looking at the query results, you can see the following differences:
+
+- There are 24,892,147 instances of ingestion from the *CH1-GEARAMAAKS* resource on all other days in the query time range, and no ingestion of data from this resource on June 15. Data from the *CH1-GEARAMAAKS* resource accounts for 73.36% of the total ingestion on other days in the query time range and 0% of the total ingestion on June 15.
+- There are 2,168,448 instances of ingestion from the *NSG-TESTSQLMI519* resource on all other days in the query time range, and 110,544 instances of ingestion from this resource on June 15. Data from the *NSG-TESTSQLMI519* resource accounts for 6.39% of the total ingestion on other days in the query time range and 25.61% of ingestion on June 15.
+
+Notice that, on average, there are 108,422 instances of ingestion from the *NSG-TESTSQLMI519* resource during the 20 days that make up the *other days* period (divide 2,168,448 by 20). Therefore, the ingestion from the *NSG-TESTSQLMI519* resource on June 15 isn't significantly different from the ingestion from this resource on other days. However, because there's no ingestion from *CH1-GEARAMAAKS* on June 15, the ingestion from *NSG-TESTSQLMI519* makes up a significantly greater percentage of the total ingestion on the anomaly date as compared to other days.
+
+The *PercentDiffAB* column shows the absolute percentage point difference between A and B (|PercentA - PercentB|), which is the main measure of the difference between the two sets. By default, the `diffpatterns()` plugin returns difference of over 5% between the two data sets, but you can tweak this threshold. For example, to return only differences of 20% or more between the two data sets, you can set `| evaluate diffpatterns(AnomalyDate, "OtherDates", "AnomalyDate", "~", 0.20)` in the query above. The query now returns only one result:
++
+> [!NOTE]
+> For more information about `diffpatterns()` syntax and usage, see [diff patterns plugin](/azure/data-explorer/kusto/query/diffpatternsplugin).
+
+## Next steps
+
+Learn more about:
+
+- [Log queries in Azure Monitor](log-query-overview.md).
+- [How to use Kusto queries](/azure/data-explorer/kusto/query/tutorial?pivots=azuremonitor).
+- [Analyze logs in Azure Monitor with KQL](/training/modules/analyze-logs-with-kql/)
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
na Previously updated : 10/20/2022 Last updated : 10/24/2022 # Manage availability zone volume placement for Azure NetApp Files
Azure NetApp Files lets you deploy new volumes in the logical availability zone
* NetApp accounts and capacity pools are not bound by the availability zone. A capacity pool can contain volumes in different availability zones.
-* This feature provides zonal volume placement, with latency within the zonal latency envelopes. It does not provide proximity placement towards compute. As such, it doesnΓÇÖt provide lowest latency guarantee.
+* This feature provides zonal volume placement, with latency within the zonal latency envelopes. It ***does not*** provide proximity placement towards compute. As such, it ***does not*** provide a lowest-latency guarantee.
* Each data center is assigned to a physical zone. Physical zones are mapped to logical zones in your Azure subscription. Azure subscriptions are automatically assigned this mapping at the time a subscription is created. This feature aligns with the generic logical-to-physical availability zone mapping for the subscription.
azure-portal Azure Portal Dashboard Share Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboard-share-access.md
Title: Share Azure portal dashboards by using Azure role-based access control
description: This article explains how to share a dashboard in the Azure portal by using Azure role-based access control. ms.assetid: 8908a6ce-ae0c-4f60-a0c9-b3acfe823365 Previously updated : 03/19/2021 Last updated : 10/24/2022 # Share Azure dashboards by using Azure role-based access control
-After configuring a dashboard, you can publish it and share it with other users in your organization. You allow others to view your dashboard by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md). Assign a single user or a group of users to a role. That role defines whether those users can view or modify the published dashboard.
+After configuring a dashboard, you can publish it and share it with other users in your organization. You allow others to view your dashboard by using [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) to assign roles to either a single user or a group of users. You can select a role that allows them only to view the published dashboard, or a role that also allows them to modify it.
-All published dashboards are implemented as Azure resources. They exist as manageable items within your subscription and are contained in a resource group. From an access control perspective, dashboards are no different from other resources, such as a virtual machine or a storage account. Individual tiles on the dashboard enforce their own access control requirements based on the resources they display. You can share a dashboard broadly while protecting the data on individual tiles.
+> [!TIP]
+> Within a dashboard, individual tiles enforce their own access control requirements based on the resources they display. You can share any dashboard broadly, even though some data on specific tiles might not be visible to all users.
-## Understanding access control for dashboards
+## Understand access control for dashboards
-With Azure role-based access control (Azure RBAC), you can assign users to roles at three different levels of scope:
+From an access control perspective, dashboards are no different from other resources, such as virtual machines or storage accounts. Published dashboards are implemented as Azure resources. Each dashboard exists as a manageable item contained in a resource group within your subscription.
+
+Azure RBAC lets you assign users to roles at three different levels of scope:
* subscription * resource group * resource
-The permissions you assign inherit from the subscription down to the resource. The published dashboard is a resource. You may already have users assigned to roles for the subscription that apply for the published dashboard.
+Azure RBAC permissions inherit from the subscription down to the resource. You may already have users assigned to roles for the subscription that apply for the published dashboard.
-Let's say you have an Azure subscription and various members of your team have been assigned the roles of *owner*, *contributor*, or *reader* for the subscription. Users who are owners or contributors can list, view, create, modify, or delete dashboards within the subscription. Users who are readers can list and view dashboards, but can't modify or delete them. Users with reader access can make local edits to a published dashboard, such as when troubleshooting an issue, but they can't publish those changes back to the server. They can make a private copy of the dashboard for themselves.
+For example, say you have an Azure subscription and various members of your team have been assigned the roles of Owner, Contributor, or Reader for that subscription. This means that any users who have the Owner or Contributor role can list, view, create, modify, or delete dashboards within the subscription. Users with the Reader role can list and view dashboards, but can't modify or delete them. They can make local edits to a published dashboard for their own use, such as when troubleshooting an issue, but they can't publish those changes back to the server. They can also make a private copy of the dashboard for themselves.
-You could assign permissions to the resource group that contains several dashboards or to an individual dashboard. For example, you may decide that a group of users should have limited permissions across the subscription but greater access to a particular dashboard. Assign those users to a role for that dashboard.
+To expand access to a dashboard beyond what is granted at the subscription level, you can assign permissions to a resource group that contains several dashboards, or assign permissions to individual dashboards. For example, if a group of users should have limited permissions across the subscription, but they need to be able to edit one particular dashboard, you can assign those users a different role with more permissions (such as Contributor) for that dashboard only.
## Publish a dashboard
-Let's suppose you configure a dashboard that you want to share with a group of users in your subscription. The following steps show how to share a dashboard to a group called Storage Managers. You can name your group whatever you like. For more information, see [Managing groups in Azure Active Directory](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
-
-Before assigning access, you must publish the dashboard.
+To share access to a dashboard, you must first publish it. When you do so, other users in your organization will be able to access and modify the dashboard based on their Azure RBAC roles.
1. In the dashboard, select **Share**.
- ![select share for your dashboard](./media/azure-portal-dashboard-share-access/share-dashboard-for-access-control.png)
+ :::image type="content" source="media/azure-portal-dashboard-share-access/share-dashboard-for-access-control.png" alt-text="Screenshot showing the Share option for an Azure portal dashboard.":::
1. In **Sharing + access control**, select **Publish**.
- ![publish your dashboard](./media/azure-portal-dashboard-share-access/publish-dashboard-for-access-control.png)
+ :::image type="content" source="media/azure-portal-dashboard-share-access/publish-dashboard-for-access-control.png" alt-text="Screenshot showing how to publish an Azure portal dashboard.":::
+
+ By default, sharing publishes your dashboard to a resource group named **dashboards**. To select a different resource group, clear the checkbox.
+
+1. To [add optional tags](/azure/azure-resource-manager/management/tag-resources) to the dashboard, enter one or more name/value pairs.
- By default, sharing publishes your dashboard to a resource group named **dashboards**. To select a different resource group, clear the checkbox.
+1. Select **Publish**.
-Your dashboard is now published. If the permissions inherited from the subscription are suitable, you don't need to do anything more. Other users in your organization can access and modify the dashboard based on their subscription level role.
+Your dashboard is now published. If the permissions inherited from the subscription are suitable, you don't need to do anything more. Otherwise, read on to see how to expand access to specific users or groups.
## Assign access to a dashboard
-You can assign a group of users to a role for that dashboard.
+For each dashboard that you have published, you can assign Azure RBAC built-in roles to groups of users (or to individual users). This lets them use that role on the dashboard, even if their subscription-level permissions wouldn't normally allow it.
-1. After publishing the dashboard, select **Manage sharing**.
+1. After publishing the dashboard, select **Manage sharing**, then select **Access control**.
-1. In **Access Control** select **Role assignments** to see existing users that are already assigned a role for this dashboard.
+1. In **Access Control**, select **Role assignments** to see existing users that are already assigned a role for this dashboard.
1. To add a new user or group, select **Add** then **Add role assignment**.
- ![add a user for access to the dashboard](./media/azure-portal-dashboard-share-access/manage-users-existing-users.png)
+ :::image type="content" source="media/azure-portal-dashboard-share-access/manage-users-existing-users.png" alt-text="Screenshot showing how to add a role assignment for an Azure portal dashboard.":::
-1. Select the role that represents the permissions to grant, such as **Contributor**.
+1. Select the role that represents the permissions to grant, such as **Contributor**, and then select **Next**.
-1. Select the user or group to assign to the role. If you don't see the user or group you're looking for in the list, use the search box. Your list of available groups depends on the groups you've created in Active Directory.
+1. Select **Select members**, then select one or more Azure Active Directory (Azure AD) groups and/or users. If you don't see the user or group you're looking for in the list, use the search box. When you have finished, choose **Select**.
-1. When you've finished adding users or groups, select **Save**.
+1. Select **Review + assign** to complete the assignment.
## Next steps
-* For a list of roles, see [Azure built-in roles](../role-based-access-control/built-in-roles.md).
-* To learn about managing resources, see [Manage Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md).
+* View the list of [Azure built-in roles](../role-based-access-control/built-in-roles.md).
+* Learn about [managing groups in Azure AD](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md).
+* Learn more about [managing Azure resources by using the Azure portal](../azure-resource-manager/management/manage-resources-portal.md).
+* [Create a dashboard](azure-portal-dashboards.md) in the Azure portal.
azure-vmware Enable Public Internet Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-internet-access.md
- Title: Enable public internet for Azure VMware Solution workloads
-description: This article explains how to use the public IP functionality in Azure Virtual WAN.
-- Previously updated : 06/25/2021--
-# Enable public internet for Azure VMware Solution workloads
-
-Public IP is a feature in Azure VMware Solution connectivity. It makes resources, such as web servers, virtual machines (VMs), and hosts accessible through a public network.
-
-You enable public internet access in two ways.
--- Host and publish applications under the Application Gateway load balancer for HTTP/HTTPS traffic.--- Publish through public IP features in Azure Virtual WAN.-
-As a part of Azure VMware Solution private cloud deployment, upon enabling public IP functionality, the required components with automation get created and enabled:
--- Virtual WAN--- Virtual WAN hub with ExpressRoute connectivity--- Azure Firewall services with public IP-
-This article details how you can use the public IP functionality in Virtual WAN.
-
-## Prerequisites
--- Azure VMware Solution environment--- A webserver running in Azure VMware Solution environment.--- A new non-overlapping IP range for the Virtual WAN hub deployment, typically a `/24`.-
-## Reference architecture
--
-The architecture diagram shows a web server hosted in the Azure VMware Solution environment and configured with RFC1918 private IP addresses. The web service is made available to the internet through Virtual WAN public IP functionality. Public IP is typically a destination NAT translated in Azure Firewall. With DNAT rules, firewall policy translates public IP address requests to a private address (webserver) with a port.
-
-User requests hit the firewall on a public IP that, in turn, is translated to private IP using DNAT rules in the Azure Firewall. The firewall checks the NAT table, and if the request matches an entry, it forwards the traffic to the translated address and port in the Azure VMware Solution environment.
-
-The web server receives the request and replies with the requested information or page to the firewall, and then the firewall forwards the information to the user on the public IP address.
-
-## Test case
-In this scenario, you'll publish the IIS webserver to the internet. Use the public IP feature in Azure VMware Solution to publish the website on a public IP address. You'll also configure NAT rules on the firewall and access Azure VMware Solution resource (VMs with a web server) with public IP.
-
->[!TIP]
->To enable egress traffic, you must set Security configuration > Internet traffic to **Azure Firewall**.
-
-## Deploy Virtual WAN
-
-1. Sign in to the Azure portal and then search for and select **Azure VMware Solution**.
-
-1. Select the Azure VMware Solution private cloud.
-
-1. Under **Manage**, select **Connectivity**.
-
- :::image type="content" source="media/public-ip-usage/avs-private-cloud-manage-menu.png" alt-text="Screenshot of the Connectivity section." lightbox="media/public-ip-usage/avs-private-cloud-manage-menu.png":::
-
-1. Select the **Public IP** tab and then select **Configure**.
-
- :::image type="content" source="media/public-ip-usage/connectivity-public-ip-tab.png" alt-text="Screenshot that shows where to begin to configure the public IP." lightbox="media/public-ip-usage/connectivity-public-ip-tab.png":::
-
-1. Accept the default values or change them, and then select **Create**.
-
- - Virtual WAN resource group
-
- - Virtual WAN name
-
- - Virtual hub address block (using new non-overlapping IP range)
-
- - Number of public IPs (1-100)
-
-It takes about one hour to complete the deployment of all components. This deployment only has to occur once to support all future public IPs for this Azure VMware Solution environment.
-
->[!TIP]
->You can monitor the status from the **Notification** area.
-
-## View and add public IP addresses
-
-We can check and add more public IP addresses by following the below steps.
-
-1. In the Azure portal, search for and select **Firewall**.
-
-1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
-
- :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall." lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
-
-1. Select **Secured virtual hubs** and, from the list, select a virtual hub.
-
- :::image type="content" source="media/public-ip-usage/select-virtual-hub.png" alt-text="Screenshot of Firewall Manager." lightbox="media/public-ip-usage/select-virtual-hub.png":::
-
-1. On the virtual hub page, select **Public IP configuration**, and to add more public IP address, then select **Add**.
-
- :::image type="content" source="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png" alt-text="Screenshot of how to add a public IP configuration in Firewall Manager." lightbox="media/public-ip-usage/virtual-hub-page-public-ip-configuration.png":::
-
-1. Provide the number of IPs required and select **Add**.
-
- :::image type="content" source="media/public-ip-usage/add-number-of-ip-addresses-required.png" alt-text="Screenshot to add a specified number of public IP configurations.":::
--
-## Create firewall policies
-
-Once all components are deployed, you can see them in the added Resource group. The next step is to add a firewall policy.
-
-1. In the Azure portal, search for and select **Firewall**.
-
-1. Select a deployed firewall and then select **Visit Azure Firewall Manager to configure and manage this firewall**.
-
- :::image type="content" source="media/public-ip-usage/configure-manage-deployed-firewall.png" alt-text="Screenshot that shows the option to configure and manage the firewall." lightbox="media/public-ip-usage/configure-manage-deployed-firewall.png":::
-
-1. Select **Azure Firewall Policies** and then select **Create Azure Firewall Policy**.
-
- :::image type="content" source="media/public-ip-usage/create-firewall-policy.png" alt-text="Screenshot of how to create a firewall policy in Firewall Manager." lightbox="media/public-ip-usage/create-firewall-policy.png":::
-
-1. Under the **Basics** tab, provide the required details and select **Next: DNS Settings**.
-
-1. Under the **DNS** tab, select **Disable**, and then select **Next: Rules**.
-
-1. Select **Add a rule collection**, provide the below details and select **Add**. Then select **Next: Threat intelligence**.
-
- - Name
- - Rules collection Type - **DNAT**
- - Priority
- - Rule collection Action ΓÇô **Allow**
- - Name of rule
- - Source Type- **IPaddress**
- - Source - **\***
- - Protocol ΓÇô **TCP**
- - Destination port ΓÇô **80**
- - Destination Type ΓÇô **IP Address**
- - Destination ΓÇô **Public IP Address**
- - Translated address ΓÇô **Azure VMware Solution Web Server private IP Address**
- - Translated port - **Azure VMware Solution Web Server port**
-
-1. Leave the default value, and then select **Next: Hubs**.
-
-1. Select **Associate virtual hub**.
-
-1. Select a hub from the list and select **Add**.
-
- :::image type="content" source="media/public-ip-usage/secure-hubs-with-azure-firewall-policy.png" alt-text="Screenshot that shows the selected hubs that will be converted to Secured Virtual Hubs." lightbox="media/public-ip-usage/secure-hubs-with-azure-firewall-policy.png":::
-
-1. Select **Next: Tags**.
-
-1. (Optional) Create name and value pairs to categorize your resources.
-
-1. Select **Next: Review + create** and then select **Create**.
-
-## Limitations
-
-You can have 100 public IPs per private cloud.
-
-## Next steps
-
-Now that you've covered how to use the public IP functionality in Azure VMware Solution, you may want to learn about:
--- Using public IP addresses with [Azure Virtual WAN](../virtual-wan/virtual-wan-about.md).
-- [Creating an IPSec tunnel into Azure VMware Solution](./configure-site-to-site-vpn-gateway.md).
azure-vmware Move Azure Vmware Solution Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/move-azure-vmware-solution-across-regions.md
To this point, you've migrated the workloads to the target environment. These ap
- Published through the public IP feature in vWAN.
-Public IP is typically the destination NAT translated into the Azure firewall. With DNAT rules, firewall policy would translate the public IP address request to a private address (webserver) with a port. For more information, see [How to use the public IP functionality in Azure Virtual WAN](./enable-public-internet-access.md).
+The public IP address is typically translated by destination NAT in Azure Firewall. With DNAT rules, the firewall policy translates the public IP address request to a private address (web server) with a port. For more information, see [How to use the public IP functionality in Azure Virtual WAN](./enable-public-ip-nsx-edge.md).
>[!NOTE]
>SNAT is configured by default in Azure VMware Solution, so you must enable SNAT from the Azure VMware Solution private cloud connectivity settings under the **Manage** tab.
azure-vmware Vrealize Operations For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
Title: Configure vRealize Operations for Azure VMware Solution
description: Learn how to set up vRealize Operations for your Azure VMware Solution private cloud. Previously updated : 04/11/2022 Last updated : 10/18/2022 # Configure vRealize Operations for Azure VMware Solution - vRealize Operations is an operations management platform that allows VMware infrastructure administrators to monitor system resources. These system resources could be application-level or infrastructure level (both physical and virtual) objects. Most VMware administrators have used vRealize Operations to monitor and manage the VMware private cloud components - vCenter Server, ESXi, NSX-T Data Center, vSAN, and VMware HCX. Each provisioned Azure VMware Solution private cloud includes a dedicated vCenter Server, NSX-T Data Center, vSAN, and HCX deployment.
-Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the two typical deployment topologies:
+Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#prerequisites) first. Then, we'll walk you through the three typical deployment topologies:
> [!div class="checklist"] > * [On-premises vRealize Operations managing Azure VMware Solution deployment](#on-premises-vrealize-operations-managing-azure-vmware-solution-deployment)
+> * [vRealize Operations Cloud managing Azure VMware Solution deployment](#vrealize-operations-cloud-managing-azure-vmware-solution-deployment)
> * [vRealize Operations running on Azure VMware Solution deployment](#vrealize-operations-running-on-azure-vmware-solution-deployment) ## Before you begin
Thoroughly review [Before you begin](#before-you-begin) and [Prerequisites](#pre
* Review the basic Azure VMware Solution Software-Defined Datacenter (SDDC) [tutorial series](tutorial-network-checklist.md). * Optionally, review the [vRealize Operations Remote Controller](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) product documentation for the on-premises vRealize Operations managing Azure VMware Solution deployment option. - ## Prerequisites * [vRealize Operations Manager](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) installed. * A VPN or an Azure ExpressRoute configured between on-premises and Azure VMware Solution SDDC. * An Azure VMware Solution private cloud has been deployed in Azure. -- ## On-premises vRealize Operations managing Azure VMware Solution deployment Most customers have an existing on-premises deployment of vRealize Operations to manage one or more on-premises vCenter Server domains. When they provision an Azure VMware Solution private cloud, they connect their on-premises environment with their private cloud using an Azure ExpressRoute or a Layer 3 VPN solution.
To extend the vRealize Operations capabilities to the Azure VMware Solution priv
> [!TIP]
> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) for a step-by-step guide to installing vRealize Operations Manager.
+## vRealize Operations Cloud managing Azure VMware Solution deployment
+VMware vRealize Operations Cloud supports Azure VMware Solution, including the vCenter Server, vSAN, and NSX-T Data Center adapters.
+> [!IMPORTANT]
+> Refer to the [VMware documentation](https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-6CDFEDDC-A72C-4AB4-B8E8-84542CC6CE27.html) for a step-by-step guide to connecting vRealize Operations Cloud to Azure VMware Solution.
## vRealize Operations running on Azure VMware Solution deployment
Another option is to deploy an instance of vRealize Operations Manager on a vSph
Once the instance has been deployed, you can configure vRealize Operations to collect data from vCenter Server, ESXi, NSX-T Data Center, vSAN, and HCX.

## Known limitations

- The **cloudadmin@vsphere.local** user in Azure VMware Solution has [limited privileges](concepts-identity.md). Virtual machines (VMs) on Azure VMware Solution don't support in-guest memory collection using VMware tools. Active and consumed memory utilization continues to work in this case.
Once the instance has been deployed, you can configure vRealize Operations to co
- You can't sign in to vRealize Operations Manager using your Azure VMware Solution vCenter Server credentials. - Azure VMware Solution doesn't support the vRealize Operations Manager plugin.
-When you connect the Azure VMware Solution vCenter to vRealize Operations Manager using a vCenter Server Cloud Account, you'll see a warning:
+When you connect the Azure VMware Solution vCenter Server to vRealize Operations Manager using a vCenter Server Cloud Account, you'll see a warning:
:::image type="content" source="./media/vrealize-operations-manager/warning-adapter-instance-creation-succeeded.png" alt-text="Screenshot showing a Warning message that states the adapter instance was created successfully.":::
For more information, see [Privileges Required for Configuring a vCenter Server
cosmos-db How To Configure Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-configure-integrated-cache.md
This article describes how to provision a dedicated gateway, configure the integ
1. Navigate to an Azure Cosmos DB account in the Azure portal and select the **Dedicated Gateway** tab.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="Screenshot of the Azure Portal that shows how to navigate to the Azure Cosmos DB dedicated gateway tab." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" :::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" alt-text="Screenshot of the Azure portal that shows how to navigate to the Azure Cosmos DB dedicated gateway tab." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-tab.png" :::
2. Fill out the **Dedicated gateway** form with the following details:
This article describes how to provision a dedicated gateway, configure the integ
* **SKU** - Select a SKU with the required compute and memory size. The integrated cache will use approximately 50% of the memory, and the remaining memory is used for metadata and routing requests to the backend partitions.
* **Number of instances** - Number of nodes. For development purposes, we recommend starting with one node of the D4 size. Based on the amount of data you need to cache and to achieve high availability, you can increase the node size after initial testing.
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="Screenshot of the Azure Portal dedicated gateway tab that shows sample input settings for creating a dedicated gateway cluster." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" :::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" alt-text="Screenshot of the Azure portal dedicated gateway tab that shows sample input settings for creating a dedicated gateway cluster." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-input.png" :::
3. Select **Save** and wait about 5-10 minutes for the dedicated gateway provisioning to complete. When the provisioning is done, you'll see the following notification:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="Screenshot of a notification in the Azure Portal that shows how to check if dedicated gateway provisioning is complete." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" :::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" alt-text="Screenshot of a notification in the Azure portal that shows how to check if dedicated gateway provisioning is complete." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-notification.png" :::
## Configuring the integrated cache
When you create a dedicated gateway, an integrated cache is automatically provis
The updated dedicated gateway connection string is in the **Keys** blade:
- :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="Screenshot of the Azure Portal keys tab with the dedicated gateway connection string." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" :::
+ :::image type="content" source="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" alt-text="Screenshot of the Azure portal keys tab with the dedicated gateway connection string." lightbox="./media/how-to-configure-integrated-cache/dedicated-gateway-connection-string.png" :::
All dedicated gateway connection strings follow the same pattern. Remove `documents.azure.com` from your original connection string and replace it with `sqlx.cosmos.azure.com`. A dedicated gateway will always have the same connection string, even if you remove and reprovision it.
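
As a small illustration of that transformation (the account name and key below are placeholders), the dedicated gateway connection string can be derived from the original one with a simple string replacement:

```python
# Placeholder connection string; use your own value from the Keys blade.
original = (
    "AccountEndpoint=https://my-account.documents.azure.com:443/;"
    "AccountKey=<account-key>;"
)

# The dedicated gateway endpoint uses sqlx.cosmos.azure.com instead of documents.azure.com.
dedicated_gateway = original.replace("documents.azure.com", "sqlx.cosmos.azure.com")
print(dedicated_gateway)
```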
You must ensure the request consistency is session or eventual. If not, the requ
Configure `MaxIntegratedCacheStaleness`, which is the maximum amount of staleness you're willing to tolerate for cached data. We recommend setting the `MaxIntegratedCacheStaleness` as high as possible because it increases the likelihood that repeated point reads and queries can be cache hits. If you set `MaxIntegratedCacheStaleness` to 0, your read request will **never** use the integrated cache, regardless of the consistency level. When not configured, the default `MaxIntegratedCacheStaleness` is 5 minutes.
+>[!NOTE]
+> The `MaxIntegratedCacheStaleness` can be set as high as 10 years. In practice, this value is the maximum staleness; the cache may be reset sooner if node restarts occur.
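
For example, a point read can opt in to cached data with a per-request staleness bound. The following sketch uses the Python SDK and assumes a version that exposes the `max_integrated_cache_staleness_in_ms` request option; the endpoint, key, database, container, item, and partition key values are placeholders:

```python
from azure.cosmos import CosmosClient

# Placeholders: use your dedicated gateway endpoint and account key.
ENDPOINT = "https://my-account.sqlx.cosmos.azure.com:443/"
KEY = "<account-key>"

# The integrated cache requires gateway mode and session or eventual consistency.
client = CosmosClient(ENDPOINT, credential=KEY, consistency_level="Session")
container = client.get_database_client("my-database").get_container_client("my-container")

# Tolerate cached data that is up to 30 minutes old for this point read.
item = container.read_item(
    item="item-id",
    partition_key="partition-key-value",
    max_integrated_cache_staleness_in_ms=30 * 60 * 1000,
)
```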
+ Adjusting the `MaxIntegratedCacheStaleness` is supported in these versions of each SDK: | SDK | Supported versions |
cosmos-db Integrated Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md
An integrated cache is automatically configured within the dedicated gateway. Th
* An item cache for point reads * A query cache for queries
-The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. The item cache and query cache share the same capacity within the integrated cache and the LRU eviction policy applies to both. In other words, data is evicted from the cache strictly based on when it was least recently used, regardless of whether it is a point read or query.
+The integrated cache is a read-through, write-through cache with a Least Recently Used (LRU) eviction policy. The item cache and query cache share the same capacity within the integrated cache and the LRU eviction policy applies to both. In other words, data is evicted from the cache strictly based on when it was least recently used, regardless of whether it's a point read or query.
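
To illustrate the shared LRU behavior conceptually (this is only a minimal sketch, not the service's actual implementation), cached point reads and cached query results can be modeled as entries in a single capacity-bound LRU structure:

```python
from collections import OrderedDict

class SharedLruCache:
    """Minimal LRU sketch: point reads and query results share one capacity."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, key):
        if key not in self._entries:
            return None                      # cache miss
        self._entries.move_to_end(key)       # mark as most recently used
        return self._entries[key]

    def put(self, key, value):
        self._entries[key] = value
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the least recently used entry

cache = SharedLruCache(capacity=2)
cache.put(("point-read", "item-1"), {"id": "item-1"})
cache.put(("query", "SELECT * FROM c"), [{"id": "item-1"}])
cache.put(("point-read", "item-2"), {"id": "item-2"})  # evicts the oldest entry, whatever its type
```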
> [!NOTE] > Do you have any feedback about the integrated cache? We want to hear it! Feel free to share feedback directly with the Azure Cosmos DB engineering team:
cosmoscachefeedback@microsoft.com
## Workloads that benefit from the integrated cache
-The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, is not the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
+The main goal of the integrated cache is to reduce costs for read-heavy workloads. Low latency, while helpful, isn't the main benefit of the integrated cache because Azure Cosmos DB is already fast without caching.
Point reads and queries that hit the integrated cache will have an RU charge of 0. Cache hits will have a much lower per-operation cost than reads from the backend database.
Workloads that fit the following characteristics should evaluate if the integrat
- Many repeated high RU queries
- Hot partition key for reads
-The biggest factor in expected savings is the degree to which reads repeat themselves. If your workload consistently executes the same point reads or queries within a short period of time, it is a great candidate for the integrated cache. When using the integrated cache for repeated reads, you only use RU's for the first read. Subsequent reads routed through the same dedicated gateway node (within the `MaxIntegratedCacheStaleness` window and if the data hasn't been evicted) won't use throughput.
+The biggest factor in expected savings is the degree to which reads repeat themselves. If your workload consistently executes the same point reads or queries within a short period of time, it's a great candidate for the integrated cache. When using the integrated cache for repeated reads, you only use RUs for the first read. Subsequent reads routed through the same dedicated gateway node (within the `MaxIntegratedCacheStaleness` window and if the data hasn't been evicted) won't use throughput.
-Some workloads should not consider the integrated cache, including:
+Some workloads shouldn't consider the integrated cache, including:
- Write-heavy workloads
- Rarely repeated point reads or queries
The query cache can be used to cache queries. The query cache transforms a query
### Populating the query cache -- If the cache does not have a result for that query (cache miss), the query is sent to the backend. After the query is run, the cache will store the results for that query
+- If the cache doesn't have a result for that query (cache miss), the query is sent to the backend. After the query is run, the cache will store the results for that query
### Query cache eviction
This is an improvement from how most caches work and allows the following additi
- You can set different staleness requirements for each point read or query
- Different clients, even if they run the same point read or query, can configure different `MaxIntegratedCacheStaleness` values
-- If you wanted to modify read consistency when using cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency
+- If you wanted to modify read consistency for cached data, changing `MaxIntegratedCacheStaleness` will have an immediate effect on read consistency
> [!NOTE]
-> When not explicitly configured, the MaxIntegratedCacheStaleness defaults to 5 minutes.
+> The minimum `MaxIntegratedCacheStaleness` value is 0 and the maximum value is 10 years. When not explicitly configured, the `MaxIntegratedCacheStaleness` defaults to 5 minutes.
To better understand the `MaxIntegratedCacheStaleness` parameter, consider the following example:
To better understand the `MaxIntegratedCacheStaleness` parameter, consider the f
## Metrics
-When using the integrated cache, it is helpful to monitor some key metrics. The integrated cache metrics include:
+It's helpful to monitor some key metrics for the integrated cache. These metrics include:
- `DedicatedGatewayCPUUsage` - CPU usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes. - `DedicatedGatewayAverageCPUUsage` - (Deprecated) Average CPU usage across all dedicated gateway nodes.
When using the integrated cache, it is helpful to monitor some key metrics. The
- `DedicatedGatewayMemoryUsage` - Memory usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.
- `DedicatedGatewayAverageMemoryUsage` - (Deprecated) Average memory usage across all dedicated gateway nodes.
- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway nodes.
-- `IntegratedCacheEvictedEntriesSize` - The average amount of data evicted from the integrated cache due to LRU across all dedicated gateway nodes. This value does not include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
+- `IntegratedCacheEvictedEntriesSize` - The average amount of data evicted from the integrated cache due to LRU across all dedicated gateway nodes. This value doesn't include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.
- `IntegratedCacheItemExpirationCount` - The average number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
- `IntegratedCacheQueryExpirationCount` - The average number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.
- `IntegratedCacheItemHitRate` - The proportion of point reads that used the integrated cache (out of all point reads routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.
When using the integrated cache, it is helpful to monitor some key metrics. The
All existing metrics are available, by default, from the **Metrics** blade (not Metrics classic):
- :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="Screenshot of the Azure Portal that shows the location of integrated cache metrics." border="false":::
+ :::image type="content" source="./media/integrated-cache/integrated-cache-metrics.png" alt-text="Screenshot of the Azure portal that shows the location of integrated cache metrics." border="false":::
Metrics are either an average, maximum, or sum across all dedicated gateway nodes. For example, if you provision a dedicated gateway cluster with five nodes, the metrics reflect the aggregated value across all five nodes. It isn't possible to determine the metric values for each individual node.
The following examples show how to debug some common scenarios:
### I can't tell if my application is using the dedicated gateway
-Check the `DedicatedGatewayRequests`. This metric includes all requests that use the dedicated gateway, regardless of whether they hit the integrated cache. If your application uses the standard gateway or direct mode with your original connection string, you won't see an error message but the `DedicatedGatewayRequests` will be zero.
+Check the `DedicatedGatewayRequests`. This metric includes all requests that use the dedicated gateway, regardless of whether they hit the integrated cache. If your application uses the standard gateway or direct mode with your original connection string, you won't see an error message, but the `DedicatedGatewayRequests` will be zero.
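
If you prefer to check this metric programmatically rather than in the portal, a sketch along the following lines may help. It assumes the `azure-monitor-query` and `azure-identity` packages and uses a placeholder resource ID for your Azure Cosmos DB account:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for the Azure Cosmos DB account.
RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["DedicatedGatewayRequests"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.TOTAL],
)

# Print the request count per interval; all zeros suggests the dedicated gateway isn't being used.
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.total)
```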
### I can't tell if my requests are hitting the integrated cache
-Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If both of these values are zero, then requests are not hitting the integrated cache. Check that you are using the dedicated gateway connection string, [connecting with gateway mode](nosql/sdk-connection-modes.md), and [have set session or eventual consistency](consistency-levels.md#configure-the-default-consistency-level).
+Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If both of these values are zero, then requests aren't hitting the integrated cache. Check that you're using the dedicated gateway connection string, [connecting with gateway mode](nosql/sdk-connection-modes.md), and [have set session or eventual consistency](consistency-levels.md#configure-the-default-consistency-level).
### I want to understand if my dedicated gateway is too small
-Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If these values are high (for example, above 0.7-0.8), this is a good sign that the dedicated gateway is large enough.
+Check the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. High values (for example, above 0.7-0.8) are a good sign that the dedicated gateway is large enough.
If the `IntegratedCacheItemHitRate` or `IntegratedCacheQueryHitRate` is low, look at the `IntegratedCacheEvictedEntriesSize`. If the `IntegratedCacheEvictedEntriesSize` is high, it may mean that a larger dedicated gateway size would be beneficial. You can experiment by increasing the dedicated gateway size and comparing the new `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate`. If a larger dedicated gateway doesn't improve the `IntegratedCacheItemHitRate` or `IntegratedCacheQueryHitRate`, it's possible that reads simply don't repeat themselves enough for the integrated cache to be impactful.

### I want to understand if my dedicated gateway is too large
-It is more difficult to measure if a dedicated gateway is too large than it is to measure if a dedicated gateway is too small. In general, you should start small and slowly increase the dedicated gateway size until the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate` stop improving. In some cases, only one of the two cache hit metrics will be important, not both. For example, if your workload is primarily queries, rather than point reads, the `IntegratedCacheQueryHitRate` is much more important than the `IntegratedCacheItemHitRate`.
+It's more difficult to measure if a dedicated gateway is too large than it is to measure if a dedicated gateway is too small. In general, you should start small and slowly increase the dedicated gateway size until the `IntegratedCacheItemHitRate` and `IntegratedCacheQueryHitRate` stop improving. In some cases, only one of the two cache hit metrics will be important, not both. For example, if your workload is primarily queries, rather than point reads, the `IntegratedCacheQueryHitRate` is much more important than the `IntegratedCacheItemHitRate`.
If most data is evicted from the cache due to exceeding the `MaxIntegratedCacheStaleness`, rather than LRU, your cache might be larger than required. If `IntegratedCacheItemExpirationCount` and `IntegratedCacheQueryExpirationCount` combined are nearly as large as `IntegratedCacheEvictedEntriesSize`, you can experiment with a smaller dedicated gateway size and compare performance.
cosmos-db Quickstart Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-spark.md
print("document after patch operation")
dfAfterPatch.show() ```
-For more samples related to partial document update, see the Github code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Python/patch-sample.py).
#### [Scala](#tab/scala)
println("document after patch operation")
dfAfterPatch.show() ```
-For more samples related to partial document update, see the Github code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
+For more samples related to partial document update, see the GitHub code sample [Patch Sample](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/Scala/PatchSample.scala).
cost-management-billing Prepay Databricks Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-databricks-reserved-capacity.md
Before you buy, calculate the total DBU quantity consumed for different workload
## Purchase Databricks commit units
-You can buy Databricks plans in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22Databricks%22%7D). To buy reserved capacity, you must have the owner role for at least one enterprise subscription.
+You can buy Databricks plans in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Reservations/CreateBlade/referrer/documentation/filters/%7B%22reservedResourceType%22%3A%22Databricks%22%7D). To buy reserved capacity, you must have the owner role on at least one Enterprise Agreement subscription, Microsoft Customer Agreement subscription, or individual subscription with pay-as-you-go rates, or the required role for CSP subscriptions.
- You must be in an Owner role for at least one Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P). - For Enterprise subscriptions, **Add Reserved Instances** must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin of the subscription to enable it. Direct EA customers can now update **Reserved Instance** setting on [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). Navigate to the Policies menu to change settings.
+- For CSP subscriptions, follow the steps in [Acquire, provision, and manage Azure reserved VM instances (RI) + server subscriptions for customers](/partner-center/azure-ri-server-subscriptions).
**To Purchase:**
cost-management-billing Prepay Sql Data Warehouse Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepay-sql-data-warehouse-charges.md
Title: Save on Azure Synapse Analytics (data warehousing only) charges with Azure reserved capacity
+ Title: Save on Azure Synapse Analytics - Dedicated SQL pool (formerly SQL DW) charges with Azure reserved capacity
description: Learn how you save costs for Azure Synapse Analytics charges with reserved capacity to save money.
Last updated 10/19/2021
-# Save costs for Azure Synapse Analytics (data warehousing only) charges with reserved capacity
+# Save costs for Azure Synapse Analytics - Dedicated SQL pool (formerly SQL DW) charges with reserved capacity
-You can save money with Azure Synapse Analytics (data warehousing only) by committing to a reservation for your cDWU usage for a duration of one or three years. To purchase Azure Synapse Analytics reserved capacity, you need to choose the Azure region, and term. Then, add the Azure Synapse Analytics SKU to your cart and choose the quantity of cDWU units that you want to purchase.
+You can save money with Azure Synapse Analytics - Dedicated SQL pool (formerly SQL DW) by committing to a reservation for your cDWU usage for a duration of one or three years. To purchase Azure Synapse Analytics reserved capacity, you need to choose the Azure region, and term. Then, add the Azure Synapse Analytics SKU to your cart and choose the quantity of cDWU units that you want to purchase.
When you purchase a reservation, the Azure Synapse Analytics usage that matches the reservation attributes is no longer charged at the pay-as-you go rates.
cost-management-billing Synapse Analytics Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md
For more information about available SCU tiers and pricing discounts, you'll use
## Purchase Synapse commit units
-You buy Synapse plans in the [Azure portal](https://portal.azure.com). To buy a Pre-Purchase Plan, you must have the owner role for at least one enterprise subscription.
+You buy Synapse plans in the [Azure portal](https://portal.azure.com). To buy a Pre-Purchase Plan, you must have the owner role on at least one Enterprise Agreement subscription, Microsoft Customer Agreement subscription, or individual subscription with pay-as-you-go rates, or the required role for CSP subscriptions.
- You must be in an Owner role for at least one Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P) or Microsoft Customer Agreement or an individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P). - For Enterprise Agreement (EA) subscriptions, the **Add Reserved Instances** option must be enabled in the [EA portal](https://ea.azure.com/). Or, if that setting is disabled, you must be an EA Admin of the subscription.
+- For CSP subscriptions, follow the steps in [Acquire, provision, and manage Azure reserved VM instances (RI) + server subscriptions for customers](/partner-center/azure-ri-server-subscriptions).
### To Purchase:
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Previously updated : 10/12/2022 Last updated : 10/24/2022
If you find that your Azure VMs, Dedicated Hosts, or Azure App Services reservat
Although you can return the above offerings for a savings plan, you can't exchange a savings plan for them or for another savings plan.
-The ability to exchange Azure VM reservations will retire in the future. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
+> [!NOTE]
+> Exchanges will be unavailable for Azure reserved instances for compute services purchased on or after **January 1, 2024**. Azure savings plan for compute is designed to help you save broadly on predictable compute usage. The savings plan provides more flexibility needed to accommodate changes such as virtual machine series and regions. With savings plan providing the flexibility automatically, we’re adjusting our reservations exchange policy. You can continue to exchange VM sizes (with instance size flexibility) but we'll no longer support exchanging instance series or regions for Azure Reserved Virtual Machine Instances, Azure Dedicated Host reservations, and Azure App Services reservations. Until **December 31, 2023** you can trade-in your Azure reserved instances for compute for a savings plan. Or, you may continue to use and purchase reservations for those predictable, stable workloads where you know the specific configuration you’ll need and want additional savings. For more information, see [Self-service exchanges and refunds for Azure Reservations](../reservations/exchange-and-refund-azure-reservations.md).
The following reservations aren't eligible to be traded in for savings plans:
data-factory Azure Ssis Integration Runtime Express Virtual Network Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-express-virtual-network-injection.md
For more information, see the [DNS server name resolution](../virtual-network/vi
At present, for Azure-SSIS IR to use your own DNS server, you need to configure it with a standard custom setup following these steps:
-1. Download a custom setup script ([main.cmd](https://expressvnet.blob.core.windows.net/customsetup/main.cmd)) + its associated file ([setupdnsserver.ps1](https://expressvnet.blob.core.windows.net/customsetup/setupdnsserver.ps1)).
+1. Download a custom setup script ([main.cmd](https://expressvnet.blob.core.windows.net/customsetup/main.cmd?sp=r&st=2022-10-24T07:34:04Z&se=2042-10-24T15:34:04Z&spr=https&sv=2021-06-08&sr=b&sig=dfU16IBua6T%2FB2splQS6rZIXmgkSABaFUZd6%2BWF7fnc%3D)) + its associated file ([setupdnsserver.ps1](https://expressvnet.blob.core.windows.net/customsetup/setupdnsserver.ps1?sp=r&st=2022-10-24T07:36:00Z&se=2042-10-24T15:36:00Z&spr=https&sv=2021-06-08&sr=b&sig=TbspnXbFQv3NPnsRkNe7Q84EdLQT2f1KL%2FxqczFtaw0%3D)).
1. Replace "your-dns-server-ip" in main.cmd with the IP address of your own DNS server. A small script like the one sketched below can make the replacement.
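
   As a quick, hedged example (assuming main.cmd was downloaded to the current directory; `10.0.0.4` is a placeholder for your DNS server's IP address):

   ```python
   from pathlib import Path

   # Placeholder: substitute the IP address of your own DNS server.
   dns_server_ip = "10.0.0.4"

   script = Path("main.cmd")
   script.write_text(script.read_text().replace("your-dns-server-ip", dns_server_ip))
   ```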
data-factory Concepts Pipelines Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-pipelines-activities.md
Previously updated : 09/09/2021 Last updated : 10/24/2022 # Pipelines and activities in Azure Data Factory and Azure Synapse Analytics
An input dataset represents the input for an activity in the pipeline, and an ou
## Data movement activities
-Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the data stores listed in the table in this section. Data from any source can be written to any sink. Click a data store to learn how to copy data to and from that store.
-
+Copy Activity in Data Factory copies data from a source data store to a sink data store. Data Factory supports the data stores listed in the table in this section. Data from any source can be written to any sink.
For more information, see [Copy Activity - Overview](copy-activity-overview.md) article.
+Click a data store to learn how to copy data to and from that store.
++ ## Data transformation activities Azure Data Factory and Azure Synapse Analytics support the following transformation activities that can be added either individually or chained with another activity.
+For more information, see the [data transformation activities](transform-data.md) article.
+ Data transformation activity | Compute environment - | - [Data Flow](control-flow-execute-data-flow-activity.md) | Apache Spark clusters managed by Azure Data Factory
Data transformation activity | Compute environment
[Databricks Jar Activity](transform-data-databricks-jar.md) | Azure Databricks [Databricks Python Activity](transform-data-databricks-python.md) | Azure Databricks
-For more information, see the [data transformation activities](transform-data.md) article.
- ## Control flow activities The following control flow activities are supported:
data-factory Connector Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-office-365.md
This Microsoft 365 (Office 365) connector is supported for the following capabil
|| --| |[Copy activity](copy-activity-overview.md) (source/-)|&#9312; &#9313;| |[Mapping data flow](concepts-data-flow-overview.md) (source/-)|&#9312;|
+|[Lookup activity](control-flow-lookup-activity.md) (source/-)|&#9312; &#9313;|
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
If you were setting `dateFilterColumn`, `startTime`, `endTime`, and `userScopeFi
For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Microsoft 365 (Office 365) source. + ### Microsoft 365 (Office 365) as source To copy data from Microsoft 365 (Office 365), the following properties are supported in the copy activity **source** section:
To create a mapping data flow using the Microsoft 365 connector as a source, com
6. On the tab **Data preview** click on the **Refresh** button to fetch a sample dataset for validation.
+## Lookup activity properties
+
+To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md).
+ ## Next steps For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Troubleshoot Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-synapse-sql.md
Title: Troubleshoot the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors
+ Title: Troubleshoot the Azure Synapse Analytics, Azure SQL Database, SQL Server, Azure SQL Managed Instance, and Amazon RDS for SQL Server connectors
-description: Learn how to troubleshoot issues with the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse Analytics.
+description: Learn how to troubleshoot issues with the Azure Synapse Analytics, Azure SQL Database, SQL Server connectors, Azure SQL Managed Instance, and Amazon RDS for SQL Server in Azure Data Factory and Azure Synapse Analytics.
Previously updated : 09/20/2022 Last updated : 10/18/2022
- synapse
-# Troubleshoot the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse
+# Troubleshoot the Azure Synapse Analytics, Azure SQL Database, SQL Server, Azure SQL Managed Instance, and Amazon RDS for SQL Server connectors in Azure Data Factory and Azure Synapse
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article provides suggestions to troubleshoot common problems with the Azure Synapse Analytics, Azure SQL Database, and SQL Server connectors in Azure Data Factory and Azure Synapse.
+This article provides suggestions to troubleshoot common problems with the Azure Synapse Analytics, Azure SQL Database, SQL Server, Azure SQL Managed Instance, and Amazon RDS for SQL Server connectors in Azure Data Factory and Azure Synapse.
## Error code: SqlFailedToConnect
This article provides suggestions to troubleshoot common problems with the Azure
| If the error message contains the string "SqlException", SQL Database the error indicates that some specific operation failed. | For more information, search by SQL error code in [Database engine errors](/sql/relational-databases/errors-events/database-engine-events-and-errors). For further help, contact Azure SQL support. | | If this is a transient issue (for example, an instable network connection), add retry in the activity policy to mitigate. | For more information, see [Pipelines and activities](./concepts-pipelines-activities.md#activity-policy). | | If the error message contains the string "Client with IP address '...' is not allowed to access the server", and you're trying to connect to Azure SQL Database, the error is usually caused by an Azure SQL Database firewall issue. | In the Azure SQL Server firewall configuration, enable the **Allow Azure services and resources to access this server** option. For more information, see [Azure SQL Database and Azure Synapse IP firewall rules](/azure/azure-sql/database/firewall-configure). |-
+ |If the error message contains `Login failed for user '<token-identified principal>'`, this error is usually caused by not granting enough permission to your service principal or system-assigned managed identity or user-assigned managed identity (depends on which authentication type you choose) in your database. |Grant enough permission to your service principal or system-assigned managed identity or user-assigned managed identity in your database. <br/><br/> **For Azure SQL Database**:<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-database.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identity authentication](connector-azure-sql-database.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-database.md#user-assigned-managed-identity-authentication). <br/>&nbsp;&nbsp;&nbsp;<br/>**For Azure Synapse Analytics**:<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-data-warehouse.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identities for Azure resources authentication](connector-azure-sql-data-warehouse.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-data-warehouse.md#user-assigned-managed-identity-authentication).<br/>&nbsp;&nbsp;&nbsp;<br/>**For Azure SQL Managed Instance**: <br/>&nbsp;&nbsp;&nbsp;&nbsp;- If you use service principal authentication, follow [Service principal authentication](connector-azure-sql-managed-instance.md#service-principal-authentication).<br/>&nbsp;&nbsp;&nbsp;- If you use system-assigned managed identity authentication, follow [System-assigned managed identity authentication](connector-azure-sql-managed-instance.md#managed-identity).<br/>&nbsp;&nbsp;&nbsp;- If you use user-assigned managed identity authentication, follow [User-assigned managed identity authentication](connector-azure-sql-managed-instance.md#user-assigned-managed-identity-authentication).|
+
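As one hedged example of the remediation described in the row above, a database administrator signed in with an Azure AD identity could create a contained user for the managed identity or service principal and grant it database roles. The sketch below assumes `pyodbc` with ODBC Driver 18 for SQL Server; the server, database, admin, and identity names are placeholders, and the roles granted should match what your scenario actually requires:

```python
import pyodbc

# Placeholders: connect as an Azure AD admin of the target database.
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;"
    "Authentication=ActiveDirectoryInteractive;"
    "UID=<aad-admin>@<tenant>.onmicrosoft.com;",
    autocommit=True,
)
cursor = conn.cursor()

# Create a contained user for the service's managed identity (or service principal)
# and grant it read/write roles as an example.
cursor.execute("CREATE USER [<your-data-factory-name>] FROM EXTERNAL PROVIDER;")
cursor.execute("ALTER ROLE db_datareader ADD MEMBER [<your-data-factory-name>];")
cursor.execute("ALTER ROLE db_datawriter ADD MEMBER [<your-data-factory-name>];")
```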
## Error code: SqlOperationFailed - **Message**: `A database operation failed. Please search error to get more details.`
data-factory Quickstart Create Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory.md
Previously updated : 07/09/2022- Last updated : 10/24/2022+ # Quickstart: Create a data factory by using the Azure portal
To learn about the Azure role requirements to create a data factory, refer to [A
## Create a data factory
-A simple creation experience provided in the Azure Data Factory Studio to enable users to create a data factory within seconds. More advanced creation options are available in Azure portal.
+A quick creation experience is provided in the Azure Data Factory Studio that enables users to create a data factory within seconds. More advanced creation options are available in the Azure portal.
-### Simple creation in the Azure Data Factory Studio
+### Quick creation in the Azure Data Factory Studio
1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers. 1. Go to the [Azure Data Factory Studio](https://adf.azure.com) and choose the **Create a new data factory** radio button.
A simple creation experience provided in the Azure Data Factory Studio to enable
1. Select **Review + create**, and select **Create** after the validation is passed. After the creation is complete, select **Go to resource** to navigate to the **Data Factory** page.
-1. Select **Open** on the **Open Azure Data Factory Studio** tile to start the Azure Data Factory user interface (UI) application on a separate browser tab.
+1. Select **Launch Studio** to open Azure Data Factory Studio to start the Azure Data Factory user interface (UI) application on a separate browser tab.
- :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile highlighted.":::
+ :::image type="content" source="./media/quickstart-create-data-factory/azure-data-factory-launch-studio.png" alt-text="Home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile highlighted.":::
> [!NOTE] > If you see that the web browser is stuck at "Authorizing", clear the **Block third-party cookies and site data** check box. Or keep it selected, create an exception for **login.microsoftonline.com**, and then try to open the app again.
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
Select the button below to try it out! (If you clicked the one above already, y
You will be redirected to the configuration page shown in the image below to deploy the template. Here, you only need to create a **new resource group**. (You can leave all the other values with their defaults.) Then click **Review + create** and click **Create** to deploy the resources.
+> [!NOTE]
+> The user deploying the template needs to assign a role to a managed identity. This requires permissions that can be granted through the Owner, User Access Administrator or Managed Identity Operator roles.
+ All of the resources referenced above will be created in the new resource group, so you can easily clean them up after trying the demo. :::image type="content" source="media/quickstart-get-started/deploy-template.png" alt-text="A screenshot of the deployment template creation dialog.":::
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
DELETE method in the Web activity now supports sending a body with HTTP request
- Native UI support of parameterization added for 6 additional linked services ΓÇô SAP ODP, ODBC, Microsoft Access, Informix, Snowflake, and DB2 [Learn more](parameterize-linked-services.md?tabs=data-factory#supported-linked-service-types) - Pipeline designer enhancements added in Studio Preview experience ΓÇô users can view workflow inside pipeline objects like For Each, If Then, etc.. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/azure-data-factory-updated-pipeline-designer/ba-p/3618755)
+### Video summary
+
+> [!VIDEO https://www.youtube.com/embed?v=Bh_VA8n-SL8&list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv&index=7]
+ ## August 2022
defender-for-cloud Auto Deploy Azure Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-azure-monitoring-agent.md
The required [Log Analytics workspace solutions](../azure-monitor/insights/solut
### Additional extensions for Defender for Cloud
-The Azure Monitor Agent requires additional extensions. The ASA extension, which supports endpoint protection recommendations and fileless attack detection, is automatically installed when you auto-provision the Azure Monitor Agent.
+The Azure Monitor Agent requires additional extensions. The ASA extension, which supports endpoint protection recommendations, fileless attack detection, and Adaptive Application controls, is automatically installed when you auto-provision the Azure Monitor Agent.
### Additional security events collection
Now that you enabled the Azure Monitor Agent, check out the features that are su
- [Endpoint protection assessment](endpoint-protection-recommendations-technical.md) - [Adaptive application controls](adaptive-application-controls.md) - [Fileless attack detection](defender-for-servers-introduction.md#plan-features)-- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md)
+- [File Integrity Monitoring](file-integrity-monitoring-enable-ama.md)
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
The pipeline will run for a few minutes and save the results.
+> [!Note]
+> Install the SARIF SAST Scans Tab extension on the Azure DevOps organization to ensure that the generated analysis results are displayed automatically under the Scans tab.
+ ## Learn more - Learn how to [create your first pipeline](/azure/devops/pipelines/create-first-pipeline?view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Title: Microsoft Defender for DevOps - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Previously updated : 09/20/2022 Last updated : 10/24/2022 -+ # Overview of Defender for DevOps
-Microsoft Defender for Cloud enables comprehensive visibility, posture management and threat protection across multicloud environments including Azure, AWS, Google, and on-premises resources. Defender for DevOps integrates with GitHub Advanced Security that is embedded into both GitHub and Azure DevOps, to empower security teams with the ability to protect resources from code to cloud.
+Microsoft Defender for Cloud enables comprehensive visibility, posture management, and threat protection across multicloud environments including Azure, AWS, GCP, and on-premises resources. Defender for DevOps, a service available in Defender for Cloud, empowers security teams to manage DevOps security across multi-pipeline environments.
-Defender for DevOps uses a central console to provide security teams DevOps insights across multi-pipeline environments, such as GitHub and Azure DevOps. These insights can then be correlated with other contextual cloud security intelligence to prioritize remediation in code and apply consistent security guardrails throughout the application lifecycle. Key capabilities starting in Defender for DevOps, available through Defender for Cloud includes:
+Defender for DevOps uses a central console to empower security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, such as GitHub and Azure DevOps. Findings from Defender for DevOps can then be correlated with other contextual cloud security insights to prioritize remediation in code. Key capabilities in Defender for DevOps include:
-- **Unified visibility into DevOps security posture**: Security administrators are given full visibility into the DevOps inventory, the security posture of pre-production application code, resource configurations across multi-pipeline and multicloud environments in a single view.
+- **Unified visibility into DevOps security posture**: Security administrators now have full visibility into DevOps inventory and the security posture of pre-production application code, which includes findings from code, secret, and open-source dependency vulnerability scans. They can configure their DevOps resources across multi-pipeline and multicloud environments in a single view.
-- **Strengthen cloud resource configurations throughout the development lifecycle**: Enables security of Infrastructure as Code (IaC) templates and container images to minimize cloud misconfigurations reaching production environments, allowing security administrators to focus on any critical evolving threats.
+- **Strengthen cloud resource configurations throughout the development lifecycle**: You can enable security of Infrastructure as Code (IaC) templates and container images to minimize cloud misconfigurations reaching production environments, allowing security administrators to focus on any critical evolving threats.
-- **Prioritize remediation of critical issues in code**: Applies comprehensive code to cloud contextual insights within Defender for Cloud, security admins can help developers prioritize critical code fixes with actionable remediation and assign developer ownership by triggering custom workflows feeding directly into the tools developers use and love.
+- **Prioritize remediation of critical issues in code**: Apply comprehensive code to cloud contextual insights within Defender for Cloud. Security admins can help developers prioritize critical code fixes with Pull Request annotations and assign developer ownership by triggering custom workflows feeding directly into the tools developers use and love.
-Defender for DevOps strengthens the development lifecycle by protecting code management systems so that security issues can be found early and mitigated before deployment to production. By using security configuration recommendations, security teams have the ability to harden code management systems to protect them from attacks.
+Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps security.
## Availability | Aspect | Details | |--|--| | Release state: | Preview<br>The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. |
-| Required roles and permissions: | - **Contributor**: on the relevant Azure subscription <br> - **Security Admin Role**: for Defender for Cloud <br>- **GitHub Organization Administrator**<br>- **Developer(s)/Engineer(s)**: Access to setup GitHub workflows and Azure DevOps builds<br>- **Security Administrator(s)**: The ability to set up and evaluate the connector, evaluate and respond to Microsoft Defender for Cloud recommendations <br> - **Azure account**: with permissions to sign into Azure portal <br>- **Security Admin** permissions in Defender for Cloud to configure a connection to GitHub in Defender for Cloud <br>- **Security Reader** permissions in Defender for Cloud to view recommendations |
-
-## Benefits of Defender for DevOps
-
-Defender for DevOps gives Security Operators the ability to see how their organizations' code and development management systems work, without interfering with their developers. Security Operators can implement security operations and controls at every stage of the development lifecycle to make DevSecOps easier to achieve.
-
-Defender for DevOps grants developers the ability to scan code, infrastructure as code, credentials, and containers, to make the process easier for developers to find and remediate security issues.
-
-Defender for DevOps gives security teams the ability to set, evaluate, and enforce security policies and address risks before they are deployed to the cloud. Security teams gain visibility into their organizations' engineering systems, including security risks and pre-production security debt across multiple development environments and cloud applications.
+| Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+| Regions: | Central US |
+| Source Code Management | [Azure DevOps](https://ms.portal.azure.com/#home) |
+| Systems | [GitHub](https://github.com/) |
+| Required permissions: | <br> **Azure account** - with permissions to sign into Azure portal. <br> **Contributor** - on the relevant Azure subscription. <br> **Organization Administrator** - in GitHub. <br> **Security Admin role** - in Defender for Cloud. |
## Manage your DevOps environments in Defender for Cloud
-Defender for DevOps allows you to manage your connected environments and provides your security teams with a high level overview of all the issues that may exist within them.
+Defender for DevOps allows you to manage your connected environments and gives your security teams a high-level overview of issues discovered in them through the [Defender for DevOps console](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/DevOpsSecurity).
:::image type="content" source="media/defender-for-devops-introduction/devops-dashboard.png" alt-text="Screenshot of the Defender for DevOps dashboard." lightbox="media/defender-for-devops-introduction/devops-dashboard.png":::
-Here, you can add environments, open and customize DevOps workbooks to show your desired metrics, view our guides and give feedback, and configure your pull request annotations.
+Here, you can [add GitHub](quickstart-onboard-github.md) and [Azure DevOps](quickstart-onboard-devops.md) environments, customize DevOps workbooks to show your desired metrics, view our guides and give feedback, and [configure your pull request annotations](tutorial-enable-pull-request-annotations.md).
-### Understanding your metrics
+### Understanding your DevOps security
:::image type="content" source="media/defender-for-devops-introduction/devops-metrics.png" alt-text="Screenshot of the top of the Defender for DevOps page that shows all of your attached environments and their metrics." lightbox="media/defender-for-devops-introduction/devops-metrics.png"::: |Page section| Description | |--|--|
-| :::image type="content" source="media/defender-for-devops-introduction/number-vulnerabilities.png" alt-text="Screenshot of the vulnerabilities section of the page."::: | From here you can see the total number of vulnerabilities that were found by the Defender for DevOps scanners and you can organize the results by severity level. |
-| :::image type="content" source="media/defender-for-devops-introduction/number-findings.png" alt-text="Screenshot of the findings section and the associated recommendations."::: | Presents the total number of findings by scan type and the associated recommendations for any onboarded resources. Selecting a result will take you to relevant recommendations. |
-| :::image type="content" source="media/defender-for-devops-introduction/connectors-section.png" alt-text="Screenshot of the connectors section."::: | Provides visibility into the number of connectors. The number of repositories that have been onboarded by an environment. |
+| :::image type="content" source="media/defender-for-devops-introduction/number-vulnerabilities.png" alt-text="Screenshot of the vulnerabilities section of the page."::: | Shows the total number of vulnerabilities found by Defender for DevOps. You can organize the results by severity level. |
+| :::image type="content" source="media/defender-for-devops-introduction/number-findings.png" alt-text="Screenshot of the findings section and the associated recommendations."::: | Presents the total number of findings by scan type and the associated recommendations for any onboarded resources. Selecting a result takes you to corresponding recommendations. |
+| :::image type="content" source="media/defender-for-devops-introduction/connectors-section.png" alt-text="Screenshot of the connectors section."::: | Provides visibility into the number of connectors and repositories that have been onboarded by an environment. |
### Review your findings
-The lower half of the page allows you to review all of the onboarded DevOps resources and the security information related to them.
+The lower half of the page allows you to review onboarded DevOps resources and the security information related to them.
:::image type="content" source="media/defender-for-devops-introduction/bottom-of-page.png" alt-text="Screenshot of the lower half of the Defender for DevOps overview page." lightbox="media/defender-for-devops-introduction/bottom-of-page.png":::
+On this part of the screen you see:
-On this part of the screen you will see:
--- **Repositories**: Lists all onboarded repositories from GitHub and Azure DevOps. You can get more information about specific resources by selecting it.
+- **Repositories** - Lists onboarded repositories from GitHub and Azure DevOps. View more information about a specific resource by selecting it.
-- **Pull request status**: Shows whether PR annotations are enabled for the repository.
+- **Pull request annotation status** - Shows whether PR annotations are enabled for the repository.
- `On` - PR annotations are enabled.
- - `Off` - PR annotations are not enabled.
- - `NA` - Defender for Cloud doesn't have information about the enablement. Currently, this information is available only for Azure DevOps repositories.
+ - `Off` - PR annotations aren't enabled.
+ - `NA` - Defender for Cloud doesn't have information about enablement.
+
+ > [!NOTE]
+ > Currently, this information is available only for Azure DevOps repositories.
+
+- **Exposed secrets** - Shows the number of secrets identified in the repositories.
-- **Total exposed secrets** - Shows number of secrets identified in the repositories.
+- **OSS vulnerabilities** - Shows the number of open source dependency vulnerabilities identified in the repositories.
-- **OSS vulnerabilities** ΓÇô Shows number of vulnerabilities identified in the repositories. Currently, this information is available only for GitHub repositories.
+ > [!NOTE]
+ > Currently, this information is available only for GitHub repositories.
-- **Total code scanning findings** ΓÇô Shows number of other code vulnerabilities and misconfigurations identified in the repositories.
+- **Code scanning findings** - Shows the number of code vulnerabilities and misconfigurations identified in the repositories.
## Learn more
On this part of the screen you will see:
## Next steps
-Learn how to [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
+[Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
-Learn how to [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
+[Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
defender-for-cloud Regulatory Compliance Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md
The [CIS Benchmark](https://www.cisecurity.org/benchmark/azure/) is authored by
Since we've released the Microsoft cloud security benchmark, many customers have chosen to migrate to it as a replacement for CIS benchmarks. ### What standards are supported in the compliance dashboard?
-By default, the regulatory compliance dashboard shows you the Microsoft cloud security benchmark. The Microsoft cloud security benchmark is the Microsoft-authored, Azure-specific guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft cloud security benchmark introduction](/security/benchmark/azure/introduction).
+By default, the regulatory compliance dashboard shows you the Microsoft cloud security benchmark. The Microsoft cloud security benchmark is the Microsoft-authored guidelines for security, and compliance best practices based on common compliance frameworks. Learn more in the [Microsoft cloud security benchmark introduction](/security/benchmark/azure/introduction).
To track your compliance with any other standard, you'll need to explicitly add them to your dashboard.
defender-for-cloud Troubleshooting Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md
Previously updated : 07/17/2022 Last updated : 10/24/2022 # Microsoft Defender for Cloud Troubleshooting Guide
By default the Microsoft Antimalware user interface is disabled, but you can [en
If you experience issues loading the workload protection dashboard, make sure that the user that first enabled Defender for Cloud on the subscription and the user that want to turn on data collection have the *Owner* or *Contributor* role on the subscription. If that is the case, users with the *Reader* role on the subscription can see the dashboard, alerts, recommendations, and policy.
+## Troubleshoot Azure DevOps Organization connector issues
+
+The `Unable to find Azure DevOps Organization` error occurs when you create an Azure DevOps Organization (ADO) connector and the incorrect account was signed in and granted access to the Microsoft Security DevOps App. This can also result in the `Failed to create Azure DevOps connector. Error: 'Unable to find Azure DevOps organization : OrganizationX in available organizations: Organization1, Organization2, Organization3.'` error.
+
+It's important to know which account you're signed in to when you authorize the access, because that account is the one that's used. Accounts can share the same email address yet belong to different tenants.
+
+You should [check which account](https://app.vssps.visualstudio.com/profile/view) you're currently signed in to and ensure that the right account and tenant combination is selected.
++
+**To change your current account**:
+
+1. Select **profile page**.
+
+ :::image type="content" source="./media/troubleshooting-guide/authorize-profile-page.png" alt-text="Screenshot showing how to switch to the ADO Profile Page.":::
+
+1. On your profile page, select the drop-down menu to select another account.
+
+ :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that is used to select an account.":::
+
+The first time you authorize the Microsoft Security application, you can select an account. However, each time you sign in after that, the page defaults to the signed-in account without giving you the chance to select a different account.
+
+**To change the default account**:
+
+1. [Sign in](https://app.vssps.visualstudio.com/profile/view) and select the same tenant you use in Azure from the dropdown menu.
+
+1. Create a new connector, and authorize it. When the pop-up page appears, ensure it shows the correct tenant.
+
+If this process doesn't fix your issue, revoke the Microsoft Security DevOps app's permission from all tenants in Azure DevOps and repeat the preceding steps. You should then see the authorization pop-up again when you authorize the connector.
++ ## Contacting Microsoft Support You can also find troubleshooting information for Defender for Cloud at the [Defender for Cloud Q&A page](/answers/topics/azure-security-center.html). If you need further troubleshooting, you can open a new support request using **Azure portal** as shown below:
defender-for-cloud Tutorial Enable Pull Request Annotations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-pull-request-annotations.md
Title: Tutorial Enable pull request annotations in GitHub or in Azure DevOps
description: Add pull request annotations in GitHub or in Azure DevOps. By adding pull request annotations, your SecOps and developer teams can stay on the same page when it comes to mitigating issues. Previously updated : 09/20/2022 Last updated : 10/20/2022 # Tutorial: Enable pull request annotations in GitHub and Azure DevOps
-With Microsoft Defender for Cloud, you can configure pull request annotations in Azure DevOps. Pull request annotations are enabled in Microsoft Defender for Cloud by security operators and are sent to the developers who can then take action directly in their pull requests. This allows both security operators and developers to see the same security issue information in the systems they're accustomed to working in. Security operators see unresolved findings in Defender for Cloud and developers see them in their source code management systems. These issues can then be acted upon by developers when they submit their pull requests. This helps prevent and fix potential security vulnerabilities and misconfigurations before they enter the production stage.
+Defender for DevOps exposes security findings as annotations in Pull Requests (PR). Security operators can enable PR annotations in Microsoft Defender for Cloud. Any exposed issues can then be remedied by developers. This process can prevent and fix potential security vulnerabilities and misconfigurations before they enter the production stage. Defender for DevOps annotates the vulnerabilities within the differences in the file rather than all the vulnerabilities detected across the entire file. Developers are able to see annotations in their source code management systems and Security operators can see any unresolved findings in Microsoft Defender for Cloud.
-You can get pull request annotations in GitHub if you're a customer of GitHub Advanced Security.
+With Microsoft Defender for Cloud, you can configure PR annotations in Azure DevOps. You can get PR annotations in GitHub if you're a GitHub Advanced Security customer.
> [!NOTE]
-> During the Defender for DevOps preview period, GitHub Advanced Security for Azure DevOps (GHAS for AzDO) is also providing a free trial of pull request annotations.
+> GitHub Advanced Security for Azure DevOps (GHAzDO) is providing a free trial of PR annotations during the Defender for DevOps preview.
In this tutorial you'll learn how to:
Before you can follow the steps in this tutorial, you must:
**For GitHub**:
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- Be a [GitHub Advanced Security](https://docs.github.com/en/get-started/learning-about-github/about-github-advanced-security) customer.
+- [Connect your GitHub repositories to Microsoft Defender for Cloud](quickstart-onboard-github.md).
+- [Configure the Microsoft Security DevOps GitHub action](github-action.md).
**For Azure DevOps**:
+- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- [Connect your Azure DevOps repositories to Microsoft Defender for Cloud](quickstart-onboard-devops.md).
+- [Configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+- [Set up secret scanning in Azure DevOps](detect-credential-leaks.md#setup-secret-scanning-in-azure-devops).
## Enable pull request annotations in GitHub
-By enabling pull request annotations in GitHub, your developers gain the ability to see their security issues when they submit their pull requests directly to the main branch.
+By enabling pull request annotations in GitHub, your developers gain the ability to see their security issues when they create a PR directly to the main branch.
**To enable pull request annotations in GitHub**:
-1. Sign in to [GitHub](https://github.com/).
+1. Navigate to [GitHub](https://github.com/) and sign in.
-1. Select the relevant repository.
+1. Select a repository that you've onboarded to Defender for Cloud.
-1. Select **.github/workflows**.
+1. Navigate to **`Your repository's home page`** > **.github/workflows**.
- :::image type="content" source="media/tutorial-enable-pr-annotations/workflow-folder.png" alt-text="Screenshot that shows where to navigate to, to select the GitHub workflow folder.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/workflow-folder.png" alt-text="Screenshot that shows where to navigate to, to select the GitHub workflow folder." lightbox="media/tutorial-enable-pr-annotations/workflow-folder.png":::
-1. Select **msdevopssec.yml**.
+1. Select **msdevopssec.yml**, which was created in the [prerequisites](#prerequisites).
- :::image type="content" source="media/tutorial-enable-pr-annotations/devopssec.png" alt-text="Screenshot that shows you where on the screen to select the msdevopssec.yml file.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/devopssec.png" alt-text="Screenshot that shows you where on the screen to select the msdevopssec.yml file." lightbox="media/tutorial-enable-pr-annotations/devopssec.png":::
1. Select **edit**.
- :::image type="content" source="media/tutorial-enable-pr-annotations/edit-button.png" alt-text="Screenshot that shows you what the edit button looks like.":::
+ :::image type="content" source="media/tutorial-enable-pr-annotations/edit-button.png" alt-text="Screenshot that shows you what the edit button looks like." lightbox="media/tutorial-enable-pr-annotations/edit-button.png":::
1. Locate and update the trigger section to include: ```yml # Triggers the workflow on push or pull request events but only for the main branch
- push:
- branches: [ main ]
pull_request:
- branches: [ main ]
+ branches: ["main"]
```
-
- By adding these lines to your yaml file, you'll configure the action to run when either a push or pull request event occurs on the designated repository.ΓÇ»
You can also view a [sample repository](https://github.com/microsoft/security-devops-action/tree/main/samples).
By enabling pull request annotations in GitHub, your developers gain the ability
1. Select **Commit changes**.
-1. Select **Files changed**.
+Any issues that are discovered by the scanner will be viewable in the Files changed section of your pull request.
-You'll now be able to see all the issues that were discovered by the scanner.
+### Resolve security issues in GitHub
-### Mitigate GitHub issues found by the scanner
-
-Once you've configured the scanner, you'll be able to view all issues that were detected.
-
-**To mitigate GitHub issues found by the scanner**:
+**To resolve security issues in GitHub**:
1. Navigate through the page and locate an affected file with an annotation.
-1. Select **Dismiss alert**.
+1. Follow the remediation steps in the annotation. If you choose not to remediate the annotation, select **Dismiss alert**.
1. Select a reason to dismiss:
Once you've configured the scanner, you'll be able to view all issues that were
## Enable pull request annotations in Azure DevOps
-By enabling pull request annotations in Azure DevOps, your developers gain the ability to see their security issues when they submit their pull requests directly to the main branch.
+By enabling pull request annotations in Azure DevOps, your developers gain the ability to see their security issues when they create PRs directly to the main branch.
### Enable Build Validation policy for the CI Build
Before you can enable pull request annotations, your main branch must have enabl
1. Navigate to **Project settings** > **Repositories**.
+ :::image type="content" source="media/tutorial-enable-pr-annotations/project-settings.png" alt-text="Screenshot that shows you where to navigate to, to select repositories.":::
+ 1. Select the repository to enable pull requests on. 1. Select **Policies**.
-1. Navigate to **Branch Policies** > **Build Validation**.
+1. Navigate to **Branch Policies** > **Main branch**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/branch-policies.png" alt-text="Screenshot that shows where to locate the branch policies." lightbox="media/tutorial-enable-pr-annotations/branch-policies.png":::
+
+1. Locate the Build Validation section.
+
+1. Ensure the CI Build is toggled to **On**.
+
+ :::image type="content" source="media/tutorial-enable-pr-annotations/build-validation.png" alt-text="Screenshot that shows where the CI Build toggle is located.":::
+
+1. Select **Save**.
-1. Toggle the CI Build to **On**.
+ :::image type="content" source="media/tutorial-enable-pr-annotations/validation-policy.png" alt-text="Screenshot that shows the build validation.":::
+
+Once you've completed these steps, you can select the build pipeline you created previously and customize its settings to suit your needs.
### Enable pull request annotations
Before you can enable pull request annotations, your main branch must have enabl
1. Toggle Pull request annotations to **On**.
-1. Select a category from the drop-down menu.
+ :::image type="content" source="media/tutorial-enable-pr-annotations/annotation-on.png" alt-text="Screenshot that shows the toggle switched to on.":::
+
+1. (Optional) Select a category from the drop-down menu.
> [!NOTE]
- > Only secret scan results is currently supported.
+ > Only secret scan results are currently supported.
-1. Select a severity level from the drop-down menu.
+1. (Optional) Select a severity level from the drop-down menu.
+
+ > [!NOTE]
+ > Only high severity findings are currently supported.
1. Select **Save**.
-All annotations will now be displayed based on your configurations with the relevant line of code.
+From now on, all annotations on your main branch are displayed, based on your configuration, alongside the relevant lines of code.
-### Mitigate Azure DevOps issues found by the scanner
+### Resolve security issues in Azure DevOps
Once you've configured the scanner, you'll be able to view all issues that were detected.
-**To mitigate Azure DevOps issues found by the scanner**:
+**To resolve security issues in Azure DevOps**:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Sign in to [Azure DevOps](https://azure.microsoft.com/products/devops).
1. Navigate to **Pull requests**.
-1. Scroll through the Overview page and locate an affected line with an annotation.
+ :::image type="content" source="media/tutorial-enable-pr-annotations/pull-requests.png" alt-text="Screenshot showing where to go to navigate to pull requests.":::
+
+1. On the Overview or Files page, locate an affected line with an annotation.
-1. Select **Active**.
+1. Follow the remediation steps in the annotation.
-1. Select action to take:
+1. Select **Active** to change the status of the annotation and access the dropdown menu.
+
+1. Select an action to take:
- **Active** - The default status for new annotations. - **Pending** - The finding is being worked on.
Once you've configured the scanner, you'll be able to view all issues that were
- **Won't fix** - The finding is noted but won't be fixed. - **Closed** - The discussion in this annotation is closed.
+Defender for DevOps will re-activate an annotation if the security issue is not fixed in a new iteration.
+ ## Learn more In this tutorial, you learned how to enable pull request annotations in GitHub and Azure DevOps. Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
-Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
+Learn how to [Discover misconfigurations in Infrastructure as Code](iac-vulnerabilities.md).
-Learn how to [connect your Azure DevOps](quickstart-onboard-devops.md) to Defender for Cloud.
+Learn how to [detect exposed secrets in code](detect-credential-leaks.md).
## Next steps
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. | | Priority | User. Alert | | Hostname | Sensor IP address |
-| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. |<br /> UUID (Optional): The UUID the alert. |
+| Message | CyberX platform name: The sensor name.<br /> Microsoft Defender for IoT Alert: The title of the alert.<br /> Type: The type of the alert. Can be **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**.<br /> Severity: The severity of the alert. Can be **Warning**, **Minor**, **Major**, or **Critical**.<br /> Source: The source device name.<br /> Source IP: The source device IP address.<br /> Protocol (Optional): The detected source protocol.<br /> Address (Optional): Source protocol address.<br /> Destination: The destination device name.<br /> Destination IP: The IP address of the destination device.<br /> Protocol (Optional): The detected destination protocol.<br /> Address (Optional): The destination protocol address.<br /> Message: The message of the alert.<br /> Alert group: The alert group associated with the alert. <br /> UUID (Optional): The UUID the alert. |
| Syslog object output | Description | |--|--|
Enter the following parameters:
| Date and time | Date and time that the syslog server machine received the information. | | Priority | User.Alert | | Hostname | Sensor IP address |
-| Message | CEF:0 <br />Microsoft Defender for IoT <br />Sensor name: The name of the sensor appliance. <br />Sensor version <br />Alert Title: The title of the alert. <br />msg: The message of the alert. <br />protocol: The protocol of the alert. <br />severity: **Warning**, **Minor**, **Major**, or **Critical**. <br />type: **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> start: The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip: IP address of the source device. <br />dst_ip: IP address of the destination device.<br />cat: The alert group associated with the alert. |
+| Message | CEF:0 <br />Microsoft Defender for IoT <br />Sensor name= The name of the sensor appliance. <br />Sensor version <br />Alert title= The title of the alert. <br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />dst_ip= IP address of the destination device.<br />cat= The alert group associated with the alert. |
| Syslog LEEF output format | Description | |--|--|
devops-project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devops-project/overview.md
# Overview of DevOps Starter
-[!IMPORTANT] DevOps Starter will be retired on March 31, 2023. [Learn more](/azure/devops-project/retirement-and-migration).
+>[!IMPORTANT]
+>DevOps Starter will be retired on March 31, 2023. [Learn more](/azure/devops-project/retirement-and-migration).
DevOps Starter makes it easy to get started on Azure using either GitHub actions or Azure DevOps. It helps you launch your favorite app on the Azure service of your choice in just a few quick steps from the Azure portal.
devtest-labs Configure Shared Image Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-shared-image-gallery.md
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/
### Create or update shared image gallery ```rest
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/sharedgalleries/{name}?api-version= 2018-10-15-preview
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/sharedgalleries/{name}?api-version=2018-10-15-preview
Body: { "properties":{
Body:
### List images in a shared image gallery ```rest
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/sharedgalleries/{name}/sharedimages?api-version= 2018-10-15-preview
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DevTestLab/labs/{labName}/sharedgalleries/{name}/sharedimages?api-version=2018-10-15-preview
``` ## Next steps
-See the following articles on creating a VM using an image from the attached shared image gallery: [Create a VM using a shared image from the gallery](add-vm-use-shared-image.md)
+See the following articles on creating a VM using an image from the attached shared image gallery: [Create a VM using a shared image from the gallery](add-vm-use-shared-image.md)
digital-twins Concepts Data Explorer Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-explorer-plugin.md
Once the target table is created, you can use the Azure Digital Twins plugin to
Here's an example of a schema that might be used to represent shared data.
-| `timestamp` | `twinId` | `modelId` | `name` | `value` | `relationshipTarget` | `relationshipID` |
-| | | | | | | |
-| 2021-02-01 17:24 | ConfRoomTempSensor | `dtmi:com:example:TemperatureSensor;1` | temperature | 301.0 | | |
+| `TimeStamp` | `SourceTimeStamp` | `TwinId` | `ModelId` | `Name` | `Value` | `RelationshipTarget` | `RelationshipID` |
+| | | | | | | | |
+| 2021-02-01 17:24 | 2021-02-01 17:11 | ConfRoomTempSensor | `dtmi:com:example:TemperatureSensor;1` | temperature | 301.0 | | |
Digital twin properties are stored as key-value pairs (`name, value`). `name` and `value` are stored as dynamic data types.
frontdoor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/best-practices.md
na Previously updated : 07/10/2022 Last updated : 10/25/2022
If you combine both Front Door and Traffic Manager together, it's unlikely that
If you need content caching and delivery (CDN), TLS termination, advanced routing capabilities, or a web application firewall (WAF), consider using Front Door. For simple global load balancing with direct connections from your client to your endpoints, consider using Traffic Manager. For more information about selecting a load balancing option, see [Load-balancing options](/azure/architecture/guide/technology-choices/load-balancing-overview).
+### Restrict traffic to your origins
+
+Front Door's features work best when traffic only flows through Front Door. You should configure your origin to block traffic that hasn't been sent through Front Door. For more information, see [Secure traffic to Azure Front Door origins](origin-security.md).
+ ### Use the latest API version and SDK version When you work with Front Door by using APIs, ARM templates, Bicep, or Azure SDKs, it's important to use the latest available API or SDK version. API and SDK updates occur when new functionality is available, and also contain important security patches and bug fixes.
frontdoor Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-terraform.md
+
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium profile using Terraform'
+description: This quickstart describes how to create an Azure Front Door Standard/Premium using Terraform.
+++ Last updated : 10/25/2022+++
+ na
++
+# Create a Front Door Standard/Premium profile using Terraform
+
+This quickstart describes how to use Terraform to create a Front Door profile to set up high availability for a web endpoint.
+
+The steps in this article were tested with the following Terraform and Terraform provider versions:
+
+- [Terraform v1.3.2](https://releases.hashicorp.com/terraform/)
+- [AzureRM Provider v.3.27.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+
+## Prerequisites
++
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+- IP address or FQDN of a website or web application.
+
+## Implement the Terraform code
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ ```terraform
+ # Configure the Azure provider
+ terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~> 3.27.0"
+ }
+
+ random = {
+ source = "hashicorp/random"
+ }
+ }
+
+ required_version = ">= 1.1.0"
+ }
+
+ provider "azurerm" {
+ features {}
+ }
+ ```
+
+1. Create a file named `resource-group.tf` and insert the following code:
+
+ ```terraform
+ resource "azurerm_resource_group" "my_resource_group" {
+ name = var.resource_group_name
+ location = var.location
+ }
+ ```
+
+1. Create a file named `app-service.tf` and insert the following code:
+
+ ```terraform
+ locals {
+ app_name = "myapp-${lower(random_id.app_name.hex)}"
+ app_service_plan_name = "AppServicePlan"
+ }
+
+ resource "azurerm_service_plan" "app_service_plan" {
+ name = local.app_service_plan_name
+ location = var.location
+ resource_group_name = azurerm_resource_group.my_resource_group.name
+
+ sku_name = var.app_service_plan_sku_name
+ os_type = "Windows"
+ worker_count = var.app_service_plan_capacity
+ }
+
+ resource "azurerm_windows_web_app" "app" {
+ name = local.app_name
+ location = var.location
+ resource_group_name = azurerm_resource_group.my_resource_group.name
+ service_plan_id = azurerm_service_plan.app_service_plan.id
+
+ https_only = true
+
+ site_config {
+ ftps_state = "Disabled"
+ minimum_tls_version = "1.2"
+ ip_restriction = [ {
+ service_tag = "AzureFrontDoor.Backend"
+ ip_address = null
+ virtual_network_subnet_id = null
+ action = "Allow"
+ priority = 100
+ headers = [ {
+ x_azure_fdid = [ azurerm_cdn_frontdoor_profile.my_front_door.resource_guid ]
+ x_fd_health_probe = []
+ x_forwarded_for = []
+ x_forwarded_host = []
+ } ]
+ name = "Allow traffic from Front Door"
+ } ]
+ }
+ }
+ ```
+
+1. Create a file named `front-door.tf` and insert the following code:
+
+ ```terraform
+ locals {
+ front_door_profile_name = "MyFrontDoor"
+ front_door_endpoint_name = "afd-${lower(random_id.front_door_endpoint_name.hex)}"
+ front_door_origin_group_name = "MyOriginGroup"
+ front_door_origin_name = "MyAppServiceOrigin"
+ front_door_route_name = "MyRoute"
+ }
+
+ resource "azurerm_cdn_frontdoor_profile" "my_front_door" {
+ name = local.front_door_profile_name
+ resource_group_name = azurerm_resource_group.my_resource_group.name
+ sku_name = var.front_door_sku_name
+ }
+
+ resource "azurerm_cdn_frontdoor_endpoint" "my_endpoint" {
+ name = local.front_door_endpoint_name
+ cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
+ }
+
+ resource "azurerm_cdn_frontdoor_origin_group" "my_origin_group" {
+ name = local.front_door_origin_group_name
+ cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.my_front_door.id
+ session_affinity_enabled = true
+
+ load_balancing {
+ sample_size = 4
+ successful_samples_required = 3
+ }
+
+ health_probe {
+ path = "/"
+ request_type = "HEAD"
+ protocol = "Https"
+ interval_in_seconds = 100
+ }
+ }
+
+ resource "azurerm_cdn_frontdoor_origin" "my_app_service_origin" {
+ name = local.front_door_origin_name
+ cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
+
+ enabled = true
+ host_name = azurerm_windows_web_app.app.default_hostname
+ http_port = 80
+ https_port = 443
+ origin_host_header = azurerm_windows_web_app.app.default_hostname
+ priority = 1
+ weight = 1000
+ certificate_name_check_enabled = true
+ }
+
+ resource "azurerm_cdn_frontdoor_route" "my_route" {
+ name = local.front_door_route_name
+ cdn_frontdoor_endpoint_id = azurerm_cdn_frontdoor_endpoint.my_endpoint.id
+ cdn_frontdoor_origin_group_id = azurerm_cdn_frontdoor_origin_group.my_origin_group.id
+ cdn_frontdoor_origin_ids = [azurerm_cdn_frontdoor_origin.my_app_service_origin.id]
+
+ supported_protocols = ["Http", "Https"]
+ patterns_to_match = ["/*"]
+ forwarding_protocol = "HttpsOnly"
+ link_to_default_domain = true
+ https_redirect_enabled = true
+ }
+ ```
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ ```terraform
+ variable "location" {
+ type = string
+ default = "westus2"
+ }
+
+ variable "resource_group_name" {
+ type = string
+ default = "FrontDoor"
+ }
+
+ variable "app_service_plan_sku_name" {
+ type = string
+ default = "S1"
+ }
+
+ variable "app_service_plan_capacity" {
+ type = number
+ default = 1
+ }
+
+ variable "app_service_plan_sku_tier_name" {
+ type = string
+ default = "Standard"
+ }
+
+ variable "front_door_sku_name" {
+ type = string
+ default = "Standard_AzureFrontDoor"
+ validation {
+ condition = contains(["Standard_AzureFrontDoor", "Premium_AzureFrontDoor"], var.front_door_sku_name)
+ error_message = "The SKU value must be Standard_AzureFrontDoor or Premium_AzureFrontDoor."
+ }
+ }
+
+ resource "random_id" "app_name" {
+ byte_length = 8
+ }
+
+ resource "random_id" "front_door_endpoint_name" {
+ byte_length = 8
+ }
+ ```
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ ```terraform
+ output "frontDoorEndpointHostName" {
+ value = azurerm_cdn_frontdoor_endpoint.my_endpoint.host_name
+ }
+ ```
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [Portal](#tab/Portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the FrontDoor resource group.
+
+1. Select the Front Door you created and you'll be able to see the endpoint hostname. Copy the hostname and paste it on to the address bar of a browser. Press enter and your request will automatically get routed to the web app.
+
+ :::image type="content" source="./media/create-front-door-bicep/front-door-bicep-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
+
+# [Azure CLI](#tab/CLI)
+
+Run the following command:
+
+```azurecli-interactive
+az resource list --resource-group FrontDoor
+```
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following command:
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName FrontDoor
+```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+In this quickstart, you deployed a simple Front Door profile using Terraform. [Learn more about Azure Front Door.](front-door-overview.md)
frontdoor Origin Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/origin-security.md
+
+ Title: Secure traffic to origins - Azure Front Door
+description: This article explains how to restrict traffic to your origins to ensure it's been processed by Azure Front Door.
+++++ Last updated : 10/25/2022+
+zone_pivot_groups: front-door-tiers
++
+# Secure traffic to Azure Front Door origins
+
+Front Door's features work best when traffic only flows through Front Door. You should configure your origin to block traffic that hasn't been sent through Front Door. Otherwise, traffic might bypass Front Door's web application firewall, DDoS protection, and other security features.
++
+> [!NOTE]
+> *Origin* and *origin group* in this article refer to the backend and backend pool of the Azure Front Door (classic) configuration.
+++
+Front Door provides several approaches that you can use to restrict your origin traffic.
+
+## Private Link origins
+
+When you use the premium SKU of Front Door, you can use Private Link to send traffic to your origin. [Learn more about Private Link origins.](private-link.md)
+
+You should configure your origin to disallow traffic that doesn't come through Private Link. The way that you restrict traffic depends on the type of Private Link origin you use:
+
+- Azure App Service and Azure Functions automatically disable access through public internet endpoints when you use Private Link. For more information, see [Using Private Endpoints for Azure Web App](../app-service/networking/private-endpoint.md).
+- Azure Storage provides a firewall, which you can use to deny traffic from the internet. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md).
+- Internal load balancers with Azure Private Link service aren't publicly routable. You can also configure network security groups to ensure that you disallow access to your virtual network from the internet.
++
+## Public IP address-based origins
+
+When you use public IP address-based origins, there are two approaches you should use together to ensure that traffic flows through your Front Door instance:
+
+- Configure IP address filtering to ensure that requests to your origin are only accepted from the Front Door IP address ranges.
+- Configure your application to verify the `X-Azure-FDID` header value, which Front Door attaches to all requests to the origin, and ensure that its value matches your Front Door's identifier.
+
+### IP address filtering
+
+Configure IP address filtering for your origins to accept traffic from Azure Front Door's backend IP address space and Azure's infrastructure services only.
+
+The *AzureFrontDoor.Backend* service tag provides a list of the IP addresses that Front Door uses to connect to your origins. You can use this service tag within your [network security group rules](../virtual-network/network-security-groups-overview.md#security-rules). You can also download the [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519) data set, which is updated regularly with the latest IP addresses.
+
+You should also allow traffic from Azure's [basic infrastructure services](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) through the virtualized host IP addresses `168.63.129.16` and `169.254.169.254`.
+
+> [!WARNING]
+> Front Door's IP address space changes regularly. Ensure that you use the *AzureFrontDoor.Backend* service tag instead of hard-coding IP addresses.
+
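If you manage network security groups with the Azure SDK rather than the portal, a rule based on the service tag might look like the following minimal Python sketch. It's an illustration only: the subscription, resource group, NSG, and rule names are placeholders, and it covers just the Front Door allow rule described above (you'd still add rules for the infrastructure addresses and block other internet traffic as needed).

```python
# Minimal sketch (assumes azure-identity and azure-mgmt-network are installed;
# the subscription, resource group, NSG, and rule names are placeholders).
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

subscription_id = "<subscription-id>"
network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Allow inbound 80/443 only from the AzureFrontDoor.Backend service tag, so the
# rule tracks Front Door's changing IP address space automatically.
front_door_rule = SecurityRule(
    protocol="Tcp",
    access="Allow",
    direction="Inbound",
    priority=100,
    source_address_prefix="AzureFrontDoor.Backend",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_ranges=["80", "443"],
)

poller = network_client.security_rules.begin_create_or_update(
    "my-origin-rg",           # placeholder resource group
    "my-origin-nsg",          # placeholder network security group
    "AllowFrontDoorInbound",  # placeholder rule name
    front_door_rule,
)
poller.result()
```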
+### Front Door identifier
+
+IP address filtering alone isn't sufficient to secure traffic to your origin, because other Azure customers use the same IP addresses. You should also configure your origin to ensure that traffic has originated from *your* Front Door profile.
+
+Azure generates a unique identifier for each Front Door profile. You can find the identifier in the Azure portal, by looking for the *Front Door ID* value in the Overview page of your profile.
+
+When Front Door makes a request to your origin, it adds the `X-Azure-FDID` request header. Your origin should inspect the header on incoming requests, and reject requests where the value doesn't match your Front Door profile's identifier.
+
+### Example configuration
+
+The following examples show how you can secure different types of origins.
+
+# [App Service and Functions](#tab/app-service-functions)
+
+You can use [App Service access restrictions](../app-service/app-service-ip-restrictions.md#restrict-access-to-a-specific-azure-front-door-instance) to perform IP address filtering as well as header filtering. The capability is provided by the platform, and you don't need to change your application or host.
+
+# [Application Gateway](#tab/application-gateway)
+
+Application Gateway is deployed into your virtual network. Configure a network security group rule to allow inbound access on ports 80 and 443 from the *AzureFrontDoor.Backend* service tag, and disallow inbound traffic on ports 80 and 443 from the *Internet* service tag.
+
+Use a custom WAF rule to check the `X-Azure-FDID` header value. For more information, see [Create and use Web Application Firewall v2 custom rules on Application Gateway](../web-application-firewall/ag/create-custom-waf-rules.md#example-7).
+
+# [IIS](#tab/iis)
+
+When you run [Microsoft Internet Information Services (IIS)](https://www.iis.net/) on an Azure-hosted virtual machine, you should create a network security group in the virtual network that hosts the virtual machine. Configure a network security group rule to allow inbound access on ports 80 and 443 from the *AzureFrontDoor.Backend* service tag, and disallow inbound traffic on ports 80 and 443 from the *Internet* service tag.
+
+Use an IIS configuration file like in the following example to inspect the `X-Azure-FDID` header on your incoming requests:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<configuration>
+ <system.webServer>
+ <rewrite>
+ <rules>
+ <rule name="Filter_X-Azure-FDID" patternSyntax="Wildcard" stopProcessing="true">
+ <match url="*" />
+ <conditions>
+ <add input="{HTTP_X_AZURE_FDID}" pattern="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" negate="true" />
+ </conditions>
+ <action type="AbortRequest" />
+ </rule>
+ </rules>
+ </rewrite>
+ </system.webServer>
+</configuration>
+```
+
+# [AKS NGINX controller](#tab/aks-nginx)
+
+When you run [AKS with an NGINX ingress controller](../aks/ingress-basic.md), you should create a network security group in the virtual network that hosts the AKS cluster. Configure a network security group rule to allow inbound access on ports 80 and 443 from the *AzureFrontDoor.Backend* service tag, and disallow inbound traffic on ports 80 and 443 from the *Internet* service tag.
+
+Use a Kubernetes ingress configuration file like in the following example to inspect the `X-Azure-FDID` header on your incoming requests:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: frontdoor-ingress
+ annotations:
+ kubernetes.io/ingress.class: nginx
+ nginx.ingress.kubernetes.io/enable-modsecurity: "true"
+ nginx.ingress.kubernetes.io/modsecurity-snippet: |
+ SecRuleEngine On
+ SecAuditLog /var/log/modsec_audit.log
+ SecRule REQUEST_HEADERS:X-Azure-FDID "!@eq xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" "log,deny,id:107,status:403,msg:\'Traffic incoming from a different Frontdoor\'"
+spec:
+ #section omitted on purpose
+```
+++
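If your origin is a custom application that isn't covered by the platform examples above, you can apply the same check in application code. The following Flask sketch is purely illustrative (Flask and the handler names are assumptions, not part of Front Door's guidance); replace the placeholder value with your own Front Door identifier.

```python
# Illustrative sketch: reject requests whose X-Azure-FDID header doesn't match
# your Front Door profile's identifier. The identifier below is a placeholder.
from flask import Flask, request, abort

EXPECTED_FRONT_DOOR_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

app = Flask(__name__)

@app.before_request
def require_front_door():
    # Front Door adds X-Azure-FDID to every request it forwards to the origin.
    if request.headers.get("X-Azure-FDID") != EXPECTED_FRONT_DOOR_ID:
        abort(403)

@app.route("/")
def index():
    return "Served via Azure Front Door"
```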
+## Next steps
+
+- Learn how to configure a [WAF profile on Front Door](front-door-waf.md).
+- Learn how to [create a Front Door](quickstart-create-front-door.md).
+- Learn [how Front Door works](front-door-routing-architecture.md).
frontdoor Quickstart Create Front Door Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/quickstart-create-front-door-terraform.md
+
+ Title: 'Quickstart: Create an Azure Front Door Service using Terraform'
+description: This quickstart describes how to create an Azure Front Door Service using Terraform.
+
+documentationcenter:
++ Last updated : 10/25/2022+++
+ na
++
+# Create a Front Door (classic) using Terraform
+
+This quickstart describes how to use Terraform to create a Front Door (classic) profile to set up high availability for a web endpoint.
+
+The steps in this article were tested with the following Terraform and Terraform provider versions:
+
+- [Terraform v1.3.2](https://releases.hashicorp.com/terraform/)
+- [AzureRM Provider v.3.27.0](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
+
+## Prerequisites
++
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+1. Create a directory in which to test the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ ```terraform
+ # Configure the Azure provider
+ terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = "~> 3.27.0"
+ }
+
+ random = {
+ source = "hashicorp/random"
+ }
+ }
+
+ required_version = ">= 1.1.0"
+ }
+
+ provider "azurerm" {
+ features {}
+ }
+ ```
+
+1. Create a file named `resource-group.tf` and insert the following code:
+
+ ```terraform
+ resource "azurerm_resource_group" "my_resource_group" {
+ name = var.resource_group_name
+ location = var.location
+ }
+ ```
+
+1. Create a file named `front-door.tf` and insert the following code:
+
+ ```terraform
+ locals {
+ front_door_name = "afd-${lower(random_id.front_door_name.hex)}"
+ front_door_frontend_endpoint_name = "frontEndEndpoint"
+ front_door_load_balancing_settings_name = "loadBalancingSettings"
+ front_door_health_probe_settings_name = "healthProbeSettings"
+ front_door_routing_rule_name = "routingRule"
+ front_door_backend_pool_name = "backendPool"
+ }
+
+ resource "azurerm_frontdoor" "my_front_door" {
+ name = local.front_door_name
+ resource_group_name = azurerm_resource_group.my_resource_group.name
+
+ frontend_endpoint {
+ name = local.front_door_frontend_endpoint_name
+ host_name = "${local.front_door_name}.azurefd.net"
+ session_affinity_enabled = false
+ }
+
+ backend_pool_load_balancing {
+ name = local.front_door_load_balancing_settings_name
+ sample_size = 4
+ successful_samples_required = 2
+ }
+
+ backend_pool_health_probe {
+ name = local.front_door_health_probe_settings_name
+ path = "/"
+ protocol = "Http"
+ interval_in_seconds = 120
+ }
+
+ backend_pool {
+ name = local.front_door_backend_pool_name
+ backend {
+ host_header = var.backend_address
+ address = var.backend_address
+ http_port = 80
+ https_port = 443
+ weight = 50
+ priority = 1
+ }
+
+ load_balancing_name = local.front_door_load_balancing_settings_name
+ health_probe_name = local.front_door_health_probe_settings_name
+ }
+
+ routing_rule {
+ name = local.front_door_routing_rule_name
+ accepted_protocols = ["Http", "Https"]
+ patterns_to_match = ["/*"]
+ frontend_endpoints = [local.front_door_frontend_endpoint_name]
+ forwarding_configuration {
+ forwarding_protocol = "MatchRequest"
+ backend_pool_name = local.front_door_backend_pool_name
+ }
+ }
+ }
+ ```
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ ```terraform
+ variable "location" {
+ type = string
+ default = "westus2"
+ }
+
+ variable "resource_group_name" {
+ type = string
+ default = "FrontDoor"
+ }
+
+ variable "backend_address" {
+ type = string
+ }
+
+ resource "random_id" "front_door_name" {
+ byte_length = 8
+ }
+ ```
+
+1. Create a file named `terraform.tfvars` and insert the following code, being sure to update the value to your own backend hostname:
+
+ ```terraform
+ backend_address = "<your backend hostname>"
+ ```
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [Portal](#tab/Portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the FrontDoor resource group.
+
+1. Select the Front Door you created and you'll be able to see the endpoint hostname. Copy the hostname and paste it on to the address bar of a browser. Press enter and your request will automatically get routed to the web app.
+
+ :::image type="content" source="./media/create-front-door-bicep/front-door-bicep-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
+
+# [Azure CLI](#tab/CLI)
+
+Run the following command:
+
+```azurecli-interactive
+az resource list --resource-group FrontDoor
+```
+
+# [PowerShell](#tab/PowerShell)
+
+Run the following command:
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName FrontDoor
+```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+In this quickstart, you deployed a simple Front Door (classic) profile using Terraform. [Learn more about Azure Front Door.](front-door-overview.md)
healthcare-apis Export Dicom Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/export-dicom-files.md
+
+ Title: Export DICOM files using the export API of the DICOM service
+description: This how-to guide explains how to export DICOM files to an Azure Blob Storage account
++++ Last updated : 10/14/2022+++
+# Export DICOM Files
+
+The DICOM service provides the ability to easily export DICOM data in a file format, simplifying the process of using medical imaging in external workflows, such as AI and machine learning. DICOM studies, series, and instances can be exported in bulk to an [Azure Blob Storage account](../../storage/blobs/storage-blobs-introduction.md) using the export API. DICOM data that is exported to a storage account is exported as `.dcm` files in a folder structure that organizes instances by `StudyInstanceUID` and `SeriesInstanceUID`.
+
+There are three steps to exporting data from the DICOM service:
+
+- Enable a system assigned managed identity for the DICOM service.
+- Configure a new or existing storage account and give permission to the system managed identity.
+- Use the export API to create a new export job to export the data.
+
+## Enable managed identity for the DICOM service
+
+The first step to export data from the DICOM service is to enable a system managed identity. This managed identity is used to authenticate the DICOM service and give permission to the storage account used as the destination for export. For more information about managed identities in Azure, see [About managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
+
+1. In the Azure portal, browse to the DICOM service that you want to export from and select **Identity**.
++
+2. Set the **Status** option to **On**, and then select **Save**.
++
+3. Select **Yes** in the confirmation dialog that appears.
++
+It will take a few minutes to create the system managed identity. When the system identity has been enabled, an **Object (principal) ID** will be displayed.
+
+## Give storage account permissions to the system managed identity
+
+The system managed identity will need **Storage Blob Data Contributor** permission to write data to the destination storage account.
+
+1. Under **Permissions** select **Azure role assignments**.
+++
+2. Select **Add role assignment**. On the **Add role assignment** panel, make the following selections:
+ * Under **Scope**, select **Storage**.
+ * Under **Resource**, select the destination storage account for the export operation.
+ * Under **Role**, select **Storage Blob Data Contributor**.
++
+3. Select **Save** to add the permission to the system managed identity.
+
+## Use the export API
+
+The export API exposes one `POST` endpoint for exporting data.
+
+```
+POST <dicom-service-url>/<version>/export
+```
+
+Given a *source*, the set of data to be exported, and a *destination*, the location to which data will be exported, the endpoint returns a reference to a new, long-running export operation. The duration of this operation depends on the volume of data to be exported. See [Operation Status](#operation-status) below for more details about monitoring progress of export operations.
+
+Any errors encountered while attempting to export will be recorded in an error log. See [Errors](#errors) below for details.
+
+## Request
+
+The request body consists of the export source and destination.
+
+```json
+{
+ "source": {
+ "type": "identifiers",
+ "settings": {
+ "values": [
+ "..."
+ ]
+ }
+ },
+ "destination": {
+ "type": "azureblob",
+ "settings": {
+ "setting": "<value>"
+ }
+ }
+}
+```
+
+### Source settings
+
+The only setting is the list of identifiers to export.
+
+| Property | Required | Default | Description |
+| :- | :- | : | :- |
+| `Values` | Yes | | A list of one or more DICOM studies, series, and/or SOP instances identifiers in the format of `"<StudyInstanceUID>[/<SeriesInstanceUID>[/<SOPInstanceUID>]]"`. |
+
+### Destination Settings
+
+The connection to the Azure Blob storage account is specified with a `BlobContainerUri`.
+
+| Property | Required | Default | Description |
+| :- | :- | : | :- |
+| `BlobContainerUri` | No | `""` | The complete URI for the blob container. |
+| `UseManagedIdentity` | Yes | `false` | A required flag indicating whether managed identity should be used to authenticate to the blob container. |
+
+### Example
+
+The below example requests the export of the following DICOM resources to the blob container named `export` in the storage account named `dicomexport`:
+- All instances within the study whose `StudyInstanceUID` is `1.2.3`.
+- All instances within the series whose `StudyInstanceUID` is `12.3` and `SeriesInstanceUID` is `4.5.678`.
+- The instance whose `StudyInstanceUID` is `123.456`, `SeriesInstanceUID` is `7.8`, and `SOPInstanceUID` is `9.1011.12`.
+
+```http
+POST /export HTTP/1.1
+Accept: */*
+Content-Type: application/json
+{
+ "sources": {
+ "type": "identifiers",
+ "settings": {
+ "values": [
+ "1.2.3",
+ "12.3/4.5.678",
+ "123.456/7.8/9.1011.12"
+ ]
+ }
+ },
+ "destination": {
+ "type": "azureblob",
+ "settings": {
+ "blobContainerUri": "https://dicomexport.blob.core.windows.net/export",
+ "UseManagedIdentity": true
+ }
+ }
+}
+```
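+
+For callers that prefer a programmatic request over raw HTTP, the following Python sketch submits the same export request with the `requests` library. It's an illustration only: the service URL is a placeholder, and the use of `DefaultAzureCredential` with the `https://dicom.healthcareapis.azure.com/.default` scope is an assumption; adapt both to your environment.
+
+```python
+# Illustrative sketch only: submit an export request to the DICOM service over REST.
+import requests
+from azure.identity import DefaultAzureCredential
+
+service_url = "https://example-dicom.dicom.azurehealthcareapis.com"  # placeholder
+api_version = "v1"
+
+# Assumption: acquire a token for the DICOM service audience.
+token = DefaultAzureCredential().get_token("https://dicom.healthcareapis.azure.com/.default").token
+
+body = {
+    "source": {
+        "type": "identifiers",
+        "settings": {"values": ["1.2.3", "12.3/4.5.678", "123.456/7.8/9.1011.12"]},
+    },
+    "destination": {
+        "type": "azureblob",
+        "settings": {
+            "blobContainerUri": "https://dicomexport.blob.core.windows.net/export",
+            "useManagedIdentity": True,
+        },
+    },
+}
+
+response = requests.post(
+    f"{service_url}/{api_version}/export",
+    json=body,
+    headers={"Authorization": f"Bearer {token}"},
+)
+response.raise_for_status()
+operation = response.json()  # contains "id" and "href" for the long-running operation
+```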
+
+## Response
+
+The export API returns a `202` status code when an export operation is started successfully. The body of the response contains a reference to the operation, while the value of the `Location` header is the URL for the export operation's status (the same as `href` in the body).
+
+Inside the destination container, the exported DCM files use the following path format: `<operation id>/results/<study>/<series>/<sop instance>.dcm`
+
+```http
+HTTP/1.1 202 Accepted
+Content-Type: application/json
+{
+ "id": "df1ff476b83a4a3eaf11b1eac2e5ac56",
+ "href": "https://example-dicom.dicom.azurehealthcareapis.com/v1/operations/df1ff476b83a4a3eaf11b1eac2e5ac56"
+}
+```
+
+### Operation status
+The above `href` URL can be polled for the current status of the export operation until completion. Once the job has reached a terminal state, the API will return a 200 status code instead of 202, and the value of its status property will be updated accordingly.
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "operationId": "df1ff476b83a4a3eaf11b1eac2e5ac56",
+ "type": "export",
+ "createdTime": "2022-09-08T16:40:36.2627618Z",
+ "lastUpdatedTime": "2022-09-08T16:41:01.2776644Z",
+ "status": "completed",
+ "results": {
+ "errorHref": "<container uri>/4853cda8c05c44e497d2bc071f8e92c4/errors.log",
+ "exported": 1000,
+ "skipped": 3
+ }
+}
+```
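+
+As a convenience, the polling loop can be wrapped in a small helper. The sketch below is illustrative only; it reuses the bearer token from the request above and relies on the documented behavior that the operation URL returns `202` while running and `200` once a terminal state is reached.
+
+```python
+# Illustrative sketch: poll the export operation until it reaches a terminal state.
+import time
+import requests
+
+def wait_for_export(operation_href: str, token: str, interval_seconds: int = 10) -> dict:
+    """Return the final operation document once the service responds with 200."""
+    while True:
+        response = requests.get(operation_href, headers={"Authorization": f"Bearer {token}"})
+        response.raise_for_status()
+        if response.status_code == 200:
+            return response.json()
+        time.sleep(interval_seconds)
+```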
+
+## Errors
+
+If there are any user errors when exporting a DICOM file, then the file is skipped and its corresponding error is logged. This error log is also exported alongside the DICOM files and can be reviewed by the caller. The error log can be found at `<export blob container uri>/<operation ID>/errors.log`.
+
+### Format
+
+Each line of the error log is a JSON object with the following properties. A given error identifier may appear multiple times in the log as each update to the log is processed *at least once*.
+
+| Property | Description |
+| | -- |
+| `Timestamp` | The date and time when the error occurred. |
+| `Identifier` | The identifier for the DICOM study, series, or SOP instance in the format of `"<study instance UID>[/<series instance UID>[/<SOP instance UID>]]"`. |
+| `Error` | The detailed error message. |
+
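+As an illustration, because every line is an independent JSON object and an identifier can appear more than once, a small de-duplicating reader is a natural way to consume the log. The sketch below assumes the log has already been downloaded locally; the file name is only an example.
+
+```python
+# Illustrative sketch: parse a downloaded errors.log where each line is a JSON object.
+import json
+
+def read_export_errors(path: str) -> dict:
+    """Return the most recent error entry per identifier."""
+    errors_by_identifier = {}
+    with open(path, encoding="utf-8") as log:
+        for line in log:
+            line = line.strip()
+            if not line:
+                continue
+            entry = json.loads(line)
+            # Later entries for the same identifier overwrite earlier ones.
+            errors_by_identifier[entry["Identifier"]] = entry
+    return errors_by_identifier
+
+for identifier, entry in read_export_errors("errors.log").items():
+    print(identifier, entry["Timestamp"], entry["Error"])
+```
+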
+## Next steps
+
+>[!div class="nextstepaction"]
+>[Overview of the DICOM service](dicom-services-overview.md)
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
Title: How to enable the MedTech service diagnostic settings - Azure Health Data Services
-description: This article explains how to configure the MedTech service diagnostic settings.
+description: This article explains how to enable the MedTech service diagnostic settings.
Previously updated : 10/13/2022 Last updated : 10/24/2022
In this article, you'll learn how to enable the diagnostic settings for the MedT
:::image type="content" source="media/iot-diagnostic-settings/add-diagnostic-settings.png" alt-text="Screenshot of select the + Add diagnostic setting." lightbox="media/iot-diagnostic-settings/add-diagnostic-settings.png":::
-5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
+5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
+
+ :::image type="content" source="media/iot-diagnostic-settings/select-all-logs-and-metrics.png" alt-text="Screenshot of diagnostic setting and required fields." lightbox="media/iot-diagnostic-settings/select-all-logs-and-metrics.png":::
1. Enter a display name in the **Diagnostic setting name** box. For this example, we'll name it **MedTech_service_All_Logs_and_Metrics**. You'll enter a display name of your own choosing.
In this article, you'll learn how to enable the diagnostic settings for the MedT
|Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.| 5. Select the **Save** option to save your diagnostic setting selections.
-
- :::image type="content" source="media/iot-diagnostic-settings/select-all-logs-and-metrics.png" alt-text="Screenshot of diagnostic setting and required fields." lightbox="media/iot-diagnostic-settings/select-all-logs-and-metrics.png":::
6. Once you've selected the **Save** option, the page will display a message that the diagnostic setting for your MedTech service has been saved successfully.
In this article, you'll learn how to enable the diagnostic settings for the MedT
To view the frequently asked questions (FAQs) about the MedTech service, see
->[!div class="nextstepaction"]
->[MedTech service FAQs](iot-connector-faqs.md)
+> [!div class="nextstepaction"]
+> [MedTech service FAQs](iot-connector-faqs.md)
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
description: With this learning path, get started with Azure Standard Load Balan
documentationcenter: na - na
For an internal load balancer frontend, add a **zones** parameter to the interna
## Non-Zonal
-Load Balancers can also be created in a non-zonal configuration by use of a "no-zone" frontend (a public IP or public IP prefix in the case of a public load balancer; a private IP in the case of an internal load balancer). This option does not give a guarantee of redundancy. Note that all public IP addresses that are [upgraded](../virtual-network/ip-services/public-ip-upgrade-portal.md) will be of type "no-zone".
+Load Balancers can also be created in a non-zonal configuration by using a "no-zone" frontend. In these scenarios, a public load balancer would use a public IP or public IP prefix, and an internal load balancer would use a private IP address. This option doesn't give a guarantee of redundancy.
+
+>[!NOTE]
+>All public IP addresses that are upgraded from Basic SKU to Standard SKU will be of type "no-zone". Learn how to [Upgrade a public IP address in the Azure portal](../virtual-network/ip-services/public-ip-upgrade-portal.md).
## <a name="design"></a> Design considerations
Now that you understand the zone-related properties for Standard Load Balancer,
### Tolerance to zone failure - A **zone redundant** frontend can serve a zonal resource in any zone with a single IP address. The IP can survive one or more zone failures as long as at least one zone remains healthy within the region.-- A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the deployment in your zone goes down, your load balancer will not survive this failure.
+- A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the deployment in your zone goes down, your load balancer won't survive this failure.
-Members in the backend pool of a load balancer are normally associated with a single zone (e.g. zonal virtual machines). A common design for production workloads would be to have multiple zonal resources (e.g. virtual machines from zone 1, 2, and 3) in the backend of a load balancer with a zone-redundant frontend.
+Members in the backend pool of a load balancer are normally associated with a single zone such as with zonal virtual machines. A common design for production workloads would be to have multiple zonal resources. For example, placing virtual machines from zone 1, 2, and 3 in the backend of a load balancer with a zone-redundant frontend meets this design principle.
### Multiple frontends
-Using multiple frontends allow you to load balance traffic on more than one port and/or IP address. When designing your architecture, it is important to account for the way zone redundancy and multiple frontends can interact. Note that if the goal is to always have every frontend be resilient to failure, then all IP addresses assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single zone, then every IP address for that set must be associated with that specific zone. It is not required to have a load balancer for each zone; rather, each zonal frontend (or set of zonal frontends) could be associated with virtual machines in the backend pool that are part of that specific availability zone.
+Using multiple frontends allows you to load balance traffic on more than one port and/or IP address. When designing your architecture, ensure you account for how zone redundancy interacts with multiple frontends. If your goal is to always have every frontend resilient to failure, then all IP addresses assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single zone, then every IP address for that set must be associated with that specific zone. A load balancer isn't required in each zone. Instead, each zonal frontend, or set of zonal frontends, could be associated with virtual machines in the backend pool that are part of that specific availability zone.
### Transition between regional zonal models
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs (e.g., used for load balancer frontends) would remain non-zonal. In order to ensure your architecture can take advantage of the new zones, it is recommended that new frontend IPs be created, and the appropriate rules and configurations be replicated to utilize these new IPs.
+In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs, such as those used for load balancer frontends, would remain non-zonal. To ensure your architecture can take advantage of the new zones, creation of new frontend IPs is recommended. Once created, replicate the appropriate rules and configurations to utilize these new IPs.
### Control vs data plane implications
logic-apps Deploy Single Tenant Logic Apps Private Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/deploy-single-tenant-logic-apps-private-storage-account.md
ms.suite: integration
Previously updated : 09/08/2022 Last updated : 10/18/2022 # As a developer, I want to deploy Standard logic apps to Azure storage accounts that use private endpoints.
For more information, review the following documentation:
## Deploy using Azure portal or Visual Studio Code
-This deployment method requires that temporary public access to your storage account. If you can't enable public access due to your organization's policies, you can still deploy your logic app to a private storage account. However, you have to [deploy with an Azure Resource Manager template (ARM template)](#deploy-arm-template), which is described in a later section.
+This deployment method requires temporary public access to your storage account. If you can't enable public access due to your organization's policies, you can still deploy your logic app to a private storage account. However, you have to [deploy with an Azure Resource Manager template (ARM template)](#deploy-arm-template), which is described in a later section.
+
+> [!NOTE]
+> An exception to the previous rule is that you can use the Azure portal to deploy your logic app to an App Service Environment,
+> even if the storage account is protected with a private endpoint. However, you'll need connectivity between the
+> subnet used by the App Service Environment and the subnet used by the storage account's private endpoint.
1. Create different private endpoints for each of the Table, Queue, Blob, and File storage services.
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-access-data-batch-endpoints-jobs.md
+
+ Title: "Accessing data from batch endpoints jobs"
+
+description: Learn how to access data from different sources in batch endpoints jobs.
++++++ Last updated : 10/10/2022++++
+# Accessing data from batch endpoints jobs
+
+Batch endpoints can be used to perform batch scoring on large amounts of data. Such data can be stored in different locations. In this article, we'll cover the different locations from which batch endpoints can read data.
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Specifically, we're using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## Supported data inputs
+
+Batch endpoints support reading files or folders that are located in different locations:
+
+* Azure Machine Learning Data Stores. The following stores are supported:
+ * Azure Blob Storage
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+* Azure Machine Learning Data Assets. The following types are supported:
+ * Data assets of type Folder (`uri_folder`).
+ * Data assets of type File (`uri_file`).
+ * Datasets of type `FileDataset` (Deprecated).
+* Azure Storage Accounts. The following storage containers are supported:
+ * Azure Data Lake Storage Gen1
+ * Azure Data Lake Storage Gen2
+ * Azure Blob Storage
+
+> [!TIP]
+> Local data folders/files can be used when executing batch endpoints from the Azure ML CLI or Azure ML SDK for Python. However, that operation will result in the local data being uploaded to the default Azure Machine Learning Data Store of the workspace you are working on.
+
+> [!IMPORTANT]
+> __Deprecation notice__: Datasets of type `FileDataset` (V1) are deprecated and will be retired in the future. Existing batch endpoints relying on this functionality will continue to work but batch endpoints created with GA CLIv2 (2.4.0 and newer) or GA REST API (2022-05-01 and newer) will not support V1 dataset.
++
+## Reading data from data stores
+
+We're going to first upload some data to the default data store in the Azure Machine Learning workspace and then run a batch deployment on it. Follow these steps to run a batch endpoint job using data stored in a data store:
+
+1. Let's get access to the default data store in the Azure Machine Learning workspace. If your data is in a different store, you can use that store instead. There's no requirement to use the default data store.
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ az ml workspace show --query storage_account
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ default_ds = ml_client.datastores.get_default()
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+
+1. We'll need to upload some sample data to it. This example assumes you've uploaded the sample data included in the repo folder `sdk/python/endpoints/batch/heart-classifier/data` to the folder `heart-classifier/data` in the blob storage account.
+
+1. Create a data input:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```azurecli
+ DATA_PATH="heart-disease-uci-unlabeled"
+ DATASTORE_ID=$(az ml workspace show | jq -r '.storage_account')
+ ```
+
+ > [!TIP]
+ > You can skip this step if you already know the name of the data store you want to use. Here it is used only to know the name of the default data store of the workspace.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+    from azure.ai.ml import Input
+    from azure.ai.ml.constants import AssetTypes
+    data_path = "heart-classifier/data"
+    input = Input(type=AssetTypes.URI_FOLDER, path=f"{default_ds.id}/paths/{data_path}")
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the data store information.
+
+
+ > [!NOTE]
+    > The data store ID would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/microsoft.storage/storageaccounts/<storage-account-name>`.
++
+1. Run the deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASTORE_ID/paths/$DATA_PATH)
+ ```
+
+ > [!TIP]
+ > You can also use `--input azureml:/datastores/<data_store_name>/paths/<data_path>` as a way to indicate the input.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __POST__
+
+ ```http
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "azureml://subscriptions/<subscription>/resourcegroups/<resource-group>/providers/microsoft.storage/storageaccounts/<storage-account-name>/paths/<data_path>"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from a data asset
+
+Follow these steps to run a batch endpoint job using data stored in a registered data asset in Azure Machine Learning:
+
+> [!WARNING]
+> Data assets of type Table (`MLTable`) aren't currently supported.
+
+1. Let's create the data asset first. This data asset consists of a folder with multiple CSV files that we want to process in parallel using batch endpoints. You can skip this step if your data is already registered as a data asset.
+
+ # [Azure ML CLI](#tab/cli)
+
+ Create a data asset definition in `YAML`:
+
+ __heart-dataset-unlabeled.yml__
+ ```yaml
+ $schema: https://azuremlschemas.azureedge.net/latest/data.schema.json
+ name: heart-dataset-unlabeled
+ description: An unlabeled dataset for heart classification.
+ type: uri_folder
+ path: heart-classifier-mlflow/data
+ ```
+
+ Then, create the data asset:
+
+ ```bash
+ az ml data create -f heart-dataset-unlabeled.yml
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ data_path = "heart-classifier-mlflow/data"
+ dataset_name = "heart-dataset-unlabeled"
+
+ heart_dataset_unlabeled = Data(
+ path=data_path,
+ type=AssetTypes.URI_FOLDER,
+ description="An unlabeled dataset for heart classification",
+ name=dataset_name,
+ )
+ ml_client.data.create_or_update(heart_dataset_unlabeled)
+ ```
+
+ # [REST](#tab/rest)
+
+ Use the Azure ML CLI, Azure ML SDK for Python, or Studio to get the location (region), workspace, and data asset name and version. You will need them later.
++
+1. Create a data input:
+
+ # [Azure ML CLI](#tab/cli)
+
+    ```azurecli
+    DATASET_ID=$(az ml data show -n heart-dataset-unlabeled --label latest --query id)
+    ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path=heart_dataset_unlabeled.id)
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
+
+
+
+ > [!NOTE]
+    > The data store ID would look like `/subscriptions/<subscription>/resourcegroups/<resource-group>/providers/microsoft.storage/storageaccounts/<storage-account-name>`.
++
+1. Run the deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input $DATASET_ID)
+ ```
+
+ > [!TIP]
+ > You can also use `--input azureml:/<dataasset_name>@latest` as a way to indicate the input.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __POST__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "azureml://locations/<location>/workspaces/<workspace>/data/<dataset_name>/versions/labels/latest"
+ }
+ }
+ }
+ }
+ ```
+
+## Reading data from Azure Storage Accounts
+
+Azure Machine Learning batch endpoints can read data from cloud locations in Azure Storage Accounts. Both public and private cloud locations are supported. Use the following steps to run a batch endpoint job using data stored in a storage account:
+
+1. Create a data input:
+
+ # [Azure ML CLI](#tab/cli)
+
+ This step isn't required.
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ input = Input(type=AssetTypes.URI_FOLDER, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ ```
+
+ If your data is a file, change `type=AssetTypes.URI_FILE`:
+
+ ```python
+ input = Input(type=AssetTypes.URI_FILE, path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv")
+ ```
+
+ # [REST](#tab/rest)
+
+ This step isn't required.
++
+1. Run the deployment:
+
+ # [Azure ML CLI](#tab/cli)
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_folder --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data)
+ ```
+
+ If your data is a file, change `--input-type uri_file`:
+
+ ```bash
+    INVOKE_RESPONSE=$(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input-type uri_file --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv)
+ ```
+
+ # [Azure ML SDK for Python](#tab/sdk)
+
+ ```python
+ job = ml_client.batch_endpoints.invoke(
+ endpoint_name=endpoint.name,
+ input=input,
+ )
+ ```
+
+ # [REST](#tab/rest)
+
+ __POST__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
+ }
+ }
+ }
+ }
+ ```
+
+ If your data is a file, change `JobInputType`:
+
+ __POST__
+
+ ```json
+ {
+ "properties": {
+ "InputData": {
+ "mnistinput": {
+ "JobInputType" : "UriFolder",
+ "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv"
+ }
+ }
+ }
+ }
+ ```
+
+
+## Security considerations when reading data
+
+Batch endpoints ensure that only authorized users are able to invoke batch deployments and generate jobs. However, depending on how the input data is configured, other credentials may be used to read the underlying data. Use the following table to understand which credentials are used and any additional requirements.
+
+| Data input type | Credential in store | Credentials used | Access granted by |
+||||-|
+| Data store | Yes | Data store's credentials in the workspace | Credentials |
+| Data store | No | Identity of the job | Depends on type |
+| Data asset | Yes | Data store's credentials in the workspace | Credentials |
+| Data asset | No | Identity of the job + Managed identity of the compute cluster | Depends on store |
+| Azure Blob Storage | Not applicable | Identity of the job + Managed identity of the compute cluster | RBAC |
+| Azure Data Lake Storage Gen1 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX |
+| Azure Data Lake Storage Gen2 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
+
+The managed identity of the compute cluster is used for mounting and configuring the data store. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../../storage/blobs/assign-azure-role-data-access.md).
+
+> [!NOTE]
+> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](../how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same endpoint) can be configured to run on different clusters, so you can administer the permissions accordingly depending on your requirements.
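+
+For example, a compute cluster that runs batch deployment jobs under a user-assigned managed identity might be created with the Python SDK v2 roughly as follows. This is a hedged sketch rather than the linked instructions: the cluster name, VM size, and identity resource ID are placeholders, and the entity classes are assumed from `azure.ai.ml`.
+
+```python
+# Hedged sketch: create a compute cluster with a user-assigned managed identity (placeholders throughout).
+from azure.ai.ml.constants import ManagedServiceIdentityType
+from azure.ai.ml.entities import AmlCompute, IdentityConfiguration, ManagedIdentityConfiguration
+
+identity = IdentityConfiguration(
+    type=ManagedServiceIdentityType.USER_ASSIGNED,
+    user_assigned_identities=[
+        ManagedIdentityConfiguration(
+            resource_id="/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
+        )
+    ],
+)
+
+cluster = AmlCompute(
+    name="batch-cluster",    # placeholder cluster name
+    size="STANDARD_DS3_v2",  # placeholder VM size
+    min_instances=0,
+    max_instances=2,
+    identity=identity,
+)
+# `ml_client` is the MLClient instance used throughout this article.
+ml_client.compute.begin_create_or_update(cluster).result()
+```
+
+Once the cluster has the identity, grant that identity the storage access described in the table above (for example, **Storage Blob Data Reader** on the storage account).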
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
+
+ Title: "Authentication on batch endpoints"
+
+description: Learn how authentication works on Batch Endpoints.
++++++ Last updated : 10/10/2022++++
+# Authentication on batch endpoints
+
+Batch endpoints support Azure Active Directory authentication, or `aad_token`. That means that in order to invoke a batch endpoint, the user must present a valid Azure Active Directory authentication token to the batch endpoint URI. Authorization is enforced at the endpoint level. The following article explains how to correctly interact with batch endpoints and the security requirements for it.
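+
+To make the idea of "presenting a valid token" concrete, the following Python sketch acquires an Azure Active Directory token and calls a batch endpoint scoring URI directly over REST. This is a hedged illustration rather than the recommended client: the scoring URI and input name are placeholders, the `https://ml.azure.com/.default` scope is an assumption, and the supported invocation methods are shown in the sections that follow.
+
+```python
+# Hedged sketch: invoke a batch endpoint over REST with an Azure AD bearer token.
+import requests
+from azure.identity import DefaultAzureCredential
+
+scoring_uri = "https://<endpoint-name>.<region>.inference.ml.azure.com/jobs"  # placeholder
+
+# Assumption: the Azure Machine Learning resource scope for token acquisition.
+token = DefaultAzureCredential().get_token("https://ml.azure.com/.default").token
+
+body = {
+    "properties": {
+        "InputData": {
+            "heartDataset": {  # arbitrary input name used for illustration
+                "JobInputType": "UriFolder",
+                "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data",
+            }
+        }
+    }
+}
+
+response = requests.post(scoring_uri, json=body, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+print(response.json())  # details of the batch job created under the token's identity
+```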
+
+## Prerequisites
+
+* This example assumes that you have a model correctly deployed as a batch endpoint. Particularly, we are using the *heart condition classifier* created in the tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+## How authentication works
+
+To invoke a batch endpoint, the user must present a valid Azure Active Directory token representing a security principal. This principal can be a user principal or a service principal. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. The identity needs the following permissions in order to successfully create a job:
+
+> [!div class="checklist"]
+> * Read batch endpoints/deployments.
+> * Create jobs in batch inference endpoints/deployment.
+> * Create experiments/runs.
+> * Read and write from/to data stores.
+> * List datastore secrets.
+
+You can either use one of the [built-in security roles](../../role-based-access-control/built-in-roles.md) or create a new one. In any case, the identity used to invoke the endpoints must be granted the permissions explicitly. See [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md) for instructions to assign them.
+
+> [!IMPORTANT]
+> The identity used for invoking a batch endpoint may not be used to read the underlying data depending on how the data store is configured. Please see [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data) for more details.
+
+## How to run jobs using different types of credentials
+
+The following examples show different ways to start batch deployment jobs using different types of credentials:
+
+### Running jobs using user's credentials
+
+# [Azure ML CLI](#tab/cli)
+
+Use the Azure CLI to log in using either interactive or device code authentication:
+
+```azurecli
+az login
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+Use the Azure ML SDK for Python to log in using either interactive or device authentication:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import InteractiveBrowserCredential
+
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(InteractiveBrowserCredential(), subscription_id, resource_group, workspace)
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ )
+```
+
+# [Azure ML studio](#tab/studio)
+
+Jobs started from studio always run under the identity of the user signed in to the portal.
+++
+### Running jobs using a service principal
+
+# [Azure ML CLI](#tab/cli)
+
+For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+```bash
+az login --service-principal -u <app-id> -p <password-or-cert> --tenant <tenant>
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```azurecli
+az ml batch-endpoint invoke --name $ENDPOINT_NAME --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
+```
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+To authenticate using a service principal, indicate the tenant ID, client ID and client secret of the service principal using environment variables as demonstrated here:
+
+```python
+import os
+
+from azure.ai.ml import MLClient
+from azure.identity import EnvironmentCredential
+
+os.environ["AZURE_TENANT_ID"] = "<TENANT_ID>"
+os.environ["AZURE_CLIENT_ID"] = "<CLIENT_ID>"
+os.environ["AZURE_CLIENT_SECRET"] = "<CLIENT_SECRET>"
+
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(EnvironmentCredential(), subscription_id, resource_group, workspace)
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ )
+```
+
+# [Azure ML studio](#tab/studio)
+
+You can't run jobs using a service principal from studio.
+++
+### Running jobs using a managed identity
+
+# [Azure ML CLI](#tab/cli)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Signing in with the resource's identity is done through the `--identity` flag.
+
+```bash
+az login --identity
+```
+
+For more details see [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+# [Azure ML SDK for Python](#tab/sdk)
+
+On resources configured for managed identities for Azure resources, you can sign in using the managed identity. Use the resource ID along with the `ManagedIdentityCredential` object as demonstrated in the following example:
+
+```python
+from azure.ai.ml import MLClient
+from azure.identity import ManagedIdentityCredential
+
+subscription_id = "<subscription>"
+resource_group = "<resource-group>"
+workspace = "<workspace>"
+
+ml_client = MLClient(ManagedIdentityCredential("<resource-id>"), subscription_id, resource_group, workspace)
+```
+
+Once authenticated, use the following command to run a batch deployment job:
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name,
+ input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data")
+ )
+```
+
+# [Azure ML studio](#tab/studio)
+
+You can't run jobs using a managed identity from studio.
+++
+## Next steps
+
+* [Network isolation in batch endpoints](how-to-secure-batch-endpoint.md)
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md
Title: "Customize outputs in batch deployments"
-description: Learn how authentication works on Batch Endpoints.
+description: Learn how create deployments that generate custom outputs and files.
We need to create a scoring script that can read the input data provided by the
3. Appends the predictions to a `pandas.DataFrame` along with the input data. 4. Writes the data in a file named as the input file, but in `parquet` format.
-__batch_driver.py__
+__batch_driver_parquet.py__
```python import os
def run(mini_batch):
return mini_batch ```
-Remarks:
+__Remarks:__
* Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job. * The `init()` function is populating a global variable called `output_path` that can be used later to know where to write. * The `run` method returns a list of the processed files. It is required for the `run` function to return a `list` or a `pandas.DataFrame` object.
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-secure-batch-endpoint.md
+
+ Title: "Network isolation in batch endpoints"
+
+description: Learn how to deploy Batch Endpoints in private networks with isolation.
++++++ Last updated : 10/10/2022++++
+# Network isolation in batch endpoints
+
+When deploying a machine learning model to a batch endpoint, you can secure its communication using private networks. This article explains the requirements to use batch endpoints in an environment secured by private networks.
+
+## Prerequisites
+
+* A secure Azure Machine Learning workspace. For more details about how to achieve it, read [Create a secure workspace](../tutorial-create-secure-workspace.md).
+* Ensure blob, file, queue, and table private endpoints are configured for the storage accounts as explained at [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts). Batch deployments require all four to work properly.
+
+## Securing batch endpoints
+
+All batch endpoints created inside a secure workspace are deployed as private batch endpoints by default. No further configuration is required.
+
+> [!IMPORTANT]
+> When working on a private link-enabled workspace, batch endpoints can be created and managed using Azure Machine Learning studio. However, they can't be invoked from the UI in studio. Please use the Azure ML CLI v2 instead for job creation. For more details about how to use it, see [Invoke the batch endpoint to start a batch scoring job](../how-to-use-batch-endpoint.md#invoke-the-batch-endpoint-to-start-a-batch-scoring-job).
+
+The following diagram shows how the networking looks for batch endpoints when they're deployed in a private workspace:
++
+## Securing batch deployment jobs
+
+Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too.
+
+1. Create an Azure Machine Learning [compute cluster in the virtual network](../how-to-secure-training-vnet.md#compute-cluster).
+1. If your compute instance uses a public IP address, you must [Allow inbound communication](../how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+
+ > [!TIP]
+ > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
+
+1. An extra NSG may be required depending on your case. Please see [Limitations for Azure Machine Learning compute cluster](../how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+
+For more details about how to configure compute cluster networking, read [Secure an Azure Machine Learning training environment with virtual networks](../how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+
+## Using two-networks architecture
+
+There are cases where the input data is not in the same network as the Azure Machine Learning resources. In those cases, your Azure Machine Learning workspace may need to interact with more than one VNet. You can achieve this configuration by adding an extra set of private endpoints to the VNet where the rest of the resources are located.
+
+The following diagram shows the high-level design:
++
+### Considerations
+
+Keep the following considerations in mind when using such an architecture:
+
+* Put the second set of private endpoints in a different resource group and hence in different private DNS zones. This prevents a name resolution conflict between the set of IPs used for the workload and the ones used by the client VNets.
+* For your storage accounts, add 4 private endpoints in each VNet for blob, file, queue, and table as explained at [Secure Azure storage accounts](../how-to-secure-workspace-vnet.md#secure-azure-storage-accounts).
++
+## Recommended read
+
+* [Secure Azure Machine Learning workspace resources using virtual networks (VNets)](../how-to-network-security-overview.md)
machine-learning How To Use Event Grid Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-use-event-grid-batch.md
The workflow will work in the following way:
## Authenticating against batch endpoints
-Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization and hence the request made to the APIs require a proper authentication handling.
+Azure Logic Apps can invoke the REST APIs of batch endpoints by using the [HTTP](../../connectors/connectors-native-http.md) activity. Batch endpoints support Azure Active Directory for authorization, and hence requests made to the APIs require proper authentication handling.
We recommend to using a service principal for authentication and interaction with batch endpoints in this scenario.
We recommend to using a service principal for authentication and interaction wit
## Enabling data access
-We will be using cloud URIs provided by event grid to indicate the input data to send to the deployment job. When reading data from cloud locations, batch deployments use the identity of the compute to gain access instead of the identity used to submit the job. In order to ensure the identity of the compute does have read access to the underlying data, we will need to assign to it a user assigned managed identity. Follow these steps to ensure data access:
+We will be using cloud URIs provided by Event Grid to indicate the input data to send to the deployment job. Batch deployments use the identity of the compute to mount the data. The identity of the job is used to read the data once mounted for external storage accounts. You will need to assign a user-assigned managed identity to the compute cluster to ensure it has access to mount the underlying data. Follow these steps to ensure data access:
1. Create a [managed identity resource](../../active-directory/managed-identities-azure-resources/overview.md):
We want to trigger the Logic App each time a new file is created in a given fold
:::image type="content" source="./media/how-to-use-event-grid-batch/invoke.png" alt-text="Screenshot of the invoke activity of the Logic App."::: > [!NOTE]
- > Notice that this last action will trigger the batch deployment job, but it will not wait for its completion. Logic Apps are not long running applications. If you need to wait for the job to complete, we recommend you to switch to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
+ > Notice that this last action will trigger the batch deployment job, but it will not wait for its completion. Azure Logic Apps is not designed for long-running applications. If you need to wait for the job to complete, we recommend switching to [Invoking batch endpoints from Azure Data Factory](how-to-use-batch-azure-data-factory.md).
1. Click on __Save__.
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
-+ Last updated 05/24/2022 #Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
machine-learning Concept Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-onnx.md
---+++ Last updated 10/21/2021
machine-learning Concept Prebuilt Docker Images Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-prebuilt-docker-images-inference.md
Last updated 07/14/2022 -+
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
description: Securely access Azure resources for your machine learning model dep
---+++ Last updated 04/07/2022
machine-learning How To Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-assign-roles.md
Azure Machine Learning workspaces have a five built-in roles that are available
| **Contributor** | View, create, edit, or delete (where applicable) assets in a workspace. For example, contributors can create an experiment, create or attach a compute cluster, submit a run, and deploy a web service. | | **Owner** | Full access to the workspace, including the ability to view, create, edit, or delete (where applicable) assets in a workspace. Additionally, you can change role assignments. |
+In addition, [Azure Machine Learning registries](how-to-manage-registries.md) have an **AzureML Registry User** role that can be assigned to a registry resource to grant data scientists user-level permissions. For administrator-level permissions to create or delete registries, use the **Contributor** or **Owner** role.
+
+| Role | Access level |
+| | |
+| **AzureML Registry User** | Can get registries, and read, write and delete assets within them. Cannot create new registry resources or delete them. |
+ You can combine the roles to grant different levels of access. For example, you can grant a workspace user both **AzureML Data Scientist** and **Azure ML Compute Operator** roles to permit the user to perform experiments while creating computes in a self-service manner. > [!IMPORTANT]
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
-+ Last updated 05/10/2022
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
description: Learn to scale up online endpoints. Get more CPU, memory, disk spac
---+++ Last updated 04/27/2022
machine-learning How To Convert Custom Model To Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-convert-custom-model-to-mlflow.md
Title: Convert custom models to MLflow
description: Convert custom models to MLflow model format for no code deployment with endpoints. --+++ Last updated 04/15/2022
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
description: Learn how to use Visual Studio Code to test and debug online endpoi
--+++ Last updated 11/03/2021
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
description: Learn to deploy your AutoML model as a web service that's automatic
---+++ Last updated 05/11/2022
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
--+++ Last updated 05/24/2022-
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
description: Learn how to use a custom container to use open-source servers in A
---+++ Last updated 10/13/2022
machine-learning How To Deploy Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-managed-online-endpoints.md
-+ Last updated 10/06/2022
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
description: Learn to deploy your MLflow model as a web service that's automatic
- ++ Last updated 03/31/2022 - ms.devlang: azurecli
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
description: Learn to deploy your MLflow model to the deployment targets support
- ++ Last updated 06/06/2022
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
- + Last updated 06/15/2022-
machine-learning How To Deploy With Triton https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-triton.md
Last updated 06/10/2022 ---+++ ms.devlang: azurecli
machine-learning How To Inference Server Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-server-http.md
description: Learn how to enable local development with Azure machine learning inference http server. -+
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
description: Monitor online endpoints and create alerts with Application Insight
--++ Last updated 08/29/2022
machine-learning How To Safely Rollout Managed Endpoints Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints-sdk-v2.md
description: Safe rollout for online endpoints using Python SDK v2.
---+++ Last updated 05/25/2022
machine-learning How To Safely Rollout Managed Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-managed-endpoints.md
-+ Last updated 04/29/2022
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
---+++ Last updated 10/04/2022
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
description: Explains how to use MLflow for managing experiments and runs in Azu
+ Last updated 06/08/2022
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
description: Learn how to troubleshoot some common deployment and scoring errors
---+++ Last updated 04/12/2022
machine-learning How To Use Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint.md
---+++ Last updated 05/24/2022 #Customer intent: As an ML engineer or data scientist, I want to create an endpoint to host my models for batch scoring, so that I can use the same endpoint continuously for different large datasets on-demand or on-schedule.
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoints-studio.md
-+ Last updated 08/03/2022
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
- -++ Last updated 09/07/2022
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Title: Track ML experiments and models with MLflow
description: Set up MLflow Tracking with Azure Machine Learning to log metrics and artifacts from ML models with MLflow --+++ - Last updated 04/08/2022
machine-learning How To View Online Endpoints Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-view-online-endpoints-costs.md
description: 'Learn to how view costs for a managed online endpoint in Azure Machine Learning.' --+++ Last updated 05/03/2021
machine-learning Reference Managed Online Endpoints Vm Sku List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-managed-online-endpoints-vm-sku-list.md
-- ++ Last updated 06/02/2022
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
First, set up some environment variables. In [Azure Cloud Shell](https://shell.a
```bash export AZ_RESOURCE_GROUP=database-workshop
-export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME>
+export AZ_DATABASE_NAME=demo
export AZ_LOCATION=<YOUR_AZURE_REGION> export AZ_MYSQL_AD_NON_ADMIN_USERNAME=demo-non-admin export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv)
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.
- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`. - `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
Replace the placeholders with the following values, which are used throughout th
```bash export AZ_RESOURCE_GROUP=database-workshop
-export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME>
+export AZ_DATABASE_NAME=demo
export AZ_LOCATION=<YOUR_AZURE_REGION> export AZ_MYSQL_ADMIN_USERNAME=demo export AZ_MYSQL_ADMIN_PASSWORD=<YOUR_MYSQL_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_NAME>`: The name of your MySQL server. It should be unique across Azure.
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your MySQL server, which should be unique across Azure.
- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can have the full list of available regions by entering `az account list-locations`. - `<YOUR_MYSQL_ADMIN_PASSWORD>` and `<YOUR_MYSQL_NON_ADMIN_PASSWORD>`: The password of your MySQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on). - `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
Then, run the following command to create the server:
```azurecli az mysql server create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
Next, run the following command to set the Azure AD admin user:
```azurecli az mysql server ad-admin create \ --resource-group $AZ_RESOURCE_GROUP \
- --server-name $AZ_DATABASE_NAME \
+ --server-name $AZ_DATABASE_SERVER_NAME \
--display-name $CURRENT_USERNAME \ --object-id $CURRENT_USER_OBJECTID ```
This command creates a small MySQL server and sets the Active Directory admin to
```azurecli az mysql server create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
Because you configured your local IP address at the beginning of this article, y
```azurecli az mysql server firewall-rule create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip \
+ --server $AZ_DATABASE_SERVER_NAME \
--start-ip-address $AZ_LOCAL_IP_ADDRESS \ --end-ip-address $AZ_LOCAL_IP_ADDRESS \ --output tsv
Then, use the following command to open the server's firewall to your WSL-based
```azurecli az mysql server firewall-rule create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip-wsl \
- --server $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip-wsl \
+ --server $AZ_DATABASE_SERVER_NAME \
--start-ip-address $AZ_WSL_IP_ADDRESS \ --end-ip-address $AZ_WSL_IP_ADDRESS \ --output tsv
az mysql server firewall-rule create \
### Configure a MySQL database
-The MySQL server that you created earlier is empty. Use the following command to create a new database called `demo`:
+The MySQL server that you created earlier is empty. Use the following command to create a new database.
```azurecli az mysql db create \ --resource-group $AZ_RESOURCE_GROUP \
- --name demo \
- --server-name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_NAME \
+ --server-name $AZ_DATABASE_SERVER_NAME \
--output tsv ``` ### Create a MySQL non-admin user and grant permission
-Next, create a non-admin user and grant all permissions on the `demo` database to it.
+Next, create a non-admin user and grant all permissions to the database.
> [!NOTE] > You can read more detailed information about creating MySQL users in [Create users in Azure Database for MySQL](./how-to-create-users.md).
SET aad_auth_validate_oids_in_tenant = OFF;
CREATE AADUSER '$AZ_MYSQL_AD_NON_ADMIN_USERNAME' IDENTIFIED BY '$AZ_MYSQL_AD_NON_ADMIN_USERID';
-GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%';
+GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_AD_NON_ADMIN_USERNAME'@'%';
FLUSH privileges;
EOF
Then, use the following command to run the SQL script to create the Azure AD non-admin user: ```bash
-mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql
+mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $CURRENT_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$(az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken) < create_ad_user.sql
``` Now use the following command to remove the temporary SQL script file:
cat << EOF > create_user.sql
CREATE USER '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%' IDENTIFIED BY '$AZ_MYSQL_NON_ADMIN_PASSWORD';
-GRANT ALL PRIVILEGES ON demo.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%';
+GRANT ALL PRIVILEGES ON $AZ_DATABASE_NAME.* TO '$AZ_MYSQL_NON_ADMIN_USERNAME'@'%';
FLUSH PRIVILEGES;
EOF
Then, use the following command to run the SQL script to create the Azure AD non-admin user: ```bash
-mysql -h $AZ_DATABASE_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME@$AZ_DATABASE_NAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql
+mysql -h $AZ_DATABASE_SERVER_NAME.mysql.database.azure.com --user $AZ_MYSQL_ADMIN_USERNAME@$AZ_DATABASE_SERVER_NAME --enable-cleartext-plugin --password=$AZ_MYSQL_ADMIN_PASSWORD < create_user.sql
``` Now use the following command to remove the temporary SQL script file:
Run the following script in the project root directory to create a *src/main/res
mkdir -p src/main/resources && touch src/main/resources/application.properties cat << EOF > src/main/resources/application.properties
-url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
-user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME}
EOF ```
EOF
mkdir -p src/main/resources && touch src/main/resources/application.properties cat << EOF > src/main/resources/application.properties
-url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC
-user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?useSSL=true&sslMode=REQUIRED&serverTimezone=UTC
+user=${AZ_MYSQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME}
password=${AZ_MYSQL_NON_ADMIN_PASSWORD} EOF ```
postgresql Concepts Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compliance.md
+
+ Title: 'Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server'
+description: Learn about security in the Flexible Server deployment option for Azure Database for PostgreSQL.
+++++
+ms.devlang: python
+ Last updated : 10/20/2022+
+# Security and Compliance Certifications in Azure Database for PostgreSQL - Flexible Server
+++
+## Overview of Compliance Certifications on Microsoft Azure
+
+Customers experience an increasing demand for highly secure and compliant solutions as they face data breaches and requests from governments to access online customer information. Important regulatory requirements, such as the [General Data Protection Regulation (GDPR)](https://learn.microsoft.com/compliance/regulatory/gdpr) or [Sarbanes-Oxley (SOX)](https://azure.microsoft.com/resources/microsoft-azure-guidance-for-sarbanes-oxley-sox/), make it essential to select cloud services that help customers achieve trust, transparency, security, and compliance. To help customers achieve compliance with national, regional, and industry-specific regulations and requirements, Azure Database for PostgreSQL - Flexible Server builds upon Microsoft Azure's compliance offerings to provide the most rigorous compliance certifications to customers at service general availability.
+To help customers meet their own compliance obligations across regulated industries and markets worldwide, Azure maintains the largest compliance portfolio in the industry, both in terms of breadth (total number of offerings) and depth (number of customer-facing services in assessment scope). Azure compliance offerings are grouped into four segments: globally applicable, US government,
+industry specific, and region/country specific. Compliance offerings are based on various types of assurances, including formal certifications, attestations, validations, authorizations, and assessments produced by independent third-party auditing firms, as well as contractual amendments, self-assessments, and customer guidance documents produced by Microsoft. More detailed information about Azure compliance offerings is available from the [Trust Center](https://www.microsoft.com/trust-center/compliance/compliance-overview).
+
+## Azure Database for PostgreSQL - Flexible Server Compliance Certifications
+
+ Azure Database for PostgreSQL - Flexible Server has achieved a comprehensive set of national, regional, and industry-specific compliance certifications in our Azure public cloud to help you comply with requirements governing the collection and use of your data.
+
+[!div class="mx-tableFixed"]
+> | **Certification**| **Applicable To** |
+> |||
+> |HIPAA and HITECH Act (U.S.) | Healthcare|
+> |HITRUST | Healthcare|
+> |CFTC 1.31 | Financial|
+> |DPP (UK) | Media|
+> |EU EN 301 549 | Accessibility|
+> |EU ENISA IAF | Public and private companies, government entities and not-for-profits|
+> |EU US Privacy Shield | Public and private companies, government entities and not-for-profits|
+> |ISO/IEC 27018 | Public and private companies, government entities, and not-for-profits that provide PII processing services via the cloud|
+> |EU Model Clauses | Public and private companies, government entities, and not-for-profits that provide PII processing services via the cloud|
+> |FERPA | Educational Institutions|
+> |FedRAMP High | US Federal Agencies and Contractors|
+> |GLBA | Financial|
+> |ISO 27001:2013 | Public and private companies, government entities and not-for-profits|
+> |Japan My Number Act | Public and private companies, government entities and not-for-profits|
+> |TISAX | Automotive |
+> |NEN Netherlands 7510 | Healthcare |
+> |NHS IG Toolkit UK | Healthcare |
+> |BIR 2012 Netherlands | Public and private companies, government entities and not-for-profits|
+> |PCI DSS Level 1 | Payment processors and Financial|
+> |SOC 2 Type 2 |Public and private companies, government entities and not-for-profits|
+> |Sec 17a-4 |Financial|
+> |Spain DPA |Public and private companies, government entities and not-for-profits|
+
+## Next Steps
+* [Azure Compliance on Trusted Cloud](https://azure.microsoft.com/explore/trusted-cloud/compliance/)
+* [Azure Trust Center Compliance](https://www.microsoft.com/en-us/trust-center/compliance/compliance-overview)
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
+
+ Title: Read replicas - Azure Database for PostgreSQL - Flexible Server
+description: This article describes the read replica feature in Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 10/21/2022++
+# Read replicas in Azure Database for PostgreSQL - Flexible Server Preview
++
+> [!NOTE]
+> Read replicas for PostgreSQL Flexible Server is currently in preview.
+
+The read replica feature allows you to replicate data from an Azure Database for PostgreSQL server to a read-only replica. Replicas are updated **asynchronously** with the PostgreSQL engine native physical replication technology. Streaming replication by using replication slots is the default operation mode. When necessary, log shipping is used to catch up. You can replicate from the primary server to up to five replicas.
+
+Replicas are new servers that you manage similarly to regular Azure Database for PostgreSQL servers. For each read replica, you're billed for the provisioned compute in vCores and storage in GB/month.
+
+Learn how to [create and manage replicas](how-to-read-replicas-portal.md).
+
+## When to use a read replica
+
+The read replica feature helps to improve the performance and scale of read-intensive workloads. Read workloads can be isolated to the replicas, while write workloads can be directed to the primary. Read replicas can also be deployed in a different region and promoted to a read-write server for disaster recovery.
+
+A common scenario is to have BI and analytical workloads use the read replica as the data source for reporting.
+
+Because replicas are read-only, they don't directly reduce write-capacity burdens on the primary.
+
+### Considerations
+
+The feature is meant for scenarios where the lag is acceptable and for offloading queries. It isn't meant for synchronous replication scenarios where the replica data is expected to be up to date. There will be a measurable delay between the primary and the replica, which can range from minutes to hours depending on the workload and the latency between the primary and the replica. The data on the replica eventually becomes consistent with the data on the primary. Use this feature for workloads that can accommodate this delay.
+
+> [!NOTE]
+> For most workloads, read replicas offer near-real-time updates from the primary. However, with persistent, heavy write-intensive primary workloads, the replication lag can continue to grow, and the replica may never catch up with the primary. This may also increase storage usage at the primary, because the WAL files aren't deleted until they're received at the replica. If this situation persists, deleting and re-creating the read replica after the write-intensive workload completes is an option to bring the replica back to a good state with respect to lag.
+> Asynchronous read replicas aren't suitable for such heavy write workloads. When you evaluate read replicas for your application, monitor the lag on the replica for a full application workload cycle through its peak and non-peak times to assess the possible lag and the expected RTO/RPO at various points of the workload cycle.
+
+## Cross-region replication
+
+You can create a read replica in a different region from your primary server. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users.
+
+You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can have replicas also in any global region of Azure that supports Azure Database for PostgreSQL. Currently [special Azure regions](/azure/virtual-machines/regions#special-azure-regions) are not supported.
+
+[//]: # (### Paired regions)
+
+[//]: # ()
+[//]: # (In addition to the universal replica regions, you can create a read replica in the Azure paired region of your primary server. If you don't know your region's pair, you can learn more from the [Azure Paired Regions article]&#40;../../availability-zones/cross-region-replication-azure.md&#41;.)
+
+[//]: # ()
+[//]: # (If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.)
+
+## Create a replica
+
+When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. Replicas in the same region are created by using a snapshot approach, so the creation time doesn't depend on the size of the data. Geo-replicas are created by using a base backup of the primary instance, which is then transmitted over the network, so the creation time might range from minutes to several hours depending on the size of the primary.
+
+Learn how to [create a read replica in the Azure portal](how-to-read-replicas-portal.md).
+
+If your source PostgreSQL server is encrypted with customer-managed keys, please see the [documentation](concepts-data-encryption.md) for additional considerations.
+
+## Connect to a replica
+
+When you create a replica, it inherits the firewall rules or VNet service endpoint of the primary server. You can change these rules during replica creation and at any later point in time.
+
+The replica inherits the admin account from the primary server. All user accounts on the primary server are replicated to the read replicas. You can only connect to a read replica by using the user accounts that are available on the primary server.
+
+You can connect to the replica by using its hostname and a valid user account, as you would on a regular Azure Database for PostgreSQL server. For a server named **myreplica** with the admin username **myadmin**, you can connect to the replica by using psql:
+
+```bash
+psql -h myreplica.postgres.database.azure.com -U myadmin postgres
+```
+
+At the prompt, enter the password for the user account.
+
+[//]: # (## Monitor replication)
+
+[//]: # ()
+[//]: # (Azure Database for PostgreSQL provides two metrics for monitoring replication. The two metrics are **Max Lag Across Replicas** and **Replica Lag**. To learn how to view these metrics, see the **Monitor a replica** section of the [read replica how-to article]&#40;how-to-read-replicas-portal.md&#41;.)
+
+[//]: # ()
+[//]: # (The **Max Lag Across Replicas** metric shows the lag in bytes between the primary and the most-lagging replica. This metric is applicable and available on the primary server only, and will be available only if at least one of the read replica is connected to the primary and the primary is in streaming replication mode. The lag information does not show details when the replica is in the process of catching up with the primary using the archived logs of the primary in a file-shipping replication mode.)
+
+[//]: # ()
+[//]: # (The **Replica Lag** metric shows the time since the last replayed transaction. If there are no transactions occurring on your primary server, the metric reflects this time lag. This metric is applicable and available for replica servers only. Replica Lag is calculated from the `pg_stat_wal_receiver` view:)
+
+[//]: # ()
+[//]: # (```SQL)
+
+[//]: # (SELECT EXTRACT &#40;EPOCH FROM now&#40;&#41; - pg_last_xact_replay_timestamp&#40;&#41;&#41;;)
+
+[//]: # (```)
+
+[//]: # ()
+[//]: # (Set an alert to inform you when the replica lag reaches a value that isn't acceptable for your workload.)
+
+[//]: # ()
+[//]: # (For additional insight, query the primary server directly to get the replication lag in bytes on all replicas.)
+
+[//]: # ()
+[//]: # (> [!NOTE])
+
+[//]: # (> If a primary server or read replica restarts, the time it takes to restart and catch up is reflected in the Replica Lag metric.)
+
+## Promote replicas
+
+You can stop the replication between a primary and a replica by promoting one or more replicas at any time. The promote action causes the replica to apply all pending logs and promotes it to be an independent, standalone read-writeable server. The data in the standalone server is the data that was available on the replica server at the time replication was stopped. Any subsequent updates at the primary aren't propagated to the replica. However, the replica server may have accumulated logs that aren't applied yet. As part of the promote process, the replica applies all pending logs before accepting client connections.
+
+>[!NOTE]
+> Resetting the admin password on a replica server is currently not supported. Additionally, updating the admin password together with the promote operation in the same request isn't supported. If you want to do both, first promote the replica server, and then update the password on the newly promoted server separately.
+
+### Considerations
+
+- Before you stop replication on a read replica, check the replication lag to ensure the replica has all the data that you require (see the example query after this list).
+- Because the read replica has to apply all pending logs before it can become a standalone server, RTO can be higher for write-heavy workloads when you stop replication, as there could be a significant delay on the replica. Keep this in mind when you plan to promote a replica.
+- The promoted replica server can't be made into a replica again.
+- If you promote a replica to be a standalone server, you can't establish replication back to the old primary server. If you want to go back to the old primary region, you can either establish a new replica server with a new name, or delete the old primary and create a replica by using the old primary name.
+- If you have multiple read replicas and you promote one of them to be your primary server, the other replica servers are still connected to the old primary. You may have to re-create replicas from the new, promoted server.
+- During replica create, delete, and promote operations, the primary server will be in an upgrading state.
+- **Power operations**: You can perform power operations (start/stop) on a replica, but the primary server should be stopped before you stop read replicas, and the primary server should be started before you start the replicas.
+- If a server has read replicas, delete the read replicas before you delete the primary server.
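As a quick check of the lag before you promote, you can run the following query on the replica. This is a minimal sketch that uses the standard `pg_last_xact_replay_timestamp()` function; keep in mind that the value can grow while the primary is idle even though the replica is fully caught up.

```sql
-- Run on the read replica: approximate lag, in seconds, since the last replayed transaction.
SELECT EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp()) AS lag_seconds;
```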
+
+When you promote a replica, the replica loses all links to its previous primary and other replicas.
+
+Learn how to [promote a replica](how-to-read-replicas-portal.md#promote-replicas).
+
+## Failover to replica
+
+In the event of a primary server failure, failover to the read replica is **not** automatic.
+
+Because replication is asynchronous, there could be a considerable lag between the primary and the replica(s). The amount of lag is influenced by a number of factors, such as the type of workload running on the primary server and the latency between the primary and the replica server. In typical cases with a nominal write workload, replica lag is expected to be between a few seconds and a few minutes. However, in cases where the primary runs a very heavy write-intensive workload and the replica isn't catching up fast enough, the lag can be much higher.
+
+[//]: # (You can track the replication lag for each replica using the *Replica Lag* metric. This metric shows the time since the last replayed transaction at the replica. We recommend that you identify the average lag by observing the replica lag over a period of time. You can set an alert on replica lag, so that if it goes outside your expected range, you will be notified to take action.)
+
+> [!Tip]
+> If you fail over to the replica, the lag at the time you delink the replica from the primary indicates how much data is lost.
+
+After you've decided to fail over to a replica, follow these steps:
+
+1. Promote replica<br/>
+ This step is necessary to make the replica server a standalone server that can accept writes. As part of this process, the replica server is delinked from the primary. After you initiate promotion, the backend process typically takes a few minutes to apply any residual logs that weren't yet applied and to open the database as a read-writeable server. See the [Promote replicas](#promote-replicas) section of this article to understand the implications of this action.
+
+2. Point your application to the (former) replica<br/>
+ Each server has a unique connection string. Update your application connection string to point to the (former) replica instead of the primary.
+
+Once your application is successfully processing reads and writes, you've completed the failover. The amount of downtime your application experiences depends on when you detect an issue and complete steps 1 and 2.
+
+### Disaster recovery
+
+When there's a major disaster event, such as an availability-zone-level or regional failure, you can perform a disaster recovery operation by promoting your read replica. In the Azure portal, go to the read replica server, select the replication tab, and then promote the replica to become an independent server.
+
+[//]: # (Alternatively, you can use the [Azure CLI]&#40;/cli/azure/postgres/server/replica#az-postgres-server-replica-stop&#41; to stop and promote the replica server.)
+
+## Considerations
+
+This section summarizes considerations about the read replica feature.
+
+### New replicas
+
+A read replica is created as a new Azure Database for PostgreSQL server. An existing server can't be made into a replica. You can't create a replica of another read replica.
+
+### Replica configuration
+
+During read replica creation, firewall rules and the data encryption method can be changed. Server parameters and the authentication method are inherited from the primary server and can't be changed during creation. After a replica is created, several settings can be changed, including storage, compute, backup retention period, server parameters, authentication method, and firewall rules.
+
+### Scaling
+
+Scaling vCores or between General Purpose and Memory Optimized:
+* PostgreSQL requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN); otherwise, the secondary servers won't start. The affected parameters are `max_connections`, `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`, and `max_worker_processes`. You can compare the values on the primary and the replica by using the query after this list.
+* **Scaling up**: First scale up a replica's compute, then scale up the primary.
+* **Scaling down**: First scale down the primary's compute, then scale down the replica.
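As a sanity check, the following sketch lists the current values of these parameters by using the standard `pg_settings` view. Run it on both the primary and the replica and compare the output.

```sql
-- The replica's values must be greater than or equal to the primary's values.
SELECT name, setting
FROM pg_settings
WHERE name IN ('max_connections', 'max_prepared_transactions',
               'max_locks_per_transaction', 'max_wal_senders', 'max_worker_processes');
```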
+
+## Next steps
+
+* Learn how to [create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md).
+
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql How To Bulk Load Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-bulk-load-data.md
Title: Bulk data uploads For Azure Database for PostgreSQL - Flexible Server
-description: Best practices to bulk load data in Azure Database for PostgreSQL - Flexible Server
+ Title: Upload data in bulk in Azure Database for PostgreSQL - Flexible Server
+description: This article discusses best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server
Last updated 08/16/2022-+
-# Best practices for bulk data upload for Azure Database for PostgreSQL - Flexible Server
+# Best practices for uploading data in bulk in Azure Database for PostgreSQL - Flexible Server
-There are two types of bulk loads:
-- Initial data load of an empty database-- Incremental data loads-
-This article discusses various loading techniques along with best practices when it comes to initial data loads and incremental data loads.
+This article discusses various methods for loading data in bulk in Azure Database for PostgreSQL - Flexible Server, along with best practices for both initial data loads in empty databases and incremental data loads.
## Loading methods
-Performance-wise, the data loading methods arranged in the order of most time consuming to least time consuming is as follows:
-- Single record Insert-- Batch into 100-1000 rows per commit. One can use transaction block to wrap multiple records per commit -- INSERT with multi row values-- COPY command
+The following data-loading methods are arranged in order from most time consuming to least time consuming:
+- Run a single-record `INSERT` command.
+- Batch into 100 to 1000 rows per commit. You can use a transaction block to wrap multiple records per commit.
+- Run `INSERT` with multiple row values.
+- Run the `COPY` command.
-The preferred method to load the data into the database is by copy command. If the copy command isn't possible, batch INSERTs is the next best method. Multi-threading with a COPY command is the optimal method for bulk data loads.
+The preferred method for loading data into a database is to use the `COPY` command. If the `COPY` command isn't possible, using batch `INSERT` is the next best method. Multi-threading with a `COPY` command is the optimal method for loading data in bulk.
## Best practices for initial data loads
-#### Drop indexes
+### Drop indexes
+
+Before you do an initial data load, we recommend that you drop all the indexes in the tables. It's always more efficient to create the indexes after the data is loaded.
-Before an initial data load, it's advised to drop all the indexes in the tables. It's always more efficient to create the indexes after the data load.
+### Drop constraints
-#### Drop constraints
+The main constraints to consider dropping are described here:
-##### Unique key constraints
+* **Unique key constraints**
-To achieve strong performance, it's advised to drop unique key constraints before an initial data load, and recreate it once the data load is completed. However, dropping unique key constraints cancels the safeguards against duplicated data.
+ To achieve strong performance, we recommend that you drop unique key constraints before an initial data load, and re-create them after the data load is completed. However, dropping unique key constraints cancels the safeguards against duplicated data.
-##### Foreign key constraints
+* **Foreign key constraints**
-It's advised to drop foreign key constraints before initial data load and recreate once data load is completed.
+ We recommend that you drop foreign key constraints before the initial data load and re-create them after the data load is completed.
-Changing the `session_replication_role` parameter to replica also disables all foreign key checks. However, be aware making the change can leave data in an inconsistent state if not properly used.
+ Changing the `session_replication_role` parameter to `replica` also disables all foreign key checks. However, be aware that making the change can leave data in an inconsistent state if it's not properly used.
-#### Unlogged tables
+### Unlogged tables
-Use of unlogged tables will make data load faster. Data written to unlogged tables isn't written to the write-ahead log.
+Consider the pros and cons of using unlogged tables before you use them in initial data loads.
-The disadvantages of using unlogged tables are
+Using unlogged tables makes data load faster. Data that's written to unlogged tables isn't written to the write-ahead log.
+
+The disadvantages of using unlogged tables are:
- They aren't crash-safe. An unlogged table is automatically truncated after a crash or unclean shutdown. - Data from unlogged tables can't be replicated to standby servers.
-The pros and cons of using unlogged tables should be considered before using in initial data loads.
-
-Use the following options to create an unlogged table or change an existing table to unlogged table:
+To create an unlogged table or change an existing table to an unlogged table, use the following options:
-Create a new unlogged table by using the following syntax:
-```
-CREATE UNLOGGED TABLE <tablename>;
-```
+* Create a new unlogged table by using the following syntax:
+ ```
+ CREATE UNLOGGED TABLE <tablename>;
+ ```
-Convert an existing logged table to an unlogged table by using the following syntax:
-```
-ALTER TABLE <tablename> SET UNLOGGED;
-```
+* Convert an existing logged table to an unlogged table by using the following syntax:
-#### Server parameter tuning
+ ```
+ ALTER TABLE <tablename> SET UNLOGGED;
+ ```
-`Autovacuum`
+### Server parameter tuning
-During the initial data load, it's best to turn off the autovacuum. Once the initial load is completed, it's advised to run a manual VACUUM ANALYZE on all tables in the database, and then turn on autovacuum.
+* `autovacuum`: During the initial data load, it's best to turn off `autovacuum`. After the initial load is completed, we recommend that you run a manual `VACUUM ANALYZE` on all tables in the database (see the example after this parameter list), and then turn on `autovacuum`.
> [!NOTE]
-> Please follow the recommendations below only if there is enough memory and disk space.
-
-`maintenance_work_mem`
+> Follow the recommendations here only if there's enough memory and disk space.
-The maintenance_work_mem can be set to a maximum of 2 GB on a flexible server. `maintenance_work_mem` helps in speeding up autovacuum, index, and foreign key creation.
+* `maintenance_work_mem`: Can be set to a maximum of 2 gigabytes (GB) on a flexible server. `maintenance_work_mem` helps in speeding up autovacuum, index, and foreign key creation.
-`checkpoint_timeout`
+* `checkpoint_timeout`: On a flexible server, the `checkpoint_timeout` value can be increased to a maximum of 24 hours from the default setting of 5 minutes. We recommend that you increase the value to 1 hour before you load data initially on the flexible server.
-On the flexible server, the checkpoint_timeout can be increased to maximum 24 h from default 5 minutes. It's advised to increase the value to 1 hour before initial data loads on Flexible server.
+* `checkpoint_completion_target`: We recommend a value of 0.9.
-`checkpoint_completion_target`
+* `max_wal_size`: Can be set to the maximum allowed value on a flexible server, which is 64 GB while you're doing the initial data load.
-A value of 0.9 is always recommended.
+* `wal_compression`: Can be turned on. Enabling this parameter can incur some extra CPU cost spent on the compression during write-ahead log (WAL) logging and on the decompression during WAL replay.
-`max_wal_size`
-The max_wal_size can be set to the maximum allowed value on the Flexible server, which is 64 GB while we do the initial data load.
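For example, the manual cleanup recommended for `autovacuum` can be a single database-wide command. This is a minimal sketch; running `VACUUM` without a table name processes all tables in the current database.

```sql
-- Run once per database after the initial data load completes, then re-enable autovacuum.
VACUUM (VERBOSE, ANALYZE);
```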
+### Flexible server recommendations
-`wal_compression`
+Before you begin an initial data load on the flexible server, we recommend that you:
-wal_compression can be turned on. Enabling the parameter can have some extra CPU cost spent on the compression during WAL logging and on the decompression during WAL replay.
+- Disable high availability (HA) on the server. You can enable HA after the initial load is completed on the primary server.
+- Create read replicas after the initial data load is completed.
+- Make logging minimal or disable it altogether during initial data loads (for example: disable pgaudit, pg_stat_statements, query store).
-#### Flexible server recommendations
+### Re-create indexes and add constraints
-Before the start of initial data load on a Flexible server, it's recommended to
+Assuming that you dropped the indexes and constraints before the initial load, we recommend that you use high values in `maintenance_work_mem` (as mentioned earlier) for creating indexes and adding constraints. In addition, starting with PostgreSQL version 11, the following parameters can be modified for faster parallel index creation after the initial data load:
-- Disable high availability [HA] on the server. You can enable HA once initial load is completed on master/primary.-- Create read replicas after initial data load is completed.-- Make logging minimal or disable all together during initial data loads. Example: disable pgaudit, pg_stat_statements, query store.
+* `max_parallel_workers`: Sets the maximum number of workers that the system can support for parallel queries.
+* `max_parallel_maintenance_workers`: Controls the maximum number of worker processes that can be used in `CREATE INDEX`.
-#### Recreating indexes and adding constraints
-
-Assuming the indexes and constraints were dropped before the initial load, it's recommended to have high values of maintenance_work_mem (as recommended above) for creating indexes and adding constraints. In addition, starting with Postgres version 11, the following parameters can be modified for faster parallel index creation after initial data load:
-
-`max_parallel_workers`
-
-Sets the maximum number of workers that the system can support for parallel queries.
-
-`max_parallel_maintenance_workers`
-
-Controls the maximum number of worker processes, which can be used to CREATE INDEX.
-
-One could also create the indexes by making recommended settings at the session level. An example of how it can be done at the session level is shown below:
+You can also create the indexes by making the recommended settings at the session level. Here's an example of how to do it:
```sql SET maintenance_work_mem = '2GB';
CREATE INDEX test_index ON test_table (test_column);
## Best practices for incremental data loads
-#### Table partitioning
+### Partition tables
+
+We always recommend that you partition large tables. Some advantages of partitioning, especially during incremental loads, include:
+- Creating new partitions based on new deltas makes it efficient to add new data to the table.
+- Maintaining tables becomes easier. You can drop a partition during an incremental data load to avoid time-consuming deletions in large tables (see the sketch after this list).
+- Autovacuum would be triggered only on partitions that were changed or added during incremental loads, which makes maintaining statistics on the table easier.
-It's always recommended to partition large tables. Some advantages of partitioning, especially during incremental loads:
-- Creation of new partitions based on the new deltas makes it efficient to add new data to the table.-- Maintenance of tables becomes easier. One can drop a partition during incremental data loads avoiding time-consuming deletes on large tables.-- Autovacuum would be triggered only on partitions that were changed or added during incremental loads, which make maintaining statistics on the table easier.
+### Maintain up-to-date table statistics
-#### Maintain up-to-date table statistics
+Monitoring and maintaining table statistics is important for query performance on the database. This also includes scenarios where you have incremental loads. PostgreSQL uses the autovacuum daemon process to clean up dead tuples and analyze the tables to keep the statistics updated. For more information, see [Autovacuum monitoring and tuning](./how-to-autovacuum-tuning.md).
-Monitoring and maintaining table statistics is important for query performance on the database. This also includes scenarios where you have incremental loads. PostgreSQL uses the autovacuum daemon process to clean up dead tuples and analyze the tables to keep the statistics updated. For more details on autovacuum monitoring and tuning, review [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
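As a quick freshness check, a query along the following lines (a sketch that uses the standard `pg_stat_user_tables` view) shows dead-tuple counts and the last autovacuum and autoanalyze times:

```sql
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```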
+### Create indexes on foreign key constraints
-#### Index creation on foreign key constraints
+Creating indexes on foreign keys in the child tables can be beneficial in the following scenarios:
+- Data updates or deletions in the parent table. When data is updated or deleted in the parent table, lookups are performed on the child table. To make those lookups faster, you could index the foreign key columns on the child table (a hypothetical sketch follows this list).
+- Queries that join the parent and child tables on key columns.
-Creating indexes on foreign keys in the child tables would be beneficial in the following scenarios:
-- Data updates or deletions in the parent table. When data is updated or deleted in the parent table lookups would be performed on the child table. To make lookups faster, you could index foreign keys on the child table.-- Queries, where we see join between parent and child tables on key columns.
+### Identify unused indexes
-#### Unused indexes
+Identify unused indexes in the database and drop them. Indexes are an overhead on data loads. The fewer the indexes on a table, the better the performance during data ingestion.
-Identify unused indexes in the database and drop them. Indexes are an overhead on data loads. The fewer the indexes on a table the better the performance is during data ingestion.
-Unused indexes can be identified in two ways - by Query Store and an index usage query.
+You can identify unused indexes in two ways: by Query Store and an index usage query.
-##### Query store
+**Query Store**
-Query Store helps identify indexes, which can be dropped based on query usage patterns on the database. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
-Once Query Store is enabled on the server, the following query can be used to identify indexes that can be dropped by connecting to azure_sys database.
+The Query Store feature helps identify indexes, which can be dropped based on query usage patterns on the database. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+
+After you've enabled Query Store on the server, you can connect to the azure_sys database and use the following query to identify indexes that can be dropped.
```sql SELECT * FROM IntelligentPerformance.DropIndexRecommendations; ```
-##### Index usage
+**Index usage**
-The below query can also be used to identify unused indexes:
+You can also use the following query to identify unused indexes:
```sql SELECT
WHERE
ORDER BY 1, 2; ```
-Number_of_scans, tuples_read, and tuples_fetched columns would indicate index usage.number_of_scans column value of zero points to index not being used.
+The `number_of_scans`, `tuples_read`, and `tuples_fetched` columns indicate index usage. A `number_of_scans` value of zero points to an index that's not being used.
-#### Server parameter tuning
+### Server parameter tuning
> [!NOTE]
-> Please follow the recommendations below only if there is enough memory and disk space.
-
-`maintenance_work_mem`
-
-The maintenance_work_mem parameter can be set to a maximum of 2 GB on Flexible Server. `maintenance_work_mem` helps speed up index creation and foreign key additions.
-
-`checkpoint_timeout`
+> Follow the recommendations in the following parameters only if there's enough memory and disk space.
-On the Flexible Server, the checkpoint_timeout parameter can be increased to 10 minutes or 15 minutes from the default 5 minutes. Increasing `checkpoint_timeout` to a larger value, such as 15 minutes, can reduce the I/O load, but the downside is that it takes longer to recover if there was a crash. Careful consideration is recommended before making the change.
+* `maintenance_work_mem`: This parameter can be set to a maximum of 2 GB on the flexible server. `maintenance_work_mem` helps speed up index creation and foreign key additions.
-`checkpoint_completion_target`
+* `checkpoint_timeout`: On the flexible server, the `checkpoint_timeout` value can be increased to 10 or 15 minutes from the default setting of 5 minutes. Increasing `checkpoint_timeout` to a larger value, such as 15 minutes, can reduce the I/O load, but the downside is that it takes longer to recover if there's a crash. We recommend careful consideration before you make the change.
-A value of 0.9 is always recommended.
+* `checkpoint_completion_target`: We recommend a value of 0.9.
-`max_wal_size`
+* `max_wal_size`: This value depends on SKU, storage, and workload. One way to arrive at the correct value for `max_wal_size` is shown in the following example.
-The max_wal_size depends on SKU, storage, and workload.
+ During peak business hours, arrive at a value by doing the following:
-One way to arrive at the correct value for max_wal_size is shown below.
+ a. Take the current WAL log sequence number (LSN) by running the following query:
-During peak business hours, follow the below steps to arrive at a value:
--- Take the current WAL LSN by executing the below query:-
-```sql
-SELECT pg_current_wal_lsn ();
-```
--- Wait for checkpoint_timeout number of seconds. Take the current WAL LSN by executing the below query:-
-```sql
-SELECT pg_current_wal_lsn ();
-```
--- Use the two results to check the difference in GB:
-
-```sql
-SELECT round (pg_wal_lsn_diff('LSN value when run second time','LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
-```
+ ```sql
+ SELECT pg_current_wal_lsn ();
+ ```
+ b. Wait for `checkpoint_timeout` number of seconds. Take the current WAL LSN by running the following query:
-`wal_compression`
+ ```sql
+ SELECT pg_current_wal_lsn ();
+ ```
+ c. Use the two results to check the difference, in GB:
+
+ ```sql
+ SELECT round (pg_wal_lsn_diff('LSN value when run second time','LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
+ ```
-wal_compression can be turned on. Enabling the parameter can have some extra CPU cost spent on the compression during WAL logging and on the decompression during WAL replay.
+* `wal_compression`: Can be turned on. Enabling this parameter can incur some extra CPU cost spent on the compression during WAL logging and on the decompression during WAL replay.
## Next steps-- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-CPU-utilization.md).-- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).-- Configure server parameters [Server Parameters](./howto-configure-server-parameters-using-portal.md).-- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).-- Troubleshoot high CPU utilization [High IOPS Utilization](./how-to-high-io-utilization.md).
+- [Troubleshoot high CPU utilization](./how-to-high-CPU-utilization.md)
+- [Troubleshoot high memory utilization](./how-to-high-memory-utilization.md)
+- [Configure server parameters](./howto-configure-server-parameters-using-portal.md)
+- [Troubleshoot and tune Autovacuum](./how-to-autovacuum-tuning.md)
+- [Troubleshoot high IOPS utilization](./how-to-high-io-utilization.md)
postgresql How To High Io Utilization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-high-io-utilization.md
Title: High IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-description: Troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
+description: This article is a troubleshooting guide for high IOPS utilization in Azure Database for PostgreSQL - Flexible Server
# Troubleshoot high IOPS utilization for Azure Database for PostgreSQL - Flexible Server
-This article shows you how to quickly identify the root cause of high IOPS utilization and possible remedial actions to control IOPS utilization when using [Azure Database for PostgreSQL - Flexible Server](overview.md).
+This article shows you how to quickly identify the root cause of high IOPS (input/output operations per second) utilization and provides remedial actions to control IOPS utilization when you're using [Azure Database for PostgreSQL - Flexible Server](overview.md).
-In this article, you learn:
+In this article, you learn how to:
-- About tools to identify high IO utilization, such as Azure Metrics, Query Store, and pg_stat_statements.-- How to identify root causes, such as long-running queries, checkpoint timings, disruptive autovacuum daemon process, and high storage utilization.-- How to resolve high IO utilization using Explain Analyze, tune checkpoint-related server parameters, and tune autovacuum daemon.
+- Use tools to identify high input/output (I/O) utilization, such as Azure Metrics, Query Store, and pg_stat_statements.
+- Identify root causes, such as long-running queries, checkpoint timings, a disruptive autovacuum daemon process, and high storage utilization.
+- Resolve high I/O utilization by using Explain Analyze, tune checkpoint-related server parameters, and tune the autovacuum daemon.
-## Tools to identify high IO utilization
+## Tools to identify high I/O utilization
-Consider these tools to identify high IO utilization.
+Consider the following tools to identify high I/O utilization.
-### Azure metrics
+### Azure Metrics
-Azure Metrics is a good starting point to check the IO utilization for the definite date and period. Metrics give information about the time duration the IO utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput to find out times when the workload caused high IO utilization. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
+Azure Metrics is a good starting point to check I/O utilization for a defined date and period. Metrics give information about the time during which I/O utilization is high. Compare the graphs of Write IOPs, Read IOPs, Read Throughput, and Write Throughput to find out times when the workload is causing high I/O utilization. For proactive monitoring, you can configure alerts on the metrics. For step-by-step guidance, see [Azure Metrics](./howto-alert-on-metrics.md).
-### Query store
+### Query Store
-Query Store automatically captures the history of queries and runtime statistics and retains them for your review. It slices the data by time to see temporal usage patterns. Data for all users, databases, and queries is stored in a database named azure_sys in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Query Store](./concepts-query-store.md).
+The Query Store feature automatically captures the history of queries and runtime statistics, and retains them for your review. It slices the data by time to see temporal usage patterns. Data for all users, databases, and queries is stored in a database named *azure_sys* in the Azure Database for PostgreSQL instance. For step-by-step guidance, see [Monitor performance with Query Store](./concepts-query-store.md).
-Use the following statement to view the top five SQL statements that consume IO:
+Use the following statement to view the top five SQL statements that consume I/O:
```sql select * from query_store.qs_view qv where is_system_query is FALSE order by blk_read_time + blk_write_time desc limit 5; ```
-### pg_stat_statements
+### The pg_stat_statements extension
-The pg_stat_statements extension helps identify queries that consume IO on the server.
+The `pg_stat_statements` extension helps identify queries that consume I/O on the server.
-Use the following statement to view the top five SQL statements that consume IO:
+Use the following statement to view the top five SQL statements that consume I/O:
```sql SELECT userid::regrole, dbid, query
LIMIT 5;
``` > [!NOTE]
-> When using query store or pg_stat_statements for columns blk_read_time and blk_write_time to be populated enable server parameter `track_io_timing`.For more information about the **track_io_timing** parameter, review [Server Parameters](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
+> For the `blk_read_time` and `blk_write_time` columns to be populated when you use Query Store or pg_stat_statements, you need to enable the server parameter `track_io_timing`. For more information about `track_io_timing`, review [Server parameters](https://www.postgresql.org/docs/current/runtime-config-statistics.html).
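As a quick check of whether the parameter is already enabled, you can run the standard `SHOW` command:

```sql
SHOW track_io_timing;  -- returns 'on' when I/O timing data is being collected
```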
## Identify root causes
-If IO consumption levels are high in general, the following could be possible root causes:
+If I/O consumption levels are high in general, the following could be the root causes:
### Long-running transactions
-Long-running transactions can consume IO, that can lead to high IO utilization.
+Long-running transactions can consume I/O, which can lead to high I/O utilization.
-The following query helps identify connections running for the longest time:
+The following query helps identify connections that are running for the longest time:
```sql SELECT pid, usename, datname, query, now() - xact_start as duration
ORDER BY duration DESC;
### Checkpoint timings
-High IO can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the Postgres log file for the following log text "LOG: checkpoints are occurring too frequently."
+High I/O can also be seen in scenarios where a checkpoint is happening too frequently. One way to identify this is by checking the PostgreSQL log file for the following log text: "LOG: checkpoints are occurring too frequently."
-You could also investigate using an approach where periodic snapshots of `pg_stat_bgwriter` with a timestamp is saved. Using the snapshots saved the average checkpoint interval, number of checkpoints requested and number of checkpoints timed can be calculated.
+You could also investigate by using an approach where periodic snapshots of `pg_stat_bgwriter` with a time stamp are saved. By using the saved snapshots, you can calculate the average checkpoint interval, number of checkpoints requested, and number of checkpoints timed.
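One minimal way to take such snapshots is sketched here; the snapshot table name is illustrative, and the columns assume a PostgreSQL version in which `pg_stat_bgwriter` exposes the checkpoint counters.

```sql
-- Create a tracking table once, shaped like the counters plus a timestamp.
CREATE TABLE IF NOT EXISTS bgwriter_snapshots AS
SELECT now() AS snapshot_time, checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter
WITH NO DATA;

-- Run this periodically; the deltas between rows give the checkpoint frequency.
INSERT INTO bgwriter_snapshots
SELECT now(), checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;
```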
### Disruptive autovacuum daemon process
-Execute the below query to monitor autovacuum:
+Run the following query to monitor autovacuum:
```sql SELECT schemaname, relname, n_dead_tup, n_live_tup, autovacuum_count, last_vacuum, last_autovacuum, last_autoanalyze, autovacuum_count, autoanalyze_count FROM pg_stat_all_tables WHERE n_live_tup > 0; ``` The query is used to check how frequently the tables in the database are being vacuumed.
-**last_autovacuum** : provides date and time when the last autovacuum ran on the table.
-**autovacuum_count** : provides number of times the table was vacuumed.
-**autoanalyze_count**: provides number of times the table was analyzed.
+* `last_autovacuum`: The date and time when the last autovacuum ran on the table.
+* `autovacuum_count`: The number of times the table was vacuumed.
+* `autoanalyze_count`: The number of times the table was analyzed.
-## Resolve high IO utilization
+## Resolve high I/O utilization
-To resolve high IO utilization, there are three methods you could employ - using Explain Analyze, terminating long-running transactions, or tuning server parameters.
+To resolve high I/O utilization, you can use any of the following methods.
-### Explain Analyze
+### The `EXPLAIN ANALYZE` command
-Once you identify the query that's consuming high IO, use **EXPLAIN ANALYZE** to further investigate the query and tune it. For more information about the **EXPLAIN ANALYZE** command, review [Explain Plan](https://www.postgresql.org/docs/current/sql-explain.html).
+After you've identified the query that's consuming high I/O, use `EXPLAIN ANALYZE` to further investigate the query and tune it. For more information about the `EXPLAIN ANALYZE` command, review the [EXPLAIN plan](https://www.postgresql.org/docs/current/sql-explain.html).
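For example (the table and filter here are hypothetical; the `BUFFERS` option adds block I/O counts to the plan output):

```sql
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM orders
WHERE order_date >= DATE '2022-10-01';
```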
-### Terminating long running transactions
+### Terminate long-running transactions
-You could consider killing a long running transaction as an option.
+As one option, you could consider terminating a long-running transaction.
-To terminate a session's PID, you need to detect the PID using the following query:
+To terminate a session's process ID (PID), you need to detect the PID by using the following query:
```sql SELECT pid, usename, datname, query, now() - xact_start as duration
WHERE pid <> pg_backend_pid() and state IN ('idle in transaction', 'active')
ORDER BY duration DESC; ```
-You can also filter by other properties like `usename` (username), `datname` (database name) etc.
+You can also filter by other properties, such as `usename` (username) or `datname` (database name).
-Once you have the session's PID, you can terminate using the following query:
+After you have the session's PID, you can terminate it by using the following query:
```sql SELECT pg_terminate_backend(pid); ```
-### Server parameter tuning
+### Tune server parameters
-If it's observed that the checkpoint is happening too frequently, increase `max_wal_size` server parameter until most checkpoints are time driven, instead of requested. Eventually, 90% or more should be time based, and the interval between two checkpoints is close to the `checkpoint_timeout` set on the server.
+If you've observed that the checkpoint is happening too frequently, increase the `max_wal_size` server parameter until most checkpoints are time driven, instead of requested. Eventually, 90 percent or more should be time based, and the interval between two checkpoints should be close to the `checkpoint_timeout` value that's set on the server.
-`max_wal_size`
+* `max_wal_size`: Peak business hours are a good time to arrive at a `max_wal_size` value. To arrive at a value, do the following:
-Peak business hours are a good time to arrive at `max_wal_size` value. Follow the below listed steps to arrive at a value.
+ 1. Run the following query to get the current WAL LSN, and then note the result:
-Execute the below query to get current WAL LSN, note down the result:
+ ```sql
+ select pg_current_wal_lsn();
+ ```
- ```sql
-select pg_current_wal_lsn();
-```
-
-Wait for `checkpoint_timeout` number of seconds. Execute the below query to get current WAL LSN, note down the result:
-
- ```sql
-select pg_current_wal_lsn();
-```
-
-Execute below query that uses the two results to check the difference in GB:
+ 1. Wait for a `checkpoint_timeout` number of seconds. Run the following query to get the current WAL LSN, and then note the result:
- ```sql
-select round (pg_wal_lsn_diff ('LSN value when run second time', 'LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
-```
+ ```sql
+ select pg_current_wal_lsn();
+ ```
-`checkpoint_completion_target`
+ 1. Run the following query, which uses the two results, to check the difference, in gigabytes (GB):
-A good practice would be to set it to 0.9. As an example, a value of 0.9 for a `checkpoint_timeout` of 5 minutes indicates the target to complete a checkpoint is 270 sec [0.9*300 sec]. A value of 0.9 provides fairly consistent I/O load. An aggressive value of `check_point_completion_target` may result in increased IO load on the server.
+ ```sql
+ select round (pg_wal_lsn_diff ('LSN value when run second time', 'LSN value when run first time')/1024/1024/1024,2) WAL_CHANGE_GB;
+ ```
-`checkpoint_timeout`
+* `checkpoint_completion_target`: A good practice would be to set the value to 0.9. As an example, a value of 0.9 for a `checkpoint_timeout` of 5 minutes indicates that the target to complete a checkpoint is 270 seconds (0.9\*300 seconds). A value of 0.9 provides a fairly consistent I/O load. An aggressive value of `checkpoint_completion_target` might result in an increased I/O load on the server.
-The `checkpoint_timeout` value can be increased from default value set on the server. Note while increasing the `checkpoint_timeout` take into consideration that increasing the value would also increase the time for crash recovery.
+* `checkpoint_timeout`: You can increase the `checkpoint_timeout` value from the default value that's set on the server. As you're increasing the value, take into consideration that increasing it would also increase the time for crash recovery.
-### Autovacuum tuning to decrease disruptions
+### Tune autovacuum to decrease disruptions
-For more details on monitoring and tuning in scenarios where autovacuum is too disruptive review [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+For more information about monitoring and tuning in scenarios where autovacuum is too disruptive, review [Autovacuum tuning](./how-to-autovacuum-tuning.md).
### Increase storage
-Increasing storage will also help in addition of more IOPS to the server. For more details on storage and associated IOPS review [Compute and Storage Options](./concepts-compute-storage.md).
+Increasing storage also helps by adding more IOPS to the server. For more information about storage and associated IOPS, review [Compute and storage options](./concepts-compute-storage.md).
## Next steps -- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md)-- Compute and Storage Options [Compute and Storage Options](./concepts-compute-storage.md)
+- [Troubleshoot and tune autovacuum](./how-to-autovacuum-tuning.md)
+- [Compute and storage options](./concepts-compute-storage.md)
postgresql How To Pgdump Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-pgdump-restore.md
Title: Best practices for PG dump and restore in Azure Database for PostgreSQL - Flexible Server
-description: Best Practices For PG Dump And Restore in Azure Database for PostgreSQL - Flexible Server
+ Title: Best practices for pg_dump and pg_restore in Azure Database for PostgreSQL - Flexible Server
+description: This article discusses best practices for pg_dump and pg_restore in Azure Database for PostgreSQL - Flexible Server
Last updated 09/16/2022
-# Best practices for PG dump and restore for Azure Database for PostgreSQL - Flexible Server
+# Best practices for pg_dump and pg_restore for Azure Database for PostgreSQL - Flexible Server
-This article reviews options to speed up pg_dump and pg_restore. It also explains the best server configurations for carrying out pg_restore.
+This article reviews options and best practices for speeding up pg_dump and pg_restore. It also explains the best server configurations for carrying out pg_restore.
## Best practices for pg_dump
-pg_dump is a utility that can extract a PostgreSQL database into a script file or archive file. Few of the command line options that can be used to reduce the overall dump time using pg_dump are listed below.
+You can use the pg_dump utility to extract a PostgreSQL database into a script file or archive file. A few of the command line options that you can use to reduce the overall dump time by using pg_dump are listed in the following sections.
-#### Directory format(-Fd)
+### Directory format (-Fd)
-This option outputs a directory-format archive that can be input to pg_restore. By default the output is compressed.
+This option outputs a directory-format archive that you can input to pg_restore. By default, the output is compressed.
-#### Parallel jobs(-j)
+### Parallel jobs (-j)
-Pg_dump can run dump jobs concurrently using the parallel jobs option. This option reduces the total dump time but increases the load on the database server. It's advised to arrive at a parallel job value after closely monitoring the source server metrics like CPU, Memory, and IOPS usage.
+With pg_dump, you can run dump jobs concurrently by using the parallel jobs option. This option reduces the total dump time but increases the load on the database server. We recommend that you arrive at a parallel job value after closely monitoring the source server metrics, such as CPU, memory, and IOPS (input/output operations per second) usage.
-There are a few considerations that need to be taken into account when setting this value
-- Pg_dump requires number of parallel jobs +1 number of connections when parallel jobs option is considered, so make sure max_connections is set accordingly.-- The number of parallel jobs should be less than or equal to the number of vCPUs allocated for the database server.
+When you're setting a value for the parallel jobs option, pg_dump requires the following:
+- The number of connections must equal the number of parallel jobs&nbsp;+1, so be sure to set the `max_connections` value accordingly (the check after this list can help).
+- The number of parallel jobs should be less than or equal to the number of vCPUs that are allocated for the database server.
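Before settling on a `-j` value, a quick check of connection headroom on the source server can help. This is a minimal sketch that uses standard views; leave room for the parallel jobs&nbsp;+1 connections that pg_dump needs.

```sql
SHOW max_connections;

-- Connections currently in use on the source server.
SELECT count(*) AS connections_in_use FROM pg_stat_activity;
```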
-#### Compression(-Z0)
+### Compression (-Z0)
-Specifies the compression level to use. Zero means no compression. Zero compression during pg_dump process could help with performance gains.
+This option specifies the compression level to use. Zero means no compression. Zero compression during the pg_dump process could help with performance gains.
-#### Table bloats and vacuuming
+### Table bloats and vacuuming
-Before the starting the pg_dump process, consider if table vacuuming is necessary. Bloat on tables significantly increases pg_dump times. Execute the below query to identify table bloats:
+Before you start the pg_dump process, consider whether table vacuuming is necessary. Bloat on tables significantly increases pg_dump times. Execute the following query to identify table bloats:
``` select schemaname,relname,n_dead_tup,n_live_tup,round(n_dead_tup::float/n_live_tup::float*100) dead_pct,autovacuum_count,last_vacuum,last_autovacuum,last_autoanalyze,last_analyze from pg_stat_all_tables where n_live_tup >0; ```
-The **dead_pct** column in the above query gives percentage of dead tuples when compared to live tuples. A high dead_pct value for a table might point to the table not being properly vacuumed. For tuning autovacuum, review the article [Autovacuum Tuning](./how-to-autovacuum-tuning.md).
+The `dead_pct` column in this query is the percentage of dead tuples when compared to live tuples. A high `dead_pct` value for a table might indicate that the table isn't being properly vacuumed. For more information, see [Autovacuum tuning in Azure Database for PostgreSQL - Flexible Server](./how-to-autovacuum-tuning.md).
-As a one of case perform manual vacuum analyze of the tables that are identified.
+For each table that you identify, you can perform a manual vacuum analysis by running the following:
``` vacuum(analyze, verbose) <table_name> ```
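
For instance, if the bloat query flags a hypothetical table named `orders` in a database named `mydb`, you could run the manual vacuum from psql as shown in this sketch (all names are placeholders):

```bash
# Manually vacuum and analyze one table that the bloat query identified.
psql "host=mydemoserver.postgres.database.azure.com user=myadmin dbname=mydb port=5432 sslmode=require" \
  -c "VACUUM (ANALYZE, VERBOSE) orders;"
```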
-#### Use of PITR [Point In Time Recovery] server
+### Use a PITR server
-Pg dump can be carried out on an online or live server. It makes consistent backups even if the database is being used. It doesn't block other users from using the database. Consider the database size and other business or customer needs before the pg_dump process is started. Small databases might be a good candidate to carry out a pg dump on the production server. For large databases, you could create PITR (Point In Time Recovery) server from the production server and carry out the pg_dump process on the PITR server. Running pg_dump on a PITR would be a cold run process. The trade-off for the approach would be you wouldn't be concerned with extra CPU/memory/IO utilization that comes with the pg_dump process running on the actual production server. You can run pg_dump on a PITR server and drop the PITR server once the pg_dump process is completed.
+You can perform a pg_dump on an online or live server. It makes consistent backups even if the database is being used. It doesn't block other users from using the database. Consider the database size and other business or customer needs before you start the pg_dump process. Small databases might be good candidates for performing a pg_dump on the production server.
-##### Syntax
+For large databases, you could create a point-in-time recovery (PITR) server from the production server and perform the pg_dump process on the PITR server. Running pg_dump on a PITR server would be a cold run process. The trade-off for this approach is that you wouldn't be concerned with the extra CPU, memory, and I/O utilization that comes with a pg_dump process that runs on an actual production server. You can run pg_dump on a PITR server and then drop the PITR server after the pg_dump process is completed.
-Use the following syntax to perform a pg_dump:
+### Syntax for pg_dump
-`pg_dump -h <hostname> -U <username> -d <databasename> -Fd -j <Num of parallel jobs> -Z0 -f sampledb_dir_format`
+Use the following syntax for pg_dump:
+`pg_dump -h <hostname> -U <username> -d <databasename> -Fd -j <Num of parallel jobs> -Z0 -f sampledb_dir_format`
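
For example, a run against a hypothetical server `mydemoserver` and database `mydb`, using four parallel jobs and no compression, might look like the following; the names are placeholders rather than values from this article.

```bash
# Directory-format dump (-Fd) with 4 parallel jobs (-j 4) and no compression (-Z0),
# written to the sampledb_dir_format output directory.
pg_dump -h mydemoserver.postgres.database.azure.com -U myadmin -d mydb \
  -Fd -j 4 -Z0 -f sampledb_dir_format
```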
## Best practices for pg_restore
-pg_restore is a utility for restoring postgreSQL database from an archive created by pg_dump. Few of the command line options that can be used to reduce the overall restore time using pg_restore are listed below.
+You can use the pg_restore utility to restore a PostgreSQL database from an archive that's created by pg_dump. A few command line options for reducing the overall restore time are listed in the following sections.
-#### Parallel restore
+### Parallel restore
-Using multiple concurrent jobs, you can reduce the time to restore a large database on a multi vCore target server. The number of jobs can be equal to or less than the number of vCPUs allocated for the target server.
+By using multiple concurrent jobs, you can reduce the time it takes to restore a large database on a multi-vCore target server. The number of jobs can be equal to or less than the number of vCPUs that are allocated for the target server.
-#### Server parameters
+### Server parameters
-If you're restoring data to a new server or non-production server, you can optimize the following server parameters prior to running pg_restore.
+If you're restoring data to a new server or non-production server, you can optimize the following server parameters prior to running pg_restore:
`work_mem` = 32 MB `max_wal_size` = 65536 (64 GB)
If you're restoring data to a new server or non-production server, you can optim
`autovacuum` = off `wal_compression` = on
-Once the restore is completed, make sure all the above mentioned parameters are appropriately updated as per workload requirements.
+After the restore is completed, make sure that all these parameters are appropriately updated as per workload requirements.
> [!NOTE]
-> Please follow the above recommendations only if there is enough memory and disk space. In case you have small server with 2,4,8 vCore, please set the parameters accordingly.
+> Follow the preceding recommendations only if there's enough memory and disk space. If you have a small server with 2, 4, or 8 vCores, set the parameters accordingly.
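
One way to apply these values on the target flexible server is with the Azure CLI. The following sketch assumes a hypothetical resource group `myresourcegroup` and server `mydemoserver`, and that the `az postgres flexible-server parameter set` command is available in your CLI version; values use the server's default units (kB for `work_mem`, MB for `max_wal_size`).

```azurecli
# Tune restore-related parameters on the target server before running pg_restore.
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name work_mem --value 32768      # 32 MB, in kB
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name max_wal_size --value 65536  # 64 GB, in MB
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name autovacuum --value off
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name wal_compression --value on
```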
-#### Other considerations
+### Other considerations
-- Disable High Availability [HA] prior to running pg_restore.-- Analyze all tables migrated after restore option.
+- Disable high availability (HA) prior to running pg_restore.
+- Analyze all tables that are migrated after the restore is complete.
-##### Syntax
+### Syntax for pg_restore
Use the following syntax for pg_restore: `pg_restore -h <hostname> -U <username> -d <db name> -Fd -j <NUM> -C <dump directory>` --Fd - Directory format --j - Number of jobs --C - Begin the output with a command to create the database itself and reconnect to the created database
+* `-Fd`: The directory format.
+* `-j`: The number of jobs.
+* `-C`: Begin the output with a command to create the database itself and then reconnect to it.
Here's an example of how this syntax might appear:
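
For instance, with a hypothetical target server `mydemoserver` and the directory-format dump created earlier, the command could look like the following sketch (placeholder names, not the article's original sample):

```bash
# Restore from the directory-format dump with 4 parallel jobs.
# With -C, pg_restore connects to the maintenance database (-d postgres),
# creates the target database from the dump, and then reconnects to it.
pg_restore -h mydemoserver.postgres.database.azure.com -U myadmin -d postgres \
  -Fd -j 4 -C sampledb_dir_format
```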
## Virtual machine considerations
-Create a virtual machine in the same region, same availability zone (AZ) preferably where you have both your target and source servers or at least have the virtual machine closer to source server or a target server. Use of Azure Virtual Machines with high-performance local SSD is recommended. For more details about the SKUs review
-
-[Edv4 and Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)
+Create a virtual machine in the same region and availability zone, preferably where you have both your target and source servers. Or, at a minimum, create the virtual machine closer to the source server or the target server. We recommend that you use Azure Virtual Machines with a high-performance local SSD.
-[Ddv4 and Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md)
+For more information about the SKUs, see:
+* [Edv4 and Edsv4-series](../../virtual-machines/edv4-edsv4-series.md)
+* [Ddv4 and Ddsv4-series](../../virtual-machines/ddv4-ddsv4-series.md)
## Next steps -- Troubleshoot high CPU utilization [High CPU Utilization](./how-to-high-cpu-utilization.md).-- Troubleshoot high memory utilization [High Memory Utilization](./how-to-high-memory-utilization.md).-- Troubleshoot and tune Autovacuum [Autovacuum Tuning](./how-to-autovacuum-tuning.md).-- Troubleshoot high CPU utilization [High IOPS Utilization](./how-to-high-io-utilization.md).
+- [Troubleshoot high CPU utilization](./how-to-high-cpu-utilization.md)
+- [Troubleshoot high memory utilization](./how-to-high-memory-utilization.md)
+- [Troubleshoot and tune autovacuum](./how-to-autovacuum-tuning.md)
+- [Troubleshoot high IOPS utilization](./how-to-high-io-utilization.md)
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
+
+ Title: Manage read replicas - Azure portal - Azure Database for PostgreSQL - Flexible Server
+description: Learn how to manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal.
+++++ Last updated : 10/14/2022++
+# Create and manage read replicas in Azure Database for PostgreSQL - Flexible Server from the Azure portal Preview
++
+> [!NOTE]
+> Read replicas for PostgreSQL Flexible Server are currently in preview.
+
+In this article, you learn how to create and manage read replicas in Azure Database for PostgreSQL from the Azure portal. To learn more about read replicas, see the [overview](concepts-read-replicas.md).
+
+## Prerequisites
+
+An [Azure Database for PostgreSQL server](/azure/postgresql/flexible-server/quickstart-create-server-database-portal) to be the primary server.
+
+> [!NOTE]
+> When you deploy read replicas for persistently heavy, write-intensive primary workloads, the replication lag could continue to grow and might never catch up with the primary. This can also increase storage usage on the primary, because WAL files aren't deleted until they're received at the replica.
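
If you want to keep an eye on this while testing, you can query the `pg_stat_replication` view on the primary. This is a generic PostgreSQL sketch with placeholder connection details, not a command from this article:

```bash
# On the primary, show how many bytes of WAL each replica still has to replay.
psql "host=mydemoserver.postgres.database.azure.com user=myadmin dbname=postgres port=5432 sslmode=require" \
  -c "SELECT application_name, pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes FROM pg_stat_replication;"
```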
+
+## Create a read replica
+
+To create a read replica, follow these steps:
+
+1. Select an existing Azure Database for PostgreSQL server to use as the primary server.
+
+2. On the server sidebar, under **Settings**, select **Replication**.
+
+3. Select **Add Replica**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/add-replica.png" alt-text="Add a replica":::
+
+4. Fill out the **Basics** form with the following information.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/basics.png" alt-text="Enter the Basics information":::
+
+ > [!NOTE]
+ > To learn more about which regions you can create a replica in, visit the [read replica concepts article](concepts-read-replicas.md).
+
+5. Select **Review + create** to confirm the creation of the replica, or select **Next: Networking** if you want to add, delete, or modify any firewall rules.
+ :::image type="content" source="./media/how-to-read-replicas-portal/networking.png" alt-text="Modify firewall rules":::
+6. Leave the remaining defaults, and then select the **Review + create** button at the bottom of the page, or proceed to the next forms to add tags or change the data encryption method.
+7. Review the information in the final confirmation window. When you're ready, select **Create**.
+ :::image type="content" source="./media/how-to-read-replicas-portal/review.png" alt-text="Review the information in the final confirmation window":::
+
+After the read replica is created, it can be viewed from the **Replication** window.
++
+> [!IMPORTANT]
+> Review the [considerations section of the Read Replica overview](concepts-read-replicas.md#considerations).
+>
+> To avoid issues during promotion of replicas, always change the following server parameters on the replicas first, before applying them on the primary: `max_connections`, `max_prepared_transactions`, `max_locks_per_transaction`, `max_wal_senders`, `max_worker_processes`. A short CLI sketch follows this note.
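
For example, if you manage these settings with the Azure CLI, apply the change on the replica before the primary. The resource group, server names, and value below are placeholders, and the command shape assumes the current `az postgres flexible-server parameter set` syntax:

```azurecli
# 1. Raise max_connections on the replica first.
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver-replica --name max_connections --value 200

# 2. Only after the replica change succeeds, apply the same value on the primary.
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name max_connections --value 200
```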
+
+## Promote replicas
+
+You can promote replicas to become stand-alone servers serving read-write requests.
+
+> [!IMPORTANT]
+> Promotion of replicas cannot be undone. The read replica becomes a standalone server that supports both reads and writes. The standalone server can't be made into a replica again.
+
+To promote a replica from the Azure portal, follow these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. On the server menu, under **Settings**, select **Replication**.
+
+3. Select the replica server for which to stop replication, and then select **Promote**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/select-replica.png" alt-text="Select the replica":::
+
+4. Confirm the promote operation.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/confirm-promote.png" alt-text="Confirm to promote replica":::
+
+## Delete a primary server
+You can delete the primary server only after all of its read replicas have been deleted. Follow the instructions in the [Delete a replica](#delete-a-replica) section to delete the replicas, and then proceed with the steps below.
+
+To delete a server from the Azure portal, follow these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. Open the **Overview** page for the server and select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/delete-server.png" alt-text="On the server Overview page, select to delete the primary server":::
+
+3. Enter the name of the primary server to delete. Select **Delete** to confirm deletion of the primary server.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/confirm-delete.png" alt-text="Confirm to delete the primary server":::
+
+## Delete a replica
+
+You can delete a read replica in the same way that you delete a standalone Azure Database for PostgreSQL server.
+
+- In the Azure portal, open the **Overview** page for the read replica. Select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/delete-replica.png" alt-text="On the replica Overview page, select to delete the replica":::
+
+You can also delete the read replica from the **Replication** window by following these steps:
+
+1. In the Azure portal, select your primary Azure Database for PostgreSQL server.
+
+2. On the server menu, under **Settings**, select **Replication**.
+
+3. Select the read replica to delete, and then select **Delete**.
+
+ :::image type="content" source="./media/how-to-read-replicas-portal/delete-replica02.png" alt-text="Select the replica to delete":::
+
+4. Acknowledge the **Delete** operation.
+
+   :::image type="content" source="./media/how-to-read-replicas-portal/delete-confirm.png" alt-text="Confirm to delete the replica":::
+
+[//]: # (## Monitor a replica)
+
+[//]: # ()
+[//]: # (Two metrics are available to monitor read replicas.)
+
+[//]: # ()
+[//]: # (### Max Lag Across Replicas metric)
+
+[//]: # ()
+[//]: # (The **Max Lag Across Replicas** metric shows the lag in bytes between the primary server and the most-lagging replica.)
+
+[//]: # ()
+[//]: # (1. In the Azure portal, select the primary Azure Database for PostgreSQL server.)
+
+[//]: # ()
+[//]: # (2. Select **Metrics**. In the **Metrics** window, select **Max Lag Across Replicas**.)
+
+[//]: # ()
+[//]: # ( :::image type="content" source="./media/how-to-read-replicas-portal/select-max-lag.png" alt-text="Monitor the max lag across replicas":::)
+
+[//]: # ()
+[//]: # (3. For your **Aggregation**, select **Max**.)
+
+[//]: # ()
+[//]: # (### Replica Lag metric)
+
+[//]: # ()
+[//]: # (The **Replica Lag** metric shows the time since the last replayed transaction on a replica. If there are no transactions occurring on your master, the metric reflects this time lag.)
+
+[//]: # ()
+[//]: # (1. In the Azure portal, select the Azure Database for PostgreSQL read replica.)
+
+[//]: # ()
+[//]: # (2. Select **Metrics**. In the **Metrics** window, select **Replica Lag**.)
+
+[//]: # ()
+[//]: # ( :::image type="content" source="./media/how-to-read-replicas-portal/select-replica-lag.png" alt-text="Monitor the replica lag":::)
+
+[//]: # ()
+[//]: # (3. For your **Aggregation**, select **Max**.)
+
+## Next steps
+
+* Learn more about [read replicas in Azure Database for PostgreSQL](concepts-read-replicas.md).
+
+[//]: # (* Learn how to [create and manage read replicas in the Azure CLI and REST API]&#40;how-to-read-replicas-cli.md&#41;.)
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
First, set up some environment variables. In [Azure Cloud Shell](https://shell.a
```bash export AZ_RESOURCE_GROUP=database-workshop
-export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME>
+export AZ_DATABASE_NAME=demo
export AZ_LOCATION=<YOUR_AZURE_REGION> export AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME=<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME> export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
export CURRENT_USER_OBJECTID=$(az ad signed-in-user show --query id -o tsv)
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.
- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`. - `<YOUR_POSTGRESQL_AD_NON_ADMIN_USERNAME>`: The username of your PostgreSQL database server. Make sure the username is a valid user in your Azure AD tenant. - `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Spring Boot application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
Replace the placeholders with the following values, which are used throughout th
```bash export AZ_RESOURCE_GROUP=database-workshop
-export AZ_DATABASE_NAME=<YOUR_DATABASE_NAME>
+export AZ_DATABASE_SERVER_NAME=<YOUR_DATABASE_SERVER_NAME>
+export AZ_DATABASE_NAME=demo
export AZ_LOCATION=<YOUR_AZURE_REGION> export AZ_POSTGRESQL_ADMIN_USERNAME=demo export AZ_POSTGRESQL_ADMIN_PASSWORD=<YOUR_POSTGRESQL_ADMIN_PASSWORD>
export AZ_LOCAL_IP_ADDRESS=<YOUR_LOCAL_IP_ADDRESS>
Replace the placeholders with the following values, which are used throughout this article: -- `<YOUR_DATABASE_NAME>`: The name of your PostgreSQL server. It should be unique across Azure.
+- `<YOUR_DATABASE_SERVER_NAME>`: The name of your PostgreSQL server, which should be unique across Azure.
- `<YOUR_AZURE_REGION>`: The Azure region you'll use. You can use `eastus` by default, but we recommend that you configure a region closer to where you live. You can see the full list of available regions by entering `az account list-locations`. - `<YOUR_POSTGRESQL_ADMIN_PASSWORD>` and `<YOUR_POSTGRESQL_NON_ADMIN_PASSWORD>`: The password of your PostgreSQL database server. That password should have a minimum of eight characters. The characters should be from three of the following categories: English uppercase letters, English lowercase letters, numbers (0-9), and non-alphanumeric characters (!, $, #, %, and so on). - `<YOUR_LOCAL_IP_ADDRESS>`: The IP address of your local computer, from which you'll run your Java application. One convenient way to find it is to open [whatismyip.akamai.com](http://whatismyip.akamai.com/).
Then run following command to create the server:
```azurecli az postgres server create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
Now run the following command to set the Azure AD admin user:
```azurecli az postgres server ad-admin create \ --resource-group $AZ_RESOURCE_GROUP \
- --server-name $AZ_DATABASE_NAME \
+ --server-name $AZ_DATABASE_SERVER_NAME \
--display-name $CURRENT_USERNAME \ --object-id $CURRENT_USER_OBJECTID ```
This command creates a small PostgreSQL server and sets the Active Directory adm
```azurecli az postgres server create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME \
--location $AZ_LOCATION \ --sku-name B_Gen5_1 \ --storage-size 5120 \
Because you configured your local IP address at the beginning of this article, y
```azurecli az postgres server firewall-rule create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip \
+ --server $AZ_DATABASE_SERVER_NAME \
--start-ip-address $AZ_LOCAL_IP_ADDRESS \ --end-ip-address $AZ_LOCAL_IP_ADDRESS \ --output tsv
Then, use the following command to open the server's firewall to your WSL-based
```azurecli az postgres server firewall-rule create \ --resource-group $AZ_RESOURCE_GROUP \
- --name $AZ_DATABASE_NAME-database-allow-local-ip \
- --server $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_SERVER_NAME-database-allow-local-ip \
+ --server $AZ_DATABASE_SERVER_NAME \
--start-ip-address $AZ_WSL_IP_ADDRESS \ --end-ip-address $AZ_WSL_IP_ADDRESS \ --output tsv
az postgres server firewall-rule create \
### Configure a PostgreSQL database
-The PostgreSQL server that you created earlier is empty. Use the following command to create a new database called `demo`:
+The PostgreSQL server that you created earlier is empty. Use the following command to create a new database.
```azurecli az postgres db create \ --resource-group $AZ_RESOURCE_GROUP \
- --name demo \
- --server-name $AZ_DATABASE_NAME \
+ --name $AZ_DATABASE_NAME \
+ --server-name $AZ_DATABASE_SERVER_NAME \
--output tsv ``` ### Create a PostgreSQL non-admin user and grant permission
-Next, create a non-admin user and grant all permissions on the `demo` database to it.
+Next, create a non-admin user and grant it all permissions on the database.
> [!NOTE] > You can read more detailed information about creating PostgreSQL users in [Create users in Azure Database for PostgreSQL](./how-to-create-users.md).
Create a SQL script called *create_ad_user.sql* for creating a non-admin user. A
cat << EOF > create_ad_user.sql SET aad_validate_oids_in_tenant = off; CREATE ROLE "$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME" WITH LOGIN IN ROLE azure_ad_user;
-GRANT ALL PRIVILEGES ON DATABASE demo TO "$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME";
+GRANT ALL PRIVILEGES ON DATABASE $AZ_DATABASE_NAME TO "$AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME";
EOF ``` Then, use the following command to run the SQL script to create the Azure AD non-admin user: ```bash
-psql "host=$AZ_DATABASE_NAME.postgres.database.azure.com user=$CURRENT_USERNAME@$AZ_DATABASE_NAME dbname=demo port=5432 password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` sslmode=require" < create_ad_user.sql
+psql "host=$AZ_DATABASE_SERVER_NAME.postgres.database.azure.com user=$CURRENT_USERNAME@$AZ_DATABASE_SERVER_NAME dbname=$AZ_DATABASE_NAME port=5432 password=`az account get-access-token --resource-type oss-rdbms --output tsv --query accessToken` sslmode=require" < create_ad_user.sql
``` Now use the following command to remove the temporary SQL script file:
Create a SQL script called *create_user.sql* for creating a non-admin user. Add
```bash cat << EOF > create_user.sql CREATE ROLE "$AZ_POSTGRESQL_NON_ADMIN_USERNAME" WITH LOGIN PASSWORD '$AZ_POSTGRESQL_NON_ADMIN_PASSWORD';
-GRANT ALL PRIVILEGES ON DATABASE demo TO "$AZ_POSTGRESQL_NON_ADMIN_USERNAME";
+GRANT ALL PRIVILEGES ON DATABASE $AZ_DATABASE_NAME TO "$AZ_POSTGRESQL_NON_ADMIN_USERNAME";
EOF ``` Then, use the following command to run the SQL script to create the non-admin user: ```bash
-psql "host=$AZ_DATABASE_NAME.postgres.database.azure.com user=$AZ_POSTGRESQL_ADMIN_USERNAME@$AZ_DATABASE_NAME dbname=demo port=5432 password=$AZ_POSTGRESQL_ADMIN_PASSWORD sslmode=require" < create_user.sql
+psql "host=$AZ_DATABASE_SERVER_NAME.postgres.database.azure.com user=$AZ_POSTGRESQL_ADMIN_USERNAME@$AZ_DATABASE_SERVER_NAME dbname=$AZ_DATABASE_NAME port=5432 password=$AZ_POSTGRESQL_ADMIN_PASSWORD sslmode=require" < create_user.sql
``` Now use the following command to remove the temporary SQL script file:
Create a *src/main/resources/application.properties* file, then add the followin
```bash cat << EOF > src/main/resources/application.properties
-url=jdbc:postgresql://${AZ_DATABASE_NAME}.postgres.database.azure.com:5432/demo?sslmode=require&authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin
-user=${AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+url=jdbc:postgresql://${AZ_DATABASE_SERVER_NAME}.postgres.database.azure.com:5432/${AZ_DATABASE_NAME}?sslmode=require&authenticationPluginClassName=com.azure.identity.providers.postgresql.AzureIdentityPostgresqlAuthenticationPlugin
+user=${AZ_POSTGRESQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME}
EOF ```
EOF
```bash cat << EOF > src/main/resources/application.properties
-url=jdbc:postgresql://${AZ_DATABASE_NAME}.postgres.database.azure.com:5432/demo?sslmode=require
-user=${AZ_POSTGRESQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_NAME}
+url=jdbc:postgresql://${AZ_DATABASE_SERVER_NAME}.postgres.database.azure.com:5432/${AZ_DATABASE_NAME}?sslmode=require
+user=${AZ_POSTGRESQL_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME}
password=${AZ_POSTGRESQL_NON_ADMIN_PASSWORD} EOF ```
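
Optionally, before you run the Java application, you can confirm that the non-admin user can reach the database with the same connection details, for example by using psql. This sketch simply reuses the environment variables defined earlier:

```bash
# Optional connectivity check with the non-admin credentials (Single Server expects user@servername).
psql "host=$AZ_DATABASE_SERVER_NAME.postgres.database.azure.com user=$AZ_POSTGRESQL_NON_ADMIN_USERNAME@$AZ_DATABASE_SERVER_NAME dbname=$AZ_DATABASE_NAME port=5432 password=$AZ_POSTGRESQL_NON_ADMIN_PASSWORD sslmode=require" -c "SELECT 1;"
```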
private-5g-core Configure Service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-azure-portal.md
In this how-to guide, we'll configure a service using the Azure portal.
In this step, you'll configure basic settings for your new service using the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to configure a service. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource."::: 1. In the **Resource** menu, select **Services**.
private-5g-core Configure Sim Policy Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-sim-policy-azure-portal.md
## Configure the SIM policy
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to configure a SIM policy. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
Azure Private 5G Core Preview private mobile networks include one or more *sites
In this step, you'll create the mobile network site resource representing the physical enterprise location of your Azure Stack Edge device, which will host the packet core instance.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a site. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a mobile network resource.":::
private-5g-core Distributed Tracing Share Traces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/distributed-tracing-share-traces.md
In this step, you'll export the trace from the distributed tracing web GUI and s
You can now upload the trace to the container you created in [Create a storage account and blob container in Azure](#create-a-storage-account-and-blob-container-in-azure).
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to your Storage account resource. 1. In the **Resource** menu, select **Containers**.
private-5g-core Enable Log Analytics For Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/enable-log-analytics-for-private-5g-core.md
In this step, you'll configure and deploy a ConfigMap which will allow Container
In this step, you'll run a query in the Log Analytics workspace to confirm that you can retrieve logs for the packet core instance.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the Log Analytics workspace you used when creating the Azure Monitor extension in [Create an Azure Monitor extension](#create-an-azure-monitor-extension). 1. Select **Logs** from the resource menu. :::image type="content" source="media/log-analytics-workspace.png" alt-text="Screenshot of the Azure portal showing a Log Analytics workspace resource. The Logs option is highlighted.":::
private-5g-core How To Guide Deploy A Private Mobile Network Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/how-to-guide-deploy-a-private-mobile-network-azure-portal.md
Private mobile networks provide high performance, low latency, and secure connec
## Deploy your private mobile network In this step, you'll create the Mobile Network resource representing your private mobile network as a whole. You can also provision one or more SIMs, and / or create the default service and SIM policy.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. In the **Search** bar, type *mobile networks* and then select the **Mobile Networks** service from the results that appear. :::image type="content" source="media/mobile-networks-search.png" alt-text="Screenshot of the Azure portal showing a search for the Mobile Networks service.":::
private-5g-core Manage Existing Sims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-existing-sims.md
You can view your existing SIMs in the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
You can view your existing SIM groups in the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network to which you want to add a SIM group. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
Prepare the JSON file using the information you collected for your SIMs in [Coll
You'll now begin the SIM provisioning process through the Azure portal.
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network for which you want to provision SIMs. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
private-5g-core Tutorial Create Example Set Of Policy Control Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/tutorial-create-example-set-of-policy-control-configuration.md
In this step, we'll create a service that filters packets based on their protoco
To create the service:
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the Mobile Network resource representing your private mobile network. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal showing the results for a search for a Mobile Network resource.":::
private-5g-core Upgrade Packet Core Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md
Each Azure Private 5G Core Preview site contains a packet core instance, which i
To check which version your packet core instance is currently running, and whether there is a newer version available:
-1. Sign in to the Azure portal at [https://aka.ms/AP5GCNewPortal](https://aka.ms/AP5GCNewPortal).
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network. :::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource.":::
purview Register Scan Power Bi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-power-bi-tenant.md
For more information about Microsoft Purview network settings, see [Use private
To create and run a new scan, do the following:
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**.
+1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** and create an App Registration in the tenant. Provide a web URL in the **Redirect URI**. For more information about the redirect URI, see [this documentation from Azure Active Directory](/azure/active-directory/develop/reply-url).
:::image type="content" source="media/setup-power-bi-scan-catalog-portal/power-bi-scan-app-registration.png" alt-text="Screenshot how to create App in AAD.":::
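
If you prefer to script the app registration instead of using the portal, a roughly equivalent Azure CLI call might look like the following. The display name and redirect URI are placeholders, and the `--web-redirect-uris` option assumes a recent, Microsoft Graph-based version of the Azure CLI:

```azurecli
# Create an app registration with a web redirect URI (placeholder values).
az ad app create --display-name "purview-powerbi-scan" --web-redirect-uris "https://www.example.com"
```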
pytorch-enterprise Pte Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/pytorch-enterprise/pte-overview.md
- Title: What is PyTorch Enterprise on Azure?
-description: This article describes the PyTorch Enterprise program.
---- Previously updated : 07/06/2021--
-# What is PyTorch Enterprise?
-
-PyTorch is an increasingly popular open-source deep learning framework that accelerates AI innovations from research to production. At Microsoft, we use PyTorch to power products such as Bing and Azure Cognitive Services and we actively contribute to several PyTorch open-source projects, including PyTorch Profiler, ONNX Runtime, DeepSpeed, and more.
-
-The PyTorch Enterprise Support Program provides long-term support, prioritized troubleshooting, and integration with Azure solutions.
-
-* **Long-term support** (LTS): Microsoft will provide commercial support for the public PyTorch codebase. Each release will be supported for as long as it is current. In addition, one PyTorch release will be selected for LTS every year. Such releases will be supported for two years, enabling a stable production experience without frequent major upgrade investment.
-* **Prioritized troubleshooting**: Microsoft Enterprise support customers, including Premier and Unified, are automatically eligible for PyTorch Enterprise at no additional cost. The dedicated PyTorch team in Azure will prioritize, develop, and deliver hotfixes to customers as needed. These hotfixes will get tested and will be included in future PyTorch releases. In addition, Microsoft will extensively test PyTorch releases for performance regressions with continuous integration and realistic, demanding workloads from internal Microsoft applications.
-* **Azure integration**: The latest release of PyTorch will be integrated with Azure Machine Learning, along with other PyTorch add-ons, including ONNX Runtime for faster inferencing. Microsoft will continue to invest in the ONNX standard to improve PyTorch inference and training speed.
--
-## Get started with PyTorch Enterprise
-
-To get started with PyTorch Enterprise, join the Microsoft Premier or Unified support program. Contact your Microsoft account representative for additional information on different enterprise support options.
-
-If you would like to try out the PyTorch LTS version, you can do so at PyTorch.org.
-
-## Next steps
-* [PyTorch on Azure](https://azure.microsoft.com/develop/pytorch/)
-* [PyTorch Enterprise Support Program](https://aka.ms/PTELandingPage)
-* [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/)
pytorch-enterprise Support Boundaries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/pytorch-enterprise/support-boundaries.md
- Title: 'Support boundaries for PyTorch Enterprise on Azure'
-description: This article defines the support boundaries for PyTorch Enterprise.
---- Previously updated : 07/06/2021
-# Customer intent: As a data steward or catalog administrator, I need to onboard Azure data sources at scale before I register and scan them.
-
-# Support boundaries for PyTorch Enterprise
-
-This brief document describes the modules and components supported under Pytorch Enterprise.
--
-|Area|Supported versions|Notes|
-|-|-|-|
-|OS|Windows 10, Debian 9, Debian 10, Ubuntu 16.04.7 LTS, Ubuntu 18.04.5 LTS|We support the LTS versions of Debian and Ubuntu distributions, and only the x86_64 architecture.|
-|Python|3.6+||
-|PyTorch|1.8.1+||
-|CUDA Toolkit|10.2, 11.1||
-|ONNX Runtime|1.7+||
-|torchtext, torchvision, torch-tb-profiler, torchaudio| - |For libraries that haven't a 1.0 release, we support the specific versions that are compatible with the corresponding supported PyTorch version. For example, see these tables: [TorchVision](https://github.com/pytorch/vision#installation), [TorchText](https://github.com/pytorch/text#installation), [TorchAudio](https://github.com/pytorch/audio/#dependencies)|
-|torchserve|0.4.0+||
role-based-access-control Conditions Custom Security Attributes Example https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes-example.md
Create one or more role assignments that use a condition at a higher scope to ma
- [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md) - [What are custom security attributes in Azure AD?](../active-directory/fundamentals/custom-security-attributes-overview.md)-- [Allow read access to blobs based on tags and custom security attributes](conditions-custom-security-attributes.md)
+- [Allow read access to blobs based on tags and custom security attributes (Preview)](conditions-custom-security-attributes.md)
role-based-access-control Conditions Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes.md
You can also use Azure PowerShell to add role assignment conditions. The followi
1. Set the `Condition` property of the role assignment object. Be sure to use your attribute set name. ```powershell
- $groupRoleAssignment.Condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List' })) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>]))"
+ $groupRoleAssignment.Condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<`$key_case_sensitive`$>]))"
``` 1. Set the `ConditionVersion` property of the role assignment object.
You can also use Azure CLI to add role assignments conditions. The following com
1. Update the `condition` property. Be sure to use your attribute set name. ```azurecli
- "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List' })) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]))",
+ "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]))",
``` 1. Update the `conditionVersion` property.
role-based-access-control Conditions Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-faq.md
Title: FAQ for Azure role assignment conditions (preview)
-description: Frequently asked questions for Azure role assignment conditions (preview)
+ Title: FAQ for Azure role assignment conditions - Azure ABAC
+description: Frequently asked questions for Azure role assignment conditions
Previously updated : 11/16/2021 Last updated : 10/24/2022 #Customer intent:
-# FAQ for Azure role assignment conditions (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# FAQ for Azure role assignment conditions
## Frequently asked questions
You must write the storage container name, blob path, tag name, or values in the
If you add three or more expressions for a targeted action, you must define the logical grouping of those expressions in the code editor, Azure PowerShell, or Azure CLI. A logical grouping of `a AND b OR c` can be either `(a AND b) OR c` or `a AND (b OR c)`.
-**Are conditions supported via Privileged Identity Management (PIM) for Azure resources in preview?**
+**Are conditions supported via Azure AD Privileged Identity Management (Azure AD PIM) for Azure resources?**
-Yes. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+Yes, for specific roles. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
**Are conditions supported for classic administrators?**
No, conditions in role assignments do not have an explicit deny effect. Conditio
## Next steps -- [Azure role assignment condition format and syntax (preview)](conditions-format.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Azure role assignment condition format and syntax](conditions-format.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Conditions Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-format.md
Title: Azure role assignment condition format and syntax (preview) - Azure RBAC
+ Title: Azure role assignment condition format and syntax - Azure ABAC
description: Get an overview of the format and syntax of Azure role assignment conditions for Azure attribute-based access control (Azure ABAC).
Previously updated : 09/28/2022 Last updated : 10/24/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
-# Azure role assignment condition format and syntax (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Azure role assignment condition format and syntax
A condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes the format and syntax of role assignment conditions.
Currently, conditions can be added to built-in or custom role assignments that h
- [Storage Queue Data Message Sender](built-in-roles.md#storage-queue-data-message-sender) - [Storage Queue Data Reader](built-in-roles.md#storage-queue-data-reader)
-For a list of the storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md) and [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md).
+For a list of the storage actions you can use in conditions, see [Actions and attributes for Azure role assignment conditions for Azure Blob Storage](../storage/blobs/storage-auth-abac-attributes.md) and [Actions and attributes for Azure role assignment conditions for Azure queues](../storage/queues/queues-auth-abac-attributes.md).
## Attributes
Depending on the selected actions, the attribute might be found in different pla
> | | | | > | Resource | Indicates that the attribute is on the resource, such as a container name. | `@Resource` | > | Request | Indicates that the attribute is part of the action request, such as setting the blob index tag. | `@Request` |
-> | Principal | Indicates that the attribute is an Azure AD custom security attribute on the principal, such as a user, enterprise application (service principal), or managed identity. | `@Principal` |
+> | Principal | Indicates that the attribute is an Azure AD custom security attribute on the principal, such as a user, enterprise application (service principal), or managed identity. Principal attributes are currently in preview. | `@Principal` |
#### Resource and request attributes For a list of the blob storage or queue storage attributes you can use in conditions, see: -- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](../storage/blobs/storage-auth-abac-attributes.md)-- [Actions and attributes for Azure role assignment conditions for Azure queues (preview)](../storage/queues/queues-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage](../storage/blobs/storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions for Azure queues](../storage/queues/queues-auth-abac-attributes.md)
#### Principal attributes
+> [!IMPORTANT]
+> Principal attributes are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ To use principal attributes, you must have **all** of the following: - Azure AD Premium P1 or P2 license
To use principal attributes, you must have **all** of the following:
For more information about custom security attributes, see: -- [Allow read access to blobs based on tags and custom security attributes](conditions-custom-security-attributes.md)-- [Principal does not appear in Attribute source](conditions-troubleshoot.md#symptomprincipal-does-not-appear-in-attribute-source)-- [Add or deactivate custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-add.md)
+- [Allow read access to blobs based on tags and custom security attributes (Preview)](conditions-custom-security-attributes.md)
+- [Principal does not appear in Attribute source (Preview)](conditions-troubleshoot.md#symptomprincipal-does-not-appear-in-attribute-source)
+- [Add or deactivate custom security attributes in Azure AD (Preview)](../active-directory/fundamentals/custom-security-attributes-add.md)
## Function operators
a AND (b OR c)
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Add or edit Azure role assignment conditions using the Azure portal (preview)](conditions-role-assignments-portal.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Add or edit Azure role assignment conditions using the Azure portal](conditions-role-assignments-portal.md)
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Title: What is Azure attribute-based access control (Azure ABAC)? (preview)
+ Title: What is Azure attribute-based access control (Azure ABAC)?
description: Get an overview of Azure attribute-based access control (Azure ABAC). Use role assignments with conditions to control access to Azure resources.
Previously updated : 05/24/2022 Last updated : 10/24/2022 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
-# What is Azure attribute-based access control (Azure ABAC)? (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# What is Azure attribute-based access control (Azure ABAC)?
Attribute-based access control (ABAC) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure.
Here is what the condition looks like in code:
For more information about the format of conditions, see [Azure role assignment condition format and syntax](conditions-format.md).
-## Features of conditions
+## Status of condition features
-Here's a list of the primary features of conditions:
+Some features of conditions are still in preview. The following table lists the status of condition features:
| Feature | Status | Date | | | | |
-| Use the following [attributes](../storage/blobs/storage-auth-abac-attributes.md#azure-blob-storage-attributes) in a condition: Account name, Blob prefix, Encryption scope name, Is Current Version, Is hierarchical namespace enabled, Snapshot, Version ID | Preview | May 2022 |
+| Add conditions using the [condition editor in the Azure portal](conditions-role-assignments-portal.md) | GA | October 2022 |
+| Add conditions using [Azure PowerShell](conditions-role-assignments-powershell.md), [Azure CLI](conditions-role-assignments-cli.md), or [REST API](conditions-role-assignments-rest.md) | GA | October 2022 |
+| Use [resource and request attributes](conditions-format.md#attributes) for specific combinations of Azure storage resources, access attribute types, and storage account performance tiers. For more information, see [Status of condition features in Azure Storage](../storage/common/authorize-data-access.md#status-of-condition-features-in-azure-storage). | GA | October 2022 |
| Use [custom security attributes on a principal in a condition](conditions-format.md#principal-attributes) | Preview | November 2021 |
-| Add conditions to blob storage data role assignments | Preview | May 2021 |
-| Use attributes on a resource in a condition | Preview | May 2021 |
-| Use attributes that are part of the action request in a condition | Preview | May 2021 |
+| Use resource and request attributes in a condition | Preview | May 2021 |
-## Conditions and Privileged Identity Management (PIM)
+## Conditions and Azure AD PIM
-You can also add conditions to eligible role assignments using Privileged Identity Management (PIM). With PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using conditions in PIM enables you not only to limit a user's access to a resource using fine-grained conditions, but also to use PIM to secure it with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+You can also add conditions to eligible role assignments using Azure AD Privileged Identity Management (Azure AD PIM) for Azure resources. With Azure AD PIM, your end users must activate an eligible role assignment to get permission to perform certain actions. Using conditions in Azure AD PIM enables you not only to limit a user's access to a resource using fine-grained conditions, but also to use Azure AD PIM to secure it with a time-bound setting, approval workflow, audit trail, and so on. For more information, see [Assign Azure resource roles in Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
## Terminology
Here are the known issues with conditions:
## Next steps -- [FAQ for Azure role assignment conditions (preview)](conditions-faq.md)-- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
+- [FAQ for Azure role assignment conditions](conditions-faq.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
Title: Prerequisites for Azure role assignment conditions (preview)
-description: Prerequisites for Azure role assignment conditions (preview).
+ Title: Prerequisites for Azure role assignment conditions - Azure ABAC
+description: Prerequisites for Azure role assignment conditions.
Previously updated : 10/19/2022 Last updated : 10/24/2022 #Customer intent:
-# Prerequisites for Azure role assignment conditions (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Prerequisites for Azure role assignment conditions
To add or edit Azure role assignment conditions, you must have the following prerequisites.
Just like role assignments, to add or update conditions, you must be signed in t
## Principal attributes
+> [!IMPORTANT]
+> Principal attributes are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ To use principal attributes ([custom security attributes in Azure AD](../active-directory/fundamentals/custom-security-attributes-overview.md)), you must have **all** of the following: - Azure AD Premium P1 or P2 license
For more information about custom security attributes, see:
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](../storage/blobs/storage-auth-abac-portal.md)
role-based-access-control Conditions Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-cli.md
Title: Add or edit Azure role assignment conditions using Azure CLI (preview) - Azure RBAC
+ Title: Add or edit Azure role assignment conditions using Azure CLI - Azure ABAC
description: Learn how to add, edit, list, or delete attribute-based access control (ABAC) conditions in Azure role assignments using Azure CLI and Azure role-based access control (Azure RBAC).
Previously updated : 05/07/2021 Last updated : 10/24/2022
-# Add or edit Azure role assignment conditions using Azure CLI (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Add or edit Azure role assignment conditions using Azure CLI
An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using Azure CLI.
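For example, a minimal sketch of adding a condition at assignment time with `az role assignment create` might look like the following; the scope, object ID, and the Project=Cascade tag condition are illustrative placeholders rather than values from this article.

```azurecli
# Illustrative only: grant Storage Blob Data Reader, limited to blobs tagged Project=Cascade.
# In interactive Bash, run `set +H` first so the '!' in the condition isn't history-expanded.
condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<\$key_case_sensitive\$>] StringEqualsIgnoreCase 'Cascade'))"

az role assignment create \
    --role "Storage Blob Data Reader" \
    --assignee-object-id "<principal-object-id>" \
    --assignee-principal-type User \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
    --description "Read access to blobs with the tag Project=Cascade" \
    --condition "$condition" \
    --condition-version "2.0"
```

Editing or removing the condition later follows the same pattern: the condition is just another property of the role assignment, so the assignment can be recreated or updated with a different `--condition` value, or with none at all.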
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)](../storage/blobs/storage-auth-abac-cli.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI](../storage/blobs/storage-auth-abac-cli.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-portal.md
Title: Add or edit Azure role assignment conditions using the Azure portal (preview) - Azure RBAC
+ Title: Add or edit Azure role assignment conditions using the Azure portal - Azure ABAC
description: Learn how to add, edit, view, or delete attribute-based access control (ABAC) conditions in Azure role assignments using the Azure portal and Azure role-based access control (Azure RBAC).
Previously updated : 09/28/2022 Last updated : 10/24/2022
-# Add or edit Azure role assignment conditions using the Azure portal (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Add or edit Azure role assignment conditions using the Azure portal
An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, view, or delete conditions for your role assignments using the Azure portal.
There are two ways that you can add a condition. You can add a condition when yo
If you don't see the Conditions (optional) tab, be sure you selected a role that supports conditions.
- ![Screenshot of Add role assignment page with Add condition tab for preview experience.](./media/shared/condition.png)
+ ![Screenshot of Add role assignment page with Add condition tab.](./media/shared/condition.png)
The Add role assignment condition page appears.
Once you have the Add role assignment condition page open, you can review the ba
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](../storage/blobs/storage-auth-abac-portal.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-powershell.md
Title: Add or edit Azure role assignment conditions using Azure PowerShell (preview) - Azure RBAC
+ Title: Add or edit Azure role assignment conditions using Azure PowerShell - Azure ABAC
description: Learn how to add, edit, list, or delete attribute-based access control (ABAC) conditions in Azure role assignments using Azure PowerShell and Azure role-based access control (Azure RBAC).
Previously updated : 11/16/2021 Last updated : 10/24/2022
-# Add or edit Azure role assignment conditions using Azure PowerShell (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Add or edit Azure role assignment conditions using Azure PowerShell
An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using Azure PowerShell.
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)](../storage/blobs/storage-auth-abac-powershell.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell](../storage/blobs/storage-auth-abac-powershell.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-rest.md
Title: Add or edit Azure role assignment conditions using the REST API (preview) - Azure RBAC
+ Title: Add or edit Azure role assignment conditions using the REST API - Azure ABAC
description: Learn how to add, edit, list, or delete attribute-based access control (ABAC) conditions in Azure role assignments using the REST API and Azure role-based access control (Azure RBAC).
Previously updated : 10/19/2022 Last updated : 10/24/2022
-# Add or edit Azure role assignment conditions using the REST API (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Add or edit Azure role assignment conditions using the REST API
An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add, edit, list, or delete conditions for your role assignments using the REST API.
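A rough sketch of the underlying call is shown below, issued through `az rest` to keep these examples in one tool; the subscription, resource group, principal, and condition values are placeholders, and the role definition GUID shown is the built-in Storage Blob Data Reader role.

```azurecli
# Hedged sketch: create a role assignment with a condition via the Role Assignments REST API.
# All IDs are placeholders; uuidgen supplies the new role assignment name (a GUID).
cat > body.json <<'EOF'
{
  "properties": {
    "roleDefinitionId": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
    "principalId": "<principal-object-id>",
    "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase 'Cascade'))",
    "conditionVersion": "2.0"
  }
}
EOF

az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Authorization/roleAssignments/$(uuidgen)?api-version=2022-04-01" \
    --body "$(cat body.json)"
```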
Alternatively, if you want to delete both the role assignment and the condition,
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](../storage/blobs/storage-auth-abac-portal.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](../storage/blobs/storage-auth-abac-portal.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
role-based-access-control Conditions Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-role-assignments-template.md
Title: Add Azure role assignment conditions using Azure Resource Manager templates (Preview) - Azure ABAC
+ Title: Add Azure role assignment conditions using Azure Resource Manager templates - Azure ABAC
description: Learn how to add attribute-based access control (ABAC) conditions in Azure role assignments using Azure Resource Manager templates and Azure role-based access control (Azure RBAC).
Previously updated : 10/19/2022 Last updated : 10/24/2022
-# Add Azure role assignment conditions using Azure Resource Manager templates (Preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Add Azure role assignment conditions using Azure Resource Manager templates
An [Azure role assignment condition](conditions-overview.md) is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. For example, you can add a condition that requires an object to have a specific tag to read the object. This article describes how to add conditions for your role assignments using Azure Resource Manager templates.
az deployment group create --resource-group example-group --template-file rbac-t
## Next steps -- [Example Azure role assignment conditions for Blob Storage (preview)](../storage/blobs/storage-auth-abac-examples.md)-- [Troubleshoot Azure role assignment conditions (preview)](conditions-troubleshoot.md)
+- [Example Azure role assignment conditions for Blob Storage](../storage/blobs/storage-auth-abac-examples.md)
+- [Troubleshoot Azure role assignment conditions](conditions-troubleshoot.md)
- [Assign Azure roles using Azure Resource Manager templates](role-assignments-template.md)
role-based-access-control Conditions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-troubleshoot.md
Title: Troubleshoot Azure role assignment conditions (preview)
-description: Troubleshoot Azure role assignment conditions (preview)
+ Title: Troubleshoot Azure role assignment conditions - Azure ABAC
+description: Troubleshoot Azure role assignment conditions
Previously updated : 09/28/2022 Last updated : 10/24/2022 #Customer intent:
-# Troubleshoot Azure role assignment conditions (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Troubleshoot Azure role assignment conditions
## General issues
Disable history expansion with the command `set +H`. To re-enable history expans
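To make the `set +H` tip concrete, here's a small illustrative Bash snippet; the condition fragment is a placeholder, not one from this article.

```bash
# The '!' inside a condition string can trigger Bash history expansion in interactive
# shells and mangle the command. Turn expansion off first, then re-enable it afterwards.
set +H    # disable history expansion

condition="((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})))"
echo "$condition"

set -H    # re-enable history expansion
```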
## Next steps -- [Azure role assignment condition format and syntax (preview)](conditions-format.md)-- [FAQ for Azure role assignment conditions (preview)](conditions-faq.md)
+- [Azure role assignment condition format and syntax](conditions-format.md)
+- [FAQ for Azure role assignment conditions](conditions-faq.md)
- [Troubleshoot custom security attributes in Azure AD (Preview)](../active-directory/fundamentals/custom-security-attributes-troubleshoot.md)
sentinel Bookmarks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/bookmarks.md
Last updated 11/09/2021
# Keep track of data during hunting with Microsoft Sentinel -- Threat hunting typically requires reviewing mountains of log data looking for evidence of malicious behavior. During this process, investigators find events that they want to remember, revisit, and analyze as part of validating potential hypotheses and understanding the full story of a compromise. Hunting bookmarks in Microsoft Sentinel help you do this, by preserving the queries you ran in **Microsoft Sentinel - Logs**, along with the query results that you deem relevant. You can also record your contextual observations and reference your findings by adding notes and tags. Bookmarked data is visible to you and your teammates for easy collaboration. Now you can identify and address gaps in MITRE ATT&CK technique coverage, across all hunting queries, by mapping your custom hunting queries to MITRE ATT&CK techniques.
-> [!IMPORTANT]
->
-> The mapping of MITRE ATT&CK techniques to bookmarks is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- You can also investigate more types of entities while hunting with bookmarks, by mapping the full set of entity types and identifiers supported by Microsoft Sentinel Analytics in your custom queries. This enables you to use bookmarks to explore the entities returned in hunting query results using [entity pages](entities.md#entity-pages), [incidents](investigate-cases.md) and the [investigation graph](investigate-cases.md#use-the-investigation-graph-to-deep-dive). If a bookmark captures results from a hunting query, it automatically inherits the query's MITRE ATT&CK technique and entity mappings.
-> [!IMPORTANT]
->
-> The mapping of an expanded set of entity types and identifiers to bookmarks is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
- If you find something that urgently needs to be addressed while hunting in your logs, you can easily create a bookmark and either promote it to an incident or add it to an existing incident. For more information about incidents, see [Investigate incidents with Microsoft Sentinel](investigate-cases.md). If you found something worth bookmarking, but that isn't immediately urgent, you can create a bookmark and then revisit your bookmarked data at any time on the **Bookmarks** tab of the **Hunting** pane. You can use filtering and search options to quickly find specific data for your current investigation.
Alternatively, you can view your bookmarked data directly in the **HuntingBookma
Viewing bookmarks from the table enables you to filter, summarize, and join bookmarked data with other data sources, making it easy to look for corroborating evidence. + ## Add a bookmark 1. In the Azure portal, navigate to **Microsoft Sentinel** > **Threat management** > **Hunting** to run queries for suspicious and anomalous behavior.
Viewing bookmarks from the table enables you to filter, summarize, and join book
1. On the right, in the **Add bookmark** pane, optionally, update the bookmark name, add tags, and notes to help you identify what was interesting about the item.
-1. **(Preview)** Bookmarks can be optionally mapped to MITRE ATT&CK techniques or sub-techniques. MITRE ATT&CK mappings are inherited from mapped values in hunting queries, but you can also create them manually. Select the MITRE ATT&CK tactic associated with the desired technique from the drop-down menu in the **Tactics & Techniques (Preview)** section of the **Add bookmark** pane. The menu will expand to show all the MITRE ATT&CK techniques, and you can select multiple techniques and sub-techniques in this menu.
+1. Bookmarks can be optionally mapped to MITRE ATT&CK techniques or sub-techniques. MITRE ATT&CK mappings are inherited from mapped values in hunting queries, but you can also create them manually. Select the MITRE ATT&CK tactic associated with the desired technique from the drop-down menu in the **Tactics & Techniques** section of the **Add bookmark** pane. The menu will expand to show all the MITRE ATT&CK techniques, and you can select multiple techniques and sub-techniques in this menu.
:::image type="content" source="media/bookmarks/mitre-attack-mapping.png" alt-text="Screenshot of how to map Mitre Attack tactics and techniques to bookmarks.":::
-1. **(Preview)** Now an expanded set of entities can be extracted from bookmarked query results for further investigation. In the **Entity mapping (Preview)** section, use the drop-downs to select [entity types and identifiers](entities-reference.md). Then map the column in the query results containing the corresponding identifier. For example:
+1. You can extract an expanded set of entities from bookmarked query results for further investigation. In the **Entity mapping** section, use the drop-downs to select [entity types and identifiers](entities-reference.md). Then map the column in the query results that contains the corresponding identifier. For example:

:::image type="content" source="media/bookmarks/map-entity-types-bookmark.png" alt-text="Screenshot to map entity types for hunting bookmarks.":::
- To view the bookmark in the investigation graph, you must map at least one entity. Entity mappings to account, host, IP, and URL entity types created before this preview are still supported, preserving backwards compatibility.
+ To view the bookmark in the investigation graph, you must map at least one entity. Entity mappings to account, host, IP, and URL entity types you've previously created are supported, preserving backwards compatibility.
1. Click **Save** to commit your changes and add the bookmark. All bookmarked data is shared with other analysts, and is a first step toward a collaborative investigation experience. > [!NOTE]
-> The log query results support bookmarks whenever this pane is opened from Microsoft Sentinel. For example, you select **General** > **Logs** from the navigation bar, select event links in the investigations graph, or select an alert ID from the full details of an incident (currently in preview). You can't create bookmarks when the **Logs** pane is opened from other locations, such as directly from Azure Monitor.
+> The log query results support bookmarks whenever this pane is opened from Microsoft Sentinel. For example, you select **General** > **Logs** from the navigation bar, select event links in the investigations graph, or select an alert ID from the full details of an incident. You can't create bookmarks when the **Logs** pane is opened from other locations, such as directly from Azure Monitor.
## View and update bookmarks
sentinel Hunting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/hunting.md
description: Use Microsoft Sentinel's built-in hunting queries to guide you into
Previously updated : 11/09/2021 Last updated : 09/28/2022 # Hunt for threats with Microsoft Sentinel -
-> [!IMPORTANT]
->
-> The cross-resource query experience and upgrades to **custom queries and bookmarks** (see marked items below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-- As security analysts and investigators, you want to be proactive about looking for security threats, but your various systems and security appliances generate mountains of data that can be difficult to parse and filter into meaningful events. Microsoft Sentinel has powerful hunting search and query tools to hunt for security threats across your organization's data sources. To help security analysts look proactively for new anomalies that weren't detected by your security apps or even by your scheduled analytics rules, Microsoft Sentinel's built-in hunting queries guide you into asking the right questions to find issues in the data you already have on your network. For example, one built-in query provides data about the most uncommon processes running on your infrastructure. You wouldn't want an alert about each time they are run - they could be entirely innocent - but you might want to take a look at the query on occasion to see if there's anything unusual. + ## Use built-in queries The [hunting dashboard](#use-the-hunting-dashboard) provides ready-made query examples designed to get you started and get you familiar with the tables and the query language. Queries run on data stored in log tables, such as for process creation, DNS events, or other event types.
Use queries before, during, and after a compromise to take the following actions
View the query's results, and select **New alert rule** > **Create Microsoft Sentinel alert**. Use the **Analytics rule wizard** to create a new rule based on your query. For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
-> [!TIP]
->
-> - Now in public preview, you can also create hunting and livestream queries over data stored in Azure Data Explorer. For more information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation.
->
-> - Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries), to find additional queries and data sources.
+You can also create hunting and livestream queries over data stored in Azure Data Explorer. For more information, see details of [constructing cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md) in the Azure Monitor documentation.
+
+Use community resources, such as the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Hunting%20Queries), to find additional queries and data sources.
## Use the hunting dashboard
The table shown lists all the queries written by Microsoft's team of security an
:::image type="content" source="media/hunting/hunting-start.png" alt-text="Microsoft Sentinel starts hunting" lightbox="media/hunting/hunting-start.png":::
-Use the hunting dashboard to identify where to start hunting, by looking at result count, spikes, or the change in result count over a 24-hour period. Sort and filter by favorites, data source, MITRE ATT&CK tactic or technique, results, results delta, or results delta percentage. View queries that still need data sources connected**, and get recommendations on how to enable these queries.
+Use the hunting dashboard to identify where to start hunting, by looking at result count, spikes, or the change in result count over a 24-hour period. Sort and filter by favorites, data source, MITRE ATT&CK tactic or technique, results, results delta, or results delta percentage. View queries that still need data sources connected, and get recommendations on how to enable these queries.
The following table describes detailed actions available from the hunting dashboard: | Action | Description | | | |
-| **See how queries apply to your environment** | Select the **Run all queries (Preview)** button, or select a subset of queries using the checkboxes to the left of each row and select the **Run selected queries (Preview)** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
+| **See how queries apply to your environment** | Select the **Run all queries** button, or select a subset of queries using the check boxes to the left of each row and select the **Run selected queries** button. <br><br>Running your queries can take anywhere from a few seconds to many minutes, depending on how many queries are selected, the time range, and the amount of data that is being queried. |
| **View the queries that returned results** | After your queries are done running, view the queries that returned results using the **Results** filter: <br>- Sort to see which queries had the most or fewest results. <br>- View the queries that are not at all active in your environment by selecting *N/A* in the **Results** filter. <br>- Hover over the info icon (**i**) next to the *N/A* to see which data sources are required to make this query active. | | **Identify spikes in your data** | Identify spikes in the data by sorting or filtering on **Results delta** or **Results delta percentage**. <br><br>This compares the results of the last 24 hours against the results of the previous 24-48 hours, highlighting any large differences or relative difference in volume. | | **View queries mapped to the MITRE ATT&CK tactic** | The **MITRE ATT&CK tactic bar**, at the top of the table, lists how many queries are mapped to each MITRE ATT&CK tactic. The tactic bar gets dynamically updated based on the current set of filters applied. <br><br>This enables you to see which MITRE ATT&CK tactics show up when you filter by a given result count, a high result delta, *N/A* results, or any other set of filters. |
Create or modify a query and save it as your own query or share it with users wh
1. Fill in all the blank fields and select **Create**.
- 1. **(Preview)** Create entity mappings by selecting entity types, identifiers and columns.
+ 1. Create entity mappings by selecting entity types, identifiers and columns.
:::image type="content" source="media/hunting/map-entity-types-hunting.png" alt-text="Screenshot for mapping entity types in hunting queries.":::
- 1. **(Preview)** Map MITRE ATT&CK techniques to your hunting queries by selecting the tactic, technique and sub-technique (if applicable).
+ 1. Map MITRE ATT&CK techniques to your hunting queries by selecting the tactic, technique and sub-technique (if applicable).
:::image type="content" source="./media/hunting/mitre-attack-mapping-hunting.png" alt-text="New query" lightbox="./media/hunting/new-query.png":::
In the example above, start with the table name SecurityEvent and add piped elem
1. Select the green triangle and run the query. You can test the query and run it to look for anomalous behavior.
-> [!IMPORTANT]
->
-> We recommend that your query uses an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) and not a built-in table. This ensures that the query will support any current or future relevant data source rather than a single data source.
->
+We recommend that your query uses an [Advanced Security Information Model (ASIM) parser](normalization-about-parsers.md) and not a built-in table. This ensures that the query will support any current or future relevant data source rather than a single data source.
+ ## Create bookmarks
-During the hunting and investigation process, you may come across query results that may look unusual or suspicious. Bookmark these items to refer back to them in the future, such as when creating or enriching an incident for investigation.
+During the hunting and investigation process, you may come across query results that look unusual or suspicious. Bookmark these items to refer back to them in the future, such as when creating or enriching an incident for investigation. Notable events, such as potential root causes or indicators of compromise, should be raised as bookmarks. If a key event you've bookmarked is severe enough to warrant an investigation, escalate it to an incident.
- In your results, mark the checkboxes for any rows you want to preserve, and select **Add bookmark**. This creates for a record for each marked row - a bookmark - that contains the row results as well as the query that created the results. You can add your own tags and notes to each bookmark.
- - **(Preview)** As with custom queries, you can enrich your bookmarks with entity mappings to extract multiple entity types and identifiers, and MITRE ATT&CK mappings to associate particular tactics and techniques.
- - **(Preview)** Bookmarks will default to use the same entity and MITRE ATT&CK technique mappings as the hunting query that produced the bookmarked results.
+ - As with scheduled analytics rules, you can enrich your bookmarks with entity mappings to extract multiple entity types and identifiers, and MITRE ATT&CK mappings to associate particular tactics and techniques.
+ - By default, bookmarks use the same entity and MITRE ATT&CK technique mappings as the hunting query that produced the bookmarked results.
- View all the bookmarked findings by clicking on the **Bookmarks** tab in the main **Hunting** page. Add tags to bookmarks to classify them for filtering. For example, if you're investigating an attack campaign, you can create a tag for the campaign, apply the tag to any relevant bookmarks, and then filter all the bookmarks based on the campaign.
During the hunting and investigation process, you may come across query results
You can also create an incident from one or more bookmarks, or add one or more bookmarks to an existing incident. Select a checkbox to the left of any bookmarks you want to use, and then select **Incident actions** > **Create new incident** or **Add to existing incident**. Triage and investigate the incident like any other.
-> [!TIP]
-> Bookmarks stand to represent key events that are noteworthy and should be escalated to incidents if they are severe enough to warrant an investigation. Events such as potential root causes, indicators of compromise, or other notable events should be raised as a bookmark.
->
- For more information, see [Use bookmarks in hunting](bookmarks.md). ## Use notebooks to power investigations
The following operators are especially helpful in Microsoft Sentinel hunting que
- **find** - Find rows that match a predicate across a set of tables. -- **adx() (preview)** - This function performs cross-resource queries of Azure Data Explorer data sources from the Microsoft Sentinel hunting experience and Log Analytics. For more information, see [Cross-resource query Azure Data Explorer by using Azure Monitor](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
+- **adx()** - This function performs cross-resource queries of Azure Data Explorer data sources from the Microsoft Sentinel hunting experience and Log Analytics. For more information, see [Cross-resource query Azure Data Explorer by using Azure Monitor](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md).
## Next steps
sentinel Livestream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/livestream.md
description: This article describes how to use hunting Livestream in Microsoft S
Previously updated : 11/09/2021 Last updated : 09/29/2022 # Use hunting livestream in Microsoft Sentinel to detect threats -
-> [!IMPORTANT]
->
-> - The cross-resource query experience (see marked items below) are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
- Use hunting livestream to create interactive sessions that let you test newly created queries as events occur, get notifications from the sessions when a match is found, and launch investigations if necessary. You can quickly create a livestream session using any Log Analytics query. - **Test newly created queries as events occur**
You can create a livestream session from an existing hunting query, or create yo
- If you started livestream from scratch, create your query. > [!NOTE]
- > Livestream supports **cross-resource queries** (in preview) of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
+ > Livestream supports **cross-resource queries** of data in Azure Data Explorer. [**Learn more about cross-resource queries**](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
1. Select **Play** from the command bar.
service-bus-messaging Service Bus Go How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-go-how-to-use-queues.md
func GetMessage(count int, client *azservicebus.Client) {
	}

	for _, message := range messages {
- body, err := message.Body()
- if err != nil {
- panic(err)
- }
+ body := message.Body
		fmt.Printf("%s\n", string(body))
		err = receiver.CompleteMessage(context.TODO(), message, nil)
func GetMessage(count int, client *azservicebus.Client) {
	}

	for _, message := range messages {
- body, err := message.Body()
- if err != nil {
- panic(err)
- }
+ body := message.Body
		fmt.Printf("%s\n", string(body))
		err = receiver.CompleteMessage(context.TODO(), message, nil)
go run main.go
For more information, check out the following links: - [Azure Service Bus SDK for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus)-- [Azure Service Bus SDK for Go on GitHub](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azservicebus)
+- [Azure Service Bus SDK for Go on GitHub](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azservicebus)
service-connector Tutorial Connect Web App App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-connect-web-app-app-configuration.md
Previously updated : 05/01/2022 Last updated : 10/24/2022+ # Tutorial: Connect a web app to Azure App Configuration with Service Connector
In this tutorial, use the Azure CLI to complete the following tasks:
- An Azure account with an active subscription. Your access role within the subscription must be "Contributor" or "Owner". [Create an account for free](https://azure.microsoft.com/free). - The Azure CLI. You can use it in [Azure Cloud Shell](https://shell.azure.com/) or [install it locally](/cli/azure/install-azure-cli).-- [.NET SDK](https://dotnet.microsoft.com/download) - [Git](/devops/develop/git/install-and-set-up-git) ## Sign in to Azure
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
The following table lists the configurable properties that you can use to set up
| `strict-host-key-checking` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. | > [!NOTE]
-> Config Server doesn't support SHA-2 signatures yet. Until support is added, use SHA-1 signatures or basic auth instead.
+> Config Server currently uses RSA keys with SHA-1 signatures. If you're using GitHub, private keys that correspond to RSA public keys added to GitHub before November 2, 2021 are supported. Private keys for RSA public keys added after November 2, 2021 aren't supported, so we suggest using basic authentication instead.
### Private repository with basic authentication
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
The following image shows the three types of repository authentication supported
| `Strict host key checking` | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. | > [!NOTE]
-> Application Configuration Service for Tanzu doesn't support SHA-2 signatures yet and we are actively working on to support it in future release. Before that, please use SHA-1 signatures or basic auth instead.
+> Application Configuration Service for Tanzu currently uses RSA keys with SHA-1 signatures. If you're using GitHub, private keys that correspond to RSA public keys added to GitHub before November 2, 2021 are supported. Private keys for RSA public keys added after November 2, 2021 aren't supported, so we suggest using basic authentication instead.
To validate access to the target URI, select **Validate**. After validation completes successfully, select **Apply** to update the configuration settings.
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-attributes.md
Previously updated : 09/28/2022 Last updated : 10/19/2022
-# Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
->
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Actions and attributes for Azure role assignment conditions for Azure Blob Storage
This article describes the supported attribute dictionaries that can be used in conditions on Azure role assignments for each Azure Storage [DataAction](../../role-based-access-control/role-definitions.md#dataactions). For the list of Blob service operations that are affected by a specific permission or DataAction, see [Permissions for Blob service operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-blob-service-operations). To understand the role assignment condition format, see [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md).
+
## Suboperations
Multiple Storage service operations can be associated with a single permission or DataAction. However, each of the operations associated with the same permission might support different parameters. *Suboperations* enable you to differentiate between service operations that require the same permission but support a different set of attributes for conditions. By using a suboperation, you can specify one condition for access to the subset of operations that support a given parameter, and another condition for operations with the same action that don't support that parameter.
In this case, the optional suboperation `Blob.Write.WithTagHeaders` can be used
> [!NOTE] > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md).
-In this preview, storage accounts support the following suboperations:
+Storage accounts support the following suboperations:
> [!div class="mx-tableFixed"] > | Display name | DataAction | Suboperation |
This section lists the Azure Blob Storage attributes you can use in your conditi
## See also -- [Example Azure role assignment conditions (preview)](storage-auth-abac-examples.md)-- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)-- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
+- [Example Azure role assignment conditions](storage-auth-abac-examples.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-cli.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview) - Azure ABAC"
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI - Azure ABAC"
description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC).
-+ Previously updated : 09/01/2022 Last updated : 10/21/2022 #Customer intent:
-# Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Tutorial: Add a role assignment condition to restrict access to blobs using Azure CLI
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition. In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Add a condition to a role assignment > - Restrict access to blobs based on a blob index tag + ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
Here is what the condition looks like in code:
You can authorize access to Blob storage from the Azure CLI either with Azure AD credentials or by using the storage account access key. This article shows how to authorize Blob storage operations using Azure AD. For more information, see [Quickstart: Create, download, and list blobs with Azure CLI](../blobs/storage-quickstart-blobs-cli.md)
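As a quick, hedged illustration of the Azure AD path (the account and container names below are placeholders), data-plane commands take `--auth-mode login`:

```azurecli
# Sign in with Azure AD credentials, then call the Blob data plane with --auth-mode login
# instead of an account key. Account and container names are placeholders.
az login

az storage blob list \
    --account-name <storage-account-name> \
    --container-name <container-name> \
    --auth-mode login \
    --output table
```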
-1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+1. Use [az storage account](/cli/azure/storage/account) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
1. Use [az storage container](/cli/azure/storage/container) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**. 1. Use [az storage blob upload](/cli/azure/storage/blob#az-storage-blob-upload) to upload a text file to the container.
-1. Add the following blob index tag to the text file. For more information, see [Use blob index tags (preview) to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
> [!NOTE] > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
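One possible way to apply such a tag from the CLI is sketched below; the account, container, and file names are placeholders, and the `--tags` parameter assumes a reasonably recent Azure CLI version.

```azurecli
# Illustrative sketch: upload a blob and set a blob index tag in one step.
# Names are placeholders; Project=Cascade matches the tag used in these tutorials.
az storage blob upload \
    --account-name <storage-account-name> \
    --container-name <container-name> \
    --name <blob-name> \
    --file <local-file-path> \
    --tags "Project=Cascade" \
    --auth-mode login
```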
You can authorize access to Blob storage from the Azure CLI either with Azure AD
1. In the **Condition** column, click **View/Edit** to view the condition.
- ![Screenshot of Add role assignment condition in the Azure portal.](./media/shared/condition-view.png)
## Step 6: Test the condition
You can authorize access to Blob storage from the Azure CLI either with Azure AD
## Next steps - [Example Azure role assignment conditions](storage-auth-abac-examples.md)-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage](storage-auth-abac-attributes.md)
- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Title: Example Azure role assignment conditions for Blob Storage (preview)
+ Title: Example Azure role assignment conditions for Blob Storage
-description: Example Azure role assignment conditions for Blob Storage (preview).
+description: Example Azure role assignment conditions for Blob Storage.
Previously updated : 09/28/2022 Last updated : 10/21/2022 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
-# Example Azure role assignment conditions for Blob Storage (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Example Azure role assignment conditions for Blob Storage
This article lists some examples of role assignment conditions for controlling access to Azure Blob Storage.
+
## Prerequisites
For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
Here are the settings to add this condition using the Azure portal.
## Next steps -- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)](storage-auth-abac-portal.md)-- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](storage-auth-abac-attributes.md)-- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)-- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
+- [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](storage-auth-abac-portal.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage](storage-auth-abac-attributes.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-portal.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview) - Azure ABAC"
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC"
description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC).
Previously updated : 09/01/2022 Last updated : 10/21/2022 #Customer intent:
-# Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition. In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Add a condition to a role assignment > - Restrict access to blobs based on a blob index tag + ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
Here is what the condition looks like in code:
| | | | Project | Cascade |
- ![Screenshot showing Upload blob pane with Blog index tags section.](./media/storage-auth-abac-portal/container-upload-blob.png)
1. Click the **Upload** button to upload the file.
Here is what the condition looks like in code:
1. Click the **Role assignments** tab to view the role assignments at this scope.
-1. Click **Add** > **Add role assignment**.
-
- ![Screenshot of Add > Add role assignment menu.](./media/storage-auth-abac-portal/add-role-assignment-menu.png)
+1. Click **Add** > **Add role assignment**. The Add role assignment page opens:
- The Add role assignment page opens.
1. On the **Roles** tab, select the [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role.
- ![Screenshot of Add role assignment page with Roles tab.](./media/storage-auth-abac-portal/roles.png)
1. On the **Members** tab, select the user you created earlier.
- ![Screenshot of Add role assignment page with Members tab.](./media/storage-auth-abac-portal/members.png)
1. (Optional) In the **Description** box, enter **Read access to blobs with the tag Project=Cascade**.
Here is what the condition looks like in code:
## Step 4: Add a condition
-1. On the **Conditions (optional)** tab, click **Add condition**.
+1. On the **Conditions (optional)** tab, click **Add condition**. The Add role assignment condition page appears:
- ![Screenshot of Add role assignment condition page for a new condition.](./media/storage-auth-abac-portal/condition-add-new.png)
-
- The Add role assignment condition page appears.
1. In the Add action section, click **Add action**.
- The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition.
-
- ![Screenshot of Select an action pane with an action selected.](./media/storage-auth-abac-portal/condition-actions-select.png)
+ The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition. Check the box next to **Read a blob**, then click **Select**:
-1. Check the box next to **Read a blob**, then click **Select**.
1. In the Build expression section, click **Add expression**.
Here is what the condition looks like in code:
| Operator | StringEqualsIgnoreCase | | Value | Cascade |
- ![Screenshot of Build expression section for blob index tags.](./media/storage-auth-abac-portal/condition-expressions.png)
1. Scroll up to **Editor type** and click **Code**. The condition is displayed as code. You can make changes to the condition in this code editor. To go back to the visual editor, click **Visual**.
- ![Screenshot of condition displayed in code editor.](./media/storage-auth-abac-portal/condition-code.png)
-1. Click **Save** to add the condition and return the Add role assignment page.
+1. Click **Save** to add the condition and return to the Add role assignment page.
1. Click **Next**.
Here is what the condition looks like in code:
After a few moments, the security principal is assigned the role at the selected scope.
- ![Screenshot of role assignment list after assigning role.](./media/storage-auth-abac-portal/rg-role-assignments-condition.png)
## Step 5: Assign Reader role
Here is what the condition looks like in code:
1. Ensure that the authentication method is set to **Azure AD User Account** and not **Access key**.
- ![Screenshot of storage container with test files.](./media/storage-auth-abac-portal/test-storage-container.png)
1. Click the Baker text file.
Here is what the condition looks like in code:
## Next steps - [Example Azure role assignment conditions](storage-auth-abac-examples.md)-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage](storage-auth-abac-attributes.md)
- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview) - Azure ABAC"
+ Title: "Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell - Azure ABAC"
description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC).
Previously updated : 09/01/2022 Last updated : 10/21/2022 #Customer intent:
-# Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Tutorial: Add a role assignment condition to restrict access to blobs using Azure PowerShell
In most cases, a role assignment will grant the permissions you need to Azure resources. However, in some cases you might want to provide more fine-grained access control by adding a role assignment condition. In this tutorial, you learn how to: > [!div class="checklist"]
+>
> - Add a condition to a role assignment > - Restrict access to blobs based on a blob index tag + ## Prerequisites For information about the prerequisites to add or edit role assignment conditions, see [Conditions prerequisites](../../role-based-access-control/conditions-prerequisites.md).
## Step 4: Set up storage
-1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags (preview)](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
+1. Use [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount) to create a storage account that is compatible with the blob index feature. For more information, see [Manage and find Azure Blob data with blob index tags](../blobs/storage-manage-find-blobs.md#regional-availability-and-storage-account-support).
1. Use [New-AzStorageContainer](/powershell/module/az.storage/new-azstoragecontainer) to create a new blob container within the storage account and set the Public access level to **Private (no anonymous access)**. 1. Use [Set-AzStorageBlobContent](/powershell/module/az.storage/set-azstorageblobcontent) to upload a text file to the container.
-1. Add the following blob index tag to the text file. For more information, see [Use blob index tags (preview) to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
+1. Add the following blob index tag to the text file. For more information, see [Use blob index tags to manage and find data on Azure Blob Storage](../blobs/storage-blob-index-how-to.md).
> [!NOTE] > Blobs also support the ability to store arbitrary user-defined key-value metadata. Although metadata is similar to blob index tags, you must use blob index tags with conditions.
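Put together, the storage setup above might look like the following sketch. All names are placeholders, and the `Set-AzStorageBlobTag` step assumes a recent version of the Az.Storage module.

```azurepowershell
# Sketch of the storage setup steps; every name here is a placeholder.
$rgName      = "<resource-group>"
$accountName = "<storage-account>"
$location    = "<region>"

# Storage account that supports blob index tags
$account = New-AzStorageAccount -ResourceGroupName $rgName -Name $accountName `
    -Location $location -SkuName Standard_LRS -Kind StorageV2
$ctx = $account.Context

# Blob container with no anonymous access
New-AzStorageContainer -Name "contosocorp" -Context $ctx -Permission Off

# Upload a text file to the container
Set-AzStorageBlobContent -File "./cascade-baker.txt" -Container "contosocorp" `
    -Blob "cascade-baker.txt" -Context $ctx

# Add the blob index tag used by the role assignment condition
Set-AzStorageBlobTag -Container "contosocorp" -Blob "cascade-baker.txt" `
    -Tag @{ "Project" = "Cascade" } -Context $ctx
```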
1. In the **Condition** column, click **View/Edit** to view the condition.
- ![Screenshot of Add role assignment condition in the Azure portal.](./media/shared/condition-view.png)
## Step 7: Test the condition
## Next steps - [Example Azure role assignment conditions](storage-auth-abac-examples.md)-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)
+- [Actions and attributes for Azure role assignment conditions in Azure Storage](storage-auth-abac-attributes.md)
- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-security.md
Title: Security considerations for Azure role assignment conditions in Azure Blob Storage (preview)
+ Title: Security considerations for Azure role assignment conditions in Azure Blob Storage
description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC).
Previously updated : 09/01/2022 Last updated : 10/19/2022
-# Security considerations for Azure role assignment conditions in Azure Blob Storage (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
-> This preview version is provided without a service level agreement, and it is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Security considerations for Azure role assignment conditions in Azure Blob Storage
To fully secure resources using [Azure attribute-based access control (Azure ABAC)](storage-auth-abac.md), you must also protect the [attributes](storage-auth-abac-attributes.md) used in the [Azure role assignment conditions](../../role-based-access-control/conditions-format.md). For instance, if your condition is based on a file path, then you should beware that access can be compromised if the principal has an unrestricted permission to rename a file path. This article describes security considerations that you should factor into your role assignment conditions. + ## Use of other authorization mechanisms Role assignment conditions are only evaluated when using Azure RBAC for authorization. These conditions can be bypassed if you allow access using alternate authorization methods:
For conditions on the source blob, `@Resource` conditions on the `Microsoft.Stor
## See also -- [Authorize access to blobs using Azure role assignment conditions (preview)](storage-auth-abac.md)-- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage (preview)](storage-auth-abac-attributes.md)-- [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md)
+- [Authorize access to blobs using Azure role assignment conditions](storage-auth-abac.md)
+- [Actions and attributes for Azure role assignment conditions for Azure Blob Storage](storage-auth-abac-attributes.md)
+- [What is Azure attribute-based access control (Azure ABAC)?](../../role-based-access-control/conditions-overview.md)
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac.md
Title: Authorize access to Azure Blob Storage using Azure role assignment conditions (preview)
+ Title: Authorize access to Azure Blob Storage using Azure role assignment conditions
-description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 (ADLS G2) using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes.
+description: Authorize access to Azure Blob Storage and Azure Data Lake Storage Gen2 using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Blob Storage attributes.
Previously updated : 09/28/2022 Last updated : 10/24/2022
-# Authorize access to Azure Blob Storage using Azure role assignment conditions (preview)
-
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
->
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+# Authorize access to Azure Blob Storage using Azure role assignment conditions
Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with security principals, resources, requests, and the environment. With ABAC, you can grant a security principal access to a resource based on a condition expressed as a predicate using these attributes.
-Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [conditions to Azure role assignments](../../role-based-access-control/conditions-overview.md). This preview includes support for role assignment conditions on Blobs and Data Lake Storage Gen2. It enables you to author role-assignment conditions based on principal, resource and request attributes.
+Azure ABAC builds on Azure role-based access control (Azure RBAC) by adding [conditions to Azure role assignments](../../role-based-access-control/conditions-overview.md). It enables you to author role-assignment conditions based on principal, resource and request attributes.
+ ## Overview of conditions in Azure Storage You can [use Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers. Azure ABAC builds on Azure RBAC by adding [role assignment conditions](../../role-based-access-control/conditions-overview.md) in the context of specific actions. A *role assignment condition* is an additional check that is evaluated when the action on the storage resource is being authorized. This condition is expressed as a predicate using attributes associated with any of the following:+ - Security principal that is requesting authorization - Resource to which access is being requested - Parameters of the request - Environment from which the request originates The benefits of using role assignment conditions are:+ - **Enable finer-grained access to resources** - For example, if you want to grant a user read access to blobs in your storage accounts only if the blobs are tagged as Project=Sierra, you can use conditions on the read action using tags as an attribute. - **Reduce the number of role assignments you have to create and manage** - You can do this by using a generalized role assignment for a security group, and then restricting the access for individual members of the group using a condition that matches attributes of a principal with attributes of a specific resource being accessed (such as a blob or a container). - **Express access control rules in terms of attributes with business meaning** - For example, you can express your conditions using attributes that represent a project name, business application, organization function, or classification level. The tradeoff of using conditions is that you need a structured and consistent taxonomy when using attributes across your organization. Attributes must be protected to prevent access from being compromised. Also, conditions must be carefully designed and reviewed for their effect.
-Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them (ADLS G2).
-
+Role-assignment conditions in Azure Storage are supported for Azure blob storage. You can also use conditions with accounts that have the [hierarchical namespace](../blobs/data-lake-storage-namespace.md) (HNS) feature enabled on them (Azure Data Lake Storage Gen2).
## Supported attributes and operations+ You can configure conditions on role assignments for [DataActions](../../role-based-access-control/role-definitions.md#dataactions) to achieve these goals. You can use conditions with a [custom role](../../role-based-access-control/custom-roles.md) or select built-in roles. Note, conditions are not supported for management [Actions](../../role-based-access-control/role-definitions.md#actions) through the [Storage resource provider](/rest/api/storagerp).
-In this preview, you can add conditions to built-in roles or custom roles. The built-in roles on which you can use role-assignment conditions in this preview include:
+You can add conditions to built-in roles or custom roles. The built-in roles on which you can use role-assignment conditions include:
+ - [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader) - [Storage Blob Data Contributor](../../role-based-access-control/built-in-roles.md#storage-blob-data-contributor) - [Storage Blob Data Owner](../../role-based-access-control/built-in-roles.md#storage-blob-data-owner).
If you're working with conditions based on [blob index tags](../blobs/storage-ma
The [Azure role assignment condition format](../../role-based-access-control/conditions-format.md) allows use of `@Principal`, `@Resource` or `@Request` attributes in the conditions. A `@Principal` attribute is a custom security attribute on a principal, such as a user, enterprise application (service principal), or managed identity. A `@Resource` attribute refers to an existing attribute of a storage resource that is being accessed, such as a storage account, a container, or a blob. A `@Request` attribute refers to an attribute or parameter included in a storage operation request.
-Azure RBAC currently supports 2,000 role assignments in a subscription. If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and [Azure AD custom security attributes]() for principals.
-
+Azure RBAC currently supports 4,000 role assignments in a subscription. If you need to create thousands of Azure role assignments, you may encounter this limit. Managing hundreds or thousands of role assignments can be difficult. In some cases, you can use conditions to reduce the number of role assignments on your storage account and make them easier to manage. You can [scale the management of role assignments](../../role-based-access-control/conditions-custom-security-attributes-example.md) using conditions and [Azure AD custom security attributes]() for principals.
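As an illustration of that approach, a condition can compare a custom security attribute on the principal with a blob index tag on the resource, so a single role assignment can serve many users. The attribute set and attribute name below (Engineering, Project) are hypothetical, and the expression is only a sketch of the condition format.

```azurepowershell
# Hypothetical condition: allow a blob read only when the principal's Engineering_Project
# custom security attribute matches the blob's Project index tag.
# Pass it to New-AzRoleAssignment via -Condition, with -ConditionVersion "2.0".
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
 )
)
'@
```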
## Next steps - [Prerequisites for Azure role assignment conditions](../../role-based-access-control/conditions-prerequisites.md) - [Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal](storage-auth-abac-portal.md)-- [Actions and attributes for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-attributes.md)-- [Example Azure role assignment conditions (preview)](storage-auth-abac-examples.md)-- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)-
+- [Actions and attributes for Azure role assignment conditions in Azure Storage](storage-auth-abac-attributes.md)
+- [Example Azure role assignment conditions](storage-auth-abac-examples.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
## See also -- [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md)-- [FAQ for Azure role assignment conditions (preview)](../../role-based-access-control/conditions-faq.md)-- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)-- [Scale the management of Azure role assignments by using conditions and custom security attributes (preview)](../../role-based-access-control/conditions-custom-security-attributes-example.md)-- [Security considerations for Azure role assignment conditions in Azure Storage (preview)](storage-auth-abac-security.md)
+- [What is Azure attribute-based access control (Azure ABAC)?](../../role-based-access-control/conditions-overview.md)
+- [FAQ for Azure role assignment conditions](../../role-based-access-control/conditions-faq.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
+- [Scale the management of Azure role assignments by using conditions and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes-example.md)
+- [Security considerations for Azure role assignment conditions in Azure Storage](storage-auth-abac-security.md)
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
Get started with the Azure Blob Storage client library for .NET. Azure Blob Stor
The examples in this quickstart show you how to use the Azure Blob Storage client library for .NET to: * [Create the project and configure dependencies](#setting-up)
-* [Authenticate to Azure](#authenticate-the-app-to-azure)
+* [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
* [Create a container](#create-a-container) * [Upload a blob to a container](#upload-a-blob-to-a-container) * [List blobs in a container](#list-blobs-in-a-container)
Console.WriteLine("Hello, World!");
## Object model
-Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data is data that does not adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
+Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
- The storage account - A container in the storage account
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Previously updated : 10/20/2022 Last updated : 10/24/2022
Use the following Java classes to interact with these resources:
These example code snippets show you how to perform the following actions with the Azure Blob Storage client library for Java: -- [Authenticate the client](#authenticate-the-client)
+- [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
- [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container)
These example code snippets show you how to perform the following actions with t
> [!IMPORTANT] > Make sure you have the correct dependencies in pom.xml and the necessary directives for the code samples to work, as described in the [setting up](#setting-up) section.
-### Authenticate the client
+### Authenticate to Azure and authorize access to blob data
-Application requests to Azure Blob Storage must be authorized. Using the `DefaultAzureCredential` class provided by the **azure-identity** client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
-
-You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an unsecure location. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication. Both options are demonstrated in the following example.
### [Passwordless (Recommended)](#tab/managed-identity)
You can authorize access to data in your storage account using the following ste
A connection string includes the storage account access key and uses it to authorize requests. Always be careful to never expose the keys in an unsecure location. > [!NOTE]
-> If you plan to use connection strings, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](../../role-based-access-control/resource-provider-operations.md#microsoftstorage). The least privilege built-in role with permissions for this action is [Storage Account Key Operator Service Role](../../role-based-access-control/built-in-roles.md#storage-account-key-operator-service-role), but any role which includes this action will work.
+> To authorize data access with the storage account access key, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](../../role-based-access-control/resource-provider-operations.md#microsoftstorage). The least privileged built-in role with permissions for this action is [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access), but any role which includes this action will work.
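For example, an account owner could grant that role at the storage account scope with Azure PowerShell. The sign-in name and scope below are placeholders.

```azurepowershell
# Sketch: grant "Reader and Data Access" so the signed-in user can list account keys.
# Replace the placeholder sign-in name, subscription ID, resource group, and account name.
New-AzRoleAssignment -SignInName "user@contoso.com" `
    -RoleDefinitionName "Reader and Data Access" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
```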
[!INCLUDE [retrieve credentials](../../../includes/retrieve-credentials.md)]
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Title: 'Quickstart: Azure Blob Storage client library for Python'
description: In this quickstart, you learn how to use the Azure Blob Storage client library for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container. Previously updated : 10/20/2022 Last updated : 10/24/2022
Use the following Python classes to interact with these resources:
These example code snippets show you how to do the following tasks with the Azure Blob Storage client library for Python: -- [Authenticate the client](#authenticate-the-client)
+- [Authenticate to Azure and authorize access to blob data](#authenticate-to-azure-and-authorize-access-to-blob-data)
- [Create a container](#create-a-container) - [Upload blobs to a container](#upload-blobs-to-a-container) - [List the blobs in a container](#list-the-blobs-in-a-container) - [Download blobs](#download-blobs) - [Delete a container](#delete-a-container)
-### Authenticate the client
+### Authenticate to Azure and authorize access to blob data
-Application requests to Azure Blob Storage must be authorized. Using the `DefaultAzureCredential` class provided by the Azure Identity client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage.
-
-You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an unsecure location. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication. Both options are demonstrated in the following example.
### [Passwordless (Recommended)](#tab/managed-identity)
You can authorize access to data in your storage account using the following ste
A connection string includes the storage account access key and uses it to authorize requests. Always be careful to never expose the keys in an unsecure location. > [!NOTE]
-> If you plan to use connection strings, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](/azure/role-based-access-control/resource-provider-operations#microsoftstorage). The least privilege built-in role with permissions for this action is [Storage Account Key Operator Service Role](/azure/role-based-access-control/built-in-roles#storage-account-key-operator-service-role), but any role which includes this action will work.
+> To authorize data access with the storage account access key, you'll need permissions for the following Azure RBAC action: [Microsoft.Storage/storageAccounts/listkeys/action](../../role-based-access-control/resource-provider-operations.md#microsoftstorage). The least privileged built-in role with permissions for this action is [Reader and Data Access](../../role-based-access-control/built-in-roles.md#reader-and-data-access), but any role which includes this action will work.
[!INCLUDE [retrieve credentials](../../../includes/retrieve-credentials.md)]
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Previously updated : 04/14/2022 Last updated : 10/21/2022
Each authorization option is briefly described below:
- **Shared access signatures** for blobs, files, queues, and tables. Shared access signatures (SAS) provide limited delegated access to resources in a storage account via a signed URL. The signed URL specifies the permissions granted to the resource and the interval over which the signature is valid. A service SAS or account SAS is signed with the account key, while the user delegation SAS is signed with Azure AD credentials and applies to blobs only. For more information, see [Using shared access signatures (SAS)](storage-sas-overview.md). - - **Azure Active Directory (Azure AD) integration** for authorizing requests to blob, queue, and table resources. Microsoft recommends using Azure AD credentials to authorize requests to data when possible for optimal security and ease of use. For more information about Azure AD integration, see the articles for either [blob](../blobs/authorize-access-azure-active-directory.md), [queue](../queues/authorize-access-azure-active-directory.md), or [table](../tables/authorize-access-azure-active-directory.md) resources.
- You can use Azure role-based access control (Azure RBAC) to manage a security principal's permissions to blob, queue, and table resources in a storage account. You can additionally use Azure attribute-based access control (ABAC) to add conditions to Azure role assignments for blob resources. For more information about RBAC, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md). For more information about ABAC, see [What is Azure attribute-based access control (Azure ABAC)? (preview)](../../role-based-access-control/conditions-overview.md).
+ You can use Azure role-based access control (Azure RBAC) to manage a security principal's permissions to blob, queue, and table resources in a storage account. You can also use Azure attribute-based access control (ABAC) to add conditions to Azure role assignments for blob resources.
+
+ For more information about RBAC, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
+
+ For more information about ABAC and its feature status, see:
+ > [What is Azure attribute-based access control (Azure ABAC)?](../../role-based-access-control/conditions-overview.md)
+ >
+ > [The status of ABAC condition features](../../role-based-access-control/conditions-overview.md#status-of-condition-features)
+ >
+ > [The status of ABAC condition features in Azure Storage](#status-of-condition-features-in-azure-storage)
+
+<!-- After the core ABAC doc updates are published, change the heading in the second link above to: #status-of-condition-features -->
- **Azure Active Directory Domain Services (Azure AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over Server Message Block (SMB) through Azure AD DS. You can use Azure RBAC for fine-grained control over a client's access to Azure Files resources in a storage account. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md).
Each authorization option is briefly described below:
- **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP.
+## Status of condition features in Azure Storage
+
+Currently, Azure attribute-based access control (Azure ABAC) is generally available (GA) for controlling access only to Azure Blob Storage, Azure Data Lake Storage Gen2, and Azure Queues using `request` and `resource` attributes in the standard storage account performance tier. It is either not available or in PREVIEW for other storage account performance tiers, resource types, and attributes.
+
+See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+The table below shows the current status of ABAC by storage account performance tier, storage resource type, and attribute type. Exceptions for specific attributes are also shown.
+
+| Performance tier | Resource types | Attribute types | Specific attributes | Availability |
+|--|--|--|--|--|
+| Standard | Blobs<br/>Data Lake Storage Gen2<br/>Queues | request<br/>resource | all except for the snapshot resource attribute for Data Lake Storage Gen2 | GA |
+| Standard | Data Lake Storage Gen2 | resource | snapshot | Preview |
+| Standard | Blobs<br/>Data Lake Storage Gen2<br/>Queues | principal | all | Preview |
+| Premium | Blobs<br/>Data Lake Storage Gen2<br/>Queues | request<br/>resource<br/>principal | all | Preview |
+ [!INCLUDE [storage-account-key-note-include](../../../includes/storage-account-key-note-include.md)] ## Next steps
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
+
+ Title: Connect to an Azure Elastic SAN (preview) volume - Linux
+description: Learn how to connect to an Azure Elastic SAN (preview) volume from a Linux client.
+++ Last updated : 10/24/2022+++++
+# Connect to Elastic SAN (preview) volumes - Linux
+
+This article explains how to connect to an elastic storage area network (SAN) volume from a Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN (preview) volumes - Windows](elastic-san-connect-windows.md).
+
+## Prerequisites
+
+- Complete [Deploy an Elastic SAN (preview)](elastic-san-create.md)
+- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+
+## Limitations
++
+## Enable Storage service endpoint
+
+In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your virtual network and select **Service Endpoints**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select any policies you like and the subnet you'll deploy your Elastic SAN into, then select **Add**.
++
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+$resourceGroupName = "yourResourceGroup"
+$vnetName = "yourVirtualNetwork"
+$subnetName = "yourSubnet"
+
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+```
++
+## Configure networking
+
+Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
+
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your SAN and select **Volume groups**.
+1. Select a volume group and select **Create**.
+1. Add an existing virtual network and subnet and select **Save**.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+
+Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+
+```
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+```
++
+## Connect to a volume
+
+You can create either a single session or multiple sessions to each Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and I/O size as needed, if your workload allows.
+
+When using multiple sessions, you should generally aggregate them with Multipath I/O. Multipath I/O aggregates multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
+
+## Environment setup
+
+To create iSCSI connections from a Linux client, install the iSCSI initiator package. The exact command may vary depending on your distribution; consult its documentation if necessary.
+
+As an example, with Ubuntu you'd use `sudo apt -y install open-iscsi` and with Red Hat Enterprise Linux (RHEL) you'd use `sudo yum install iscsi-initiator-utils -y`.
+
+### Multipath I/O
+
+Install the Multipath I/O package for your Linux distribution. The installation varies by distribution; consult its documentation. As an example, on Ubuntu the command is `sudo apt install multipath-tools` and on RHEL the command is `sudo yum install device-mapper-multipath`.
+
+Once you've installed the package, check whether **/etc/multipath.conf** exists. If it doesn't, create an empty file and use the settings in the following example for a general configuration. For example, on RHEL you can run `mpathconf --enable` to create **/etc/multipath.conf**.
+
+You'll need to make some modifications to **/etc/multipath.conf**. Add the devices section shown in the following example; the defaults section sets values that are generally applicable. If you need any other specific configuration, such as excluding volumes from the multipath topology, see the man page for multipath.conf.
+
+```
+defaults {
+    user_friendly_names yes # To create ‘mpathn’ names for multipath devices
+    path_grouping_policy multibus # To place all the paths in one priority group
+    path_selector "round-robin 0" # To use round robin algorithm to determine path for next I/O operation
+    failback immediate # For immediate failback to highest priority path group with active paths
+    no_path_retry 1 # To disable I/O queueing after retrying once when all paths are down
+}
+devices {
+  device {
+    vendor "MSFT"
+    product "Virtual HD"
+  }
+}
+```
+
+After creating or modifying the file, restart Multipath I/O. On Ubuntu, the command is `sudo systemctl restart multipath-tools.service` and on RHEL the command is `sudo systemctl restart multipathd`.
+
+### Gather information
+
+Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure resources.
+
+Run the following command to get these values:
+
+```azurecli
+az elastic-san volume show -e yourSanName -g yourResourceGroup -v yourVolumeGroupName -n yourVolumeName
+```
+
+You should see a list of output that looks like the following:
+++
+Note down the values for **targetIQN**, **targetPortalHostName**, and **targetPortalPort**; you'll need them for the next sections.
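If you're using Azure PowerShell instead of the Azure CLI, a sketch of the equivalent lookup (mirroring the Windows article) is shown below; the resource names are placeholders.

```azurepowershell
# Sketch: retrieve the same iSCSI connection values with Azure PowerShell.
$connectVolume = Get-AzElasticSanVolume -ResourceGroupName "<resource-group>" -ElasticSanName "<san-name>" `
    -VolumeGroupName "<volume-group>" -Name "<volume-name>"
$connectVolume.storagetargetiqn
$connectVolume.storagetargetportalhostname
$connectVolume.storagetargetportalport
```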
+
+## Multi-session connections
+
+To establish multiple sessions to a volume, first you'll need to create a single session with particular parameters.
+
+To establish persistent iSCSI connections, modify **node.startup** in **/etc/iscsi/iscsid.conf** from **manual** to **automatic**.
+
+Replace **yourTargetIQN**, **yourTargetPortalHostName**, and **yourTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
+
+```
+iscsiadm -m node --targetname yourTargetIQN --portal yourTargetPortalHostName:yourTargetPortalPort -o new
+
+iscsiadm -m node --targetname yourTargetIQN -p yourTargetPortalHostName:yourTargetPortalPort -l
+```
+
+Then, get the session ID and create as many sessions as needed with the session ID. To get the session ID, run `iscsiadm -m session` and you should see output similar to the following:
+
+```
+tcp:[15] <name>:port,-1 <iqn>
+tcp:[18] <name>:port,-1 <iqn>
+```
+In the preceding example output, 15 is the session ID we'll use.
+
+With the session ID, you can create as many sessions as you need. However, none of the additional sessions are persistent, even if you modified node.startup; you must recreate them after each reboot. The following script is a loop that creates as many additional sessions as you specify. Replace **numberOfAdditionalSessions** with your desired number of additional sessions and replace **sessionID** with the session ID you'd like to use, then run the script.
+
+```
+for i in `seq 1 numberOfAdditionalSessions`; do sudo iscsiadm -m session -r sessionID --op new; done
+```
+
+You can verify the number of sessions using `sudo multipath -ll`.
+
+## Single-session connections
+
+To establish persistent iSCSI connections, modify **node.startup** in **/etc/iscsi/iscsid.conf** from **manual** to **automatic**.
+
+Replace **yourTargetIQN**, **yourTargetPortalHostName**, and **yourTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
+
+```
+iscsiadm -m node --targetname yourTargetIQN --portal yourTargetPortalHostName:yourTargetPortalPort -o new
+
+iscsiadm -m node --targetname yourTargetIQN -p yourTargetPortalHostName:yourTargetPortalPort -l
+```
+
+## Next steps
+
+[Configure Elastic SAN networking (preview)](elastic-san-networking.md)
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
+
+ Title: Connect to an Azure Elastic SAN (preview) volume - Windows
+description: Learn how to connect to an Azure Elastic SAN (preview) volume from a Windows client.
+++ Last updated : 10/24/2022+++++
+# Connect to Elastic SAN (preview) volumes - Windows
+
+This article explains how to connect to an elastic storage area network (SAN) volume from a Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN (preview) volumes - Linux](elastic-san-connect-linux.md).
+
+## Prerequisites
+
+- Complete [Deploy an Elastic SAN (preview)](elastic-san-create.md)
+- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.
+
+## Limitations
++
+## Enable Storage service endpoint
+
+In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your virtual network and select **Service Endpoints**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select any policies you like and the subnet you'll deploy your Elastic SAN into, then select **Add**.
++
+# [PowerShell](#tab/azure-powershell)
+
+```powershell
+$resourceGroupName = "yourResourceGroup"
+$vnetName = "yourVirtualNetwork"
+$subnetName = "yourSubnet"
+
+$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
+
+$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
+
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+```
++
+## Configure networking
+
+Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
+
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
+
+# [Portal](#tab/azure-portal)
+
+1. Navigate to your SAN and select **Volume groups**.
+1. Select a volume group and select **Create**.
+1. Add an existing virtual network and subnet and select **Save**.
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
+
+Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
+
+```
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
+```
++
+## Connect to a volume
+
+You can create either a single session or multiple sessions to each Elastic SAN volume, based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and I/O size as needed, if your workload allows.
+
+When using multiple sessions, you should generally aggregate them with Multipath I/O. Multipath I/O aggregates multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
+
+## Set up your environment
+
+To create iSCSI connections from a Windows client, confirm the iSCSI service is running. If it's not, start the service, and set it to start automatically.
+
+```powershell
+# Confirm iSCSI is running
+Get-Service -Name MSiSCSI
+
+# If it's not running, start it
+Start-Service -Name MSiSCSI
+
+# Set it to start automatically
+Set-Service -Name MSiSCSI -StartupType Automatic
+```
+
+### Multipath I/O
+
+Multipath I/O enables highly available and fault-tolerant iSCSI network connections. It allows you to aggregate multiple sessions from an iSCSI initiator to the target into a single device, and can improve performance by optimally distributing I/O over all available paths based on a load balancing policy.
+
+Install Multipath I/O, enable multipath support for iSCSI devices, and set a default load balancing policy.
+
+```powershell
+# Install Multipath-IO
+Add-WindowsFeature -Name 'Multipath-IO'
+
+# Verify if the installation was successful
+Get-WindowsFeature -Name 'Multipath-IO'
+
+# Enable multipath support for iSCSI devices
+Enable-MSDSMAutomaticClaim -BusType iSCSI
+
+# Set the default load balancing policy based on your requirements. In this example, we set it to round robin
+# which should be optimal for most workloads.
+Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR
+```
+
+### Gather information
+
+Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume.
+
+Run the following commands to get these values:
+
+```azurepowershell
+# Get the target name and iSCSI portal name to connect a volume to a client
+$connectVolume = Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume
+$connectVolume.storagetargetiqn
+$connectVolume.storagetargetportalhostname
+$connectVolume.storagetargetportalport
+```
+
+Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**; you'll need them for the next sections.
+
+## Multi-session configuration
+
+To create multiple sessions to each volume, you must configure the target and connect to it multiple times, based on the number of sessions you want for that volume.
+
+You can use the following scripts to create your connections.
+
+To script multi-session configurations, use two files: an XML configuration file that includes the information for each volume you'd like to establish connections to, and a script that uses the XML file to create the connections.
+
+The following example shows you how to format your XML file for the script. For each volume, create a new `<Target>` section:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<Targets>
+  <Target>
+     <Iqn>Volume 1 Storage Target Iqn</Iqn>
+     <Hostname>Volume 1 Storage Target Portal Hostname</Hostname>
+     <Port>Volume 1 Storage Target Portal Port</Port>
+     <NumSessions>Number of sessions</NumSessions>
+     <EnableMultipath>true</EnableMultipath>
+  </Target>
+  <Target>
+     <Iqn>Volume 2 Storage Target Iqn</Iqn>
+     <Hostname>Volume 2 Storage Target Portal Hostname</Hostname>
+     <Port>Volume 2 Storage Target Portal Port</Port>
+     <NumSessions>Number of sessions</NumSessions>
+     <EnableMultipath>true</EnableMultipath>
+  </Target>
+</Targets>
+```
+
+Use the following script to create the connections. To run the script, use `.\LoginTarget.ps1 -TargetConfigPath [path to config.xml]`:
+
+```powershell
+param(
+  [string] $TargetConfigPath
+)
+$TargetConfig = New-Object XML
+$TargetConfig.Load($TargetConfigPath)
+foreach ($Target in $TargetConfig.Targets.Target)
+{
+  $TargetIqn = $Target.Iqn
+  $TargetHostname = $Target.Hostname
+  $TargetPort = $Target.Port
+  $NumSessions = $Target.NumSessions
+  $succeeded = 1
+  iscsicli AddTarget $TargetIqn * $TargetHostname $TargetPort * 0 * * * * * * * * * 0
+  while ($succeeded -le $NumSessions)
+  {
+    Write-Host "Logging session ${succeeded}/${NumSessions} into ${TargetIqn}"
+    $LoginOptions = '*'
+    if ($Target.EnableMultipath)
+    {
+        Write-Host "Enabled Multipath"
+        $LoginOptions = '0x00000002'
+    }
+    # PersistentLoginTarget will not establish login to the target until after the system is rebooted.
+    # Use LoginTarget if the target is needed before rebooting. Using just LoginTarget will not persist the
+    # session(s).
+    iscsicli PersistentLoginTarget $TargetIqn t $TargetHostname $TargetPort Root\ISCSIPRT\0000_0 -1 * $LoginOptions * * * * * * * * * 0
+    #iscsicli LoginTarget $TargetIqn t $TargetHostname $TargetPort Root\ISCSIPRT\0000_0 -1 * $LoginOptions * * * * * * * * * 0
+    if ($LASTEXITCODE -eq 0)
+    {
+        $succeeded += 1
+    }
+    Start-Sleep -s 1
+    Write-Host ""
+  }
+}
+```
+
+Verify the number of sessions your volume has with either `iscsicli SessionList` or `mpclaim -s -d`.
+
+## Single-session configuration
+
+Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume. If you'd like to modify these commands, run `iscsicli commandHere -?` for information on the command and its parameters.
+
+```
+# Add target IQN
+# The *s are essential, as they are default arguments
+iscsicli AddTarget yourStorageTargetIQN * yourStorageTargetPortalHostName yourStorageTargetPortalPort * 0 * * * * * * * * * 0
+
+# Login
+# The *s are essential, as they are default arguments
+iscsicli LoginTarget yourStorageTargetIQN t yourStorageTargetPortalHostName yourStorageTargetPortalPort Root\ISCSIPRT\0000_0 -1 * * * * * * * * * * * 0
+
+# This command instructs the system to automatically reconnect after a reboot
+iscsicli PersistentLoginTarget yourStorageTargetIQN t yourStorageTargetPortalHostName yourStorageTargetPortalPort Root\ISCSIPRT\0000_0 -1 * * * * * * * * * * * 0
+```
+
+## Next steps
+
+[Configure Elastic SAN networking (preview)](elastic-san-networking.md)
storage Elastic San Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect.md
- Title: Connect to an Azure Elastic SAN (preview) volume
-description: Learn how to connect to an Azure Elastic SAN (preview) volume.
--- Previously updated : 10/12/2022-----
-# Connect to Elastic SAN (preview) volumes
-
-This article explains how to connect to an elastic storage area network (SAN) volume.
-
-## Prerequisites
--- Complete [Deploy an Elastic SAN (preview)](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.-
-## Limitations
--
-## Enable Storage service endpoint
-
-In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN.
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
-1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
--
-# [PowerShell](#tab/azure-powershell)
-
-```powershell
-$resourceGroupName = "yourResourceGroup"
-$vnetName = "yourVirtualNetwork"
-$subnetName = "yourSubnet"
-
-$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName
-
-$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
-```
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
-```
--
-## Configure networking
-
-Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For more information on networking, see [Configure Elastic SAN networking (preview)](elastic-san-networking.md).
-
-# [Portal](#tab/azure-portal)
-
-1. Navigate to your SAN and select **Volume groups**.
-1. Select a volume group and select **Modify**.
-1. Add an existing virtual network and select **Save**.
-
-# [PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
-
-Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
-
-```
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}"
-```
--
-## Connect to a volume
-
-You can connect to Elastic SAN volumes over iSCSI from multiple compute clients. The following sections cover how to establish connections from a Windows client and a Linux client.
-
-### Windows
-
-Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure Elastic SAN volume.
-
-Run the following commands to get these values:
-
-```azurepowershell
-# Get the target name and iSCSI portal name to connect a volume to a client
-$connectVolume = Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume
-$connectVolume.storagetargetiqn
-$connectVolume.storagetargetportalhostname
-$connectVolume.storagetargetportalport
-```
-
-Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**, you'll need them for the next commands.
-
-Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
-
-```
-# Add target IQN
-# The *s are essential, as they are default arguments
-iscsicli AddTarget $yourStorageTargetIQN * $yourStorageTargetPortalHostName $yourStorageTargetPortalPort * 0 * * * * * * * * * 0
-
-# Login
-# The *s are essential, as they are default arguments
-iscsicli LoginTarget $yourStorageTargetIQN t $yourStorageTargetPortalHostName $yourStorageTargetPortalPort Root\ISCSIPRT\0000_0 -1 * * * * * * * * * * * 0
-
-```
-
-### Linux
-
-Before you can connect to a volume, you'll need to get **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort** from your Azure resources.
-
-Run the following command to get these values:
-
-```azurecli
-az elastic-san volume list -e $sanName -g $resourceGroupName -v $searchedVolumeGroup -n $searchedVolume
-```
-
-You should see a list of output that looks like the following:
---
-Note down the values for **StorageTargetIQN**, **StorageTargetPortalHostName**, and **StorageTargetPortalPort**, you'll need them for the next commands.
-
-Replace **yourStorageTargetIQN**, **yourStorageTargetPortalHostName**, and **yourStorageTargetPortalPort** with the values you kept, then run the following commands from your compute client to connect an Elastic SAN volume.
-
-```
-iscsiadm -m node --targetname **yourStorageTargetIQN** --portal **yourStorageTargetPortalHostName**:**yourStorageTargetPortalPort** -o new
-
-iscsiadm -m node --targetname **yourStorageTargetIQN** -p **yourStorageTargetPortalHostName**:**yourStorageTargetPortalPort** -l
-```
-
-## Next steps
-
-[Configure Elastic SAN networking (preview)](elastic-san-networking.md)
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -
## Next steps
-Now that you've deployed an Elastic SAN, [Connect to Elastic SAN (preview) volumes](elastic-san-connect.md).
+Now that you've deployed an Elastic SAN, Connect to Elastic SAN (preview) volumes from either [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-linux.md) clients.
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: An overview of Azure Elastic SAN (preview), a service that enables
Previously updated : 10/12/2022 Last updated : 10/24/2022
You can manage virtual network rules for volume groups through the Azure portal,
### [Portal](#tab/azure-portal)
-Currently, you must use either the Azure PowerShell module or Azure CLI to manage virtual network rules for a volume group.
+1. Navigate to your SAN and select **Volume groups**.
+1. Select a volume group and select **Create**.
+1. Add an existing virtual network and subnet and select **Save**.
### [PowerShell](#tab/azure-powershell)
Currently, you must use either the Azure PowerShell module or Azure CLI to manag
- Add a network rule for a virtual network and subnet. ```azurepowershell
- $rule1 = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId <resourceIDHere> -Action Allow
+ $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow
- Update-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName -NetworkAclsVirtualNetworkRule $rule1
+ Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule
``` > [!TIP]
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure Elas
Previously updated : 10/12/2022 Last updated : 10/24/2022
An Elastic SAN (preview) has three attributes that determine its performance: to
The total capacity of your Elastic SAN is determined by two different capacities, the base capacity and the additional capacity. Increasing the base capacity also increases the SAN's IOPS and throughput but is more costly than increasing the additional capacity. Increasing additional capacity doesn't increase IOPS or throughput.
-The maximum total capacity of your SAN is determined by the region where it's located and by its redundancy configuration. The minimum total capacity for an Elastic SAN is 64 tebibyte (TiB). Base or additional capacity can be increased in increments of 1 TiB.
+The maximum total capacity of your SAN is determined by the region where it's located and by its redundancy configuration. The minimum total capacity for an Elastic SAN is 1 tebibyte (TiB). Base or additional capacity can be increased in increments of 1 TiB.
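As a rough illustration only, assuming the preview `az elastic-san` CLI extension and its `--base-size-tib` and `--extended-capacity-size-tib` parameters, adjusting the two capacities separately might look like the following sketch:

```azurecli
# Hypothetical values; increasing base capacity also raises IOPS and throughput,
# while extended (additional) capacity only adds storage.
az elastic-san update \
    --name $sanName \
    --resource-group $resourceGroupName \
    --base-size-tib 2 \
    --extended-capacity-size-tib 6
```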
### IOPS
The appliance scale targets vary depending on region and redundancy of the SAN i
|Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 | |Maximum total capacity (TiB) |100 |100 | |Maximum base capacity (TiB) |100 |100 |
-|Minimum total capacity (TiB) |64 |64 |
+|Minimum total capacity (TiB) |1 |1 |
|Maximum total IOPS |500,000 |500,000 | |Maximum total throughput (MB/s) |8,000 |8,000 |
ZRS is only available in France Central.
|Maximum number of Elastic SAN that can be deployed per subscription per region |5 | |Maximum total capacity (TiB) |200 | |Maximum base capacity (TiB) |100 |
-|Minimum total capacity (TiB) |64 |
+|Minimum total capacity (TiB) |1 |
|Maximum total IOPS |500,000 | |Maximum total throughput (MB/s) |8,000 |
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 09/28/2022 Last updated : 10/24/2022
The AD DS account created by the cmdlet represents the storage account. If the A
> The `Join-AzStorageAccount` cmdlet will create an AD account to represent the storage account (file share) in AD. You can choose to register as a computer account or service logon account, see [FAQ](./storage-files-faq.md#security-authentication-and-access-control) for details. Service logon account passwords can expire in AD if they have a default password expiration age set on the AD domain or OU. Because computer account password changes are driven by the client machine and not AD, they don't expire in AD, although client computers change their passwords by default every 30 days. > For both account types, we recommend you check the password expiration age configured and plan to [update the password of your storage account identity](storage-files-identity-ad-ds-update-password.md) of the AD account before the maximum password age. You can consider [creating a new AD Organizational Unit in AD](/powershell/module/activedirectory/new-adorganizationalunit) and disabling password expiration policy on [computer accounts](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj852252(v=ws.11)) or service logon accounts accordingly.
-You must run the script below in PowerShell 5.1 on a device that's domain joined to your on-premises AD DS, using an on-premises AD DS credential that's synced to your Azure AD. The on-premises AD DS credential must have either **Owner** or **Contributor** Azure role on the storage account and have permissions to create a service logon account or computer account in the target AD. Replace the placeholder values with your own before executing the script.
+You must run the script below in PowerShell 5.1 on a device that's domain joined to your on-premises AD DS, using an on-premises AD DS credential that's synced to your Azure AD. To follow the [Least privilege principle](../../role-based-access-control/best-practices.md), the on-premises AD DS credential must have the following Azure roles:
+
+- **Reader** on the resource group where the target storage account is located.
+- **Contributor** on the storage account to be joined to AD DS (**Owner** will also work).
+
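For reference, a minimal Azure PowerShell sketch of granting these roles is shown below; the account, resource group, and storage account names are hypothetical, so replace them with your own values.

```azurepowershell
# Hypothetical names; replace with your own values
$subscriptionId     = "<your-subscription-id>"
$resourceGroupName  = "myResourceGroup"
$storageAccountName = "mystorageaccount"
$adminUpn           = "hybridadmin@contoso.com"

# Reader on the resource group that contains the storage account
New-AzRoleAssignment -SignInName $adminUpn -RoleDefinitionName "Reader" `
    -ResourceGroupName $resourceGroupName

# Contributor on the storage account that will be joined to AD DS
New-AzRoleAssignment -SignInName $adminUpn -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName"
```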
+The AD DS credential must also have permissions to create a service logon account or computer account in the target AD. Replace the placeholder values with your own before executing the script.
```PowerShell # Change the execution policy to unblock importing AzFilesHybrid.psm1 module
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
description: Learn how to create and use Azure file shares with the Azure portal
Previously updated : 04/05/2022 Last updated : 10/24/2022 -+ ms.devlang: azurecli #Customer intent: As an IT admin new to Azure Files, I want to try out Azure Files so I can determine whether I want to subscribe to the service. # Quickstart: Create and use an Azure file share
-[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Windows, Linux, and macOS. This guide shows you how to create an SMB Azure file share using either the Azure portal, Azure CLI, or Azure PowerShell module.
+[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. You can mount Azure file shares in Windows, Linux, and macOS operating systems. This article shows you how to create an SMB Azure file share using either the Azure portal, Azure CLI, or Azure PowerShell.
## Applies to+
+This Quickstart only applies to SMB Azure file shares. Standard and premium SMB file shares support locally redundant storage (LRS) and zone-redundant storage (ZRS). Standard file shares also support geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS) options. For more information, see [Azure Storage redundancy](../common/storage-redundancy.md).
+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-## Pre-requisites
+## Get started
# [Portal](#tab/azure-portal)
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-If you'd like to install and use PowerShell locally, this guide requires the Azure PowerShell module Az version 7.0.0 or later. To find out which version of the Azure PowerShell module you're running, execute `Get-InstalledModule Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID, such as `Login-AzAccount -TenantId <TenantId>`.
+If you'd like to install and use PowerShell locally, you'll need the Azure PowerShell module Az version 7.0.0 or later. We recommend installing the latest available version. To find out which version of the Azure PowerShell module you're running, execute `Get-InstalledModule Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Login-AzAccount` to log in to your Azure account. To use multi-factor authentication, you'll need to supply your Azure tenant ID, such as `Login-AzAccount -TenantId <TenantId>`.
# [Azure CLI](#tab/azure-cli)
If you'd like to install and use PowerShell locally, this guide requires the Azu
- This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. -- By default, Azure CLI commands return JavaScript Object Notation (JSON). JSON is the standard way to send and receive messages from REST APIs. To facilitate working with JSON responses, some of the examples in this article use the *query* parameter on Azure CLI commands. This parameter uses the [JMESPath query language](http://jmespath.org/) to parse JSON. To learn more about how to use the results of Azure CLI commands by following the JMESPath query language, see the [JMESPath tutorial](http://jmespath.org/tutorial.html).
+- By default, Azure CLI commands return JavaScript Object Notation (JSON), which is the standard way to send and receive messages from REST APIs. To facilitate working with JSON responses, some of the examples in this article use the *query* parameter on Azure CLI commands. This parameter uses the [JMESPath query language](http://jmespath.org/) to parse JSON. To learn more about how to use the results of Azure CLI commands by following the JMESPath query language, see the [JMESPath tutorial](http://jmespath.org/tutorial.html).
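As a small illustration, assuming an existing storage account named *mystorageaccount*, the query parameter can return a single property as plain text:

```azurecli
# Return only the Azure Files endpoint of a storage account
az storage account show \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --query "primaryEndpoints.file" \
    --output tsv
```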
To create an Azure file share:
![A screenshot of the data storage section of the storage account; select file shares.](media/storage-how-to-use-files-portal/create-file-share-1.png)
-1. On the menu at the top of the **File service** page, click **+ File share**. The **New file share** page drops down.
+1. On the menu at the top of the **File service** page, select **+ File share**. The **New file share** page drops down.
1. In **Name** type *myshare*. Leave **Transaction optimized** selected for **Tier**. 1. Select **Create** to create the Azure file share.
To create a new directory named *myDirectory* at the root of your Azure file sha
1. On the **File share settings** page, select the **myshare** file share. The page for your file share opens, indicating *no files found*. 1. On the menu at the top of the page, select **+ Add directory**. The **New directory** page drops down.
-1. Type *myDirectory* and then click **OK**.
+1. Type *myDirectory* and then select **OK**.
# [PowerShell](#tab/azure-powershell)
az storage directory create \
# [Portal](#tab/azure-portal)
-To demonstrate uploading a file, you first need to create or select a file to be uploaded. You may do this by whatever means you see fit. Once you've decided on the file you would like to upload:
+First, you need to create or select a file to upload. Do this by whatever means you see fit. When you've decided on the file you'd like to upload, follow these steps:
1. Select the **myDirectory** directory. The **myDirectory** panel opens. 1. In the menu at the top, select **Upload**. The **Upload files** panel opens.
To demonstrate uploading a file, you first need to create or select a file to be
1. Select the folder icon to open a window to browse your local files. 1. Select a file and then select **Open**.
-1. In the **Upload files** page, verify the file name and then select **Upload**.
+1. In the **Upload files** page, verify the file name, and then select **Upload**.
1. When finished, the file should appear in the list on the **myDirectory** page. # [PowerShell](#tab/azure-powershell)
Set-AzStorageFileContent `
If you're running PowerShell locally, substitute `~/CloudDrive/` with a path that exists on your machine.
-After uploading the file, you can use [Get-AzStorageFile](/powershell/module/Az.Storage/Get-AzStorageFile) cmdlet to check to make sure that the file was uploaded to your Azure file share.
+After uploading the file, you can use the [Get-AzStorageFile](/powershell/module/Az.Storage/Get-AzStorageFile) cmdlet to check to make sure that the file was uploaded to your Azure file share.
```azurepowershell-interactive Get-AzStorageFile `
Get-AzStorageFileContent `
-Destination "SampleDownload.txt" ```
-After downloading the file, you can use the `Get-ChildItem` to see that the file has been downloaded to your PowerShell Cloud Shell's scratch drive.
+After downloading the file, you can use the `Get-ChildItem` cmdlet to see that the file has been downloaded to your PowerShell Cloud Shell's scratch drive.
```azurepowershell-interactive Get-ChildItem | Where-Object { $_.Name -eq "SampleDownload.txt" }
az storage file download \
# [PowerShell](#tab/azure-powershell)
-When you are done, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group and all resources contained in the resource group.
+When you're done, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group and all resources contained in the resource group.
```azurepowershell-interactive Remove-AzResourceGroup -Name myResourceGroup
Remove-AzResourceGroup -Name myResourceGroup
# [Azure CLI](#tab/azure-cli)
-When you are done, you can use the [`az group delete`](/cli/azure/group) command to delete the resource group and all resources contained in the resource group:
+When you're done, you can use the [`az group delete`](/cli/azure/group) command to delete the resource group and all resources contained in the resource group:
```azurecli-interactive az group delete --name $resourceGroupName
storage Queues Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac-attributes.md
Previously updated : 09/14/2022 Last updated : 10/19/2022
# Actions and attributes for Azure role assignment conditions for Azure queues
-> [!IMPORTANT]
-> Azure ABAC and Azure role assignment conditions are currently in preview.
->
-> This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- This article describes the supported attribute dictionaries that can be used in conditions on Azure role assignments for each Azure Storage [DataAction](../../role-based-access-control/role-definitions.md#dataactions). For the list of Queue service operations that are affected by a specific permission or DataAction, see [Permissions for Queue service operations](/rest/api/storageservices/authorize-with-azure-active-directory#permissions-for-queue-service-operations). To understand the role assignment condition format, see [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md). + ## Azure Queue storage actions This section lists the supported Azure Queue storage actions you can target for conditions.
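For illustration only, a hypothetical condition attached to a role assignment with Azure PowerShell might look like the following sketch; the principal, scope, and queue name are placeholders, and the attribute path should be checked against the tables in this article.

```azurepowershell
# Hypothetical example: limit message writes under "Storage Queue Data Contributor"
# to a queue named 'sample-queue'.
$condition = @"
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/queueServices/queues/messages/write'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/queueServices/queues:name] StringEquals 'sample-queue'
 )
)
"@

New-AzRoleAssignment -ObjectId "<principal-object-id>" `
    -RoleDefinitionName "Storage Queue Data Contributor" `
    -Scope "<storage-account-resource-id>" `
    -Condition $condition `
    -ConditionVersion "2.0"
```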
This section lists the Azure Queue storage attributes you can use in your condit
## See also -- [Example Azure role assignment conditions (preview)](../blobs\storage-auth-abac-examples.md)-- [Azure role assignment condition format and syntax (preview)](../../role-based-access-control/conditions-format.md)-- [Troubleshoot Azure role assignment conditions (preview)](../../role-based-access-control/conditions-troubleshoot.md)
+- [Example Azure role assignment conditions](../blobs\storage-auth-abac-examples.md)
+- [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)
+- [Troubleshoot Azure role assignment conditions](../../role-based-access-control/conditions-troubleshoot.md)
storage Queues Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-auth-abac.md
Previously updated : 09/14/2022 Last updated : 10/19/2022
# Authorize access to queues using Azure role assignment conditions
-> [!IMPORTANT]
-> Azure ABAC is currently in preview and is provided without a service level agreement. It is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- Attribute-based access control (ABAC) is an authorization strategy that defines access levels based on attributes associated with an access request such as the security principal, the resource, the environment and the request itself. With ABAC, you can grant a security principal access to a resource based on [Azure role assignment conditions](../../role-based-access-control/conditions-overview.md). + ## Overview of conditions in Azure Storage You can [use of Azure Active Directory](../common/authorize-data-access.md) (Azure AD) to authorize requests to Azure storage resources using Azure RBAC. Azure RBAC helps you manage access to resources by defining who has access to resources and what they can do with those resources, using role definitions and role assignments. Azure Storage defines a set of Azure [built-in roles](../../role-based-access-control/built-in-roles.md#storage) that encompass common sets of permissions used to access Azure storage data. You can also define custom roles with select sets of permissions. Azure Storage supports role assignments for both storage accounts and blob containers or queues.
Azure RBAC currently supports 2,000 role assignments in a subscription. If you n
- [What is Azure attribute-based access control (Azure ABAC)?](../../role-based-access-control/conditions-overview.md) - [FAQ for Azure role assignment conditions](../../role-based-access-control/conditions-faq.md) - [Azure role assignment condition format and syntax](../../role-based-access-control/conditions-format.md)-- [Scale the management of Azure role assignments by using conditions and custom security attributes (preview)](../../role-based-access-control/conditions-custom-security-attributes-example.md)
+- [Scale the management of Azure role assignments by using conditions and custom security attributes](../../role-based-access-control/conditions-custom-security-attributes-example.md)
- [Security considerations for Azure role assignment conditions in Azure Storage](..\blobs\storage-auth-abac-security.md)
stream-analytics Blob Storage Azure Data Lake Gen2 Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/blob-storage-azure-data-lake-gen2-output.md
To receive exactly once delivery for your Blob storage or ADLS Gen2 account, you
### Regions Availability
-This feature is currently supported in West Central US, Japan East and Canada Central.
+The feature is currently supported in West Central US, Japan East, Canada Central, Korea Central, North Europe, and South India.
## Blob output files
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
The **Manage fields** transformation allows you to add, remove, or rename fields
:::image type="content" source="./media/no-code-stream-processing/manage-field-transformation.png" alt-text="Screenshot that shows selections for managing fields." lightbox="./media/no-code-stream-processing/manage-field-transformation.png" :::
-You can also add new field with the **Build-in Functions** to aggregate the data from upstream. Currently, the build-in functions we support are some functions in **String Functions**, **Date and Time Functions**, **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics.md).
+You can also add a new field with the **Built-in Functions** to aggregate the data from upstream. Currently, the supported built-in functions are a subset of the **String Functions**, **Date and Time Functions**, and **Mathematical Functions**. To learn more about the definitions of these functions, see [Built-in Functions (Azure Stream Analytics)](/stream-analytics-query/built-in-functions-azure-stream-analytics).
:::image type="content" source="./media/no-code-stream-processing/build-in-functions-managed-fields.png" alt-text="Screenshot that shows the build-in functions." lightbox="./media/no-code-stream-processing/build-in-functions-managed-fields.png" :::
synapse-analytics Backup And Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/backup-and-restore.md
order by run_id desc
This feature enables you to manually trigger snapshots to create restore points of your data warehouse before and after large modifications. This capability ensures that restore points are logically consistent, which provides additional data protection in case of any workload interruptions or user errors for quick recovery time. User-defined restore points are available for seven days and are automatically deleted on your behalf. You cannot change the retention period of user-defined restore points. **42 user-defined restore points** are guaranteed at any point in time so they must be [deleted](/powershell/module/azurerm.sql/remove-azurermsqldatabaserestorepoint) before creating another restore point. You can trigger snapshots to create user-defined restore points through [PowerShell](/powershell/module/az.synapse/new-azsynapsesqlpoolrestorepoint?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.jsont#examples) or the Azure portal. > [!NOTE]
-> If you require restore points longer than 7 days, please vote for this capability [here](https://feedback.azure.com/d365community/idea/4c446fd9-0b25-ec11-b6e6-000d3a4f07b8). You can also create a user-defined restore point and restore from the newly created restore point to a new data warehouse. Once you have restored, you have the dedicated SQL pool online and can pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Synapse storage rate. If you need an active copy of the restored data warehouse, you can resume which should take only a few minutes.
+> If you require restore points longer than 7 days, please vote for this capability [here](https://feedback.azure.com/d365community/idea/4c446fd9-0b25-ec11-b6e6-000d3a4f07b8).
+
+> [!NOTE]
+> If you're looking for a long-term retention (LTR) backup approach:
+> 1. Create a new user-defined restore point, or you can use one of the automatically generated restore points.
+> 2. Restore from the newly created restore point to a new data warehouse.
+> 3. After the restore completes, the dedicated SQL pool is online. Pause it indefinitely to save compute costs. The paused database incurs storage charges at the Azure Synapse storage rate.
+>
+> If you need an active copy of the restored data warehouse, you can resume, which should take only a few minutes.
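As a rough Azure PowerShell sketch of that sequence (workspace, pool, and label names are hypothetical, and the restore step itself is omitted here):

```azurepowershell
# Hypothetical names; replace with your own values
$resourceGroupName = "myResourceGroup"
$workspaceName     = "myWorkspace"
$sqlPoolName       = "mySqlPool"

# 1. Create a user-defined restore point on the source dedicated SQL pool
New-AzSynapseSqlPoolRestorePoint -ResourceGroupName $resourceGroupName `
    -WorkspaceName $workspaceName -Name $sqlPoolName `
    -RestorePointLabel "before-ltr-copy"

# 2. Restore from that restore point to a new dedicated SQL pool (not shown here).

# 3. Pause the restored pool indefinitely to save compute costs
Suspend-AzSynapseSqlPool -ResourceGroupName $resourceGroupName `
    -WorkspaceName $workspaceName -Name "mySqlPool_restored"
```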
### Restore point retention
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
To view the maintenance schedule that has been applied to your Synapse SQL pool,
A maintenance schedule can be updated or changed at any time. If the selected instance is going through an active maintenance cycle, the settings will be saved. They'll become active during the next identified maintenance period. [Learn more](../../service-health/resource-health-overview.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) about monitoring your data warehouse during an active maintenance event.
+> [!NOTE]
+> If you're using DW400c or lower, you can't change the maintenance schedule, because DW400c and lower data warehouse tiers might complete maintenance outside of a designated maintenance window.
++ ## Identifying the primary and secondary windows The primary and secondary windows must have separate day ranges. An example is a primary window of Tuesday–Thursday and a secondary window of Saturday–Sunday.
virtual-desktop App Attach File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-file-share.md
If you're storing your MSIX applications in Azure Files, then for your session h
| Azure object | Required role | Role function | |--|--|--|
-| Session host (VM computer objects)| Storage File Data SMB Share Contributor | Read and Execute, Read, List folder contents |
+| Session host (VM computer objects)| Storage File Data SMB Share Reader | Allows for read access to Azure File Share over SMB |
| Admins on File Share | Storage File Data SMB Share Elevated Contributor | Full control | | Users on File Share | Storage File Data SMB Share Contributor | Read and Execute, Read, List folder contents |
To assign session host VMs permissions for the storage account and file share:
6. Join the storage account to AD DS by following the instructions in [Part one: enable AD DS authentication for your Azure file shares](../storage/files/storage-files-identity-ad-ds-enable.md#option-one-recommended-use-azfileshybrid-powershell-module).
-7. Assign the synced AD DS group to Azure AD, and assign the storage account the Storage File Data SMB Share Contributor role.
+7. Assign the synced AD DS group to Azure AD, and assign the storage account the Storage File Data SMB Share Reader role.
8. Mount the file share to any session host by following the instructions in [Part two: assign share-level permissions to an identity](../storage/files/storage-files-identity-ad-ds-assign-permissions.md).
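For step 7, a minimal Azure PowerShell sketch of the role assignment (the group, resource group, and storage account names are hypothetical) might look like this:

```azurepowershell
# Hypothetical names; replace with your own values
$storageAccount   = Get-AzStorageAccount -ResourceGroupName "myResourceGroup" -Name "mystorageaccount"
$sessionHostGroup = Get-AzADGroup -DisplayName "AVD Session Host Computers"

# Grant the synced session host group read access to the file share over SMB
New-AzRoleAssignment -ObjectId $sessionHostGroup.Id `
    -RoleDefinitionName "Storage File Data SMB Share Reader" `
    -Scope $storageAccount.Id
```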
virtual-desktop Environment Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/environment-setup.md
Title: Azure Virtual Desktop environment - Azure
-description: Learn about the basic elements of a Azure Virtual Desktop environment, like host pools and app groups.
+description: Learn about the basic elements of an Azure Virtual Desktop environment, like host pools and app groups.
Previously updated : 05/02/2022 Last updated : 10/24/2022
You can set additional properties on the host pool to change its load-balancing
## App groups
-An app group is a logical grouping of applications installed on session hosts in the host pool. An app group can be one of two types:
+An app group is a logical grouping of applications installed on session hosts in the host pool.
-- RemoteApp, where users access the RemoteApps you individually select and publish to the app group-- Desktop, where users access the full desktop
+An app group can be one of two types:
-By default, a desktop app group (named "Desktop Application Group") is automatically created whenever you create a host pool. You can remove this app group at any time. However, you can't create another desktop app group in the host pool while a desktop app group exists. To publish RemoteApps, you must create a RemoteApp app group. You can create multiple RemoteApp app groups to accommodate different worker scenarios. Different RemoteApp app groups can also contain overlapping RemoteApps.
+- RemoteApp, where users access the RemoteApps you individually select and publish to the app group.
+- Desktop, where users access the full desktop.
+
+Each host pool has a preferred app group type that dictates whether users see RemoteApp or Desktop apps in their feed if both resources have been published to the same user. By default, Azure Virtual Desktop automatically creates a Desktop app group named "Desktop Application Group" whenever you create a host pool and sets the host pool's preferred app group type to **Desktop**. You can remove the Desktop app group at any time. If you want your users to only see RemoteApps in their feed, you should set the **Preferred App Group Type** value to **RemoteApp**. You can't create another Desktop app group in the host pool while a Desktop app group exists.
-To publish resources to users, you must assign them to app groups. When assigning users to app groups, consider the following things:
+You must create a RemoteApp app group to publish RemoteApp apps. You can create multiple RemoteApp app groups to accommodate different worker scenarios. Different RemoteApp app groups can also contain overlapping RemoteApps. To publish resources to users, you must assign them to app groups.
-- We don't support assigning both the RemoteApp and desktop app groups in a single host pool to the same user. Doing so will cause a single user to have two user sessions in a single host pool. Users aren't supposed to have two active user sessions at the same time, as this can cause the following things to happen:
- - The session hosts become overloaded
- - Users get stuck when trying to login
- - Connections won't work
- - The screen turns black
- - The application crashes
- - Other negative effects on end-user experience and session performance
-- A user can be assigned to multiple app groups within the same host pool, and their feed will be an accumulation of both app groups.-- Personal host pools only allow and support RemoteApp app groups.
+When assigning users to app groups, consider the following things:
+
+- Azure Virtual Desktop doesn't support assigning both RemoteApp and Desktop app groups in a single host pool to the same user. Doing so will cause that user to have two user sessions in a single host pool at the same time. Users aren't supposed to have two user sessions in a single host pool, as this can cause the following things to happen:
+
+ - The session hosts become overloaded.
+ - Users get stuck when trying to sign in.
+ - Connections won't work.
+ - The screen turns black.
+ - The application crashes.
+ - Other negative effects on end-user experience and session performance.
+
+- You can assign a user to multiple app groups within the same host pool. Their feed will show apps from all their assigned app groups.
+- Personal host pools only allow and support RemoteApp app groups.
+
+>[!NOTE]
+>If your host pool's Preferred App Group Type is set to **Undefined**, that means that you haven't set the value yet. You must finish configuring your host pool by setting its Preferred App Group Type before you start using it to prevent app incompatibility and session host overload issues.
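For example, one way to set this value is with the Az.DesktopVirtualization PowerShell module; the following is a sketch with hypothetical names and assumes the module accepts `-PreferredAppGroupType` on `Update-AzWvdHostPool`:

```azurepowershell
# Set the host pool's preferred app group type so users see RemoteApp resources in their feed
Update-AzWvdHostPool -ResourceGroupName "myResourceGroup" `
    -Name "myHostPool" `
    -PreferredAppGroupType "RemoteApp"
```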
## Workspaces
A workspace is a logical grouping of application groups in Azure Virtual Desktop
## End users
-After you've assigned users to their app groups, they can connect to a Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
+After you've assigned users to their app groups, they can connect to an Azure Virtual Desktop deployment with any of the Azure Virtual Desktop clients.
## User sessions
To learn how to connect to Azure Virtual Desktop, see one of the following artic
- [Connect with a web browser](./user-documentation/connect-web.md) - [Connect with the Android client](./user-documentation/connect-android.md) - [Connect with the macOS client](./user-documentation/connect-macos.md)-- [Connect with the iOS client](./user-documentation/connect-ios.md)
+- [Connect with the iOS client](./user-documentation/connect-ios.md)
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
# Required URLs for Azure Virtual Desktop
-In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs that your session host virtual machines (VMs) can access them anytime. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. These URLs could be blocked if you're using [Azure Firewall](../firewall/protect-azure-virtual-desktop.md) or a third-party firewall or proxy service. Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article.
+In order to deploy and make Azure Virtual Desktop available to your users, you must allow specific URLs so that your session host virtual machines (VMs) can access them at any time. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. These URLs could be blocked if you're using [Azure Firewall](../firewall/protect-azure-virtual-desktop.md) or a third-party firewall or [proxy service](proxy-server-support.md). Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article.
+
+>[!IMPORTANT]
+>Azure Virtual Desktop doesn't support proxy services that do the following:
+>1. SSL termination (break and inspect)
+>2. Require authentication
You can validate that your session host VMs can connect to these URLs by following the steps to run the [Required URL Check tool](required-url-check-tool.md). The Required URL Check tool will validate each URL and show whether your session host VMs can access them. You can only use the tool for deployments in the Azure public cloud; it doesn't check access for sovereign clouds.
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Title: Expand virtual hard disks on a Linux VM description: Learn how to expand virtual hard disks on a Linux VM with the Azure CLI.-+ Previously updated : 09/08/2022- Last updated : 09/07/2022+
-# Expand virtual hard disks on a Linux VM with the Azure CLI
+# Expand virtual hard disks on a Linux VM
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-This article describes how to expand managed disks for a Linux virtual machine (VM) with the Azure CLI. You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks.
+This article describes how to expand managed disks for a Linux virtual machine (VM). You can [add data disks](add-disk.md) to provide for additional storage space, and you can also expand an existing data disk. The default virtual hard disk size for the operating system (OS) is typically 30 GB on a Linux VM in Azure. This article covers expanding either OS disks or data disks.
> [!WARNING]
-> Always make sure that your filesystem is in a healthy state, your disk partition table type will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
+> Always make sure that your filesystem is in a healthy state, your disk partition table type (GPT or MBR) will support the new size, and ensure your data is backed up before you perform disk expansion operations. For more information, see the [Azure Backup quickstart](../../backup/quick-backup-vm-portal.md).
+
+## Identify Azure data disk object within the operating system
+
+When you're expanding a data disk and several data disks are present on the VM, it may be difficult to relate the Azure LUNs to the Linux devices. If the OS disk needs expansion, it will be clearly labeled as the OS disk in the Azure portal.
+
+Start by identifying the relationship between disk utilization, mount point, and device, with the ```df``` command.
+
+```bash
+linux:~ # df -Th
+Filesystem Type Size Used Avail Use% Mounted on
+/dev/sda1 xfs 97G 1.8G 95G 2% /
+<truncated>
+/dev/sdd1 ext4 32G 30G 727M 98% /opt/db/data
+/dev/sde1 ext4 32G 49M 30G 1% /opt/db/log
+```
+
+Here we can see, for example, the ```/opt/db/data``` filesystem is nearly full, and is located on the ```/dev/sdd1``` partition. The output of ```df``` will show the device path regardless of whether the disk is mounted by device path or the (preferred) UUID in the fstab. Also take note of the Type column, indicating the format of the filesystem. This will be important later.
+
+Now locate the LUN which correlates to ```/dev/sdd``` by examining the contents of ```/dev/disk/azure/scsi1```. The output of the following ```ls``` command will show that the device known as ```/dev/sdd``` within the Linux OS is located at LUN1 when looking in the Azure portal.
+
+```bash
+linux:~ # ls -alF /dev/disk/azure/scsi1/
+total 0
+drwxr-xr-x. 2 root root 140 Sep 9 21:54 ./
+drwxr-xr-x. 4 root root 80 Sep 9 21:48 ../
+lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun0 -> ../../../sdc
+lrwxrwxrwx. 1 root root 12 Sep 9 21:48 lun1 -> ../../../sdd
+lrwxrwxrwx. 1 root root 13 Sep 9 21:48 lun1-part1 -> ../../../sdd1
+lrwxrwxrwx. 1 root root 12 Sep 9 21:54 lun2 -> ../../../sde
+lrwxrwxrwx. 1 root root 13 Sep 9 21:54 lun2-part1 -> ../../../sde1
+```
## Expand an Azure Managed Disk ### Expand without downtime
-You can now expand your managed disks without deallocating your VM.
+You may now be able to expand your managed disks without deallocating your VM.
This feature has the following limitations: [!INCLUDE [virtual-machines-disks-expand-without-downtime-restrictions](../../../includes/virtual-machines-disks-expand-without-downtime-restrictions.md)]
-### Get started
+To register for the feature, use the following command:
+
+```azurecli
+az feature register --namespace Microsoft.Compute --name LiveResize
+```
+
+It may take a few minutes for registration to complete. To confirm that you've registered, use the following command:
+
+```azurecli
+az feature show --namespace Microsoft.Compute --name LiveResize
+```
+
+### Expand Azure Managed Disk
+
+# [Azure CLI](#tab/azure-cli)
Make sure that you have the latest [Azure CLI](/cli/azure/install-az-cli2) installed and are signed in to an Azure account by using [az login](/cli/azure/reference-index#az-login).
This article requires an existing VM in Azure with at least one data disk attach
In the following samples, replace example parameter names such as *myResourceGroup* and *myVM* with your own values. > [!IMPORTANT]
-> If your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
+> If you've enabled **LiveResize** and your disk meets the requirements in [Expand without downtime](#expand-without-downtime), you can skip step 1 and 3.
1. Operations on virtual hard disks can't be performed with the VM running. Deallocate your VM with [az vm deallocate](/cli/azure/vm#az-vm-deallocate). The following example deallocates the VM named *myVM* in the resource group named *myResourceGroup*:
In the following samples, replace example parameter names such as *myResourceGro
az vm start --resource-group myResourceGroup --name myVM ``` + ## Expand a disk partition and filesystem
-To use an expanded disk, expand the underlying partition and filesystem.
-
-1. SSH to your VM with the appropriate credentials. You can see the public IP address of your VM with [az vm show](/cli/azure/vm#az-vm-show):
-
- ```azurecli
- az vm show --resource-group myResourceGroup --name myVM -d --query [publicIps] --output tsv
- ```
-
-1. Expand the underlying partition and filesystem.
-
- a. If the disk is already mounted, unmount it:
-
- ```bash
- sudo umount /dev/sdc1
- ```
-
- b. Use `parted` to view disk information and resize the partition:
-
- ```bash
- sudo parted /dev/sdc
- ```
-
- View information about the existing partition layout with `print`. The output is similar to the following example, which shows the underlying disk is 215 GB:
-
- ```bash
- GNU Parted 3.2
- Using /dev/sdc1
- Welcome to GNU Parted! Type 'help' to view a list of commands.
- (parted) print
- Model: Unknown Msft Virtual Disk (scsi)
- Disk /dev/sdc1: 215GB
- Sector size (logical/physical): 512B/4096B
- Partition Table: loop
- Disk Flags:
-
- Number Start End Size File system Flags
- 1 0.00B 107GB 107GB ext4
- ```
-
- c. Expand the partition with `resizepart`. Enter the partition number, *1*, and a size for the new partition:
-
- ```bash
- (parted) resizepart
- Partition number? 1
- End? [107GB]? 215GB
- ```
-
- d. To exit, enter `quit`.
-
-1. With the partition resized, verify the partition consistency with `e2fsck`:
-
- ```bash
- sudo e2fsck -f /dev/sdc1
- ```
-
-1. Resize the filesystem with `resize2fs`:
-
- ```bash
- sudo resize2fs /dev/sdc1
- ```
-
-1. Mount the partition to the desired location, such as `/datadrive`:
-
- ```bash
- sudo mount /dev/sdc1 /datadrive
- ```
-
-1. To verify the data disk has been resized, use `df -h`. The following example output shows the data drive */dev/sdc1* is now 200 GB:
-
- ```bash
- Filesystem Size Used Avail Use% Mounted on
- /dev/sdc1 197G 60M 187G 1% /datadrive
- ```
-
-## Next steps
-* If you need additional storage, you can also [add data disks to a Linux VM](add-disk.md).
-* For more information about disk encryption, see [Azure Disk Encryption for Linux VMs](disk-encryption-overview.md).
+> [!NOTE]
+> While there are several tools that may be used for performing the partition resizing, the tools selected here are the same tools used by certain automated processes such as cloud-init. Using the ```parted``` tool also provides more universal compatibility with GPT disks, as older versions of some tools such as ```fdisk``` did not support the GUID Partition Table (GPT).
+
+The remainder of this article describes how to increase the size of a volume at the OS level, using the OS disk as the example. If the disk needing expansion is a data disk, the following procedures can be used as a guideline, substituting the disk device (for example ```/dev/sda```), volume names, mount points, and filesystem formats, as necessary.
+
+### Increase the size of the OS disk
+
+The following instructions apply to endorsed Linux distributions.
+
+> [!NOTE]
+> Before you proceed, make a full backup copy of your VM, or at a minimum take a snapshot of your OS disk.
+
+# [Ubuntu](#tab/ubuntu)
+
+On Ubuntu 16.x and newer, the root partition and filesystem will be automatically expanded by cloud-init to utilize all free contiguous space on the root disk, provided there's a small amount of free space for the resize operation. In this case, the sequence is simply:
+
+1. Increase the size of the OS disk as detailed previously.
+1. Restart the VM, and then access the VM using the **root** user account.
+1. Verify that the OS disk now displays an increased file system size.
+
+As shown in the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
+
+```
+user@ubuntu:~# df -Th
+Filesystem Type Size Used Avail Use% Mounted on
+udev devtmpfs 314M 0 314M 0% /dev
+tmpfs tmpfs 65M 2.3M 63M 4% /run
+/dev/sda1 ext4 97G 1.8G 95G 2% /
+tmpfs tmpfs 324M 0 324M 0% /dev/shm
+tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
+tmpfs tmpfs 324M 0 324M 0% /sys/fs/cgroup
+/dev/sda15 vfat 105M 3.6M 101M 4% /boot/efi
+/dev/sdb1 ext4 20G 44M 19G 1% /mnt
+tmpfs tmpfs 65M 0 65M 0% /run/user/1000
+user@ubuntu:~#
+```
+
+# [SuSE](#tab/suse)
+
+To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```
+ linux:~ # sudo -i
+ ```
+
+1. Use the following command to install the **growpart** package, which will be used to resize the partition, if it is not already present:
+
+ ```
+ linux:~ # zypper install growpart
+ ```
+
+1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
+
+ ```
+ linux:~ # lsblk
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda 8:0 0 48G 0 disk
+ ├─sda1 8:1 0 2M 0 part
+ ├─sda2 8:2 0 512M 0 part /boot/efi
+ ├─sda3 8:3 0 1G 0 part /boot
+ └─sda4 8:4 0 28.5G 0 part /
+ sdb 8:16 0 4G 0 disk
+ └─sdb1 8:17 0 4G 0 part /mnt/resource
+ ```
+
+1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step:
+
+ ```
+ linux:~ # growpart /dev/sda 4
+ CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263
+ ```
+
+1. Run the `lsblk` command again to check whether the partition has been increased.
+
+ The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
+
+ ```
+ linux:~ # lsblk
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda 8:0 0 48G 0 disk
+ ├─sda1 8:1 0 2M 0 part
+ ├─sda2 8:2 0 512M 0 part /boot/efi
+ ├─sda3 8:3 0 1G 0 part /boot
+ └─sda4 8:4 0 46.5G 0 part /
+ sdb 8:16 0 4G 0 disk
+ └─sdb1 8:17 0 4G 0 part /mnt/resource
+ ```
+
+1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag:
+
+ ```
+ linux:~ # lsblk -f
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ sda
+ ├─sda1
+ ├─sda2 vfat EFI AC67-D22D /boot/efi
+ ├─sda3 xfs BOOT 5731a128-db36-4899-b3d2-eb5ae8126188 /boot
+ └─sda4 xfs ROOT 70f83359-c7f2-4409-bba5-37b07534af96 /
+ sdb
+ └─sdb1 ext4 8c4ca904-cd93-4939-b240-fb45401e2ec6 /mnt/resource
+ ```
+
+1. Based on the file system type, use the appropriate commands to resize the file system.
+
+ For **xfs**, use this command:
+
+ ```
+ linux:~ #xfs_growfs /
+ ```
+
+ Example output:
+
+ ```
+ linux:~ # xfs_growfs /
+ meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks
+ = sectsz=512 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0 rmapbt=0
+ = reflink=0
+ data = bsize=4096 blocks=7470331, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=3647, version=2
+ = sectsz=512 sunit=0 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 7470331 to 12188923
+ ```
+
+ For **ext4**, use this command:
+
+ ```
+ linux:~ #resize2fs /dev/sda4
+ ```
+
+1. Verify the increased file system size for **df -Th** by using this command:
+
+ ```
+ linux:~ #df -Thl
+ ```
+
+ Example output:
+
+ ```
+ linux:~ # df -Thl
+ Filesystem Type Size Used Avail Use% Mounted on
+ devtmpfs devtmpfs 445M 4.0K 445M 1% /dev
+ tmpfs tmpfs 458M 0 458M 0% /dev/shm
+ tmpfs tmpfs 458M 14M 445M 3% /run
+ tmpfs tmpfs 458M 0 458M 0% /sys/fs/cgroup
+ /dev/sda4 xfs 47G 2.2G 45G 5% /
+ /dev/sda3 xfs 1014M 86M 929M 9% /boot
+ /dev/sda2 vfat 512M 1.1M 511M 1% /boot/efi
+ /dev/sdb1 ext4 3.9G 16M 3.7G 1% /mnt/resource
+ tmpfs tmpfs 92M 0 92M 0% /run/user/1000
+ tmpfs tmpfs 92M 0 92M 0% /run/user/490
+ ```
+
+ In the preceding example, we can see that the file system size for the OS disk has been increased.
+
+# [Red Hat with LVM](#tab/rhellvm)
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```bash
+ [root@rhel-lvm ~]# sudo -i
+ ```
+
+1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If a different filesystem is in need of resizing, substitute the LV and mount point throughout this section.
+
+ ```shell
+ [root@rhel-lvm ~]# lsblk -f
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ fd0
+ sda
+ ├─sda1 vfat C13D-C339 /boot/efi
+ ├─sda2 xfs 8cc4c23c-fa7b-4a4d-bba8-4108b7ac0135 /boot
+ ├─sda3
+ └─sda4 LVM2_member zx0Lio-2YsN-ukmz-BvAY-LCKb-kRU0-ReRBzh
+ ├─rootvg-tmplv xfs 174c3c3a-9e65-409a-af59-5204a5c00550 /tmp
+ ├─rootvg-usrlv xfs a48dbaac-75d4-4cf6-a5e6-dcd3ffed9af1 /usr
+ ├─rootvg-optlv xfs 85fe8660-9acb-48b8-98aa-bf16f14b9587 /opt
+ ├─rootvg-homelv xfs b22432b1-c905-492b-a27f-199c1a6497e7 /home
+ ├─rootvg-varlv xfs 24ad0b4e-1b6b-45e7-9605-8aca02d20d22 /var
+ └─rootvg-rootlv xfs 4f3e6f40-61bf-4866-a7ae-5c6a94675193 /
+ ```
+
+1. Check whether there is free space in the LVM volume group (VG) containing the root partition. If there is free space, skip to step 12.
+
+ ```bash
+ [root@rhel-lvm ~]# vgdisplay rootvg
+ Volume group
+ VG Name rootvg
+ System ID
+ Format lvm2
+ Metadata Areas 1
+ Metadata Sequence No 7
+ VG Access read/write
+ VG Status resizable
+ MAX LV 0
+ Cur LV 6
+ Open LV 6
+ Max PV 0
+ Cur PV 1
+ Act PV 1
+ VG Size <63.02 GiB
+ PE Size 4.00 MiB
+ Total PE 16132
+ Alloc PE / Size 6400 / 25.00 GiB
+ Free PE / Size 9732 / <38.02 GiB
+ VG UUID lPUfnV-3aYT-zDJJ-JaPX-L2d7-n8sL-A9AgJb
+ ```
+
+ In this example, the line **Free PE / Size** shows that there is 38.02 GB free in the volume group. No disk resizing is required before adding space to the volume group.
+
+1. To increase the size of the OS disk in RHEL 7 and newer with LVM:
+
+ 1. Stop the VM.
+ 1. Increase the size of the OS disk from the portal.
+ 1. Start the VM.
+
+1. When the VM has restarted, complete the following steps:
+
+ Install the **cloud-utils-growpart** package to provide the **growpart** command (required to increase the size of the OS disk) and the **gdisk** handler for GPT disk layouts. This package is preinstalled on most marketplace images.
+
+ ```bash
+ [root@rhel-lvm ~]# yum install cloud-utils-growpart gdisk
+ ```
+
+1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**).
+
+ ```bash
+ [root@rhel-lvm ~]# pvscan
+ PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free]
+ ```
+
+1. Verify the size of the partition by using `lsblk`.
+
+ ```bash
+ [root@rhel-lvm ~]# lsblk /dev/sda4
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda4 8:4 0 63G 0 part
+ ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
+ ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
+ ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
+ ├─rootvg-homelv 253:4 0 1G 0 lvm /home
+ ├─rootvg-varlv 253:5 0 8G 0 lvm /var
+ └─rootvg-rootlv 253:6 0 2G 0 lvm /
+ ```
+
+1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so will expand the specified partition to use all the free contiguous space on the device.
+
+ ```bash
+ [root@rhel-lvm ~]# growpart /dev/sda 4
+ CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558
+ ```
+
+1. Verify that the partition has resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63G to 95G.
+
+ ```bash
+ [root@rhel-lvm ~]# lsblk /dev/sda4
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+ sda4 8:4 0 95G 0 part
+ ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
+ ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
+ ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
+ ├─rootvg-homelv 253:4 0 1G 0 lvm /home
+ ├─rootvg-varlv 253:5 0 8G 0 lvm /var
+ └─rootvg-rootlv 253:6 0 2G 0 lvm /
+ ```
+
+1. Expand the PV to use the rest of the newly expanded partition:
+
+ ```bash
+ [root@rhel-lvm ~]# pvresize /dev/sda4
+ Physical volume "/dev/sda4" changed
+ 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
+ ```
+
+1. Verify the new size of the PV is the expected size, comparing to original **[size / free]** values.
+
+ ```bash
+ [root@rhel-lvm ~]# pvscan
+ PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free]
+ ```
+
+1. Expand the desired logical volume (lv) by the desired amount, which does not need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command will also resize the file system.
+
+ ```bash
+ [root@rhel-lvm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ ```
+
+ Example output:
+
+ ```bash
+ [root@rhel-lvm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents).
+ Logical volume rootvg/rootlv successfully resized.
+ meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
+ = sectsz=4096 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0
+ data = bsize=4096 blocks=524288, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=2560, version=2
+ = sectsz=4096 sunit=1 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 524288 to 3145728
+ ```
+
+1. The `lvresize` command automatically calls the appropriate resize command for the filesystem in the LV. Verify whether **/dev/mapper/rootvg-rootlv**, which is mounted on **/**, has an increased file system size by using this command:
+
+ ```shell
+ [root@rhel-lvm ~]# df -Th /
+ ```
+
+ Example output:
+
+ ```shell
+ [root@rhel-lvm ~]# df -Th /
+ Filesystem Type Size Used Avail Use% Mounted on
+ /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% /
+ [root@rhel-lvm ~]#
+ ```
+
+> [!NOTE]
+> To use the same procedure to resize any other logical volume, change the **lv** name in step **12**.
+
+# [Red Hat with raw disks](#tab/rhelraw)
+
+1. Follow the procedure above to expand the disk in the Azure infrastructure.
+
+1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user:
+
+ ```bash
+ [root@rhel-raw ~]# sudo -i
+ ```
+
+1. When the VM has restarted, perform the following steps:
+
+ 1. Install the **cloud-utils-growpart** package to provide the **growpart** command (required to increase the size of the OS disk) and the **gdisk** handler for GPT disk layouts. This package is preinstalled on most marketplace images.
+
+ ```bash
+ [root@rhel-raw ~]# yum install cloud-utils-growpart gdisk
+ ```
+
+1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition:
+
+ ```bash
+ [root@rhel-raw ~]# lsblk -f
+ NAME FSTYPE LABEL UUID MOUNTPOINT
+ sda
+ ├─sda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
+ ├─sda2 xfs 148be922-e3ec-43b5-8705-69786b522b05 /
+ ├─sda14
+ └─sda15 vfat 788D-DC65 /boot/efi
+ sdb
+ └─sdb1 ext4 923f51ff-acbd-4b91-b01b-c56140920098 /mnt/resource
+ ```
+
+1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
+
+ ```bash
+ [root@rhel-raw ~]# gdisk -l /dev/sda
+ GPT fdisk (gdisk) version 0.8.10
+
+ Partition table scan:
+ MBR: protective
+ BSD: not present
+ APM: not present
+ GPT: present
+
+ Found valid GPT with protective MBR; using GPT.
+ Disk /dev/sda: 100663296 sectors, 48.0 GiB
+ Logical sector size: 512 bytes
+ Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
+ Partition table holds up to 128 entries
+ First usable sector is 34, last usable sector is 62914526
+ Partitions will be aligned on 2048-sector boundaries
+ Total free space is 6076 sectors (3.0 MiB)
+
+ Number Start (sector) End (sector) Size Code Name
+ 1 1026048 2050047 500.0 MiB 0700
+ 2 2050048 62912511 29.0 GiB 0700
+ 14 2048 10239 4.0 MiB EF02
+ 15 10240 1024000 495.0 MiB EF00 EFI System Partition
+ ```
+
+1. Expand the root partition, in this case **sda2**, by using the **growpart** command. This command expands the partition to use all of the contiguous free space on the disk.
+
+ ```bash
+ [root@rhel-raw ~]# growpart /dev/sda 2
+ CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262
+ ```
+
+1. Print the new partition table with **gdisk** again. Notice that partition 2 is now sized 47.0 GiB:
+
+ ```bash
+ [root@rhel-raw ~]# gdisk -l /dev/sda
+ GPT fdisk (gdisk) version 0.8.10
+
+ Partition table scan:
+ MBR: protective
+ BSD: not present
+ APM: not present
+ GPT: present
+
+ Found valid GPT with protective MBR; using GPT.
+ Disk /dev/sda: 100663296 sectors, 48.0 GiB
+ Logical sector size: 512 bytes
+ Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
+ Partition table holds up to 128 entries
+ First usable sector is 34, last usable sector is 100663262
+ Partitions will be aligned on 2048-sector boundaries
+ Total free space is 4062 sectors (2.0 MiB)
+
+ Number Start (sector) End (sector) Size Code Name
+ 1 1026048 2050047 500.0 MiB 0700
+ 2 2050048 100663261 47.0 GiB 0700
+ 14 2048 10239 4.0 MiB EF02
+ 15 10240 1024000 495.0 MiB EF00 EFI System Partition
+ ```
+
+1. Expand the file system on the partition with **xfs_growfs**, which is appropriate for a standard marketplace-generated Red Hat system:
+
+ ```bash
+ [root@rhel-raw ~]# xfs_growfs /
+ meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks
+ = sectsz=4096 attr=2, projid32bit=1
+ = crc=1 finobt=0 spinodes=0
+ data = bsize=4096 blocks=7607808, imaxpct=25
+ = sunit=0 swidth=0 blks
+ naming =version 2 bsize=4096 ascii-ci=0 ftype=1
+ log =internal bsize=4096 blocks=3714, version=2
+ = sectsz=4096 sunit=1 blks, lazy-count=1
+ realtime =none extsz=4096 blocks=0, rtextents=0
+ data blocks changed from 7607808 to 12326651
+ ```
+
+1. Verify that the new size is reflected by using the **df** command:
+
+ ```bash
+ [root@rhel-raw ~]# df -hl
+ Filesystem Size Used Avail Use% Mounted on
+ devtmpfs 452M 0 452M 0% /dev
+ tmpfs 464M 0 464M 0% /dev/shm
+ tmpfs 464M 6.8M 457M 2% /run
+ tmpfs 464M 0 464M 0% /sys/fs/cgroup
+ /dev/sda2 48G 2.1G 46G 5% /
+ /dev/sda1 494M 65M 430M 13% /boot
+ /dev/sda15 495M 12M 484M 3% /boot/efi
+ /dev/sdb1 3.9G 16M 3.7G 1% /mnt/resource
+ tmpfs 93M 0 93M 0% /run/user/1000
+ ```
virtual-machines Resize Os Disk Gpt Partition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/resize-os-disk-gpt-partition.md
- Title: Resize an OS disk that has a GPT partition
-description: This article provides instructions on how to resize an OS disk that has a GUID Partition Table (GPT) partition in Linux.
------- Previously updated : 05/03/2020----
-# Resize an OS disk that has a GPT partition
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-> [!NOTE]
-> This article applies only to OS disks that have a GUID Partition Table (GPT) partition.
-
-This article describes how to increase the size of an OS disk that has a GPT partition in Linux.
-
-## Identify whether the OS disk has an MBR or GPT partition
-
-Use the `parted` command to identify if the disk partition has been created with either a master boot record (MBR) partition or a GPT partition.
-
-### MBR partition
-
-In the following output, **Partition Table** shows a value of **msdos**. This value identifies an MBR partition.
-
-```
-[user@myvm ~]# parted -l /dev/sda
-Model: Msft Virtual Disk (scsi)
-Disk /dev/sda: 107GB
-Sector size (logical/physical): 512B/512B
-Partition Table: msdos
-Number Start End Size Type File system Flags
-1 1049kB 525MB 524MB primary ext4 boot
-2 525MB 34.4GB 33.8GB primary ext4
-[user@myvm ~]#
-```
-
-### GPT partition
-
-In the following output, **Partition Table** shows a value of **gpt**. This value identifies a GPT partition.
-
-```
-[user@myvm ~]# parted -l /dev/sda
-Model: Msft Virtual Disk (scsi)
-Disk /dev/sda: 68.7GB
-Sector size (logical/physical): 512B/512B
-Partition Table: gpt
-Disk Flags:
-
-Number Start End Size File system Name Flags
-1 1049kB 525MB 524MB fat16 EFI System Partition boot
-2 525MB 1050MB 524MB xfs
-3 1050MB 1052MB 2097kB bios_grub
-4 1052MB 68.7GB 67.7GB lvm
-```
-
-If your virtual machine (VM) has a GPT partition on your OS disk, increase the size of the OS disk.
-
-## Increase the size of the OS disk
-
-The following instructions apply to Linux-endorsed distributions.
-
-> [!NOTE]
-> Before you proceed, make a backup copy of your VM, or take a snapshot of your OS disk.
-
-### Ubuntu
-
-To increase the size of the OS disk in Ubuntu 16.*x* and 18.*x*:
-
-1. Stop the VM.
-1. Increase the size of the OS disk from the portal.
-1. Restart the VM, and then sign in to the VM as a **root** user.
-1. Verify that the OS disk now displays an increased file system size.
-
-In the following example, the OS disk has been resized from the portal to 100 GB. The **/dev/sda1** file system mounted on **/** now displays 97 GB.
-
-```
-user@myvm:~# df -Th
-Filesystem Type Size Used Avail Use% Mounted on
-udev devtmpfs 314M 0 314M 0% /dev
-tmpfs tmpfs 65M 2.3M 63M 4% /run
-/dev/sda1 ext4 97G 1.8G 95G 2% /
-tmpfs tmpfs 324M 0 324M 0% /dev/shm
-tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock
-tmpfs tmpfs 324M 0 324M 0% /sys/fs/cgroup
-/dev/sda15 vfat 105M 3.6M 101M 4% /boot/efi
-/dev/sdb1 ext4 20G 44M 19G 1% /mnt
-tmpfs tmpfs 65M 0 65M 0% /run/user/1000
-user@myvm:~#
-```
-
-### SUSE
-
-To increase the size of the OS disk in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
-
-1. Stop the VM.
-1. Increase the size of the OS disk from the portal.
-1. Restart the VM.
-
-When the VM has restarted, complete these steps:
-
-1. Access your VM as a **root** user by using this command:
-
- ```
- # sudo -i
- ```
-
-1. Use the following command to install the **growpart** package, which you'll use to resize the partition:
-
- ```
- # zypper install growpart
- ```
-
-1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
-
- ```
- # lsblk
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda 8:0 0 48G 0 disk
- ├─sda1 8:1 0 2M 0 part
- ├─sda2 8:2 0 512M 0 part /boot/efi
- ├─sda3 8:3 0 1G 0 part /boot
- └─sda4 8:4 0 28.5G 0 part /
- sdb 8:16 0 4G 0 disk
- └─sdb1 8:17 0 4G 0 part /mnt/resource
- ```
-
-1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step:
-
- ```
- # growpart /dev/sda 4
- CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263
- ```
-
-1. Run the `lsblk` command again to check whether the partition has been increased.
-
- The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB:
-
- ```
- linux:~ # lsblk
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda 8:0 0 48G 0 disk
- ├─sda1 8:1 0 2M 0 part
- ├─sda2 8:2 0 512M 0 part /boot/efi
- ├─sda3 8:3 0 1G 0 part /boot
- └─sda4 8:4 0 46.5G 0 part /
- sdb 8:16 0 4G 0 disk
- └─sdb1 8:17 0 4G 0 part /mnt/resource
- ```
-
-1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag:
-
- ```
- linux:~ # lsblk -f
- NAME FSTYPE LABEL UUID MOUNTPOINT
- sda
- ├─sda1
- ├─sda2 vfat EFI AC67-D22D /boot/efi
- ├─sda3 xfs BOOT 5731a128-db36-4899-b3d2-eb5ae8126188 /boot
- └─sda4 xfs ROOT 70f83359-c7f2-4409-bba5-37b07534af96 /
- sdb
- └─sdb1 ext4 8c4ca904-cd93-4939-b240-fb45401e2ec6 /mnt/resource
- ```
-
-1. Based on the file system type, use the appropriate commands to resize the file system.
-
- For **xfs**, use this command:
-
- ```
- #xfs_growfs /
- ```
-
- Example output:
-
- ```
- linux:~ # xfs_growfs /
- meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0 rmapbt=0
- = reflink=0
- data = bsize=4096 blocks=7470331, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=3647, version=2
- = sectsz=512 sunit=0 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 7470331 to 12188923
- ```
-
- For **ext4**, use this command:
-
- ```
- #resize2fs /dev/sda4
- ```
-
-1. Verify the increased file system size for **df -Th** by using this command:
-
- ```
- #df -Thl
- ```
-
- Example output:
-
- ```
- linux:~ # df -Thl
- Filesystem Type Size Used Avail Use% Mounted on
- devtmpfs devtmpfs 445M 4.0K 445M 1% /dev
- tmpfs tmpfs 458M 0 458M 0% /dev/shm
- tmpfs tmpfs 458M 14M 445M 3% /run
- tmpfs tmpfs 458M 0 458M 0% /sys/fs/cgroup
- /dev/sda4 xfs 47G 2.2G 45G 5% /
- /dev/sda3 xfs 1014M 86M 929M 9% /boot
- /dev/sda2 vfat 512M 1.1M 511M 1% /boot/efi
- /dev/sdb1 ext4 3.9G 16M 3.7G 1% /mnt/resource
- tmpfs tmpfs 92M 0 92M 0% /run/user/1000
- tmpfs tmpfs 92M 0 92M 0% /run/user/490
- ```
-
- In the preceding example, we can see that the file system size for the OS disk has been increased.
-
-### RHEL with LVM
-
-1. Access your VM as a **root** user by using this command:
-
- ```bash
- [root@dd-rhel7vm ~]# sudo -i
- ```
-
-1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If you want another file system, substitute the LV and mount point throughout this article.
-
- ```shell
- [root@dd-rhel7vm ~]# lsblk -f
- NAME FSTYPE LABEL UUID MOUNTPOINT
- fd0
- sda
- ├─sda1 vfat C13D-C339 /boot/efi
- ├─sda2 xfs 8cc4c23c-fa7b-4a4d-bba8-4108b7ac0135 /boot
- ├─sda3
- └─sda4 LVM2_member zx0Lio-2YsN-ukmz-BvAY-LCKb-kRU0-ReRBzh
- ├─rootvg-tmplv xfs 174c3c3a-9e65-409a-af59-5204a5c00550 /tmp
- ├─rootvg-usrlv xfs a48dbaac-75d4-4cf6-a5e6-dcd3ffed9af1 /usr
- ├─rootvg-optlv xfs 85fe8660-9acb-48b8-98aa-bf16f14b9587 /opt
- ├─rootvg-homelv xfs b22432b1-c905-492b-a27f-199c1a6497e7 /home
- ├─rootvg-varlv xfs 24ad0b4e-1b6b-45e7-9605-8aca02d20d22 /var
- └─rootvg-rootlv xfs 4f3e6f40-61bf-4866-a7ae-5c6a94675193 /
- ```
-
-1. Check whether there's free space in the LVM volume group (VG) that contains the root partition. If there is free space, skip to step 12.
-
- ```bash
- [root@dd-rhel7vm ~]# vgdisplay rootvg
- Volume group
- VG Name rootvg
- System ID
- Format lvm2
- Metadata Areas 1
- Metadata Sequence No 7
- VG Access read/write
- VG Status resizable
- MAX LV 0
- Cur LV 6
- Open LV 6
- Max PV 0
- Cur PV 1
- Act PV 1
- VG Size <63.02 GiB
- PE Size 4.00 MiB
- Total PE 16132
- Alloc PE / Size 6400 / 25.00 GiB
- Free PE / Size 9732 / <38.02 GiB
- VG UUID lPUfnV-3aYT-zDJJ-JaPX-L2d7-n8sL-A9AgJb
- ```
-
- In this example, the line **Free PE / Size** shows that there's 38.02 GB free in the volume group. You don't need to resize the disk before you add space to the volume group.
-
-1. To increase the size of the OS disk in RHEL 7.*x* with LVM:
-
- 1. Stop the VM.
- 1. Increase the size of the OS disk from the portal.
- 1. Start the VM.
-
-1. When the VM has restarted, complete the following step:
-
- - Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk and the gdisk handler for GPT disk layouts. These packages are preinstalled on most marketplace images.
-
- ```bash
- [root@dd-rhel7vm ~]# yum install cloud-utils-growpart gdisk
- ```
-
-1. Determine which disk and partition holds the LVM physical volume or volumes (PV) in the volume group named **rootvg** by using the `pvscan` command. Note the size and free space listed between the brackets (**[** and **]**).
-
- ```bash
- [root@dd-rhel7vm ~]# pvscan
- PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free]
- ```
-
-1. Verify the size of the partition by using `lsblk`.
-
- ```bash
- [root@dd-rhel7vm ~]# lsblk /dev/sda4
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda4 8:4 0 63G 0 part
- ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
- ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
- ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
- ├─rootvg-homelv 253:4 0 1G 0 lvm /home
- ├─rootvg-varlv 253:5 0 8G 0 lvm /var
- └─rootvg-rootlv 253:6 0 2G 0 lvm /
- ```
-
-1. Expand the partition that contains this PV by using `growpart`, the device name, and the partition number. Doing so will expand the specified partition to use all the free contiguous space on the device.
-
- ```bash
- [root@dd-rhel7vm ~]# growpart /dev/sda 4
- CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558
- ```
-
-1. Verify that the partition has resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63 GB to 95 GB.
-
- ```bash
- [root@dd-rhel7vm ~]# lsblk /dev/sda4
- NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
- sda4 8:4 0 95G 0 part
- ├─rootvg-tmplv 253:1 0 2G 0 lvm /tmp
- ├─rootvg-usrlv 253:2 0 10G 0 lvm /usr
- ├─rootvg-optlv 253:3 0 2G 0 lvm /opt
- ├─rootvg-homelv 253:4 0 1G 0 lvm /home
- ├─rootvg-varlv 253:5 0 8G 0 lvm /var
- └─rootvg-rootlv 253:6 0 2G 0 lvm /
- ```
-
-1. Expand the PV to use the rest of the newly expanded partition:
-
- ```bash
- [root@dd-rhel7vm ~]# pvresize /dev/sda4
- Physical volume "/dev/sda4" changed
- 1 physical volume(s) resized or updated / 0 physical volume(s) not resized
- ```
-
-1. Verify that the new size of the PV is the expected size, comparing it to the original **[size / free]** values:
-
- ```bash
- [root@dd-rhel7vm ~]# pvscan
- PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free]
- ```
-
-1. Expand the desired logical volume (LV) by the amount you want. The amount doesn't need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB). This command will also resize the file system.
-
- ```bash
- [root@dd-rhel7vm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
- ```
-
- Example output:
-
- ```bash
- [root@dd-rhel7vm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
- Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents).
- Logical volume rootvg/rootlv successfully resized.
- meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
- = sectsz=4096 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0
- data = bsize=4096 blocks=524288, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=2560, version=2
- = sectsz=4096 sunit=1 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 524288 to 3145728
- ```
-
-1. The `lvresize` command automatically calls the appropriate resize command for the file system in the LV. Check whether **/dev/mapper/rootvg-rootlv**, which is mounted on **/**, has an increased file system size by using this command:
-
- ```shell
- [root@dd-rhel7vm ~]# df -Th /
- ```
-
- Example output:
-
- ```shell
- [root@dd-rhel7vm ~]# df -Th /
- Filesystem Type Size Used Avail Use% Mounted on
- /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% /
- [root@dd-rhel7vm ~]#
- ```
-
-> [!NOTE]
-> To use the same procedure to resize any other logical volume, change the LV name in step 12.
--
-### RHEL RAW
-
-To increase the size of the OS disk in an RHEL RAW partition:
-
-1. Stop the VM.
-1. Increase the size of the OS disk from the portal.
-1. Start the VM.
-
-When the VM has restarted, complete these steps:
-
-1. Access your VM as a **root** user by using the following command:
-
- ```bash
- [root@dd-rhel7vm ~]# sudo -i
- ```
-
-1. When the VM has restarted, complete the following step:
-
- - Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk and the gdisk handler for GPT disk layouts. This package is preinstalled on most marketplace images.
-
- ```bash
- [root@dd-rhel7vm ~]# yum install cloud-utils-growpart gdisk
- ```
-
-1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition:
-
- ```bash
- [root@vm-dd-cent7 ~]# lsblk -f
- NAME FSTYPE LABEL UUID MOUNTPOINT
- sda
- ├─sda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
- ├─sda2 xfs 148be922-e3ec-43b5-8705-69786b522b05 /
- ├─sda14
- └─sda15 vfat 788D-DC65 /boot/efi
- sdb
- └─sdb1 ext4 923f51ff-acbd-4b91-b01b-c56140920098 /mnt/resource
- ```
-
-1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48-GB disk with partition 2 at 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal.
-
- ```bash
- [root@vm-dd-cent7 ~]# gdisk -l /dev/sda
- GPT fdisk (gdisk) version 0.8.10
-
- Partition table scan:
- MBR: protective
- BSD: not present
- APM: not present
- GPT: present
-
- Found valid GPT with protective MBR; using GPT.
- Disk /dev/sda: 100663296 sectors, 48.0 GiB
- Logical sector size: 512 bytes
- Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
- Partition table holds up to 128 entries
- First usable sector is 34, last usable sector is 62914526
- Partitions will be aligned on 2048-sector boundaries
- Total free space is 6076 sectors (3.0 MiB)
-
- Number Start (sector) End (sector) Size Code Name
- 1 1026048 2050047 500.0 MiB 0700
- 2 2050048 62912511 29.0 GiB 0700
- 14 2048 10239 4.0 MiB EF02
- 15 10240 1024000 495.0 MiB EF00 EFI System Partition
- ```
-
-1. Expand the partition for root, in this case sda2 by using the **growpart** command. Using this command expands the partition to use all of the contiguous space on the disk.
-
- ```bash
- [root@vm-dd-cent7 ~]# growpart /dev/sda 2
- CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262
- ```
-
-1. Now print the new partition table with **gdisk** again. Notice that partition 2 has expanded to 47.0 GiB:
-
- ```bash
- [root@vm-dd-cent7 ~]# gdisk -l /dev/sda
- GPT fdisk (gdisk) version 0.8.10
-
- Partition table scan:
- MBR: protective
- BSD: not present
- APM: not present
- GPT: present
-
- Found valid GPT with protective MBR; using GPT.
- Disk /dev/sda: 100663296 sectors, 48.0 GiB
- Logical sector size: 512 bytes
- Disk identifier (GUID): 78CDF84D-9C8E-4B9F-8978-8C496A1BEC83
- Partition table holds up to 128 entries
- First usable sector is 34, last usable sector is 100663262
- Partitions will be aligned on 2048-sector boundaries
- Total free space is 4062 sectors (2.0 MiB)
-
- Number Start (sector) End (sector) Size Code Name
- 1 1026048 2050047 500.0 MiB 0700
- 2 2050048 100663261 47.0 GiB 0700
- 14 2048 10239 4.0 MiB EF02
- 15 10240 1024000 495.0 MiB EF00 EFI System Partition
- ```
-
-1. Expand the filesystem on the partition with **xfs_growfs**, which is appropriate for a standard marketplace-generated RedHat system:
-
- ```bash
- [root@vm-dd-cent7 ~]# xfs_growfs /
- meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks
- = sectsz=4096 attr=2, projid32bit=1
- = crc=1 finobt=0 spinodes=0
- data = bsize=4096 blocks=7607808, imaxpct=25
- = sunit=0 swidth=0 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal bsize=4096 blocks=3714, version=2
- = sectsz=4096 sunit=1 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
- data blocks changed from 7607808 to 12326651
- ```
-
-1. Verify the new size is reflected by using the **df** command:
-
- ```bash
- [root@vm-dd-cent7 ~]# df -hl
- Filesystem Size Used Avail Use% Mounted on
- devtmpfs 452M 0 452M 0% /dev
- tmpfs 464M 0 464M 0% /dev/shm
- tmpfs 464M 6.8M 457M 2% /run
- tmpfs 464M 0 464M 0% /sys/fs/cgroup
- /dev/sda2 48G 2.1G 46G 5% /
- /dev/sda1 494M 65M 430M 13% /boot
- /dev/sda15 495M 12M 484M 3% /boot/efi
- /dev/sdb1 3.9G 16M 3.7G 1% /mnt/resource
- tmpfs 93M 0 93M 0% /run/user/1000
- ```
virtual-network Ipv6 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md
The current IPv6 for Azure virtual network release has the following limitations
- Forward DNS for IPv6 is supported for Azure public DNS today but Reverse DNS is not yet supported. - While it is possible to create NSG rules for IPv4 and IPv6 within the same NSG, it is not currently possible to combine an IPv4 Subnet with an IPv6 subnet in the same rule when specifying IP prefixes. - ICMPv6 is not currently supported in Network Security Groups.
+- Azure Virtual WAN currently supports IPv4 traffic only.
## Pricing
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
When NAT gateway is configured with public IP address 65.52.1.1, the source IPs
### NAT gateway dynamically allocates SNAT ports
-NAT gateway dynamically allocates SNAT ports across a subnet's private resources such as virtual machines. SNAT port inventory is made available by attaching public IP addresses to NAT gateway. All available SNAT portscan be used on-demand by any virtual machine in subnets configured with NAT gateway:
+NAT gateway dynamically allocates SNAT ports across a subnet's private resources such as virtual machines. SNAT port inventory is made available by attaching public IP addresses to NAT gateway. All available SNAT ports can be used on demand by any virtual machine in subnets configured with NAT gateway:
:::image type="content" source="./media/nat-overview/lb-vnnat-chart.png" alt-text="Diagram that depicts the inventory of all available SNAT ports used by any VM on subnets configured with NAT.":::
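+
+As a rough worked example, assuming the documented figure of 64,512 SNAT ports per public IP address attached to NAT gateway: attaching 2 public IP addresses yields 2 x 64,512 = 129,024 SNAT ports that any virtual machine in the configured subnets can draw from on demand.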
virtual-wan About Client Address Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-client-address-pools.md
# About client address pools for point-to-site configurations
-This article describes Virtual WAN guidelines and requirements for allocating client address spaces when the virtual hub's point-to-site **Gateway scale units** are 40 or greater.
+This article describes Virtual WAN guidelines and requirements for allocating client address spaces.
-Point-to-site VPN gateways in the Virtual WAN hub are deployed with multiple instances. Each instance of a point-to-site VPN gateway can support up to 10,000 concurrent point-to-site user connections. As a result, for scale units greater than 40, Virtual WAN needs to deploy extra capacity, which requires a minimum number of address pools allocated for different scale units.
+## Background
-For instance, if a scale unit of 100 is chosen, 5 instances are deployed for the point-to-site VPN gateway in a virtual hub. This deployment can support 50,000 concurrent connections and **at least** 5 distinct address pools.
+Point-to-site VPN gateways in the Virtual WAN hub are deployed with one or more highly available gateway instances. Each instance of a point-to-site VPN gateway can support up to 10,000 concurrent connections.
-**Available scale units**
+As a result, for scale units greater than 40 (support for more than 10,000 concurrent connections), Virtual WAN deploys an extra gateway instance to service every 10,000 additional connecting users.
-| Scale unit | Maximum supported clients | Minimum number of address pools |
-| | | |
-| 40 | 20000 | 2 |
-| 60 | 30000 | 3 |
-| 80 | 40000 | 4 |
-| 100 | 50000 | 5 |
-| 120 | 60000 | 6 |
-| 140 | 70000 | 7 |
-| 160 | 80000 | 8 |
-| 180 | 90000 | 9 |
-| 200 | 100000 | 10 |
+When a user connects to Virtual WAN, the connection is automatically load-balanced across all backend gateway instances. To ensure that every gateway instance can service connections, each instance must have at least one unique address pool.
+
+For example, if a scale unit of 100 is chosen, 5 gateway instances are deployed. This deployment can support 50,000 concurrent connections, and **at least** 5 distinct address pools must be specified.
+
+## Address pools and multi-pool/user groups
+
+> [!NOTE]
+> There is no minimum scale unit required for the multi-pool/user group feature as long as sufficient address pools are allocated as described below.
+
+When a gateway is configured with the [multi-pool/user group feature](user-groups-about.md), multiple connection configurations are installed on the same point-to-site VPN gateway. Users from any group can connect to any gateway instance, so each connection configuration needs at least one address pool for every backend gateway instance.
+
+For example, if a scale unit of 100 is chosen (5 gateway instances) on a gateway with three separate connection configurations, each configuration needs at least 5 address pools (a total of 15 pools).
+
+| Connection configuration | Associated user groups | Minimum number of address pools |
+| | | |
+| Configuration 1 | Finance, Human Resources | 5 |
+| Configuration 2 | Engineering, Product Management | 5 |
+| Configuration 3 | Marketing | 5 |
+
+**Available scale units**
+
+The following table summarizes the available scale unit choices for Point-to-site VPN Gateway.
+
+| Scale unit | Gateway instances | Maximum supported clients | Minimum number of address pools per connection configuration |
+| | | | |
+| 1-20 | 1 | 500-10000 | 1 |
+| 40 | 2 | 20000 | 2 |
+| 60 | 3 | 30000 | 3 |
+| 80 | 4 | 40000 | 4 |
+| 100 | 5 | 50000 | 5 |
+| 120 | 6 | 60000 | 6 |
+| 140 | 7 | 70000 | 7 |
+| 160 | 8 | 80000 | 8 |
+| 180 | 9 | 90000 | 9 |
+| 200 | 10 | 100000 | 10 |
+
## <a name="specify-address-pools"></a>Specifying address pools Point-to-site VPN address pool assignments are done automatically by Virtual WAN. See the following basic guidelines for specifying address pools.
-* One gateway instance allows for a maximum of 10,000 concurrent connections. As such, each address pool should contain at least 10,000 unique RFC1918 IP addresses.
+* One gateway instance allows for a maximum of 10,000 concurrent connections. As such, each address pool should contain at least 10,000 unique IPv4 addresses. If fewer than 10,000 addresses are assigned to an instance, incoming connections will be rejected after all allocated IP addresses have been assigned. (See the example after these guidelines.)
* Multiple address pool ranges are automatically combined and assigned to a **single** gateway instance. This process is done in a round-robin manner for any gateway instances that have less than 10,000 IP addresses. For example, a pool with 5,000 addresses can be combined automatically by Virtual WAN with another pool that has 8,000 addresses and is assigned to a single gateway instance. * A single address pool is only assigned to a single gateway instance by Virtual WAN. * Address pools must be distinct. There can be no overlap between address pools. + > [!NOTE] > If an address pool is associated to a gateway instance that is undergoing maintenance, the address pool cannot be re-assigned to another instance.
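+
+For example, a hypothetical allocation that follows these guidelines for a scale unit of 40 (2 gateway instances) might specify two non-overlapping pools such as 10.100.0.0/18 and 10.100.64.0/18. Each /18 range contains 2^14 = 16,384 addresses, more than the 10,000 concurrent connections a single gateway instance can serve, so Virtual WAN can assign one pool to each instance.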
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
This option lets enterprises leverage the Azure backbone to connect branches. Ho
> [!NOTE] > Disabling Branch-to-Branch Connectivity in Virtual WAN -
-> Virtual WAN can be configured to disable Branch-to-Branch connectivity. This configuation will block route propagation between VPN (S2S and P2S) and Express Route connected sites. This configuration will not affect branch-to-Vnet and Vnet-to-Vnet route propogation and connectivity. To configure this setting using Azure Portal: Under Virtual WAN Configuration menu, Choose Setting: Branch-to-Branch - Disabled.
+> Virtual WAN can be configured to disable Branch-to-Branch connectivity. This configuration blocks route propagation between VPN (S2S and P2S) and ExpressRoute connected sites. It doesn't affect branch-to-VNet and VNet-to-VNet route propagation and connectivity. To configure this setting in the Azure portal: under the Virtual WAN **Configuration** menu, set **Branch-to-Branch** to **Disabled**.
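+
+A minimal Azure CLI sketch of the same setting (the resource names are placeholders, and the `--branch-to-branch-traffic` flag is an assumption here; confirm the exact parameter with `az network vwan update --help` for your CLI version):
+
+```bash
+# Disable branch-to-branch connectivity on an existing Virtual WAN (flag name assumed; verify before use)
+az network vwan update --resource-group myResourceGroup --name myVirtualWAN --branch-to-branch-traffic false
+```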
### Remote User-to-VNet (c)
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
Previously updated : 06/10/2022 Last updated : 10/24/2022
To help configure your VPN device, refer to the links that correspond to the app
For certain devices, you can download configuration scripts directly from Azure. For more information and download instructions, see [Download VPN device configuration scripts](vpn-gateway-download-vpndevicescript.md).
-### Devices with available configuration scripts
-- ## <a name="additionaldevices"></a>Non-validated VPN devices If you don't see your device listed in the Validated VPN devices table, your device still may work with a Site-to-Site connection. Contact your device manufacturer for additional support and configuration instructions.
vpn-gateway Vpn Gateway Download Vpndevicescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-download-vpndevicescript.md
- Previously updated : 09/02/2020 Last updated : 10/24/2022
This article walks you through downloading VPN device configuration scripts for
![download-script](./media/vpn-gateway-download-vpndevicescript/downloaddevicescript.png)
-The following devices have available scripts:
-- ## <a name="about"></a>About VPN device configuration scripts A cross-premises VPN connection consists of an Azure VPN gateway, an on-premises VPN device, and an IPsec S2S VPN tunnel connecting the two. The typical work flow includes the following steps:
Once the connection resource is created, follow the instructions below to downlo
![download66-script-2](./media/vpn-gateway-download-vpndevicescript/downloadscript-2.PNG)
-6. You are prompted to save the downloaded script (a text file) from your browser.
+6. You're prompted to save the downloaded script (a text file) from your browser.
7. Once you downloaded the configuration script, open it with a text editor and search for the keyword "REPLACE" to identify and examine the parameters that may need to be replaced. ![edit-script](./media/vpn-gateway-download-vpndevicescript/editscript.png)
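+
+If you prefer the command line, a quick sketch (assuming the downloaded script was saved as `vpnconfig.txt`, a hypothetical file name):
+
+```bash
+# List every line that still contains a REPLACE placeholder, with line numbers
+grep -n "REPLACE" vpnconfig.txt
+```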
Get-AzVirtualNetworkGatewayConnectionVpnDeviceConfigScript -Name $Connection -Re
## Apply the configuration script to your VPN device
-After you have downloaded and validated the configuration script, the next step is to apply the script to your VPN device. The actual procedure varies based on your VPN device makes and models. Consult the operation manuals or the instruction pages for your VPN devices.
+After you've downloaded and validated the configuration script, the next step is to apply the script to your VPN device. The actual procedure varies based on your VPN device make and model. Consult the operation manuals or the instruction pages for your VPN devices.
## Next steps