Updates from: 04/25/2023 01:09:39
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
Previously updated : 11/17/2022 Last updated : 04/24/2023
You're now ready to test the React scoped access to the API. In this step, run b
```console
npm install && npm update
- node index.js
+ npm start
```
The console window displays the port number where the application is hosted:
active-directory-b2c Enable Authentication React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md
Previously updated : 07/07/2022 Last updated : 04/24/2023
You can use an existing React app, or [create a new React App](https://reactjs.o
```
npx create-react-app my-app
cd my-app
-npm start
```
## Step 2: Install the dependencies
active-directory-b2c Enable Authentication Web Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application.md
Azure AD B2C identity provider settings are stored in the *appsettings.json* fil
"Instance": "https://<your-tenant-name>.b2clogin.com", "ClientId": "<web-app-application-id>", "Domain": "<your-b2c-domain>",
- "SignedOutCallbackPath": "/signout/<your-sign-up-in-policy>",
+ "SignedOutCallbackPath": "/signout-oidc
"SignUpSignInPolicyId": "<your-sign-up-in-policy>" } ```
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
For user flows, these extension properties are [managed by using the Azure porta
Use the [Get organization details](/graph/api/organization-get) API to get your directory size quota. You need to add the `$select` query parameter as shown in the following HTTP request:
```http
- GET https://graph.microsoft.com/v1.0/organization/organization-id?$select=directorySizeQuota
+GET https://graph.microsoft.com/v1.0/organization/organization-id?$select=directorySizeQuota
```
Replace `organization-id` with your organization or tenant ID.
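If you want to try this request from a shell, one hedged option is the Azure CLI's `az rest` command, which acquires a Microsoft Graph token for you; the `<organization-id>` placeholder is yours to fill in:

```console
# Query the tenant's directory size quota via Microsoft Graph.
az rest --method get --url 'https://graph.microsoft.com/v1.0/organization/<organization-id>?$select=directorySizeQuota'
```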
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Extension attributes in the Graph API are named by using the convention `extensi
Note that the **Application (client) ID** as it's represented in the extension attribute name includes no hyphens. For example:
```json
- "extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
+"extension_831374b3bd5041bfaa54263ec9e050fc_loyaltyNumber": "212342"
```
The following data types are supported when defining an attribute in a schema extension:
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Previously updated : 04/19/2023 Last updated : 04/24/2023
For example: In the diagram, the provisioning apps are set up for each geographi
* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register all child AD domains with your Azure AD tenant.
* Create a separate HR2AD provisioning app for each target domain.
* When configuring the provisioning app, select the respective child AD domain from the dropdown of available AD domains.
-* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users that each app processes.
* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
For example: In the diagram, the provisioning apps are set up for each geographi
* Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
* Create a separate HR2AD provisioning app for each target domain.
-* When configuring each provisioning app, select the parent AD domain from the dropdown of available AD domains. This ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
+* When configuring each provisioning app, select the parent AD domain from the dropdown of available AD domains. Selecting the parent domain ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
* Use *parentDistinguishedName* with expression mapping to dynamically create users in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
-* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users to be processed by each app.
+* Use [scoping filters](define-conditional-rules-for-provisioning-user-accounts.md) in the provisioning app to define users that each app processes.
* To resolve cross-domain manager references, create a separate HR2AD provisioning app for updating only the *manager* attribute. Set the scope of this app to all users.
* Configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
For example: In the diagram, a single provisioning app manages users present in
* Configure [referral chasing](../cloud-sync/how-to-manage-registry-options.md#configure-referral-chasing) on the provisioning agent.
* Use the [provisioning agent configuration wizard](../cloud-sync/how-to-install.md#install-the-agent) to register the parent AD domain and all child AD domains with your Azure AD tenant.
* Create a single HR2AD provisioning app for the entire forest.
-* When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains. This ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
+* When configuring the provisioning app, select the parent AD domain from the dropdown of available AD domains. Selecting the parent domain ensures forest-wide lookup while generating unique values for attributes like *userPrincipalName*, *samAccountName* and *mail*.
* Use *parentDistinguishedName* with expression mapping to dynamically create users in the correct child domain and [OU container](#configure-active-directory-ou-container-assignment).
* If you're using scoping filters, configure [skip out of scope deletions flag](skip-out-of-scope-deletions.md) to prevent accidental account deactivations.
You can also [customize the default attribute mappings](../app-provisioning/cust
### Determine user account status
-By default, the provisioning connector app maps the HR user profile status to the user account status in Active Directory or Azure AD to determine whether to enable or disable the user account.
+By default, the provisioning connector app maps the HR user profile status to the user account status. The status is used to determine whether to enable or disable the user account.
When you initiate the Joiners-Leavers process, gather the following requirements.
When you initiate the Joiners-Movers-Leavers process, gather the following requi
Depending on your requirements, you can modify the mappings to meet your integration goals. For more information, see the specific cloud HR app tutorial (such as [Workday](../saas-apps/workday-inbound-tutorial.md#part-4-configure-attribute-mappings)) for a list of custom attributes to map.
### Generate a unique attribute value
+Attributes like CN, samAccountName, and the UPN have unique constraints. You may need to generate unique attribute values when you initiate the Joiners process.
-When you initiate the Joiners process, you might need to generate unique attribute values when you set attributes like CN, samAccountName, and the UPN, which has unique constraints.
The Azure AD function [SelectUniqueValue](../app-provisioning/functions-for-customizing-application-data.md#selectuniquevalue) evaluates each rule and then checks the value generated for uniqueness in the target system. For an example, see [Generate unique value for the userPrincipalName (UPN) attribute](../app-provisioning/functions-for-customizing-application-data.md#generate-unique-value-for-userprincipalname-upn-attribute).
When the Azure AD provisioning service runs for the first time, it performs an [
After you're satisfied with the results of the initial cycle for test users, start the [incremental updates](../app-provisioning/how-provisioning-works.md#incremental-cycles).
## Plan testing and security
-At each stage of your deployment from initial pilot through enabling user provisioning, ensure that you're testing that results are as expected and auditing the provisioning cycles.
+A deployment consists of stages ranging from the initial pilot to enabling user provisioning. At each stage, ensure that you're testing for expected results. Also, audit the provisioning cycles.
### Plan testing
After you configure the cloud HR app to Azure AD user provisioning, run test cas
|User is terminated in the cloud HR app.|- The user account is disabled in Active Directory.</br>- The user can't log into any enterprise apps protected by Active Directory.|
|User supervisory organization is updated in the cloud HR app.|Based on the attribute mapping, the user account moves from one OU to another in Active Directory.|
|HR updates the user's manager in the cloud HR app.|The manager field in Active Directory is updated to reflect the new manager's name.|
-|HR rehires an employee into a new role.|Behavior depends on how the cloud HR app is configured to generate employee IDs:</br>- If the old employee ID is used for a rehired employee, the connector enables the existing Active Directory account for the user.</br>- If the rehired employee gets a new employee ID, the connector creates a new Active Directory account for the user.|
+|HR rehires an employee into a new role.|Behavior depends on how the cloud HR app is configured to generate employee IDs. If the old employee ID is used for a rehired employee, the connector enables the existing Active Directory account for the user. If the rehired employee gets a new employee ID, the connector creates a new Active Directory account for the user.|
|HR converts the employee to a contract worker or vice versa.|A new Active Directory account is created for the new persona and the old account gets disabled on the conversion effective date.|
Use the previous results to determine how to transition your automatic user provisioning implementation into production based on your established timelines.
Choose the cloud HR app that aligns to your solution requirements.
## Manage your configuration
-Azure AD can provide additional insights into your organization's user provisioning usage and operational health through audit logs and reports.
+Azure AD can provide more insights into your organization's user provisioning usage and operational health through audit logs and reports.
### Gain insights from reports and logs
active-directory How To Mfa Authenticator Lite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md
Users receive a notification in Outlook mobile to approve or deny sign-in, or th
## Enable Authenticator Lite
->[!NOTE]
->Rollout has not yet completed across Outlook applications. If this feature is enabled in your tenant, your users may not yet be prompted for the experience. To minimize user disruption, we recommend enabling this feature when the rollout completes.
- By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) and disabled during preview. After general availability, the Microsoft managed state default value will change to enable Authenticator Lite.
### Enablement Authenticator Lite in Azure portal UX
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
Previously updated : 04/20/2022 Last updated : 04/24/2023
This article describes how to enable Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms.
> [!NOTE]
-> To complete this task, you must have *global administrator* permissions as a user in that tenant. You can't enable Permissions Management as a user from other tenant who has signed in via B2B or via Azure Lighthouse.
+> To complete this task, you must have *Microsoft Entra Permissions Management Administrator* permissions. You can't enable Permissions Management as a user from another tenant who has signed in via B2B or via Azure Lighthouse.
:::image type="content" source="media/onboard-enable-tenant/dashboard.png" alt-text="A preview of what the permissions management dashboard looks like." lightbox="media/onboard-enable-tenant/dashboard.png":::
active-directory Partner List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/partner-list.md
Previously updated : 01/26/2023 Last updated : 04/24/2023
If you're a partner and would like to be considered for the Entra Permissions Ma
| ![Screenshot of a invoke logo.](media/partner-list/partner-invoke.png) | [Invoke's Entra PM multicloud risk assessment](https://www.invokellc.com/offers/microsoft-entra-permissions-management-multi-cloud-risk-assessment)|
| ![Screenshot of a Vu logo.](media/partner-list/partner-oxford-computer-group.png) | [Permissions Management implementation and remediation](https://oxfordcomputergroup.com/microsoft-entra-permissions-management-implementation/)|
| ![Screenshot of a Onfido logo.](media/partner-list/partner-ada-quest.png) | [adaQuest Microsoft Entra Permissions Management Risk Assessment](https://adaquest.com/entra-permission-risk-assessment/)|
+| ![Screenshot of Ascent Solutions logo.](media/partner-list/partner-ascent-solutions.png) | [Ascent Solutions Microsoft Entra Permissions Management Rapid Risk Assessment](https://www.meetascent.com/resources/microsoft-entra-permissions-rapid-risk-assessment)
+| ![Screenshot of Synergy Advisors logo.](media/partner-list/partner-synergy-advisors.png) | [Synergy Advisors Identity Optimization](https://synergyadvisors.biz/solutions-item/identity-optimization/)
+| ![Screenshot of BDO Digital logo.](media/partner-list/partner-bdo-digital.png) | [BDO Digital Managing Permissions Across Multicloud](https://www.bdodigital.com/services/security-compliance/cybersecurity/entra-permissions-management)
## Next steps
* For an overview of Permissions Management, see [What's Permissions Management?](overview.md)
active-directory Product Privileged Role Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-privileged-role-insights.md
The **Azure AD Insights** tab shows you who is assigned to privileged roles in y
> [!NOTE]
> Keep role assignments permanent if a user has an additional Microsoft account (for example, an account they use to sign in to Microsoft services like Skype, or Outlook.com). If you require multi-factor authentication to activate a role assignment, a user with an additional Microsoft account will be locked out.
+## Prerequisite
+To view information on the Azure AD Insights tab, you must have Permissions Management Administrator role permissions.
+
## View information in the Azure AD Insights tab
1. From the Permissions Management home page, select the **Azure AD Insights** tab.
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
To get started, download the latest [Microsoft Graph PowerShell SDK](/powershell
In the following steps, you'll create a policy that requires users to authenticate less frequently in your web app. This policy sets the lifetime of the access/ID tokens for your web app.
```powershell
-Connect-MgGraph -Scopes "Policy.ReadWrite.ApplicationConfiguration"
+Connect-MgGraph -Scopes "Policy.ReadWrite.ApplicationConfiguration","Policy.Read.All","Application.ReadWrite.All"
# Create a token lifetime policy
$params = @{
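    # A hedged sketch of completing the policy body (values are illustrative, not prescriptive):
    Definition = @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"4:00:00"}}')
    DisplayName = "WebPolicyScenario"
    IsOrganizationDefault = $false
}
# Submit the policy with the Microsoft Graph PowerShell SDK cmdlet:
New-MgPolicyTokenLifetimePolicy -BodyParameter $params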
GET https://graph.microsoft.com/v1.0/policies/tokenLifetimePolicies/4d2f137b-e8a
```
## Next steps
-Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
+Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
active-directory Msal Logging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-logging-dotnet.md
Example:
{
    public EventLogLevel MinLogLevel { get; }
- public TestIdentityLogger()
+ public MyIdentityLogger()
    {
        //Try to pull the log level from an environment variable
        var msalEnvLogLevel = Environment.GetEnvironmentVariable("MSAL_LOG_LEVEL");
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
If a device is managed by another management authority, like Microsoft Intune, b
You can use a device ID to verify the device ID details on the device or to troubleshoot via PowerShell. To access the copy option, select the device.
-![Screenshot that shows a device ID and the copy button.](./media/device-management-azure-portal/35.png)
+![Screenshot that shows a device ID and the copy button.](./media/device-management-azure-portal/device-details.png)
## View or copy BitLocker keys
You can view and copy BitLocker keys to allow users to recover encrypted drives. These keys are available only for Windows devices that are encrypted and store their keys in Azure AD. You can find these keys when you view a device's details by selecting **Show Recovery Key**. Selecting **Show Recovery Key** will generate an audit log, which you can find in the `KeyManagement` category.
-![Screenshot that shows how to view BitLocker keys.](./media/device-management-azure-portal/device-details-show-bitlocker-key.png)
+![Screenshot that shows how to view BitLocker keys.](./media/device-management-azure-portal/show-bitlocker-key.png)
To view or copy BitLocker keys, you need to be the owner of the device or have one of these roles:
In this preview, you have the ability to infinitely scroll, reorder columns, and
- Compliant state
- Join type (Azure AD joined, Hybrid Azure AD joined, Azure AD registered)
- Activity timestamp
-- OS
+- OS Type and Version
- Device type (printer, secure VM, shared device, registered device)
- MDM
- Autopilot
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
Previously updated : 06/23/2022 Last updated : 04/24/2023
You can bulk download the members of a group in your organization to a comma-sep
1. Sign in to [the Azure portal](https://portal.azure.com) with an account in the organization.
1. In Azure AD, select **Groups** > **All groups**.
1. Open the group whose membership you want to download, and then select **Members**.
-1. On the **Members** page, select **Download members** to download a CSV file listing the group members.
+1. On the **Members** page, select **Bulk operations** and choose **Download members** to download a CSV file listing the group members.
![The Download Members command is on the profile page for the group](./media/groups-bulk-download-members/download-panel.png)
active-directory Concept Fundamentals Security Defaults https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-security-defaults.md
All users in your tenant must register for multifactor authentication (MFA) in t
Administrators have increased access to your environment. Because of the power these highly privileged accounts have, you should treat them with special care. One common method to improve the protection of privileged accounts is to require a stronger form of account verification for sign-in. In Azure AD, you can get a stronger account verification by requiring multifactor authentication.
> [!TIP]
-> We recommend having separate accounts for administration and standard productivity tasks to significantly reduce the number of times your admins are prompted for MFA.
+> Recommendations for your admins:
+> - Ensure all your admins sign in after enabling security defaults so that they can register for authentication methods.
+> - Have separate accounts for administration and standard productivity tasks to significantly reduce the number of times your admins are prompted for MFA.
After registration with Azure AD Multifactor Authentication is finished, the following Azure AD administrator roles will be required to do extra authentication every time they sign in:
active-directory Scenario Azure First Sap Identity Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md
Based on these assumptions, we focus mostly on the products and services present
> [!NOTE] > Most of the guidance here applies to [Azure Active Directory B2C](../../active-directory-b2c/overview.md) as well, but there are some important differences. See [Using Azure AD B2C as the Identity Provider](#using-azure-ad-b2c-as-the-identity-provider) for more information.
+> [!WARNING]
+> Be aware of the SAP SAML assertion limits, the impact of the length of SAP Cloud Foundry role collection names, and the number of collections proxied by groups in SAP Cloud Identity Service. See SAP note [2732890](https://launchpad.support.sap.com/?sap-support-cross-site-visitor-id=b73c7292f9a46d52#/notes/2732890) for more information. Exceeded limits result in authorization issues.
+
## Recommendations
### Summary
active-directory Identity Governance Applications Define https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-applications-define.md
Title: Define organizational policies for governing access to applications in your environment
-description: Microsoft Entra Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can define policies for how users should obtain access to your business critical applications integrated with Microsoft Entra.
+description: Microsoft Entra Identity Governance allows you to balance your organization's need for security and employee productivity with the right processes and visibility. You can define policies for how users should obtain access to your business critical applications integrated with Microsoft Entra Identity Governance.
documentationcenter: ''
active-directory Identity Governance Organizational Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-organizational-roles.md
This article discusses how to model organizational roles, using entitlement mana
## Migrating an organizational role model
-The following table illustrates how concepts in organizational role definitions you might be familiar with in other products correspond to capabilities in Entra Identity Governance entitlement management.
+The following table illustrates how concepts in organizational role definitions you might be familiar with in other products correspond to capabilities in entitlement management.
| Concept in organizational role modeling | Representation in Entitlement Management |
| | |
active-directory Howto Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md
Administrators can then choose to take action on these events. Administrators ca
- Confirm user compromise
- Dismiss user risk
- Block user from signing in
-- Investigate further using Azure ATP
+- Investigate further using Microsoft Defender for Identity
## Risky sign-ins
active-directory Application Management Certs Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/application-management-certs-faq.md
To renew an application token signing certificate, see [How to renew a token sig
## How do I update Azure AD after changing my federation certificates?
To update Azure AD after changing your federation certificates, see [Renew federation certificates for Microsoft 365 and Azure Active Directory](../hybrid/how-to-connect-fed-o365-certs.md).
+
+## Can I use the same SAML certificate across different apps?
+
+When you configure SSO on an enterprise app for the first time, a default SAML certificate is provided that is used across Azure AD. However, if you need to use the same certificate across multiple apps and it isn't the default Azure AD one, you need to use an external Certificate Authority and upload the PFX file. The reason is that Azure AD doesn't provide access to private keys from internally issued certificates.
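If you go the external Certificate Authority route, the certificate and its private key have to be bundled as a PFX before upload. As an illustrative sketch, assuming OpenSSL is available and with placeholder file names:

```console
# Bundle an externally issued certificate and its private key into a PFX file.
openssl pkcs12 -export -in cert.pem -inkey key.pem -out certificate.pfx
```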
active-directory Howspace Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/howspace-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering|Required by Howspace|
|||||
|displayName|String|&check;|&check;
- |externalId|String||
+ |externalId|String||&check;
|members|Reference||
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
active-directory Sauce Labs Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sauce-labs-tutorial.md
Previously updated : 03/26/2023 Last updated : 04/24/2023
In this article, you learn how to integrate Sauce Labs with Azure Active Directo
You configure and test Azure AD single sign-on for Sauce Labs in a test environment. Sauce Labs supports both **SP** and **IDP** initiated single sign-on and **Just In Time** user provisioning.
> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant. If your company has more than one organization at Sauce Labs to be integrated with SAML SSO within a single Azure tenant, refer to the following [documentation](https://docs.saucelabs.com/basics/sso/setting-up-sso-special-cases/#single-identity-provider-and-multiple-organizations-at-sauce-labs).
## Prerequisites
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure Sauce Labs SSO
-To configure single sign-on on **Sauce Labs** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Sauce Labs support team](mailto:support@saucelabs.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on the Sauce Labs side, refer to this [documentation](https://docs.saucelabs.com/basics/sso/setting-up-sso/#integrating-with-sauce-labs-service-provider) to set up the SAML SSO connection properly on both sides. For any help or queries, contact the [Sauce Labs support team](mailto:support@saucelabs.com).
### Create Sauce Labs test user
aks Access Control Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/access-control-managed-azure-ad.md
+
+ Title: Cluster access control with AKS-managed Azure Active Directory integration
+description: Learn how to access clusters when integrating Azure AD in your Azure Kubernetes Service (AKS) clusters.
+ Last updated : 04/20/2023
+# Cluster access control with AKS-managed Azure Active Directory integration
+
+When you integrate Azure AD with your AKS cluster, you can use [Conditional Access][aad-conditional-access] or Privileged Identity Management (PIM) for just-in-time requests to control access to your cluster. This article shows you how to enable Conditional Access and PIM on your AKS clusters.
+
+> [!NOTE]
+> Azure AD Conditional Access and Privileged Identity Management are Azure AD Premium capabilities requiring a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide][aad-pricing].
+
+## Before you begin
+
+* See [AKS-managed Azure Active Directory integration](./managed-azure-ad.md) for an overview and setup instructions.
+
+## Use Conditional Access with Azure AD and AKS
+
+1. In the Azure portal, go to the **Azure Active Directory** page and select **Enterprise applications**.
+2. Select **Conditional Access** > **Policies** > **New policy**.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-new-policy.png" alt-text="Screenshot of adding a Conditional Access policy." lightbox="./media/managed-aad/conditional-access-new-policy.png":::
+
+3. Enter a name for the policy, such as *aks-policy*.
+
+4. Under **Assignments**, select **Users and groups**. Choose the users and groups you want to apply the policy to. In this example, choose the same Azure AD group that has administrator access to your cluster.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-users-groups.png" alt-text="Screenshot of selecting users or groups to apply the Conditional Access policy." lightbox="./media/managed-aad/conditional-access-users-groups.png":::
+
+5. Under **Cloud apps or actions** > **Include**, select **Select apps**. Search for **Azure Kubernetes Service** and select **Azure Kubernetes Service AAD Server**.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-apps.png" alt-text="Screenshot of selecting Azure Kubernetes Service AD Server for applying the Conditional Access policy." lightbox="./media/managed-aad/conditional-access-apps.png":::
+
+6. Under **Access controls** > **Grant**, select **Grant access**, **Require device to be marked as compliant**, and **Require all the selected controls**.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-grant-compliant.png" alt-text="Screenshot of selecting to only allow compliant devices for the Conditional Access policy." lightbox="./media/managed-aad/conditional-access-grant-compliant.png" :::
+
+7. Confirm your settings, set **Enable policy** to **On**, and then select **Create**.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-enable-policy.png" alt-text="Screenshot of enabling the Conditional Access policy." lightbox="./media/managed-aad/conditional-access-enable-policy.png":::
+
+### Verify your Conditional Access policy has been successfully listed
+
+1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
+
+2. Follow the instructions to sign in.
+
+3. View the nodes in the cluster using the `kubectl get nodes` command.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+4. In the Azure portal, navigate to **Azure Active Directory** and select **Enterprise applications** > **Activity** > **Sign-ins**.
+
+5. Under the **Conditional Access** column you should see a status of *Success*. Select the event and then select the **Conditional Access** tab. Your Conditional Access policy will be listed.
+
+ :::image type="content" source="./media/managed-aad/conditional-access-sign-in-activity.png" alt-text="Screenshot that shows failed sign-in entry due to Conditional Access policy." lightbox="./media/managed-aad/conditional-access-sign-in-activity.png":::
+
+## Configure just-in-time cluster access with Azure AD and AKS
+
+1. In the Azure portal, go to **Azure Active Directory** and select **Properties**.
+
+2. Note the value listed under **Tenant ID**. It will be referenced in a later step as `<tenant-id>`.
+
+ :::image type="content" source="./media/managed-aad/jit-get-tenant-id.png" alt-text="Screenshot of the Azure portal screen for Azure Active Directory with the tenant's ID highlighted." lightbox="./media/managed-aad/jit-get-tenant-id.png":::
+
+3. Select **Groups** > **New group**.
+
+ :::image type="content" source="./media/managed-aad/jit-create-new-group.png" alt-text="Screenshot of the Azure portal Active Directory groups screen with the New Group option highlighted." lightbox="./media/managed-aad/jit-create-new-group.png":::
+
+4. Verify the group type **Security** is selected and specify a group name, such as *myJITGroup*. Under the option **Azure AD roles can be assigned to this group (Preview)**, select **Yes** and then select **Create**.
+
+ :::image type="content" source="./media/managed-aad/jit-new-group-created.png" alt-text="Screenshot of the new group creation screen in the Azure portal." lightbox="./media/managed-aad/jit-new-group-created.png":::
+
+5. On the **Groups** page, select the group you just created and note the Object ID. It will be referenced in a later step as `<object-id>`.
+
+ :::image type="content" source="./media/managed-aad/jit-get-object-id.png" alt-text="Screenshot of the Azure portal screen for the just-created group with the Object ID highlighted." lightbox="./media/managed-aad/jit-get-object-id.png":::
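If you prefer scripting the group setup, a rough Azure CLI equivalent might look like the sketch below (the group name follows the example above; note that the role-assignable option from step 4 isn't covered by this command, so that portal step may still be needed):

```azurecli-interactive
# Create the security group, then capture its object ID for later use.
az ad group create --display-name myJITGroup --mail-nickname myJITGroup
az ad group show --group myJITGroup --query id --output tsv
```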
+
+6. Create the AKS cluster with AKS-managed Azure AD integration using the [`az aks create`][az-aks-create] command with the `--aad-admin-group-object-ids` and `--aad-tenant-id` parameters and include the values noted in the steps earlier.
+
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <object-id> --aad-tenant-id <tenant-id>
+ ```
+
+7. In the Azure portal, select **Activity** > **Privileged Access (Preview)** > **Enable Privileged Access**.
+
+ :::image type="content" source="./media/managed-aad/jit-enabling-priv-access.png" alt-text="Screenshot of the Privileged access (Preview) page in the Azure portal with Enable privileged access highlighted." lightbox="./media/managed-aad/jit-enabling-priv-access.png":::
+
+8. To grant access, select **Add assignments**.
+
+ :::image type="content" source="./media/managed-aad/jit-add-active-assignment.png" alt-text="Screenshot of the Privileged access (Preview) screen in the Azure portal after enabling. The option to Add assignments is highlighted." lightbox="./media/managed-aad/jit-add-active-assignment.png":::
+
+9. From the **Select role** drop-down list, select the users and groups you want to grant cluster access. These assignments can be modified at any time by a group administrator. Then select **Next**.
+
+ :::image type="content" source="./media/managed-aad/jit-adding-assignment.png" alt-text="Screenshot of the Add assignments Membership screen in the Azure portal with a sample user selected to be added as a member. The Next option is highlighted." lightbox="./media/managed-aad/jit-adding-assignment.png":::
+
+10. Under **Assignment type**, select **Active** and then specify the desired duration. Provide a justification and then select **Assign**.
+
+ :::image type="content" source="./media/managed-aad/jit-set-active-assignment-details.png" alt-text="Screenshot of the Add assignments Setting screen in the Azure portal. An assignment type of Active is selected and a sample justification has been given. The Assign option is highlighted." lightbox="./media/managed-aad/jit-set-active-assignment-details.png":::
+
+For more information about assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
+
+### Verify just-in-time access is working by accessing the cluster
+
+1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
+
+2. Follow the steps to sign in.
+
+3. Use the `kubectl get nodes` command to view the nodes in the cluster.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+4. Note the authentication requirement and follow the steps to authenticate. If successful, you should see an output similar to the following example output:
+
+ ```output
+ To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-61156405-vmss000000 Ready agent 6m36s v1.18.14
+ aks-nodepool1-61156405-vmss000001 Ready agent 6m42s v1.18.14
+ aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14
+ ```
+
+### Apply just-in-time access at the namespace level
+
+1. Integrate your AKS cluster with [Azure RBAC](manage-azure-rbac.md).
+
+2. Associate the group you want to integrate with just-in-time access with a namespace in the cluster using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name>
+ ```
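The `$AKS_ID` variable isn't defined in this article; presumably it holds the cluster's resource ID, which might be captured like this (cluster names follow the earlier examples):

```azurecli-interactive
# Store the AKS cluster resource ID for use in the role assignment scope.
AKS_ID=$(az aks show --resource-group myResourceGroup --name myManagedCluster --query id --output tsv)
```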
+
+3. Associate the group you configured at the namespace level with PIM to complete the configuration.
+
+## Troubleshooting
+
+If `kubectl get nodes` returns an error similar to the following:
+
+```output
+Error from server (Forbidden): nodes is forbidden: User "aaaa11111-11aa-aa11-a1a1-111111aaaaa" cannot list resource "nodes" in API group "" at the cluster scope
+```
+
+Make sure the admin of the security group has given your account an *Active* assignment.
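As a hedged way to double-check from the CLI, you can test whether your account currently appears as a member of the group (an *Active* assignment should surface as membership; names here are placeholders):

```azurecli-interactive
# Returns true if the given object ID is currently a member of the group.
az ad group member check --group myJITGroup --member-id <your-user-object-id>
```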
+
+## Next steps
+
+* Use [kubelogin](https://github.com/Azure/kubelogin) to access features for Azure authentication that aren't available in kubectl.
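For example, a common invocation (a sketch, assuming kubelogin is installed and you're already signed in with Azure CLI) converts your kubeconfig to use Azure CLI tokens:

```console
# Rewrite the kubeconfig so kubectl authenticates with your existing Azure CLI login.
kubelogin convert-kubeconfig -l azurecli
```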
+
+<!-- LINKS - External -->
+[aad-pricing]: https://azure.microsoft.com/pricing/details/active-directory/
+
+<!-- LINKS - Internal -->
+[aad-conditional-access]: ../active-directory/conditional-access/overview.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group
+[az-aks-create]: /cli/azure/aks#az_aks_create
aks Aks Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-diagnostics.md
Last updated 11/15/2022
# Azure Kubernetes Service Diagnostics (preview) overview
-Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics (preview) is an intelligent, self-diagnostic experience that:
+Troubleshooting Azure Kubernetes Service (AKS) cluster issues plays an important role in maintaining your cluster, especially if your cluster is running mission-critical workloads. AKS Diagnostics (preview) is an intelligent, self-diagnostic experience with the following features:
* Helps you identify and resolve problems in your cluster.
* Is cloud-native.
-* Requires no extra configuration or billing cost.
+* Requires no extra configuration or billing costs.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
To access AKS Diagnostics:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. From **All services** in the Azure portal, select **Kubernetes Service**.
1. Select **Diagnose and solve problems** in the left navigation, which opens AKS Diagnostics.
-1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, by:
-
- * Using the keywords in the homepage tile.
- * Typing a keyword that best describes your issue in the search bar.
+1. Choose a category that best describes the issue of your cluster, like _Cluster Node Issues_, using the keywords in the homepage tile or typing a keyword that best describes your issue in the search bar.
![Homepage](./media/concepts-diagnostics/aks-diagnostics-homepage.png)
## View a diagnostic report
-After you click on a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
+After selecting a category, you can view a diagnostic report specific to your cluster. Diagnostic reports intelligently call out any issues in your cluster with status icons. You can drill down on each topic by clicking **More Info** to see a detailed description of:
* Issues
* Recommended actions
* Links to helpful docs
* Related metrics
-* Logging data
+* Logging data
Diagnostic reports generate based on the current state of your cluster after running various checks. They can be useful for pinpointing the problem of your cluster and understanding next steps to resolve the issue.
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node i
## Using cluster auto-upgrade
-Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
+Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the [selected auto-upgrade channel][planned-maintenance]. When making changes to auto-upgrade, allow 24 hours for the changes to take effect.
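For reference, a hedged sketch of setting the channel with the Azure CLI (cluster and group names are placeholders):

```azurecli-interactive
# Set the cluster auto-upgrade channel; changes can take up to 24 hours to take effect.
az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
```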
The following upgrade channels are available:
aks Azure Ad Integration Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md
> [!WARNING]
> **The feature described in this document, Azure AD Integration (legacy), will be deprecated on June 1st, 2023.**
>
-> AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client application. If you want to migrate follow the instructions [here][managed-aad-migrate].
+> AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate, follow the instructions [here][managed-aad-migrate].
Azure Kubernetes Service (AKS) can be configured to use Azure Active Directory (AD) for user authentication. In this configuration, you can log into an AKS cluster using an Azure AD authentication token. Cluster operators can also configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership.
For best practices on identity and resource control, see [Best practices for aut
[rbac-authorization]: concepts-identity.md#kubernetes-rbac
[operator-best-practices-identity]: operator-best-practices-identity.md
[azure-ad-rbac]: azure-ad-rbac.md
-[managed-aad]: managed-aad.md
-[managed-aad-migrate]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
+[managed-aad]: managed-azure-ad.md
+[managed-aad-migrate]: managed-azure-ad.md#upgrade-a-legacy-azure-ad-cluster-to-aks-managed-azure-ad-integration
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
az ad group delete --group opssre
<!-- LINKS - internal -->
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[install-azure-cli]: /cli/azure/install-azure-cli
-[azure-ad-aks-cli]: managed-aad.md
+[azure-ad-aks-cli]: managed-azure-ad.md
[az-aks-show]: /cli/azure/aks#az_aks_show
[az-ad-group-create]: /cli/azure/ad/group#az_ad_group_create
[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
az ad group delete --group opssre
[rbac-authorization]: concepts-identity.md#kubernetes-rbac
[operator-best-practices-identity]: operator-best-practices-identity.md
[terraform-on-azure]: /azure/developer/terraform/overview
-[enable-azure-ad-integration-existing-cluster]: managed-aad.md#enable-aks-managed-azure-ad-integration-on-your-existing-cluster
+[enable-azure-ad-integration-existing-cluster]: managed-azure-ad.md#use-an-existing-cluster
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 04/17/2023 Last updated : 04/21/2023
# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
When the status reflects *Registered*, refresh the registration of the *Microsof
az provider register --namespace Microsoft.ContainerService
```
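To watch for the *Registered* state before refreshing, something like the following sketch can help (the feature name is a placeholder for the one you registered):

```azurecli-interactive
# Check the registration state of the preview feature.
az feature show --namespace Microsoft.ContainerService --name <feature-name> --query properties.state --output tsv
```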
-## Upgrade an existing cluster to CNI Overlay - Preview
-
-> [!NOTE]
-> The upgrade capability is still in preview and requires the preview AKS Azure CLI extension.
-
-You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:
-- be on Kubernetes version 1.22+
-- **not** be using the dynamic pod IP allocation feature
-- **not** have network policies enabled
-- **not** be using any Windows node pools with docker as the container runtime
-
-The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to Overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
-
-> [!WARNING]
-> Due to the limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, this has a more detrimental effect for clusters upgrading to Overlay.
-
-While nodes are being upgraded to use the CNI Overlay feature, pods that are on nodes which haven't been upgraded yet will not be able to communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, Overlay Windows pods will not be able to reply to any traffic from pods still running with an IP from the node subnet.
-
-This network disruption will only occur during the upgrade. Once the migration to Overlay has completed for all node pools, all Overlay pods will be able to communicate successfully with the Windows pods.
-
-> [!NOTE]
-> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows Overlay pods.
## Next steps
To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
aks Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-identity.md
As shown in the graphic above, the API server calls the AKS webhook server and p
10. Once authorized, the API server returns a response to `kubectl`.
11. `kubectl` provides feedback to the user.
-Learn how to integrate AKS with Azure AD with our [AKS-managed Azure AD integration how-to guide](managed-aad.md).
+Learn how to integrate AKS with Azure AD with our [AKS-managed Azure AD integration how-to guide](managed-azure-ad.md).
## AKS service permissions
For more information on core Kubernetes and AKS concepts, see the following arti
[openid-connect]: ../active-directory/develop/v2-protocols-oidc.md
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[azure-rbac]: ../role-based-access-control/overview.md
-[aks-aad]: managed-aad.md
+[aks-aad]: managed-azure-ad.md
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md
[aks-concepts-security]: concepts-security.md
[aks-concepts-scale]: concepts-scale.md
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
For more information on core Kubernetes and AKS concepts, see:
[microsoft-defender-for-containers]: ../defender-for-cloud/defender-for-containers-introduction.md
[aks-daemonsets]: concepts-clusters-workloads.md#daemonsets
[aks-upgrade-cluster]: upgrade-cluster.md
-[aks-aad]: ./managed-aad.md
+[aks-aad]: ./managed-azure-ad.md
[aks-add-np-containerd]: learn/quick-windows-container-deploy-cli.md#add-a-windows-server-node-pool-with-containerd
[aks-concepts-clusters-workloads]: concepts-clusters-workloads.md
[aks-concepts-identity]: concepts-identity.md
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
The following are important considerations to evaluate:
* Creates a new PersistentVolume with name `existing-pv-csi` for all PersistentVolumes in namespaces for storage class `storageClassName`.
* Configures the new PVC name as `existing-pvc-csi`.
- * Updates the application (deployment/StatefulSet) to refer to new PVC.
* Creates a new PVC with the PV name you specify.
```bash
The following are important considerations to evaluate:
* `namespace` - The cluster namespace
* `sourceStorageClass` - The in-tree storage driver-based StorageClass
* `targetCSIStorageClass` - The CSI storage driver-based StorageClass, which can be either one of the default storage classes that have the provisioner set to **disk.csi.azure.com** or **file.csi.azure.com**. Or you can create a custom storage class as long as it is set to either one of those two provisioners.
- * `volumeSnapshotClass` - Name of the volume snapshot class. For example, `custom-disk-snapshot-sc`.
* `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddthh:mm:ssz**.
* `endTimeStamp` - Provide an end time in the format **yyyy-mm-ddthh:mm:ssz**.
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
Azure load balancers [don't incur a charge until a rule is placed](https://azure
## Deploy a cluster with outbound type of UDR and Azure Firewall
-To illustrate the application of a cluster with outbound type using a user-defined route, a cluster can be configured on a virtual network with an Azure Firewall on its own subnet. See this example on the [restrict egress traffic with Azure firewall example](limit-egress-traffic.md#restrict-egress-traffic-using-azure-firewall).
+To illustrate the application of a cluster with outbound type using a user-defined route, a cluster can be configured on a virtual network with an Azure Firewall on its own subnet. See this example on the [restrict egress traffic with Azure firewall example](limit-egress-traffic.md).
> [!IMPORTANT]
> Outbound type of UDR requires a route for 0.0.0.0/0 and a next hop destination of an NVA (Network Virtual Appliance) in the route table.
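As an illustrative sketch of creating such a route (resource names and the firewall IP are placeholders):

```azurecli-interactive
# Send all egress traffic to the firewall's private IP as the next hop.
az network route-table route create --resource-group myResourceGroup --route-table-name myRouteTable --name default-egress --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address <firewall-private-ip>
```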
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[concepts-identity]: concepts-identity.md
[concepts-storage]: concepts-storage.md
[conf-com-node]: ../confidential-computing/confidential-nodes-aks-overview.md
-[aad]: managed-aad.md
+[aad]: managed-azure-ad.md
[aks-monitor]: monitor-aks.md
[azure-monitor]: /previous-versions/azure/azure-monitor/containers/containers
[azure-logs]: ../azure-monitor/logs/log-analytics-overview.md
aks Keda Deploy Add On Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-arm.md
You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
[az aks update]: /cli/azure/aks#az-aks-update
[az-group-delete]: /cli/azure/group#az-group-delete
[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
-[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
+[aks-firewall-requirements]: outbound-rules-control-egress.md#azure-global-required-network-rules
[az-provider-register]: /cli/azure/provider#az-provider-register
[az-feature-register]: /cli/azure/feature#az-feature-register
[az-feature-show]: /cli/azure/feature#az-feature-show
aks Keda Deploy Add On Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-deploy-add-on-cli.md
You can troubleshoot KEDA add-on problems in [this article][keda-troubleshoot].
[az aks update]: /cli/azure/aks#az-aks-update
[az-group-delete]: /cli/azure/group#az-group-delete
[keda-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-kubernetes-event-driven-autoscaling-add-on?context=/azure/aks/context/aks-context
-[aks-firewall-requirements]: limit-egress-traffic.md#azure-global-required-network-rules
+[aks-firewall-requirements]: outbound-rules-control-egress.md#azure-global-required-network-rules
[kubectl]: https://kubernetes.io/docs/user-guide/kubectl
[keda]: https://keda.sh/
aks Kubernetes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md
This section addresses common problems and troubleshooting steps.
To access the Kubernetes resources, you must have access to the AKS cluster, the Kubernetes API, and the Kubernetes objects. Ensure that you're either a cluster administrator or a user with the appropriate permissions to access the AKS cluster. For more information on cluster security, see [Access and identity options for AKS][concepts-identity].
>[!NOTE]
-> The Kubernetes resource view in the Azure portal is only supported by [managed-AAD enabled clusters](managed-aad.md) or non-AAD enabled clusters. If you're using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the Kubernetes API and the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
+> The Kubernetes resource view in the Azure portal is only supported by [managed-AAD enabled clusters](managed-azure-ad.md) or non-AAD enabled clusters. If you're using a managed-AAD enabled cluster, your AAD user or identity needs to have the respective roles/role bindings to access the Kubernetes API and the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md).
### Enable resource view
This article showed you how to access Kubernetes resources from the Azure portal
[concepts-identity]: concepts-identity.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests
-[aks-managed-aad]: managed-aad.md
-[cli-aad-upgrade]: managed-aad.md#upgrade-to-aks-managed-azure-ad-integration
+[aks-managed-aad]: managed-azure-ad.md
+[cli-aad-upgrade]: managed-azure-ad.md#upgrade-a-legacy-azure-ad-cluster-to-aks-managed-azure-ad-integration
[enable-monitor]: ../azure-monitor/containers/container-insights-enable-existing-clusters.md
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Title: Restrict egress traffic in Azure Kubernetes Service (AKS)
-description: Learn what ports and addresses are required to control egress traffic in Azure Kubernetes Service (AKS)
+ Title: Control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)
+description: Learn how to control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)
Previously updated : 07/26/2022 Last updated : 03/10/2023
-#Customer intent: As an cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
+#Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
-# Control egress traffic for cluster nodes in Azure Kubernetes Service (AKS)
+# Control egress traffic using Azure Firewall in Azure Kubernetes Service (AKS)
-This article provides the necessary details that allow you to secure outbound traffic from your Azure Kubernetes Service (AKS). It contains the cluster requirements for a base AKS deployment, and additional requirements for optional addons and features. [An example will be provided at the end on how to configure these requirements with Azure Firewall](#restrict-egress-traffic-using-azure-firewall). However, you can apply this information to any outbound restriction method or appliance.
-
-## Background
-
-AKS clusters are deployed on a virtual network. This network can be managed (created by AKS) or custom (pre-configured by the user beforehand). In either case, the cluster has **outbound** dependencies on services outside of that virtual network (the service has no inbound dependencies).
-
-For management and operational purposes, nodes in an AKS cluster need to access certain ports and fully qualified domain names (FQDNs). These endpoints are required for the nodes to communicate with the API server, or to download and install core Kubernetes cluster components and node security updates. For example, the cluster needs to pull base system container images from Microsoft Container Registry (MCR).
-
-The AKS outbound dependencies are almost entirely defined with FQDNs, which don't have static addresses behind them. The lack of static addresses means that Network Security Groups can't be used to lock down the outbound traffic from an AKS cluster.
-
-By default, AKS clusters have unrestricted outbound (egress) internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must be accessible to maintain healthy cluster maintenance tasks. The simplest solution to securing outbound addresses lies in use of a firewall device that can control outbound traffic based on domain names. Azure Firewall, for example, can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
-
-> [!IMPORTANT]
-> This document covers only how to lock down the traffic leaving the AKS subnet. AKS has no ingress requirements by default. Blocking **internal subnet traffic** using network security groups (NSGs) and firewalls is not supported. To control and block the traffic within the cluster, use [***Network Policies***][network-policy].
-
-## Required outbound network rules and FQDNs for AKS clusters
-
-The following network and FQDN/application rules are required for an AKS cluster. You can use them if you wish to configure a solution other than Azure Firewall.
-
-* IP address dependencies are for non-HTTP/S traffic (both TCP and UDP traffic).
-* FQDN HTTP/HTTPS endpoints can be placed in your firewall device.
-* Wildcard HTTP/HTTPS endpoints are dependencies that can vary with your AKS cluster based on a number of qualifiers.
-* AKS uses an admission controller to inject the FQDN as an environment variable to all deployments under kube-system and gatekeeper-system, which ensures all system communication between nodes and the API server uses the API server FQDN and not the API server IP.
-* If you have an app or solution that needs to talk to the API server, you must add an **additional** network rule to allow *TCP communication to port 443 of your API server's IP* (a sketch of such a rule follows this list).
-* On rare occasions, your API server IP might change during a maintenance operation. Planned maintenance operations that can change the API server IP are always communicated in advance.
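For example, a minimal hedged sketch of such a rule with Azure Firewall might look like the following. The placeholder `<API_SERVER_IP>` and the reuse of the `aksfwnr` collection created later in this article are assumptions, not the article's exact values:

```azurecli
# Hedged sketch: replace <API_SERVER_IP> with your cluster's API server IP address.
# $RG and $FWNAME refer to the firewall resources created later in this article.
az network firewall network-rule create -g $RG -f $FWNAME \
    --collection-name 'aksfwnr' -n 'apiserver' \
    --protocols 'TCP' --source-addresses '*' \
    --destination-addresses "<API_SERVER_IP>" --destination-ports 443
```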
-
-### Azure Global required network rules
-
-The required network rules and IP address dependencies are:
-
-| Destination Endpoint | Protocol | Port | Use |
-|--|--|--|--|
-| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters][aks-private-clusters], or for clusters with the *konnectivity-agent* enabled. |
-| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. This is not required for [private clusters][aks-private-clusters], or for clusters with the *konnectivity-agent* enabled. |
-| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. This is not required for nodes provisioned after March 2021. |
-| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
-| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server; those pods/deployments would use the API IP. This port is not required for [private clusters][aks-private-clusters]. |
-
-### Azure Global required FQDN / application rules
-
-The following FQDN / application rules are required:
-
-| Destination FQDN | Port | Use |
-|-|--|-|
-| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is required for clusters with *konnectivity-agent* enabled. Konnectivity also uses Application-Layer Protocol Negotiation (ALPN) to communicate between agent and server. Blocking or rewriting the ALPN extension will cause a failure. This is not required for [private clusters][aks-private-clusters]. |
-| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
-| **`management.azure.com`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
-| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
-| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
-| **`acs-mirror.azureedge.net`** | **`HTTPS:443`** | This address is for the repository required to download and install required binaries like kubenet and Azure CNI. |
-
-### Azure China 21Vianet required network rules
-
-The required network rules and IP address dependencies are:
-
-| Destination Endpoint | Protocol | Port | Use |
-|--|--|--|--|
-| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. |
-| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. |
-| **`*:22`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:22`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:22`** <br/> *Or* <br/> **`APIServerPublicIP:22`** `(only known after cluster creation)` | TCP | 22 | For tunneled secure communication between the nodes and the control plane. |
-| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. |
-| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
-| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server; those pods/deployments would use the API IP. |
-
-### Azure China 21Vianet required FQDN / application rules
-
-The following FQDN / application rules are required:
-
-| Destination FQDN | Port | Use |
-|--|--|--|
-| **`*.hcp.<location>.cx.prod.service.azk8s.cn`**| **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. |
-| **`*.tun.<location>.cx.prod.service.azk8s.cn`**| **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. |
-| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure Content Delivery Network (CDN). |
-| **`management.chinacloudapi.cn`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
-| **`login.chinacloudapi.cn`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
-| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
-| **`*.azk8s.cn`** | **`HTTPS:443`** | This address is for the repository required to download and install required binaries like kubenet and Azure CNI. |
-
-### Azure US Government required network rules
-
-The required network rules and IP address dependencies are:
-
-| Destination Endpoint | Protocol | Port | Use |
-|-|-|||
-| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. |
-| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. |
-| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. |
-| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
-| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if running pods/deployments that access the API Server; those pods/deployments would use the API IP. |
-
-### Azure US Government required FQDN / application rules
-
-The following FQDN / application rules are required:
-
-| Destination FQDN | Port | Use |
-|--|--|--|
-| **`*.hcp.<location>.cx.aks.containerservice.azure.us`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed.|
-| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
-| **`management.usgovcloudapi.net`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
-| **`login.microsoftonline.us`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
-| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
-| **`acs-mirror.azureedge.net`** | **`HTTPS:443`** | This address is for the repository required to install required binaries like kubenet and Azure CNI. |
-
-## Optional recommended FQDN / application rules for AKS clusters
-
-The following FQDN / application rules are optional but recommended for AKS clusters:
-
-| Destination FQDN | Port | Use |
-|--|--|--|
-| **`security.ubuntu.com`, `azure.archive.ubuntu.com`, `changelogs.ubuntu.com`** | **`HTTP:80`** | This address lets the Linux cluster nodes download the required security patches and updates. |
-
-If you choose to block these FQDNs, the nodes only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that node image upgrades also come with updated packages, including security fixes.
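If you do want to allow these endpoints, a hedged sketch of a matching Azure Firewall application rule might look like the following. The collection and rule names are illustrative assumptions:

```azurecli
# Illustrative sketch: allows the Ubuntu package endpoints over HTTP so Linux
# nodes can pull security patches and updates.
az network firewall application-rule create -g $RG -f $FWNAME \
    --collection-name 'aksfwar' -n 'ubuntu-updates' \
    --source-addresses '*' --protocols 'http=80' \
    --target-fqdns 'security.ubuntu.com' 'azure.archive.ubuntu.com' 'changelogs.ubuntu.com'
```

The same pattern can be adapted for the GPU, Windows Server, and add-on FQDNs listed in the sections that follow.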
-
-## GPU enabled AKS clusters
-
-### Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have GPU enabled:
-
-| Destination FQDN | Port | Use |
-|--|--|-|
-| **`nvidia.github.io`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
-| **`us.download.nvidia.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
-| **`download.docker.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
-
-## Windows Server based node pools
-
-### Required FQDN / application rules
-
-The following FQDN / application rules are required for using Windows Server based node pools:
-
-| Destination FQDN | Port | Use |
-|-|--|-|
-| **`onegetcdn.azureedge.net, go.microsoft.com`** | **`HTTPS:443`** | To install Windows-related binaries |
-| **`*.mp.microsoft.com, www.msftconnecttest.com, ctldl.windowsupdate.com`** | **`HTTP:80`** | To install Windows-related binaries |
-
-If you choose to block these FQDNs, the nodes only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that node image upgrades also come with updated packages, including security fixes.
--
-## AKS addons and integrations
-
-### Microsoft Defender for Containers
-
-#### Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have Microsoft Defender for Containers enabled.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Active Directory Authentication. |
-| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
-| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | Required to authenticate with Log Analytics workspaces.|
-
-### CSI Secret Store
-
-#### Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have CSI Secret Store enabled.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`vault.azure.net`** | **`HTTPS:443`** | Required for CSI Secret Store addon pods to talk to the Azure Key Vault server.|
-
-### Azure Monitor for containers
-
-There are two options to provide access to Azure Monitor for containers: you may allow the Azure Monitor [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) **or** provide access to the required FQDN/application rules. A sketch of the service tag option follows the network rules table below.
-
-#### Required network rules
-
-The following network rules are required:
-
-| Destination Endpoint | Protocol | Port | Use |
-|--|--|--|--|
-| [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureMonitor:443`** | TCP | 443 | This endpoint is used to send metrics data and logs to Azure Monitor and Log Analytics. |
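As a sketch, allowing this service tag with an Azure Firewall network rule might look like the following; the collection and rule names are illustrative assumptions:

```azurecli
# Illustrative sketch: allows TCP 443 to the AzureMonitor service tag.
az network firewall network-rule create -g $RG -f $FWNAME \
    --collection-name 'aksfwnr' -n 'azuremonitor' \
    --protocols 'TCP' --source-addresses '*' \
    --destination-addresses 'AzureMonitor' --destination-ports 443
```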
-
-#### Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have Azure Monitor for containers enabled:
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
-| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
-| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by omsagent, which is used to authenticate the log analytics service. |
-| **`*.monitoring.azure.com`** | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
-
-### Azure Policy
-
-#### Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have the Azure Policy add-on enabled.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`data.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. |
-| **`store.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
-| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | The Azure Policy add-on sends telemetry data to the Application Insights endpoint. |
-
-#### Azure China 21Vianet Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have the Azure Policy add-on enabled.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`data.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. |
-| **`store.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
-
-#### Azure US Government Required FQDN / application rules
-
-The following FQDN / application rules are required for AKS clusters that have the Azure Policy add-on enabled.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`data.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to policy service. |
-| **`store.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
-
-## Cluster extensions
-
-### Required FQDN / application rules
-
-The following FQDN / application rules are required for using cluster extensions on AKS clusters.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`<region>.dp.kubernetesconfiguration.azure.com`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service.|
-| **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on the AKS cluster.|
-
-#### Azure US Government Required FQDN / application rules
-
-The following FQDN / application rules are required for using cluster extensions on AKS clusters.
-
-| FQDN | Port | Use |
-|--|--|-|
-| **`<region>.dp.kubernetesconfiguration.azure.us`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service. |
-| **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on the AKS cluster.|
---
-> [!NOTE]
-> If an add-on isn't explicitly stated here, it's covered by the core requirements.
-
-## Restrict egress traffic using Azure firewall
-
-Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) FQDN Tag to simplify this configuration.
+This article provides a walkthrough of how to use the [Outbound network and FQDN rules for AKS clusters][outbound-fqdn-rules] to control egress traffic using Azure Firewall in AKS. To simplify this configuration, Azure Firewall provides an Azure Kubernetes Service (`AzureKubernetesService`) FQDN tag that you can use to restrict outbound traffic from the AKS cluster. This article also provides an example of how to configure public inbound traffic via the firewall.
> [!NOTE]
-> The FQDN tag contains all the FQDNs listed above and is kept automatically up to date.
>
-> We recommend having a minimum of 20 Frontend IPs on the Azure Firewall for production scenarios to avoid incurring in SNAT port exhaustion issues.
+> The FQDN tag contains all the FQDNs listed in [Outbound network and FQDN rules for AKS clusters][outbound-fqdn-rules] and is automatically updated.
+>
+> For production scenarios, we recommend having a *minimum of 20 frontend IPs* on the Azure Firewall to avoid SNAT port exhaustion issues.
-Below is an example architecture of the deployment:
+The following information provides an example architecture of the deployment:
![Locked down topology](media/limit-egress-traffic/aks-azure-firewall-egress.png)
-* Public Ingress is forced to flow through firewall filters
- * AKS agent nodes are isolated in a dedicated subnet.
- * [Azure Firewall](../firewall/overview.md) is deployed in its own subnet.
- * A DNAT rule translates the FW public IP into the LB frontend IP.
-* Outbound requests start from agent nodes to the Azure Firewall internal IP using a [user-defined route](egress-outboundtype.md)
- * Requests from AKS agent nodes follow a UDR that has been placed on the subnet the AKS cluster was deployed into.
+* **Public ingress is forced to flow through firewall filters**
+ * AKS agent nodes are isolated in a dedicated subnet
+ * [Azure Firewall](../firewall/overview.md) is deployed in its own subnet
+ * A DNAT rule translates the firewall public IP into the load balancer frontend IP
+* **Outbound requests start from agent nodes to the Azure Firewall internal IP using a [user-defined route (UDR)](egress-outboundtype.md)**
+ * Requests from AKS agent nodes follow a UDR that has been placed on the subnet the AKS cluster was deployed into
 * Azure Firewall egresses out of the virtual network from a public IP frontend
 * Access to the public internet or other Azure services flows to and from the firewall frontend IP address
- * Optionally, access to the AKS control plane is protected by [API server Authorized IP ranges](./api-server-authorized-ip-ranges.md), which includes the firewall public frontend IP address.
-* Internal Traffic
- * Optionally, instead or in addition to a [Public Load Balancer](load-balancer-standard.md) you can use an [Internal Load Balancer](internal-lb.md) for internal traffic, which you could isolate on its own subnet as well.
-
-The below steps make use of Azure Firewall's `AzureKubernetesService` FQDN tag to restrict the outbound traffic from the AKS cluster and provide an example how to configure public inbound traffic via the firewall.
+ * Access to the AKS control plane can be protected by [API server authorized IP ranges](./api-server-authorized-ip-ranges.md), including the firewall public frontend IP address
+* **Internal traffic**
+ * You can use an [internal load balancer](internal-lb.md) for internal traffic, which you could isolate on its own subnet, instead of or alongside a [public load balancer](load-balancer-standard.md)
-### Set configuration via environment variables
+## Set configuration using environment variables
Define a set of environment variables to use when creating resources.
FWROUTE_NAME="${PREFIX}-fwrn"
FWROUTE_NAME_INTERNET="${PREFIX}-fwinternet" ```
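For reference, a sketch of the kind of values these variables might hold. All values are illustrative assumptions, not the article's exact values:

```bash
# Illustrative values only; adjust names and region for your environment.
PREFIX="aks-egress"                           # prefix reused in resource names
RG="${PREFIX}-rg"                             # resource group
LOC="eastus"                                  # region
AKSNAME="${PREFIX}-cluster"                   # AKS cluster name
VNET_NAME="${PREFIX}-vnet"                    # virtual network
AKSSUBNET_NAME="aks-subnet"                   # subnet hosting the AKS nodes
FWSUBNET_NAME="AzureFirewallSubnet"           # Azure Firewall requires this exact subnet name
FWNAME="${PREFIX}-fw"                         # firewall
FWPUBLICIP_NAME="${PREFIX}-fwpublicip"        # firewall frontend public IP
FWIPCONFIG_NAME="${PREFIX}-fwconfig"          # firewall IP configuration
FWROUTE_TABLE_NAME="${PREFIX}-fwrt"           # route table for the AKS subnet
FWROUTE_NAME="${PREFIX}-fwrn"                 # default route through the firewall
FWROUTE_NAME_INTERNET="${PREFIX}-fwinternet"  # route for the firewall public IP
```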
-### Create a virtual network with multiple subnets
+## Create a virtual network with multiple subnets
-Provision a virtual network with two separate subnets, one for the cluster, one for the firewall. Optionally you could also create one for internal service ingress.
+Provision a virtual network with two separate subnets: one for the cluster and one for the firewall. Optionally, you can create one for internal service ingress.
![Empty network topology](media/limit-egress-traffic/empty-network.png)
-Create a resource group to hold all of the resources.
+1. Create a resource group using the [`az group create`][az-group-create] command.
-```azurecli
-# Create Resource Group
+ ```azurecli
+ az group create --name $RG --location $LOC
+ ```
-az group create --name $RG --location $LOC
-```
+2. Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall using the [`az network vnet create`][az-network-vnet-create] and [`az network vnet subnet create`][az-network-vnet-subnet-create] commands.
-Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall. Each will have their own subnet. Let's start with the AKS network.
+ ```azurecli
+ # Dedicated virtual network with AKS subnet
+ az network vnet create \
+ --resource-group $RG \
+ --name $VNET_NAME \
+ --location $LOC \
+ --address-prefixes 10.42.0.0/16 \
+ --subnet-name $AKSSUBNET_NAME \
+ --subnet-prefix 10.42.1.0/24
-```azurecli
-# Dedicated virtual network with AKS subnet
-
-az network vnet create \
- --resource-group $RG \
- --name $VNET_NAME \
- --location $LOC \
- --address-prefixes 10.42.0.0/16 \
- --subnet-name $AKSSUBNET_NAME \
- --subnet-prefix 10.42.1.0/24
-
-# Dedicated subnet for Azure Firewall (Firewall name cannot be changed)
-
-az network vnet subnet create \
- --resource-group $RG \
- --vnet-name $VNET_NAME \
- --name $FWSUBNET_NAME \
- --address-prefix 10.42.2.0/24
-```
+ # Dedicated subnet for Azure Firewall (Firewall name can't be changed)
+ az network vnet subnet create \
+ --resource-group $RG \
+ --vnet-name $VNET_NAME \
+ --name $FWSUBNET_NAME \
+ --address-prefix 10.42.2.0/24
+ ```
-### Create and set up an Azure Firewall with a UDR
+## Create and set up an Azure Firewall with a UDR
-Azure Firewall inbound and outbound rules must be configured. The main purpose of the firewall is to enable organizations to configure granular ingress and egress traffic rules into and out of the AKS Cluster.
-
-![Firewall and UDR](media/limit-egress-traffic/firewall-udr.png)
+You need to configure Azure Firewall inbound and outbound rules. The main purpose of the firewall is to enable organizations to configure granular ingress and egress traffic rules into and out of the AKS cluster.
> [!IMPORTANT]
-> If your cluster or application creates a large number of outbound connections directed to the same or small subset of destinations, you might require more firewall frontend IPs to avoid maxing out the ports per frontend IP.
-> For more information on how to create an Azure firewall with multiple IPs, see [**here**](../firewall/quick-create-multiple-ip-template.md)
+>
+> If your cluster or application creates a large number of outbound connections directed to the same or a small subset of destinations, you might require more firewall frontend IPs to avoid maxing out the ports per frontend IP.
+>
+> For more information on how to create an Azure Firewall with multiple IPs, see [Create an Azure Firewall with multiple public IP addresses using Bicep](../firewall/quick-create-multiple-ip-bicep.md).
-Create a standard SKU public IP resource that will be used as the Azure Firewall frontend address.
+![Firewall and UDR](media/limit-egress-traffic/firewall-udr.png)
-```azurecli
-az network public-ip create -g $RG -n $FWPUBLICIP_NAME -l $LOC --sku "Standard"
-```
+1. Create a standard SKU public IP resource using the [`az network public-ip create`][az-network-public-ip-create] command. This resource will be used as the Azure Firewall frontend address.
-Register the preview cli-extension to create an Azure Firewall.
+ ```azurecli
+ az network public-ip create -g $RG -n $FWPUBLICIP_NAME -l $LOC --sku "Standard"
+ ```
-```azurecli
-# Install Azure Firewall preview CLI extension
+2. Register the [Azure Firewall preview CLI extension](https://github.com/Azure/azure-cli-extensions/tree/main/src/azure-firewall) to create an Azure Firewall using the [`az extension add`][az-extension-add] command.
-az extension add --name azure-firewall
+ ```azurecli
+ az extension add --name azure-firewall
+ ```
-# Deploy Azure Firewall
+3. Create an Azure Firewall and enable DNS proxy using the [`az network firewall create`][az-network-firewall-create] command and setting the `--enable-dns-proxy` to `true`.
-az network firewall create -g $RG -n $FWNAME -l $LOC --enable-dns-proxy true
-```
+ ```azurecli
+ az network firewall create -g $RG -n $FWNAME -l $LOC --enable-dns-proxy true
+ ```
-The IP address created earlier can now be assigned to the firewall frontend.
+   Setting up the public IP address for the Azure Firewall may take a few minutes. Once it's ready, you can assign the IP address created earlier to the firewall frontend.
-> [!NOTE]
-> Set up of the public IP address to the Azure Firewall may take a few minutes.
-> To leverage FQDN on network rules we need DNS proxy enabled, when enabled the firewall will listen on port 53 and will forward DNS requests to the DNS server specified above. This will allow the firewall to translate that FQDN automatically.
+ > [!NOTE]
+ >
+ > To leverage FQDN on network rules, we need DNS proxy enabled. When DNS proxy is enabled, the firewall listens on port 53 and forwards DNS requests to the DNS server specified above. This allows the firewall to translate the FQDN automatically.
-```azurecli
-# Configure Firewall IP Config
+4. Create an Azure Firewall IP configuration using the [`az network firewall ip-config create`][az-network-firewall-ip-config-create] command.
-az network firewall ip-config create -g $RG -f $FWNAME -n $FWIPCONFIG_NAME --public-ip-address $FWPUBLICIP_NAME --vnet-name $VNET_NAME
-```
+ ```azurecli
+ az network firewall ip-config create -g $RG -f $FWNAME -n $FWIPCONFIG_NAME --public-ip-address $FWPUBLICIP_NAME --vnet-name $VNET_NAME
+ ```
-When the previous command has succeeded, save the firewall frontend IP address for configuration later.
+5. Once the previous command succeeds, save the firewall frontend IP address for configuration later.
-```azurecli
-# Capture Firewall IP Address for Later Use
+ ```azurecli
+ FWPUBLIC_IP=$(az network public-ip show -g $RG -n $FWPUBLICIP_NAME --query "ipAddress" -o tsv)
+ FWPRIVATE_IP=$(az network firewall show -g $RG -n $FWNAME --query "ipConfigurations[0].privateIpAddress" -o tsv)
+ ```
-FWPUBLIC_IP=$(az network public-ip show -g $RG -n $FWPUBLICIP_NAME --query "ipAddress" -o tsv)
-FWPRIVATE_IP=$(az network firewall show -g $RG -n $FWNAME --query "ipConfigurations[0].privateIpAddress" -o tsv)
-```
-
-> [!NOTE]
-> If you use secure access to the AKS API server with [authorized IP address ranges](./api-server-authorized-ip-ranges.md), you need to add the firewall public IP into the authorized IP range.
+ > [!NOTE]
+ >
+ > If you use secure access to the AKS API server with [authorized IP address ranges](./api-server-authorized-ip-ranges.md), you need to add the firewall public IP into the authorized IP range.
### Create a UDR with a hop to Azure Firewall
-Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change any of Azure's default routing, you do so by creating a route table.
+Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change any of Azure's default routing, you can create a route table.
-Create an empty route table to be associated with a given subnet. The route table will define the next hop as the Azure Firewall created above. Each subnet can have zero or one route table associated to it.
+1. Create an empty route table to be associated with a given subnet using the [`az network route-table create`][az-network-route-table-create] command. The route table will define the next hop as the Azure Firewall created above. Each subnet can have zero or one route table associated to it.
-```azurecli
-# Create UDR and add a route for Azure Firewall
+ ```azurecli
+ az network route-table create -g $RG -l $LOC --name $FWROUTE_TABLE_NAME
+ ```
-az network route-table create -g $RG -l $LOC --name $FWROUTE_TABLE_NAME
-az network route-table route create -g $RG --name $FWROUTE_NAME --route-table-name $FWROUTE_TABLE_NAME --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP
-az network route-table route create -g $RG --name $FWROUTE_NAME_INTERNET --route-table-name $FWROUTE_TABLE_NAME --address-prefix $FWPUBLIC_IP/32 --next-hop-type Internet
-```
+2. Create routes in the route table for the subnets using the [`az network route-table route create`][az-network-route-table-route-create] command.
+
+ ```azurecli
+ az network route-table route create -g $RG --name $FWROUTE_NAME --route-table-name $FWROUTE_TABLE_NAME --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP
+
+ az network route-table route create -g $RG --name $FWROUTE_NAME_INTERNET --route-table-name $FWROUTE_TABLE_NAME --address-prefix $FWPUBLIC_IP/32 --next-hop-type Internet
+ ```
-See [virtual network route table documentation](../virtual-network/virtual-networks-udr-overview.md#user-defined) about how you can override Azure's default system routes or add additional routes to a subnet's route table.
+For information on how to override Azure's default system routes or add additional routes to a subnet's route table, see the [virtual network route table documentation](../virtual-network/virtual-networks-udr-overview.md#user-defined).
-### Adding firewall rules
+### Add firewall rules
> [!NOTE]
-> For applications outside of the kube-system or gatekeeper-system namespaces that needs to talk to the API server, an additional network rule to allow TCP communication to port 443 for the API server IP in addition to adding application rule for fqdn-tag AzureKubernetesService is required.
+>
+> Applications outside of the kube-system or gatekeeper-system namespaces that need to talk to the API server require an additional network rule allowing TCP communication to port 443 for the API server IP, in addition to the application rule for the `AzureKubernetesService` FQDN tag.
+This section covers three network rules and an application rule you can use to configure your firewall. You may need to adapt these rules based on your deployment.
-Below are three network rules you can use to configure on your firewall, you may need to adapt these rules based on your deployment. The first rule allows access to port 9000 via TCP. The second rule allows access to port 1194 and 123 via UDP (if you're deploying to Azure China 21Vianet, you might require [more](#azure-china-21vianet-required-network-rules)). Both these rules will only allow traffic destined to the Azure Region CIDR that we're using, in this case East US.
-Finally, we'll add a third network rule opening port 123 to `ntp.ubuntu.com` FQDN via UDP (adding an FQDN as a network rule is one of the specific features of Azure Firewall, and you'll need to adapt it when using your own options).
+* The first network rule allows access to port 9000 via TCP.
+* The second network rule allows access to ports 1194 and 123 via UDP. If you're deploying to Azure China 21Vianet, see the [Azure China 21Vianet required network rules](./outbound-rules-control-egress.md#azure-china-21vianet-required-network-rules). Both of these rules only allow traffic destined for the Azure region CIDR used in this article, which is East US.
+* The third network rule opens port 123 to `ntp.ubuntu.com` FQDN via UDP. Adding an FQDN as a network rule is one of the specific features of Azure Firewall, so you'll need to adapt it when using your own options.
+* The application rule covers all needed FQDNs accessible through TCP port 443 and port 80.
-After setting the network rules, we'll also add an application rule using the `AzureKubernetesService` that covers all needed FQDNs accessible through TCP port 443 and port 80.
+1. Create the network rules using the [`az network firewall network-rule create`][az-network-firewall-network-rule-create] command.
-```
-# Add FW Network Rules
+ ```azurecli
+ az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 1194 --action allow --priority 100
-az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 1194 --action allow --priority 100
-az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000
-az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
+ az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$LOC" --destination-ports 9000
-# Add FW Application Rules
+ az network firewall network-rule create -g $RG -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
+ ```
-az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100
-```
+2. Create the application rule using the [`az network firewall application-rule create`][az-network-firewall-application-rule-create] command.
-See [Azure Firewall documentation](../firewall/overview.md) to learn more about the Azure Firewall service.
+ ```azurecli
+ az network firewall application-rule create -g $RG -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100
+ ```
+
+To learn more about Azure Firewall, see the [Azure Firewall documentation](../firewall/overview.md).
### Associate the route table to AKS
-To associate the cluster with the firewall, the dedicated subnet for the cluster's subnet must reference the route table created above. Association can be done by issuing a command to the virtual network holding both the cluster and firewall to update the route table of the cluster's subnet.
+To associate the cluster with the firewall, the cluster's dedicated subnet must reference the route table created above. Use the [`az network vnet subnet update`][az-network-vnet-subnet-update] command to associate the route table to AKS.
```azurecli
-# Associate route table with next hop to Firewall to the AKS subnet
- az network vnet subnet update -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --route-table $FWROUTE_TABLE_NAME ```
-### Deploy AKS with outbound type of UDR to the existing network
+## Deploy an AKS cluster with a UDR outbound type to the existing network
-Now an AKS cluster can be deployed into the existing virtual network. We'll also use [outbound type `userDefinedRouting`](egress-outboundtype.md), this feature ensures any outbound traffic will be forced through the firewall and no other egress paths will exist (by default the Load Balancer outbound type could be used).
+Now, you can deploy an AKS cluster into the existing virtual network. You'll use the [`userDefinedRouting` outbound type](egress-outboundtype.md), which ensures any outbound traffic is forced through the firewall and no other egress paths exist. You can also use the [`loadBalancer` outbound type](egress-outboundtype.md#outbound-type-of-loadbalancer).
![aks-deploy](media/limit-egress-traffic/aks-udr-fw.png)
-The target subnet to be deployed into is defined with the environment variable, `$SUBNETID`. We didn't define the `$SUBNETID` variable in the previous steps. To set the value for the subnet ID, you can use the following command:
+The target subnet to be deployed into is defined with the environment variable `$SUBNETID`. Set the value for the subnet ID using the following command:
```azurecli SUBNETID=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKSSUBNET_NAME --query id -o tsv)
SUBNETID=$(az network vnet subnet show -g $RG --vnet-name $VNET_NAME --name $AKS
You'll define the outbound type to use the UDR that already exists on the subnet. This configuration will enable AKS to skip the setup and IP provisioning for the load balancer.
-> [!IMPORTANT]
-> For more information on outbound type UDR including limitations, see [**egress outbound type UDR**](egress-outboundtype.md#limitations).
- > [!TIP]
-> Additional features can be added to the cluster deployment such as [**Private Cluster**](private-clusters.md).
+> You can add additional features to the cluster deployment, such as [**private clusters**](private-clusters.md).
>
-> The AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) can be added to limit API server access to only the firewall's public endpoint. The authorized IP ranges feature is denoted in the diagram as optional. When enabling the authorized IP range feature to limit API server access, your developer tools must use a jumpbox from the firewall's virtual network or you must add all developer endpoints to the authorized IP range.
+> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. The authorized IP ranges feature is denoted in the diagram as optional. When enabling the authorized IP range feature to limit API server access, your developer tools must use a jumpbox from the firewall's virtual network, or you must add all developer endpoints to the authorized IP range.
-#### Create an AKS cluster with system-assigned identities
+### Create an AKS cluster with system-assigned identities
> [!NOTE]
-> AKS will create a system-assigned kubelet identity in the Node resource group if you do not [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
->
-> For user defined routing (UDR), system-assigned identity only supports CNI network plugin. Because for kubelet network plugin, AKS cluster needs permission on route table as kubernetes cloud-provider manages rules.
+> AKS will create a system-assigned kubelet identity in the node resource group if you don't [specify your own kubelet managed identity][Use a pre-created kubelet managed identity].
+>
+> For user-defined routing, system-assigned identity only supports the CNI network plugin.
-You can create an AKS cluster using a system-assigned managed identity with CNI network plugin by running the following CLI command.
+Create an AKS cluster using a system-assigned managed identity with the CNI network plugin using the [`az aks create`][az-aks-create] command.
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \
az aks create -g $RG -n $AKSNAME -l $LOC \
--api-server-authorized-ip-ranges $FWPUBLIC_IP ```
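A fuller sketch of this command, assuming the variables defined earlier and the Azure CNI network plugin; the node count and exact flag set are assumptions, not the article's exact command:

```azurecli
# Sketch under assumed values: deploys into the existing subnet with the
# userDefinedRouting outbound type so all egress flows through the firewall.
az aks create -g $RG -n $AKSNAME -l $LOC \
    --node-count 3 \
    --network-plugin azure \
    --outbound-type userDefinedRouting \
    --vnet-subnet-id $SUBNETID \
    --api-server-authorized-ip-ranges $FWPUBLIC_IP
```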
-#### Create an AKS cluster with user-assigned identities
+### Create user-assigned identities
-##### Create user-assigned managed identities
+If you don't have user-assigned identities, follow the steps in this section. If you already have user-assigned identities, skip to [Create an AKS cluster with user-assigned identities](#create-an-aks-cluster-with-user-assigned-identities).
-If you don't have a control plane managed identity, you can create by running the following [az identity create][az-identity-create] command:
+1. Create a control plane managed identity using the [`az identity create`][az-identity-create] command.
-```azurecli-interactive
-az identity create --name myIdentity --resource-group myResourceGroup
-```
+ ```azurecli-interactive
+ az identity create --name myIdentity --resource-group myResourceGroup
+ ```
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
- "location": "westus2",
- "name": "myIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+ The output should resemble the following example output:
-If you don't have a kubelet managed identity, you can create one by running the following [az identity create][az-identity-create] command:
+ ```output
+ {
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity",
+ "location": "westus2",
+ "name": "myIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
-```azurecli-interactive
-az identity create --name myKubeletIdentity --resource-group myResourceGroup
-```
+2. Create a kubelet managed identity using the [`az identity create`][az-identity-create] command.
-The output should resemble the following:
-
-```output
-{
- "clientId": "<client-id>",
- "clientSecretUrl": "<clientSecretUrl>",
- "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
- "location": "westus2",
- "name": "myKubeletIdentity",
- "principalId": "<principal-id>",
- "resourceGroup": "myResourceGroup",
- "tags": {},
- "tenantId": "<tenant-id>",
- "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
-}
-```
+ ```azurecli
+ az identity create --name myKubeletIdentity --resource-group myResourceGroup
+ ```
-> [!NOTE]
-> For creating and using your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you are using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment.][add role to identity]
+ The output should resemble the following example output:
+
+ ```output
+ {
+ "clientId": "<client-id>",
+ "clientSecretUrl": "<clientSecretUrl>",
+ "id": "/subscriptions/<subscriptionid>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myKubeletIdentity",
+ "location": "westus2",
+ "name": "myKubeletIdentity",
+ "principalId": "<principal-id>",
+ "resourceGroup": "myResourceGroup",
+ "tags": {},
+ "tenantId": "<tenant-id>",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities"
+ }
+ ```
+
+ > [!NOTE]
+ > If you create your own VNet and route table where the resources are outside of the worker node resource group, the CLI will add the role assignment automatically. If you're using an ARM template or other client, you need to use the Principal ID of the cluster managed identity to perform a [role assignment][add role to identity].
-##### Create an AKS cluster with user-assigned identities
+### Create an AKS cluster with user-assigned identities
-Now you can use the following command to create your AKS cluster with your existing identities in the subnet. Provide the control plane identity resource ID via `assign-identity` and the kubelet managed identity via `assign-kubelet-identity`:
+Create an AKS cluster with your existing identities in the subnet using the [`az aks create`][az-aks-create] command, providing the control plane identity resource ID via `--assign-identity` and the kubelet managed identity via `--assign-kubelet-identity`.
```azurecli az aks create -g $RG -n $AKSNAME -l $LOC \
az aks create -g $RG -n $AKSNAME -l $LOC \
--assign-kubelet-identity <kubelet-identity-resource-id> ```
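A fuller sketch of this command under assumed values; the identity resource IDs come from the `az identity create` output shown earlier, and the exact flag set is an assumption:

```azurecli
# Sketch under assumed values: deploys with user-assigned control plane and
# kubelet identities into the existing subnet behind the firewall.
az aks create -g $RG -n $AKSNAME -l $LOC \
    --network-plugin azure \
    --outbound-type userDefinedRouting \
    --vnet-subnet-id $SUBNETID \
    --enable-managed-identity \
    --assign-identity <control-plane-identity-resource-id> \
    --assign-kubelet-identity <kubelet-identity-resource-id>
```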
+## Enable developer access to the API server
-### Enable developer access to the API server
+If you used authorized IP ranges for your cluster in the previous step, you need to add your developer tooling IP addresses to the AKS cluster list of approved IP ranges so you can access the API server from there. You can also configure a jumpbox with the needed tooling inside a separate subnet in the firewall's virtual network.
-If you used authorized IP ranges for the cluster on the previous step, you must add your developer tooling IP addresses to the AKS cluster list of approved IP ranges in order to access the API server from there. Another option is to configure a jumpbox with the needed tooling inside a separate subnet in the Firewall's virtual network.
+1. Retrieve your IP address using the following command:
-Add another IP address to the approved ranges with the following command
+ ```bash
+ CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
+ ```
-```azurecli
-# Retrieve your IP address
-CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
+2. Add the IP address to the approved ranges using the [`az aks update`][az-aks-update] command.
-# Add to AKS approved list
-az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
-```
+ ```azurecli
+ az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
+ ```
-Use the [az aks get-credentials][az-aks-get-credentials] command to configure `kubectl` to connect to your newly created Kubernetes cluster.
+3. Configure `kubectl` to connect to your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-```azurecli
-az aks get-credentials -g $RG -n $AKSNAME
-```
+ ```azurecli
+ az aks get-credentials -g $RG -n $AKSNAME
+ ```
-### Deploy a public service
+## Deploy a public service
-You can now start exposing services and deploying applications to this cluster. In this example, we'll expose a public service, but you may also choose to expose an internal service via [internal load balancer](internal-lb.md).
+You can now start exposing services and deploying applications to this cluster. In this example, we'll expose a public service, but you also might want to expose an internal service using an [internal load balancer](internal-lb.md).
![Public Service DNAT](media/limit-egress-traffic/aks-create-svc.png)
-Deploy the Azure voting app application by copying the yaml below to a file named `example.yaml`.
-
-```yaml
-# voting-storage-deployment.yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: voting-storage
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: voting-storage
- template:
+1. Copy the following YAML and save it as a file named `example.yaml`.
+
+ ```yaml
+ # voting-storage-deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: voting-storage
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: voting-storage
+ template:
+ metadata:
+ labels:
+ app: voting-storage
+ spec:
+ containers:
+ - name: voting-storage
+ image: mcr.microsoft.com/aks/samples/voting/storage:2.0
+ args: ["--ignore-db-dir=lost+found"]
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ ports:
+ - containerPort: 3306
+ name: mysql
+ volumeMounts:
+ - name: mysql-persistent-storage
+ mountPath: /var/lib/mysql
+ env:
+ - name: MYSQL_ROOT_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_ROOT_PASSWORD
+ - name: MYSQL_USER
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_USER
+ - name: MYSQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_PASSWORD
+ - name: MYSQL_DATABASE
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_DATABASE
+ volumes:
+ - name: mysql-persistent-storage
+ persistentVolumeClaim:
+ claimName: mysql-pv-claim
+
+ # voting-storage-secret.yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: voting-storage-secret
+ type: Opaque
+ data:
+ MYSQL_USER: ZGJ1c2Vy
+ MYSQL_PASSWORD: UGFzc3dvcmQxMg==
+ MYSQL_DATABASE: YXp1cmV2b3Rl
+ MYSQL_ROOT_PASSWORD: UGFzc3dvcmQxMg==
+
+ # voting-storage-pv-claim.yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: mysql-pv-claim
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 1Gi
+
+ # voting-storage-service.yaml
+ apiVersion: v1
+ kind: Service
metadata:
- labels:
+ name: voting-storage
+ labels:
app: voting-storage spec:
- containers:
- - name: voting-storage
- image: mcr.microsoft.com/aks/samples/voting/storage:2.0
- args: ["--ignore-db-dir=lost+found"]
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- ports:
- - containerPort: 3306
- name: mysql
- volumeMounts:
- - name: mysql-persistent-storage
- mountPath: /var/lib/mysql
- env:
- - name: MYSQL_ROOT_PASSWORD
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_ROOT_PASSWORD
- - name: MYSQL_USER
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_USER
- - name: MYSQL_PASSWORD
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_PASSWORD
- - name: MYSQL_DATABASE
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_DATABASE
- volumes:
- - name: mysql-persistent-storage
- persistentVolumeClaim:
- claimName: mysql-pv-claim
-
-# voting-storage-secret.yaml
-apiVersion: v1
-kind: Secret
-metadata:
- name: voting-storage-secret
-type: Opaque
-data:
- MYSQL_USER: ZGJ1c2Vy
- MYSQL_PASSWORD: UGFzc3dvcmQxMg==
- MYSQL_DATABASE: YXp1cmV2b3Rl
- MYSQL_ROOT_PASSWORD: UGFzc3dvcmQxMg==
-
-# voting-storage-pv-claim.yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: mysql-pv-claim
-spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 1Gi
-
-# voting-storage-service.yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: voting-storage
- labels:
- app: voting-storage
-spec:
- ports:
- - port: 3306
- name: mysql
- selector:
- app: voting-storage
-
-# voting-app-deployment.yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: voting-app
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: voting-app
- template:
+ ports:
+ - port: 3306
+ name: mysql
+ selector:
+ app: voting-storage
+
+ # voting-app-deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
+ name: voting-app
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: voting-app
+ template:
+ metadata:
+ labels:
+ app: voting-app
+ spec:
+ containers:
+ - name: voting-app
+ image: mcr.microsoft.com/aks/samples/voting/app:2.0
+ imagePullPolicy: Always
+ ports:
+ - containerPort: 8080
+ name: http
+ env:
+ - name: MYSQL_HOST
+ value: "voting-storage"
+ - name: MYSQL_USER
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_USER
+ - name: MYSQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_PASSWORD
+ - name: MYSQL_DATABASE
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_DATABASE
+ - name: ANALYTICS_HOST
+ value: "voting-analytics"
+
+ # voting-app-service.yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: voting-app
+ labels:
app: voting-app spec:
- containers:
- - name: voting-app
- image: mcr.microsoft.com/aks/samples/voting/app:2.0
- imagePullPolicy: Always
- ports:
- - containerPort: 8080
- name: http
- env:
- - name: MYSQL_HOST
- value: "voting-storage"
- - name: MYSQL_USER
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_USER
- - name: MYSQL_PASSWORD
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_PASSWORD
- - name: MYSQL_DATABASE
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_DATABASE
- - name: ANALYTICS_HOST
- value: "voting-analytics"
-
-# voting-app-service.yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: voting-app
- labels:
- app: voting-app
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- targetPort: 8080
- name: http
- selector:
- app: voting-app
-
-# voting-analytics-deployment.yaml
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: voting-analytics
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: voting-analytics
- version: "2.0"
- template:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ targetPort: 8080
+ name: http
+ selector:
+ app: voting-app
+
+ # voting-analytics-deployment.yaml
+ apiVersion: apps/v1
+ kind: Deployment
metadata:
- labels:
+ name: voting-analytics
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: voting-analytics
+ version: "2.0"
+ template:
+ metadata:
+ labels:
+ app: voting-analytics
+ version: "2.0"
+ spec:
+ containers:
+ - name: voting-analytics
+ image: mcr.microsoft.com/aks/samples/voting/analytics:2.0
+ imagePullPolicy: Always
+ ports:
+ - containerPort: 8080
+ name: http
+ env:
+ - name: MYSQL_HOST
+ value: "voting-storage"
+ - name: MYSQL_USER
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_USER
+ - name: MYSQL_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_PASSWORD
+ - name: MYSQL_DATABASE
+ valueFrom:
+ secretKeyRef:
+ name: voting-storage-secret
+ key: MYSQL_DATABASE
+
+ # voting-analytics-service.yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: voting-analytics
+ labels:
app: voting-analytics
- version: "2.0"
spec:
- containers:
- - name: voting-analytics
- image: mcr.microsoft.com/aks/samples/voting/analytics:2.0
- imagePullPolicy: Always
- ports:
- - containerPort: 8080
- name: http
- env:
- - name: MYSQL_HOST
- value: "voting-storage"
- - name: MYSQL_USER
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_USER
- - name: MYSQL_PASSWORD
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_PASSWORD
- - name: MYSQL_DATABASE
- valueFrom:
- secretKeyRef:
- name: voting-storage-secret
- key: MYSQL_DATABASE
-
-# voting-analytics-service.yaml
-apiVersion: v1
-kind: Service
-metadata:
- name: voting-analytics
- labels:
- app: voting-analytics
-spec:
- ports:
- - port: 8080
- name: http
- selector:
- app: voting-analytics
-```
+ ports:
+ - port: 8080
+ name: http
+ selector:
+ app: voting-analytics
+ ```
-Deploy the service by running:
+2. Deploy the service using the `kubectl apply` command.
-```bash
-kubectl apply -f example.yaml
-```
+ ```bash
+ kubectl apply -f example.yaml
+ ```
-### Add a DNAT rule to Azure Firewall
+## Add a DNAT rule to Azure Firewall
> [!IMPORTANT]
-> When you use Azure Firewall to restrict egress traffic and create a user-defined route (UDR) to force all egress traffic, make sure you create an appropriate DNAT rule in Firewall to correctly allow ingress traffic. Using Azure Firewall with a UDR breaks the ingress setup due to asymmetric routing. (The issue occurs if the AKS subnet has a default route that goes to the firewall's private IP address, but you're using a public load balancer - ingress or Kubernetes service of type: LoadBalancer). In this case, the incoming load balancer traffic is received via its public IP address, but the return path goes through the firewall's private IP address. Because the firewall is stateful, it drops the returning packet because the firewall isn't aware of an established session. To learn how to integrate Azure Firewall with your ingress or service load balancer, see [Integrate Azure Firewall with Azure Standard Load Balancer](../firewall/integrate-lb.md).
-
-To configure inbound connectivity, a DNAT rule must be written to the Azure Firewall. To test connectivity to your cluster, a rule is defined for the firewall frontend public IP address to route to the internal IP exposed by the internal service.
+>
+> When you use Azure Firewall to restrict egress traffic and create a UDR to force all egress traffic, make sure you create an appropriate DNAT rule in Azure Firewall to correctly allow ingress traffic. Using Azure Firewall with a UDR breaks the ingress setup due to asymmetric routing. The issue occurs if the AKS subnet has a default route that goes to the firewall's private IP address, but you're using a public load balancer (ingress or a Kubernetes service of type `LoadBalancer`). In this case, the incoming load balancer traffic is received via its public IP address, but the return path goes through the firewall's private IP address. Because the firewall is stateful, it drops the returning packet because the firewall isn't aware of an established session. To learn how to integrate Azure Firewall with your ingress or service load balancer, see [Integrate Azure Firewall with Azure Standard Load Balancer](../firewall/integrate-lb.md).
-The destination address can be customized as it's the port on the firewall to be accessed. The translated address must be the IP address of the internal load balancer. The translated port must be the exposed port for your Kubernetes service.
+To configure inbound connectivity, you need to add a DNAT rule to Azure Firewall. To test connectivity to your cluster, a rule is defined for the firewall frontend public IP address to route to the internal IP exposed by the internal service. The destination address can be customized. The translated address must be the IP address of the internal load balancer, and the translated port must be the exposed port of your Kubernetes service. You also need the internal IP address assigned to the load balancer created by the Kubernetes service.
-You'll need to specify the internal IP address assigned to the load balancer created by the Kubernetes service. Retrieve the address by running:
+1. Get the internal IP address assigned to the load balancer using the `kubectl get services` command.
-```bash
-kubectl get services
-```
+ ```bash
+ kubectl get services
+ ```
-The IP address needed will be listed in the EXTERNAL-IP column, similar to the following.
+ The IP address will be listed in the `EXTERNAL-IP` column, as shown in the following example output:
-```bash
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-kubernetes ClusterIP 10.41.0.1 <none> 443/TCP 10h
-voting-analytics ClusterIP 10.41.88.129 <none> 8080/TCP 9m
-voting-app LoadBalancer 10.41.185.82 20.39.18.6 80:32718/TCP 9m
-voting-storage ClusterIP 10.41.221.201 <none> 3306/TCP 9m
-```
+    ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ kubernetes ClusterIP 10.41.0.1 <none> 443/TCP 10h
+ voting-analytics ClusterIP 10.41.88.129 <none> 8080/TCP 9m
+ voting-app LoadBalancer 10.41.185.82 20.39.18.6 80:32718/TCP 9m
+ voting-storage ClusterIP 10.41.221.201 <none> 3306/TCP 9m
+ ```
-Get the service IP by running:
+2. Get the service IP using the `kubectl get svc voting-app` command.
-```bash
-SERVICE_IP=$(kubectl get svc voting-app -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
-```
+ ```bash
+ SERVICE_IP=$(kubectl get svc voting-app -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
+ ```
-Add the NAT rule by running:
+3. Add the NAT rule using the [`az network firewall nat-rule create`][az-network-firewall-nat-rule-create] command.
-```azurecli
-az network firewall nat-rule create --collection-name exampleset --destination-addresses $FWPUBLIC_IP --destination-ports 80 --firewall-name $FWNAME --name inboundrule --protocols Any --resource-group $RG --source-addresses '*' --translated-port 80 --action Dnat --priority 100 --translated-address $SERVICE_IP
-```
+ ```azurecli
+ az network firewall nat-rule create --collection-name exampleset --destination-addresses $FWPUBLIC_IP --destination-ports 80 --firewall-name $FWNAME --name inboundrule --protocols Any --resource-group $RG --source-addresses '*' --translated-port 80 --action Dnat --priority 100 --translated-address $SERVICE_IP
+ ```
-### Validate connectivity
+## Validate connectivity
Navigate to the Azure Firewall frontend IP address in a browser to validate connectivity.
-You should see the AKS voting app. In this example, the Firewall public IP was `52.253.228.132`.
+You should see the AKS voting app. In this example, the firewall public IP was `52.253.228.132`.
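+
+If you prefer the command line, you can also check with `curl` (a minimal sketch, assuming the firewall public IP is still stored in the `$FWPUBLIC_IP` variable from the earlier steps):
+
+```bash
+# Request only the response headers; an HTTP 200 indicates the DNAT rule works.
+curl -I "http://$FWPUBLIC_IP"
+```
+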
![Screenshot shows the A K S Voting App with buttons for Cats, Dogs, and Reset, and totals.](media/limit-egress-traffic/aks-vote.png)

### Clean up resources
-To clean up Azure resources, delete the AKS resource group.
+To clean up Azure resources, delete the AKS resource group using the [`az group delete`][az-group-delete] command.
```azurecli
az group delete -g $RG
```
## Next steps
-In this article, you learned what ports and addresses to allow if you want to restrict egress traffic for the cluster. You also saw how to secure your outbound traffic using Azure Firewall.
-
-If needed, you can generalize the steps above to forward the traffic to your preferred egress solution, following the [Outbound Type `userDefinedRoute` documentation](egress-outboundtype.md).
-
-If you want to restrict how pods communicate between themselves and East-West traffic restrictions within cluster see [Secure traffic between pods using network policies in AKS][network-policy].
+In this article, you learned how to secure your outbound traffic using Azure Firewall. If needed, you can generalize the steps above to forward the traffic to your preferred egress solution following the [Outbound Type `userDefinedRoute` documentation](egress-outboundtype.md).
<!-- LINKS - internal -->
-[aks-quickstart-cli]: kubernetes-walkthrough.md
-[aks-quickstart-portal]: kubernetes-walkthrough-portal.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[network-policy]: use-network-policies.md
-[azure-firewall]: ../firewall/overview.md
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[aks-upgrade]: upgrade-cluster.md
-[aks-support-policies]: support-policies.md
-[aks-faq]: faq.md
-[aks-private-clusters]: private-clusters.md
+
+[az-group-create]: /cli/azure/group#az_group_create
+[outbound-fqdn-rules]: ./outbound-rules-control-egress.md
+[az-network-vnet-create]: /cli/azure/network/vnet#az_network_vnet_create
+[az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create
+[az-network-vnet-subnet-update]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_update
+[az-network-public-ip-create]: /cli/azure/network/public-ip#az_network_public_ip_create
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-network-firewall-create]: /cli/azure/network/firewall#az_network_firewall_create
+[az-network-firewall-ip-config-create]: /cli/azure/network/firewall/ip-config#az_network_firewall_ip_config_create
+[az-network-route-table-create]: /cli/azure/network/route-table#az_network_route_table_create
+[az-network-route-table-route-create]: /cli/azure/network/route-table/route#az_network_route_table_route_create
+[az-network-firewall-network-rule-create]: /cli/azure/network/firewall/network-rule#az_network_firewall_network_rule_create
+[az-network-firewall-application-rule-create]: /cli/azure/network/firewall/application-rule#az_network_firewall_application_rule_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-network-firewall-nat-rule-create]: /cli/azure/network/firewall/nat-rule#az-network-firewall-nat-rule-create
+[az-group-delete]: /cli/azure/group#az_group_delete
[add role to identity]: use-managed-identity.md#add-role-assignment-for-control-plane-identity
-[Create an AKS cluster with user-assigned identities]: limit-egress-traffic.md#create-an-aks-cluster-with-user-assigned-identities
[Use a pre-created kubelet managed identity]: use-managed-identity.md#use-a-pre-created-kubelet-managed-identity
[az-identity-create]: /cli/azure/identity#az_identity_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Manage Abort Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-abort-operations.md
The abort operation supports the following scenarios:
You can use the [az aks nodepool](/cli/azure/aks/nodepool) command with the `operation-abort` argument to abort an operation on a node pool or a managed cluster.
-The following example terminates an operation on a node pool on a specified cluster by its name and resource group that holds the cluster.
-
+The following example terminates an operation on a node pool on a specified cluster.
```azurecli-interactive
az aks nodepool operation-abort --resource-group myResourceGroup --cluster-name myAKSCluster --name myNodePool
```
-The following example terminates an operation against a specified managed cluster its name and resource group that holds the cluster.
+The following example terminates an operation on a specified cluster.
```azurecli-interactive
az aks operation-abort --name myAKSCluster --resource-group myResourceGroup
```
The following example terminates a process for a specified agent pool.
```rest
/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort
```
-The following example terminates a process for a specified managed cluster.
+The following example terminates a process for a specified cluster.
```rest /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/abort
aks Manage Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-azure-rbac.md
# Use Azure role-based access control for Kubernetes Authorization
-When you leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-aad.md), you can use Azure AD users, groups, or service principals as subjects in [Kubernetes role-based access control (Kubernetes RBAC)][kubernetes-rbac]. This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately.
+When you leverage [integrated authentication between Azure Active Directory (Azure AD) and AKS](managed-azure-ad.md), you can use Azure AD users, groups, or service principals as subjects in [Kubernetes role-based access control (Kubernetes RBAC)][kubernetes-rbac]. This feature frees you from having to separately manage user identities and credentials for Kubernetes. However, you still have to set up and manage Azure RBAC and Kubernetes RBAC separately.
This article covers how to use Azure RBAC for Kubernetes Authorization, which allows for the unified management and access control across Azure resources, AKS, and Kubernetes resources. For more information, see [Azure RBAC for Kubernetes Authorization][kubernetes-rbac].
This article covers how to use Azure RBAC for Kubernetes Authorization, which al
* You need the Azure CLI version 2.24.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* You need `kubectl`, with a minimum version of [1.18.3](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1183).
-* You need managed Azure AD integration enabled on your cluster before you can add Azure RBAC for Kubernetes authorization. If you need to enable managed Azure AD integration, see [Use Azure AD in AKS](managed-aad.md).
+* You need managed Azure AD integration enabled on your cluster before you can add Azure RBAC for Kubernetes authorization. If you need to enable managed Azure AD integration, see [Use Azure AD in AKS](managed-azure-ad.md).
* If you have CRDs and are making custom role definitions, the only way to cover CRDs today is to use `Microsoft.ContainerService/managedClusters/*/read`. For the remaining objects, you can use the specific API groups, such as `Microsoft.ContainerService/apps/deployments/read`.
* New role assignments can take up to five minutes to propagate and be updated by the authorization server.
* This article requires that the Azure AD tenant configured for authentication is the same as the tenant for the subscription that holds your AKS cluster.
To learn more about AKS authentication, authorization, Kubernetes RBAC, and Azur
[az-provider-register]: /cli/azure/provider#az_provider_register
[az-group-create]: /cli/azure/group#az_group_create
[az-aks-update]: /cli/azure/aks#az_aks_update
-[managed-aad]: ./managed-aad.md
+[managed-aad]: ./managed-azure-ad.md
[install-azure-cli]: /cli/azure/install-azure-cli
[az-role-definition-create]: /cli/azure/role/definition#az_role_definition_create
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get-credentials
aks Manage Local Accounts Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-local-accounts-managed-azure-ad.md
+
+ Title: Manage local accounts with AKS-managed Azure Active Directory integration
+description: Learn how to manage local accounts when integrating Azure AD in your Azure Kubernetes Service (AKS) clusters.
+ Last updated : 04/20/2023+++
+# Manage local accounts with AKS-managed Azure Active Directory integration
+
+When you deploy an AKS cluster, local accounts are enabled by default. Even when you enable RBAC or Azure AD integration, `--admin` access still exists as a non-auditable backdoor option. This article shows you how to disable local accounts on an existing cluster, create a new cluster with local accounts disabled, and re-enable local accounts on existing clusters.
+
+## Before you begin
+
+* See [AKS-managed Azure Active Directory integration](./managed-azure-ad.md) for an overview and setup instructions.
+
+## Disable local accounts
+
+You can disable local accounts using the `--disable-local-accounts` parameter. The `properties.disableLocalAccounts` field has been added to the managed cluster API to indicate whether the feature is enabled on the cluster. A quick verification sketch follows the note below.
+
+> [!NOTE]
+>
+> * On clusters with Azure AD integration enabled, users assigned to an Azure AD administrators group specified by `aad-admin-group-object-ids` can still gain access using non-administrator credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to `true`, any attempt to authenticate with user or admin credentials will fail.
+>
+> * After disabling local user accounts on an existing AKS cluster where users might have authenticated with local accounts, the administrator must [rotate the cluster certificates](certificate-rotation.md) to revoke certificates they might have had access to. If this is a new cluster, no action is required.
+
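+You can verify the current state of the feature at any time with a query (a quick sketch; `disableLocalAccounts` is the flattened field name in the CLI output):
+
+```azurecli-interactive
+# Returns true when local accounts are disabled on the cluster.
+az aks show -g <resource-group> -n <cluster-name> --query disableLocalAccounts
+```
+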
+### Create a new cluster without local accounts
+
+1. Create a new AKS cluster without any local accounts using the [`az aks create`][az-aks-create] command with the `disable-local-accounts` flag.
+
+ ```azurecli-interactive
+ az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
+ ```
+
+2. In the output, confirm local accounts are disabled by checking that the field `properties.disableLocalAccounts` is set to `true`.
+
+ ```output
+ "properties": {
+ ...
+ "disableLocalAccounts": true,
+ ...
+ }
+ ```
+
+3. Run the [`az aks get-credentials`][az-aks-get-credentials] command to ensure the cluster is set to disable local accounts.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+ ```
+
+ Your output should show the following error message indicating the feature is preventing access:
+
+ ```output
+ Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
+ ```
+
+### Disable local accounts on an existing cluster
+
+1. Disable local accounts on an existing Azure AD integration enabled AKS cluster using the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
+
+ ```azurecli-interactive
+ az aks update -g <resource-group> -n <cluster-name> --disable-local-accounts
+ ```
+
+2. In the output, confirm local accounts are disabled by checking that the field `properties.disableLocalAccounts` is set to `true`.
+
+ ```output
+ "properties": {
+ ...
+ "disableLocalAccounts": true,
+ ...
+ }
+ ```
+
+3. Run the [`az aks get-credentials`][az-aks-get-credentials] command to ensure the cluster is set to disable local accounts.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+ ```
+
+ Your output should show the following error message indicating the feature is preventing access:
+
+ ```output
+ Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
+ ```
+
+### Re-enable local accounts on an existing cluster
+
+1. Re-enable a disabled local account on an existing cluster using the [`az aks update`][az-aks-update] command with the `enable-local-accounts` parameter.
+
+ ```azurecli-interactive
+ az aks update -g <resource-group> -n <cluster-name> --enable-local-accounts
+ ```
+
+2. In the output, confirm local accounts are re-enabled by checking that the field `properties.disableLocalAccounts` is set to `false`.
+
+ ```output
+ "properties": {
+ ...
+ "disableLocalAccounts": false,
+ ...
+ }
+ ```
+
+3. Run the [`az aks get-credentials`][az-aks-get-credentials] command to ensure the cluster is set to enable local accounts.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
+ ```
+
+ Your output should show the following message indicating you have successfully enabled local accounts on the cluster:
+
+ ```output
+ Merged "<cluster-name>-admin" as current context in C:\Users\<username>\.kube\config
+ ```
+
+## Next steps
+
+* Learn about [Azure RBAC integration for Kubernetes Authorization][azure-rbac-integration].
+
+<!-- LINKS - Internal -->
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[azure-rbac-integration]: manage-azure-rbac.md
aks Managed Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-aad.md
- Title: Use Azure AD in Azure Kubernetes Service
-description: Learn how to use Azure AD in Azure Kubernetes Service (AKS)
- Previously updated : 04/17/2023----
-# AKS-managed Azure Active Directory integration
-
-AKS-managed Azure Active Directory (Azure AD) integration simplifies the Azure AD integration process. Previously, you were required to create a client and server app, and the Azure AD tenant had to grant Directory Read permissions. Now, the AKS resource provider manages the client and server apps for you.
-
-## Azure AD authentication overview
-
-Cluster administrators can configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information on OpenID Connect, see the [Open ID connect documentation][open-id-connect].
-
-Learn more about the Azure AD integration flow in the [Azure AD documentation](concepts-identity.md#azure-ad-integration).
-
-## Limitations
-
-* AKS-managed Azure AD integration can't be disabled.
-* Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD isn't supported.
-* Clusters without Kubernetes RBAC enabled aren't supported with AKS-managed Azure AD integration.
-
-## Before you begin
-
-* Make sure Azure CLI version 2.29.0 or later is installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* You need `kubectl`, with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin). The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than 1 version. You'll experience authentication issues if you don't use the correct version.
-* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
-* This article requires that you have an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant cluster admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command.
-
-## Create an AKS cluster with Azure AD enabled
-
-1. Create an Azure resource group using the [`az group create`][az-group-create] command.
-
- ```azurecli-interactive
- az group create --name myResourceGroup --location centralus
- ```
-
-2. Create an AKS cluster and enable administration access for your Azure AD group using the [`az aks create`][az-aks-create] command.
-
- ```azurecli-interactive
- az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
- ```
-
- A successful creation of an AKS-managed Azure AD cluster has the following section in the response body:
-
- ```output
- "AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
- ```
-
-## Access an Azure AD enabled cluster
-
-Before you access the cluster using an Azure AD defined group, you need the [Azure Kubernetes Service Cluster User](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-user-role) built-in role.
-
-1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
- ```
-
-2. Follow the instructions to sign in.
-
-3. Use the `kubectl get nodes` command to view nodes in the cluster.
-
- ```azurecli-interactive
- kubectl get nodes
- ```
-
-4. Set up [Azure role-based access control (Azure RBAC)](./azure-ad-rbac.md) to configure other security groups for your clusters.
-
-## Troubleshooting access issues with Azure AD
-
-> [!IMPORTANT]
-> The steps described in this section bypass the normal Azure AD group authentication. Use them only in an emergency.
-
-If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still obtain the admin credentials to access the cluster directly. You need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
-
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin
-```
-
-## Enable AKS-managed Azure AD integration on your existing cluster
-
-Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access on your cluster.
-
-```azurecli-interactive
-az aks update -g MyResourceGroup -n MyManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>]
-```
-
-A successful activation of an AKS-managed Azure AD cluster has the following section in the response body:
-
-```output
-"AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
-```
-
-Download user credentials again to access your cluster by following the steps in [access an Azure AD enabled cluster][access-cluster].
-
-## Upgrade to AKS-managed Azure AD integration
-
-If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration with no downtime using the [`az aks update`][az-aks-update] command.
-
-```azurecli-interactive
-az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
-```
-
-A successful migration of an AKS-managed Azure AD cluster has the following section in the response body:
-
-```output
-"AADProfile": {
- "adminGroupObjectIds": [
- "5d24****-****-****-****-****afa27aed"
- ],
- "clientAppId": null,
- "managed": true,
- "serverAppId": null,
- "serverAppSecret": null,
- "tenantId": "72f9****-****-****-****-****d011db47"
- }
-```
-
-In order to access the cluster, follow the steps in [access an Azure AD enabled cluster][access-cluster] to update kubeconfig.
-
-## Non-interactive sign in with kubelogin
-
-There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential.
-
-Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires [kubelogin](https://github.com/Azure/kubelogin) binary in the execution PATH. If you use the Azure CLI, it prompts you to download kubelogin. For non-Azure AD clusters, or Azure AD clusters where the version of Kubernetes is older than 1.24, there is no change in behavior. The version of kubeconfig installed continues to work.
-
-An optional query parameter named `format` is available when retrieving the clusterUser credential to overwrite the default behavior change. You can set the value to `azure` to use the original kubeconfig format.
-
-Example:
-
-```azurecli-interactive
-az aks get-credentials --format azure
-```
-
-For Azure AD integrated clusters using a version of Kubernetes newer than 1.24, it uses the kubelogin format automatically and no conversion is needed. For Azure AD integrated clusters running a version older than 1.24, you need to run the following commands to convert the kubeconfig format manually
-
-```azurecli-interactive
-export KUBECONFIG=/path/to/kubeconfig
-kubelogin convert-kubeconfig
-```
-
-## Disable local accounts
-
-When you deploy an AKS cluster, local accounts are enabled by default. Even when enabling RBAC or Azure AD integration, `--admin` access still exists as a non-auditable backdoor option. You can disable local accounts using the parameter `disable-local-accounts`. The `properties.disableLocalAccounts` field has been added to the managed cluster API to indicate whether the feature is enabled or not on the cluster.
-
-> [!NOTE]
->
-> * On clusters with Azure AD integration enabled, users assigned to an Azure AD administrators group specified by `aad-admin-group-object-ids` can still gain access using non-administrator credentials. On clusters without Azure AD integration enabled and `properties.disableLocalAccounts` set to `true`, any attempt to authenticate with user or admin credentials will fail.
->
-> * After disabling local user accounts on an existing AKS cluster where users might have authenticated with local accounts, the administrator must [rotate the cluster certificates](certificate-rotation.md) to revoke certificates they might have had access to. If this is a new cluster, no action is required.
-
-### Create a new cluster without local accounts
-
-Create a new AKS cluster without any local accounts using the [`az aks create`][az-aks-create] command with the `disable-local-accounts` flag.
-
-```azurecli-interactive
-az aks create -g <resource-group> -n <cluster-name> --enable-aad --aad-admin-group-object-ids <aad-group-id> --disable-local-accounts
-```
-
-In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to `true`.
-
-```output
-"properties": {
- ...
- "disableLocalAccounts": true,
- ...
-}
-```
-
-Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:
-
-```azurecli-interactive
-az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
-
-Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
-```
-
-### Disable local accounts on an existing cluster
-
-Disable local accounts on an existing Azure AD integration enabled AKS cluster using the [`az aks update`][az-aks-update] command with the `disable-local-accounts` parameter.
-
-```azurecli-interactive
-az aks update -g <resource-group> -n <cluster-name> --disable-local-accounts
-```
-
-In the output, confirm local accounts have been disabled by checking the field `properties.disableLocalAccounts` is set to `true`.
-
-```output
-"properties": {
- ...
- "disableLocalAccounts": true,
- ...
-}
-```
-
-Attempting to get admin credentials will fail with an error message indicating the feature is preventing access:
-
-```azurecli-interactive
-az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
-
-Operation failed with status: 'Bad Request'. Details: Getting static credential isn't allowed because this cluster is set to disable local accounts.
-```
-
-### Re-enable local accounts on an existing cluster
-
-AKS supports enabling a disabled local account on an existing cluster using the [`az aks update`][az-aks-update] command with the `enable-local-accounts` parameter.
-
-```azurecli-interactive
-az aks update -g <resource-group> -n <cluster-name> --enable-local-accounts
-```
-
-In the output, confirm local accounts have been re-enabled by checking the field `properties.disableLocalAccounts` is set to `false`.
-
-```output
-"properties": {
- ...
- "disableLocalAccounts": false,
- ...
-}
-```
-
-Attempting to get admin credentials will succeed:
-
-```azurecli-interactive
-az aks get-credentials --resource-group <resource-group> --name <cluster-name> --admin
-
-Merged "<cluster-name>-admin" as current context in C:\Users\<username>\.kube\config
-```
-
-## Use Conditional Access with Azure AD and AKS
-
-When integrating Azure AD with your AKS cluster, you can also use [Conditional Access][aad-conditional-access] to control access to your cluster.
-
-> [!NOTE]
-> Azure AD Conditional Access is an Azure AD Premium capability.
-
-Create an example Conditional Access policy to use with AKS:
-
-1. In the Azure portal, go to the **Azure Active Directory** page and select **Enterprise applications**.
-2. Select **Conditional Access** > **Policies** >**New policy**.
- :::image type="content" source="./media/managed-aad/conditional-access-new-policy.png" alt-text="Adding a Conditional Access policy":::
-3. Enter a name for the policy, for example **aks-policy**.
-4. Under **Assignments** select **Users and groups**. Choose the users and groups you want to apply the policy to. In this example, choose the same Azure AD group that has administrator access to your cluster.
- :::image type="content" source="./media/managed-aad/conditional-access-users-groups.png" alt-text="Selecting users or groups to apply the Conditional Access policy":::
-5. Under **Cloud apps or actions > Include**, select **Select apps**. Search for **Azure Kubernetes Service** and select **Azure Kubernetes Service AAD Server**.
- :::image type="content" source="./media/managed-aad/conditional-access-apps.png" alt-text="Selecting Azure Kubernetes Service AD Server for applying the Conditional Access policy":::
-6. Under **Access controls > Grant**, select **Grant access**, **Require device to be marked as compliant**, and **Require all the selected controls**.
- :::image type="content" source="./media/managed-aad/conditional-access-grant-compliant.png" alt-text="Selecting to only allow compliant devices for the Conditional Access policy":::
-7. Confirm your settings, set **Enable policy** to **On**, and then select **Create**.
- :::image type="content" source="./media/managed-aad/conditional-access-enable-policy.png" alt-text="Enabling the Conditional Access policy":::
-
-After creating the Conditional Access policy, verify it has been successfully listed:
-
-1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
- ```
-
-2. Follow the instructions to sign in.
-
-3. View the nodes in the cluster using the `kubectl get nodes` command.
-
- ```azurecli-interactive
- kubectl get nodes
- ```
-
-4. In the Azure portal, navigate to **Azure Active Directory** and select **Enterprise applications** > **Activity** > **Sign-ins**.
-
-5. Under the **Conditional Access** column you should see a status of **Success**. Select the event and then select **Conditional Access** tab. Your Conditional Access policy will be listed.
- :::image type="content" source="./media/managed-aad/conditional-access-sign-in-activity.png" alt-text="Screenshot that shows failed sign-in entry due to Conditional Access policy.":::
-
-## Configure just-in-time cluster access with Azure AD and AKS
-
-Another option for cluster access control is to use Privileged Identity Management (PIM) for just-in-time requests.
-
->[!NOTE]
-> PIM is an Azure AD Premium capability requiring a Premium P2 SKU. For more on Azure AD SKUs, see the [pricing guide][aad-pricing].
-
-Integrate just-in-time access requests with an AKS cluster using AKS-managed Azure AD integration:
-
-1. In the Azure portal, go to **Azure Active Directory** and select **Properties**.
-2. Note the value listed under **Tenant ID**. It will be referenced in a later step as `<tenant-id>`.
- :::image type="content" source="./media/managed-aad/jit-get-tenant-id.png" alt-text="In a web browser, the Azure portal screen for Azure Active Directory is shown with the tenant's ID highlighted.":::
-3. Select **Groups** > **New group**.
- :::image type="content" source="./media/managed-aad/jit-create-new-group.png" alt-text="Shows the Azure portal Active Directory groups screen with the 'New Group' option highlighted.":::
-4. Verify the group type **Security** is selected and specify a group name, such as **myJITGroup**. Under the option **Azure AD roles can be assigned to this group (Preview)**, select **Yes** and then select **Create**.
- :::image type="content" source="./media/managed-aad/jit-new-group-created.png" alt-text="Shows the Azure portal's new group creation screen.":::
-5. On the **Groups** page, select the group you just created and note the Object ID. It will be referenced in a later step as `<object-id>`.
- :::image type="content" source="./media/managed-aad/jit-get-object-id.png" alt-text="Shows the Azure portal screen for the just-created group, highlighting the Object Id":::
-6. Create the AKS cluster with AKS-managed Azure AD integration using the [`az aks create`][az-aks-create] command with the `--aad-admin-group-objects-ids` and `--aad-tenant-id parameters` and include the values noted in the steps earlier.
-
- ```azurecli-interactive
- az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <object-id> --aad-tenant-id <tenant-id>
- ```
-
-7. In the Azure portal, select **Activity** > **Privileged Access (Preview)** > **Enable Privileged Access**.
- :::image type="content" source="./media/managed-aad/jit-enabling-priv-access.png" alt-text="The Azure portal's Privileged access (Preview) page is shown, with 'Enable privileged access' highlighted":::
-8. To grant access, select **Add assignments**.
- :::image type="content" source="./media/managed-aad/jit-add-active-assignment.png" alt-text="The Azure portal's Privileged access (Preview) screen after enabling is shown. The option to 'Add assignments' is highlighted.":::
-9. From the **Select role** drop-down list, select the users and groups you want to grant cluster access. These assignments can be modified at any time by a group administrator. Then select **Next**.
- :::image type="content" source="./media/managed-aad/jit-adding-assignment.png" alt-text="The Azure portal's Add assignments Membership screen is shown, with a sample user selected to be added as a member. The option 'Next' is highlighted.":::
-10. Under **Assignment type**, select **Active** and then specify the desired duration. Provide a justification and then select **Assign**. For more information about assignment types, see [Assign eligibility for a privileged access group (preview) in Privileged Identity Management][aad-assignments].
- :::image type="content" source="./media/managed-aad/jit-set-active-assignment-details.png" alt-text="The Azure portal's Add assignments Setting screen is shown. An assignment type of 'Active' is selected and a sample justification has been given. The option 'Assign' is highlighted.":::
-
-Once the assignments have been made, verify just-in-time access is working by accessing the cluster:
-
-1. Get the user credentials to access the cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
-
- ```azurecli-interactive
- az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
- ```
-
-2. Follow the steps to sign in.
-
-3. Use the `kubectl get nodes` command to view the nodes in the cluster.
-
- ```azurecli-interactive
- kubectl get nodes
- ```
-
-4. Note the authentication requirement and follow the steps to authenticate. If successful, you should see an output similar to the following example output:
-
- ```output
- To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AAAAAAAAA to authenticate.
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-61156405-vmss000000 Ready agent 6m36s v1.18.14
- aks-nodepool1-61156405-vmss000001 Ready agent 6m42s v1.18.14
- aks-nodepool1-61156405-vmss000002 Ready agent 6m33s v1.18.14
- ```
-
-### Apply just-in-time access at the namespace level
-
-1. Integrate your AKS cluster with [Azure RBAC](manage-azure-rbac.md).
-2. Associate the group you want to integrate with just-in-time access with a namespace in the cluster using the [`az role assignment create`][az-role-assignment-create] command.
-
- ```azurecli-interactive
- az role assignment create --role "Azure Kubernetes Service RBAC Reader" --assignee <AAD-ENTITY-ID> --scope $AKS_ID/namespaces/<namespace-name>
- ```
-
-3. Associate the group you configured at the namespace level with PIM to complete the configuration.
-
-### Troubleshooting
-
-If `kubectl get nodes` returns an error similar to the following:
-
-```output
-Error from server (Forbidden): nodes is forbidden: User "aaaa11111-11aa-aa11-a1a1-111111aaaaa" cannot list resource "nodes" in API group "" at the cluster scope
-```
-
-Make sure the admin of the security group has given your account an *Active* assignment.
-
-## Next steps
-
-* Learn about [Azure RBAC integration for Kubernetes Authorization][azure-rbac-integration].
-* Learn about [Azure AD integration with Kubernetes RBAC][azure-ad-rbac].
-* Use [kubelogin](https://github.com/Azure/kubelogin) to access features for Azure authentication that aren't available in kubectl.
-* Learn more about [AKS and Kubernetes identity concepts][aks-concepts-identity].
-* Use [Azure Resource Manager (ARM) templates][aks-arm-template] to create AKS-managed Azure AD enabled clusters.
-
-<!-- LINKS - external -->
-[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
-[aad-pricing]: https://azure.microsoft.com/pricing/details/active-directory/
-
-<!-- LINKS - Internal -->
-[aad-conditional-access]: ../active-directory/conditional-access/overview.md
-[azure-rbac-integration]: manage-azure-rbac.md
-[aks-concepts-identity]: concepts-identity.md
-[azure-ad-rbac]: azure-ad-rbac.md
-[az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
-[az-group-create]: /cli/azure/group#az_group_create
-[open-id-connect]:../active-directory/develop/v2-protocols-oidc.md
-[access-cluster]: #access-an-azure-ad-enabled-cluster
-[aad-assignments]: ../active-directory/privileged-identity-management/groups-assign-member-owner.md#assign-an-owner-or-member-of-a-group
-[az-aks-update]: /cli/azure/aks#az_aks_update
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
aks Managed Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/managed-azure-ad.md
+
+ Title: AKS-managed Azure Active Directory integration
+description: Learn how to configure Azure AD for your Azure Kubernetes Service (AKS) clusters.
+ Last updated : 04/17/2023++++
+# AKS-managed Azure Active Directory integration
+
+AKS-managed Azure Active Directory (Azure AD) integration simplifies the Azure AD integration process. Previously, you were required to create a client and server app, and the Azure AD tenant had to grant Directory Read permissions. Now, the AKS resource provider manages the client and server apps for you.
+
+Cluster administrators can configure Kubernetes role-based access control (Kubernetes RBAC) based on a user's identity or directory group membership. Azure AD authentication is provided to AKS clusters with OpenID Connect. OpenID Connect is an identity layer built on top of the OAuth 2.0 protocol. For more information, see the [OpenID Connect documentation][open-id-connect].
+
+Learn more about the Azure AD integration flow in the [Azure AD documentation](concepts-identity.md#azure-ad-integration).
+
+## Limitations
+
+* AKS-managed Azure AD integration can't be disabled.
+* Changing an AKS-managed Azure AD integrated cluster to legacy Azure AD isn't supported.
+* Clusters without Kubernetes RBAC enabled aren't supported with AKS-managed Azure AD integration.
+
+## Before you begin
+
+* Make sure you have Azure CLI version 2.29.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`](https://github.com/Azure/kubelogin). The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. You'll experience authentication issues if you don't use the correct version.
+* If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3.
+* This article requires that you have an Azure AD group for your cluster. This group will be registered as an admin group on the cluster to grant admin permissions. If you don't have an existing Azure AD group, you can create one using the [`az ad group create`](/cli/azure/ad/group#az_ad_group_create) command (a minimal sketch follows this list).
+
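+For example, a minimal sketch of creating an admin group and capturing its object ID (the display name `myAKSAdminGroup` is illustrative):
+
+```azurecli-interactive
+# Create a security group and print only its object ID for later use.
+az ad group create --display-name myAKSAdminGroup --mail-nickname myAKSAdminGroup --query id -o tsv
+```
+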
+## Enable AKS-managed Azure AD integration on your AKS cluster
+
+### Create a new cluster
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ az group create --name myResourceGroup --location centralus
+ ```
+
+2. Create an AKS cluster and enable administration access for your Azure AD group using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ az aks create -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
+ ```
+
+ A successful creation of an AKS-managed Azure AD cluster has the following section in the response body:
+
+ ```output
+ "AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+ ```
+
+### Use an existing cluster
+
+* Enable AKS-managed Azure AD integration on your existing Kubernetes RBAC enabled cluster using the [`az aks update`][az-aks-update] command. Make sure to set your admin group to keep access on your cluster.
+
+ ```azurecli-interactive
+ az aks update -g MyResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id-1> [--aad-tenant-id <id>]
+ ```
+
+ A successful activation of an AKS-managed Azure AD cluster has the following section in the response body:
+
+ ```output
+ "AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+ ```
+
+### Upgrade a legacy Azure AD cluster to AKS-managed Azure AD integration
+
+* If your cluster uses legacy Azure AD integration, you can upgrade to AKS-managed Azure AD integration with no downtime using the [`az aks update`][az-aks-update] command.
+
+ ```azurecli-interactive
+ az aks update -g myResourceGroup -n myManagedCluster --enable-aad --aad-admin-group-object-ids <id> [--aad-tenant-id <id>]
+ ```
+
+ A successful migration of an AKS-managed Azure AD cluster has the following section in the response body:
+
+ ```output
+ "AADProfile": {
+ "adminGroupObjectIds": [
+ "5d24****-****-****-****-****afa27aed"
+ ],
+ "clientAppId": null,
+ "managed": true,
+ "serverAppId": null,
+ "serverAppSecret": null,
+ "tenantId": "72f9****-****-****-****-****d011db47"
+ }
+ ```
+
+## Access your AKS-managed Azure AD enabled cluster
+
+1. Get the user credentials to access your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myManagedCluster
+ ```
+
+2. Follow the instructions to sign in.
+
+3. View the nodes in the cluster using the `kubectl get nodes` command.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+## Non-interactive sign-in with kubelogin
+
+There are some non-interactive scenarios, such as continuous integration pipelines, that aren't currently available with `kubectl`. You can use [`kubelogin`](https://github.com/Azure/kubelogin) to connect to the cluster with a non-interactive service principal credential (a sketch follows the list below). Starting with Kubernetes version 1.24, the default format of the clusterUser credential for Azure AD clusters is `exec`, which requires the [`kubelogin`](https://github.com/Azure/kubelogin) binary to be in the execution PATH.
+
+* When getting the clusterUser credential, you can use the `format` query parameter to overwrite the default behavior change. You can set the value to `azure` to use the original kubeconfig format:
+
+ ```azurecli-interactive
+ az aks get-credentials --format azure
+ ```
+
+* Azure AD integrated clusters using a Kubernetes version newer than 1.24 automatically use the `kubelogin` format.
+
+* If your Azure AD integrated clusters use a Kubernetes version older than 1.24, you need to convert the kubeconfig format manually.
+
+ ```azurecli-interactive
+ export KUBECONFIG=/path/to/kubeconfig
+ kubelogin convert-kubeconfig
+ ```
+
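+For fully non-interactive use, you can convert the kubeconfig to service principal login (a sketch; `kubelogin` reads the credentials from the environment variables shown):
+
+```azurecli-interactive
+export KUBECONFIG=/path/to/kubeconfig
+# Switch the kubeconfig to service principal (spn) login mode.
+kubelogin convert-kubeconfig -l spn
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<client-id>
+export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<client-secret>
+kubectl get nodes
+```
+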
+## Troubleshoot access issues with AKS-managed Azure AD
+
+> [!IMPORTANT]
+> The steps described in this section bypass the normal Azure AD group authentication. Use them only in an emergency.
+
+If you're permanently blocked by not having access to a valid Azure AD group with access to your cluster, you can still get admin credentials to directly access the cluster. You need to have access to the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) built-in role.
+
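+For example:
+
+```azurecli-interactive
+az aks get-credentials --resource-group myResourceGroup --name myManagedCluster --admin
+```
+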
+## Next steps
+
+* Learn about [Azure AD integration with Kubernetes RBAC][azure-ad-rbac].
+* Learn more about [AKS and Kubernetes identity concepts][aks-concepts-identity].
+* Use [Azure Resource Manager (ARM) templates][aks-arm-template] to create AKS-managed Azure AD enabled clusters.
+
+<!-- LINKS - external -->
+[aks-arm-template]: /azure/templates/microsoft.containerservice/managedclusters
+
+<!-- LINKS - Internal -->
+[aks-concepts-identity]: concepts-identity.md
+[azure-ad-rbac]: azure-ad-rbac.md
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-group-create]: /cli/azure/group#az_group_create
+[open-id-connect]:../active-directory/develop/v2-protocols-oidc.md
+[az-aks-update]: /cli/azure/aks#az_aks_update
aks Operator Best Practices Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-identity.md
For more information about cluster operations in AKS, see the following best pra
<!-- INTERNAL LINKS -->
[aks-concepts-identity]: concepts-identity.md
-[azure-ad-integration]: managed-aad.md
+[azure-ad-integration]: managed-azure-ad.md
[aks-aad]: azure-ad-integration-cli.md
[managed-identities]: ../active-directory/managed-identities-azure-resources/overview.md
[aks-best-practices-scheduler]: operator-best-practices-scheduler.md
aks Outbound Rules Control Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/outbound-rules-control-egress.md
+
+ Title: Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters
+description: Learn what ports and addresses are required to control egress traffic in Azure Kubernetes Service (AKS)
+++ Last updated : 03/10/2023++
+#Customer intent: As a cluster operator, I want to learn the network and FQDN rules to control egress traffic and improve security for my AKS clusters.
++
+# Outbound network and FQDN rules for Azure Kubernetes Service (AKS) clusters
+
+This article provides the necessary details that allow you to secure outbound traffic from your Azure Kubernetes Service (AKS) cluster. It contains the cluster requirements for a base AKS deployment and additional requirements for optional add-ons and features. You can apply this information to any outbound restriction method or appliance.
+
+To see an example configuration using Azure Firewall, visit [Control egress traffic using Azure Firewall in AKS](limit-egress-traffic.md).
+
+## Background
+
+AKS clusters are deployed on a virtual network. This network can either be customized and pre-configured by you or it can be created and managed by AKS. In either case, the cluster has **outbound**, or egress, dependencies on services outside of the virtual network.
+
+For management and operational purposes, nodes in an AKS cluster need to access certain ports and fully qualified domain names (FQDNs). These endpoints are required for the nodes to communicate with the API server or to download and install core Kubernetes cluster components and node security updates. For example, the cluster needs to pull base system container images from Microsoft Container Registry (MCR).
+
+The AKS outbound dependencies are almost entirely defined with FQDNs, which don't have static addresses behind them. The lack of static addresses means you can't use network security groups (NSGs) to lock down the outbound traffic from an AKS cluster.
+
+By default, AKS clusters have unrestricted outbound internet access. This level of network access allows nodes and services you run to access external resources as needed. If you wish to restrict egress traffic, a limited number of ports and addresses must remain accessible to maintain healthy cluster maintenance tasks. The simplest solution to securing outbound addresses is to use a firewall device that can control outbound traffic based on domain names. Azure Firewall can restrict outbound HTTP and HTTPS traffic based on the FQDN of the destination, as shown in the sketch below. You can also configure your preferred firewall and security rules to allow these required ports and addresses.
+
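+For example, with Azure Firewall you can allow the required AKS FQDNs in a single application rule by using the `AzureKubernetesService` FQDN tag (a sketch, reusing the `$RG` and `$FWNAME` variables from the linked example configuration):
+
+```azurecli
+# Allow HTTP/HTTPS egress to all FQDNs covered by the AzureKubernetesService tag.
+az network firewall application-rule create -g $RG -f $FWNAME \
+    --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' \
+    --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" \
+    --action allow --priority 100
+```
+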
+> [!IMPORTANT]
+>
+> This document covers only how to lock down the traffic leaving the AKS subnet. AKS has no ingress requirements by default. Blocking **internal subnet traffic** using network security groups (NSGs) and firewalls isn't supported. To control and block the traffic within the cluster, see [Secure traffic between pods using network policies in AKS][use-network-policies].
+
+## Required outbound network rules and FQDNs for AKS clusters
+
+The following network and FQDN/application rules are required for an AKS cluster. You can use them if you wish to configure a solution other than Azure Firewall.
+
+* IP address dependencies are for non-HTTP/S traffic (both TCP and UDP traffic).
+* FQDN HTTP/HTTPS endpoints can be placed in your firewall device.
+* Wildcard HTTP/HTTPS endpoints are dependencies that can vary with your AKS cluster based on a number of qualifiers.
+* AKS uses an admission controller to inject the FQDN as an environment variable into all deployments under kube-system and gatekeeper-system. This ensures all system communication between nodes and the API server uses the API server FQDN and not the API server IP.
+* If you have an app or solution that needs to talk to the API server, you must add an **additional** network rule to allow **TCP communication to port 443 of your API server's IP** (see the sketch after this list for one way to find this IP).
+* On rare occasions, if there's a maintenance operation, your API server IP might change. Planned maintenance operations that can change the API server IP are always communicated in advance.
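+
+As a sketch of how you might find that IP after cluster creation (the cluster and resource group names are placeholders), resolve the API server FQDN:
+
+```azurecli-interactive
+# Look up the API server FQDN for the cluster, then resolve it to its current public IP
+API_FQDN=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query fqdn --output tsv)
+nslookup "$API_FQDN"
+```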
+
+### Azure Global required network rules
+
+| Destination Endpoint | Protocol | Port | Use |
+|-|-|-|-|
+| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. This isn't required for [private clusters][private-clusters], or for clusters with the *konnectivity-agent* enabled. |
+| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. This isn't required for [private clusters][private-clusters], or for clusters with the *konnectivity-agent* enabled. |
+| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. This isn't required for nodes provisioned after March 2021. |
+| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
+| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if you're running pods/deployments that access the API Server; those pods/deployments use the API server IP. This port isn't required for [private clusters][private-clusters]. |
+
+### Azure Global required FQDN / application rules
+
+| Destination FQDN | Port | Use |
+|-|--|-|
+| **`*.hcp.<location>.azmk8s.io`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. This is required for clusters with *konnectivity-agent* enabled. Konnectivity also uses Application-Layer Protocol Negotiation (ALPN) to communicate between agent and server. Blocking or rewriting the ALPN extension will cause a failure. This isn't required for [private clusters][private-clusters]. |
+| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
+| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
+| **`management.azure.com`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
+| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
+| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
+| **`acs-mirror.azureedge.net`** | **`HTTPS:443`** | This address is for the repository required to download and install required binaries like kubenet and Azure CNI. |
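+
+To spot-check that one of these endpoints is resolvable from inside the cluster, you can run a short-lived pod; the pod name and *busybox* image here are arbitrary choices, not AKS requirements:
+
+```azurecli-interactive
+# Run a throwaway pod that resolves a required FQDN, then is deleted
+kubectl run egress-dns-check --image=busybox:1.36 --restart=Never --rm -it -- nslookup mcr.microsoft.com
+```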
+
+### Azure China 21Vianet required network rules
++
+| Destination Endpoint | Protocol | Port | Use |
+|-|-|-|-|
+| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.Region:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. |
+| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. |
+| **`*:22`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:22`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:22`** <br/> *Or* <br/> **`APIServerPublicIP:22`** `(only known after cluster creation)` | TCP | 22 | For tunneled secure communication between the nodes and the control plane. |
+| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. |
+| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
+| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if you're running pods/deployments that access the API Server; those pods/deployments use the API server IP. |
+
+### Azure China 21Vianet required FQDN / application rules
+
+| Destination FQDN | Port | Use |
+|-|--|-|
+| **`*.hcp.<location>.cx.prod.service.azk8s.cn`**| **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. |
+| **`*.tun.<location>.cx.prod.service.azk8s.cn`**| **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed. |
+| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
+| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure Content Delivery Network (CDN). |
+| **`management.chinacloudapi.cn`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
+| **`login.chinacloudapi.cn`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
+| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
+| **`*.azk8s.cn`** | **`HTTPS:443`** | This address is for the repository required to download and install required binaries like kubenet and Azure CNI. |
+
+### Azure US Government required network rules
+
+| Destination Endpoint | Protocol | Port | Use |
+|-|-|-|-|
+| **`*:1194`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:1194`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:1194`** <br/> *Or* <br/> **`APIServerPublicIP:1194`** `(only known after cluster creation)` | UDP | 1194 | For tunneled secure communication between the nodes and the control plane. |
+| **`*:9000`** <br/> *Or* <br/> [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureCloud.<Region>:9000`** <br/> *Or* <br/> [Regional CIDRs](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) - **`RegionCIDRs:9000`** <br/> *Or* <br/> **`APIServerPublicIP:9000`** `(only known after cluster creation)` | TCP | 9000 | For tunneled secure communication between the nodes and the control plane. |
+| **`*:123`** or **`ntp.ubuntu.com:123`** (if using Azure Firewall network rules) | UDP | 123 | Required for Network Time Protocol (NTP) time synchronization on Linux nodes. |
+| **`CustomDNSIP:53`** `(if using custom DNS servers)` | UDP | 53 | If you're using custom DNS servers, you must ensure they're accessible by the cluster nodes. |
+| **`APIServerPublicIP:443`** `(if running pods/deployments that access the API Server)` | TCP | 443 | Required if you're running pods/deployments that access the API Server; those pods/deployments use the API server IP. |
+
+### Azure US Government required FQDN / application rules
+
+| Destination FQDN | Port | Use |
+||--|-|
+| **`*.hcp.<location>.cx.aks.containerservice.azure.us`** | **`HTTPS:443`** | Required for Node <-> API server communication. Replace *\<location\>* with the region where your AKS cluster is deployed.|
+| **`mcr.microsoft.com`** | **`HTTPS:443`** | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images/charts (for example, coreDNS, etc.). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
+| **`*.data.mcr.microsoft.com`** | **`HTTPS:443`** | Required for MCR storage backed by the Azure content delivery network (CDN). |
+| **`management.usgovcloudapi.net`** | **`HTTPS:443`** | Required for Kubernetes operations against the Azure API. |
+| **`login.microsoftonline.us`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
+| **`packages.microsoft.com`** | **`HTTPS:443`** | This address is the Microsoft packages repository used for cached *apt-get* operations. Example packages include Moby, PowerShell, and Azure CLI. |
+| **`acs-mirror.azureedge.net`** | **`HTTPS:443`** | This address is for the repository required to install required binaries like kubenet and Azure CNI. |
+
+## Optional recommended FQDN / application rules for AKS clusters
+
+The following FQDN / application rules aren't required, but are recommended for AKS clusters:
+
+| Destination FQDN | Port | Use |
+|--|--|-|
+| **`security.ubuntu.com`, `azure.archive.ubuntu.com`, `changelogs.ubuntu.com`** | **`HTTP:80`** | These addresses let the Linux cluster nodes download the required security patches and updates. |
+
+If you choose to block/not allow these FQDNs, the nodes will only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that node image upgrades also come with updated packages including security fixes.
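+
+If you block these FQDNs, a node image upgrade is the way to pick up OS security fixes. A minimal sketch, assuming placeholder cluster and node pool names:
+
+```azurecli-interactive
+# Upgrade only the node image (no Kubernetes version change) for one node pool
+az aks nodepool upgrade --resource-group myResourceGroup --cluster-name myAKSCluster --name nodepool1 --node-image-only
+```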
+
+## GPU enabled AKS clusters required FQDN / application rules
+
+| Destination FQDN | Port | Use |
+|--|--|-|
+| **`nvidia.github.io`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
+| **`us.download.nvidia.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
+| **`download.docker.com`** | **`HTTPS:443`** | This address is used for correct driver installation and operation on GPU-based nodes. |
+
+## Windows Server based node pools required FQDN / application rules
+
+| Destination FQDN | Port | Use |
+|-|--|-|
+| **`onegetcdn.azureedge.net, go.microsoft.com`** | **`HTTPS:443`** | To install Windows-related binaries |
+| **`*.mp.microsoft.com, www.msftconnecttest.com, ctldl.windowsupdate.com`** | **`HTTP:80`** | To install Windows-related binaries |
+
+If you choose to block/not allow these FQDNs, the nodes will only receive OS updates when you do a [node image upgrade](node-image-upgrade.md) or [cluster upgrade](upgrade-cluster.md). Keep in mind that node image upgrades also come with updated packages including security fixes.
+
+## AKS addons and integrations
+
+### Microsoft Defender for Containers
+
+#### Required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`login.microsoftonline.com`** | **`HTTPS:443`** | Required for Azure Active Directory authentication. |
+| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | Required for Microsoft Defender to upload security events to the cloud.|
+| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | Required to authenticate with Log Analytics workspaces. |
+
+### CSI Secret Store
+
+#### Required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`vault.azure.net`** | **`HTTPS:443`** | Required for the CSI Secret Store addon pods to talk to the Azure Key Vault server. |
+
+### Azure Monitor for containers
+
+There are two options to provide access to Azure Monitor for containers:
+
+- Allow the Azure Monitor [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags).
+- Provide access to the required FQDN/application rules.
+
+#### Required network rules
+
+| Destination Endpoint | Protocol | Port | Use |
+|-|-|-|-|
+| [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - **`AzureMonitor:443`** | TCP | 443 | This endpoint is used to send metrics data and logs to Azure Monitor and Log Analytics. |
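+
+If you're using Azure Firewall, a rule like the following minimal sketch covers the service tag option; the firewall, resource group, and collection names are placeholders, reusing the same assumptions as the earlier firewall sketch:
+
+```azurecli-interactive
+# Allow TCP 443 to the AzureMonitor service tag through an existing Azure Firewall
+az network firewall network-rule create \
+    --resource-group myResourceGroup \
+    --firewall-name myFirewall \
+    --collection-name aksfwnr \
+    --name allow-azuremonitor \
+    --protocols TCP \
+    --source-addresses '*' \
+    --destination-addresses AzureMonitor \
+    --destination-ports 443 \
+    --action Allow \
+    --priority 200
+```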
+
+#### Required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | This endpoint is used for metrics and monitoring telemetry using Azure Monitor. |
+| **`*.ods.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by Azure Monitor for ingesting log analytics data. |
+| **`*.oms.opinsights.azure.com`** | **`HTTPS:443`** | This endpoint is used by omsagent to authenticate with the Log Analytics service. |
+| **`*.monitoring.azure.com`** | **`HTTPS:443`** | This endpoint is used to send metrics data to Azure Monitor. |
+
+### Azure Policy
+
+#### Required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`data.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to the policy service. |
+| **`store.policy.core.windows.net`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
+| **`dc.services.visualstudio.com`** | **`HTTPS:443`** | The Azure Policy add-on sends telemetry data to the Application Insights endpoint. |
+
+#### Azure China 21Vianet required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`data.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to the policy service. |
+| **`store.policy.azure.cn`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
+
+#### Azure US Government required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`data.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Kubernetes policies and to report cluster compliance status to the policy service. |
+| **`store.policy.azure.us`** | **`HTTPS:443`** | This address is used to pull the Gatekeeper artifacts of built-in policies. |
+
+## Cluster extensions
+
+### Required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`<region>.dp.kubernetesconfiguration.azure.com`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service.|
+| **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on the AKS cluster.|
+
+#### Azure US Government required FQDN / application rules
+
+| FQDN | Port | Use |
+|--|--|-|
+| **`<region>.dp.kubernetesconfiguration.azure.us`** | **`HTTPS:443`** | This address is used to fetch configuration information from the Cluster Extensions service and report extension status to the service. |
+| **`mcr.microsoft.com, *.data.mcr.microsoft.com`** | **`HTTPS:443`** | This address is required to pull container images for installing cluster extension agents on the AKS cluster.|
+
+> [!NOTE]
+>
+> For any addons that aren't explicitly stated here, the core requirements cover them.
+
+## Next steps
+
+In this article, you learned what ports and addresses to allow if you want to restrict egress traffic for the cluster.
+
+If you want to restrict how pods communicate with each other and apply East-West traffic restrictions within your cluster, see [Secure traffic between pods using network policies in AKS][use-network-policies].
+
+<!-- LINKS - internal -->
+
+[private-clusters]: ./private-clusters.md
+[use-network-policies]: ./use-network-policies.md
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes history](https://en.wikipedia.org/
| K8s version | Upstream release | AKS preview | AKS GA | End of life |
|--|--|--|--|--|
-| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | Dec 2022 |
-| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023 |
| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 |
| 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 |
aks Trusted Access Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md
This article shows you how to enable secure access from your Azure services to your AKS clusters using the Trusted Access feature.
## Trusted Access feature overview
-Trusted Access enables you to give explicit consent to your system-assigned MSI of allowed resources to access your AKS clusters using an Azure resource *RoleBinding*. Your Azure resources access AKS clusters through the AKS regional gateway via system-assigned managed identity authentication with the appropriate Kubernetes permissions via an Azure resource *Role*. The Trusted Access feature allows you to access AKS clusters with different configurations, including but not limited to [private clusters](private-clusters.md), [clusters with local accounts disabled](managed-aad.md#disable-local-accounts), [Azure AD clusters](azure-ad-integration-cli.md), and [authorized IP range clusters](api-server-authorized-ip-ranges.md).
+Trusted Access enables you to give explicit consent to the system-assigned managed identity (MSI) of allowed resources to access your AKS clusters using an Azure resource *RoleBinding*. Your Azure resources access AKS clusters through the AKS regional gateway via system-assigned managed identity authentication with the appropriate Kubernetes permissions via an Azure resource *Role*. The Trusted Access feature allows you to access AKS clusters with different configurations, including but not limited to [private clusters](private-clusters.md), [clusters with local accounts disabled](manage-local-accounts-managed-azure-ad.md#disable-local-accounts), [Azure AD clusters](azure-ad-integration-cli.md), and [authorized IP range clusters](api-server-authorized-ip-ranges.md).
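+
+As a minimal sketch of what granting that consent looks like with the `aks-preview` Azure CLI extension, the following creates a Trusted Access role binding; the binding name, cluster, source resource ID, and role string are all placeholders you'd replace with your own values:
+
+```azurecli-interactive
+# Bind an allowed Azure resource's system-assigned identity to a role on the cluster
+az aks trustedaccess rolebinding create \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name myTrustedAccessBinding \
+    --source-resource-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.MachineLearningServices/workspaces/myWorkspace \
+    --roles Microsoft.MachineLearningServices/workspaces/mlworkload
+```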
## Prerequisites
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
Title: Upgrade an Azure Kubernetes Service (AKS) cluster
description: Learn how to upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates. Previously updated : 12/17/2020 Last updated : 04/21/2023

# Upgrade an Azure Kubernetes Service (AKS) cluster
Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version.
For AKS clusters that use multiple node pools or Windows Server nodes, see [Upgrade a node pool in AKS][nodepool-upgrade]. To upgrade a specific node pool without doing a Kubernetes cluster upgrade, see [Upgrade a specific node pool][specific-nodepool].
-> [!NOTE]
-> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release, and can be determined by visiting the [AKS release tracker][release-tracker].
+## Kubernetes version upgrades
+
+When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. You must perform all upgrades sequentially by minor version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* isn't allowed.
+
+Skipping multiple versions can only be done when upgrading from an *unsupported version* back to a *supported version*. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an *unsupported version* that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, we recommend you recreate your cluster.
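+
+To plan the sequential upgrade path, it can help to first list the versions and allowed upgrade hops available in your region; *eastus* is just an example region:
+
+```azurecli-interactive
+# Show Kubernetes versions (and their allowed upgrades) available in a region
+az aks get-versions --location eastus --output table
+```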
> [!NOTE]
-> Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations]
+> Any upgrade operation, whether performed manually or automatically, will upgrade the node image version if not already on the latest. The latest version is contingent on a full AKS release and can be determined by visiting the [AKS release tracker][release-tracker].
## Before you begin

* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.34.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
* If you're using Azure PowerShell, this tutorial requires that you're running Azure PowerShell version 5.9.0 or later. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][azure-powershell-install].
+* Performing upgrade operations requires the `Microsoft.ContainerService/managedClusters/agentPools/write` RBAC role. For more on Azure RBAC roles, see the [Azure resource provider operations]
> [!WARNING]
-> An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md)
+> An AKS cluster upgrade triggers a cordon and drain of your nodes. If you have a low compute quota available, the upgrade may fail. For more information, see [increase quotas](../azure-portal/supportability/regional-quota-requests.md).
## Check for available AKS cluster upgrades

### [Azure CLI](#tab/azure-cli)
-To check which Kubernetes releases are available for your cluster, use the [az aks get-upgrades][az-aks-get-upgrades] command. The following example checks for available upgrades to *myAKSCluster* in *myResourceGroup*:
+Check which Kubernetes releases are available for your cluster using the [`az aks get-upgrades`][az-aks-get-upgrades] command.
```azurecli-interactive
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
```
-> [!NOTE]
-> When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed.
->
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
- The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
-```console
+```output
Name     ResourceGroup    MasterVersion  Upgrades
-------  ---------------  -------------  ---------------
default  myResourceGroup  1.18.10        1.19.1, 1.19.3
```
-The following example output means that the appservice-kube extension isn't compatible with your Azure CLI version (a minimum of version 2.34.1 is required):
-
-```console
-The 'appservice-kube' extension is not compatible with this version of the CLI.
-You have CLI core version 2.0.81 and this extension requires a min of 2.34.1.
-Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
-```
-
-If you receive this output, you need to update your Azure CLI version. The `az upgrade` command was added in version 2.11.0 and doesn't work with versions prior to 2.11.0. Older versions can be updated by reinstalling Azure CLI as described in [Install the Azure CLI](/cli/azure/install-azure-cli). If your Azure CLI version is 2.11.0 or later, you'll receive a message to run `az upgrade` to upgrade Azure CLI to the latest version.
-
-If your Azure CLI is updated and you receive the following example output, it means that no upgrades are available:
-
-```console
-ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
-```
-
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
- ### [Azure PowerShell](#tab/azure-powershell)
-To check which Kubernetes releases are available for your cluster, use the [Get-AzAksUpgradeProfile][get-azaksupgradeprofile] command. The following example checks for available upgrades to *myAKSCluster* in *myResourceGroup*:
+Check which Kubernetes releases are available for your cluster using the [`Get-AzAksUpgradeProfile`][get-azaksupgradeprofile] command.
```azurepowershell-interactive
- Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
- Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade |
- Format-Table -Property *
+Get-AzAksUpgradeProfile -ResourceGroupName myResourceGroup -ClusterName myAKSCluster |
+Select-Object -Property Name, ControlPlaneProfileKubernetesVersion -ExpandProperty ControlPlaneProfileUpgrade |
+Format-Table -Property *
```
-> [!NOTE]
-> When you upgrade a supported AKS cluster, Kubernetes minor versions can't be skipped. All upgrades must be performed sequentially by major version number. For example, upgrades between *1.14.x* -> *1.15.x* or *1.15.x* -> *1.16.x* are allowed, however *1.14.x* -> *1.16.x* is not allowed.
->
-> Skipping multiple versions can only be done when upgrading from an _unsupported version_ back to a _supported version_. For example, an upgrade from an unsupported *1.10.x* -> a supported *1.15.x* can be completed if available. When performing an upgrade from an _unsupported version_ that skips two or more minor versions, the upgrade is performed without any guarantee of functionality and is excluded from the service-level agreements and limited warranty. If your version is significantly out of date, it's recommended to re-create the cluster.
- The following example output shows that the cluster can be upgraded to versions *1.19.1* and *1.19.3*:
-```Output
-Name ControlPlaneProfileKubernetesVersion IsPreview KubernetesVersion
-- --
-default 1.18.10 1.19.1
-default 1.18.10 1.19.3
+```output
+Name ControlPlaneProfileKubernetesVersion IsPreview KubernetesVersion
+-------  ------------------------------------  ---------  -----------------
+default 1.18.10 1.19.1
+default 1.18.10 1.19.3
```
-If no upgrade is available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
- ### [Azure portal](#tab/azure-portal)
-To check which Kubernetes releases are available for your cluster:
+Check which Kubernetes releases are available for your cluster using the following steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Navigate to your AKS cluster.
3. Under **Settings**, select **Cluster configuration**.
-4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
+4. In **Kubernetes version**, select **Upgrade version**.
5. In **Kubernetes version**, select the version to check for available upgrades.
-If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
-
-The Azure portal also highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
+The Azure portal highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
:::image type="content" source="./media/upgrade-cluster/portal-upgrade.png" alt-text="The screenshot of the upgrade blade for an AKS cluster in the Azure portal. The automatic upgrade field shows 'patch' selected, and several APIs deprecated between the selected Kubernetes version and the cluster's current version are described.":::
-## Stop cluster upgrades automatically on API breaking changes (Preview)
+### Troubleshoot AKS cluster upgrade error messages
+### [Azure CLI](#tab/azure-cli)
-To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
+The following example output means the `appservice-kube` extension isn't compatible with your Azure CLI version (a minimum of version 2.34.1 is required):
-AKS now automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
+```output
+The 'appservice-kube' extension is not compatible with this version of the CLI.
+You have CLI core version 2.0.81 and this extension requires a min of 2.34.1.
+Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
+```
-All of the following criteria must be met in order for the stop to occur:
+If you receive this output, you need to update your Azure CLI version. The `az upgrade` command was added in version 2.11.0 and doesn't work with versions prior to 2.11.0. You can update older versions by reinstalling Azure CLI as described in [Install the Azure CLI](/cli/azure/install-azure-cli). If your Azure CLI version is 2.11.0 or later, you receive a message to run `az upgrade` to upgrade Azure CLI to the latest version.
-* The upgrade operation is a Kubernetes minor version change for the cluster control plane
+If your Azure CLI is updated and you receive the following example output, it means that no upgrades are available:
-* The Kubernetes version you are upgrading to is 1.26 or later
+```output
+ERROR: Table output unavailable. Use the --query option to specify an appropriate query. Use --debug for more info.
+```
-* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `az aks get-upgrades` shows that no upgrades are available.
-* If performed via Azure CLI, the `aks-preview` CLI extension 0.5.134 or later must be installed
+### [Azure PowerShell](#tab/azure-powershell)
-* The last seen usage seen of deprecated APIs for the targeted version you are upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when `Get-AzAksUpgradeProfile` shows that no upgrades are available.
-If all of these criteria are true when you attempt an upgrade, you'll receive an error message similar to the following example:
+### [Azure portal](#tab/azure-portal)
-```output
-Bad Request({
- "code": "ValidationError",
- "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
- "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
-})
-```
+If no upgrades are available, create a new cluster with a supported version of Kubernetes and migrate your workloads from the existing cluster to the new cluster. It's not supported to upgrade a cluster to a newer Kubernetes version when no upgrades are available.
-### Mitigating stopped upgrade operations
+
-After receiving the error message, you have two options to mitigate the issue:
+## Upgrade an AKS cluster
-#### Remove usage of deprecated APIs (recommended)
+During the cluster upgrade process, AKS performs the following operations:
-To remove usage of deprecated APIs, follow these steps:
+* Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
+* [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it [cordons and drains][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
+* When the old node is fully drained, it's reimaged to receive the new version and becomes the buffer node for the following node to be upgraded.
+* This process repeats until all nodes in the cluster have been upgraded.
+* At the end of the process, the last buffer node is deleted, maintaining the existing agent node count and zone balance.
-1. Remove the deprecated API, which is listed in the error message. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**. You can find recent usages detected under the **Known Issues, Availability and Performance** category by navigating to **Selected Kubernetes API deprecations** on the left-hand side. You can also check past API usage by enabling [container insights][container-insights] and exploring kube audit logs.
- :::image type="content" source="./media/upgrade-cluster/applens-api-detection-inline.png" lightbox="./media/upgrade-cluster/applens-api-detection-full.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
+> [!IMPORTANT]
+> Ensure that any `PodDisruptionBudgets` (PDBs) allow for at least *one* pod replica to be moved at a time; otherwise, the drain/evict operation fails.
+> If the drain operation fails, the upgrade operation fails by design to ensure that the applications aren't disrupted. Correct what caused the operation to stop (incorrect PDBs, lack of quota, and so on), and then retry the operation.
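+
+As a quick pre-upgrade check, you can list PDBs across namespaces and look for any with an allowed disruptions value of zero, which blocks the drain:
+
+```azurecli-interactive
+# A PDB showing 0 in ALLOWED DISRUPTIONS will block node drains during the upgrade
+kubectl get pdb --all-namespaces
+```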
-2. Wait 12 hours from the time the last deprecated API usage was seen.
+### [Azure CLI](#tab/azure-cli)
-3. Retry your cluster upgrade.
+1. Upgrade your cluster using the [`az aks upgrade`][az-aks-upgrade] command.
-#### Bypass validation to ignore API changes
+ ```azurecli-interactive
+ az aks upgrade \
+ --resource-group myResourceGroup \
+ --name myAKSCluster \
+ --kubernetes-version KUBERNETES_VERSION
+ ```
-To bypass validation to ignore API breaking changes, set the property `upgrade-settings` to `IgnoreKubernetesDeprecations`. You will need to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method is not recommended, as deprecated APIs in the targeted Kubernetes version may not work at all long term. It is advised to remove them as soon as possible after the upgrade completes.
+2. Confirm the upgrade was successful using the [`az aks show`][az-aks-show] command.
-```azurecli-interactive
-az aks update --name myAKSCluster --resource-group myResourceGroup --upgrade-settings IgnoreKubernetesDeprecations --upgrade-override-until 2023-04-01T13:00:00Z
-```
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --output table
+ ```
-The `upgrade-override-until` property is used to define the end of the window during which validation will be bypassed. If no value is set, it will default the window to three days from the current time. The date and time you specify must be in the future.
+ The following example output shows that the cluster now runs *1.19.1*:
-> [!NOTE]
-> `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
+ ```output
+ Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
+    ------------  ----------  ---------------  -----------------  -----------------  ----------------------------------------------
+ myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+ ```
-After a successful override, performing an upgrade operation will ignore any deprecated API usage for the targeted version.
+### [Azure PowerShell](#tab/azure-powershell)
-## Customize node surge upgrade
+1. Upgrade your cluster using the [`Set-AzAksCluster`][set-azakscluster] command.
-> [!IMPORTANT]
-> Node surges require subscription quota for the requested max surge count for each upgrade operation. For example, a cluster that has 5 node pools, each with a count of 4 nodes, has a total of 20 nodes. If each node pool has a max surge value of 50%, additional compute and IP quota of 10 nodes (2 nodes * 5 pools) is required to complete the upgrade.
->
-> If using Azure CNI, validate there are available IPs in the subnet as well to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
+ ```azurepowershell-interactive
+ Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
+ ```
-By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings will enable AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value may be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. By increasing the max surge value, the upgrade process completes faster, but setting a large value for max surge may cause disruptions during the upgrade process.
+2. Confirm the upgrade was successful using the [`Get-AzAksCluster`][get-azakscluster] command.
-For example, a max surge value of 100% provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You may wish to use a higher value such as this for testing environments. For production node pools, we recommend a max_surge setting of 33%.
+ ```azurepowershell-interactive
+ Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Format-Table -Property Name, Location, KubernetesVersion, ProvisioningState, Fqdn
+ ```
-AKS accepts both integer values and a percentage value for max surge. An integer such as "5" indicates five extra nodes to surge. A value of "50%" indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of 1% and a maximum of 100%. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
+ The following example output shows that the cluster now runs *1.19.1*:
-During an upgrade, the max surge value can be a minimum of 1 and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge won't be higher than the number of nodes in the pool at the time of upgrade.
+ ```output
+ Name Location KubernetesVersion ProvisioningState Fqdn
+    ------------  ----------  -----------------  -----------------  ----------------------------------------------
+ myAKSCluster eastus 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
+ ```
-> [!IMPORTANT]
-> The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
+### [Azure portal](#tab/azure-portal)
-Use the following commands to set max surge values for new or existing node pools.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Navigate to your AKS cluster.
+3. Under **Settings**, select **Cluster configuration**.
+4. In **Kubernetes version**, select **Upgrade version**.
+5. In **Kubernetes version**, select your desired version and then select **Save**.
+6. Navigate to your AKS cluster **Overview** page, and select the **Kubernetes version** to confirm the upgrade was successful.
-```azurecli-interactive
-# Set max surge for a new node pool
-az aks nodepool add -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 33%
-```
+The Azure portal highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API removal and deprecation process][k8s-deprecation].
-```azurecli-interactive
-# Update max surge for an existing node pool
-az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 5
-```
-## Upgrade an AKS cluster
+
-### [Azure CLI](#tab/azure-cli)
+## View the upgrade events
-With a list of available versions for your AKS cluster, use the [az aks upgrade][az-aks-upgrade] command to upgrade. During the upgrade process, AKS will:
+When you upgrade your cluster, the following Kubernetes events may occur on each node:
-- Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
-- [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
-- When the old node is fully drained, it will be reimaged to receive the new version, and it will become the buffer node for the following node to be upgraded.
-- This process repeats until all nodes in the cluster have been upgraded.
-- At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance.
+* **Surge**: Creates a surge node.
+* **Drain**: Evicts pods from the node. Each pod has a 30-second timeout to complete the eviction.
+* **Update**: Update of a node succeeds or fails.
+* **Delete**: Deletes a surge node.
+Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
```azurecli-interactive
-az aks upgrade \
- --resource-group myResourceGroup \
- --name myAKSCluster \
- --kubernetes-version KUBERNETES_VERSION
+kubectl get events
```
-It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+The following example output shows some of the above events listed during an upgrade.
-> [!IMPORTANT]
-> Ensure that any `PodDisruptionBudgets` (PDBs) allow for at least 1 pod replica to be moved at a time otherwise the drain/evict operation will fail.
-> If the drain operation fails, the upgrade operation will fail by design to ensure that the applications are not disrupted. Please correct what caused the operation to stop (incorrect PDBs, lack of quota, and so on) and re-try the operation.
+```output
+...
+default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
+...
+default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
+...
+```
-To confirm that the upgrade was successful, use the [az aks show][az-aks-show] command:
+## Stop cluster upgrades automatically on API breaking changes (Preview)
-```azurecli-interactive
-az aks show --resource-group myResourceGroup --name myAKSCluster --output table
-```
-The following example output shows that the cluster now runs *1.19.1*:
+To stay within a supported Kubernetes version, you usually have to upgrade your cluster at least once per year and prepare for all possible disruptions. These disruptions include ones caused by API breaking changes, deprecations, and dependencies such as Helm and CSI. It can be difficult to anticipate these disruptions and migrate critical workloads without experiencing any downtime.
-```json
-Name Location ResourceGroup KubernetesVersion ProvisioningState Fqdn
- - - - -
-myAKSCluster eastus myResourceGroup 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
-```
+AKS now automatically stops upgrade operations consisting of a minor version change if deprecated APIs are detected. This feature alerts you with an error message if it detects usage of APIs that are deprecated in the targeted version.
-### [Azure PowerShell](#tab/azure-powershell)
+All of the following criteria must be met in order for the stop to occur:
-With a list of available versions for your AKS cluster, use the [Set-AzAksCluster][set-azakscluster] cmdlet to upgrade. During the upgrade process, AKS will:
+* The upgrade operation is a Kubernetes minor version change for the cluster control plane.
+* The Kubernetes version you're upgrading to is 1.26 or later.
+* If performed via REST, the upgrade operation uses a preview API version of `2023-01-02-preview` or later.
+* If performed via Azure CLI, the `aks-preview` CLI extension 0.5.134 or later must be installed (see the sketch after this list).
+* The last seen usage of deprecated APIs for the targeted version you're upgrading to must occur within 12 hours before the upgrade operation. AKS records usage hourly, so any usage of deprecated APIs within one hour isn't guaranteed to appear in the detection.
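+
+If you're using Azure CLI, a quick sketch for getting the required extension version in place:
+
+```azurecli-interactive
+# Install the aks-preview extension, or update it if it's already installed
+az extension add --name aks-preview
+az extension update --name aks-preview
+```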
-- Add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version.
-- [Cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications. If you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified.
-- When the old node is fully drained, it will be reimaged to receive the new version, and it will become the buffer node for the following node to be upgraded.
-- This process repeats until all nodes in the cluster have been upgraded.
-- At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance.
+### Mitigating stopped upgrade operations
+If you attempt an upgrade and all of the previous criteria are met, you receive an error message similar to the following example:
-```azurepowershell-interactive
-Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -KubernetesVersion <KUBERNETES_VERSION>
+```output
+Bad Request({
+ "code": "ValidationError",
+ "message": "Control Plane upgrade is blocked due to recent usage of a Kubernetes API deprecated in the specified version. Please refer to https://kubernetes.io/docs/reference/using-api/deprecation-guide to migrate the usage. To bypass this error, set IgnoreKubernetesDeprecations in upgradeSettings.overrideSettings. Bypassing this error without migrating usage will result in the deprecated Kubernetes API calls failing. Usage details: 1 error occurred:\n\t* usage has been detected on API flowcontrol.apiserver.k8s.io.prioritylevelconfigurations.v1beta1, and was recently seen at: 2023-03-23 20:57:18 +0000 UTC, which will be removed in 1.26\n\n",
+ "subcode": "UpgradeBlockedOnDeprecatedAPIUsage"
+})
```
-It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+After receiving the error message, you have two options to mitigate the issue. You can either [remove usage of deprecated APIs (recommended)](#remove-usage-of-deprecated-apis-recommended) or [bypass validation to ignore API changes](#bypass-validation-to-ignore-api-changes).
-> [!IMPORTANT]
-> Ensure that any `PodDisruptionBudgets` (PDBs) allow for at least 1 pod replica to be moved at a time otherwise the drain/evict operation will fail.
-> If the drain operation fails, the upgrade operation will fail by design to ensure that the applications are not disrupted. Please correct what caused the operation to stop (incorrect PDBs, lack of quota, and so on) and re-try the operation.
+### Remove usage of deprecated APIs (recommended)
-To confirm that the upgrade was successful, use the [Get-AzAksCluster][get-azakscluster] command:
+1. In the Azure portal, navigate to your cluster's overview page, and select **Diagnose and solve problems**.
-```azurepowershell-interactive
-Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Format-Table -Property Name, Location, KubernetesVersion, ProvisioningState, Fqdn
-```
+2. Navigate to the **Known Issues, Availability and Performance** category, and select **Selected Kubernetes API deprecations**.
-The following example output shows that the cluster now runs *1.19.1*:
+ :::image type="content" source="./media/upgrade-cluster/applens-api-detection-inline.png" lightbox="./media/upgrade-cluster/applens-api-detection-full.png" alt-text="A screenshot of the Azure portal showing the 'Selected Kubernetes API deprecations' section.":::
-```Output
-Name Location KubernetesVersion ProvisioningState Fqdn
-- -- -- -- -
-myAKSCluster eastus 1.19.1 Succeeded myakscluster-dns-379cbbb9.hcp.eastus.azmk8s.io
-```
+3. Wait 12 hours from the time the last deprecated API usage was seen.
-### [Azure portal](#tab/azure-portal)
+4. Retry your cluster upgrade.
-You can also manually upgrade your cluster in the Azure portal.
+You can also check past API usage by enabling [Container Insights][container-insights] and exploring kube audit logs.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to your AKS cluster.
-3. Under **Settings**, select **Cluster configuration**.
-4. In **Kubernetes version**, select **Upgrade version**. This will redirect you to a new page.
-5. In **Kubernetes version**, select your desired version and then select **Save**.
+### Bypass validation to ignore API changes
-The Azure portal also highlights all the deprecated APIs between your current version and newer, available versions you intend to migrate to. For more information, see [the Kubernetes API Removal and Deprecation process][k8s-deprecation].
+> [!NOTE]
+> This method requires you to use the `aks-preview` Azure CLI extension version 0.5.134 or later. This method isn't recommended, as deprecated APIs in the targeted Kubernetes version may not work long term. We recommend removing them as soon as possible after the upgrade completes.
+Bypass validation to ignore API breaking changes using the [`az aks update`][az-aks-update] command, setting the `upgrade-settings` property to `IgnoreKubernetesDeprecations` and the `upgrade-override-until` property to define the end of the window during which validation is bypassed. If no value is set, the window defaults to three days from the current time. The date and time you specify must be in the future.
-It takes a few minutes to upgrade the cluster, depending on how many nodes you have.
+```azurecli-interactive
+az aks update --name myAKSCluster --resource-group myResourceGroup --upgrade-settings IgnoreKubernetesDeprecations --upgrade-override-until 2023-04-01T13:00:00Z
+```
-To confirm that the upgrade was successful, navigate to your AKS cluster in the Azure portal. On the **Overview** page, select the **Kubernetes version**.
+> [!NOTE]
+> `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations).
-
+## Customize node surge upgrade
-## View the upgrade events
+> [!IMPORTANT]
+>
+> Node surges require subscription quota for the requested max surge count for each upgrade operation. For example, a cluster that has five node pools, each with a count of four nodes, has a total of 20 nodes. If each node pool has a max surge value of 50%, additional compute and IP quota of 10 nodes (2 nodes * 5 pools) is required to complete the upgrade.
+>
+> The max surge setting on a node pool is persistent. Subsequent Kubernetes upgrades or node version upgrades will use this setting. You may change the max surge value for your node pools at any time. For production node pools, we recommend a max-surge setting of 33%.
+>
+> If you're using Azure CNI, validate there are available IPs in the subnet to [satisfy IP requirements of Azure CNI](configure-azure-cni.md).
-When you upgrade your cluster, the following Kubernetes events may occur on each node:
+By default, AKS configures upgrades to surge with one extra node. A default value of one for the max surge settings enables AKS to minimize workload disruption by creating an extra node before the cordon/drain of existing applications to replace an older versioned node. The max surge value can be customized per node pool to enable a trade-off between upgrade speed and upgrade disruption. When you increase the max surge value, the upgrade process completes faster. If you set a large value for max surge, you might experience disruptions during the upgrade process.
-- Surge – Create surge node.
-- Drain – Pods are being evicted from the node. Each pod has a 30-second timeout to complete the eviction.
-- Update – Update of a node has succeeded or failed.
-- Delete – Deleted a surge node.
+For example, a max surge value of *100%* provides the fastest possible upgrade process (doubling the node count) but also causes all nodes in the node pool to be drained simultaneously. You might want to use a higher value such as this for testing environments. For production node pools, we recommend a `max_surge` setting of *33%*.
-Use `kubectl get events` to show events in the default namespaces while running an upgrade. For example:
+AKS accepts both integer values and a percentage value for max surge. An integer such as *5* indicates five extra nodes to surge. A value of *50%* indicates a surge value of half the current node count in the pool. Max surge percent values can be a minimum of *1%* and a maximum of *100%*. A percent value is rounded up to the nearest node count. If the max surge value is higher than the required number of nodes to be upgraded, the number of nodes to be upgraded is used for the max surge value.
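+For example, in a 10-node pool, a max surge value of *33%* computes to 3.3 nodes (10 × 0.33) and rounds up to four surge nodes.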
-```azurecli-interactive
-kubectl get events
-```
+During an upgrade, the max surge value can be a minimum of *1* and a maximum value equal to the number of nodes in your node pool. You can set larger values, but the maximum number of nodes used for max surge isn't higher than the number of nodes in the pool at the time of upgrade.
-The following example output shows some of the above events listed during an upgrade.
+### Set max surge values
-```output
-...
-default 2m1s Normal Drain node/aks-nodepool1-96663640-vmss000001 Draining node: [aks-nodepool1-96663640-vmss000001]
-...
-default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surge node [aks-nodepool1-96663640-vmss000002 nodepool1] for agentpool %!s(MISSING)
-...
+Set max surge values for new or existing node pools using the following commands:
+
+```azurecli-interactive
+# Set max surge for a new node pool
+az aks nodepool add -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 33%
+
+# Update max surge for an existing node pool
+az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManagedCluster --max-surge 5
```

## Set auto-upgrade channel
-In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
+You can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
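+For example, a minimal sketch that sets the `stable` channel with [`az aks update`][az-aks-update] (the cluster and resource group names are placeholders):
+
+```azurecli-interactive
+# Set the auto-upgrade channel on an existing cluster
+az aks update --name myAKSCluster --resource-group myResourceGroup --auto-upgrade-channel stable
+```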
-## Special considerations for node pools that span multiple Availability Zones
+## Special considerations for node pools that span multiple availability zones
-AKS uses best-effort zone balancing in node groups. During an Upgrade surge, zone(s) for the surge node(s) in Virtual Machine Scale Sets is unknown ahead of time. This can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes the surge node(s) once the upgrade has been completed and preserves the original zone balance. If you desire to keep your zones balanced during upgrade, increase the surge to a multiple of three nodes. Virtual Machine Scale Sets will then balance your nodes across Availability Zones with best-effort zone balancing.
+AKS uses best-effort zone balancing in node groups. During an upgrade surge, the zones for the surge nodes in Virtual Machine Scale Sets are unknown ahead of time, which can temporarily cause an unbalanced zone configuration during an upgrade. However, AKS deletes surge nodes once the upgrade completes and preserves the original zone balance. If you want to keep your zones balanced during upgrades, you can increase the surge to a multiple of *three nodes*, and Virtual Machine Scale Sets balances your nodes across availability zones with best-effort zone balancing.
-If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone, and they may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the Upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application. This allows Kubernetes to respect your availability requirements during Upgrade's drain operation.
+If you have PVCs backed by Azure LRS Disks, they'll be bound to a particular zone. They may fail to recover immediately if the surge node doesn't match the zone of the PVC. This could cause downtime on your application when the upgrade operation continues to drain nodes but the PVs are bound to a zone. To handle this case and maintain high availability, configure a [Pod Disruption Budget](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) on your application to allow Kubernetes to respect your availability requirements during the drain operation.
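+As a minimal sketch, a Pod Disruption Budget that keeps at least two replicas of an app available during node drains might look like the following (the `myapp` name and label are placeholder assumptions):
+
+```yaml
+apiVersion: policy/v1
+kind: PodDisruptionBudget
+metadata:
+  name: myapp-pdb
+spec:
+  # Keep at least two pods of the selected app running during voluntary disruptions such as drains
+  minAvailable: 2
+  selector:
+    matchLabels:
+      app: myapp
+```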
## Next steps
-This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the set of tutorials.
+This article showed you how to upgrade an existing AKS cluster. To learn more about deploying and managing AKS clusters, see the following tutorials:
> [!div class="nextstepaction"]
> [AKS tutorials][aks-tutorial-prepare-app]
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
[get-azaksupgradeprofile]: /powershell/module/az.aks/get-azaksupgradeprofile
[az-aks-upgrade]: /cli/azure/aks#az_aks_upgrade
+[az-aks-update]: /cli/azure/aks#az_aks_update
[set-azakscluster]: /powershell/module/az.aks/set-azakscluster
[az-aks-show]: /cli/azure/aks#az_aks_show
[get-azakscluster]: /powershell/module/az.aks/get-azakscluster
aks Workload Identity Migrate From Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md
Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity (preview)
+ Title: Migrate your Azure Kubernetes Service (AKS) pod to use workload identity
description: In this Azure Kubernetes Service (AKS) article, you learn how to configure your Azure Kubernetes Service pod to authenticate with workload identity. Previously updated : 03/14/2023 Last updated : 04/24/2023 # Migrate from pod managed-identity to workload identity
-This article focuses on migrating from a pod-managed identity to Azure Active Directory (Azure AD) workload identity (preview) for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application.
+This article focuses on migrating from a pod-managed identity to Azure Active Directory (Azure AD) workload identity for your Azure Kubernetes Service (AKS) cluster. It also provides guidance depending on the version of the [Azure Identity][azure-identity-supported-versions] client library used by your container-based application.
## Before you begin

-- The Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+You need Azure CLI version 2.40.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade it. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Migration scenarios
If your cluster isn't using the latest version of the Azure Identity SDK, you ha
- [Deploy the workload with migration sidecar](#deploy-the-workload-with-migration-sidecar) to proxy the application IMDS transactions.
- Once you verify the authentication transactions are completing successfully, you can [remove the pod-managed identity](#remove-pod-managed-identity) annotations from your application and then remove the pod-managed identity add-on.
+ > [!NOTE]
+ > The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDKs to a supported version; it isn't intended as a long-term solution.
+
- Rewrite your application to support the latest version of the [Azure Identity][azure-identity-supported-versions] client library. Afterwards, perform the following steps:
  - Restart your application deployment to begin authenticating using the workload identity.
az identity federated-credential create --name federatedIdentityName --identity-
## Deploy the workload with migration sidecar

> [!NOTE]
-> The migration sidecar is **not supported for production usage**. This feature was designed to give customers time to migrate there application SDK's to a supported version and not be a long running process.
+> The migration sidecar is **not supported for production use**. This feature is meant to give you time to migrate your application SDKs to a supported version; it isn't intended as a long-term solution.
If your application is using managed identity and still relies on IMDS to get an access token, you can use the workload identity migration sidecar to start migrating to workload identity. The sidecar is a migration solution; in the long term, you should modify your applications' code to use the latest Azure Identity SDKs that support client assertion.
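+As an illustrative sketch only, the sidecar is enabled through pod labels and annotations along the following lines; the exact label and annotation names here are assumptions, so verify them against the current workload identity documentation before use:
+
+```yaml
+metadata:
+  labels:
+    # Opt the pod into workload identity
+    azure.workload.identity/use: "true"
+  annotations:
+    # Assumed annotation names for injecting the migration proxy sidecar
+    azure.workload.identity/inject-proxy-sidecar: "true"
+    azure.workload.identity/proxy-sidecar-port: "8000"
+```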
After you've completed your testing and the application is successfully able to
## Next steps
-This article showed you how to set up your pod to authenticate using a workload identity as a migration option. For more information about Azure AD workload identity (preview), see the following [Overview][workload-identity-overview] article.
+This article showed you how to set up your pod to authenticate using a workload identity as a migration option. For more information about Azure AD workload identity, see the following [Overview][workload-identity-overview] article.
<!-- INTERNAL LINKS -->
[pod-annotations]: workload-identity-overview.md#pod-annotations
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
description: Reference index for all Azure API Management policies and settings.
-+ Last updated 12/01/2022
api-management Protect With Defender For Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/protect-with-defender-for-apis.md
Last updated 04/20/2023
# Enable advanced API security features using Microsoft Defender for Cloud
-<!-- Update links to D4APIs docs when available -->
-Defender for APIs, a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes.
+[Defender for APIs](/azure/defender-for-cloud/defender-for-apis-introduction), a capability of [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction), offers full lifecycle protection, detection, and response coverage for APIs that are managed in Azure API Management. The service empowers security practitioners to gain visibility into their business-critical APIs, understand their security posture, prioritize vulnerability fixes, and detect active runtime threats within minutes.
Capabilities of Defender for APIs include:
This article shows how to use the Azure portal to enable Defender for APIs from
Onboarding APIs to Defender for APIs is a two-step process: enabling the Defender for APIs plan for the subscription, and onboarding unprotected APIs in your API Management instances.

> [!TIP]
-> You can also onboard to Defender for APIs directly in the Defender for Cloud interface, where more API security insights and inventory experiences are available.
+> You can also onboard to Defender for APIs directly in the [Defender for Cloud interface](/azure/defender-for-cloud/defender-for-apis-deploy), where more API security insights and inventory experiences are available.
### Enable the Defender for APIs plan for a subscription
For the security alerts received, Defender for APIs suggests necessary steps to
## Offboard protected APIs from Defender for APIs
-You can remove APIs from protection by Defender for APIs by using Defender for Cloud in the portal. For more information, see the Microsoft Defender for Cloud documentation.
+You can remove APIs from protection by Defender for APIs by using Defender for Cloud in the portal. For more information, see [Manage your Defender for APIs deployment](/azure/defender-for-cloud/defender-for-apis-manage).
## Next steps

* Learn more about [Defender for Cloud](/azure/defender-for-cloud/defender-for-cloud-introduction)
+* Learn more about [API findings, recommendations, and alerts](/azure/defender-for-cloud/defender-for-apis-posture) in Defender for APIs
* Learn how to [upgrade and scale](upgrade-and-scale.md) an API Management instance
app-service Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-local-git.md
When you push commits to your App Service repository, App Service deploys the fi
You can also change the `DEPLOYMENT_BRANCH` app setting in the Azure portal by selecting **Configuration** under **Settings** and adding a new application setting with a name of `DEPLOYMENT_BRANCH` and a value of `main`.
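+As a minimal sketch, the same setting can be applied with the Azure CLI (the app and resource group names are placeholders):
+
+```azurecli-interactive
+# Set the deployment branch app setting to "main"
+az webapp config appsettings set --name <app-name> --resource-group <group-name> --settings DEPLOYMENT_BRANCH=main
+```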
-> [!NOTE]
-> You can also change the `DEPLOYMENT_BRANCH` using the Azure Portal interface, by selecting **Deployment Center** under **Deployment** and modifying the `Branch`.
-
## Troubleshoot deployment

You may see the following common error messages when you use Git to publish to an App Service app in Azure:
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
Previously updated : 11/4/2019 Last updated : 04/24/2023
resources, and creates and applies Application Gateway config based on the statu
- Option 1: [Set up aad-pod-identity](#set-up-aad-pod-identity) and create Azure Identity on ARMs
- Option 2: [Using a Service Principal](#using-a-service-principal)
- [Install Ingress Controller using Helm](#install-ingress-controller-as-a-helm-chart)
-- [Multi-cluster / Shared Application Gateway](#multi-cluster--shared-application-gateway): Install AGIC in an environment, where Application Gateway is
+- [Shared Application Gateway](#shared-application-gateway): Install AGIC in an environment where Application Gateway is
shared between one or more AKS clusters and/or other Azure components.

## Prerequisites
This document assumes you already have the following tools and infrastructure in
- [AKS](https://azure.microsoft.com/services/kubernetes-service/) with [Azure Container Networking Interface (CNI)](../aks/configure-azure-cni.md)
- [Application Gateway v2](./tutorial-autoscale-ps.md) in the same virtual network as AKS
- [AAD Pod Identity](https://github.com/Azure/aad-pod-identity) installed on your AKS cluster
-- [Cloud Shell](https://shell.azure.com/) is the Azure shell environment, which has `az` CLI, `kubectl`, and `helm` installed. These tools are required for the commands below.
+- [Cloud Shell](https://shell.azure.com/) is the Azure shell environment, which has the `az` CLI, `kubectl`, and `helm` installed. These tools are required for the commands in this article.
-Please __backup your Application Gateway's configuration__ before installing AGIC:
+**Back up your Application Gateway's configuration** before installing AGIC:
1. Using the [Azure portal](https://portal.azure.com/), navigate to your `Application Gateway` instance.
2. From `Export template`, select `Download`.
-The zip file you downloaded will have JSON templates, bash, and PowerShell scripts you could use to restore App
+The zip file you downloaded contains JSON templates, bash, and PowerShell scripts you could use to restore App
Gateway should that become necessary.

## Install Helm
-[Helm](../aks/kubernetes-helm.md) is a package manager for
-Kubernetes. We will leverage it to install the `application-gateway-kubernetes-ingress` package.
+[Helm](../aks/kubernetes-helm.md) is a package manager for Kubernetes, used to install the `application-gateway-kubernetes-ingress` package.
Use [Cloud Shell](https://shell.azure.com/) to install Helm:

1. Install [Helm](../aks/kubernetes-helm.md) and run the following to add `application-gateway-kubernetes-ingress` helm package:
Next we need to create an Azure identity and give it permissions to ARM.
Use [Cloud Shell](https://shell.azure.com/) to run all of the following commands and create an identity:

1. Create an Azure identity **in the same resource group as the AKS nodes**. Picking the correct resource group is
-important. The resource group required in the command below is *not* the one referenced on the AKS portal pane. This is
+important. The resource group required in the following commands is *not* the one referenced on the AKS portal pane. This is
the resource group of the `aks-agentpool` virtual machines. Typically that resource group starts with `MC_` and contains the name of your AKS. For instance: `MC_resourceGroup_aksABCD_westus`
the resource group of the `aks-agentpool` virtual machines. Typically that resou
    az identity create -g <agent-pool-resource-group> -n <identity-name>
    ```
-1. For the role assignment commands below we need to obtain `principalId` for the newly created identity:
+1. For the role assignment commands, we need to obtain the `principalId` for the newly created identity:
    ```azurecli
    az identity show -g <resourcegroup> -n <identity-name>
    ```
-1. Give the identity `Contributor` access to your Application Gateway. For this you need the ID of the Application Gateway, which will
-look something like this: `/subscriptions/A/resourceGroups/B/providers/Microsoft.Network/applicationGateways/C`
+1. Give the identity `Contributor` access to your Application Gateway. For this you need the ID of the Application Gateway, which
+looks something like this: `/subscriptions/A/resourceGroups/B/providers/Microsoft.Network/applicationGateways/C`
Get the list of Application Gateway IDs in your subscription with: `az network application-gateway list --query '[].id'`
look something like this: `/subscriptions/A/resourceGroups/B/providers/Microsoft
```

## Using a Service Principal
-It is also possible to provide AGIC access to ARM via a Kubernetes secret.
+It's also possible to provide AGIC access to ARM via a Kubernetes secret.
1. Create an Active Directory Service Principal and encode with base64. The base64 encoding is required for the JSON blob to be saved to Kubernetes.
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
    helm repo update
    ```
-1. Download helm-config.yaml, which will configure AGIC:
+1. Download helm-config.yaml, which configures AGIC:
    ```bash
    wget https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/sample-helm-config.yaml -O helm-config.yaml
    ```
- Or copy the YAML file below:
+ Or copy the following YAML file:
    ```yaml
    # This file contains the essential configs for the ingress controller helm chart
In the first few steps, we install Helm's Tiller on your Kubernetes cluster. Use
    verbosityLevel: 3

    ################################################################################
- # Specify which application gateway the ingress controller will manage
+ # Specify which application gateway the ingress controller must manage
    #
    appgw:
        subscriptionId: <subscriptionId>
        resourceGroup: <resourceGroupName>
        name: <applicationGatewayName>
- # Setting appgw.shared to "true" will create an AzureIngressProhibitedTarget CRD.
+ # Setting appgw.shared to "true" creates an AzureIngressProhibitedTarget CRD.
    # This prohibits AGIC from applying config for any host/path.
    # Use "kubectl get AzureIngressProhibitedTargets" to view and change this.
    shared: false

    ################################################################################
- # Specify which kubernetes namespace the ingress controller will watch
+ # Specify which kubernetes namespace the ingress controller must watch
    # Default value is "default"
    # Leaving this variable out or setting it to blank or empty string would
    # result in Ingress Controller observing all accessible namespaces.
Refer to [this how-to guide](ingress-controller-expose-service-over-http-https.m
-## Multi-cluster / Shared Application Gateway
-By default AGIC assumes full ownership of the Application Gateway it is linked to. AGIC version 0.8.0 and later can
+## Shared Application Gateway
+By default, AGIC assumes full ownership of the Application Gateway it's linked to. AGIC version 0.8.0 and later can
share a single Application Gateway with other Azure components. For instance, we could use the same Application Gateway for an app
-hosted on Virtual Machine Scale Set as well as an AKS cluster.
+hosted on Virtual Machine Scale Set and an AKS cluster.
-Please __backup your Application Gateway's configuration__ before enabling this setting:
+**Back up your Application Gateway's configuration** before enabling this setting:
1. Using the [Azure portal](https://portal.azure.com/), navigate to your `Application Gateway` instance.
2. From `Export template`, select `Download`.
-The zip file you downloaded will have JSON templates, bash, and PowerShell scripts you could use to restore Application Gateway
+The zip file you downloaded contains JSON templates, bash, and PowerShell scripts you could use to restore Application Gateway
### Example Scenario

Let's look at an imaginary Application Gateway, which manages traffic for two web sites:

- `dev.contoso.com` - hosted on a new AKS, using Application Gateway and AGIC
- `prod.contoso.com` - hosted on an [Azure Virtual Machine Scale Set](https://azure.microsoft.com/services/virtual-machine-scale-sets/)
-With default settings, AGIC assumes 100% ownership of the Application Gateway it is pointed to. AGIC overwrites all of App
+With default settings, AGIC assumes 100% ownership of the Application Gateway it's pointed to. AGIC overwrites all of App
Gateway's configuration. If we were to manually create a listener for `prod.contoso.com` (on Application Gateway), without
-defining it in the Kubernetes Ingress, AGIC will delete the `prod.contoso.com` config within seconds.
+defining it in the Kubernetes Ingress, AGIC deletes the `prod.contoso.com` config within seconds.
To install AGIC and also serve `prod.contoso.com` from our Virtual Machine Scale Set machines, we must constrain AGIC to configuring `dev.contoso.com` only. This is facilitated by instantiating the following
related to that hostname.
### Enable with new AGIC installation

To limit AGIC (version 0.8.0 and later) to a subset of the Application Gateway configuration, modify the `helm-config.yaml` template.
-Under the `appgw:` section, add `shared` key and set it to to `true`.
+Under the `appgw:` section, add the `shared` key and set it to `true`.
```yaml
appgw:
Apply the Helm changes:
    ingress-azure application-gateway-kubernetes-ingress/ingress-azure
    ```
-As a result your AKS will have a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`:
+As a result, your AKS cluster has a new instance of `AzureIngressProhibitedTarget` called `prohibit-all-targets`:
```bash
kubectl get AzureIngressProhibitedTargets prohibit-all-targets -o yaml
```

The object `prohibit-all-targets`, as the name implies, prohibits AGIC from changing config for *any* host and path.
-Helm install with `appgw.shared=true` will deploy AGIC, but won't make any changes to Application Gateway.
+Helm install with `appgw.shared=true` deploys AGIC, but doesn't make any changes to Application Gateway.
### Broaden permissions
-Since Helm with `appgw.shared=true` and the default `prohibit-all-targets` blocks AGIC from applying any config.
+Because Helm with `appgw.shared=true` and the default `prohibit-all-targets` blocks AGIC from applying any config, broaden AGIC's permissions:
-Broaden AGIC permissions with:
1. Create a new `AzureIngressProhibitedTarget` with your specific setup:

    ```bash
    cat <<EOF | kubectl apply -f -
are going to reuse the existing Application Gateway and manually configure a lis
`staging.contoso.com`. But manually tweaking Application Gateway config (via [portal](https://portal.azure.com), [ARM APIs](/rest/api/resources/) or [Terraform](https://www.terraform.io/)) would conflict with AGIC's assumptions of full ownership. Shortly after we apply
-changes, AGIC will overwrite or delete them.
+changes, AGIC overwrites or deletes them.
We can prohibit AGIC from making changes to a subset of configuration.
    ```

3. Modify Application Gateway config via portal - add listeners, routing rules, backends etc. The new object we created
-(`manually-configured-staging-environment`) will prohibit AGIC from overwriting Application Gateway configuration related to
+(`manually-configured-staging-environment`) prohibits AGIC from overwriting Application Gateway configuration related to
`staging.contoso.com`.
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Title: Azure Automation Update Management overview
description: This article provides an overview of the Update Management feature that implements updates for your Windows and Linux machines. Previously updated : 11/25/2022 Last updated : 04/22/2023 # Update Management overview
+> [!Important]
+> - Automation Update management relies on the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (also known as the MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. [Update management center (Preview)](../../update-center/overview.md) (UMC) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and doesn't rely on the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or the [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md).
+> - Guidance for migrating from Automation Update management to Update management center will be provided once the latter is generally available. If you use Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided; otherwise, Automation Update management won't work. The Log Analytics agent won't be deprecated before all Automation Update management customers are moved to UMC.
+ You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, for physical machines or VMs in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates and manage the process of installing required updates for your machines reporting to Update Management. As a service provider, you may have onboarded multiple customer tenants to [Azure Lighthouse](../../lighthouse/overview.md). Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Azure Active Directory (Azure AD) tenant, or across tenants using Azure Lighthouse.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters." Previously updated : 01/18/2023 Last updated : 04/20/2023 description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall."
Before you begin, review the [conceptual overview of the cluster connect feature
### [Azure CLI](#tab/azure-cli)
-1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, run this command to create a service account. This example creates the service account in the default namespace, but you can substitute any other namespace for `default`.
```console
- kubectl create serviceaccount demo-user
+ kubectl create serviceaccount demo-user -n default
```
-1. Create ClusterRoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
+1. Create ClusterRoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). If you used a different namespace in the first command, substitute it here for `default`.
```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
Before you begin, review the [conceptual overview of the cluster connect feature
### [Azure PowerShell](#tab/azure-powershell)
-1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):
+1. With the `kubeconfig` file pointing to the `apiserver` of your Kubernetes cluster, run this command to create a service account. This example creates the service account in the default namespace, but you can substitute any other namespace for `default`.
```console
- kubectl create serviceaccount demo-user
+ kubectl create serviceaccount demo-user -n default
```
-1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
+1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). If you used a different namespace in the first command, substitute it here for `default`.
```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
azure-functions Functions Host Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json.md
For more information about these settings, see [Sampling in Application Insights
### applicationInsights.snapshotConfiguration
-For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot.md).
+For more information on snapshots, see [Debug snapshots on exceptions in .NET apps](../azure-monitor/app/snapshot-debugger.md) and [Troubleshoot problems enabling Application Insights Snapshot Debugger or viewing snapshots](/troubleshoot/azure/azure-monitor/app-insights/snapshot-debugger-troubleshoot).
|Property | Default | Description |
|---------|---------|-------------|
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
The [Search service] is a set of RESTful APIs designed to help developers search addresses, places, and business listings by name, category, and other geographic information. In addition to supporting traditional geocoding, services can also reverse geocode addresses and cross streets based on latitudes and longitudes. Latitude and longitude values returned by the search can be used as parameters in other Azure Maps services, such as [Route] and [Weather] services.
-In this article, you'll learn how to:
+This article demonstrates how to:
* Request latitude and longitude coordinates for an address (geocode address location) by using [Search Address]. * Search for an address or Point of Interest (POI) using [Fuzzy Search]. * Use [Reverse Address Search] to translate coordinate location to street address.
-* Translate coordinate location into a human understandable cross street using [Search Address Reverse Cross Street]. Most often, this is needed in tracking applications that receive a GPS feed from a device or asset, and wish to know where the coordinate is located.
+* Translate coordinate location into a human understandable cross street using [Search Address Reverse Cross Street], most often needed in tracking applications that receive a GPS feed from a device or asset and need to know where the coordinate is located.
## Prerequisites
This tutorial uses the [Postman] application, but you may choose a different API
## Request latitude and longitude for an address (geocoding)
-In this example, we'll use [Get Search Address] to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response will also return detailed address properties such as street, postal code, municipality, and country/region information.
+The example in this section uses [Get Search Address] to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response also returns detailed address properties such as street, postal code, municipality, and country/region information.
>[!TIP]
>If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request.
In this example, we'll use [Get Search Address] to convert an address into latit
https://atlas.microsoft.com/search/address/json?&subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109 ```
-3. Click the blue **Send** button. The response body will contain data for a single location.
+3. Select the blue **Send** button. The response body contains data for a single location.
-4. Now, we'll search an address that has more than one possible locations. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Click the blue **Send** button.
+4. Next, search for an address that has more than one possible location. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Select the blue **Send** button.
   :::image type="content" source="./media/how-to-search-for-address/search-address.png" alt-text="Search for address":::

5. Next, try setting the `query` key to `400 Broa`.
-6. Click the **Send** button. You can now see that the response includes responses from multiple countries. To geobias results to the relevant area for your users, always add as many location details as possible to the request.
+6. Select the **Send** button. The response includes results from multiple countries. To geobias results to the relevant area for your users, always add as many location details as possible to the request.
## Fuzzy Search
-[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, the query results can be constrained by a coordinate location and radius, or by defining a bounding box.
+[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, constrain the query results using a coordinate location and radius, or by defining a bounding box.
->[!TIP]
->Most Search queries default to maxFuzzyLevel=1 to gain performance and reduce unusual results. You can adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters].
+> [!TIP]
+> Most Search queries default to `maxFuzzyLevel=1` to improve performance and reduce unusual results. Adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters].
### Search for an address using Fuzzy Search
-In this example, we'll use Fuzzy Search to search the entire world for `pizza`. Then, we'll show you how to search over the scope of a specific country. Finally, we'll show you how to use a coordinate location and radius to scope a search over a specific area, and limit the number of returned results.
+The example in this section uses `Fuzzy Search` to search the entire world for *pizza*, then searches over the scope of a specific country. Finally, it demonstrates how to use a coordinate location and radius to scope a search over a specific area, and limit the number of returned results.
->[!IMPORTANT]
->To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search].
+> [!IMPORTANT]
+> To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search].
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, we'll use Fuzzy Search to search the entire world for `pizza`.
https://atlas.microsoft.com/search/fuzzy/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza ```
- >[!NOTE]
- >The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference] documentation.
+ > [!NOTE]
+ > The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference] documentation.
-3. Click **Send** and review the response body.
+3. Select **Send** and review the response body.
- The ambiguous query string for "pizza" returned 10 [point of interest result] (POI) in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and are not tied to any reference location.
+ The ambiguous query string for "pizza" returned 10 [point of interest result] (POI) entries in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and aren't tied to any reference location.
- In the next step, we'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage].
+ In the next step, you'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage].
-4. The default behavior is to search the entire world, potentially returning unnecessary results. Next, weΓÇÖll search for pizza only the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` will bound the results to the United States.
+4. The default behavior is to search the entire world, potentially returning unnecessary results. Next, search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` bounds the results to the United States.
   :::image type="content" source="./media/how-to-search-for-address/search-fuzzy-country.png" alt-text="Search for pizza in the United States":::

   The results are now bounded by the country code and the query returns pizza restaurants in the United States.
-5. To get an even more targeted search, you can search over the scope of a lat./lon. coordinate pair. In this example, we'll use the lat./lon. of the Seattle Space Needle. Since we only want to return results within a 400-meters radius, we'll add the `radius` parameter. Also, we'll add the `limit` parameter to limit the results to the five closest pizza places.
+5. To get an even more targeted search, you can search over the scope of a lat/lon coordinate pair. The following example uses the lat/lon coordinates of the Seattle Space Needle. Since we only want to return results within a 400-meter radius, we add the `radius` parameter. We also add the `limit` parameter to limit the results to the five closest pizza places.
In the **Params** section, add the following key/value pairs:
- | Key | Value |
- |--||
- | lat | 47.620525 |
- | lon | -122.349274 |
- | radius | 400 |
- | limit | 5|
+ | Key | Value |
+ |--||
+ | lat | 47.620525 |
+ | lon | -122.349274|
+ | radius | 400 |
+ | limit | 5 |
-6. Click **Send**. The response includes results for pizza restaurants near the Seattle Space Needle.
+6. Select **Send**. The response includes results for pizza restaurants near the Seattle Space Needle.
## Search for a street address using Reverse Address Search

[Get Search Address Reverse] translates coordinates into human readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points.
->[!IMPORTANT]
->To geobias results to the relevant area for your users, always add as many location details as possible. To learn more, see [Best Practices for Search].
+> [!IMPORTANT]
+> To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search].
->[!TIP]
->If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch] to send a batch of queries in a single request.
+> [!TIP]
+> If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch] to send a batch of queries in a single request.
-In this example, we'll be making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters].
+This example demonstrates making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters].
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, we'll be making reverse searches using a few of the optional pa
https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1 ```
-3. Click **Send**, and review the response body. You should see one query result. The response includes key address information about Safeco Field.
+3. Select **Send**, and review the response body. You should see one query result. The response includes key address information about Safeco Field.
-4. Now, we'll add the following key/value pairs to the **Params** section:
+4. Next, add the following key/value pairs to the **Params** section:
   | Key | Value | Returns |
   |--|--|--|
In this example, we'll be making reverse searches using a few of the optional pa
:::image type="content" source="./media/how-to-search-for-address/search-reverse.png" alt-text="Search reverse.":::
-5. Click **Send**, and review the response body.
+5. Select **Send**, and review the response body.
-6. Next, we'll add the `entityType` key, and set its value to `Municipality`. The `entityType` key will override the `returnMatchType` key in the previous step. We'll also need to remove `returnSpeedLimit` and `returnRoadUse` since we're requesting information about the municipality. For all possible entity types, see [Entity Types].
+6. Next, add the `entityType` key, and set its value to `Municipality`. The `entityType` key overrides the `returnMatchType` key in the previous step. You also need to remove `returnSpeedLimit` and `returnRoadUse` since you're requesting information about the municipality. For all possible entity types, see [Entity Types].
:::image type="content" source="./media/how-to-search-for-address/search-reverse-entity-type.png" alt-text="Search reverse entityType.":::
-7. Click **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response does not include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API].
+7. Select **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response doesn't include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API].
->[!TIP]
->To get more information on these parameters, as well as to learn about others, see [Reverse Search Parameters].
+> [!TIP]
+> For more information on these as well as other parameters, see [Reverse Search Parameters].
## Search for cross street using Reverse Address Cross Street Search
-In this example, we'll search for a cross street based on the coordinates of an address.
+This example demonstrates how to search for a cross street based on the coordinates of an address.
1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
In this example, we'll search for a cross street based on the coordinates of an
:::image type="content" source="./media/how-to-search-for-address/search-address-cross.png" alt-text="Search cross street.":::
-3. Click **Send**, and review the response body. You'll notice that the response contains a `crossStreet` value of `South Atlantic Street`.
+3. Select **Send**, and review the response body. Notice that the response contains a `crossStreet` value of `South Atlantic Street`.
## Next steps

> [!div class="nextstepaction"]
-> [Azure Maps Search service](/rest/api/maps/search)
+> [Azure Maps Search service]
> [!div class="nextstepaction"]
-> [Best practices for Azure Maps Search service](how-to-use-best-practices-for-search.md)
+> [Best practices for Azure Maps Search service]
-[Search service]: /rest/api/maps/search
-[Route]: /rest/api/maps/route
-[Weather]: /rest/api/maps/weather
-[Search Address]: /rest/api/maps/search/getsearchaddress
-[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy
-[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse
-[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Postman]: https://www.postman.com/
-[Get Search Address]: /rest/api/maps/search/getsearchaddress
-[Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch
+[Azure Maps Search service]: /rest/api/maps/search
+[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md
+[Best Practices for Search]: how-to-use-best-practices-for-search.md#geobiased-search-results
+[Entity Types]: /rest/api/maps/search/getsearchaddressreverse#entitytype
[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy#uri-parameters
+[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy
[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse
+[Get Search Address]: /rest/api/maps/search/getsearchaddress
[point of interest result]: /rest/api/maps/search/getsearchpoi#searchpoiresponse
+[Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch
[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch
+[Postman]: https://www.postman.com/
+[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse#searchaddressreverseresult
+[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse
[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters
-[Best Practices for Search]: how-to-use-best-practices-for-search.md#geobiased-search-results
[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters
-[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse#searchaddressreverseresult
-[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy#uri-parameters
+[Route]: /rest/api/maps/route
+[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet
+[Search Address]: /rest/api/maps/search/getsearchaddress
[Search Coverage]: geocoding-coverage.md
[Search Polygon API]: /rest/api/maps/search/getsearchpolygon
-[Entity Types]: /rest/api/maps/search/getsearchaddressreverse#entitytype
+[Search service]: /rest/api/maps/search
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy#uri-parameters
+[Weather]: /rest/api/maps/weather
azure-monitor Alerts Log Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-webhook.md
Title: Webhook actions for log alerts in Azure alerts
-description: Describes how to configure a log alert pushes with webhook action and available customizations
+description: This article describes how to configure log alert pushes with webhook action and available customizations.
Last updated 2/23/2022
# Webhook actions for log alert rules
-[Log alert](alerts-log.md) supports [configuring webhook action groups](./action-groups.md#webhook). In this article, we'll describe what properties are available. Webhook actions allow you to invoke a single HTTP POST request. The service that's called should support webhooks and know how to use the payload it receives.
+[Log alerts](alerts-log.md) support [configuring webhook action groups](./action-groups.md#webhook). In this article, we describe the properties that are available. You can use webhook actions to invoke a single HTTP POST request. The service that's called should support webhooks and know how to use the payload it receives.
-> [!NOTE]
-> It is recommended you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor. For log alerts rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described [here](../alerts/alerts-common-schema.md#alert-context-fields-for-log-alerts). This means that if you want to have a custom JSON payload defined, the webhook can't use the common alert schema. Alerts with the common schema enabled have an upper size limit of 256 KB per alert, bigger alert will not include search results. When the search results aren't included, you should use the `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API.
+We recommend that you use [common alert schema](../alerts/alerts-common-schema.md) for your webhook integrations. The common alert schema provides the advantage of having a single extensible and unified alert payload across all the alert services in Azure Monitor.
+
+For log alert rules that have a custom JSON payload defined, enabling the common alert schema reverts the payload schema to the one described in [Common alert schema](../alerts/alerts-common-schema.md#alert-context-fields-for-log-alerts). If you want to have a custom JSON payload defined, the webhook can't use the common alert schema.
+
+Alerts with the common schema enabled have an upper size limit of 256 KB per alert. A bigger alert doesn't include search results. When the search results aren't included, use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to access query results via the Log Analytics API.
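+For example, a minimal sketch that retrieves those results, assuming you already hold an Azure AD bearer token authorized for the Log Analytics API (the token placeholder is illustrative):
+
+```bash
+# The link value is a complete Log Analytics query URL taken from the alert payload
+curl -H "Authorization: Bearer <access-token>" "<LinkToFilteredSearchResultsAPI-value>"
+```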
## Sample payloads

This section shows sample payloads for webhooks for log alerts. The sample payloads include examples when the payload is standard and when it's custom.
The following sample payload is for a standard webhook when it's used for log al
The following sample payload is for a standard webhook action that's used for alerts based on Log Analytics:

> [!NOTE]
-> The "Severity" field value changes if you've [switched to the current scheduledQueryRules API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch) from the [legacy Log Analytics Alert API](./api-alerts.md).
+> The `"Severity"` field value changes if you've [switched to the current scheduledQueryRules API](/previous-versions/azure/azure-monitor/alerts/alerts-log-api-switch) from the [legacy Log Analytics Alert API](./api-alerts.md).
```json
{
The following sample payload is for a standard webhook when it's used for log al
### Log alert with a custom JSON payload (up to API version `2018-04-16`)

> [!NOTE]
-> Custom JSON-based webhook is not supported from API version `2021-08-01`.
+> A custom JSON-based webhook isn't supported from API version `2021-08-01`.
-Default webhook action properties and their custom JSON parameter names:
+The following table lists default webhook action properties and their custom JSON parameter names.
| Parameter | Variable | Description |
|:---|:---|:---|
-| *AlertRuleName* |#alertrulename |Name of the alert rule. |
-| *Severity* |#severity |Severity set for the fired log alert. |
-| *AlertThresholdOperator* |#thresholdoperator |Threshold operator for the alert rule. |
-| *AlertThresholdValue* |#thresholdvalue |Threshold value for the alert rule. |
-| *LinkToSearchResults* |#linktosearchresults |Link to the Analytics portal that returns the records from the query that created the alert. |
-| *LinkToSearchResultsAPI* |#linktosearchresultsapi |Link to the Analytics API that returns the records from the query that created the alert. |
-| *LinkToFilteredSearchResultsUI* |#linktofilteredsearchresultsui |Link to the Analytics portal that returns the records from the query filtered by dimensions value combinations that created the alert. |
-| *LinkToFilteredSearchResultsAPI* |#linktofilteredsearchresultsapi |Link to the Analytics API that returns the records from the query filtered by dimensions value combinations that created the alert. |
-| *ResultCount* |#searchresultcount |Number of records in the search results. |
-| *Search Interval End time* |#searchintervalendtimeutc |End time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. |
-| *Search Interval* |#searchinterval |Time window for the alert rule, with the format HH:mm:ss. |
-| *Search Interval StartTime* |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM.
-| *SearchQuery* |#searchquery |Log search query used by the alert rule. |
-| *SearchResults* |"IncludeSearchResults": true|Records returned by the query as a JSON table, limited to the first 1,000 records. "IncludeSearchResults": true is added in a custom JSON webhook definition as a top-level property. |
-| *Dimensions* |"IncludeDimensions": true|Dimensions value combinations that triggered that alert as a JSON section. "IncludeDimensions": true is added in a custom JSON webhook definition as a top-level property. |
-| *Alert Type*| #alerttype | The type of log alert rule configured as [Metric measurement or Number of results](./alerts-unified-log.md#measure).|
-| *WorkspaceID* |#workspaceid |ID of your Log Analytics workspace. |
-| *Application ID* |#applicationid |ID of your Application Insights app. |
-| *Subscription ID* |#subscriptionid |ID of your Azure subscription used. |
+| `AlertRuleName` |#alertrulename |Name of the alert rule. |
+| `Severity` |#severity |Severity set for the fired log alert. |
+| `AlertThresholdOperator` |#thresholdoperator |Threshold operator for the alert rule. |
+| `AlertThresholdValue` |#thresholdvalue |Threshold value for the alert rule. |
+| `LinkToSearchResults` |#linktosearchresults |Link to the Analytics portal that returns the records from the query that created the alert. |
+| `LinkToSearchResultsAPI` |#linktosearchresultsapi |Link to the Analytics API that returns the records from the query that created the alert. |
+| `LinkToFilteredSearchResultsUI` |#linktofilteredsearchresultsui |Link to the Analytics portal that returns the records from the query filtered by dimensions value combinations that created the alert. |
+| `LinkToFilteredSearchResultsAPI` |#linktofilteredsearchresultsapi |Link to the Analytics API that returns the records from the query filtered by dimensions value combinations that created the alert. |
+| `ResultCount` |#searchresultcount |Number of records in the search results. |
+| `Search Interval End time` |#searchintervalendtimeutc |End time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. |
+| `Search Interval` |#searchinterval |Time window for the alert rule, with the format HH:mm:ss. |
+| `Search Interval StartTime` |#searchintervalstarttimeutc |Start time for the query in UTC, with the format mm/dd/yyyy HH:mm:ss AM/PM. |
+| `SearchQuery` |#searchquery |Log search query used by the alert rule. |
+| `SearchResults` |`"IncludeSearchResults": true`|Records returned by the query as a JSON table, limited to the first 1,000 records. `"IncludeSearchResults": true` is added in a custom JSON webhook definition as a top-level property. |
+| `Dimensions` |`"IncludeDimensions": true`|Dimensions value combinations that triggered that alert as a JSON section. `"IncludeDimensions": true` is added in a custom JSON webhook definition as a top-level property. |
+| `Alert Type`| #alerttype | The type of log alert rule configured as [Metric measurement or Number of results](./alerts-unified-log.md#measure).|
+| `WorkspaceID` |#workspaceid |ID of your Log Analytics workspace. |
+| `Application ID` |#applicationid |ID of your Application Insights app. |
+| `Subscription ID` |#subscriptionid |ID of your Azure subscription used. |
-You can use the **Include custom JSON payload for webhook** to get a custom JSON payload using the parameters. You can also generate additional properties.
-For example, you might specify the following custom payload that includes a single parameter called *text*. The service that this webhook calls expects this parameter:
+You can use **Include custom JSON payload for webhook** to get a custom JSON payload by using the parameters. You can also generate more properties.
+
+For example, you might specify the following custom payload that includes a single parameter called `text`. The service that this webhook calls expects this parameter:
```json
For example, you might specify the following custom payload that includes a sing
"text":"#alertrulename fired with #searchresultcount over threshold of #thresholdvalue." } ```
-This example payload resolves to something like the following when it's sent to the webhook:
+
+This example payload resolves to something like the following example when it's sent to the webhook:
```json { "text":"My Alert Rule fired with 18 records over threshold of 10 ." } ```
-Variables in a custom webhook must be specified within a JSON enclosure. For example, referencing "#searchresultcount" in the webhook example will output based on the alert results.
-To include search results, add **IncludeSearchResults** as a top-level property in the custom JSON. Search results are included as a JSON structure, so results can't be referenced in custom defined fields.
+Variables in a custom webhook must be specified within a JSON enclosure. For example, referencing `#searchresultcount` in the webhook example generates output based on the alert results.
+
+To include search results, add **IncludeSearchResults** as a top-level property in the custom JSON. Search results are included as a JSON structure, so results can't be referenced in custom-defined fields.
> [!NOTE]
-> The **View Webhook** button next to the **Include custom JSON payload for webhook** option displays preview of what was provided. It doesn't contain actual data, but is representative of the JSON schema that will be used.
+> The **View Webhook** button next to the **Include custom JSON payload for webhook** option displays a preview of what was provided. It doesn't contain actual data but is representative of the JSON schema that will be used.
-For example, to create a custom payload that includes just the alert name and the search results, use this configuration:
+For example, to create a custom payload that includes only the alert name and the search results, use this configuration:
```json {
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
Title: Smart detection in Azure Application Insights | Microsoft Docs
-description: Application Insights performs automatic deep analysis of your app telemetry and warns you of potential problems.
+ Title: Smart detection in Application Insights | Microsoft Docs
+description: Application Insights performs automatic deep analysis of your app telemetry and warns you about potential problems.
Last updated 02/07/2019
# Smart detection in Application Insights >[!NOTE]
->You can migrate smart detection on your Application Insights resource to be based on alerts. The migration creates alert rules for the different smart detection modules. Once created, you can manage and configure these rules just like any other Azure Monitor alert rules. You can also configure action groups for these rules, thus enabling multiple methods of taking actions or triggering notification on new detections.
+>You can migrate smart detection on your Application Insights resource to be based on alerts. The migration creates alert rules for the different smart detection modules. After they're created, you can manage and configure these rules like any other Azure Monitor alert rules. You can also configure action groups for these rules to enable multiple methods of taking actions or triggering notifications on new detections.
>
-> For more information, see [Smart Detection Alerts migration](./alerts-smart-detections-migration.md).
+> For more information, see [Smart detection alerts migration](./alerts-smart-detections-migration.md).
-Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](../app/app-insights-overview.md). If there is a sudden rise in failure rates, or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
+Smart detection automatically warns you of potential performance problems and failure anomalies in your web application. It performs proactive analysis of the telemetry that your app sends to [Application Insights](../app/app-insights-overview.md). If there's a sudden rise in failure rates or abnormal patterns in client or server performance, you get an alert. This feature needs no configuration. It operates if your application sends enough telemetry.
-You can access the detections issued by smart detection both from the emails you receive, and from the smart detection pane.
+You can access the detections issued by smart detection from the emails you receive and from the smart detection pane.
## Review your smart detections You can discover detections in two ways: * **You receive an email** from Application Insights. Here's a typical example:
- ![Email alert](./media/proactive-diagnostics/03.png)
+ ![Screenshot that shows an email alert.](./media/proactive-diagnostics/03.png)
- Click the large button to open more detail in the portal.
-* **The smart detection pane** in Application Insights. Select **Smart detection** under the **Investigate** menu to see a list of recent detections.
+   Select **See the analysis of this issue** to view more details in the portal.
+* **The smart detection pane** in Application Insights. Under the **Investigate** menu, select **Smart Detection** to see a list of recent detections.
-![View recent detections](./media/proactive-diagnostics/04.png)
+ ![Screenshot that shows recent detections.](./media/proactive-diagnostics/04.png)
Select a detection to view its details. ## What problems are detected?
-Smart detection detects and notifies about various issues, such as:
+Smart detection detects and notifies you about various issues:
-* [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md). We use machine learning to set the expected rate of failed requests for your app, correlating with load, and other factors. Notifies if the failure rate goes outside the expected envelope.
-* [Smart detection - Performance Anomalies](./smart-detection-performance.md). Notifies if response time of an operation or dependency duration is slowing down, compared to historical baseline. It also notifies if we identify an anomalous pattern in response time, or page load time.
-* General degradations and issues, like [Trace degradation](./proactive-trace-severity.md), [Memory leak](./proactive-potential-memory-leak.md), [Abnormal rise in Exception volume](./proactive-exception-volume.md) and [Security anti-patterns](./proactive-application-security-detection-pack.md).
+* [Smart detection - Failure Anomalies](./proactive-failure-diagnostics.md): Notifies if the failure rate goes outside the expected envelope. We use machine learning to set the expected rate of failed requests for your app, correlating with load and other factors.
+* [Smart detection - Performance Anomalies](./smart-detection-performance.md): Notifies if response time of an operation or dependency duration is slowing down compared to the historical baseline. It also notifies if we identify an anomalous pattern in response time or page load time.
+* **General degradations and issues**: [Trace degradation](./proactive-trace-severity.md), [Memory leak](./proactive-potential-memory-leak.md), [Abnormal rise in Exception volume](./proactive-exception-volume.md), and [Security anti-patterns](./proactive-application-security-detection-pack.md).
-(The help links in each notification take you to the relevant articles.)
+The help links in each notification take you to the relevant articles.
## Smart detection email notifications All smart detection rules, except for rules marked as _preview_, are configured by default to send email notifications when detections are found.
-Configuring email notifications for a specific smart detection rule can be done by opening the smart detection **Settings** pane and selecting the rule, which will open the **Edit rule** pane.
-
-Alternatively, you can change the configuration using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules using Azure Resource Manager templates](./proactive-arm-config.md) for more details.
+You can configure email notifications for a specific smart detection rule. On the smart detection **Settings** pane, select the rule to open the **Edit rule** pane.
+Alternatively, you can change the configuration by using Azure Resource Manager templates. For more information, see [Manage Application Insights smart detection rules by using Azure Resource Manager templates](./proactive-arm-config.md).
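The linked article manages these rules through the `Microsoft.Insights/components/ProactiveDetectionConfigs` resource type. The following fragment is only a minimal illustration of that shape; the parameter names (`appInsightsName`, `location`) and the email address are hypothetical, and the rule's internal name (`slowpageloadtime` here) varies per module, so consult the linked article for the authoritative schema:

```json
{
  "apiVersion": "2018-05-01-preview",
  "type": "Microsoft.Insights/components/ProactiveDetectionConfigs",
  "name": "[concat(parameters('appInsightsName'), '/slowpageloadtime')]",
  "location": "[parameters('location')]",
  "properties": {
    "name": "slowpageloadtime",
    "sendEmailsToSubscriptionOwners": false,
    "customEmails": ["alerts@contoso.com"],
    "enabled": true
  }
}
```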
## Next steps These diagnostic tools help you inspect the telemetry from your app: * [Metric explorer](../essentials/metrics-charts.md) * [Search explorer](../app/diagnostic-search.md)
-* [Analytics - powerful query language](../logs/log-analytics-tutorial.md)
+* [Analytics: Powerful query language](../logs/log-analytics-tutorial.md)
-Smart Detection is automatic. But maybe you'd like to set up some more alerts?
+Smart detection is automatic, but if you want to set up more alerts, see:
* [Manually configured metric alerts](./alerts-log.md) * [Availability web tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability)
azure-monitor App Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md
This section lists all supported platforms and frameworks.
* [React](./javascript-framework-extensions.md) * [React Native](./javascript-framework-extensions.md) * [Angular](./javascript-framework-extensions.md)
-* [Windows desktop applications, services, and worker roles](https://github.com/Microsoft/appcenter)
-* [Universal Windows app](https://github.com/Microsoft/appcenter) (App Center)
-* [Android](https://github.com/Microsoft/appcenter) (App Center)
-* [iOS](https://github.com/Microsoft/appcenter) (App Center)
> [!NOTE] > OpenTelemetry-based instrumentation is available in preview for [C#, Node.js, and Python](opentelemetry-enable.md). Review the limitations noted at the beginning of each language's official documentation. If you require a full-feature experience, use the existing Application Insights SDKs.
azure-monitor Ilogger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md
Title: Application Insights logging with .NET description: Learn how to use Application Insights with the ILogger interface in .NET. Previously updated : 01/24/2023 Last updated : 04/24/2023 ms.devlang: csharp # Application Insights logging with .NET
-In this article, you'll learn how to capture logs with Application Insights in .NET apps by using the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] provider package. If you use this provider, you can query and analyze your logs by using the Application Insights tools.
+In this article, you learn how to capture logs with Application Insights in .NET apps by using the [`Microsoft.Extensions.Logging.ApplicationInsights`][nuget-ai] provider package. If you use this provider, you can query and analyze your logs by using the Application Insights tools.
[nuget-ai]: https://www.nuget.org/packages/Microsoft.Extensions.Logging.ApplicationInsights [nuget-ai-ws]: https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService
namespace WebApplication
-With the NuGet package installed, and the provider being registered with dependency injection, the app is ready to log. With constructor injection, either <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601> is required. When these implementations are resolved, `ApplicationInsightsLoggerProvider` will provide them. Logged messages or exceptions will be sent to Application Insights.
+With the NuGet package installed, and the provider being registered with dependency injection, the app is ready to log. With constructor injection, either <xref:Microsoft.Extensions.Logging.ILogger> or the generic-type alternative <xref:Microsoft.Extensions.Logging.ILogger%601> is required. When these implementations are resolved, `ApplicationInsightsLoggerProvider` provides them. Logged messages or exceptions are sent to Application Insights.
Consider the following example controller:
public class ValuesController : ControllerBase
} ```
-For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging).
+For more information, see [Logging in ASP.NET Core](/aspnet/core/fundamentals/logging) and [What Application Insights telemetry type is produced from ILogger logs? Where can I see ILogger logs in Application Insights?](#what-application-insights-telemetry-type-is-produced-from-ilogger-logs-where-can-i-see-ilogger-logs-in-application-insights).
## Console application
namespace ConsoleApp
+For more information, see [What Application Insights telemetry type is produced from ILogger logs? Where can I see ILogger logs in Application Insights?](#what-application-insights-telemetry-type-is-produced-from-ilogger-logs-where-can-i-see-ilogger-logs-in-application-insights).
+ ## Frequently asked questions
+### What Application Insights telemetry type is produced from ILogger logs? Where can I see ILogger logs in Application Insights?
+
+`ApplicationInsightsLoggerProvider` captures `ILogger` logs and creates `TraceTelemetry` from them. If an `Exception` object is passed to the `Log` method on `ILogger`, `ExceptionTelemetry` is created instead of `TraceTelemetry`.
+
+These telemetry items can be found in the same places as any other `TraceTelemetry` or `ExceptionTelemetry` items for Application Insights, including the Azure portal, analytics, or the Visual Studio local debugger.
+
+If you prefer to always send `TraceTelemetry`, use this snippet:
+
+```csharp
+builder.AddApplicationInsights(
+ options => options.TrackExceptionsAsExceptionTelemetry = false);
+```
### Why do some ILogger logs not have the same properties as others? Application Insights captures and sends `ILogger` logs by using the same `TelemetryConfiguration` information that's used for every other telemetry. But there's an exception. By default, `TelemetryConfiguration` isn't fully set up when you log from *Program.cs* or *Startup.cs*. Logs from these places won't have the default configuration, so they won't be running all `TelemetryInitializer` instances and `TelemetryProcessor` instances.
public class MyController : ApiController
> [!NOTE] > If you use the `Microsoft.ApplicationInsights.AspNetCore` package to enable Application Insights, modify this code to get `TelemetryClient` directly in the constructor. For an example, see [this FAQ](../faq.yml).
-### What Application Insights telemetry type is produced from ILogger logs? Where can I see ILogger logs in Application Insights?
-
-`ApplicationInsightsLoggerProvider` captures `ILogger` logs and creates `TraceTelemetry` from them. If an `Exception` object is passed to the `Log` method on `ILogger`, `ExceptionTelemetry` is created instead of `TraceTelemetry`.
-
-These telemetry items can be found in the same places as any other `TraceTelemetry` or `ExceptionTelemetry` items for Application Insights, including the Azure portal, analytics, or the Visual Studio local debugger.
-
-If you prefer to always send `TraceTelemetry`, use this snippet:
-
-```csharp
-builder.AddApplicationInsights(
- options => options.TrackExceptionsAsExceptionTelemetry = false);
-```
- ### I don't have the SDK installed, and I use the Azure Web Apps extension to enable Application Insights for my ASP.NET Core applications. How do I use the new provider? The Application Insights extension in Azure Web Apps uses the new provider. You can modify the filtering rules in the *appsettings.json* file for your application.
azure-monitor Usage Cohorts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-cohorts.md
Title: Application Insights usage cohorts | Microsoft Docs description: Analyze different sets or users, sessions, events, or operations that have something in common. Previously updated : 07/30/2021 Last updated : 05/24/2023 # Application Insights cohorts
azure-monitor Usage Impact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-impact.md
Title: Application Insights usage impact - Azure Monitor description: Analyze how different properties potentially affect conversion rates for parts of your apps. Previously updated : 07/30/2021 Last updated : 05/24/2023 # Impact analysis with Application Insights
Analyzing performance is only a subset of Impact's capabilities. Impact supports
## Impact analysis workbook
-To use the Impact analysis workbook, in your Application Insights resources go to **Usage** > **Impact** and select **Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**.
+To use the Impact analysis workbook, in your Application Insights resources go to **Usage** > **More** and select **User Impact Analysis Workbook**. Or on the **Workbooks** tab, select **Public Templates**. Then under **Usage**, select **User Impact Analysis**.
:::image type="content" source="./media/usage-impact/workbooks-gallery.png" alt-text="Screenshot that shows the Workbooks Gallery on public templates." lightbox="./media/usage-impact/workbooks-gallery.png":::
azure-monitor Usage Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-segmentation.md
Title: User, session, and event analysis in Application Insights description: Demographic analysis of users of your web app. Previously updated : 07/30/2021 Last updated : 05/24/2023
If you don't yet see data in the **Users**, **Sessions**, or **Events** panes in
Three of the **Usage** panes use the same tool to slice and dice telemetry from your web app from three perspectives. By filtering and splitting the data, you can uncover insights about the relative use of different pages and features. * **Users tool**: How many people used your app and its features? Users are counted by using anonymous IDs stored in browser cookies. A single person using different browsers or machines will be counted as more than one user.
-* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is counted after half an hour of user inactivity, or after 24 hours of continuous use.
+* **Sessions tool**: How many sessions of user activity have included certain pages and features of your app? A session is reset after half an hour of user inactivity, or after 24 hours of continuous use.
* **Events tool**: How often are certain pages and features of your app used? A page view is counted when a browser loads a page from your app, provided you've [instrumented it](./javascript.md).
- A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent).
+ A custom event represents one occurrence of something happening in your app. It's often a user interaction like a button selection or the completion of a task. You insert code in your app to [generate custom events](./api-custom-events-metrics.md#trackevent) or use the [Click Analytics](javascript-feature-extensions.md#feature-extensions-for-the-application-insights-javascript-sdk-click-analytics) extension.
> [!NOTE] > For information on alternatives to using [anonymous IDs](./data-model-complete.md#anonymous-user-id) and ensuring an accurate count, see the documentation for [authenticated IDs](./data-model-complete.md#authenticated-user-id).
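As a concrete illustration of the custom-event bullet above, here's a minimal sketch using the Application Insights JavaScript SDK; the event name and properties are hypothetical:

```javascript
// Minimal sketch: report a custom event from a browser app.
// Assumes the Application Insights JavaScript SDK is already initialized as `appInsights`.
appInsights.trackEvent(
  { name: "checkoutCompleted" },          // event counted by the Events tool
  { cartSize: 3, paymentMethod: "card" }  // optional custom properties
);
```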
+Selecting **View More Insights** displays the following information:
+- Application Performance: Sessions, Events, and a Performance evaluation related to users' perception of responsiveness.
+- Properties: Charts containing up to six user properties such as browser version, country or region, and operating system.
+- Meet Your Users: View timelines of user activity.
+ ## Query for certain users Explore different groups of users by adjusting the query options at the top of the Users tool:
azure-monitor Dashboard Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/dashboard-upgrade.md
- Title: Upgrading your Log Analytics Dashboard visualizations
-description: Learn how to upgrade your Log Analytics Dashboard visualizations with queries that can provide powerful insights.
--- Previously updated : 10/27/2021--
-# Upgrading your Log Analytics Dashboard visualizations
-
-In February 2020, we introduced an improved visualization technology that enhances your ability to visualize query results and reach powerful insights fast.
-
-You can read more about this upgrade in this [Azure Update](https://azure.microsoft.com/updates/azure-monitor-log-analytics-upgraded-results-visualization/).
-
-This new visualization technology is paving the way for new and improved experiences around your query result set.
-
-## Dashboards in Azure
-
-Azure dashboards are a way to visualize the status of your entire Azure surface area. They are designed to provide a single pane of glass to your Azure estate status and allow a variety of shortcuts to common actions.
-
-For more information, see [Azure dashboards](../../azure-portal/azure-portal-dashboards.md)
--
-## Upgrading Log Analytics dashboard parts
-
-The new visualization technology addresses some common issues with the old implementation and introduces some new capabilities to pinned Log Analytics parts:
-- **Same available types** - All visualization types available in Log Analytics are available as pinned parts on dashboards.
-- **Consistent look-and-feel** - The visualization look-and-feel for pinned parts is now almost identical to that in Log Analytics. The differences are due to optimizations that require subtle differences in the data contents of the visual. See the considerations part of this document for more insight into those differences.
-- **Tooltips and labels** - New pinned Log Analytics visualizations support tooltips. Pie and doughnut charts now support labels.
-- **Interactive legends** - Clicking the visualization legend allows adding or removing dimensions from the pinned visual, as in Log Analytics.
-
-## Stage 1 - Opt-in upgrade message
-
-When a Log Analytics pinned part can be upgraded, a new *opt-in* notification appears on Log Analytics pinned parts in dashboards, allowing users to upgrade their visualization. If you want to experience the new visualizations, upgrade the selected visualizations in your dashboard.
-
-
-![Sidebar](media/dashboard-upgrade/update-message-1.png)
-
-![Screenshot that shows how to update the tile visualization.](media/dashboard-upgrade/update-message-2.png)
-
-> [!WARNING]
-> Once the dashboard is published, the upgrade is irreversible. However, changes are discarded if you navigate away from the dashboard without re-publishing.
-
-Once clicked, the visualization will be updated to the new technology. Subtle differences in the visualization may occur to align with their look-and-feel in Log Analytics.
-
-After the visualizations are upgraded, you need to republish your dashboard for the change to take effect.
-
-![Screenshot that shows upgraded visualizations.](media/dashboard-upgrade/update-message-3.png)
-
-## Stage 2 - Migration of all dashboards
-
-After an initial opt-in period is over, the Log Analytics team will upgrade all dashboards in the system. Aligning all Azure dashboards allows the team to introduce more visualizations and experience improvements across the board.
-
-## Considerations
-
-Log Analytics visualizations pinned to a dashboard have some specific behavior designed for an optimal experience. Review the following design considerations when pinning a visualization to a dashboard.
-
-### Query time scope - 30-day limit
-
-As dashboards may contain multiple visualizations from multiple queries, the time scope for a single pinned query is limited to 30 days. A single query may only run on a time span smaller than or equal to 30 days. This limitation ensures a reasonable dashboard load time.
-
-### Query data values - 25 values and other grouping
-
-Dashboards can be visually dense and complex. To reduce cognitive load when viewing a dashboard, we optimize the visualizations by limiting the display to 25 different data types. When there are more than 25, Log Analytics optimizes the data. It individually shows the 25 types with the most data and then groups the remaining values into an "other" value. The following chart shows such a case.
-
-![Screenshot that shows a dashboard with 25 different data types.](media/dashboard-upgrade/values-25-limit.png)
-
-### Query results limit
-
-A query underlying a Log Analytics dashboard can return up to 2000 records.
-
-### Dashboard refresh on load
-
-Dashboards are refreshed upon load. All queries related to dashboard-pinned Log Analytics visualizations are executed and the dashboard is refreshed once it loads. If the dashboard page remains open, the data in the dashboard is refreshed every 60 minutes.
-
-## Next steps
-
-[Create and share dashboards in Log Analytics](../visualize/tutorial-logs-dashboards.md)
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across large datasets. This article describes how to create a search job and how to query its resulting data. > [!NOTE]
-> The search job feature is currently not supported for the following cases:
-> - Workspaces with [customer-managed keys](customer-managed-keys.md).
-> - Workspaces in the China East 2 region.
+> The search job feature is currently not supported for workspaces with [customer-managed keys](customer-managed-keys.md).
## When to use search jobs
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription. Previously updated : 01/30/2023- Last updated : 04/24/2023+ # Move resources to a new resource group or subscription
If your move requires setting up new dependent resources, you'll experience an i
Moving a resource only moves it to a new resource group or subscription. It doesn't change the location of the resource. + ## Changed resource ID When you move a resource, you change its resource ID. The standard format for a resource ID is `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}`. When you move a resource to a new resource group or subscription, you change one or more values in that path.
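For example, moving a web app named `ExampleSite` from a resource group named `OldRG` to one named `NewRG` in the same subscription changes only the resource group segment of the ID (the names here are hypothetical):

```
Before: /subscriptions/{subscription-id}/resourceGroups/OldRG/providers/Microsoft.Web/sites/ExampleSite
After:  /subscriptions/{subscription-id}/resourceGroups/NewRG/providers/Microsoft.Web/sites/ExampleSite
```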
When the move has completed, you're notified of the result.
### Validate
-To test your move scenario without actually moving the resources, use the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) command. Use this command only when you need to predetermine the results. To run this operation, you need the:
-
-* Resource ID of the source resource group
-* Resource ID of the target resource group
-* Resource ID of each resource to move
+To test your move scenario without actually moving the resources, use the [Invoke-AzResourceAction](/powershell/module/az.resources/invoke-azresourceaction) command. Use this command only when you need to predetermine the results.
```azurepowershell
+$sourceName = "sourceRG"
+$destinationName = "destinationRG"
+$resourcesToMove = @("app1", "app2")
+
+$sourceResourceGroup = Get-AzResourceGroup -Name $sourceName
+$destinationResourceGroup = Get-AzResourceGroup -Name $destinationName
+
+$resources = Get-AzResource -ResourceGroupName $sourceName | Where-Object { $_.Name -in $resourcesToMove }
Invoke-AzResourceAction -Action validateMoveResources `
--ResourceId "/subscriptions/{subscription-id}/resourceGroups/{source-rg}" `
--Parameters @{ resources= @("/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}", "/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}", "/subscriptions/{subscription-id}/resourceGroups/{source-rg}/providers/{resource-provider}/{resource-type}/{resource-name}");targetResourceGroup = '/subscriptions/{subscription-id}/resourceGroups/{destination-rg}' }
+-ResourceId $sourceResourceGroup.ResourceId `
+-Parameters @{ resources= $resources.ResourceId;targetResourceGroup = $destinationResourceGroup.ResourceId }
``` If validation passes, you see no output.
If validation fails, you see an error message describing why the resources can't
To move existing resources to another resource group or subscription, use the [Move-AzResource](/powershell/module/az.resources/move-azresource) command. The following example shows how to move several resources to a new resource group. ```azurepowershell-interactive
-$webapp = Get-AzResource -ResourceGroupName OldRG -ResourceName ExampleSite
-$plan = Get-AzResource -ResourceGroupName OldRG -ResourceName ExamplePlan
-Move-AzResource -DestinationResourceGroupName NewRG -ResourceId $webapp.ResourceId, $plan.ResourceId
+$sourceName = "sourceRG"
+$destinationName = "destinationRG"
+$resourcesToMove = @("app1", "app2")
+
+$resources = Get-AzResource -ResourceGroupName $sourceName | Where-Object { $_.Name -in $resourcesToMove }
+
+Move-AzResource -DestinationResourceGroupName $destinationName -ResourceId $resources.ResourceId
``` To move to a new subscription, include a value for the `DestinationSubscriptionId` parameter.
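For example, here's a minimal sketch that reuses the variables from the preceding example; the destination subscription ID is a placeholder:

```azurepowershell
Move-AzResource -DestinationSubscriptionId "00000000-0000-0000-0000-000000000000" `
  -DestinationResourceGroupName $destinationName `
  -ResourceId $resources.ResourceId
```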
az resource move --destination-group newgroup --ids $webapp $plan
To move to a new subscription, provide the `--destination-subscription-id` parameter.
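For example, here's a minimal sketch that reuses the `$webapp` and `$plan` resource IDs from the preceding CLI example; the subscription ID is a placeholder:

```azurecli
az resource move --destination-group newgroup \
  --destination-subscription-id 00000000-0000-0000-0000-000000000000 \
  --ids $webapp $plan
```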
+## Use Python
+
+### Validate
+
+To test your move scenario without actually moving the resources, use the [ResourceManagementClient.resources.begin_validate_move_resources](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcesoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcesoperations-begin-validate-move-resources) method. Use this method only when you need to predetermine the results.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+source_name = "sourceRG"
+destination_name = "destinationRG"
+resources_to_move = ["app1", "app2"]
+
+destination_resource_group = resource_client.resource_groups.get(destination_name)
+
+resources = [
+ resource for resource in resource_client.resources.list_by_resource_group(source_name)
+ if resource.name in resources_to_move
+]
+
+resource_ids = [resource.id for resource in resources]
+
+validate_move_resources_result = resource_client.resources.begin_validate_move_resources(
+ source_name,
+ {
+ "resources": resource_ids,
+ "target_resource_group": destination_resource_group.id
+ }
+).result()
+
+print("Validate move resources result: {}".format(validate_move_resources_result))
+```
+
+If validation passes, you see no output.
+
+If validation fails, you see an error message describing why the resources can't be moved.
+
+### Move
+
+To move existing resources to another resource group or subscription, use the [ResourceManagementClient.resources.begin_move_resources](/python/api/azure-mgmt-resource/azure.mgmt.resource.resources.v2022_09_01.operations.resourcesoperations#azure-mgmt-resource-resources-v2022-09-01-operations-resourcesoperations-begin-move-resources) method. The following example shows how to move several resources to a new resource group.
+
+```python
+import os
+from azure.identity import AzureCliCredential
+from azure.mgmt.resource import ResourceManagementClient
+
+credential = AzureCliCredential()
+subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
+
+resource_client = ResourceManagementClient(credential, subscription_id)
+
+source_name = "sourceRG"
+destination_name = "destinationRG"
+resources_to_move = ["app1", "app2"]
+
+destination_resource_group = resource_client.resource_groups.get(destination_name)
+
+resources = [
+ resource for resource in resource_client.resources.list_by_resource_group(source_name)
+ if resource.name in resources_to_move
+]
+
+resource_ids = [resource.id for resource in resources]
+
+resource_client.resources.begin_move_resources(
+ source_name,
+ {
+ "resources": resource_ids,
+ "target_resource_group": destination_resource_group.id
+ }
+)
+```
+ ## Use REST API ### Validate
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution
description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. Previously updated : 08/23/2022 Last updated : 04/20/2023 # Set up Azure Backup Server for Azure VMware Solution
This article helps you prepare your Azure VMware Solution environment to back up
## Supported VMware features -- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter Server or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign in credentials used to authenticate the VMware vCenter Server with Azure Backup Server.
+- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter Server or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware vCenter Server with Azure Backup Server.
- **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup. - **Detect and protect VMs managed by vCenter Server:** Azure Backup Server detects and protects VMs deployed on a vCenter Server or ESXi hosts. Azure Backup Server also detects VMs managed by vCenter Server so that you can protect large deployments.-- **Folder-level auto protection:** vCenter Server lets you organize your VMs into Virtual Machine folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When protecting folders, Azure Backup Server protects the VMs in that folder and protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
+- **Folder-level auto protection:** vCenter Server lets you organize your VMs into Virtual Machine folders. Azure Backup Server detects these folders. You can use it to protect VMs at the folder level, including all subfolders. When you protect folders, Azure Backup Server protects the VMs in that folder and also protects VMs added later. Azure Backup Server detects new VMs daily, protecting them automatically. As you organize your VMs in recursive folders, Azure Backup Server automatically detects and protects the new VMs deployed in the recursive folders.
- **Azure Backup Server continues to protect vMotioned VMs within the cluster:** As VMs are vMotioned for dynamic resource load balancing within the cluster, Azure Backup Server automatically detects and continues VM protection. - **Recover necessary files faster:** Azure Backup Server can recover files or folders from a Windows VM without recovering the entire VM.-- **Application Consistent Backups:** If VMware Tools is not installed, a crash consistent backup will be executed. When VMware Tools is installed with Microsoft Windows virtual machines, all applications that support VSS freeze and thaw operations will support application consistent backups. When VMware Tools is installed with Linux virtual machines, application consistent snapshots are supported by calling the pre and post scripts.
+- **Application Consistent Backups:** If VMware Tools isn't installed, a crash-consistent backup is executed. When VMware Tools is installed with Microsoft Windows virtual machines, all applications that support VSS freeze and thaw operations support application-consistent backups. When VMware Tools is installed with Linux virtual machines, application-consistent snapshots are supported by calling the pre and post scripts.
## Limitations
+- If you're using *Azure Backup Server V3*, then you must install [Update Rollup 2](https://support.microsoft.com/topic/update-rollup-2-for-microsoft-azure-backup-server-v3-350de164-0ae4-459a-8acf-7777dbb7fd73). New installations from the Azure portal now use *Azure Backup Server V4* that supports vSphere, version *6.5* to *8.0*.
+- You can't back up user snapshots before the first Azure Backup Server backup. After Azure Backup Server finishes the first backup, then you can back up user snapshots.
- Update Rollup 2 for Azure Backup Server v3 must be installed.
-- You can't backup user snapshots before the first Azure Backup Server backup. After Azure Backup Server finishes the first backup, then you can back up user snapshots.
- Azure Backup Server can't protect VMware vSphere VMs with pass-through disks and physical raw device mappings (pRDMs).
- Azure Backup Server can't detect or protect VMware vSphere vApps.
To set up Azure Backup Server for Azure VMware Solution, you must finish the fol
Azure Backup Server is deployed as an Azure infrastructure as a service (IaaS) VM to protect Azure VMware Solution VMs. ## Prerequisites for the Azure Backup Server environment
Follow the instructions in the [Create your first Windows VM in the Azure portal
> Azure Backup Server is designed to run on a dedicated, single-purpose server. You can't install Azure Backup Server on a computer that: > * Runs as a domain controller. > * Has the Application Server role installed.
-> * Is a System Center Operations Manager management server.
+> * Is a System Center Operations Manager management server.
> * Runs Exchange Server.
-> * Is a node of a cluster.
+> * Is a node of a cluster.
### Disks and storage
If you want to scale your deployment, you have the following options:
### .NET Framework
-The VM must have .NET Framework 3.5 SP1 or higher installed.
+The VM must have .NET Framework 4.5 or higher installed.
### Join a domain
Follow the steps in this section to download, extract, and install the software
> [!NOTE] > You must download all the files to the same folder. Because the download size of the files together is greater than 3 GB, it might take up to 60 minutes for the download to complete.
- :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/downloadcenter.png" alt-text="Screenshot showing Microsoft Azure Backup files to download.":::
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/downloadcenter.png" alt-text="Screenshot showing the Microsoft Azure Backup files to download.":::
### Extract the software package
If you downloaded the software package to a different server, copy the files to
1. Select **Extract** to begin the extraction process.
- :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/extract/03.png" alt-text="Screenshot showing Microsoft Azure Backup files ready to extract.":::
+ :::image type="content" source="../backup/media/backup-azure-microsoft-azure-backup/extract/03.png" alt-text="Screenshot showing the Microsoft Azure Backup files ready to extract.":::
1. Once extracted, select the option to **Execute setup.exe** and then select **Finish**. > [!TIP]
-> You can also locate the setup.exe file from the folder where you extracted the software package.
+> - You can also locate the setup.exe file from the folder where you extracted the software package.
+> - To use your own SQL Server instance, ensure that you're using a supported SQL Server version: SQL Server 2022 or 2019.
### Install the software package
-1. On the setup window under **Install**, select **Microsoft Azure Backup** to open the setup wizard.
+1. On the setup window under **Install**, select **Microsoft Azure Backup** to open the setup wizard and accept any licensing terms from the list that appears.
1. On the **Welcome** screen, select **Next** to continue to the **Prerequisite Checks** page.
If you downloaded the software package to a different server, copy the files to
When you use your own SQL Server instance, make sure you add builtin\Administrators to the sysadmin role on the master database.
- **Configure reporting services with SQL Server 2017**
+ **Configure reporting services with SQL Server 2019 or 2022**
- If you use your instance of SQL Server 2017, you must configure SQL Server 2017 Reporting Services (SSRS) manually. After configuring SSRS, make sure to set the **IsInitialized** property of SSRS to **True**. When set to **True**, Azure Backup Server assumes that SSRS is already configured and skips the SSRS configuration.
+ If you use your instance of SQL Server, you must configure SQL Server Reporting Services (SSRS) manually. After configuring SSRS, make sure to set the **IsInitialized** property of SSRS to **True**. When set to **True**, Azure Backup Server assumes that SSRS is already configured and skips the SSRS configuration.
To check the SSRS configuration status, run:
If you downloaded the software package to a different server, copy the files to
### Install Update Rollup 2 for Microsoft Azure Backup Server (MABS) version 3 Installing the Update Rollup 2 for Microsoft Azure Backup Server (MABS) version 3 is mandatory for protecting the workloads. You can find the bug fixes and installation instructions in the [knowledge base article](https://support.microsoft.com/help/5004579/).+ ## Add storage to Azure Backup Server Azure Backup Server v3 supports Modern Backup Storage that offers:
Azure Backup Server v3 supports Modern Backup Storage that offers:
Add the data disks with the Azure Backup Server VM's required storage capacity if not already added.
-Azure Backup Server v3 only accepts storage volumes. When you add a volume, Azure Backup Server formats the volume to Resilient File System (ReFS), which Modern Backup Storage requires.
+Azure Backup Server only accepts storage volumes. When you add a volume, Azure Backup Server formats the volume to Resilient File System (ReFS), which Modern Backup Storage requires.
### Add volumes to Azure Backup Server disk storage
Azure Backup Server v3 only accepts storage volumes. When you add a volume, Azur
1. Select **OK** to format these volumes to ReFS so that Azure Backup Server can use Modern Backup Storage benefits.
+## Upgrade to Azure Backup Server V4 from Azure Backup Server V3
+
+If you're already using Azure Backup Server V3 to back up Azure VMware Solution VMs, you can [upgrade to Azure Backup Server V4](../backup/backup-azure-microsoft-azure-backup.md#upgrade-mabs) to get access to the latest features and bug fixes.
+++ ## Next steps Now that you've covered how to set up Azure Backup Server for Azure VMware Solution, you can use the following resources to learn more.
azure-web-pubsub Howto Generate Client Access Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md
+
+ Title: How to generate client access URL for Azure Web PubSub clients
+description: How to generate client access URL for Azure Web PubSub clients.
++ Last updated : 04/25/2023++++
+# How to generate a client access URL for clients
+
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL.
+
+- For a quick start, copy one from the Azure portal
+- For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md)
+- If you're using Azure AD, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
+
+## Copy from the Azure portal
+In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored.
++
+## Generate from service SDK
+The same Client Access URL can be generated by using the Web PubSub server SDK.
+
+# [JavaScript](#tab/javascript)
+
+1. Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `serviceClient`
+
+2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
+ * Configure user ID
+ ```js
+ let token = await serviceClient.getClientAccessToken({ userId: "user1" });
+ ```
+ * Configure the lifetime of the token
+ ```js
+ let token = await serviceClient.getClientAccessToken({ expirationTimeInMinutes: 5 });
+ ```
+ * Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.joinLeaveGroup.group1"] });
+ ```
+ * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.sendToGroup.group1"] });
+ ```
+ * Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```js
+ let token = await serviceClient.getClientAccessToken({ groups: ["group1"] });
+ ```
+
+# [C#](#tab/csharp)
+
+1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service`
+
+2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`:
+ * Configure user ID
+ ```csharp
+ var url = service.GetClientAccessUri(userId: "user1");
+ ```
+ * Configure the lifetime of the token
+ ```csharp
+ var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5));
+ ```
+ * Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" });
+ ```
+ * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" });
+ ```
+ * Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```csharp
+ var url = service.GetClientAccessUri(groups: new string[] { "group1" });
+ ```
+
+# [Python](#tab/python)
+
+1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service`
+
+2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`:
+ * Configure user ID
+ ```python
+ token = service.get_client_access_token(user_id="user1")
+ ```
+ * Configure the lifetime of the token
+ ```python
+ token = service.get_client_access_token(minutes_to_expire=5)
+ ```
+ * Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"])
+ ```
+ * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"])
+ ```
+ * Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```python
+ token = service.get_client_access_token(groups=["group1"])
+ ```
+
+# [Java](#tab/java)
+
+1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service`
+
+2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:
+ * Configure user ID
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+    option.setUserId("user1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ * Configure the lifetime of the token
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.setExpiresAfter(Duration.ofDays(1));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ * Configure a role that can join group `group1` directly when it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.joinLeaveGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+ option.addRole("webpubsub.sendToGroup.group1");
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
+ * Configure a group `group1` that the client joins once it connects using this Client Access URL
+ ```java
+ GetClientAccessTokenOptions option = new GetClientAccessTokenOptions();
+    option.setGroups(Arrays.asList("group1"));
+ WebPubSubClientAccessToken token = service.getClientAccessToken(option);
+ ```
++
+In real-world code, we usually have a server side to host the logic generating the Client Access URL. When a client request comes in, the server side can use the general authentication/authorization workflow to validate the client request. Only valid client requests can get the Client Access URL back.
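As a rough sketch of that pattern with the JavaScript server SDK, a negotiate endpoint might look like the following; the Express framework, the environment variable name, the hub name `chat`, and the auth check are assumptions for illustration only:

```javascript
const express = require("express");
const { WebPubSubService client } = {}; // see corrected import below
const { WebPubSubServiceClient } = require("@azure/web-pubsub");

const app = express();
// Hypothetical connection string variable and hub name.
const serviceClient = new WebPubSubServiceClient(
  process.env.WEB_PUBSUB_CONNECTION_STRING,
  "chat"
);

app.get("/negotiate", async (req, res) => {
  // Validate the caller here (session, cookie, API key, and so on)
  // before handing out a Client Access URL.
  const userId = req.query.id || "anonymous";
  const token = await serviceClient.getClientAccessToken({ userId });
  res.json({ url: token.url });
});

app.listen(8080);
```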
+
+## Invoke the Generate Client Token REST API
+
+You can enable Azure AD in your service and use the Azure AD token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use.
+
+1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Azure AD.
+2. Follow [Get Azure AD token](./howto-authorize-from-application.md#use-postman-to-get-the-azure-ad-token) to get the Azure AD token with Postman.
+3. Use the Azure AD token to invoke `:generateToken` with Postman:
+ 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01`
+ 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step
+ 3. Select **Send** and you see the Client Access Token in the response:
+ ```json
+ {
+ "token": "ABCDEFG.ABC.ABC"
+ }
+ ```
+4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>`
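Whichever option you use to obtain it, the client passes the URL when it connects. Here's a minimal sketch with the `@azure/web-pubsub-client` package; the URL value is a placeholder for a real Client Access URL:

```javascript
const { WebPubSubClient } = require("@azure/web-pubsub-client");

// "<client-access-url>" stands in for a URL obtained by any option above.
const client = new WebPubSubClient("<client-access-url>");
client.start().then(() => console.log("Connected to Web PubSub."));
```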
+
azure-web-pubsub Quickstart Use Client Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-client-sdk.md
Note that the SDK is available as a [NuGet packet](https://www.nuget.org/package
### Create and connect to the Web PubSub service
-This code example creates a Web PubSub client that connects to the Web PubSub service instance. A client uses a Client Access URL to connect and authenticate with the service. It's best practice to not hard code the Client Access URL in your code. In the production world, we usually set up an app server to return this URL on demand.
+This code example creates a Web PubSub client that connects to the Web PubSub service instance. A client uses a Client Access URL to connect and authenticate with the service. It's best practice to not hard code the Client Access URL in your code. In the production world, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail.
For this example, you can use the Client Access URL you generated in the portal.
azure-web-pubsub Quickstarts Event Notifications From Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-event-notifications-from-clients.md
npm install @azure/web-pubsub-client
#### 2. Connect to Web PubSub A client, be it a browser, a mobile app, or an IoT device, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`.
-A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown in the following diagram.
+A client can have a few ways to obtain the Client Access URL. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram. It's best practice not to hard-code the Client Access URL in your code. In production, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail.
![The diagram shows how to get **Client Access Url**.](./media/quickstarts-event-notifications-from-clients/generate-client-url-no-group.png)
azure-web-pubsub Quickstarts Pubsub Among Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-pubsub-among-clients.md
dotnet add package Azure.Messaging.WebPubSub.Client --prerelease
## Connect to Web PubSub
-A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown in the following diagram.
+A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows the pattern `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram. It's best practice not to hard-code the Client Access URL in your code. In production, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail.
![The diagram shows how to get client access url.](./media/howto-websocket-connect/generate-client-url.png)
azure-web-pubsub Quickstarts Push Messages From Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstarts-push-messages-from-server.md
npm install @azure/web-pubsub-client
#### Connect to your Web PubSub resource and register a listener for the `server-message` event A client uses a ***Client Access URL*** to connect and authenticate with your resource.
-This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quick start, you can copy and paste one from Azure portal shown in the following diagram.
+This URL follows the pattern `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. A client can have a few ways to obtain the Client Access URL. For this quickstart, you can copy and paste one from the Azure portal, as shown in the following diagram. It's best practice not to hard-code the Client Access URL in your code. In production, we usually set up an app server to return this URL on demand. [Generate Client Access URL](./howto-generate-client-access-url.md) describes the practice in detail.
![The diagram shows how to get client access url.](./media/quickstarts-push-messages-from-server/push-messages-from-server.png)
backup Azure Kubernetes Service Backup Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-troubleshoot.md
The extension pods aren't exempt, and require the Azure Active Directory (Azure
```Error {"Message":"Error in the getting the Configurations: error {Post \https://centralus.dp.kubernetesconfiguration.azure.com/subscriptions/ subscriptionid /resourceGroups/ aksclusterresourcegroup /provider/managedclusters/clusters/ aksclustername /configurations/getPendingConfigs?api-version=2021-11-01\: dial tcp: lookup centralus.dp.kubernetesconfiguration.azure.com on 10.63.136.10:53: no such host}","LogType":"ConfigAgentTrace","LogLevel":"Error","Environment":"prod","Role":"ClusterConfigAgent","Location":"centralus","ArmId":"/subscriptions/ subscriptionid /resourceGroups/ aksclusterresourcegroup /providers/Microsoft.ContainerService/managedclusters/ aksclustername ","CorrelationId":"","AgentName":"ConfigAgent","AgentVersion":"1.8.14","AgentTimestamp":"2023/01/19 20:24:16"}` ```
-**Cause**: Specific FQDN/application rules are required to use cluster extensions in the AKS clusters. [Learn more](../aks/limit-egress-traffic.md#cluster-extensions).
+**Cause**: Specific FQDN/application rules are required to use cluster extensions in the AKS clusters. [Learn more](../aks/outbound-rules-control-egress.md#cluster-extensions).
This error appears when these FQDN rules are absent, because configuration information from the Cluster Extensions service isn't available.
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
# Prerequisites for Azure Kubernetes Service backup using Azure Backup (preview)
-This article describes the prerequisites for Azure Kubernetes Sercuce (AKS) backup.
+This article describes the prerequisites for Azure Kubernetes Service (AKS) backup.
-Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations. Based on the least privileged security model, a Backup vault must have *Trusted Access* enabled to communicate with the AKS cluster.
+Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. The Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations. Based on the least privileged security model, a Backup vault must have *Trusted Access* enabled to communicate with the AKS cluster.
## Backup Extension
To enable backup for an AKS cluster, see the following prerequisites:
- The Backup Extension during installation fetches Container Images stored in Microsoft Container Registry (MCR). If you enable a firewall on the AKS cluster, the extension installation process might fail due to access issues on the Registry. Learn [how to allow MCR access from the firewall](../container-registry/container-registry-firewall-access-rules.md#configure-client-firewall-rules-for-mcr). -- Install Backup Extension on the AKS clusters following the [required FQDN/application rules](../aks/limit-egress-traffic.md#required-fqdn--application-rules-6).
+- Install Backup Extension on the AKS clusters following the [required FQDN/application rules](../aks/outbound-rules-control-egress.md).
- If you have any previous installation of *Velero* in the AKS cluster, delete it before installing the Backup Extension.
backup Back Up Hyper V Virtual Machines Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/back-up-hyper-v-virtual-machines-mabs.md
Title: Back up Hyper-V virtual machines with MABS description: This article contains the procedures for backing up and recovery of virtual machines using Microsoft Azure Backup Server (MABS). Previously updated : 01/16/2023 Last updated : 03/01/2023
To open the Recovery Wizard and recover a virtual machine, follow these steps:
## Restore an individual file from a Hyper-V VM
-You can restore individual files from a protected Hyper-V VM recovery point. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VMDK and find the file(s) you want, before starting the recovery process.
+You can restore individual files from a protected Hyper-V VM recovery point, from both disk and online recovery points. This feature is only available for Windows Server VMs. Restoring individual files is similar to restoring the entire VM, except you browse into the VHD and find the file(s) you want before starting the recovery process.
To recover an individual file or select files from a Windows Server VM, follow these steps: >[!Note]
->Restoring an individual file from a Hyper-V VM is available only for Windows VM and Disk Recovery Points.
+> With MABS v4 and later, you can restore an individual file from a Hyper-V VM from both disk and online recovery points. The VM should be a Windows Server VM.
+>
+> Additionally, for item-level recovery from an online recovery point, ensure that the Hyper-V role is installed on the MABS Server, automatic mounting of volumes is enabled, and the VM VHD doesn't contain a dynamic disk. The item-level recovery for online recovery points works by mounting the VM recovery point using *iSCSI* for browsing, and only one VM can be mounted at a given time.
1. On the MABS Administrator Console, select **Recovery** view.
To recover an individual file or select files from a Windows Server VM, follow t
1. On the **Recovery Points for** pane, use the calendar to select the date that contains the desired recovery point(s). Depending on how the backup policy has been configured, dates can have more than one recovery point. Once you've selected the day when the recovery point was taken, make sure you've chosen the correct **Recovery time**. If the selected date has multiple recovery points, choose your recovery point by selecting it in the **Recovery time** drop-down menu. Once you choose the recovery point, the list of recoverable items appears in the **Path** pane.
-1. To find the files you want to recover, in the **Path** pane, double-click the item in the Recoverable item column to open it. Select the file, files, or folders you want to recover. To select multiple items, press the **Ctrl** key while selecting each item. Use the **Path** pane to search the list of files or folders appearing in the **Recoverable Item** column.**Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the Up button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
+1. To find the files you want to recover, in the **Path** pane, double-click the item in the Recoverable item column to open it.
+
+ If you use an online recovery point, wait until the recovery point is mounted. Once the mount is complete, select the *VM*, *VHD disk*, and the *volume* you want to restore until the files and folders are listed.
+
   Select the file, files, or folders you want to recover. To select multiple items, press the **Ctrl** key while selecting each item. Use the **Path** pane to search the list of files or folders appearing in the **Recoverable Item** column. **Search list below** doesn't search into subfolders. To search through subfolders, double-click the folder. Use the Up button to move from a child folder into the parent folder. You can select multiple items (files and folders), but they must be in the same parent folder. You can't recover items from multiple folders in the same recovery job.
![Screenshot shows how to review Recovery Selection in Hyper-v VM.](./media/back-up-hyper-v-virtual-machines-mabs/hyper-v-vm-rp-disk-ilr-2.png)
To recover an individual file or select files from a Windows Server VM, follow t
1. On the **Summary** screen, review your settings and select **Recover** to start the recovery process. The **Recovery status** screen shows the progression of the recovery operation.
+>[!Tip]
+>You can also perform item-level restore from online recovery points for Hyper-V VMs running Windows from *Add external DPM Server*, to recover VM files and folders quickly.
+
+>[!Note]
+>By default, *eight parallel recoveries* are supported. You can increase the number of parallel restore jobs by adding the following registry key:
+>**Key Path**: `HKLM\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelRecoveryJobs`
+>- **32 Bit DWORD**: HyperV
+>- **Data**: `<number>`
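+
+For example, assuming you want 12 parallel recovery jobs, a single `reg add` command is one way to create the value described in the note above:
+
+```console
+reg add "HKLM\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelRecoveryJobs" /v HyperV /t REG_DWORD /d 12
+```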
+ ## Next steps [Recover data from Azure Backup Server](./backup-azure-alternate-dpm-server.md)
backup Backup Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-server-vmware.md
Title: Back up VMware VMs with Azure Backup Server description: In this article, learn how to use Azure Backup Server to back up VMware VMs running on a VMware vCenter/ESXi server. Previously updated : 01/18/2023 Last updated : 03/03/2023
This article describes how to back up VMware VMs running on VMware ESXi hosts/vCenter Server to Azure using Azure Backup Server (MABS).
->[!Note]
->With MABS v3 Update Rollup 2 release, you can now back up VMware 7.0 VMs as well.
- ## VMware VM protection workflow To protect VMware VM using Azure Backup you need to:
MABS provides the following features when backing up VMware virtual machines:
| MABS versions | Supported VMware VM versions for backup | | | |
+| MABS v4 | VMware server 8.0, 7.0, 6.7, or 6.5 (Licensed version) |
| MABS v3 UR2 | VMware server 7.0, 6.7, 6.5, or 6.0 (Licensed Version) | | MABS v3 UR1 | VMware server 6.7, 6.5, 6.0, or 5.5 (Licensed Version) |
Before you start backing up a VMware virtual machine, review the following list
- MABS can't protect VMware VMs with pass-through disks and physical raw device mappings (pRDM). - MABS can't detect or protect VMware vApps. - MABS can't protect VMware VMs with existing snapshots.
+- MABS v4 doesn't support the *DataSets* feature for VMware 8.0.
## Before you start
You can modify the number of jobs by using the registry key as shown below (not
> [!NOTE] > You can modify the number of jobs to a higher value. If you set the jobs number to 1, replication jobs run serially. To increase the number to a higher value, you must consider the VMware performance. Consider the number of resources in use and additional usage required on VMware vSphere Server, and determine the number of delta replication jobs to run in parallel. Also, this change will affect only the newly created protection groups. For existing protection groups, you must temporarily add another VM to the protection group. This should update the protection group configuration accordingly. You can remove this VM from the protection group after the procedure is completed.
-## VMware vSphere 6.7 and 7.0
+## VMware vSphere 6.7, 7.0, and 8.0
-To back up vSphere 6.7 and 7.0, follow these steps:
+To back up vSphere 6.7, 7.0, and 8.0, follow these steps:
- Enable TLS 1.2 on the MABS Server
Windows Registry Editor Version 5.00
With MABS V3 UR1 (and later), you can exclude a specific disk from VMware VM backup. The configuration script **ExcludeDisk.ps1** is located in the `C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin` folder.
-> [!NOTE]
-> This feature is applicable for MABS V3 UR1 (and later).
- To configure the disk exclusion, follow these steps: ### Identify the VMware VM and disk details to be excluded
backup Backup Azure Microsoft Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-microsoft-azure-backup.md
Title: Use Azure Backup Server to back up workloads description: In this article, learn how to prepare your environment to protect and back up workloads using Microsoft Azure Backup Server (MABS).- Previously updated : 08/26/2022+ Last updated : 03/01/2023
> >
-> Applies To: MABS v3. (MABS v2 is no longer supported. If you're using a version earlier than MABS v3, please upgrade to the latest version.)
+> Applies To: MABS v4.
-This article explains how to prepare your environment to back up workloads using Microsoft Azure Backup Server (MABS). With Azure Backup Server, you can protect application workloads such as Hyper-V VMs, Microsoft SQL Server, SharePoint Server, Microsoft Exchange, and Windows clients from a single console.
+This article explains how to prepare your environment to back up workloads using Microsoft Azure Backup Server (MABS). With Azure Backup Server, you can protect application workloads, such as Hyper-V VMs, VMware VMs, Azure Stack HCI VMs, Microsoft SQL Server, SharePoint Server, Microsoft Exchange, and Windows clients from a single console.
> [!NOTE]
-> Azure Backup Server can now protect VMware VMs and provides improved security capabilities. Install the product as explained in the sections below and the latest Azure Backup Agent. To learn more about backing up VMware servers with Azure Backup Server, see the article, [Use Azure Backup Server to back up a VMware server](backup-azure-backup-server-vmware.md). To learn about security capabilities, refer to [Azure Backup security features documentation](backup-azure-security-feature.md).
+> To learn more about backing up VMware servers with Azure Backup Server, see the article, [Use Azure Backup Server to back up a VMware server](backup-azure-backup-server-vmware.md). To learn about security capabilities, refer to [Azure Backup security features documentation](backup-azure-security-feature.md).
> >
The first step towards getting the Azure Backup Server up and running is to set
### Using a server in Azure
-When choosing a server for running Azure Backup Server, it's recommended you start with a gallery image of Windows Server 2016 Datacenter or Windows Server 2019 Datacenter. The article, [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md?toc=/azure/virtual-machines/windows/toc.json), provides a tutorial for getting started with the recommended virtual machine in Azure, even if you've never used Azure before. The recommended minimum requirements for the server virtual machine (VM) should be: Standard_A4_v2 with four cores and 8-GB RAM.
+When choosing a server for running Azure Backup Server, it's recommended you start with a gallery image of Windows Server 2022 Datacenter or Windows Server 2019 Datacenter. The article, [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md?toc=/azure/virtual-machines/windows/toc.json), provides a tutorial for getting started with the recommended virtual machine in Azure, even if you've never used Azure before. The recommended minimum requirements for the server virtual machine (VM) should be: Standard_A4_v2 with four cores and 8-GB RAM.
Protecting workloads with Azure Backup Server has many nuances. The [protection matrix for MABS](./backup-mabs-protection-matrix.md) helps explain these nuances. Before deploying the machine, read this article completely.
If you don't want to run the base server in Azure, you can run the server on a H
| Operating System | Platform | SKU | |: | |: |
-| Windows Server 2019 |64 bit |Standard, Datacenter, Essentials |
-| Windows Server 2016 and latest SPs |64 bit |Standard, Datacenter, Essentials |
+| Windows Server 2022 |64 bit |Standard, Datacenter, Essentials |
+| Windows Server 2019 |64 bit |Standard, Datacenter, Essentials |
You can deduplicate the DPM storage using Windows Server Deduplication. Learn more about how [DPM and deduplication](/system-center/dpm/deduplicate-dpm-storage) work together when deployed in Hyper-V VMs.
Once the extraction process complete, check the box to launch the freshly extrac
>[!NOTE] >
- >If you wish to use your own SQL server, the supported SQL Server versions are SQL Server 2014 SP1 or higher, 2016 and 2017. All SQL Server versions should be Standard or Enterprise 64-bit.
+ >If you wish to use your own SQL server, the supported SQL Server versions are SQL Server 2022 and 2019. All SQL Server versions should be Standard or Enterprise 64-bit.
>Azure Backup Server won't work with a remote SQL Server instance. The instance being used by Azure Backup Server needs to be local. If you're using an existing SQL server for MABS, the MABS setup only supports the use of *named instances* of SQL server. ![Azure Backup Server - SQL check](./media/backup-azure-microsoft-azure-backup/sql/01.png)
Once the extraction process complete, check the box to launch the freshly extrac
8. The installation happens in phases. In the first phase, the Microsoft Azure Recovery Services Agent is installed on the server. The wizard also checks for Internet connectivity. If Internet connectivity is available, you can continue with the installation. If not, you need to provide proxy details to connect to the Internet. >[!Important]
- >If you run into errors in vault registration, ensure that you're using the latest version of the MARS agent, instead of the version packaged with MABS server. You can download the latest version [from here](https://aka.ms/azurebackup_agent) and replace the *MARSAgentInstaller.exe* file in *System Center Microsoft Azure Backup Server v3\MARSAgent* folder before installation and registration on new servers.
+ >If you run into errors in vault registration, ensure that you're using the latest version of the MARS agent, instead of the version packaged with the MABS server. You can download the latest version [from here](https://aka.ms/azurebackup_agent) and replace the *MARSAgentInstaller.exe* file in the *MARSAgent* folder in the extracted path before installation and registration on new servers.
The next step is to configure the Microsoft Azure Recovery Services Agent. As a part of the configuration, you'll have to provide your vault credentials to register the machine to the Recovery Services vault. You'll also provide a passphrase to encrypt/decrypt the data sent between Azure and your premises. You can automatically generate a passphrase or provide your own minimum 16-character passphrase. Continue with the wizard until the agent has been configured.
Here are the steps if you need to move MABS to a new server, while retaining the
1. In the display pane, select the client computers for which you want to update the protection agent. 2. Shut down the original Azure Backup server or take it offline. 3. Reset the machine account in Active Directory.
-4. Install Server 2016 on a new machine and give it the same machine name as the original Azure Backup server.
+4. Install Windows Server on a new machine and give it the same machine name as the original Azure Backup server.
5. Join the domain.
-6. Install Azure Backup Server V3 or later (move MABS Storage pool disks from old server and import).
+6. Install Azure Backup Server V4 or later (move MABS Storage pool disks from old server and import).
7. Restore the DPMDB taken in step 1. 8. Attach the storage from the original backup server to the new server. 9. From SQL, restore the DPMDB.
If your machine has limited internet access, ensure that firewall settings on th
* `*.microsoftonline.com` * `*.windows.net` * `www.msftconnecttest.com`
+ * `*.blob.core.windows.net`
+ * `*.queue.core.windows.net`
+ * `*.blob.storage.azure.net`
+ * IP addresses * 20.190.128.0/18 * 40.126.0.0/18
It's possible to take an Azure subscription from an *Expired* or *Deprovisioned*
Use the following procedures to upgrade MABS.
-### Upgrade from MABS V2 to V3
+### Upgrade from MABS V3 to V4
> [!NOTE] >
-> MABS V2 isn't a prerequisite for installing MABS V3. However, you can upgrade to MABS V3 only from MABS V2.
+> MABS V3 isn't a prerequisite for installing MABS V4. However, you can upgrade to MABS V4 only from MABS V3 (RTM, Update Rollup 1, and Update Rollup 2).
Use the following steps to upgrade MABS:
-1. To upgrade from MABS V2 to MABS V3, upgrade your OS to Windows Server 2016 or Windows Server 2019 if needed.
+1. To upgrade from MABS V3 to MABS V4, upgrade your OS to Windows Server 2022 or Windows Server 2019 if needed.
-2. Upgrade your server. The steps are similar to [installation](#install-and-upgrade-azure-backup-server). However, for SQL settings, you'll get an option to upgrade your SQL instance to SQL 2017, or to use your own instance of SQL server 2017.
+2. Upgrade your server. The steps are similar to [installation](#install-and-upgrade-azure-backup-server). However, for SQL settings, you'll get an option to upgrade your SQL instance to SQL Server 2022, or to use your own instance of SQL Server.
> [!NOTE] > > Don't exit while your SQL instance is being upgraded. Exiting will uninstall the SQL reporting instance and so an attempt to re-upgrade MABS will fail.
- > [!IMPORTANT]
- >
- > As part of SQL 2017 upgrade, we backup the SQL encryption keys and uninstall the reporting services. After SQL server upgrade, reporting service(14.0.6827.4788) is installed & encryption keys are restored.
- >
- > When configuring SQL 2017 manually, refer to *SSRS configuration with SQL 2017* section under Install instructions.
- 3. Update the protection agents on the protected servers. 4. Backups should continue without the need to restart your production servers. 5. You can begin protecting your data now. If you're upgrading to Modern Backup Storage, while protecting, you can also choose the volumes you wish to store the backups in, and check for under provisioned space. [Learn more](backup-mabs-add-storage.md).
+## Increase maximum parallel online backups
+
+You can increase the maximum number of parallel online backup jobs from the default of 8 to a configurable number using the following registry keys (if your underlying hardware and network bandwidth can support it).
+
+The example below increases the limit to 12 jobs.
+
+- `[HKEY_LOCAL_MACHINE\Software\Microsoft\Windows Azure Backup\DbgSettings\OnlineBackup]`
+
+ "MaxParallelBackupJobs"=dword:0000000C
+
+- `[HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft Data Protection Manager\Configuration\DPMTaskController\MaxRunningTasksThreshold]`
+
+ "6e7c76f4-a832-4418-a772-8e58fd7466cb"=dword:0000000C
+ ## Troubleshooting If Microsoft Azure Backup server fails with errors during the setup phase (or backup or restore), refer to this [error codes document](https://support.microsoft.com/kb/3041338) for more information.
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
The following table lists the scenarios and recommendations:
| Azure Files backup | Azure Files backups are stored in the local storage account, so they don't require private endpoints for backup and restore. | >[!Note]
->- Private endpoints are supported with only DPM server 2022 and later.
->- Private endpoints are currently not supported with MABS.
+>Private endpoints are supported only with DPM server 2022, MABS v4, and later.
## Difference in network connections for private endpoints
backup Backup Azure Security Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-security-feature.md
Title: Security features that protect hybrid backups
description: Learn how to use security features in Azure Backup to make backups more secure Previously updated : 11/30/2022 Last updated : 03/01/2023
The security features mentioned in this article provide defense mechanisms again
When [immutability](backup-azure-immutable-vault-concept.md?tabs=recovery-services-vault) for your Recovery Services vault is enabled, operations that reduce the cloud backup retention or remove cloud backup for on-premises data sources are blocked.
-### Immutability support for DPM
+### Immutability support for DPM and MABS
-This feature is supported from DPM 2022 UR1 with MARS agent version *2.0.9250.0* and higher.
+This feature is supported from DPM 2022 UR1 and MABS v4, with MARS agent version *2.0.9250.0* and higher.
The following table lists the disallowed operations on DPM connected to an immutable Recovery:
-| Operation on Immutable vault | Result with DPM 2022 UR1 and latest MARS agent | Result with older DPM and or MARS agent |
+| Operation on Immutable vault | Result with DPM 2022 UR1, MABS v4, and latest MARS agent | Result with older DPM/MABS and/or MARS agent |
| | | | | **Remove Data Source from protection group configured for online backup** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. | 130001: Microsoft Azure Backup encountered an internal error. | | **Stop protection with delete data** | 81001: The backup item(s) can't be deleted because it has active recovery points, and the selected vault is an immutable vault. | 130001: Microsoft Azure Backup encountered an internal error. |
The following table lists the disallowed operations for MARS when immutability i
- [Get started with Azure Recovery Services vault](backup-azure-vms-first-look-arm.md) to enable these features. - [Download the latest Azure Recovery Services agent](https://aka.ms/azurebackup_agent) to help protect Windows computers and guard your backup data against attacks.-- [Download the latest Azure Backup Server](https://support.microsoft.com/help/4457852/microsoft-azure-backup-server-v3) to help protect workloads and guard your backup data against attacks.-- [Download UR12 for System Center 2012 R2 Data Protection Manager](https://support.microsoft.com/help/3209592/update-rollup-12-for-system-center-2012-r2-data-protection-manager) or [download UR2 for System Center 2016 Data Protection Manager](https://support.microsoft.com/help/3209593/update-rollup-2-for-system-center-2016-data-protection-manager) to help protect workloads and guard your backup data against attacks.+
backup Backup Azure Sql Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-mabs.md
Title: Back up SQL Server by using Azure Backup Server description: In this article, learn the configuration to back up SQL Server databases by using Microsoft Azure Backup Server (MABS). Previously updated : 01/16/2023 Last updated : 03/01/2023
Microsoft Azure Backup Server (MABS) provides backup and recovery for SQL Server
## Supported scenarios -- MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV).
+- MABS v3 UR2, MABS v4, and later support SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV).
- Protection of SQL Server FCI with Storage Spaces Direct on Azure, and SQL Server FCI with Azure shared disks is supported with this feature. The DPM server must be deployed in an Azure virtual machine to protect the SQL FCI instance deployed on Azure VMs. - A SQL Server Always On availability group with these preferences: - Prefer Secondary
To protect SQL Server databases in Azure, first create a backup policy:
![Screenshot shows how to set up disk allocation in MABS.](./media/backup-azure-backup-sql/pg-storage.png)
- By default, MABS creates one volume per data source (SQL Server database). The volume is used for the initial backup copy. In this configuration, Logical Disk Manager (LDM) limits MABS protection to 300 data sources (SQL Server databases). To work around this limitation, select **Co-locate data in DPM Storage Pool**. If you use this option, MABS uses a single volume for multiple data sources. This setup allows MABS to protect up to 2,000 SQL Server databases.
-
- If you select **Automatically grow the volumes**, then MABS can account for the increased backup volume as the production data grows. If you don't select **Automatically grow the volumes**, then MABS limits the backup storage to the data sources in the protection group.
+ *Total data size* is the size of the data you want to back up, and *Disk space to be provisioned on DPM* is the space that MABS recommends for the protection group. DPM chooses the ideal backup volume based on the settings. However, you can edit the backup volume choices in the disk allocation details. For the workloads, select the preferred storage in the dropdown menu. The edits change the values for *Total Storage* and *Free Storage* in the **Available Disk Storage** pane. *Underprovisioned space* is the amount of storage that DPM suggests you add to the volume for continuous smooth backups.
+
1. If you're an administrator, you can choose to transfer this initial backup **Automatically over the network** and choose the time of transfer. Or choose to **Manually** transfer the backup. Then select **Next**. ![Screenshot shows how to choose a replica-creation method in MABS.](./media/backup-azure-backup-sql/pg-manual.png)
backup Backup Mabs Add Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-add-storage.md
Title: Use Modern Backup Storage with Azure Backup Server description: Learn about the new features in Azure Backup Server. This article describes how to upgrade your Backup Server installation. Previously updated : 11/13/2018 Last updated : 03/01/2023+
Azure Backup Server V2 and later supports Modern Backup Storage that offers storage savings of 50 percent, backups that are three times faster, and more efficient storage. It also offers workload-aware storage. > [!NOTE]
-> To use Modern Backup Storage, you must run Backup Server V2 or V3 on Windows Server 2016 or V3 on Windows Server 2019.
+> To use Modern Backup Storage, you must run Backup Server V2 or later on Windows Server 2016 or later.
> If you run Backup Server V2 on an earlier version of Windows Server, Azure Backup Server can't take advantage of Modern Backup Storage. Instead, it protects workloads as it does with Backup Server V1. For more information, see the Backup Server version [protection matrix](backup-mabs-protection-matrix.md). >
-> To achieve enhanced backup performances we recommend to deploy MABS v3 with tiered storage on Windows Server 2019. Refer to the DPM article ΓÇ£[Set up MBS with Tiered Storage](/system-center/dpm/add-storage#set-up-mbs-with-tiered-storage)ΓÇ¥ for steps to configure tiered storage.
+> To achieve enhanced backup performance, we recommend deploying MABS v4 with tiered storage on Windows Server 2022. To configure tiered storage, see [Set up MBS with Tiered Storage](/system-center/dpm/add-storage#set-up-mbs-with-tiered-storage).
## Volumes in Backup Server
Backup Server V2 or later accepts storage volumes. When you add a volume, Backup
## Create a volume for Modern Backup Storage
-Using Backup Server V2 or later with volumes as disk storage can help you maintain control over storage. A volume can be a single disk. However, if you want to extend storage in the future, create a volume out of a disk created by using storage spaces. This can help if you want to expand the volume for backup storage. This section offers best practices for creating a volume with this setup.
+Using Backup Server with volumes as disk storage can help you maintain control over storage. A volume can be a single disk. However, if you want to extend storage in the future, create a volume out of a disk created by using storage spaces. This can help if you want to expand the volume for backup storage. This section offers best practices for creating a volume with this setup.
1. In Server Manager, select **File and Storage Services** > **Volumes** > **Storage Pools**. Under **PHYSICAL DISKS**, select **New Storage Pool**.
The changes you make by using PowerShell are reflected in the Backup Server Admi
![Disks and volumes in the Administrator Console](./media/backup-mabs-add-storage/mabs-add-storage-9.png)
-## Migrate legacy storage to Modern Backup Storage
+## Migrate legacy storage to Modern Backup Storage for MABS v2
After you upgrade to or install Backup Server V2 and upgrade the operating system to Windows Server 2016, update your protection groups to use Modern Backup Storage. By default, protection groups aren't changed. They continue to function as they were initially set up.
To add disk storage:
1. In the Administrator Console, select **Management** > **Disk Storage** > **Add**.
- ![Add Disk Storage dialog](/system-center/dpm/media/upgrade-to-dpm-2016/dpm-2016-add-disk-storage.png)
+
2. In the **Add Disk Storage** dialog, select **Add disks**.
backup Backup Mabs Install Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-install-azure-stack.md
Title: Install Azure Backup Server on Azure Stack description: In this article, learn how to use Azure Backup Server to protect or back up workloads in Azure Stack. Previously updated : 01/31/2019 Last updated : 04/20/2023
This article explains how to install Azure Backup Server on Azure Stack. With Az
> [!NOTE] > To learn about security capabilities, refer to [Azure Backup security features documentation](backup-azure-security-feature.md). >-
-## Azure Backup Server protection matrix
-
-Azure Backup Server protects the following Azure Stack virtual machine workloads.
-
-| Protected data source | Protection and recovery |
-| | -- |
-| Windows Server Semi Annual Channel - Datacenter/Enterprise/Standard | Volumes, files, folders |
-| Windows Server 2016 - Datacenter/Enterprise/Standard | Volumes, files, folders |
-| Windows Server 2012 R2 - Datacenter/Enterprise/Standard | Volumes, files, folders |
-| Windows Server 2012 - Datacenter/Enterprise/Standard | Volumes, files, folders |
-| Windows Server 2008 R2 - Datacenter/Enterprise/Standard | Volumes, files, folders |
-| SQL Server 2016 | Database |
-| SQL Server 2014 | Database |
-| SQL Server 2012 SP1 | Database |
-| SharePoint 2016 | Farm, database, frontend, web server |
-| SharePoint 2013 | Farm, database, frontend, web server |
-| SharePoint 2010 | Farm, database, frontend, web server |
>For more information on the supported workloads, see the [Azure Backup Server protection matrix](backup-mabs-protection-matrix.md).
## Prerequisites for the Azure Backup Server environment
If you want to scale your deployment, you have the following options:
### .NET Framework
-.NET Framework 3.5 SP1 or higher must be installed on the virtual machine.
+.NET Framework 4.5 or higher must be installed on the virtual machine.
### Joining a domain
The Azure Backup Server virtual machine must be joined to a domain. A domain use
## Using an IaaS VM in Azure Stack
-When choosing a server for Azure Backup Server, start with a Windows Server 2012 R2 Datacenter or Windows Server 2016 Datacenter gallery image. The article, [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md?toc=/azure/virtual-machines/windows/toc.json), provides a tutorial for getting started with the recommended virtual machine. The recommended minimum requirements for the server virtual machine (VM) should be: A2 Standard with two cores and 3.5-GB RAM.
+When choosing a server for Azure Backup Server, start with a Windows Server 2022 Datacenter or Windows Server 2019 Datacenter gallery image. The article, [Create your first Windows virtual machine in the Azure portal](../virtual-machines/windows/quick-create-portal.md?toc=/azure/virtual-machines/windows/toc.json), provides a tutorial for getting started with the recommended virtual machine. The recommended minimum requirements for the server virtual machine (VM) should be: A2 Standard with two cores and 3.5-GB RAM.
Protecting workloads with Azure Backup Server has many nuances. The [protection matrix for MABS](./backup-mabs-protection-matrix.md) helps explain these nuances. Before deploying the machine, read this article completely.
There are two ways to download the Azure Backup Server installer. You can downlo
8. In the Microsoft Azure Backup Server download page, choose a language, and select **Download**.
- ![Download center opens](./media/backup-mabs-install-azure-stack/mabs-download-center-page.png)
- 9. The Azure Backup Server installer is composed of eight files - an installer and seven .bin files. Check **File Name** to select all required files and select **Next**. Download all files to the same folder. ![Download center, selected files](./media/backup-mabs-install-azure-stack/download-center-selected-files.png)
Azure Backup Server shares code with Data Protection Manager. You'll see referen
After checking, if the virtual machine has the necessary prerequisites to install Azure Backup Server, select **Next**.
- ![Azure Backup Server - requirements met](./media/backup-mabs-install-azure-stack/mabs-install-wizard-sql-ready-10.png)
- If a failure occurs with a recommendation to restart the machine, then restart the machine. After restarting the machine, restart the installer, and when you get to the **SQL Settings** screen, select **Check Again**. 5. In the **Installation Settings**, provide a location for the installation of Microsoft Azure Backup server files and select **Next**.
Azure Backup Server shares code with Data Protection Manager. You'll see referen
The installer launches the **Register Server Wizard**.
-12. Switch to your Azure subscription and your Recovery Services vault. In the **Prepare Infrastructure** menu, select **Download** to download vault credentials. If the **Download** button in step 2 isn't active, select **Already downloaded or using the latest Azure Backup Server installation** to activate the button. The vault credentials download to the location where you store downloads. Be aware of this location because you'll need it for the next step.
+12. Switch to your Azure subscription and your Recovery Services vault. In the **Prepare Infrastructure** menu, select **Download** to download vault credentials. If the **Download** button in step 2 isn't active, select **Already downloaded or using the latest Azure Backup Server installation** to activate the button. The vault credentials download to the location where your downloads are stored. Be aware of this location because you'll need it for the next step.
![Download vault credentials](./media/backup-mabs-install-azure-stack/download-mars-credentials-17.png)
Azure Backup Server shares code with Data Protection Manager. You'll see referen
15. When the installer completes, the **Status** shows that all software has been successfully installed.
- ![Software has installed successfully](./media/backup-mabs-install-azure-stack/mabs-install-wizard-done-22.png)
- When installation completes, the Azure Backup Server console and the Azure Backup Server PowerShell icons are created on the server desktop. ## Add backup storage
Once you know the state of the Azure connectivity and of the Azure subscription,
If your machine has limited internet access, ensure that firewall settings on the machine or proxy allow the following URLs and IP addresses:
-* URLs
- * `www.msftncsi.com`
- * `*.Microsoft.com`
- * `*.WindowsAzure.com`
- * `*.microsoftonline.com`
- * `*.windows.net`
- * `www.msftconnecttest.com`
-* IP addresses
- * 20.190.128.0/18
- * 40.126.0.0/18
+- **URLs**
+ - `www.msftncsi.com`
+ - `*.Microsoft.com`
+ - `*.WindowsAzure.com`
+ - `*.microsoftonline.com`
+ - `*.windows.net`
+ - `www.msftconnecttest.com`
+ - `*.blob.core.windows.net`
+ - `*.queue.core.windows.net`
+ - `*.blob.storage.azure.net`
+- **IP addresses**
+ - `20.190.128.0/18`
+ - `40.126.0.0/18`
Once connectivity to Azure is restored to the Azure Backup Server, the Azure subscription state determines the operations that can be performed. Once the server is **Connected**, use the table in [Network connectivity](backup-mabs-install-azure-stack.md#network-connectivity) to see the available operations.
Once connectivity to Azure is restored to the Azure Backup Server, the Azure sub
It's possible to change an Azure subscription from *Expired* or *Deprovisioned* state to *Active* state. While the subscription state isn't *Active*: -- While a subscription is *Deprovisioned*, it loses functionality. Restoring the subscription to *Active*, revives the backup/restore functionality. If backup data on the local disk was retained with a sufficiently large retention period, that backup data can be retrieved. However, backup data in Azure is irretrievably lost once the subscription enters the *Deprovisioned* state.
+- While a subscription is *Deprovisioned*, it loses functionality. If you restore the subscription to *Active*, it revives the backup/restore functionality. If the backup data on the local disk was retained with a sufficiently large retention period, that backup data can be retrieved. However, backup data in Azure is irretrievably lost once the subscription enters the *Deprovisioned* state.
- While a subscription is *Expired*, it loses functionality. Scheduled backups don't run while a subscription is *Expired*. ## Troubleshooting
-If Microsoft Azure Backup server fails with errors during the setup phase (or backup or restore), see the [error codes document](https://support.microsoft.com/kb/3041338).
+If Microsoft Azure Backup server fails with errors during the setup phase (or during backup or restore), see the [error codes document](https://support.microsoft.com/kb/3041338).
You can also refer to [Azure Backup related FAQs](backup-azure-backup-faq.yml) ## Next steps
backup Backup Mabs Protection Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-protection-matrix.md
Title: MABS (Azure Backup Server) V3 UR1 protection matrix
-description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects.
Previously updated : 08/08/2022
+ Title: MABS (Azure Backup Server) V4 protection matrix
+description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server v4 protects.
Last updated : 04/20/2023
-# MABS (Azure Backup Server) V3 UR1 (and later) protection matrix
+# MABS (Azure Backup Server) V4 (and later) protection matrix
This article lists the various servers and workloads that you can protect with Azure Backup Server. The following matrix lists what can be protected with Azure Backup Server.
-Use the following matrix for MABS v3 UR1 (and later):
+Use the following matrix for MABS v4 (and later):
* Workloads ΓÇô The workload type of technology.
Use the following matrix for MABS v3 UR1 (and later):
* Protection and recovery ΓÇô List the detailed information about the workloads such as supported storage container or supported deployment. >[!NOTE]
->Support for the 32-bit protection agent is deprecated with MABS v3 UR1 (and later). See [32-Bit protection agent deprecation](backup-mabs-whats-new-mabs.md#32-bit-protection-agent-deprecation).
+>The 32-bit protection agent isn't supported with MABS v4 (and later). See [32-Bit protection agent deprecation](backup-mabs-whats-new-mabs.md#32-bit-protection-agent-deprecation).
## Protection support matrix
The following sections details the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Azure Backup Server** | **Protection and recovery** | | -- | | | | |
-| Client computers (64-bit) | Windows 11, Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
-| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012 R2, 2012 <br /><br />(Including Windows Server Core edition) | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
-| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | All deployment scenarios: database <br><br> MABS v3 UR2 and later supports the backup of SQL database, stored on the Cluster Shared Volume. <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
-| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
-| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 Always On feature for the content databases isn't supported. |
+| Client computers (64-bit) | Windows 11, Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V4 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
+| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012, 2012 R2 <br /><br />(Including Windows Server Core edition) | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V4 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine). <br><br> To protect Windows Server 2012 and 2012 R2, install [Visual C++ 2015](https://www.microsoft.com/download/details.aspx?id=48145) on the protected server. |
+| SQL Server | SQL Server 2022, 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V4 | All deployment scenarios: database <br><br> SQL database, stored on the Cluster Shared Volume and ReFS volumes. <br><br> MABS doesn't support SQL Server databases hosted on Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
+| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V4 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
+| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V4 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 Always On feature for the content databases isn't supported. |
## VM Backup | **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | | - | | - | |
-| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
-| Azure Stack HCI | V1, 20H2, and 21H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
-| VMware VMs | VMware server 5.5, 6.0, or 6.5, 6.7 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
-| VMware VMs | VMware server 7.0, 6.7, 6.5 or 6.0 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V4 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| Azure Stack HCI | V1, 20H2, 21H2, and 22H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V4 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| VMware VMs | VMware server 6.5, 6.7, 7.0, 8.0 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V4 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. <br><br> vSphere 8.0 DataSets feature isn't supported for backup. |
>[!NOTE] > MABS doesn't support backup of virtual machines with pass-through disks or those that use a remote VHD. We recommend that in these scenarios you use guest-level backup using MABS, and install an agent on the virtual machine to back up the data.
The following sections details the protection support matrix for MABS:
| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** | | | -- | | - | |
-| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMware | V3 UR1 and V3 UR2 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
-
-## Azure ExpressRoute support
-
-You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.
-
-With public peering: Ensure access to the following domains/addresses:
-
-* URLs
- * `www.msftncsi.com`
- * `*.Microsoft.com`
- * `*.WindowsAzure.com`
- * `*.microsoftonline.com`
- * `*.windows.net`
- * `www.msftconnecttest.com`
-* IP addresses
- * 20.190.128.0/18
- * 40.126.0.0/18
-
-With Microsoft peering, select the following services/regions and relevant community values:
-
-* Azure Active Directory (12076:5060)
-* Microsoft Azure Region (according to the location of your Recovery Services vault)
-* Azure Storage (according to the location of your Recovery Services vault)
-
-For more information, see the [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).
-
->[!NOTE]
->Public Peering is deprecated for new circuits.
+| Linux | Linux running as a [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md), [VMware](backup-azure-backup-server-vmware.md), or [Azure Stack](backup-mabs-install-azure-stack.md) guest | Physical server, on-premises Hyper-V VM, Azure Stack VM, or VMware VM running Windows Server | V4 | Hyper-V must be running on Windows Server 2016, Windows Server 2019, or Windows Server 2022. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
## Operating systems and applications at end of support
For on-premises or hosted environments that you can't upgrade or migrate to Azur
|Workload |Version |Azure Backup Server installation |Azure Backup Server |Protection and recovery |
|---|---|---|---|---|
-|Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | Volume, share, folder, file, system state/bare metal |
+|Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)), Windows Server 2012, Windows Server 2012 R2. | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | Volume, share, folder, file, system state/bare metal |
## Cluster support
Azure Backup Server can protect data in the following clustered applications:
* SQL Server - Azure Backup Server doesn't support backing up SQL Server databases hosted on cluster-shared volumes (CSVs).

>[!NOTE]
->- MABS V3 UR1 supports the protection of Hyper-V virtual machines on Cluster Shared Volumes (CSVs). Protection of other workloads hosted on CSVs isn't supported.
->- MABS v3 UR2 additionally supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volumes (CSVs).
+>MABS V4 supports the protection of Hyper-V virtual machines and SQL Server Failover Cluster Instance (FCI) on Cluster Shared Volumes (CSVs). Protection of other workloads hosted on CSVs isn't supported.
Azure Backup Server can protect cluster workloads that are located in the same domain as the MABS server, and in a child or trusted domain. If you want to protect data sources in untrusted domains or workgroups, use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster.
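For the single-server NTLM case, the protected machine is attached with the SetDpmServer tool that ships with the protection agent. A minimal sketch, run on the protected server from the agent's bin folder; the server and account names are placeholders:

```console
SetDpmServer.exe -DpmServerName MABS01 -IsNonDomainServer -UserName MabsBackupUser
```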
backup Backup Mabs Release Notes V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-release-notes-v3.md
Title: Release notes for Microsoft Azure Backup Server v3 description: This article provides the information about the known issues and workarounds for Microsoft Azure Backup Server (MABS) v3. Previously updated : 07/27/2021 Last updated : 04/20/2023 ms.asset: 0c4127f2-d936-48ef-b430-a9198e425d81
This article provides the known issues and workarounds for Microsoft Azure Backup Server (MABS) V3.
-## Backup and recovery fails for clustered workloads
+## MABS V4 known issues and workarounds
+
+If you're protecting Windows Server 2012 or 2012 R2, you need to install the Visual C++ 2015 redistributable manually on the protected server. You can download the [Visual C++ Redistributable for Visual Studio 2015](https://www.microsoft.com/en-in/download/details.aspx?id=48145) from the Microsoft Download Center.
+
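If you stage this prerequisite with a script, the redistributable's standard silent-install switches apply. A minimal sketch, assuming the installer has already been downloaded to the protected server:

```console
vc_redist.x64.exe /install /quiet /norestart
```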
+## MABS V3 known issues and workarounds
+
+### Backup and recovery fails for clustered workloads
**Description:** Backup/restore fails for clustered data sources, such as a Hyper-V cluster, a SQL cluster (SQL Always On), or Exchange in a database availability group (DAG), after upgrading MABS V2 to MABS V3.
+>[!NOTE]
+>This issue is fixed in MABS V4.
+
**Workaround:** To prevent this issue, open SQL Server Management Studio (SSMS) and run the following SQL script on the DPM database:

```sql
This article provides the known issues and workarounds for Microsoft Azure Backu
GO
```
-## Upgrade to MABS V3 fails in Russian locale
+### Upgrade to MABS V3 fails in Russian locale
**Description:** Upgrade from MABS V2 to MABS V3 in Russian locale fails with an error code **4387**.
This article provides the known issues and workarounds for Microsoft Azure Backu
9. Start MSDPM service.
-## After installing UR1 the MABS reports aren't updated with new RDL files
+### After installing UR1, the MABS reports aren't updated with new RDL files
**Description**: With UR1, the MABS report formatting issue is fixed with updated RDL files. The new RDL files don't automatically replace the existing files.
backup Backup Mabs Sharepoint Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-sharepoint-azure-stack.md
Title: Back up a SharePoint farm on Azure Stack description: Use Azure Backup Server to back up and restore your SharePoint data on Azure Stack. This article provides the information to configure your SharePoint farm so that desired data can be stored in Azure. You can restore protected SharePoint data from disk or from Azure. Previously updated : 10/20/2022 Last updated : 03/02/2023
This article describes how to back up and restore SharePoint data using Microsof
Microsoft Azure Backup Server (MABS) enables you to back up a SharePoint farm (on Azure Stack) to Microsoft Azure, which gives an experience similar to backing up other data sources. Azure Backup provides flexibility in the backup schedule to create daily, weekly, monthly, or yearly backup points, and gives you retention policy options for various backup points. It also provides the capability to store local disk copies for quick recovery-time objectives (RTO) and to store copies to Azure for economical, long-term retention.
-In this article, you'll learn about:
-
-> [!div class="checklist"]
-> - SharePoint supported scenarios
-> - Prerequisites
-> - Configure backup
-> - Monitor operations
-> - Restore a SharePoint item from disk by using MABS
-> - Restore a SharePoint database from Azure by using MABS
-> - Switch the Front-End Web Server
## SharePoint supported scenarios
-You need to confirm the following supported scenarios before you back up a SharePoint farm to Azure.
+Before you back up a SharePoint farm to Azure, confirm the supported scenarios in the [support matrix](backup-mabs-protection-matrix.md).
### Supported scenarios
Azure Backup for MABS supports the following scenarios:
| Workload | Version | SharePoint deployment | Protection and recovery |
| --- | --- | --- | --- |
-| SharePoint |SharePoint 2016, SharePoint 2013, SharePoint 2010 |SharePoint deployed as an Azure Stack virtual machine <br> -- <br> SQL Always On | Protect SharePoint Farm recovery options: Recovery farm, database, and file or list item from disk recovery points. Farm and database recovery from Azure recovery points. |
+| SharePoint |SharePoint 2019, SharePoint 2016 with latest SPs |SharePoint deployed as an Azure Stack virtual machine <br> -- <br> SQL Always On | Protect SharePoint Farm recovery options: Recovery farm, database, and file or list item from disk recovery points. Farm and database recovery from Azure recovery points. |
### Unsupported scenarios
Follow these steps:
1. In **Select Group Members**, expand the server that holds the WFE role.
- If there's more than one WFE server, select the one on which you installed ConfigureSharePoint.exe.
+ If there's more than one WFE server, select the one on which you installed *ConfigureSharePoint.exe* (see the sketch after the following paragraph).
When you expand the computer running SharePoint, MABS queries VSS to see what data MABS can protect. If the SharePoint database is remote, MABS connects to it. If SharePoint data sources don't appear, check that the VSS writer is running on the computer that's running SharePoint and on any remote instance of SQL Server. Then, ensure that the MABS agent is installed both on the computer running SharePoint and on the remote instance of SQL Server. Also, ensure that SharePoint databases aren't being protected elsewhere as SQL Server databases.
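*ConfigureSharePoint.exe* registers the farm credentials that MABS uses for SharePoint protection, and is typically run once on the WFE server from the protection agent's bin folder. A sketch of the usual invocation (the agent's install path varies):

```console
ConfigureSharePoint.exe -EnableSharePointProtection
```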
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 07/27/2021 Last updated : 03/02/2023
-# What's new in Microsoft Azure Backup Server (MABS)
+# What's new in Microsoft Azure Backup Server (MABS)?
-## What's new in MABS v3 UR2
+Microsoft Azure Backup Server gives you enhanced backup capabilities to protect VMs, files and folders, workloads, and more.
+
+## What's new in MABS V4 RTM
+
+Microsoft Azure Backup Server version 4 (MABS V4) includes critical bug fixes and support for Windows Server 2022, SQL Server 2022, Azure Stack HCI 22H2, and other features and enhancements. To view the list of bugs fixed and the installation instructions for MABS V4, see [KB article 5024199](https://support.microsoft.com/help/5024199/).
+
+The following table lists the included features in MABS V4:
+
+| Supported feature | Description |
+| --- | --- |
+| Windows Server 2022 support | You can install MABS V4 on, and protect, Windows Server 2022. To use MABS V4 with *WS2022*, you can either upgrade your operating system (OS) to *WS2022* before installing/upgrading to MABS V4, or upgrade your OS after installing/upgrading to V4 on *WS2019*. <br><br> MABS V4 is a full release; it can be installed directly on Windows Server 2022 or Windows Server 2019, or upgraded from MABS V3. Learn more [about the installation prerequisites](backup-azure-microsoft-azure-backup.md#software-package) before you upgrade to or install Backup Server V4. |
+| SQL Server 2022 support | You can install MABS V4 with SQL 2022 as the MABS database. You can upgrade the SQL Server from SQL 2017 to SQL 2022, or install it fresh. You can also back up SQL 2022 workload with MABS V4. |
+| Private Endpoint Support | With MABS V4, you can use private endpoints to send your online backups to Azure Backup Recovery Services vault. [Learn more](backup-azure-private-endpoints-concept.md). |
+| Azure Stack HCI 22H2 support | MABS V4 supports protection of workloads running on Azure Stack HCI, from V1 through 22H2. [Learn more](back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md). |
+| VMware 8.0 support | MABS V4 can now back up VMware VMs running on VMware 8.0. MABS V4 supports VMware, version 6.5 to 8.0. [Learn more](backup-azure-backup-server-vmware.md). <br><br> Note that MABS V4 doesn't support the DataSets feature added in vSphere 8.0. |
+| Item-level recovery from online recovery points for Hyper-V and Stack HCI VMs running Windows Server | With MABS V4, you can perform item-level recovery of files and folders from your online recovery point for VMs running Windows Server on Hyper-V or Stack HCI without downloading the entire recovery point. <br><br> Go to the *Recovery* pane, select a *VM online recovery point* and double-click the *recoverable item* to browse and recover its contents at a file/folder level. <br><br> [Learn more](back-up-hyper-v-virtual-machines-mabs.md). |
+| Parallel Restore of VMware and Hyper-V VMs | MABS V4 supports parallel restore of [VMware](restore-azure-backup-server-vmware.md) and [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) virtual machines. Earlier versions of MABS restricted VMware and Hyper-V VM restores to one restore job at a time. With MABS V4, you can restore *eight* VMs in parallel by default, and this number can be increased by using a registry key (see the sketch after this table). |
+| Parallel online backup jobs - limit enhancement | MABS V4 supports raising the maximum number of parallel online backup jobs from *eight* to a configurable limit, set through a registry key, based on your hardware and network capacity, for faster online backups. [Learn more](backup-azure-microsoft-azure-backup.md). |
+| Faster Item Level Recoveries | MABS V4 moves away from File Catalog for online backup of file/folder workloads. File Catalog was necessary to restore individual files and folders from online recovery points, but increased backup time by uploading file metadata. <br><br> MABS V4 uses an *iSCSI mount* to provide faster individual file restores and reduces backup time, because file metadata doesn't need to be uploaded. |
+
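For the two registry-tunable limits in the table above (parallel VM restores and parallel online backup jobs), the change is a DWORD value on the MABS server. The snippet below is only an illustrative sketch: the key path and value names are placeholders rather than the documented names, so take the exact names from the linked MABS articles for your version.

```console
:: Illustrative sketch only - the key path and value names are placeholders.
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration" /v MaxParallelRecoveryJobs /t REG_DWORD /d 16 /f
reg add "HKLM\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration" /v MaxParallelOnlineBackupJobs /t REG_DWORD /d 12 /f
```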
+## What's new in MABS v3 UR2?
Microsoft Azure Backup Server (MABS) version 3 UR2 supports the following new features/feature updates.
For information about the UR2 issues fixes and the installation instructions, se
### Support for Azure Stack HCI
-With MABS v3 UR2, you can backup Virtual Machines on Azure Stack HCI. [Learn more](./back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md).
+With MABS v3 UR2, you can back up Virtual Machines on Azure Stack HCI. [Learn more](./back-up-azure-stack-hyperconverged-infrastructure-virtual-machines.md).
### Support for VMware 7.0
MABS v3 UR2 supports optimized volume migration. The optimized volume migration
MABS v3 UR2 supports Offline backup using Azure Data Box. With Microsoft Azure Data Box integration, you can overcome the challenge of moving terabytes of backup data from on-premises to Azure storage. Azure Data Box saves the effort required to procure your own Azure-compatible disks and connectors or to provision temporary storage as a staging location. Microsoft also handles the end-to-end transfer logistics, which you can track through the Azure portal. [Learn more](./offline-backup-azure-data-box-dpm-mabs.md).
-## What's new in MABS V3 UR1
+## What's new in MABS V3 UR1?
Microsoft Azure Backup Server (MABS) version 3 UR1 is the latest update, and includes critical bug fixes and other features and enhancements. To view the list of bugs fixed and the installation instructions for MABS V3 UR1, see KB article [4534062](https://support.microsoft.com/help/4534062).
With MABS v3 UR1, support for 32-bit protection agent is no longer supported. Yo
>[!NOTE]
>Review the [updated protection matrix](./backup-mabs-protection-matrix.md) to learn the supported workloads for protection with MABS UR 1.
-## What's new in MABS V3 RTM
+## What's new in MABS V3 RTM?
Microsoft Azure Backup Server version 3 (MABS V3) includes critical bug fixes, Windows Server 2019 support, SQL 2017 support, and other features and enhancements. To view the list of bugs fixed and the installation instructions for MABS V3, see KB article [4457852](https://support.microsoft.com/help/4457852/microsoft-azure-backup-server-v3).
backup Backup Support Matrix Mabs Dpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mabs-dpm.md
Title: MABS & System Center DPM support matrix description: This article summarizes Azure Backup support when you use Microsoft Azure Backup Server (MABS) or System Center DPM to back up on-premises and Azure VM resources. Previously updated : 02/17/2019 Last updated : 04/20/2023
MABS is based on System Center DPM and provides similar functionality with a few
- For both MABS and DPM, Azure provides long-term backup storage. In addition, DPM allows you to back up data for long-term storage on tape. MABS doesn't provide this functionality.
- [You can back up a primary DPM server with a secondary DPM server](/system-center/dpm/back-up-the-dpm-server). The secondary server will protect the primary server database and the data source replicas stored on the primary server. If the primary server fails, the secondary server can continue to protect workloads that are protected by the primary server, until the primary server is available again. MABS doesn't provide this functionality.
-You download MABS from the [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=57520). It can be run on-premises or on an Azure VM.
+You can download MABS from the [Microsoft Download Center](https://go.microsoft.com/fwLink/?LinkId=626082). It can be run on-premises or on an Azure VM.
DPM and MABS support backing up a wide variety of apps, and server and client operating systems. They provide multiple backup scenarios:
Azure Backup can back up DPM/MABS instances that are running any of the followin
**Scenario** | **DPM/MABS** |
--- | --- |
-**MABS on an Azure VM** | Windows 2016 Datacenter.<br/><br/> Windows 2019 Datacenter.<br/><br/> We recommend that you start with an image from the marketplace.<br/><br/> Minimum Standard_A4_v2 with four cores and 8-GB RAM.
-**DPM on an Azure VM** | System Center 2012 R2 with Update 3 or later.<br/><br/> Windows operating system as [required by System Center](/system-center/dpm/prepare-environment-for-dpm#dpm-server).<br/><br/> We recommend that you start with an image from the marketplace.<br/><br/> Minimum Standard_A4_v2 with four cores and 8-GB RAM.
-**MABS on-premises** | MABS v3 and later: Windows Server 2016 or Windows Server 2019
+**MABS on an Azure VM** | MABS v4 and later: Windows 2022 Datacenter, Windows 2019 Datacenter <br><br> MABS v3 UR1 and UR2: Windows 2019 Datacenter, Windows 2016 Datacenter <br/><br/> We recommend that you start with an image from the marketplace.<br/><br/> Minimum Standard_A4_v2 with four cores and 8-GB RAM.
+**DPM on an Azure VM** | System Center 2012 R2 with Update 3 or later<br/><br/> Windows operating system as [required by System Center](/system-center/dpm/prepare-environment-for-dpm#dpm-server).<br/><br/> We recommend that you start with an image from the marketplace.<br/><br/> Minimum Standard_A4_v2 with four cores and 8-GB RAM.
+**MABS on-premises** | MABS v4 and later: Windows Server 2022 or Windows Server 2019 <br><br> MABS v3 UR1 and UR2: Windows Server 2019 and Windows Server 2016
**DPM on-premises** | Physical server/Hyper-V VM: System Center 2012 SP1 or later.<br/><br/> VMware VM: System Center 2012 R2 with Update 5 or later.

>[!NOTE]
Azure Backup can back up DPM/MABS instances that are running any of the followin
**Installation** | Install DPM/MABS on a single-purpose machine.<br/><br/> Don't install DPM/MABS on a domain controller, on a machine with the Application Server role installation, on a machine that's running Microsoft Exchange Server or System Center Operations Manager, or on a cluster node.<br/><br/> [Review all DPM system requirements](/system-center/dpm/prepare-environment-for-dpm#dpm-server).
**Domain** | DPM/MABS should be joined to a domain. Install first, and then join DPM/MABS to a domain. Moving DPM/MABS to a new domain after deployment isn't supported.
**Storage** | Modern backup storage (MBS) is supported from DPM 2016/MABS v2 and later. It isn't available for MABS v1.
-**MABS upgrade** | You can directly install MABS v3, or upgrade to MABS v3 from MABS v2. [Learn more](backup-azure-microsoft-azure-backup.md#upgrade-mabs).
+**MABS upgrade** | You can directly install MABS v4, or upgrade to MABS v4 from MABS v3 UR1 and UR2. [Learn more](backup-azure-microsoft-azure-backup.md#upgrade-mabs).
**Moving MABS** | Moving MABS to a new server while retaining the storage is supported if you're using MBS.<br/><br/> The server must have the same name as the original. You can't change the name if you want to keep the same storage pool, and use the same MABS database to store data recovery points.<br/><br/> You'll need a backup of the MABS database because you'll need to restore it.

>[!NOTE]
You can deploy MABS on an Azure Stack VM so that you can manage backup of Azure
**Component** | **Details** |
--- | --- |
-**MABS on Azure Stack VM** | At least size A2. We recommend you start with a Windows Server 2012 R2 or Windows Server 2016 image from Azure Marketplace.<br/><br/> Don't install anything else on the MABS VM.
+**MABS on Azure Stack VM** | At least size A2. We recommend you start with a Windows Server 2019 or Windows Server 2022 image from Azure Marketplace.<br/><br/> Don't install anything else on the MABS VM.
**MABS storage** | Use a separate storage account for the MABS VM. The MARS agent running on MABS needs temporary storage for a cache location and to hold data restored from the cloud.
**MABS storage pool** | The size of the MABS storage pool is determined by the number and size of disks that are attached to the MABS VM. Each Azure Stack VM size has a maximum number of disks. For example, A2 is four disks.
**MABS retention** | Don't retain backed up data on the local MABS disks for more than five days.
**MABS scale up** | To scale up your deployment, you can increase the size of the MABS VM. For example, you can change from A to D series.<br/><br/> You can also ensure that you're offloading data with backup to Azure regularly. If necessary, you can deploy additional MABS servers.
-**.NET Framework on MABS** | The MABS VM needs .NET Framework 3.3 SP1 or later installed on it.
+**.NET Framework on MABS** | The MABS VM needs .NET Framework 4.5 or later installed on it.
**MABS domain** | The MABS VM must be joined to a domain. A domain user with admin privileges must install MABS on the VM.
**Azure Stack VM data backup** | You can back up files, folders, and apps.
-**Supported backup** | These operating systems are supported for VMs that you want to back up:<br/><br/> Windows Server Semi-Annual Channel (Datacenter, Enterprise, Standard)<br/><br/> Windows Server 2016, Windows Server 2012 R2, Windows Server 2008 R2
-**SQL Server support for Azure Stack VMs** | Back up SQL Server 2016, SQL Server 2014, SQL Server 2012 SP1.<br/><br/> Back up and recover a database.
-**SharePoint support for Azure Stack VMs** | SharePoint 2016, SharePoint 2013, SharePoint 2010.<br/><br/> Back up and recover a farm, database, front end, and web server.
+**Supported backup** | These operating systems are supported for VMs that you want to back up: <br/><br/> Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012
+**SQL Server support for Azure Stack VMs** | Back up SQL Server 2022, SQL Server 2019, SQL Server 2017, SQL Server 2016 (SPs), and SQL Server 2014 (SPs).<br/><br/> Back up and recover a database.
+**SharePoint support for Azure Stack VMs** | SharePoint 2019, SharePoint 2016 with latest SPs.<br/><br/> Back up and recover a farm, database, front end, and web server.
**Network requirements for backed up VMs** | All VMs in an Azure Stack workload must belong to the same virtual network and the same subscription.
-## DPM/MABS networking support
+## Networking and access support
-### URL access
-
-The DPM server/MABS server needs access to these URLs and IP addresses:
-
-* URLs
- * `www.msftncsi.com`
- * `*.Microsoft.com`
- * `*.WindowsAzure.com`
- * `*.microsoftonline.com`
- * `*.windows.net`
- * `www.msftconnecttest.com`
-* IP addresses
- * 20.190.128.0/18
- * 40.126.0.0/18:
-
-### Azure ExpressRoute support
-
-You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.
-
-With public peering: Ensure access to the following domains/addresses:
-
-* URLs
- * `www.msftncsi.com`
- * `*.Microsoft.com`
- * `*.WindowsAzure.com`
- * `*.microsoftonline.com`
- * `*.windows.net`
- * `www.msftconnecttest.com`
-* IP addresses
- * 20.190.128.0/18
- * 40.126.0.0/18
-
-With Microsoft peering, select the following services/regions and relevant community values:
--- Azure Active Directory (12076:5060)-- Microsoft Azure Region (according to the location of your Recovery Services vault)-- Azure Storage (according to the location of your Recovery Services vault)-
-For more information, see the [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).
-
->[!NOTE]
->Public Peering is deprecated for new circuits.
### DPM/MABS connectivity to Azure Backup
No connectivity for more than 15 days | Expired/deprovisioned | No backup to dis
|Requirement |Details |
|---|---|
-|Domain | The DPM/MABS server should be in a Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 domain. |
+|Domain | The DPM/MABS server should be in a Windows Server 2022, Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, Windows Server 2012 domain. |
|Domain trust | DPM/MABS supports data protection across forests, as long as you establish a forest-level, two-way trust between the separate forests. <BR><BR> DPM/MABS can protect servers and workstations across domains, within a forest that has a two-way trust relationship with the DPM/MABS server domain. To protect computers in workgroups or untrusted domains, see [Back up and restore workloads in workgroups and untrusted domains.](/system-center/dpm/back-up-machines-in-workgroups-and-untrusted-domains) <br><br> To back up Hyper-V server clusters, they must be located in the same domain as the MABS server or in a trusted or child domain. You can back up servers and clusters in an untrusted domain or workload using NTLM or certificate authentication for a single server, or certificate authentication only for a cluster. | ## DPM/MABS storage support
For information on the various servers and workloads that you can protect with D
## Deduplicated volumes support
->[!NOTE]
-> Deduplication support for MABS depends on operating system support.
+Deduplication support for MABS depends on operating system support.
+
+### For NTFS volumes with MABS v4
+
+| Operating system of protected server | Operating system of MABS server | MABS version | Dedupe support |
+| --- | --- | --- | --- |
+| Windows Server 2022 | Windows Server 2022 | MABS v4 | Y |
+| Windows Server 2019 | Windows Server 2022 | MABS v4 | Y |
+| Windows Server 2016 | Windows Server 2022 | MABS v4 | Y* |
+| Windows Server 2022 | Windows Server 2019 | MABS v4 | N |
+| Windows Server 2019 | Windows Server 2019 | MABS v4 | Y |
+| Windows Server 2016 | Windows Server 2019 | MABS v4 | Y* |
-### For NTFS volumes
+**Deduped NTFS volumes on Windows Server 2016 protected servers are non-deduplicated during restore.*
-| Operating system of protected server | Operating system of MABS server | MABS version | Dedup support |
+
+### For NTFS volumes with MABS v3
+
+| Operating system of protected server | Operating system of MABS server | MABS version | Dedupe support |
| --- | --- | --- | --- |
| Windows Server 2019 | Windows Server 2019 | MABS v3 | Y |
| Windows Server 2016 | Windows Server 2019 | MABS v3 | Y* |
For information on the various servers and workloads that you can protect with D
### For ReFS Volumes
->[!NOTE]
-> We have identified a few issues with backups of deduplicated ReFS volumes. We are working on fixing these, and will update this section as soon as we have a fix available. Until then, we are removing the support for backup of deduplicated ReFS volumes from MABS v3.
->
-> MABS v3 UR1 and later continues to support protection and recovery of normal ReFS volumes.
+- We've identified a few issues with backups of deduplicated ReFS volumes. We're working on fixing these, and will update this section as soon as we have a fix available. Until then, we're removing the support for backup of deduplicated ReFS volumes from MABS v3 and v4.
+
+- MABS v3 UR1, MABS v4, and later continue to support protection and recovery of normal ReFS volumes.
## Next steps
backup Guidance Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/guidance-best-practices.md
Title: Guidance and best practices description: Discover the best practices and guidance for backing up cloud and on-premises workload to the cloud Previously updated : 12/22/2022 Last updated : 03/01/2023
Azure Backup enables data protection for various workloads (on-premises and clou
### Management plane
-* **Access control**: Vaults (Recovery Services and Backup vaults) provide the management capabilities and are accessible via the Azure portal, Backup Center, Vault dashboards, SDK, CLI, and even REST APIs. It's also an Azure role-based access control (Azure RBAC) boundary, providing you the option to restrict access to backups only to authorized Backup Admins.
+* **Access control**: Vaults (Recovery Services and Backup vaults) provide the management capabilities and are accessible via the Azure portal, Backup Center, Vault dashboards, SDK, CLI, and even REST APIs. It's also an Azure role-based access control (Azure RBAC) boundary, providing you with the option to restrict access to backups only to authorized Backup Admins.
* **Policy management**: Azure Backup Policies within each vault define when the backups should be triggered and the duration they need to be retained. You can also manage these policies and apply them across multiple items.
While scheduling your backup policy, consider the following points:
* If retention is reduced, recovery points are marked for pruning in the next clean-up job, and subsequently deleted.
* The latest retention rules apply to all retention points (excluding on-demand retention points). So, if the retention period is extended (for example, to 100 days) and then reduced (for example, from 100 days to seven days) after a backup is taken, all backup data is retained according to the last specified retention period (that is, seven days).
-* Azure Backup provides you the flexibility to *stop protecting and manage your backups*:
+* Azure Backup provides you with the flexibility to *stop protecting and manage your backups*:
* *Stop protection and retain backup data*. If you're retiring or decommissioning your data source (VM, application), but need to retain data for audit or compliance purposes, then you can use this option to stop all future backup jobs from protecting your data source and retain the recovery points that have been backed up. You can then restore or resume VM protection.
* *Stop protection and delete backup data*. This option will stop all future backup jobs from protecting your VM and delete all the recovery points. You won't be able to restore the VM nor use the Resume backup option.
To fulfill all these needs, use [Azure Private Endpoint](../private-link/private
[Learn more](./private-endpoints.md#get-started-with-creating-private-endpoints-for-backup) about how to create and use private endpoints for Azure Backup inside your virtual networks.
-* When you enable private endpoints for the vault, they're only used for backup and restore of SQL and SAP HANA workloads in an Azure VM and MARS agent backups. You can use the vault for the backup of other workloads as well (they won't require private endpoints though). In addition to the backup of SQL and SAP HANA workloads and backup using the MARS agent, private endpoints are also used to perform file recovery in the case of Azure VM backup. [Learn more here](private-endpoints-overview.md#recommended-and-supported-scenarios).
+* When you enable private endpoints for the vault, they're only used for backup and restore of SQL and SAP HANA workloads in an Azure VM, and for MARS agent and DPM/MABS backups. You can use the vault for the backup of other workloads as well (they won't require private endpoints, though). In addition to the backup of SQL and SAP HANA workloads and backup using the MARS agent and DPM/MABS server, private endpoints are also used to perform file recovery in the case of Azure VM backup. [Learn more here](private-endpoints-overview.md#recommended-and-supported-scenarios).
* Azure Active Directory doesn't currently support private endpoints. So, IPs and FQDNs required for Azure Active Directory will need to be allowed outbound access from the secured network when performing backup of databases in Azure VMs and backup using the MARS agent. You can also use NSG tags and Azure Firewall tags for allowing access to Azure AD, as applicable. Learn more about the [prerequisites here](./private-endpoints.md#before-you-start).
Governance in Azure is primarily implemented with [Azure Policy](../governance/p
### Auto-configure newly provisioned backup infrastructure with Azure Policy at Scale

-- Whenever new infrastructure is provisioned and new VMs are created, as a backup admin, you need to ensure their protection. You can easily configure backups for one or two VMs. But it becomes complex when you need to configure hundreds or even thousands of VMs at scale. To simplify the process of configuring backups, Azure Backup provides you a set of built-in Azure Policies to govern your backup estate.
+- Whenever new infrastructure is provisioned and new VMs are created, as a backup admin, you need to ensure their protection. You can easily configure backups for one or two VMs. But it becomes complex when you need to configure hundreds or even thousands of VMs at scale. To simplify the process of configuring backups, Azure Backup provides you with a set of built-in Azure Policies to govern your backup estate.
- **Auto-enable backup on VMs using Policy (Central backup team model)**: If your organization has a central backup team that manages backups across application teams, you can use this policy to configure backup to an existing central Recovery Services vault in the same subscription and location as that of the VMs. You can choose to include/exclude VMs that contain a certain tag from the policy scope (a CLI sketch follows this item). [Learn more](backup-azure-auto-enable-backup.md#policy-1configure-backup-on-vms-without-a-given-tag-to-an-existing-recovery-services-vault-in-the-same-location).
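As a hedged sketch only, assigning such a built-in policy at subscription scope with Azure CLI could look like the following; the definition ID and parameter names shown are placeholders, so look up the actual built-in definition and its parameters for the backup policy you pick:

```console
az policy assignment create \
    --name enable-vm-backup \
    --scope "/subscriptions/<subscription-id>" \
    --policy "<built-in-backup-policy-definition-id>" \
    --params '{ "vaultLocation": { "value": "eastus" }, "backupPolicyId": { "value": "<backup-policy-resource-id>" } }'
```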
Governance in Azure is primarily implemented with [Azure Policy](../governance/p
- **Monitoring Policy**: To generate the Backup Reports for your resources, enable the diagnostic settings when you create a new vault. Often, adding a diagnostic setting manually per vault can be a cumbersome task. So, you can utilize an Azure built-in policy that configures the diagnostics settings at scale to all vaults in each subscription or resource group, with Log Analytics as the destination.

-- **Audit-only Policy**: Azure Backup also provides you an Audit-only policy that identifies the VMs with no backup configuration.
+- **Audit-only Policy**: Azure Backup also provides you with an Audit-only policy that identifies the VMs with no backup configuration.
### Azure Backup cost considerations
The Azure Backup service offers the flexibility to effectively manage your costs
* Optimize retention settings for Instant Restore.
* Choose the right backup type to meet requirements, while considering the backup types (full, incremental, log, differential) that the workload supports in Azure Backup.
-* **Reduce the backup storage cost with Selectively backup disks**: Exclude disk (preview feature) provides an efficient and cost-effective choice to selectively back up critical data. For example, you can back up only one disk when you don't want to back up all disks attached to a VM. This is also useful when you have multiple backup solutions. For example, to back up your databases or data with a workload backup solution (SQL Server database in Azure VM backup), use Azure VM level backup for selected disks.
+* **Reduce the backup storage cost with Selectively backup disks**: Exclude disk (preview feature) provides an efficient and cost-effective choice to selectively back up critical data. For example, you can back up only one disk when you don't want to back up all disks attached to a VM. This is also useful when you have multiple backup solutions. For example, to back up your databases or data with a workload backup solution (SQL Server database in Azure VM backup), use Azure VM level backup for selected disks.
- **Speed up your Restores and minimize RTO using the Instant Restore feature**: Azure Backup takes snapshots of Azure VMs and stores them along with the disks to boost recovery point creation and to speed up restore operations. This is called Instant Restore. This feature allows a restore operation from these snapshots by cutting down the restore times. It reduces the time needed to transform and copy data back from the vault. Therefore, it'll incur storage costs for the snapshots taken during this period. Learn more about [Azure Backup Instant Recovery capability](./backup-instant-restore-capability.md).
As a backup user or administrator, you should be able to monitor all backup solu
* In addition,
  * You can send data (for example, jobs, policies, and so on) to the **Log Analytics** workspace. This enables the features of Azure Monitor Logs: correlating data with other monitoring data collected by Azure Monitor, consolidating log entries from multiple Azure subscriptions and tenants into one location for analysis, and using log queries to perform complex analysis and gain deep insights on log entries. [Learn more here](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace).
  * You can send data to an Azure event hub to send entries outside of Azure, for example to a third-party SIEM (Security Information and Event Management) or other log analytics solution. [Learn more here](../azure-monitor/essentials/activity-log.md#send-to-azure-event-hubs).
- * You can send data to an Azure Storage account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only need to retain your events for 90 days or less, you don't need to set up archives to a storage account, since Activity Log events are kept in the Azure platform for 90 days. [Learn more](../azure-monitor/essentials/activity-log.md#send-to-azure-storage).
+ * You can send data to an Azure Storage account if you want to retain your log data longer than 90 days for audit, static analysis, or backup. If you only need to retain your events for 90 days or less, you don't need to set up archives to a storage account, since Activity Log events are kept in the Azure platform for 90 days. [Learn more](../azure-monitor/essentials/activity-log.md#send-to-azure-storage). A CLI sketch for wiring up these destinations follows this list.
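A minimal sketch of the Log Analytics case with Azure CLI, assuming placeholder subscription, resource group, vault, and workspace names, and abbreviating the category list to the backup-report category:

```console
az monitor diagnostic-settings create \
    --name mabs-vault-diagnostics \
    --resource "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.RecoveryServices/vaults/MyVault" \
    --workspace "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/MyWorkspace" \
    --logs '[{ "category": "AzureBackupReport", "enabled": true }]'
```

Swap `--workspace` for `--event-hub`/`--event-hub-rule` or `--storage-account` to target the other two destinations.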
### Alerts
backup Microsoft Azure Backup Server Protection V3 Ur1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/microsoft-azure-backup-server-protection-v3-ur1.md
+
+ Title: MABS (Azure Backup Server) V3 UR1 protection matrix
+description: This article provides a support matrix listing all workloads, data types, and installations that Azure Backup Server protects.
Last updated : 08/08/2022
+# MABS (Azure Backup Server) V3 UR1 (and later) protection matrix
+
+This article lists the various servers and workloads that you can protect with Azure Backup Server. The following matrix lists what can be protected with Azure Backup Server.
+
+Use the following matrix for MABS v3 UR1 (and later):
+
+* Workloads – The type of workload/technology.
+
+* Version – Supported MABS version for the workloads.
+
+* MABS installation – The computer/location where you wish to install MABS.
+
+* Protection and recovery – Lists detailed information about the workloads, such as supported storage containers or supported deployments.
+
+>[!NOTE]
+>Support for the 32-bit protection agent is deprecated with MABS v3 UR1 (and later). See [32-Bit protection agent deprecation](backup-mabs-whats-new-mabs.md#32-bit-protection-agent-deprecation).
+
+## Protection support matrix
+
+The following sections detail the protection support matrix for MABS:
+
+* [Applications Backup](#applications-backup)
+* [VM Backup](#vm-backup)
+* [Linux](#linux)
+
+## Applications backup
+
+| **Workload** | **Version** | **Azure Backup Server installation** | **Azure Backup Server** | **Protection and recovery** |
+| --- | --- | --- | --- | --- |
+| Client computers (64-bit) | Windows 11, Windows 10 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Volume, share, folder, files, deduped volumes <br><br> Protected volumes must be NTFS. FAT and FAT32 aren't supported. <br><br> Volumes must be at least 1 GB. Azure Backup Server uses Volume Shadow Copy Service (VSS) to take the data snapshot and the snapshot only works if the volume is at least 1 GB. |
+| Servers (64-bit) | Windows Server 2022, 2019, 2016, 2012 R2, 2012 <br /><br />(Including Windows Server Core edition) | Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack | V3 UR1 and V3 UR2 | Volume, share, folder, file <br><br> Deduped volumes (NTFS only) <br><br>When protecting a WS 2016 NTFS deduped volume with MABS v3 running on Windows Server 2019, the recoveries may be affected. We have a fix for doing recoveries in a non-deduped way that will be part of later versions of MABS. Contact MABS support if you need this fix on MABS v3 UR1.<br><br> When protecting a WS 2019 NTFS deduped volume with MABS v3 on Windows Server 2016, the backups and restores will be non-deduped. This means that the backups will consume more space on the MABS server than the original NTFS deduped volume. <br><br> System state and bare metal (Not supported when workload is running as Azure virtual machine) |
+| SQL Server | SQL Server 2019, 2017, 2016 and [supported SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202016), 2014 and supported [SPs](https://support.microsoft.com/lifecycle/search?alpha=SQL%20Server%202014) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | All deployment scenarios: database <br><br> MABS v3 UR2 and later supports the backup of SQL database, stored on the Cluster Shared Volume. <br><br> MABS v3 UR1 supports the backup of SQL databases over ReFS volumes <br><br> MABS doesn't support SQL Server databases hosted on Windows Server 2012 Scale-Out File Servers (SOFS). <br><br> MABS can't protect SQL server Distributed Availability Group (DAG) or Availability Group (AG), where the role name on the failover cluster is different than the named AG on SQL. |
+| Exchange | Exchange 2019, 2016 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure Stack <br><br> Azure virtual machine (when workload is running as Azure virtual machine) | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Standalone Exchange server, database under a database availability group (DAG) <br><br> Recover (all deployment scenarios): Mailbox, mailbox databases under a DAG <br><br> Backup of Exchange over ReFS is supported with MABS v3 UR1 |
+| SharePoint | SharePoint 2019, 2016 with latest SPs | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine <br><br> Azure virtual machine (when workload is running as Azure virtual machine) <br><br> Azure Stack | V3 UR1 and V3 UR2 | Protect (all deployment scenarios): Farm, frontend web server content <br><br> Recover (all deployment scenarios): Farm, database, web application, file, or list item, SharePoint search, frontend web server <br><br> Protecting a SharePoint farm that's using the SQL Server 2012 Always On feature for the content databases isn't supported. |
+
+## VM Backup
+
+| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
+| --- | --- | --- | --- | --- |
+| Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM | Windows Server 2022, 2019, 2016, 2012 R2, 2012 | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 and V3 UR2 | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| Azure Stack HCI | V1, 20H2, and 21H2 | Physical server <br><br> Hyper-V / Azure Stack HCI virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: Virtual machines, cluster shared volumes (CSVs) <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives |
+| VMware VMs | VMware server 5.5, 6.0, or 6.5, 6.7 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR1 | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+| VMware VMs | VMware server 7.0, 6.7, 6.5 or 6.0 (Licensed Version) | Hyper-V virtual machine <br><br> VMware virtual machine | V3 UR2 and later | Protect: VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage <br><br> Recover: Virtual machine, Item-level recovery of files and folders available only for Windows, volumes, virtual hard drives <br><br> VMware vApps aren't supported. |
+
+>[!NOTE]
+> MABS doesn't support backup of virtual machines with pass-through disks or those that use a remote VHD. We recommend that in these scenarios you use guest-level backup using MABS, and install an agent on the virtual machine to back up the data.
+
+## Linux
+
+| **Workload** | **Version** | **Azure Backup Server installation** | **Supported Azure Backup Server** | **Protection and recovery** |
+| --- | --- | --- | --- | --- |
+| Linux | Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest | Physical server, On-premises Hyper-V VM, Windows VM in VMware | V3 UR1 and V3 UR2 | Hyper-V must be running on Windows Server 2012 R2, Windows Server 2016, or Windows Server 2019. Protect: Entire virtual machine <br><br> Recover: Entire virtual machine <br><br> Only file-consistent snapshots are supported. <br><br> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md). |
+
+## Azure ExpressRoute support
+
+You can back up your data over Azure ExpressRoute with public peering (available for old circuits) and Microsoft peering. Backup over private peering isn't supported.
+
+With public peering: Ensure access to the following domains/addresses:
+
+* URLs
+ * `www.msftncsi.com`
+ * `*.Microsoft.com`
+ * `*.WindowsAzure.com`
+ * `*.microsoftonline.com`
+ * `*.windows.net`
+ * `www.msftconnecttest.com`
+* IP addresses
+ * 20.190.128.0/18
+ * 40.126.0.0/18
+
+With Microsoft peering, select the following services/regions and relevant community values (a CLI sketch follows this section):
+
+* Azure Active Directory (12076:5060)
+* Microsoft Azure Region (according to the location of your Recovery Services vault)
+* Azure Storage (according to the location of your Recovery Services vault)
+
+For more information, see the [ExpressRoute routing requirements](../expressroute/expressroute-routing.md).
+
+>[!NOTE]
+>Public Peering is deprecated for new circuits.
+
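The community values above are applied to the circuit's Microsoft peering through a route filter. A rough Azure CLI sketch with placeholder names, allowing the Azure Active Directory community:

```console
az network route-filter create --resource-group MyRG --name mabs-backup-filter --location eastus
az network route-filter rule create --resource-group MyRG --filter-name mabs-backup-filter \
    --name allow-aad --access Allow --communities 12076:5060
```

Add further rules for the Azure region and Azure Storage communities that match your Recovery Services vault's location, then associate the filter with the circuit's Microsoft peering.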
+## Operating systems and applications at end of support
+
+Support for the following operating systems and applications in MABS is deprecated. We recommend that you upgrade them to continue protecting your data.
+
+If the existing commitments prevent upgrading Windows Server or SQL Server, migrate them to Azure and [use Azure Backup to protect the servers](./index.yml). For more information, see [migration of Windows Server, apps and workloads](https://azure.microsoft.com/migration/windows-server/).
+
+For on-premises or hosted environments that you can't upgrade or migrate to Azure, activate Extended Security Updates for the machines for protection and support. Note that only limited editions are eligible for Extended Security Updates. For more information, see [Frequently asked questions](https://www.microsoft.com/windows-server/extended-security-updates).
+
+|Workload |Version |Azure Backup Server installation |Azure Backup Server |Protection and recovery |
+|---|---|---|---|---|
+|Servers (64-bit) | Windows Server 2008 R2 SP1, Windows Server 2008 SP2 (You need to install [Windows Management Framework](https://www.microsoft.com/download/details.aspx?id=54616)) | Physical server <br><br> Hyper-V virtual machine <br><br> VMware virtual machine | Volume, share, folder, file, system state/bare metal |
+
+## Cluster support
+
+Azure Backup Server can protect data in the following clustered applications:
+
+* File servers
+
+* SQL Server
+
+* Hyper-V - If you protect a Hyper-V cluster using scaled-out MABS protection agent, you can't add secondary protection for the protected Hyper-V workloads.
+
+* Exchange Server - Azure Backup Server can protect non-shared disk clusters for supported Exchange Server versions (cluster-continuous replication), and can also protect Exchange Server configured for local continuous replication.
+
+* SQL Server - Azure Backup Server doesn't support backing up SQL Server databases hosted on cluster-shared volumes (CSVs).
+
+>[!NOTE]
+>- MABS V3 UR1 supports the protection of Hyper-V virtual machines on Cluster Shared Volumes (CSVs). Protection of other workloads hosted on CSVs isn't supported.
+>- MABS v3 UR2 additionally supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volumes (CSVs).
+
+Azure Backup Server can protect cluster workloads that are located in the same domain as the MABS server, and in a child or trusted domain. If you want to protect data sources in untrusted domains or workgroups, use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster.
+
+## Data protection issues
+
+* MABS can't back up VMs using shared drives (which are potentially attached to other VMs), as the Hyper-V VSS writer can't back up volumes that are backed by shared VHDs.
+
+* When you protect a shared folder, the path to the shared folder includes the logical path on the volume. If you move the shared folder, protection will fail. If you must move a protected shared folder, remove it from its protection group and then add it to protection after the move. Also, if you change the path of a protected data source on a volume that uses the Encrypting File System (EFS) and the new file path exceeds 5120 characters, data protection will fail.
+
+* You can't change the domain of a protected computer and continue protection without disruption. Also, you can't change the domain of a protected computer and associate the existing replicas and recovery points with the computer when it's reprotected. If you must change the domain of a protected computer, then first remove the data sources on the computer from protection. Then protect the data source on the computer after it has a new domain.
+
+* You can't change the name of a protected computer and continue protection without disruption. Also, you can't change the name of a protected computer and associate the existing replicas and recovery points with the computer when it's reprotected. If you must change the name of a protected computer, then first remove the data sources on the computer from protection. Then protect the data source on the computer after it has a new name.
+
+* MABS automatically identifies the time zone of a protected computer during installation of the protection agent. If a protected computer is moved to a different time zone after protection is configured, ensure that you change the computer time in Control Panel. Then update the time zone in the MABS database.
+
+* MABS can protect workloads in the same domain as the MABS server, or in child and trusted domains. You can also protect the following workloads in workgroups and untrusted domains using NTLM or certificate authentication:
+
+ * SQL Server
+ * File Server
+ * Hyper-V
+
+ These workloads can be running on a single server or in a cluster configuration. To protect a workload that isn't in a trusted domain, see [Prepare computers in workgroups and untrusted domains](/system-center/dpm/back-up-machines-in-workgroups-and-untrusted-domains?view=sc-dpm-2019&preserve-view=true#supported-scenarios) for exact details of what's supported and what authentication is required.
+
+## Unsupported data types
+
+MABS doesn't support protecting the following data types:
+
+* Hard links
+
+* Reparse points, including DFS links and junction points
+
+* Mount point metadata - A protection group can contain data with mount points. In this case DPM protects the mounted volume that is the target of the mount point, but it doesn't protect the mount point metadata. When you recover data containing mount points, you'll need to manually recreate your mount point hierarchy.
+
+* Data in mounted volumes within mounted volumes
+
+* Recycle Bin
+
+* Paging files
+
+* System Volume Information folder. To protect system information for a computer, you'll need to select the computer's system state as the protection group member.
+
+* Non-NTFS volumes
+
+* Files containing hard links or symbolic links from Windows Vista.
+
+* Data on file shares hosting UPDs (User Profile Disks)
+
+* Files with any of the following combinations of attributes:
+
+ * Encryption and reparse
+
+ * Encryption and Single Instance Storage (SIS)
+
+ * Encryption and case-sensitivity
+
+ * Encryption and sparse
+
+ * Case-sensitivity and SIS
+
+ * Compression and SIS
+
+## Next steps
+
+* [Support matrix for backup with Microsoft Azure Backup Server or System Center DPM](backup-support-matrix-mabs-dpm.md)
backup Microsoft Azure Backup Server Protection V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/microsoft-azure-backup-server-protection-v3.md
Title: What Azure Backup Server V3 RTM can back up description: This article provides a protection matrix listing all workloads, data types, and installations that Azure Backup Serve V3 RTM protects. Previously updated : 08/08/2022 Last updated : 04/20/2023
The following matrix lists what can be protected with Azure Backup Server V3 RTM
|Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM|Windows Server 2012 R2 - Datacenter and Standard|Physical server<br /><br />On-premises Hyper-V virtual machine|V3, V2|Protect: Hyper-V computers, cluster shared volumes (CSVs)<br /><br />Recover: Virtual machine, Item-level recovery of files and folder, volumes, virtual hard drives|
|Hyper-V host - MABS protection agent on Hyper-V host server, cluster, or VM|Windows Server 2012 - Datacenter and Standard|Physical server<br /><br />On-premises Hyper-V virtual machine|V3, V2|Protect: Hyper-V computers, cluster shared volumes (CSVs)<br /><br />Recover: Virtual machine, Item-level recovery of files and folder, volumes, virtual hard drives|
|VMware VMs|VMware vCenter/vSphere ESX/ESXi Licensed Version 5.5/6.0/6.5 |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3, V2|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
-|VMware VMs|[VMware vSphere Licensed version 6.7 and 7.0](backup-azure-backup-server-vmware.md#vmware-vsphere-67-and-70) |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
+|VMware VMs|[VMware vSphere Licensed version 6.7, 7.0](backup-azure-backup-server-vmware.md#vmware-vsphere-67-70-and-80) |Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3|VMware VMs on cluster-shared volumes (CSVs), NFS, and SAN storage<br /> Item-level recovery of files and folders is available only for Windows VMs, VMware vApps are not supported.|
|Linux|Linux running as [Hyper-V](back-up-hyper-v-virtual-machines-mabs.md) or [VMware](backup-azure-backup-server-vmware.md) guest|Physical server, <br/>On-premises Hyper-V VM, <br/> Windows VM in VMware|V3, V2|Hyper-V must be running on Windows Server 2012 R2 or Windows Server 2016. Protect: Entire virtual machine<br /><br />Recover: Entire virtual machine <br/><br/> Only file-consistent snapshots are supported. <br/><br/> For a complete list of supported Linux distributions and versions, see the article, [Linux on distributions endorsed by Azure](../virtual-machines/linux/endorsed-distros.md).| ### Operating systems and applications at end of support
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 02/20/2023 Last updated : 03/01/2023
While private endpoints are enabled for the vault, they're used for backup and r
| **Azure Files backup** | Azure Files backups are stored in the local storage account. So it doesn't require private endpoints for backup and restore. | >[!NOTE]
-> - Private endpoints are supported with only DPM server 2022 and later.
-> - Private endpoints are not yet supported with MABS.
-
+>Private endpoints are supported only with DPM server 2022, MABS v4, and later.
## Difference in network connections due to private endpoints
backup Restore Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-backup-server-vmware.md
Title: Restore VMware VMs with Azure Backup Server description: Use Azure Backup Server (MABS) to restore VMware VMs running on a VMware vCenter/ESXi server.- Previously updated : 08/18/2019+ Last updated : 03/01/2023
This article explains how to use Microsoft Azure Backup Server (MABS) to restore
1. In the MABS Administrator Console, select **Recovery view**.
-2. Using the Browse pane, browse or filter to find the VM you want to recover. Once you select a VM or folder, the Recovery points for pane displays the available recovery points.
+2. On the **Browse** pane, browse or filter to find the VM you want to recover. Once you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
![Available recovery points](./media/restore-azure-backup-server-vmware/recovery-points.png)
You can restore individual files from a protected VM recovery point. This featur
1. In the MABS Administrator Console, select **Recovery** view.
-2. Using the **Browse** pane, browse or filter to find the VM you want to recover. Once you select a VM or folder, the **Recovery points for pane** displays the available recovery points.
+2. On the **Browse** pane, browse or filter to find the VM you want to recover. Once you select a VM or folder, the **Recovery points for** pane displays the available recovery points.
!["Recovery points for" pane](./media/restore-azure-backup-server-vmware/vmware-rp-disk.png)
You can restore individual files from a protected VM recovery point. This featur
9. On the **Specify Recovery Options** screen, choose which security setting to apply. You can opt to modify the network bandwidth usage throttling, but throttling is disabled by default. Also, **SAN Recovery** and **Notification** aren't enabled. 10. On the **Summary** screen, review your settings and select **Recover** to start the recovery process. The **Recovery status** screen shows the progression of the recovery operation.
+## VMware parallel restore in MABS v4 (and later)
+
+MABS v4 supports restoring multiple VMware VMs protected by the same vCenter in parallel. By default, eight parallel recoveries are supported. You can increase the number of parallel restore jobs by adding the following registry key.
+
+>[!Note]
+>Before you increase the number of parallel recoveries, consider VMware performance. Weigh the number of resources already in use and the additional load that parallel recoveries place on the VMware vSphere server, and then determine how many recoveries to run in parallel.
+>
+>**Key Path**: `HKLM\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelRecoveryJobs`
+>- **32-bit DWORD**: VMware
+>- **Data**: `<number>`. The value should be the number (decimal) of virtual machines that you select for parallel recovery.
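+
+For example, the following command (a sketch; `12` is an example decimal value, not a recommendation) sets the scaler to 12 parallel VMware recovery jobs:
+
+```console
+reg add "HKLM\Software\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelRecoveryJobs" /v VMware /t REG_DWORD /d 12 /f
+```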
+ ## Next steps For troubleshooting issues when using Azure Backup Server, review the [troubleshooting guide for Azure Backup Server](./backup-azure-mabs-troubleshoot.md).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- April 2023
+ - [Microsoft Azure Backup Server v4 is now generally available](#microsoft-azure-backup-server-v4-is-now-generally-available)
- March 2023 - [Multiple backups per day for Azure VMs is now generally available](#multiple-backups-per-day-for-azure-vms-is-now-generally-available) - [Immutable vault for Azure Backup is now generally available](#immutable-vault-for-azure-backup-is-now-generally-available)
You can learn more about the new releases by bookmarking this page or by [subscr
- [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Microsoft Azure Backup Server v4 is now generally available
+
+Azure Backup now provides Microsoft Azure Backup Server (MABS) v4, the latest edition of its on-premises backup solution.
+
+- It can *protect* and *run on* Windows Server 2022, Azure Stack HCI 22H2, vSphere 8.0, and SQL Server 2022.
+- It contains stability improvements and bug fixes over *MABS v3 UR2*.
+
+For more information, see [What's new in MABS](backup-mabs-whats-new-mabs.md).
## Multiple backups per day for Azure VMs is now generally available Azure Backup now enables you to create a backup policy to take multiple backups a day. With this capability, you can also define the duration in which your backup jobs would trigger and align your backup schedule with the working hours when there are frequent updates to Azure Virtual Machines.
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
Previously updated : 03/21/2023 Last updated : 04/24/2023
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value | |--|--|
-| OpenAI resources per region within Azure subscription | 2 |
+| OpenAI resources per region per Azure subscription | 3 |
| Requests per minute per model* | Davinci-models (002 and later): 120 <br> ChatGPT model (preview): 300 <br> GPT-4 models (preview): 18 <br> All other models: 300 | | Tokens per minute per model* | Davinci-models (002 and later): 40,000 <br> ChatGPT model: 120,000 <br> All other models: 120,000 | | Max fine-tuned model deployments* | 2 |
container-apps Azure Arc Create Container App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-create-container-app.md
If there's an error when running a query, try again in 10-15 minutes. There may
```kusto let StartTime = ago(72h); let EndTime = now();
-ContainerAppsConsoleLogs_CL
+ContainerAppConsoleLogs_CL
| where TimeGenerated between (StartTime .. EndTime)
-| where AppName_s =~ "my-container-app"
+| where ContainerAppName_s =~ "my-container-app"
```
-The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `ContainerAppsConsoleLogs_CL`.
+The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `ContainerAppConsoleLogs_CL`.
* **Log_s** contains application logs for a given Container Apps extension * **ContainerAppName_s** contains the Container App app name. In addition to logs you write via your application code, the *Log_s* column also contains logs on container startup and shutdown.
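
For example, a quick sketch of a query that projects the columns described above (assuming the same app name as in the earlier query):

```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s =~ "my-container-app"
| project TimeGenerated, ContainerAppName_s, Log_s
| order by TimeGenerated desc
```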
container-apps Dapr Keda Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-keda-scaling.md
+
+ Title: Scale Dapr applications with KEDA scalers using Bicep
+description: Learn how to use KEDA scalers to scale an Azure Container App and its Dapr sidecar.
++++ Last updated : 04/17/2023++
+# Scale Dapr applications with KEDA scalers
+
+[Azure Container Apps automatically scales HTTP traffic to zero.](./scale-app.md) However, to scale non-HTTP traffic (like [Dapr](https://docs.dapr.io/) pub/sub and bindings), you can use [KEDA scalers](https://keda.sh/) to scale your application and its Dapr sidecar up and down, based on the number of pending inbound events and messages.
+
+This guide demonstrates how to configure the scale rules of a Dapr pub/sub application with a KEDA messaging scaler. For context, refer to the corresponding sample pub/sub applications:
+- [Microservice communication using pub/sub in **C#**](https://github.com/Azure-Samples/pubsub-dapr-csharp-servicebus)
+- [Microservice communication using pub/sub in **JavaScript**](https://github.com/Azure-Samples/pubsub-dapr-nodejs-servicebus)
+- [Microservice communication using pub/sub in **Python**](https://github.com/Azure-Samples/pubsub-dapr-python-servicebus)
++
+In the above samples, the application uses the following elements:
+1. The `checkout` publisher is an application that is meant to run indefinitely and never scale down to zero, despite never receiving any incoming HTTP traffic.
+1. The Dapr Azure Service Bus pub/sub component.
+1. An `order-processor` subscriber container app picks up messages received via the `orders` topic and processes them as they arrive.
+1. The scale rule for Azure Service Bus, which is responsible for scaling up the `order-processor` service and its Dapr sidecar when messages start to arrive in the `orders` topic.
++
+Let's take a look at how to apply the scaling rules in a Dapr application.
+
+## Publisher container app
+
+The `checkout` publisher is a headless service that runs indefinitely and never scales down to zero.
+
+By default, [the Container Apps runtime assigns an HTTP-based scale rule to applications](./scale-app.md), which drives scaling based on the number of incoming HTTP requests. In the following example, `minReplicas` is set to `1`. This configuration ensures the container app doesn't follow the default behavior of scaling to zero with no incoming HTTP traffic.
+
+```bicep
+resource checkout 'Microsoft.App/containerApps@2022-03-01' = {
+ name: 'ca-checkout-${resourceToken}'
+ location: location
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ //...
+ template: {
+ //...
+ // Scale the minReplicas to 1
+ scale: {
+ minReplicas: 1
+ maxReplicas: 1
+ }
+ }
+ }
+}
+```
+
+## Subscriber container app
+
+The following `order-processor` subscriber app includes a custom scale rule that monitors a resource of type `azure-servicebus`. With this rule, the app (and its sidecar) scales up and down as needed based on the number of pending messages in the Service Bus topic.
+
+```bicep
+resource orders 'Microsoft.App/containerApps@2022-03-01' = {
+ name: 'ca-orders-${resourceToken}'
+ location: location
+ tags: union(tags, {
+ 'azd-service-name': 'orders'
+ })
+ identity: {
+ type: 'SystemAssigned'
+ }
+ properties: {
+ managedEnvironmentId: containerAppsEnvironment.id
+ configuration: {
+ //...
+ // Enable Dapr on the container app
+ dapr: {
+ enabled: true
+ appId: 'orders'
+ appProtocol: 'http'
+ appPort: 5001
+ }
+ //...
+ }
+ template: {
+ //...
+ // Set the scale property on the order-processor resource
+ scale: {
+ minReplicas: 0
+ maxReplicas: 10
+ rules: [
+ {
+ name: 'topic-based-scaling'
+ custom: {
+ type: 'azure-servicebus'
+ metadata: {
+ topicName: 'orders'
+ subscriptionName: 'membership-orders'
+ messageCount: '30'
+ }
+ auth: [
+ {
+ secretRef: 'sb-root-connectionstring'
+ triggerParameter: 'connection'
+ }
+ ]
+ }
+ }
+ ]
+ }
+ }
+ }
+}
+```
+
+## How the scaler works
+
+Notice the `messageCount` property on the scaler's configuration in the subscriber app:
+
+```bicep
+{
+ //...
+ properties: {
+ //...
+ template: {
+ //...
+ scale: {
+ //...
+ rules: [
+ //...
+ custom: {
+ //...
+ metadata: {
+ //...
+ messageCount: '30'
+ }
+ }
+ ]
+ }
+ }
+ }
+}
+```
+
+This property tells the scaler how many messages each instance of the application can process at the same time. In this example, the value is set to `30`, indicating that there should be one instance of the application created for each group of 30 messages waiting in the topic.
+
+For example, if 150 messages are waiting, KEDA scales the app out to five instances. The `maxReplicas` property is set to `10`, meaning even with a large number of messages in the topic, the scaler never creates more than `10` instances of this application. This setting ensures you don't scale up too much and accrue too much cost.
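+
+In effect, the scaler requests roughly `ceil(pending messages / messageCount)` replicas, clamped between `minReplicas` and `maxReplicas`.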
+
+## Next steps
+
+[Learn more about using Dapr components with Azure Container Apps.](./dapr-overview.md)
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Now that you've learned about Dapr and some of the challenges it solves:
- Try [Deploying a Dapr application to Azure Container Apps using the Azure CLI][dapr-quickstart] or [Azure Resource Manager][dapr-arm-quickstart]. - Walk through a tutorial [using GitHub Actions to automate changes for a multi-revision, Dapr-enabled container app][dapr-github-actions]. - Learn how to [perform event-driven work using Dapr bindings][dapr-bindings-tutorial]
+- [Scale your Dapr applications using KEDA scalers][dapr-keda]
- [Answer common questions about the Dapr integration with Azure Container Apps][dapr-faq] <!-- Links Internal -->
Now that you've learned about Dapr and some of the challenges it solves:
[dapr-arm-quickstart]: ./microservices-dapr-azure-resource-manager.md [dapr-github-actions]: ./dapr-github-actions.md [dapr-bindings-tutorial]: ./microservices-dapr-bindings.md
+[dapr-keda]: ./dapr-keda-scaling.md
[dapr-faq]: ./faq.yml#dapr <!-- Links External -->
container-apps Microservices Dapr Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-bindings.md
Previously updated : 03/08/2023 Last updated : 04/11/2023 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
azd down
- Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md). - Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).
+- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
container-apps Microservices Dapr Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-pubsub.md
Previously updated : 03/16/2023 Last updated : 04/11/2023 zone_pivot_group_filename: container-apps/dapr-zone-pivot-groups.json zone_pivot_groups: dapr-languages-set
azd down
## Next steps - Learn more about [deploying Dapr applications to Azure Container Apps](./microservices-dapr.md).-- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).
+- Learn more about [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) and [making your applications compatible with `azd`](/azure/developer/azure-developer-cli/make-azd-compatible).
+- [Scale your Dapr applications using KEDA scalers](./dapr-keda-scaling.md)
container-instances Confidential Containers Attestation Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/confidential-containers-attestation-concepts.md
Last updated 04/20/2023
-# What is attestation?
+# Attestation in Confidential containers on Azure Container Instances
Attestation is an essential part of confidential computing and appears in the definition by the Confidential Computing Consortium: "Confidential Computing is the protection of data in use by performing computation in a hardware-based, attested Trusted Execution Environment."
In Confidential Containers on ACI you can use an attestation token to verify tha
- Is running on an Azure compliant utility VM. - Is enforcing the expected confidential computing enforcement policy (cce) that was generated using [tooling](https://github.com/Azure/azure-cli-extensions/blob/main/src/confcom/azext_confcom/README.md).
-## Full attestation in confidential containers on Azure Container Instances
+## Full attestation
-Expanding upon this concept of attestation. Full attestation captures all the components that are part of the Trusted Execution Environment that is remotely verifiable. To achieve full attestation, in Confidential Containers, we have introduced the notion of a cce policy, which defines a set of rules, which is enforced in the utility VM. The security policy is encoded in the attestation report as an SHA-256 digest stored in the HostData attribute, as provided to the PSP by the host operating system during the VM boot-up. This means that the security policy enforced by the utility VM is immutable throughout the lifetime of the utility VM.
+Expanding on this concept, full attestation captures all the components of the Trusted Execution Environment that are remotely verifiable. To achieve full attestation in Confidential Containers, we introduced the notion of a cce policy, which defines a set of rules that is enforced in the utility VM. The security policy is encoded in the attestation report as a SHA-256 digest stored in the HostData attribute, as provided to the AMD SEV-SNP hardware by the host operating system during the VM boot-up. This means that the security policy enforced by the utility VM is immutable throughout the lifetime of the utility VM.
-The exhaustive list of attributes that are part of the SEV-SNP attestation can be found [here](https://www.amd.com/system/files/TechDocs/SEV-SNP%20PSP%20API%20Specification.pdf).
+The exhaustive list of attributes that are part of the SEV-SNP attestation can be found [here](https://www.amd.com/system/files/TechDocs/56860.pdf).
Some important fields to consider in an attestation token returned by [Microsoft Azure Attestation ( MAA )](../attestation/overview.md)
-| Claim | Sample value | Description |
-||-|-|
-| x-ms-attestation-type | sevsnpvm | String value that describes the attestation type. For example, in this scenario sevsnp hardware |
-| x-ms-compliance-status | azure-compliant-uvm | Compliance status of the utility VM that runs the container group. |
-| x-ms-sevsnpvm-hostdata | 670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f | Hash of the cce policy that was generated during deployment. |
-| x-ms-sevsnpvm-is-debuggable | false | Flag to indicate whether the underlying hardware is running in debug mode |
+| Claim | Sample value | Description |
+|:--:|:--:|:--:|
+| x-ms-attestation-type | sevsnpvm | String value that describes the attestation type. For example, in this scenario sevsnp hardware |
+| x-ms-compliance-status | azure-compliant-uvm | Compliance status of the utility VM that runs the container group. |
+| x-ms-sevsnpvm-hostdata | 670fff86714a650a49b58fadc1e90fedae0eb32dd51e34931c1e7a1839c08f6f | Hash of the cce policy that was generated using tooling during deployment. |
+| x-ms-sevsnpvm-is-debuggable | false | Flag to indicate whether the underlying hardware is running in debug mode |
## Sample attestation token generated by MAA
container-registry Container Registry Troubleshoot Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-troubleshoot-access.md
Related links:
* [Connect privately to an Azure container registry using Azure Private Link](container-registry-private-link.md) * [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md) * [Restrict access to a container registry using a service endpoint in an Azure virtual network](container-registry-vnet.md)
-* [Required outbound network rules and FQDNs for AKS clusters](../aks/limit-egress-traffic.md#required-outbound-network-rules-and-fqdns-for-aks-clusters)
+* [Required outbound network rules and FQDNs for AKS clusters](../aks/outbound-rules-control-egress.md#required-outbound-network-rules-and-fqdns-for-aks-clusters)
* [Kubernetes: Debugging DNS resolution](https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/) * [Virtual network service tags](../virtual-network/service-tags-overview.md)
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/introduction.md
Last updated 03/07/2023
-# What is Azure Cosmos DB for MongoDB vCore?
+# What is Azure Cosmos DB for MongoDB vCore? (Preview)
Azure Cosmos DB for MongoDB vCore provides developers with a fully managed MongoDB-compatible database service for building modern applications with a familiar architecture. With Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership (TCO), and the familiar vCore architecture when migrating existing applications or building new ones.
cost-management-billing Ea Billing Administration Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-billing-administration-partners.md
+
+ Title: EA billing administration for partners in the Azure portal
+description: This article explains the common tasks that a partner administrator accomplishes in the Azure portal to manage indirect enterprise agreements.
++ Last updated : 04/24/2023++++++
+# EA billing administration for partners in the Azure portal
+
+This article explains the common tasks that a partner administrator accomplishes in the [Azure portal](https://portal.azure.com) to manage indirect EAs. An indirect EA is one where a customer signs an agreement with a Microsoft partner. The partner administrator manages their indirect EAs on behalf of their customers.
+
+## Access the Azure portal
+
+The partner organization is referred to as the **billing account** in the Azure portal. Partner administrators can sign in to the Azure portal to view and manage their partner organization. The partner organization contains their customer's enrollments. However, the partner doesn't have an enrollment of their own. A customer's enrollment is shown in the Azure portal as a **billing profile**.
+
+A partner administrator user can have access to multiple partner organizations (billing account scopes). All the information and activity in the Azure portal are in the context of a billing account _scope_. It's important that the partner administrator first selects a billing scope and then does administrative tasks in the context of the selected scope.
+
+### Select a billing scope
+
+1. Sign in to the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes).
+1. Search for **Cost Management + Billing** and select it.
+ :::image type="content" source="./media/ea-billing-administration-partners/search-cost-management-billing.png" alt-text="Screenshot showing search for Cost Management + Billing." lightbox="./media/ea-billing-administration-partners/search-cost-management-billing.png" :::
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+ :::image type="content" source="./media/ea-billing-administration-partners/billing-scopes.png" alt-text="Screenshot showing select a billing scope." lightbox="./media/ea-billing-administration-partners/billing-scopes.png" :::
+
+## Manage a partner organization
+
+Partner administrator users can view and manage the partner organization. After a partner administrator selects a partner organization billing scope from **Cost Management + Billing**, they see the Partner management overview page where they can view the following information:
+
+- Partner organization details such as name, ID, and authentication setting
+- List of active and extended enrollments and the option to download details
+- List of enrollments expiring in the next 180 days, so that the partner admin can act to renew them
+- List of enrollments with other statuses
+
+The partner administrator uses the left navigation menu items to perform the following tasks:
+
+- **Access control (IAM)** - To add, edit, and delete partner administrator users.
+- **Billing profiles** - To view a list of enrollments.
+- **Billing scopes** - To view a list of all billing scopes that they have access to.
+- **New support request** - To create a new support request.
+
+## Manage partner administrators
+
+Every partner administrator in the Azure portal can add or remove other partner administrators. Partner administrators are associated with the partner organization's billing account. They aren't associated _directly_ with the enrollments.
+
+Partners can view all the details of the billing account and billing profiles for indirect enrollments. The partner administrator can perform the following write operations.
+
+- Update the billing account authentication type
+- Add, edit, and delete another partner administrator user
+- Set the markup of the billing profile for indirect enrollments
+- Update the PO number of the billing profile for indirect enrollments
+- Generate the API key of the billing profile for indirect enrollments
+
+A partner administrator with read-only access can view all billing account and billing profile details. However, they can't perform any write operations.
+
+### Add a partner administrator
+
+You can add a new partner administrator with the following steps:
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Access control (IAM)**.
+ :::image type="content" source="./media/ea-billing-administration-partners/access-control.png" alt-text="Screenshot showing select Access Control (IAM)." lightbox="./media/ea-billing-administration-partners/access-control.png" :::
+1. At the top of the page, select **Add Partner Admin**.
+ :::image type="content" source="./media/ea-billing-administration-partners/add-partner-admin.png" alt-text="Screenshot showing select Add Partner Admin." lightbox="./media/ea-billing-administration-partners/add-partner-admin.png" :::
+1. In the Add Role Assignment window, enter the email address of the user to whom you want to give access.
+1. Select the authentication type.
+1. Select **Provide read-only access** if you want to provide read-only (reader) access.
+1. Enter a notification contact if you want to inform someone about the role assignment.
+1. Select the notification frequency.
+1. Select **Add**.
+ :::image type="content" source="./media/ea-billing-administration-partners/add-role-assignment.png" alt-text="Screenshot showing Add role assignment window." lightbox="./media/ea-billing-administration-partners/add-role-assignment.png" :::
+
+### Edit a partner administrator
+
+You can edit a partner administrator user role using the following steps.
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Access control (IAM)**.
+1. In the list of administrators, in the row for the user that you want to edit, select the ellipsis (**…**) symbol, and then select **Edit**.
+ :::image type="content" source="./media/ea-billing-administration-partners/edit-role-assignment.png" alt-text="Screenshot showing Edit partner admin." lightbox="./media/ea-billing-administration-partners/edit-role-assignment.png" :::
+1. In the Edit role assignment window, select **Provide read-only access**.
+1. Select the **Notification frequency** option and choose the frequency.
+1. **Apply** the changes.
+
+### Remove a partner administrator
+
+To revoke a partner administrator's or reader's access, delete the user from the billing account. After access is revoked, the user can't view or manage the billing account.
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Access control (IAM)**.
+1. In the list of administrators, in the row for the user that you want to delete, select the ellipsis (**…**) symbol, and then select **Delete**.
+1. In the Delete role assignment window, select **Yes, I want to delete this partner administrator** to confirm that you want to delete the partner administrator.
+1. At the bottom of the window, select **Delete**.
+
+## Manage partner notifications
+
+Partner administrators can manage the frequency at which they receive usage notifications for their enrollments. They automatically receive weekly notifications of their unbilled balance. They can change the notification frequency to daily, weekly, or monthly, or disable notifications completely.
+
+If a user doesn't receive a notification, verify that the user's notification settings are correct with the following steps.
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Access control (IAM)**.
+1. In the list of administrators, in the row for the user that you want to edit, select the ellipsis (**…**) symbol, and then select **Edit**.
+1. In the Edit role assignment window, in the **Notification frequency** list, select a frequency.
+1. **Apply** the changes.
+
+## View and manage enrollments
+
+Partner administrators can view a list of their customer enrollments (billing profiles) in the Azure portal. Each customer's EA enrollment is represented as a billing profile to the partner.
+
+### View the enrollment list
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Billing profiles**.
+ :::image type="content" source="./media/ea-billing-administration-partners/billing-profiles.png" alt-text="Screenshot showing the Billing profiles enrollment list." lightbox="./media/ea-billing-administration-partners/billing-profiles.png" :::
+
+By default, all active enrollments are shown. You can change the status filter to view the entire list of enrollments associated with the partner organization. Then you can select an enrollment to manage.
+
+## Next steps
+
+- To view usage and charges for a specific enrollment, see the [View your usage summary details and download reports for EA enrollments](direct-ea-azure-usage-charges-invoices.md) article.
cost-management-billing Ea Portal Agreements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-agreements.md
Title: Azure EA agreements and amendments
description: This article explains how Azure EA agreements and amendments affect your Azure EA portal use. Previously updated : 11/18/2022 Last updated : 04/24/2023
The article describes how Azure EA agreements and amendments might affect your a
## Enrollment provisioning status
-The start date of a new Azure Prepayment (previously called monetary commitment) is defined by the date that the regional operations center processed it. Since Azure Prepayment orders via the Azure EA portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure EA portal.
+The start date of a new Azure Prepayment (previously called monetary commitment) is defined by the date that the regional operations center processed it. Since Azure Prepayment orders via the Azure portal are processed in the UTC time zone, you may experience some delay if your Azure Prepayment purchase order was processed in a different region. The coverage start date on the purchase order shows the start of the Azure Prepayment. The coverage start date is when the Azure Prepayment appears in the Azure portal.
## Support for enterprise customers
The start date of a new Azure Prepayment (previously called monetary commitment)
An enrollment has one of the following status values. Each value determines how you can use and access an enrollment. The enrollment status determines at which stage your enrollment is. It tells you if the enrollment needs to be activated before it can be used. Or, if the initial period has expired and you're charged for usage overage.
-**Pending** - The enrollment administrator needs to sign in to the Azure EA portal. After the administrator signs in, the enrollment switches to **Active** status.
+**Pending** - The enrollment administrator needs to sign in to the Azure portal. After the administrator signs in, the enrollment switches to **Active** status.
-**Active** - The enrollment is accessible and usable. You can create accounts and subscriptions in the Azure EA portal. Direct customers can create departments, accounts and subscriptions in the [Azure portal](https://portal.azure.com). The enrollment remains active until the enterprise agreement end date.
+**Active** - The enrollment is accessible and usable. You can create departments, accounts, and subscriptions in the [Azure portal](https://portal.azure.com). The enrollment remains active until the enterprise agreement end date.
**Indefinite Extended Term** - Indefinite extended term status occurs after the enterprise agreement end date is reached and the agreement has expired. When an agreement enters into an extended term, it doesn't receive discounted pricing. Instead, pricing is at retail rates. Before the EA enrollment reaches the enterprise agreement end date, the Enrollment Administrator should decide to:
As of August 1, 2019, new opt-out forms aren't accepted for Azure commercial cus
## Partner markup
-In the Azure EA portal, Partner Price Markup helps to enable better cost reporting for customers. The Azure EA portal shows usage and prices configured by partners for their customers.
+In the Azure portal, Partner Price Markup helps to enable better cost reporting for customers. The Azure portal shows usage and prices configured by partners for their customers.
-Markup allows partner administrators to add a percentage markup to their indirect enterprise agreements. Percentage markup applies to all Microsoft first party service information in the Azure EA portal such as: meter rates, Azure Prepayment, and orders. After the markup is published by the partner, the customer sees Azure costs in the Azure EA portal. For example, usage summary, price lists, and downloaded usage reports.
+Markup allows partner administrators to add a percentage markup to their indirect enterprise agreements. Percentage markup applies to all Microsoft first-party service information in the Azure portal, such as meter rates, Azure Prepayment, and orders. After the markup is published by the partner, the customer sees Azure costs in the Azure portal. For example, usage summary, price lists, and downloaded usage reports.
Starting in September 2019, partners can apply markup anytime during a term. They don't need to wait until the term's next anniversary to apply markup.
Microsoft won't access or utilize the provided markup and associated prices for
### How the calculation works
-The LSP provides a single percentage number in the EA portal.  All commercial information on the portal will be uplifted by the percentage provided by the LSP. Example:
+The LSP provides a single percentage number in the Azure portal. All commercial information on the portal will be uplifted by the percentage provided by the LSP. Example:
- Customer signs an EA with Azure Prepayment of USD 100,000. - The meter rate for Service A is USD 10 / Hour.
Let's look at an example. For an Azure Savings Plan commitment amount of 3.33/ho
### How to add a price markup
+**You can add price markup in the Azure portal with the following steps:**
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Billing profiles** and then select the billing profile that you want to work with.
+1. In the left navigation menu, select **Markup**.
+1. To add markup, select **Set markup**.
+1. Enter the markup percentage and select **Preview**.
+1. Review the credit and usage charges before and after the markup update.
+1. Accept the disclaimer and select **Publish** to publish the markup.
+
+The end customer can then view credit and charge details.
+
+**You can add price markup in the Azure Enterprise portal with the following steps:**
+ **Step One: Add price markup** 1. From the Enterprise Portal, select **Reports** on the left navigation.
Pricing with markup will be available to enterprise administrators immediately a
To check if an enrollment has a markup published, select **Manage** on the left navigation, and select the **Enrollment** tab. Select the enrollment box to check, and view the markup status under _Enrollment Detail_. It will display the current status of the markup feature for that EA as Disabled, Preview, or Published.
+**To check the markup status of an enrollment in the Azure portal, use the following steps:**
+
+1. In the Azure portal, sign in as a partner administrator.
+1. Search for **Cost Management + Billing** and select it.
+1. In the left navigation menu, select **Billing scopes** and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Billing profiles**.
+1. View the markup status of the enrollment.
+ ### How can the customer download usage estimates? Once partner markup is published, the indirect customer will have access to balance and charge .csv monthly files and usage detail .csv files. The usage detail files will include resource rate and extended cost. ### How can I as partner apply markup to existing EA customer(s) that was earlier with another partner?
-Partners can use the markup feature (on Azure EA) after a Change of Channel Partner is processed; no need to wait for the next anniversary term.
+Partners can use the markup feature (in the Azure EA portal or the Azure portal) after a Change of Channel Partner is processed; there's no need to wait for the next anniversary term.
## Resource Prepayment and requesting quota increases
Enterprise Administrators can assign Account Owners to prepare previously purcha
1. Select the **Download** symbol in the top-right corner of the page. 1. Find the corresponding Plan SKU part numbers with filter on column **Included Quantity** and select values greater than 0 (zero).
-Direct customer can view price sheet in Azure portal. See [view price sheet in Azure portal](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
+EA customers can view the price sheet in the Azure portal. See [view price sheet in Azure portal](ea-pricing.md#download-pricing-for-an-enterprise-agreement).
### Existing/New account owners to create new subscriptions
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 12/16/2022 Last updated : 04/24/2023
The following sections describe the limitations and capabilities of each role.
|Add or remove Department Administrators|✔|✘|✘|✔|✘|✘|✘| |View Accounts in the enrollment |✔|✔|✔|✔⁵|✔⁵|✘|✔| |Add Accounts to the enrollment and change Account Owner|✔|✘|✘|✔⁵|✘|✘|✘|
-|Purchase reservations|✔|✘|✔|✘|✘|✘|✘|
+|Purchase reservations|✔|✘⁶|✔|✘|✘|✘|✘|
|Create and manage subscriptions and subscription permissions|✘|✘|✘|✘|✘|✔|✘| - ⁴ Notification contacts are sent email communications about the Azure Enterprise Agreement. - ⁵ Task is limited to accounts in your department.
+- ⁶ The Enterprise Administrator (read only) role doesn't allow reservation purchases. However, if the EA Admin (read only) is also a subscription owner or subscription reservation purchaser, they can purchase a reservation.
## Add a new enterprise administrator
Direct EA admins can add department admins in the Azure portal. For more informa
|View department spending quotas|✔|✔|✔|✘|✘|✘|✔| |Set department spending quotas|✔|✘|✘|✘|✘|✘|✘| |View organization's EA price sheet|✔|✔|✔|✘|✘|✘|✔|
-|View usage and cost details|✔|✔|✔|✔⁶|✔⁶|✔⁷|✔|
+|View usage and cost details|✔|✔|✔|✔⁷|✔⁷|✔⁸|✔|
|Manage resources in Azure portal|✘|✘|✘|✘|✘|✔|✘| -- ⁶ Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.-- ⁷ Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
+- ⁷ Requires that the Enterprise Administrator enable **DA view charges** policy in the Enterprise portal. The Department Administrator can then see cost details for the department.
+- ⁸ Requires that the Enterprise Administrator enable **AO view charges** policy in the Enterprise portal. The Account Owner can then see cost details for the account.
## See pricing for different user roles
cost-management-billing Reservation Amortization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-amortization.md
In Cost analysis, you view costs with a metric. They include Actual cost and Amo
**Actual cost** - Shows the purchase as it appears on your bill. For example, if you bought a one-year reservation for $1200 in January 2022, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month.
-**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. With the same example above, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _unused reservation_ costs are attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource or subscription.
+**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. With the same example above, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _unused reservation_ costs are not attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource or subscription.
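+
+As a rough worked example, amortizing the $1200 one-year purchase evenly gives a daily rate of about $3.29 ($1200 / 365 days), so a 31-day month such as January shows roughly $101.92 of amortized cost.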
## View amortized costs
Another easy way to view reservation amortized cost is to use the **Reservations
## Next steps -- Read [Charge back Azure Reservation costs](charge-back-usage.md) to learn more about charge back processes.
+- Read [Charge back Azure Reservation costs](charge-back-usage.md) to learn more about charge back processes.
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
The following table shows features and corresponding SKUs.
| Mitigation flow logs| Yes| Yes | | Mitigation policies tuned to customers application | Yes| Yes | | Integration with Firewall Manager | Yes | Yes |
-| Azure Sentinel data connector and workbook | Yes | Yes |
+| Microsoft Sentinel data connector and workbook | Yes | Yes |
| Protection of resources across subscriptions in a tenant | Yes | Yes | | Public IP Standard SKU protection | Yes | Yes | | Public IP Basic SKU protection | No | Yes |
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Title: Configure Microsoft Defender for Cloud to automatically assess machines f
description: Use Microsoft Defender for Cloud to ensure your machines have a vulnerability assessment solution -- Previously updated : 04/18/2023 Last updated : 04/24/2023 # Automatically configure vulnerability assessment for your machines
defender-for-cloud Concept Gcp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-gcp-connector.md
description: Learn how the GCP connector works on Microsoft Defender for Cloud.
Previously updated : 02/09/2023 Last updated : 04/23/2023 # Microsoft Defender for Cloud's GCP connector
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
The **Azure Policy add-on for Kubernetes** collects cluster and workload configu
|--|--|--|--|--|--|--| | microsoft-defender-collector-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment. | SYS_ADMIN, <br>SYS_RESOURCE, <br>SYS_PTRACE | memory: 296Mi<br> <br> cpu: 360m | No | | microsoft-defender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No |
-| microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/limit-egress-traffic.md#microsoft-defender-for-containers) |
+| microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi  <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/outbound-rules-control-egress.md#microsoft-defender-for-containers) |
\* Resource limits aren't configurable; Learn more about [Kubernetes resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes)
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
The Defender for Storage (classic) will still continue to be supported for three
### Can I switch back to the Defender for Storage (classic)?
-Yes, using the REST API, you can return to using the Defender for Storage (classic).
+Yes, you can use the REST API to return to the Defender for Storage (classic) plan.
+
+If you want to switch back to the Defender for Storage (classic) plan, you need to do two things: first, disable the new Defender for Storage plan that's currently enabled; second, check whether any policies can re-enable the new plan and turn them off too. The two Azure built-in policies that enable the new plan are **Configure Microsoft Defender for Storage to be enabled** and **Configure basic Microsoft Defender for Storage to be enabled (Activity Monitoring only)**.
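+
+As a rough sketch of the first step, a REST call that reverts the `StorageAccounts` pricing to the classic per-transaction sub-plan could look like the following; the `api-version` and `subPlan` values are assumptions to verify against the current `Microsoft.Security/pricings` reference:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01
+
+{
+  "properties": {
+    "pricingTier": "Standard",
+    "subPlan": "PerTransaction"
+  }
+}
+```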
### How can I calculate the cost of each plan?
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections
description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multicloud resources Previously updated : 01/24/2023 Last updated : 04/23/2023
defender-for-cloud Enable Vulnerability Assessment Agentless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-vulnerability-assessment-agentless.md
Previously updated : 11/14/2022 Last updated : 04/24/2023 # Find vulnerabilities and collect software inventory with agentless scanning (Preview)
If you have Defender for Servers P2 already enabled and agentless scanning is tu
### Agentless vulnerability assessment on Azure
-To enable agentless vulnerability assessment on Azure:
+**To enable agentless vulnerability assessment on Azure**:
1. From Defender for Cloud's menu, open **Environment settings**. 1. Select the relevant subscription.
defender-for-cloud Episode Twenty Eight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-eight.md
Last updated 04/20/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Security Policy Enhancements in Defender for Cloud](episode-twenty-nine.md)
defender-for-cloud Episode Twenty Nine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-nine.md
+
+ Title: Security policy enhancements in Defender for Cloud | Defender for Cloud in the field
+
+description: Learn about security policy enhancements and dashboard in Defender for Cloud
+ Last updated : 04/23/2023++
+# Security policy enhancements in Defender for Cloud
+
+**Episode description**: In this episode of Defender for Cloud in the field, Tuval Rozner joins Yuri Diogenes to talk about the new security policy enhancements. Tuval covers the new security policy dashboard within Defender for Cloud, how to filter, and create exemptions from a single place without having to make changes in the Azure Policy dashboard. Tuval also demonstrates how to use the new dashboard and customize policies.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=1145810e-fc14-4d73-8d63-ea861aefb30b" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [01:21](/shows/mdc-in-the-field/security-policy#time=01m21s) - The rationale behind changing the security policy assignment experience
+- [02:20](/shows/mdc-in-the-field/security-policy#time=02m20s) - What's new in the security policy assignment in Defender for Cloud?
+- [04:20](/shows/mdc-in-the-field/security-policy#time=04m20s) - Demonstration
+- [12:02](/shows/mdc-in-the-field/security-policy#time=12m02s) - What's next?
+
+## Recommended resources
+ - Learn more about [managing security policies](tutorial-security-policy.md)
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
description: Learn about deploying Microsoft Defender for Endpoint from Microsof
Previously updated : 01/15/2023 Last updated : 04/24/2023 # Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Microsoft Defender for Cloud includes a bundle of recommendations that are avail
## Prerequisites -- Add the [Required FQDN/application rules for Azure policy](../aks/limit-egress-traffic.md#azure-policy).
+- Add the [Required FQDN/application rules for Azure policy](../aks/outbound-rules-control-egress.md#azure-policy).
- (For non AKS clusters) [Connect an existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). ## Enable Kubernetes data plane hardening
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 01/10/2023 Last updated : 04/23/2023 zone_pivot_groups: connect-aws-accounts
The native cloud connector requires:
> [!NOTE] > Each plan has its own requirements for permissions, and might incur charges.
- :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="The select plans tab is where you choose which Defender for Cloud capabilities to enable for this AWS account.":::
+ :::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="The select plans tab is where you choose which Defender for Cloud capabilities to enable for this AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-plans-selection.png":::
> [!IMPORTANT] > To present the current status of your recommendations, the CSPM plan queries the AWS resource APIs several times a day. These read-only API calls incur no charges, but they *are* registered in CloudTrail if you've enabled a trail for read events. As explained in [the AWS documentation](https://aws.amazon.com/cloudtrail/pricing/), there are no additional charges for keeping one trail. If you're exporting the data out of AWS (for example, to an external SIEM), this increased volume of calls might also increase ingestion costs. In such cases, we recommend filtering out the read-only calls from the Defender for Cloud user or role ARN: `arn:aws:iam::[accountId]:role/CspmMonitorAws` (this is the default role name, confirm the role name configured on your account).
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 01/25/2023 Last updated : 04/23/2023 zone_pivot_groups: connect-gcp-accounts
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 04/20/2023 Last updated : 04/24/2023 # What's new in Microsoft Defender for Cloud?
Updates in April include:
- [Alerts automatic export to Log Analytics workspace have been deprecated](#alerts-automatic-export-to-log-analytics-workspace-have-been-deprecated) - [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) - [New Azure Active Directory authentication-related recommendations for Azure Data Services](#new-azure-active-directory-authentication-related-recommendations-for-azure-data-services)-
+- [Two recommendations related to missing Operating System (OS) updates were released to GA](#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga)
### Agentless Container Posture in Defender CSPM (Preview) The new Agentless Container Posture (Preview) capabilities are available as part of the Defender CSPM (Cloud Security Posture Management) plan.
We have added four new Azure Active Directory authentication-related recommendat
| Azure Synapse Workspace authentication mode should be Azure Active Directory Only | Azure Active Directory only authentication methods improves security by ensuring that Synapse Workspaces exclusively require Azure AD identities for authentication. [Learn more](https://aka.ms/Synapse). | [Synapse Workspaces should use only Azure Active Directory identities for authentication](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f2158ddbe-fefa-408e-b43f-d4faef8ff3b8) | | Azure Database for MySQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for MySQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for MySQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f146412e9-005c-472b-9e48-c87b72ac229e) | | Azure Database for PostgreSQL should have an Azure Active Directory administrator provisioned | Provision an Azure AD administrator for your Azure Database for PostgreSQL to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services | [An Azure Active Directory administrator should be provisioned for PostgreSQL servers](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2fb4dec045-250a-48c2-b5cc-e0c4eec8b5b4) |+
+### Two recommendations related to missing Operating System (OS) updates were released to GA
+
+The recommendations `System updates should be installed on your machines (powered by Update management center)` and `Machines should be configured to periodically check for missing system updates` have been released for General Availability.
+
+To use the new recommendations, you need to:
+
+- Connect your non-Azure machines to Arc.
+- [Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment). You can use the [Fix button](implement-security-recommendations.md) in the new recommendation, `Machines should be configured to periodically check for missing system updates`, to fix the recommendation.
+
+After you complete these steps, you can remove the old recommendation, `System updates should be installed on your machines`, by disabling it from Defender for Cloud's built-in initiative in Azure Policy.
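As a sketch of the second step above, periodic assessment can also be enabled per machine by setting the patch assessment mode on the VM. This command is an illustration rather than a step taken from the release note, and the resource group and VM names are placeholders.

```azurecli
# Hypothetical example: enable periodic assessment on an Azure Windows VM
# (for Linux VMs, set osProfile.linuxConfiguration.patchSettings.assessmentMode instead)
az vm update --resource-group <resource-group> --name <vm-name> \
    --set osProfile.windowsConfiguration.patchSettings.assessmentMode=AutomaticByPlatform
```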
+
+The two versions of the recommendations:
+
+- [`System updates should be installed on your machines`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc242
9af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+- [`System updates should be installed on your machines (powered by Update management center)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%
22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)
+
+will both be available until the [Log Analytics agent is deprecated on August 31, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), at which point the older version (`System updates should be installed on your machines`) of the recommendation will also be deprecated. Both recommendations return the same results and are available under the same control, `Apply system updates`.
+
+The new recommendation, `System updates should be installed on your machines (powered by Update management center)`, has a remediation flow available through the Fix button, which you can use to remediate results through Update Management Center (Preview). This remediation process is still in preview.
+
+The new recommendation, `System updates should be installed on your machines (powered by Update management center)`, isn't expected to affect your Secure Score, because it returns the same results as the old recommendation, `System updates should be installed on your machines`.
+
+The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) will have a negative effect on your Secure Score. You can remediate the effect with the available [Fix button](implement-security-recommendations.md).
+ ## March 2023 Updates in March include:
defender-for-cloud Support Matrix Cloud Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md
This article indicates which Defender for Cloud features are supported in Azure
In the support table, **NA** indicates that the feature is not available.
-**Feature/Plan** | **Details** | **Azure** | **Azure Government** | **Azure China**<br/><br/>**21Vianet**
- | | | |
-**Foundational CSPM** | | | |
+**Feature/Plan** | **Azure** | **Azure Government** | **Azure China**<br/><br/>**21Vianet**
+ | | |
+**FOUNDATIONAL CSPM FEATURES** | | |
[Continuous export](./continuous-export.md) | GA | GA | GA [Workflow automation](./workflow-automation.md) | GA | GA | GA [Recommendation exemption rules](./exempt-resource.md) | Public preview | NA | NA
In the support table, **NA** indicates that the feature is not available.
[Asset inventory](./asset-inventory.md) | GA | GA | GA [Azure Workbooks support](./custom-dashboards-azure-workbooks.md) | GA | GA | GA [Microsoft Defender for Cloud Apps integration](./other-threat-protections.md#display-recommendations-in-microsoft-defender-for-cloud-apps) | GA | GA | GA
+**DEFENDER FOR CLOUD PLANS** | | |
+**[Agentless discovery for Kubernetes](concept-agentless-containers.md)** | Public preview | NA | NA
+**[Agentless vulnerability assessments for container images](concept-agentless-containers.md)**<br/><br/> Including registry scanning (up to 20 unique images per billable resource) | Public preview | NA | NA
**[Defender CSPM](concept-cloud-security-posture-management.md)** | GA | NA | NA **[Defender for APIs](defender-for-apis-introduction.md)** | Public preview | NA | NA **[Defender for App Service](defender-for-app-service-introduction.md)** | GA | NA | NA **[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)** | Public preview | NA | NA
-**[Defender for Azure SQL database servers](defender-for-sql-introduction.md)**<br/><br/> Partial GA in Vianet21<br/> - A subset of alerts/vulnerability assessments is available.<br/>- Behavioral threat protection isn't available. | GA | GA | GA
-**[Defender for Containers](defender-for-containers-introduction.md)**| GA | GA | GA
-[Azure Arc extension for Kubernetes clusters/servers/data services](defender-for-kubernetes-azure-arc.md): | Public preview | NA | NA
-Runtime visibility of vulnerabilities in container images | Public preview | NA | NA
+**[Defender for Azure SQL database servers](defender-for-sql-introduction.md)**<br/><br/> Partial GA in 21Vianet<br/> - A subset of alerts/vulnerability assessments is available.<br/>- Behavioral threat protection isn't available.| GA | GA | GA
+**[Defender for Containers](defender-for-containers-introduction.md)**<br/><br/>Support for Arc-enabled Kubernetes clusters (and therefore AWS EKS too) is in public preview and not available on Azure Government.<br/>Run-time visibility of vulnerabilities in container images is also a preview feature. | GA | GA | GA
+[Defender extension for Azure Arc-enabled Kubernetes clusters/servers/data services](defender-for-kubernetes-azure-arc.md). Requires Defender for Containers/Defender for Kubernetes. | Public preview | NA | NA
**[Defender for DNS](defender-for-dns-introduction.md)** | GA | GA | GA **[Defender for Key Vault](./defender-for-key-vault-introduction.md)** | GA | NA | NA
-[Defender for Kubernetes](./defender-for-kubernetes-introduction.md)<br/><br/> Defender for Kubernetes is deprecated and doesn't include new features. [Learn more](defender-for-kubernetes-introduction.md) | GA | GA | GA
+**[Defender for Kubernetes](./defender-for-kubernetes-introduction.md)**<br/><br/> Defender for Kubernetes is deprecated and replaced by Defender for Containers. Support for Azure Arc-enabled clusters is in public preview and not available in government clouds. [Learn more](defender-for-kubernetes-introduction.md). | GA | GA | GA
**[Defender for open-source relational databases](defender-for-databases-introduction.md)** | GA | NA | NA **[Defender for Resource Manager](./defender-for-resource-manager-introduction.md)** | GA | GA | GA
-**[Defender for Servers](plan-defender-for-servers.md)** | | | |
+**DEFENDER FOR SERVERS FEATURES** | | |
[Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA [Adaptive network hardening](./adaptive-network-hardening.md) | GA | GA | NA
-[Docker host hardening](./harden-docker-hosts.md) | | GA | GA | GA
+[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
[Integrated Qualys scanner](./deploy-vulnerability-assessment-vm.md) | GA | NA | NA [Compliance dashboard/reports](./regulatory-compliance-dashboard.md)<br/><br/> Compliance standards might differ depending on the cloud type.| GA | GA | GA
-[Defender for Endpoint integration](./integration-defender-for-endpoint.md) | | GA | GA | NA
+[Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA
[Connect AWS account](./quickstart-onboard-aws.md) | GA | NA | NA [Connect GCP project](./quickstart-onboard-gcp.md) | GA | NA | NA
-**[Defender for Storage](./defender-for-storage-introduction.md)**<br/><br/> Some alerts in Defender for Storage are in public preview. | GA | GA | NA
+**[Defender for Storage](./defender-for-storage-introduction.md)**<br/><br/> Some threat protection alerts for Defender for Storage are in public preview. | GA | GA (activity monitoring) | NA
**[Defender for SQL servers on machines](./defender-for-sql-introduction.md)** | GA | GA | NA
+**[Kubernetes workload protection](kubernetes-workload-protections.md)** | GA | GA | GA
**[Microsoft Sentinel bi-directional alert synchronization](../sentinel/connect-azure-security-center.md)** | Public preview | NA | NA
defender-for-cloud Support Matrix Defender For Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-cloud.md
Last updated 04/13/2023
# Support matrices for Defender for Cloud
-This article indicates the Azure clouds, Azure services, and client operating systems that are supported by Microsoft Defender for Cloud.
+This article describes the Azure services and client operating systems that are supported by Microsoft Defender for Cloud. For Azure cloud support, review [this article](support-matrix-cloud-environment.md).
## Security benefits for Azure services
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
**Estimated date for change: April 2023**
+Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative.
We're announcing the full deprecation of support for the [PCI DSS](/azure/compliance/offerings/offering-pci-dss) standard/initiative in Azure China 21Vianet.
-Legacy PCI DSS v3.2.1 and legacy SOC TSP are set to be fully deprecated and replaced by [SOC 2 Type 2](/azure/compliance/offerings/offering-soc-2) initiative and [PCI DSS v4](/azure/compliance/offerings/offering-pci-dss) initiative.
Learn how to [Customize the set of standards in your regulatory compliance dashboard](update-regulatory-compliance-packages.md). #### Deprecation of identity recommendations V1
The following security recommendations will be released as GA and replace the V1
We're announcing the release of Vulnerability Assessment for Linux images in Azure container registries powered by Microsoft Defender Vulnerability Management (MDVM) in Defender CSPM. This release includes daily scanning of images. Findings used in the Security Explorer and attack paths will rely on MDVM Vulnerability Assessment instead of the Qualys scanner.
-The existing recommendation "Container registry images should have vulnerability findings resolved" is replaced by a new recommendation powered by MDVM:
+The existing recommendation "Container registry images should have vulnerability findings resolved" will be replaced by a new recommendation powered by MDVM:
|Recommendation | Description | Assessment Key| |--|--|--| | Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)| Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to  improving your security posture, significantly reducing the attack surface for your containerized workloads. |dbd0cb49-b563-45e7-9724-889e799fa648 <br> is replaced by c0b7cfc6-3172-465a-b378-53c7ff2cc0d5
-The recommendation "Running container images should have vulnerability findings resolved" (assessment key 41503391-efa5-47ee-9282-4eff6131462c) is temporarily removed and will be replaced soon by a new recommendation powered by MDVM.
+The recommendation "Running container images should have vulnerability findings resolved" (assessment key 41503391-efa5-47ee-9282-4eff6131462c) will be temporarily removed and will be replaced soon by a new recommendation powered by MDVM.
Learn more about [Microsoft Defender Vulnerability Management (MDVM)](https://learn.microsoft.com/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management).
Learn more about [Microsoft Defender Vulnerability Management (MDVM)](https://le
**Estimated date for change: May 2023**
- The current container recommendations in Defender for Containers are renamed as follows:
+ The current container recommendations in Defender for Containers will be renamed as follows:
|Recommendation | Description | Assessment Key| |--|--|--|
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This procedure describes how to create a virtual machine by using Hyper-V.
1. Enter a name for the virtual machine.
-1. Select **Specify Generation** > **Generation 1** or **Generation 2**.
+1. Select **Generation** and set it to **Generation 1**, and then select **Next**.
1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), in standard RAM denominations (for example, 8192, 16384, or 32768). Do not enable **Dynamic Memory**.
devtest-labs Devtest Lab Attach Detach Data Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-attach-detach-data-disk.md
description: Learn how to attach or detach a data disk for a lab virtual machine
Previously updated : 03/29/2022 Last updated : 04/24/2023 # Attach or detach a data disk for a lab virtual machine in Azure DevTest Labs
You can also delete a detached data disk, by selecting **Delete** from the conte
## Next steps
-For information about transferring data disks for claimable lab VMs, see [Transfer the data disk](devtest-lab-add-claimable-vm.md#transfer-the-data-disk).
+For information about transferring data disks for claimable lab VMs, see [Transfer the data disk](devtest-lab-add-claimable-vm.md#transfer-the-data-disk).
devtest-labs Devtest Lab Auto Startup Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-startup-vm.md
description: Learn how to configure auto-start settings for VMs in a lab. This s
Previously updated : 03/29/2022 Last updated : 04/24/2023 # Automatically start lab VMs with auto-start in Azure DevTest Labs
After you configure the auto-start policy, follow these steps for each VM that y
:::image type="content" source="./media/devtest-lab-auto-startup-vm/select-auto-start.png" alt-text="Screenshot of selecting Yes on the Auto-start page.":::
+1. On the VM Overview page, your VM shows **Opted-in** status for auto-start.
+
+    :::image type="content" source="media/devtest-lab-auto-startup-vm/vm-overview-auto-start.png" alt-text="Screenshot showing a VM with Opted-in status for auto-start." lightbox="media/devtest-lab-auto-startup-vm/vm-overview-auto-start.png":::
+
+ You can also see the auto-start status for the VM on the lab Overview page.
+
+ :::image type="content" source="media/devtest-lab-auto-startup-vm/lab-overview-auto-start-status.png" alt-text="Screenshot showing the lab overview page, with VM auto-start set to Yes." lightbox="media/devtest-lab-auto-startup-vm/lab-overview-auto-start-status.png":::
+ ## Next steps - [Manage auto shutdown policies for a lab in Azure DevTest Labs](devtest-lab-auto-shutdown.md)
devtest-labs Use Command Line Start Stop Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/use-command-line-start-stop-virtual-machines.md
description: Use Azure PowerShell or Azure CLI command lines and scripts to star
Previously updated : 03/29/2022 Last updated : 04/24/2023 ms.devlang: azurecli
When you want to script or automate start or stop for lab VMs, use PowerShell or
## Prerequisites - A [lab VM in DevTest Labs](devtest-lab-add-vm.md).-- For Azure PowerShell, the [Az module](/powershell/azure/new-azureps-module-az) installed on your workstation. Make sure you have the latest version. If necessary, run `Update-Module -Name Az` to update the module.
+- For Azure PowerShell, the [Az PowerShell module](/powershell/azure/new-azureps-module-az) installed on your workstation. Make sure you have the latest version. If necessary, run `Update-Module -Name Az` to update the module.
- For Azure CLI, [Azure CLI](/cli/azure/install-azure-cli) installed on your workstation. ## Azure PowerShell script
The following PowerShell script starts or stops a VM in a lab by using [Invoke-A
## Azure CLI script
-The following script provides [Azure CLI](/cli/azure/get-started-with-azure-cli) commands for starting or stopping a lab VM. The variables in this script are for a Windows environment. Bash or other environments have slight variations.
+The following script provides [Azure CLI](/cli/azure/get-started-with-azure-cli) commands for starting or stopping a lab VM. The variables in this script are for a Windows environment, like a command prompt. Bash or other environments have slight variations.
1. Provide appropriate values for *`<Subscription ID>`*, *`<lab name>`*, *`<VM name>`*, and the *`<Start or Stop>`* action to take. ```azurecli
- set SUBSCIPTIONID=<Subscription ID>
+ set SUBSCRIPTIONID=<Subscription ID>
set DEVTESTLABNAME=<lab name> set VMNAME=<VM name> set ACTION=<Start or Stop>
The following script provides [Azure CLI](/cli/azure/get-started-with-azure-cli)
```azurecli az login
- REM az account set --subscription %SUBSCIPTIONID%
+ REM az account set --subscription %SUBSCRIPTIONID%
``` 1. Get the name of the resource group that contains the lab.
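As a hedged sketch of this step, you could look up the resource group from the CLI by filtering on the lab name. The JMESPath query below is an assumption, not a command from the original article; it reuses the `DEVTESTLABNAME` variable set earlier.

```azurecli
REM Hypothetical lookup: find the resource group that contains the lab
az resource list --resource-type "Microsoft.DevTestLab/labs" --query "[?name=='%DEVTESTLABNAME%'].resourceGroup" --output tsv
```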
energy-data-services How To Set Up Private Links https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-set-up-private-links.md
This article describes how to set up a private endpoint for Azure Data Manager f
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
+> [!NOTE]
+> Terraform currently does not support private endpoint creation for Azure Data Manager for Energy.
+ ## Prerequisites [Create a virtual network](../virtual-network/quick-create-portal.md) in the same subscription as the Azure Data Manager for Energy Preview instance. This virtual network will allow automatic approval of the Private Link endpoint.
Use the following steps to create a private endpoint for an existing Azure Data
|**Subscription**| Your subscription| |**Resource type**| **Microsoft.OpenEnergyPlatform/energyServices**| |**Resource**| Your Azure Data Manager for Energy Preview instance|
- |**Target sub-resource**| **MEDS** (for Azure Data Manager for Energy Preview) by default|
+ |**Target sub-resource**| **Azure Data Manager for Energy** (for Azure Data Manager for Energy Preview) by default|
[![Screenshot of resource information for a private endpoint.](media/how-to-manage-private-links/private-links-4-resource.png)](media/how-to-manage-private-links/private-links-4-resource.png#lightbox)
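The same resource settings can be supplied from the CLI. This is a hedged sketch, not a step from the original article: the `--group-id` value is an assumption, so use the default target sub-resource shown in the portal for your instance.

```azurecli
# Hedged sketch: create the private endpoint with the resource settings above
az network private-endpoint create \
    --name <private-endpoint-name> \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name> \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OpenEnergyPlatform/energyServices/<instance-name>" \
    --group-id <target-sub-resource> \
    --connection-name <connection-name>
```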
Use the following steps to create a private endpoint for an existing Azure Data
## Next steps <!-- Add a context sentence for the following links -->
-To learn more about using customer Lockbox as an interface to review and approve or reject access requests.
+Learn more about using Customer Lockbox as an interface to review and approve or reject access requests:
> [!div class="nextstepaction"] > [Use Lockbox for Azure Data Manager for Energy Preview](how-to-create-lockbox.md)
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
When you deploy an ExpressRoute gateway, Azure manages the compute and functions
* Count of routes learned from peers * Frequency of routes changed * Number of VMs in the virtual network
-* Count of active flows
+* Active flows
* Max flows created per second It's highly recommended you set alerts for each of these metrics so that you're aware of when your gateway could be seeing performance issues.
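As a hedged sketch of such an alert, the following command creates a metric alert on the gateway's active flows. The metric name and threshold are assumptions rather than values from this article; check the metric names available on your ExpressRoute virtual network gateway before using it.

```azurecli
# Hypothetical example: alert when active flows on the gateway exceed a threshold
az monitor metrics alert create \
    --name ergw-active-flows-alert \
    --resource-group <resource-group> \
    --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name> \
    --condition "avg ExpressRouteGatewayActiveFlows > 50000" \
    --window-size 5m --evaluation-frequency 1m
```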
Set up your ExpressRoute connection.
* [Create and modify a circuit](expressroute-howto-circuit-arm.md) * [Create and modify peering configuration](expressroute-howto-routing-arm.md)
-* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
external-attack-surface-management Data Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md
To accurately present the infrastructure that matters most to your organization,
<br>Attack Surface Insights provide an actionable set of results based on the key insights delivered through dashboards in Defender EASM. This option provides less granular metadata on each asset; instead, it categorizes assets based on the corresponding insight(s) and provides the high-level context required to investigate further. This option is ideal for those who want to integrate these pre-determined insights into custom reporting workflows in conjunction with data from other tools.
-## **Configuring data connections**
+## **Configuration overviews**
**Accessing data connections**
To accurately present the infrastructure that matters most to your organization,
**Connection prerequisites** <br>To successfully create a data connection, users must first ensure that they have completed the required steps to grant Defender EASM permission to the tool of their choice. This process enables the application to ingest our exported data and provides the authentication credentials needed to configure the connection.
-**Configuring Log Analytics permissions via UI**
+## Configuring Log Analytics permissions
1. Open the Log Analytics workspace that will ingest your Defender EASM data, or [create a new workspace](/azure/azure-monitor/logs/quick-create-workspace?tabs=azure-portal).
-1. Select **Access control (IAM)** from the left-hand navigation pane. For more information on access control, see [identity documentation](/azure/cloud-adoption-framework/decision-guides/identity/).
+2. Select **Access control (IAM)** from the left-hand navigation pane. For more information on access control, see [identity documentation](/azure/cloud-adoption-framework/decision-guides/identity/).
![Screenshot of Log Analytics Access control.](media/data-connections/data-connector-2.png)
-1. On this page, select **+Add** to create a new role assignment.
-1. From the **Role** tab, select **Contributor**. Click **Next**.
-1. Open the **Members** tab. Click **+ Select members** to open a configuration pane. Search for **"EASM API"** and click on the value in the members list. Once done, click **Select**, then **Review + assign.**
-1. Once the role assignment has been created, select **Agents** from the **Settings** section of the left-hand navigation menu.
+3. On this page, select **+Add** to create a new role assignment.
+4. From the **Role** tab, select **Contributor**. Click **Next**.
+5. Open the **Members** tab. Click **+ Select members** to open a configuration pane. Search for **"EASM API"** and click on the value in the members list. Once done, click **Select**, then **Review + assign.**
+6. Once the role assignment has been created, select **Agents** from the **Settings** section of the left-hand navigation menu.
![Screenshot of Log Analytics agents.](media/data-connections/data-connector-3.png)
-1. Expand the **Log Analytics agent instructions** section to view your Workspace ID and Primary key. These values will be used to set up your data connection. Save the values in the following format: *WorkspaceId=XXX;ApiKey=YYY*
+7. Expand the **Log Analytics agent instructions** section to view your Workspace ID and Primary key. These values will be used to set up your data connection. Save the values in the following format: *WorkspaceId=XXX;ApiKey=YYY*
+
+Note that use of this data connection is subject to the pricing structure of Log Analytics. For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-**Configuring Data Explorer permissions**
+
+
+## Configuring Data Explorer permissions
1. Open the Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](/azure/data-explorer/create-cluster-database-portal). 1. Select **Databases** in the Data section of the left-hand navigation menu.
To accurately present the infrastructure that matters most to your organization,
-**Add a data connection**
+## Add a data connection
<br>Users can connect their Defender EASM data to either Log Analytics or Azure Data Explorer. To do so, select **Add connection** for the appropriate tool from the Data Connections page. A configuration pane will open on the right-hand side of the Data Connections screen. The following four fields are required:
A configuration pane will open on the right-hand side of the Data Connections sc
Once all four fields are configured, select **Add** to create the data connection. At this point, the Data Connections page will display a banner that indicates the resource has been successfully created and data will begin populating within 30 minutes. Once connections are created, they will be listed under the applicable tool on the main Data Connections page.
-**Edit or delete a data connection**
+## Edit or delete a data connection
<br>Users can edit or delete a data connection. For example, you may notice that a connection is listed as "Disconnected" and would therefore need to re-enter the configuration details to fix the issue. To edit or delete a data connection:
To edit or delete a data connection:
• **Updated**: the date and time that the data connection was last updated. ![Screenshot of test connections.](media/data-connections/data-connector-9.png)
-1. From this page, users can elect to reconnect, edit or delete their data connection.<br>
- • **Reconnect**: this option attempts to validate the data connection without any changes to the configuration. This option is best for those who have validated the authentication credentials used for the data connection.<br>
- • **Edit**: this option allows users to change the configuration for the data connection.<br>
- • **Delete**: this option deletes the data connection.
-
---
+1. From this page, users can elect to reconnect, edit or delete their data connection.
+ - **Reconnect**: this option attempts to validate the data connection without any changes to the configuration. This option is best for those who have validated the authentication credentials used for the data connection.
+ - **Edit**: this option allows users to change the configuration for the data connection.
+ - **Delete**: this option deletes the data connection.
+
firewall Fqdn Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/fqdn-tags.md
The following table shows the current FQDN tags you can use. Microsoft maintains
|AzureHDInsight|Allows outbound access for HDInsight platform traffic. This tag doesn't cover customer-specific Storage or SQL traffic from HDInsight. Enable these using [Service Endpoints](../virtual-network/tutorial-restrict-network-access-to-resources.md) or add them manually.| |WindowsVirtualDesktop|Allows outbound Azure Virtual Desktop (formerly Windows Virtual Desktop) platform traffic. This tag doesn't cover deployment-specific Storage and Service Bus endpoints created by Azure Virtual Desktop. Additionally, DNS and KMS network rules are required. For more information about integrating Azure Firewall with Azure Virtual Desktop, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](protect-azure-virtual-desktop.md).| |AzureKubernetesService (AKS)|Allows outbound access to AKS. For more information, see [Use Azure Firewall to protect Azure Kubernetes Service (AKS) Deployments](protect-azure-kubernetes-service.md).|
-|Office365<br><br>For example: Office365.Skype.Optimize|Several Office 365 tags are available to allow outbound access by Office 365 product and category. For more information, see [Office 365 URLs and IP address ranges](/microsoft-365/enterprise/urls-and-ip-address-ranges).|
+|Office365<br><br>For example: Office365.Skype.Optimize|Several Office 365 tags are available to allow outbound access by Office 365 product and category. For more information, see [Use Azure Firewall to protect Office 365](protect-office-365.md).|
|Windows365|Allows outbound communication to Windows 365, excluding network endpoints for Microsoft Intune. To allow outbound communication to port 5671, create a separated network rule. For more information, see Windows 365 [Network requirements](/windows-365/enterprise/requirements-network).| > [!NOTE]
firewall Protect Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-office-365.md
+
+ Title: Use Azure Firewall to protect Office 365
+description: Learn how to use Azure Firewall to protect Office 365
++++ Last updated : 03/28/2023+++
+# Use Azure Firewall to protect Office 365
+
+You can use the Azure Firewall built-in Service Tags and FQDN tags to allow outbound communication to [Office 365 endpoints and IP addresses](/microsoft-365/enterprise/urls-and-ip-address-ranges).
+
+## Tags creation
+
+For each Office 365 product and category, Azure Firewall automatically retrieves the required endpoints and IP addresses, and creates tags accordingly:
+
+- Tag name: all names begin with **Office365** and are followed by:
+ - Product: Exchange / Skype / SharePoint / Common
+ - Category: Optimize / Allow / Default
+ - Required / Not required (optional)
+- Tag type:
+ - **FQDN tag** represents only the required FQDNs for the specific product and category that communicate over HTTP/HTTPS (ports 80/443) and can be used in Application Rules to secure traffic to these FQDNs and protocols.
+ - **Service tag** represents only the required IPv4 addresses and ranges for the specific product and category and can be used in Network Rules to secure traffic to these IP addresses and to any required port.
+
+You should expect a tag to be available for a specific combination of product, category, and required/not required in the following cases:
+- For a Service Tag: the specific combination exists and has required IPv4 addresses listed.
+- For an FQDN tag: the specific combination exists and has required FQDNs listed that communicate over ports 80/443.
+
+Tags are updated automatically with any modifications to the required IPv4 addresses and FQDNs. New tags might be created automatically in the future as well if new combinations of product and category are added.
+
+Network rule collection:
+
+Application rule collection:
+
+## Rules configuration
+
+These built-in tags provide granularity to allow and protect the outbound traffic to Office 365 based on your preferences and usage. You can allow outbound traffic only to specific products and categories for a specific source. You can also use [Azure Firewall Premium's TLS inspection and IDPS](premium-features.md) to monitor some of the traffic: for example, traffic to endpoints in the Default category, which can be treated as normal internet outbound traffic. For more information about Office 365 endpoint categories, see [New Office 365 endpoint categories](/microsoft-365/enterprise/microsoft-365-network-connectivity-principles#new-office-365-endpoint-categories).
+
+When you create the rules, ensure you define the required TCP ports (for network rules) and protocols (for application rules) as required by Office 365. If a specific combination of product, category, and required/not required has both a Service Tag and an FQDN tag, create representative rules for both tags to fully cover the required communication.
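As a hedged sketch of what such a rule pair might look like from the CLI: the collection names, priorities, source range, ports, and the exact `Office365.Exchange.Optimize` tag names are assumptions based on the naming convention above, not values confirmed by this article.

```azurecli
# Application rule using an Office 365 FQDN tag (HTTP/HTTPS traffic)
az network firewall application-rule create \
    --firewall-name <firewall-name> --resource-group <resource-group> \
    --collection-name O365-App-Rules --name Allow-O365-Exchange-Optimize \
    --priority 200 --action Allow \
    --source-addresses 10.0.0.0/24 \
    --protocols Http=80 Https=443 \
    --fqdn-tags Office365.Exchange.Optimize

# Network rule using the matching Office 365 service tag for the required ports
az network firewall network-rule create \
    --firewall-name <firewall-name> --resource-group <resource-group> \
    --collection-name O365-Net-Rules --name Allow-O365-Exchange-Optimize \
    --priority 200 --action Allow \
    --source-addresses 10.0.0.0/24 \
    --protocols TCP --destination-ports 443 587 \
    --destination-addresses Office365.Exchange.Optimize
```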
+
+## Limitations
+
+If a specific combination of product, category, and required/not required has only FQDNs required, but uses TCP ports other than 80/443, an FQDN tag isn't created for this combination. Application rules can only cover HTTP, HTTPS, or MSSQL. To allow communication to these FQDNs, create your own network rules with these FQDNs and ports.
+For more information, see [Use FQDN filtering in network rules](fqdn-filtering-network-rules.md).
+
+## Next steps
+
+- Learn more about Office 365 network connectivity: [Microsoft 365 network connectivity overview](/microsoft-365/enterprise/microsoft-365-networking-overview)
+
firewall Service Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/service-tags.md
Azure Firewall service tags can be used in the network rules destination field.
Azure Firewall supports the following Service Tags to use in Azure Firewall Network rules: - Tags for various Microsoft and Azure services listed in [Virtual network service tags](../virtual-network/service-tags-overview.md#available-service-tags).-- Tags for the required IP addresses of Office365 services, split by Office365 product and category. You must define the TCP/UDP ports specified in the [Office 365 documentation](/microsoft-365/enterprise/urls-and-ip-address-ranges) inside your rules.
+- Tags for the required IP addresses of Office365 services, split by Office365 product and category. You must define the TCP/UDP ports in your rules. For more information, see [Use Azure Firewall to protect Office 365](protect-office-365.md).
## Configuration
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
compatible. The following table shows a list of supported operating systems on A
| Publisher | Name | Versions | | | -- | - |
+| Alma | AlmaLinux | 9 |
| Amazon | Linux | 2 | | Canonical | Ubuntu Server | 14.04 - 20.x | | Credativ | Debian | 8 - 10.x |
-| Microsoft | Windows Server | 2012 - 2022 |
+| Microsoft | CBL-Mariner | 1 - 2 |
| Microsoft | Windows Client | Windows 10 |
-| Oracle | Oracle-Linux | 7.x-8.x |
-| OpenLogic | CentOS | 7.3 -8.x |
-| Red Hat | Red Hat Enterprise Linux\* | 7.4 - 8.x |
+| Microsoft | Windows Server | 2012 - 2022 |
+| Oracle | Oracle-Linux | 7.x - 8.x |
+| OpenLogic | CentOS | 7.3 - 8.x |
+| Red Hat | Red Hat Enterprise Linux\* | 7.4 - 9.x |
+| Rocky | Rocky Linux | 9 |
| SUSE | SLES | 12 SP3-SP5, 15.x | \* Red Hat CoreOS isn't supported.
governance General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md
This issue occurs when a cluster egress is locked down.
Ensure that the domains and ports mentioned in the following article are open: -- [Required outbound network rules and fully qualified domain names (FQDNs) for AKS clusters](../../../aks/limit-egress-traffic.md#required-outbound-network-rules-and-fqdns-for-aks-clusters)
+- [Required outbound network rules and fully qualified domain names (FQDNs) for AKS clusters](../../../aks/outbound-rules-control-egress.md#required-outbound-network-rules-and-fqdns-for-aks-clusters)
### Scenario: The add-on is unable to reach the Azure Policy service endpoint because of the aad-pod-identity configuration
hdinsight Apache Hadoop Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-introduction.md
description: An introduction to HDInsight, and the Apache Hadoop technology stac
Previously updated : 03/31/2022 Last updated : 04/24/2023 #Customer intent: As a data analyst, I want understand what is Hadoop and how it is offered in Azure HDInsight so that I can decide on using HDInsight instead of on premises clusters.
hdinsight Apache Hadoop Use Hive Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-beeline.md
Title: Use Apache Beeline with Apache Hive - Azure HDInsight
description: Learn how to use the Beeline client to run Hive queries with Hadoop on HDInsight. Beeline is a utility for working with HiveServer2 over JDBC. Previously updated : 11/18/2021 Last updated : 04/24/2023 # Use the Apache Beeline client with Apache Hive
hdinsight Hdinsight Hadoop Linux Use Ssh Unix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-linux-use-ssh-unix.md
description: "You can access HDInsight using Secure Shell (SSH). This document p
Previously updated : 03/31/2022 Last updated : 04/24/2023 # Connect to HDInsight (Apache Hadoop) using SSH
hdinsight Hdinsight Hadoop Use Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-blob-storage.md
description: Learn how to query data from Azure storage and Azure Data Lake Stor
Previously updated : 03/31/2022 Last updated : 04/24/2023 # Use Azure storage with Azure HDInsight clusters
hdinsight Hdinsight Hadoop Use Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2.md
description: Learn how to use Azure Data Lake Storage Gen2 with Azure HDInsight
Previously updated : 03/31/2022 Last updated : 04/24/2023 # Use Azure Data Lake Storage Gen2 with Azure HDInsight clusters
hdinsight Hdinsight Restrict Outbound Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-restrict-outbound-traffic.md
description: Learn how to configure outbound network traffic restriction for Azu
Previously updated : 03/31/2022 Last updated : 04/24/2023 # Configure outbound network traffic for Azure HDInsight clusters using Firewall
hdinsight Apache Kafka Mirror Maker 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirror-maker-2.md
Title: Use MirrorMaker 2 to replicate Apache Kafka topics - Azure HDInsight
-description: Learn how to use Use MirrorMaker 2 to replicate Apache Kafka topics
+ Title: Use MirrorMaker 2 to migrate Kafka clusters between different Azure HDInsight versions - Azure HDInsight
+description: Learn how to use MirrorMaker 2 to migrate Kafka clusters between different Azure HDInsight versions
Previously updated : 03/10/2023 Last updated : 04/25/2023
-# Use MirrorMaker 2 to replicate Apache Kafka topics with Kafka on HDInsight
+# Use MirrorMaker 2 to migrate Kafka clusters between different Azure HDInsight versions
Learn how to use Apache Kafka's mirroring feature to replicate topics to a secondary cluster. You can run mirroring as a continuous process, or intermittently, to migrate data from one cluster to another. In this article, you use mirroring to replicate topics between two HDInsight clusters. These clusters are in different virtual networks in different datacenters.
-> [!WARNING]
-> Don't use mirroring as a means to achieve fault-tolerance. The offset to items within a topic are different between the primary and secondary clusters, so clients can't use the two interchangeably. If you are concerned about fault tolerance, you should set replication for the topics within your cluster. For more information, see [Get started with Apache Kafka on HDInsight](apache-kafka-get-started.md).
+> [!NOTE]
+> 1. You can use the mirrored cluster for fault tolerance.
+> 2. This is valid only when the primary cluster is HDInsight Kafka 2.4.1 or 3.2.0 and the secondary cluster is HDInsight Kafka 3.2.0.
+> 3. The secondary cluster works seamlessly if your primary cluster goes down.
+> 4. Consumer group offsets are automatically translated to the secondary cluster.
+> 5. Point your primary cluster consumers to the secondary cluster with the same consumer group, and the consumer group starts consuming from the offset where it left off in the primary cluster.
+> 6. The only difference is that the topic name in the backup cluster changes from TOPIC_NAME to primary-cluster-name.TOPIC_NAME.
## How Apache Kafka mirroring works
This architecture features two clusters in different resource groups and virtual
1. Create two new Kafka clusters:
- | Cluster name | Resource group | Virtual network | Storage account |
+ | Cluster name |HDInsight version| Resource group | Virtual network | Storage account |
|||||
- | primary-kafka-cluster | kafka-primary-rg | kafka-primary-vnet | kafkaprimarystorage |
- | secondary-kafka-cluster | kafka-secondary-rg | kafka-secondary-vnet | kafkasecondarystorage |
+ | primary-kafka-cluster | 5.0|kafka-primary-rg | kafka-primary-vnet | kafkaprimarystorage |
+ | secondary-kafka-cluster |5.1|kafka-secondary-rg | kafka-secondary-vnet | kafkasecondarystorage |
> [!NOTE] > From now onwards we will use `primary-kafka-cluster` as `PRIMARYCLUSTER` and `secondary-kafka-cluster` as `SECONDARYCLUSTER`.
This architecture features two clusters in different resource groups and virtual
``` 1. Edit the `/etc/hosts` file of the secondary cluster and add those entries there.
-1. After making the changes, the `/etc/hosts` file for `SECONDARYCLUSTER` looks like the given image.
+1. After you make the changes, the `/etc/hosts` file for `SECONDARYCLUSTER` looks like the following image.
:::image type="content" source="./media/apache-kafka-mirror-maker2/ect-host.png" lightbox="./media/apache-kafka-mirror-maker2/ect-host.png" alt-text="Screenshot that shows etc hosts file output." border="false":::
This architecture features two clusters in different resource groups and virtual
``` 1. Here, the source is your `PRIMARYCLUSTER` and the destination is your `SECONDARYCLUSTER`. Replace them everywhere with the correct names, and replace `source.bootstrap.servers` and `destination.bootstrap.servers` with the correct FQDNs or IP addresses of their respective worker nodes.
-1. You can control the topics that you want to replicate along with configurations using regular expressions. `replication.factor=3` makes the replication factor = 3 for all the topic which Mirror maker script creates by itself.
+1. You can use regular expressions to specify the topics and their configurations that you want to replicate. By setting the `replication.factor` parameter to 3, you can ensure that all topics created by the MirrorMaker script have a replication factor of 3.
1. Increase the replication factor from 1 to 3 for these topics ``` checkpoints.topic.replication.factor=1
This architecture features two clusters in different resource groups and virtual
destination->source.enabled=true destination->source.topics = .* ```
+1. For automated consumer offset sync, you need to enable offset replication and control the sync interval. The following properties sync offsets every 30 seconds. For an active-active scenario, configure them in both directions.
+ ```
+ groups=.*
+
+ emit.checkpoints.enabled = true
+ source->destination.sync.group.offsets.enabled = true
+ source->destination.sync.group.offsets.interval.ms=30000
+
+ destination->source.sync.group.offsets.enabled = true
+ destination->source.sync.group.offsets.interval.ms=30000
+ ```
+1. If you don't want to replicate internal topics across clusters, use the following property:
+
+ ```
+ topics.blacklist="*.internal,__.*"
+ ```
+
1. The final configuration file after the changes should look like this: ``` # specify any number of cluster aliases
This architecture features two clusters in different resource groups and virtual
secondary-kafka-cluster->primary-kafka-cluster.topics = .* groups=.*
+ emit.checkpoints.enabled = true
+ primary-kafka-cluster->secondary-kafka-cluster.sync.group.offsets.enabled=true
+ primary-kafka-cluster->secondary-kafka-cluster.sync.group.offsets.interval.ms=30000
+ secondary-kafka-cluster->primary-kafka-cluster.sync.group.offsets.enabled = true
+ secondary-kafka-cluster->primary-kafka-cluster.sync.group.offsets.interval.ms=30000
topics.blacklist="*.internal,__.*" # Setting replication factor of newly created remote topics
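Once the configuration file is saved, MirrorMaker 2 can be launched with the connect-mirror-maker driver script that ships with Kafka. This is a hedged sketch rather than a step confirmed by this excerpt, and `mm2.properties` is a placeholder for wherever you saved the configuration above.

```
# Launch MirrorMaker 2 with the configuration file created above
bash /usr/hdp/current/kafka-broker/bin/connect-mirror-maker.sh mm2.properties
```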
This architecture features two clusters in different resource groups and virtual
export clusterName='primary-kafka-cluster' export TOPICNAME='TestMirrorMakerTopic' export KAFKABROKERS='wn0-primar:9092'
- export KAFKAZKHOSTS='zk0-primar:2181'
-
+ export KAFKAZKHOSTS='zk0-primar:2181'
+
//Start Producer
- bash /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $KAFKABROKERS --topic $TOPICNAME
+
+    # For Kafka 2.4 (the console producer takes --broker-list)
+    bash /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $KAFKABROKERS --topic $TOPICNAME
+    # For Kafka 3.2
+    bash /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --bootstrap-server $KAFKABROKERS --topic $TOPICNAME
```
-1. Now start consumer in `SECONDARYCLUSTER`
-
+1. Now start the consumer in `PRIMARYCLUSTER` with a consumer group
+ ```
+ //Start Consumer
+
+ # For Kafka 2.4
+ bash /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic $TOPICNAME --group my-group --from-beginning
+
+ # For Kafka 3.2
+ bash /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic $TOPICNAME --group my-group --from-beginning
```
- export clusterName='secondary-kafka-cluster'
- export TOPICNAME='TestMirrorMakerTopic'
- export KAFKABROKERS='wn0-second:9092'
- export KAFKAZKHOSTS='zk0-second:2181'
-
- # List all the topics whether they are replicated or not
+1. Now stop the consumer in `PRIMARYCLUSTER` and start the consumer in `SECONDARYCLUSTER` with the same consumer group
+ ```
+ export clusterName='secondary-kafka-cluster'
+
+ export TOPICNAME='primary-kafka-cluster.TestMirrorMakerTopic'
+
+ export KAFKABROKERS='wn0-second:9092'
+
+ export KAFKAZKHOSTS='zk0-second:2181'
+
+ # List all the topics whether they're replicated or not
bash /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper $KAFKAZKHOSTS --list # Start Consumer bash /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic $TOPICNAME --from-beginning ```
+ You can notice that the consumer group `my-group` in the secondary cluster can't consume the messages that were already consumed by the same consumer group in the primary cluster. Now produce more messages in the primary cluster and try to consume them in the secondary cluster. You're able to consume them from `SECONDARYCLUSTER`.
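To make that last check concrete, a sketch of the two commands (run on the respective clusters, with the environment variables set as in the earlier steps) might look like:
```
# On PRIMARYCLUSTER: produce a few more test messages
bash /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --bootstrap-server $KAFKABROKERS --topic TestMirrorMakerTopic

# On SECONDARYCLUSTER: consume them from the replicated remote topic with the same group
bash /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server $KAFKABROKERS --topic primary-kafka-cluster.TestMirrorMakerTopic --group my-group
```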
## Delete cluster
hdinsight Apache Kafka Producer Consumer Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md
description: Learn how to use the Apache Kafka Producer and Consumer APIs with K
Previously updated : 03/31/2022 Last updated : 04/24/2023 #Customer intent: As a developer, I need to create an application that uses the Kafka consumer/producer API with Kafka on HDInsight
hdinsight Set Up Pyspark Interactive Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/set-up-pyspark-interactive-environment.md
keywords: VScode,Azure HDInsight Tools,Hive,Python,PySpark,Spark,HDInsight,Hadoo
Previously updated : 03/30/2022 Last updated : 04/24/2023 # Set up the PySpark interactive environment for Visual Studio Code
hdinsight Apache Spark Create Standalone Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-standalone-application.md
description: Tutorial - Create a Spark application written in Scala with Apache
Previously updated : 03/30/2022 Last updated : 04/24/2023 # Customer intent: As a developer new to Apache Spark and to Apache Spark in Azure HDInsight, I want to learn how to create a Scala Maven application for Spark in HDInsight using IntelliJ.
hdinsight Apache Spark Livy Rest Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-livy-rest-interface.md
description: Learn how to use Apache Spark REST API to submit Spark jobs remotel
Previously updated : 04/01/2022 Last updated : 04/24/2023 # Use Apache Spark REST API to submit remote jobs to an HDInsight Spark cluster
hdinsight Apache Spark Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-overview.md
description: This article provides an introduction to Spark in HDInsight and the
Previously updated : 03/30/2022 Last updated : 04/24/2023 # Customer intent: As a developer new to Apache Spark and Apache Spark in Azure HDInsight, I want to have a basic understanding of Microsoft's implementation of Apache Spark in Azure HDInsight so I can decide if I want to use it rather than build my own cluster.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 04/20/2023 Last updated : 04/24/2023
> [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article provides an introductory overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service that enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
+This article provides an introductory overview of the MedTech service. The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment. 
The MedTech service was built to help customers that were dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
The following diagram outlines the basic elements of how the MedTech service tra
The MedTech service processes device data in five stages:
-1. **Ingest** - The MedTech service asynchronously loads the device messages from the event hub at high speed.
+1. **Ingest** - The MedTech service asynchronously reads the device message from the event hub at high speed.
2. **Normalize** - After the device message has been ingested, the MedTech service uses the device mapping to streamline and convert the device data into a normalized schema format.
The MedTech service processes device data in five stages:
4. **Transform** - When the normalized data is grouped, it's transformed through the FHIR destination mapping and is ready to become FHIR Observations.
-5. **Persist** - After the transformation is done, the new data is sent to the FHIR service and persisted as FHIR Observations.
+5. **Persist** - After the transformation is done, the newly transformed data is sent to the FHIR service and persisted as FHIR Observations.
## Key features of the MedTech service
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
You can use the data export and rules capabilities in IoT Central to integrate w
- [Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid](howto-create-custom-rules.md) - [Extend Azure IoT Central with custom analytics using Azure Databricks](howto-create-custom-analytics.md)
-You can use IoT Edge devices connected to your IoT Central application to integrate with [Azure Video Analyzer](/previous-versions/azure/azure-video-analyzer/video-analyzer-docs/articles/azure-video-analyzer/video-analyzer-docs/overview).
- ## Integrate with companion applications IoT Central provides rich operator dashboards and visualizations. However, some IoT solutions must integrate with existing applications, or require new companion applications to expand their capabilities. To integrate with other applications, use IoT Central extensibility points such as the REST API and the continuous data export feature.
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
You should start planning now for the effects of migrating your IoT hubs to the
The IoT Hub team will begin migrating IoT hubs by region on **February 15, 2023** and completing by October 15, 2023. After all IoT hubs have migrated, then DPS will perform its migration between January 15 and February 15, 2024.
-The subscription owners of each IoT hub will receive an email notification two weeks before their migration date.
+For each IoT hub, you can expect the following:
+
+* **One to two weeks before migration**: The subscription owners of each IoT hub receive an email notification informing them of their migration date. This notification doesn't apply to hubs that are manually migrated.
+* **Day of the migration**: The IoT hub switches its TLS certificate to the DigiCert Global Root G2, which results in no downtime for the IoT hub. IoT Hub doesn't force device reconnections.
+* **Following the migration**: The subscription owners receive a notification confirming that the IoT hub was migrated. Devices attempt to reconnect based on their individual retry logic, at which point they request and receive the new server certificate from IoT Hub and reconnect only if they trust the DigiCert Global Root G2.
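If you want to confirm which root certificate your hub presents at any point (an optional check; substitute your own hub's host name), you can inspect the server's TLS chain with `openssl`:
```
openssl s_client -connect <your-hub-name>.azure-devices.net:8883 -servername <your-hub-name>.azure-devices.net -showcerts </dev/null
```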
### Request an extension
No, only the [global Azure cloud](https://azure.microsoft.com/global-infrastruct
Yes, IoT Central uses both IoT Hub and DPS in the backend. The TLS migration will affect your solution, and you need to update your devices to maintain connection.
-You can migrate your application from the Baltimore CyberTrust Root to the DigiCert Global G2 Root on your own schedule. We recommend the following process: 
+You can migrate your application from the Baltimore CyberTrust Root to the DigiCert Global G2 Root on your own schedule. We recommend the following process:
+ 1. **Keep the Baltimore CyberTrust Root on your device until the transition period is completed on 15 February 2024** (necessary to prevent connection interruption).
-2. **In addition** to the Baltimore Root, ensure the DigiCert Global G2 Root is added to your trusted root store.
-3. Make sure you aren't pinning any intermediate or leaf certificates and are using the public roots to perform TLS server validation.
+2. **In addition** to the Baltimore Root, ensure the DigiCert Global G2 Root is added to your trusted root store.
+3. Make sure you aren't pinning any intermediate or leaf certificates and are using the public roots to perform TLS server validation.
4. In your IoT Central application, you can find the Root Certification settings under **Settings** > **Application** > **Baltimore Cybertrust Migration**.
- 1. Select **DigiCert Global G2 Root** to migrate to the new certificate root.
- 2. Click **Save** to initiate the migration.
- 3. If needed, you can migrate back to the Baltimore root by selecting **Baltimore CyberTrust Root** and saving the changes. This option is available until 15 May 2023 and will then be disabled as Microsoft will start initiating the migration.
+ 1. Select **DigiCert Global G2 Root** to migrate to the new certificate root.
+ 2. Click **Save** to initiate the migration.
+ 3. If needed, you can migrate back to the Baltimore root by selecting **Baltimore CyberTrust Root** and saving the changes. This option is available until 15 May 2023, after which it's disabled as Microsoft begins initiating the migration.
### How long will it take my devices to reconnect?
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
In Azure IoT, analysis and visualization services are used to identify and displ
## Azure Digital Twins
-The Azure Digital Twins service lets you build and maintain models that are live, up-to-date representations of the real world. You can query, analyze, and generate visualizations from these models to extract business insights. An example model might be a representation of a building that includes information about the rooms, the devices in the rooms, and the relationships between the rooms and devices. The real-world data that populates these models is typically collected from IoT devices and sent through an IoT hub.
+The [Azure Digital Twins](../digital-twins/overview.md) service lets you build and maintain models that are live, up-to-date representations of the real world. You can query, analyze, and generate visualizations from these models to extract business insights. An example model might be a representation of a building that includes information about the rooms, the devices in the rooms, and the relationships between the rooms and devices. The real-world data that populates these models is typically collected from IoT devices and sent through an IoT hub.
## External services
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
Any backend endpoint that has achieved a healthy state is eligible for receiving
New TCP connections succeed to the remaining healthy backend endpoints.
-If a backend endpoint's health probe fails, established TCP connections to this backend endpoint continue.
+If a backend endpoint's health probe fails, established TCP connections to this backend endpoint continue. However, if a backend pool only contains a single endpoint, then existing flows will terminate.
-If all probes for all instances in a backend pool fail, no new flows will be sent to the backend pool. Standard Load Balancer will permit established TCP flows to continue. Basic Load Balancer will terminate all existing TCP flows to the backend pool.
+If all probes for all instances in a backend pool fail, no new flows will be sent to the backend pool. Standard Load Balancer permits established TCP flows to continue, provided the backend pool has more than one backend endpoint. Basic Load Balancer terminates all existing TCP flows to the backend pool.
Load Balancer is a pass-through service. Load Balancer doesn't terminate TCP connections. The flow is always between the client and the VM's guest OS and application. A pool with all probes down results in a frontend that won't respond to TCP connection open attempts, because there isn't a healthy backend endpoint to receive the flow and respond with an acknowledgment.
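For reference, a minimal sketch of defining a TCP health probe with the Azure CLI (the resource names here are hypothetical):
```
az network lb probe create \
  --resource-group MyResourceGroup \
  --lb-name MyLoadBalancer \
  --name MyHealthProbe \
  --protocol tcp \
  --port 80
```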
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
For more information on the `hbi_workspace` flag, see the [data encryption](conc
[Kubernetes Cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra egress network configuration. * For Kubernetes with Azure Arc connection, configure the [Azure Arc network requirements](../azure-arc/kubernetes/network-requirements.md) needed by Azure Arc agents.
-* For AKS cluster without Azure Arc connection, configure the [AKS extension network requirements](../aks/limit-egress-traffic.md#cluster-extensions).
+* For AKS cluster without Azure Arc connection, configure the [AKS extension network requirements](../aks/outbound-rules-control-egress.md#cluster-extensions).
Besides above requirements, the following outbound URLs are also required for Azure Machine Learning,
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
- [Using a service principal with AKS](../aks/kubernetes-service-principal.md) is **not supported** by Azure Machine Learning. The AKS cluster must use a **managed identity** instead. Both **system-assigned managed identity** and **user-assigned managed identity** are supported. For more information, see [Use a managed identity in Azure Kubernetes Service](../aks/use-managed-identity.md). - When your AKS cluster used service principal is converted to use Managed Identity, before installing the extension, all node pools need to be deleted and recreated, rather than updated directly.-- [Disabling local accounts](../aks/managed-aad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS Cluster is deployed, local accounts are enabled by default.
+- [Disabling local accounts](../aks/manage-local-accounts-managed-azure-ad.md#disable-local-accounts) for AKS is **not supported** by Azure Machine Learning. When the AKS Cluster is deployed, local accounts are enabled by default.
- If your AKS cluster has an [Authorized IP range enabled to access the API server](../aks/api-server-authorized-ip-ranges.md), enable the Azure Machine Learning control plane IP ranges for the AKS cluster. The Azure Machine Learning control plane is deployed across paired regions. Without access to the API server, the machine learning pods can't be deployed. Use the [IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=56519) for both the [paired regions](../availability-zones/cross-region-replication-azure.md) when enabling the IP ranges in an AKS cluster. - Azure Machine Learning does not support attaching an AKS cluster cross subscription. If you have an AKS cluster in a different subscription, you must first [connect it to Azure-Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md) and specify in the same subscription as your Azure Machine Learning workspace. - Azure Machine Learning does not guarantee support for all preview stage features in AKS. For example, [Azure AD pod identity](../aks/use-azure-ad-pod-identity.md) is not supported.
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
This article lists the curated environments with latest framework versions in Az
### Azure Container for PyTorch (ACPT)
-**Description**: Recommended environment for Deep Learning with PyTorch on Azure containing the Azure Machine Learning SDK with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA\RocM,NebulaML combined with optimizers like ORT Training,+DeepSpeed+MSCCL+ORT MoE, and checkpointing using NebulaML and more.
+**Description**: Recommended environment for Deep Learning with PyTorch on Azure, containing the Azure Machine Learning SDK with the latest compatible versions of Ubuntu, Python, PyTorch, CUDA\RocM, and NebulaML, combined with optimizers such as ORT Training, DeepSpeed, MSCCL, and ORT MoE, and checkpointing using NebulaML, and more.
To learn more, see [Azure Container for PyTorch (ACPT)](resource-azure-container-for-pytorch.md).
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
Previously updated : 02/13/2022 Last updated : 04/10/2023 # Azure Machine Learning Python SDK release notes
__RSS feed__: Get notified when this page is updated by copying and pasting the
`https://learn.microsoft.com/api/search/rss?search=%22Azure+machine+learning+release+notes%22&locale=en-us`
+## 2023-04-10
+
+### Azure Machine Learning SDK for Python v1.50.0
+ + **azureml-contrib-automl-dnn-forecasting**
+ + Added support for forecasting at given quantiles for TCN models.
+ + **azureml-responsibleai**
+ + Updated the common environment and the azureml-responsibleai package to raiwidgets and responsibleai 0.26.0
+ + **azureml-train-automl-runtime**
+ + Fix MLTable handling for model test scenario
+ + **azureml-training-tabular**
+ + Added quantiles as parameter in the forecast_quantile method.
+
## 2023-03-01 ### Announcing end of support for Python 3.7 in Azure Machine Learning SDK v1 packages
__RSS feed__: Get notified when this page is updated by copying and pasting the
### Azure Machine Learning SDK for Python v1.49.0 + **Breaking changes**
- + Starting with v1.49.0 and above, the following AutoML algorithms will not be supported.
+ + Starting with v1.49.0 and above, the following AutoML algorithms won't be supported.
+ Regression: FastLinearRegressor, OnlineGradientDescentRegressor + Classification: AveragedPerceptronClassifier. + Use v1.48.0 or below to continue using these algorithms.
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-contrib-automl-dnn-forecasting** + Nonscalar metrics for TCNForecaster will now reflect values from the last epoch. + Forecast horizon visuals for train-set and test-set are now available while running the TCN training experiment.
- + Runs will not fail anymore because of "Failed to calculate TCN metrics" error. The warning message that says "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead we raise exception when we face inf/nan validation loss for more than two times consecutively with a message "Invalid Model, TCN training did not converge.". The customers need be aware of the fact that loaded models may return nan/inf values as predictions while inferencing after this change.
+ + Runs will no longer fail because of the "Failed to calculate TCN metrics" error. The warning message that says "Forecast Metric calculation resulted in error, reporting back worst scores" will still be logged. Instead, we raise an exception when we face inf/nan validation loss more than two times consecutively, with the message "Invalid Model, TCN training did not converge.". Customers need to be aware that loaded models may return nan/inf values as predictions while inferencing after this change.
+ **azureml-core** + Azure Machine Learning workspace creation makes use of Log Analytics Based Application Insights in preparation for deprecation of Classic Application Insights. Users wishing to use Classic Application Insights resources can still specify their own to bring when creating an Azure Machine Learning workspace. + **azureml-interpret**
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ Added model serializer and pyfunc model to azureml-responsibleai package for saving and retrieving models easily + **azureml-train-automl-runtime** + Added docstring for ManyModels Parameters and HierarchicalTimeSeries Parameters
- + Fixed bug where generated code does not do train/test splits correctly.
+ + Fixed bug where generated code doesn't do train/test splits correctly.
+ Fixed a bug that was causing forecasting generated code training jobs to fail. ## 2022-10-25
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ **azureml-automl-dnn-nlp** + Customers will no longer be allowed to specify a line in CoNLL, which only comprises with a token. The line must always either be an empty newline or one with exactly one token followed by exactly one space followed by exactly one label. + **azureml-contrib-automl-dnn-forecasting**
- + There is a corner case where samples are reduced to 1 after the cross validation split but sample_size still points to the count before the split and hence batch_size ends up being more than sample count in some cases. In this fix we initialize sample_size after the split
+ + There's a corner case where samples are reduced to 1 after the cross validation split but sample_size still points to the count before the split and hence batch_size ends up being more than sample count in some cases. In this fix we initialize sample_size after the split
+ **azureml-core** + Added deprecation warning when inference customers use CLI/SDK v1 model deployment APIs to deploy models and also when Python version is 3.6 and less. + The following values of `AZUREML_LOG_DEPRECATION_WARNING_ENABLED` change the behavior as follows:
__RSS feed__: Get notified when this page is updated by copying and pasting the
+ Now OutputDatasetConfig is supported as the input of the MM/HTS pipeline builder. The mappings are: 1) OutputTabularDatasetConfig -> treated as unpartitioned tabular dataset. 2) OutputFileDatasetConfig -> treated as filed dataset. + **azureml-train-automl-runtime** + Added data validation that requires the number of minority class samples in the dataset to be at least as much as the number of CV folds requested.
- + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations base on your data. However, currently this feature is not supported when TCN is enabled.
+ + Automatic cross-validation parameter configuration is now available for AutoML forecasting tasks. Users can now specify "auto" for n_cross_validations and cv_step_size or leave them empty, and AutoML will provide those configurations base on your data. However, currently this feature isn't supported when TCN is enabled.
+ Forecasting Parameters in Many Models and Hierarchical Time Series can now be passed via object rather than using individual parameters in dictionary. + Enabled forecasting model endpoints with quantiles support to be consumed in Power BI. + Updated AutoML scipy dependency upper bound to 1.5.3 from 1.5.2
This breaking change comes from the June release of `azureml-inference-server-ht
+ **azureml-interpret** + updated azureml-interpret package to interpret-community 0.25.0 + **azureml-pipeline-core**
- + Do not print run detail anymore if `pipeline_run.wait_for_completion` with `show_output=False`
+ + Don't print run detail anymore if `pipeline_run.wait_for_completion` with `show_output=False`
+ **azureml-train-automl-runtime** + Fixes a bug that would cause code generation to fail when the azureml-contrib-automl-dnn-forecasting package is present in the training environment. + Fix error when using a test dataset without a label column with AutoML Model Testing.
This breaking change comes from the June release of `azureml-inference-server-ht
+ Added ability to get predictions on the training data (in-sample prediction) for forecasting. + **azureml-core** + Added support to set stream column type, mount and download stream columns in tabular dataset.
- + New optional fields added to Kubernetes.attach_configuration(identity_type=None, identity_ids=None) which allow attaching KubernetesCompute with either SystemAssigned or UserAssigned identity. New identity fields will be included when calling print(compute_target) or compute_target.serialize(): identity_type, identity_id, principal_id, and tenant_id/client_id.
+ + New optional fields added to Kubernetes.attach_configuration(identity_type=None, identity_ids=None) which allow attaching KubernetesCompute with either SystemAssigned or UserAssigned identity. New identity fields are included when calling print(compute_target) or compute_target.serialize(): identity_type, identity_id, principal_id, and tenant_id/client_id.
+ **azureml-dataprep** + Added support to set stream column type for tabular dataset. added support to mount and download stream columns in tabular dataset. + **azureml-defaults**
This breaking change comes from the June release of `azureml-inference-server-ht
+ Fixed a bug where to_dask_dataframe would fail because of a race condition. + Dataset from_files now supports skipping of data extensions for large input data + **azureml-defaults**
- + We are removing the dependency azureml-model-management-sdk==1.0.1b6.post1 from azureml-defaults.
+ + We're removing the dependency azureml-model-management-sdk==1.0.1b6.post1 from azureml-defaults.
+ **azureml-interpret** + updated azureml-interpret to interpret-community 0.19.* + **azureml-pipeline-core**
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **Bug fixes and improvements** + **azureml-core** + Added the ability to override the default timeout value for artifact uploading via the "AZUREML_ARTIFACTS_DEFAULT_TIMEOUT" environment variable.
- + Fixed a bug where docker settings in Environment object on ScriptRunConfig are not respected.
+ + Fixed a bug where docker settings in Environment object on ScriptRunConfig aren't respected.
+ Allow partitioning a dataset when copying it to a destination. + Added a custom mode to the OutputDatasetConfig to enable passing created Datasets in pipelines through a link function. These support enhancements made to enable Tabular Partitioning for PRS. + Added a new KubernetesCompute compute type to azureml-core.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
### Azure Machine Learning SDK for Python v1.26.0 + **Bug fixes and improvements** + **azureml-automl-core**
- + Fixed an issue where Naive models would be recommended in AutoMLStep runs and fail with lag or rolling window features. These models will not be recommended when target lags or target rolling window size are set.
+ + Fixed an issue where Naive models would be recommended in AutoMLStep runs and fail with lag or rolling window features. These models won't be recommended when target lags or target rolling window size are set.
+ Changed console output when submitting an AutoML run to show a portal link to the run. + **azureml-core** + Added HDFS mode in documentation.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ With setting show_output to True when deploy models, inference configuration and deployment configuration will be replayed before sending the request to server. + **azureml-core** + Added functionality to filter Tabular Datasets by column values and File Datasets by metadata.
- + Previously, it was possibly for users to create provisioning configurations for ComputeTarget's that did not satisfy the password strength requirements for the `admin_user_password` field (that is, that they must contain at least 3 of the following: One lowercase letter, one uppercase letter, one digit, and one special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
- + Additionally, it was also possible in some cases to specify a configuration with a negative number of maximum nodes. It is no longer possible to do this. Now, `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` if the `max_nodes` argument is a negative integer.
+ + Previously, it was possibly for users to create provisioning configurations for ComputeTarget's that didn't satisfy the password strength requirements for the `admin_user_password` field (that is, that they must contain at least 3 of the following: One lowercase letter, one uppercase letter, one digit, and one special character from the following set: ``\`~!@#$%^&*()=+_[]{}|;:./'",<>?``). If the user created a configuration with a weak password and ran a job using that configuration, the job would fail at runtime. Now, the call to `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` with an accompanying error message explaining the password strength requirements.
+ + Additionally, it was also possible in some cases to specify a configuration with a negative number of maximum nodes. It's no longer possible to do this. Now, `AmlCompute.provisioning_configuration` throws a `ComputeTargetException` if the `max_nodes` argument is a negative integer.
+ With setting show_output to True when deploy models, inference configuration and deployment configuration will be displayed. + With setting show_output to True when wait for the completion of model deployment, the progress of deployment operation will be displayed. + Allow customer specified Azure Machine Learning auth config directory through environment variable: AZUREML_AUTH_CONFIG_DIR
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Previously, it was possible to create a provisioning configuration with the minimum node count less than the maximum node count. This has now been fixed. If you now try to create a provisioning configuration with `min_nodes < max_nodes` the SDK will raises a `ComputeTargetException`. + Fixes bug in wait_for_completion in AmlCompute, which caused the function to return control flow before the operation was actually complete + Run.fail() is now deprecated, use Run.tag() to mark run as failed or use Run.cancel() to mark the run as canceled.
- + Show error message 'Environment name expected str, {} found' when provided environment name is not a string.
+ + Show error message 'Environment name expected str, {} found' when provided environment name isn't a string.
+ **azureml-train-automl-client** + Fixed a bug that prevented AutoML experiments performed on Azure Databricks clusters from being canceled.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-automl-core** + Fixed bug where an extra pip dependency was added to the conda yml file for vision models. + **azureml-automl-runtime**
- + Fixed a bug where classical forecasting models (for example, AutoArima) could receive training data wherein rows with imputed target values were not present. This violated the data contract of these models. * Fixed various bugs with lag-by-occurrence behavior in the time-series lagging operator. Previously, the lag-by-occurrence operation did not mark all imputed rows correctly and so would not always generate the correct occurrence lag values. Also fixed some compatibility issues between the lag operator and the rolling window operator with lag-by-occurrence behavior. This previously resulted in the rolling window operator dropping some rows from the training data that it should otherwise use.
+ + Fixed a bug where classical forecasting models (for example, AutoArima) could receive training data wherein rows with imputed target values weren't present. This violated the data contract of these models. * Fixed various bugs with lag-by-occurrence behavior in the time-series lagging operator. Previously, the lag-by-occurrence operation didn't mark all imputed rows correctly and so wouldn't always generate the correct occurrence lag values. Also fixed some compatibility issues between the lag operator and the rolling window operator with lag-by-occurrence behavior. This previously resulted in the rolling window operator dropping some rows from the training data that it should otherwise use.
+ **azureml-core** + Adding support for Token Authentication by audience. + Add `process_count` to [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration) to support multi-process multi-node PyTorch jobs.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ **azureml-train-core** + Fix to remove another registration on datastore for resume run feature + **azureml-widgets**
- + Customers should not see changes to existing run data visualization using the widget, and now will have support if they optionally use conditional hyperparameters.
+ + Customers shouldn't see changes to existing run data visualization using the widget, and now will have support if they optionally use conditional hyperparameters.
+ The user run widget now includes a detailed explanation for why a run is in the queued state.
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
### Azure Machine Learning SDK for Python v1.20.0 + **Bug fixes and improvements** + **azure-cli-ml**
- + framework_version added in OptimizationConfig. It will be used when model is registered with framework MULTI.
+ + framework_version added in OptimizationConfig. It's used when model is registered with framework MULTI.
+ **azureml-contrib-optimization**
- + framework_version added in OptimizationConfig. It will be used when model is registered with framework MULTI.
+ + framework_version added in OptimizationConfig. It's used when model is registered with framework MULTI.
+ **azureml-pipeline-steps** + Introducing CommandStep, which would take command to process. Command can include executables, shell commands, scripts, etc. + **azureml-core**
The `ml` extension to the Azure CLI is the next-generation interface for Azure M
+ Pin the package: pyjwt to avoid pulling in breaking in versions upcoming releases. + Creating an experiment returns the active or last archived experiment with that same given name if such experiment exists or a new experiment. + Calling get_experiment by name returns the active or last archived experiment with that given name.
- + Users cannot rename an experiment while reactivating it.
+ + Users can't rename an experiment while reactivating it.
+ Improved error message to include potential fixes when a dataset is incorrectly passed to an experiment (for example, ScriptRunConfig). + Improved documentation for `OutputDatasetConfig.register_on_complete` to include the behavior of what will happen when the name already exists. + Specifying dataset input and output names that have the potential to collide with common environment variables will now result in a warning
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-explain-model** + The azureml-explain-model package is officially deprecated + **azureml-mlflow**
- + Resolved a bug in mlflow.projects.run against azureml backend where Finalizing state was not handled properly.
+ + Resolved a bug in mlflow.projects.run against azureml backend where Finalizing state wasn't handled properly.
+ **azureml-pipeline-core** + Add support to create, list and get pipeline schedule based one pipeline endpoint. + Improved the documentation of PipelineData.as_dataset with an invalid usage example - Using PipelineData.as_dataset improperly will now result in a ValueException being thrown
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Support Triton No Code Deploy + outputs directories specified in Run.start_logging() will now be tracked when using run in interactive scenarios. The tracked files are visible on ML Studio upon calling Run.complete() + File encoding can be now specified during dataset creation with `Dataset.Tabular.from_delimited_files` and `Dataset.Tabular.from_json_lines_files` by passing the `encoding` argument. The supported encodings are 'utf8', 'iso88591', 'latin1', 'ascii', utf16', 'utf32', 'utf8bom' and 'windows1252'.
- + Bug fix when environment object is not passed to ScriptRunConfig constructor.
+ + Bug fix when environment object isn't passed to ScriptRunConfig constructor.
+ Updated Run.cancel() to allow cancel of a local run from another machine. + **azureml-dataprep** + Fixed dataset mount timeout issues.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
### Azure Machine Learning SDK for Python v1.14.0 + **Bug fixes and improvements** + **azure-cli-ml**
- + Grid Profiling removed from the SDK and is not longer supported.
+ + Grid Profiling was removed from the SDK and is no longer supported.
+ **azureml-accel-models** + azureml-accel-models package now supports TensorFlow 2.x + **azureml-automl-core**
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-automl-runtime** + Fixed a bug where AutoArima iterations would fail with a PredictionException and the message: "Silent failure occurred during prediction." + **azureml-cli-common**
- + Grid Profiling removed from the SDK and is not longer supported.
+ + Grid Profiling was removed from the SDK and is no longer supported.
+ **azureml-contrib-server** + Update description of the package for pypi overview page. + **azureml-core**
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ **azureml-automl-runtime** + Set horovod for text DNN to always use fp16 compression. + This release supports models greater than 4 Gb.
- + Fixed issue where AutoML fails with ImportError: cannot import name `RollingOriginValidator`.
+ + Fixed issue where AutoML fails with ImportError: cannot import name `RollingOriginValidator`.
+ Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2). + **azureml-contrib-automl-dnn-forecasting** + Upgraded AutoML dependencies: `scikit-learn` (now 0.22.1), `pandas` (now 0.25.1), `numpy` (now 1.18.2).
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Improved calculation of forecast quantiles when lookback features are disabled. + Fixed bool sparse matrix handling when computing explanations after AutoML. + **azureml-core**
- + A new method `run.get_detailed_status()` now shows the detailed explanation of current run status. It is currently only showing explanation for `Queued` status.
+ + A new method `run.get_detailed_status()` now shows the detailed explanation of current run status. It's currently only showing explanation for `Queued` status.
+ Add image_name and image_label parameters to Model.package() to enable renaming the built package image. + New method `set_pip_requirements()` to set the entire pip section in [`CondaDependencies`](/python/api/azureml-core/azureml.core.conda_dependencies.condadependencies) at once. + Enable registering credential-less ADLS Gen2 datastore.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Environment.get_image_details() return object type changed. `DockerImageDetails` class replaced `dict`, image details are available from the new class properties. Changes are backward compatible. + Fix bug for Environment.from_pip_requirements() to preserve dependencies structure + Fixed a bug where log_list would fail if an int and double were included in the same list.
- + While enabling private link on an existing workspace, please note that if there are compute targets associated with the workspace, those targets will not work if they are not behind the same virtual network as the workspace private endpoint.
+ + While enabling private link on an existing workspace, please note that if there are compute targets associated with the workspace, those targets won't work if they are not behind the same virtual network as the workspace private endpoint.
+ Made `as_named_input` optional when using datasets in experiments and added `as_mount` and `as_download` to `FileDataset`. The input name will automatically generate if `as_mount` or `as_download` is called. + **azureml-automl-core** + Unhandled exceptions in AutoML now point to a known issues HTTP page, where more information about the errors can be found.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Users can now specify a time series frequency for forecasting tasks by using the `freq` parameter. + AutoML Forecasting now supports rolling evaluation, which applies to the use case that the length of a test or validation set is longer than the input horizon, and known y_pred value is used as forecasting context. + **azureml-core**
- + Warning messages will be printed if no files were downloaded from the datastore in a run.
+ + Warning messages are printed if no files were downloaded from the datastore in a run.
+ Added documentation for `skip_validation` to the `Datastore.register_azure_sql_database method`. + Users are required to upgrade to sdk v1.10.0 or above to create an auto approved private endpoint. This includes the Notebook resource that is usable behind the VNet. + Expose NotebookInfo in the response of get workspace.
Learn more about [image instance segmentation labeling](../how-to-label-data.md)
+ Improved error handling around specific models during `get_output` + Fixed call to fitted_model.fit(X, y) for classification with y transformer + Enabled customized forward fill imputer for forecasting tasks
- + A new ForecastingParameters class will be used instead of forecasting parameters in a dict format
+ + A new ForecastingParameters class is used instead of forecasting parameters in a dict format
+ Improved target lag autodetection + Added limited availability of multi-noded, multi-gpu distributed featurization with BERT + **azureml-automl-runtime**
Access the following web-based authoring tools from the studio:
+ Accept string compute names to be passed to ParallelRunConfig + **azureml-core** + Added Environment.clone(new_name) API to create a copy of Environment object
- + Environment.docker.base_dockerfile accepts filepath. If able to resolve a file, the content will be read into base_dockerfile environment property
+ + Environment.docker.base_dockerfile accepts filepath. If able to resolve a file, the content is read into base_dockerfile environment property
+ Automatically reset mutually exclusive values for base_image and base_dockerfile when user manually sets a value in Environment.docker + Added user_managed flag in RSection that indicates whether the environment is managed by user or by Azure Machine Learning. + Dataset: Fixed dataset download failure if data path containing unicode characters.
Access the following web-based authoring tools from the studio:
+ Moved Machine learning and training code in AzureML-AutoML-Core to a new package AzureML-AutoML-Runtime. + **azureml-contrib-dataset** + When calling `to_pandas_dataframe` on a labeled dataset with the download option, you can now specify whether to overwrite existing files or not.
- + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities are dropped for the dataset as well.
+ Fixed an issue with pytorch loader for the object detection task. + **azureml-contrib-interpret** + Removed explanation dashboard widget from azureml-contrib-interpret, changed package to reference the new one in interpret_community
Access the following web-based authoring tools from the studio:
+ Improve performance of `workspace.datasets`. + Added the ability to register Azure SQL Database Datastore using username and password authentication + Fix for loading RunConfigurations from relative paths.
- + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities are dropped for the dataset as well.
+ **azureml-interpret** + updated version of interpret-community to 0.2.0 + **azureml-pipeline-steps**
Access the following web-based authoring tools from the studio:
+ **azureml-contrib-dataset** + After importing azureml-contrib-dataset, you can call `Dataset.Labeled.from_json_lines` instead of `._Labeled` to create a labeled dataset. + When calling `to_pandas_dataframe` on a labeled dataset with the download option, you can now specify whether to overwrite existing files or not.
- + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + When calling `keep_columns` or `drop_columns` that results in a time series, label, or image column being dropped, the corresponding capabilities are dropped for the dataset as well.
+ Fixed issues with PyTorch loader when calling `dataset.to_torchvision()`. + **Bug fixes and improvements**
Access the following web-based authoring tools from the studio:
+ Added Load Balancer Type to MLC for AKS types. + Added append_prefix bool parameter to download_files in run.py and download_artifacts_from_prefix in artifacts_client. This flag is used to selectively flatten the origin filepath so only the file or folder name is added to the output_directory + Fix deserialization issue for `run_config.yml` with dataset usage.
- + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities will be dropped for the dataset as well.
+ + When calling `keep_columns` or `drop_columns` that results in a time series column being dropped, the corresponding capabilities are dropped for the dataset as well.
+ **azureml-interpret** + Updated interpret-community version to 0.1.0.3 + **azureml-train-automl**
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ **New features** + Added dataset monitors through the [**azureml-datadrift**](/python/api/azureml-datadrift) package, allowing for monitoring time series datasets for data drift or other statistical changes over time. Alerts and events can be triggered if drift is detected or other conditions on the data are met. See [our documentation](how-to-monitor-datasets.md) for details.
- + Announcing two new editions (also referred to as a SKU interchangeably) in Azure Machine Learning. With this release, you can now create either a Basic or Enterprise Azure Machine Learning workspace. All existing workspaces will be defaulted to the Basic edition, and you can go to the Azure portal or to the studio to upgrade the workspace anytime. You can create either a Basic or Enterprise workspace from the Azure portal. Read [our documentation](./how-to-manage-workspace.md) to learn more. From the SDK, the edition of your workspace can be determined using the "sku" property of your workspace object.
+ + Announcing two new editions (also referred to as a SKU interchangeably) in Azure Machine Learning. With this release, you can now create either a Basic or Enterprise Azure Machine Learning workspace. All existing workspaces are defaulted to the Basic edition, and you can go to the Azure portal or to the studio to upgrade the workspace anytime. You can create either a Basic or Enterprise workspace from the Azure portal. Read [our documentation](./how-to-manage-workspace.md) to learn more. From the SDK, the edition of your workspace can be determined using the "sku" property of your workspace object.
+ We have also made enhancements to Azure Machine Learning Compute - you can now view metrics for your clusters (like total nodes, running nodes, total core quota) in Azure Monitor, besides viewing Diagnostic logs for debugging. In addition, you can also view currently running or queued runs on your cluster and details such as the IPs of the various nodes on your cluster. You can view these either in the portal or by using corresponding functions in the SDK or CLI. + **Preview features**
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ Models can be registered with two new frameworks, Onnx and TensorFlow. - Model registration accepts sample input data, sample output data and resource configuration for the model. + **azureml-automl-core** + Training an iteration would run in a child process only when runtime constraints are being set.
- + Added a guardrail for forecasting tasks, to check whether a specified max_horizon causes a memory issue on the given machine or not. If it will, a guardrail message will be displayed.
+ + Added a guardrail for forecasting tasks, to check whether a specified max_horizon causes a memory issue on the given machine or not. If it does, a guardrail message is displayed.
+ Added support for complex frequencies like two years and one month. -Added comprehensible error message if frequency cannot be determined. + Add azureml-defaults to auto generated conda env to solve the model deployment failure + Allow intermediate data in Azure Machine Learning Pipeline to be converted to tabular dataset and used in `AutoMLStep`.
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ [**azureml-datadrift**](/python/api/azureml-datadrift) + Moved from `azureml-contrib-datadrift` into `azureml-datadrift` + Added support for monitoring time series datasets for drift and other statistical measures
- + New methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method will be deprecated.
+ + New methods `create_from_model()` and `create_from_dataset()` to the [`DataDriftDetector`](/python/api/azureml-datadrift/azureml.datadrift.datadriftdetector%28class%29) class. The `create()` method is deprecated.
+ Adjustments to the visualizations in Python and UI in the Azure Machine Learning studio. + Support weekly and monthly monitor scheduling, in addition to daily for dataset monitors. + Support backfill of data monitor metrics to analyze historical data for dataset monitors.
At the time, of this release, the following browsers are supported: Chrome, Fire
+ **New features** + You can now request to execute specific inspectors (for example, histogram, scatter plot, etc.) on specific columns.
- + Added a parallelize argument to `append_columns`. If True, data will be loaded into memory but execution will run in parallel; if False, execution is streaming but single-threaded.
+ + Added a parallelize argument to `append_columns`. If True, data is loaded into memory but execution will run in parallel; if False, execution is streaming but single-threaded.
## 2019-07-23
At the time, of this release, the following browsers are supported: Chrome, Fire
+ Forecasting now allows different frequencies in train and test sets if they can be aligned. For example, "quarterly starting in January" and at "quarterly starting in October" can be aligned. + The property "parameters" was added to the TimeSeriesTransformer. + Remove old exception classes.
- + In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If the integer was provided, only one lag will be created. If a list is provided, the unique values of lags will be taken. target_lags=[1, 2, 2, 4] will create lags of one, two and four periods.
+ + In forecasting tasks, the `target_lags` parameter now accepts a single integer value or a list of integers. If the integer was provided, only one lag is created. If a list is provided, the unique values of lags will be taken. target_lags=[1, 2, 2, 4] will create lags of one, two and four periods.
+ Fix the bug about losing columns types after the transformation (bug linked); + In `model.forecast(X, y_query)`, allow y_query to be an object type containing None(s) at the begin (#459519). + Add expected values to `automl` output
At the time, of this release, the following browsers are supported: Chrome, Fire
+ Add support for token authentication in AKS webservices. + Add `get_token()` method to `Webservice` objects. + Added CLI support to manage machine learning datasets.
- + `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds) which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout, such as when a blob is read, it stays in the local cache until the job is finished. Most jobs prefer this setting, but some jobs need to read more data from a large dataset than will fit on their nodes. For these jobs, tuning this parameter helps them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, as the data used in an epoch may expire before being used again. All reads will be done from blob storage/network rather than the local cache, which negatively impacts training times.
+ + `Datastore.register_azure_blob_container` now optionally takes a `blob_cache_timeout` value (in seconds) which configures blobfuse's mount parameters to enable cache expiration for this datastore. The default is no timeout, such as when a blob is read, it stays in the local cache until the job is finished. Most jobs prefer this setting, but some jobs need to read more data from a large dataset than will fit on their nodes. For these jobs, tuning this parameter helps them succeed. Take care when tuning this parameter: setting the value too low can result in poor performance, as the data used in an epoch may expire before being used again. All reads are done from blob storage/network rather than the local cache, which negatively impacts training times.
+ Model description can now properly be updated after registration + Model and Image deletion now provides more information about upstream objects that depend on them, which causes the delete to fail + Improve resource utilization of remote runs using azureml.mlflow.
migrate Troubleshoot Changed Block Tracking Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-changed-block-tracking-replication.md
ms. Previously updated : 12/12/2022 Last updated : 04/24/2023
The component trying to replicate data to Azure is either down or not responding
1. [Download](../storage/common/storage-use-azcopy-v10.md) azcopy.
- 2. Look for the appliance Storage Account in the Resource Group. The Storage Account has a name that resembles migrategwsa\*\*\*\*\*\*\*\*\*\*. This is the value of parameter [account] in the above command.
+ 2. Look for the appliance Storage Account in the Resource Group. The Storage Account has a name that resembles *migrategwsa\*\*\*\*\*\*\*\*\*\**. This is the value of the [account] parameter in the above command.
- 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Select **+Container** and create a Container. Leave Public Access Level to the default selected value.
+ 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Select **+Container** and create a Container. Retain the default value for **Public Access Level**.
- 4. Go to **Settings** > **Shared Access Signature**. Select Container in **Allowed Resource Type**.Select Generate SAS and connection string. Copy the SAS value.
+ 4. Go to **Settings** > **Shared Access Signature** and select **Container** in **Allowed Resource Type**.
+
+ 5. Select **Generate SAS and connection string** and copy the SAS token. If you're using PowerShell, ensure you enclose the URL with single quotation marks (**' '**).
- 5. Execute the above command in Command Prompt by replacing account, container, SAS with the values obtained in steps 2, 3, and 4 respectively.
+ 6. Execute the above command in Command Prompt by replacing account, container, and SAS with the values obtained in steps 2, 3, and 5 respectively.
Alternatively, [download](https://go.microsoft.com/fwlink/?linkid=2138967) Azure Storage Explorer onto the appliance and try to upload 10 blobs of ~64 MB into the storage account. If there's no issue, the upload should be successful.
The component trying to replicate data to Azure is either down or not responding
4. Check for connectivity issues between Azure Migrate appliance and Service Bus:
- This test checks if the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there's no issue, this should be successful.
+ > [!Note]
+ > This is applicable only for projects that are set up with a public endpoint.<br/> A Service Bus refers to the ServiceBusNamespace type resource in the resource group for a Migrate project. The name of the Service Bus is of the format *migratelsa(keyvaultsuffix)*. The Migrate key vault suffix is available in the gateway.json file on the appliance. <br/>
+ > For example, if the gateway.json contains: <br/>
+ > *"AzureKeyVaultArmId": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.KeyVault/vaults/migratekv1329610309"*,<br/> the service bus namespace resource will be *migratelsa1329610309*.
+
+ This test checks if the Azure Migrate appliance can communicate with the Azure Migrate Cloud Service backend. The appliance communicates with the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus, and perform the send message/receive message operations. If there's no issue, this should be successful.
**Steps to run the test:**
The possible causes include:
4. **Connectivity issues between Azure Migrate appliance and Azure Service Bus:**
- This test will check whether the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there's no issue, this should be successful.
+ This test checks whether the Azure Migrate appliance can communicate with the Azure Migrate Cloud Service backend. The appliance communicates with the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus, and perform the send message/receive message operations. If there's no issue, this should be successful.
**Steps to run the test:**
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
ms. Previously updated : 12/01/2022 Last updated : 04/24/2023 # ASP.NET app containerization and migration to Azure Kubernetes Service In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
-The Azure Migrate: App Containerization tool currently supports -
+The Azure Migrate: App Containerization tool currently supports:
- Containerizing ASP.NET apps and deploying them on Windows containers on Azure Kubernetes Service.-- Containerizing ASP.NET apps and deploying them on Windows containers on Azure App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)-- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md)-- Containerizing Java Web Apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. [Learn more](./tutorial-app-containerization-java-app-service.md)
+- Containerizing ASP.NET apps and deploying them on Windows containers on Azure App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md).
+- Containerizing Java Web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md).
+- Containerizing Java Web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. [Learn more](./tutorial-app-containerization-java-app-service.md).
-The Azure Migrate: App Containerization tool helps you to -
+The Azure Migrate: App Containerization tool helps you to:
- **Discover your application**: The tool remotely connects to the application servers running your ASP.NET application and discovers the application components. The tool creates a Dockerfile that can be used to create a container image for the application. - **Build the container image**: You can inspect and further customize the Dockerfile as per your application requirements and use that to build your application container image. The application container image is pushed to an Azure Container Registry you specify.
The Azure Migrate: App Containerization tool helps you to -
While all applications won't benefit from a straight shift to containers without significant rearchitecting, some of the benefits of moving existing apps to containers without rewriting include: -- **Improved infrastructure utilization:** With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.-- **Simplified management:** By hosting your applications on a modern managed platform like AKS and App Service, you can simplify your management practices. You can achieve this by retiring or reducing the infrastructure maintenance and management processes that you'd traditionally perform with owned infrastructure.-- **Application portability:** With increased adoption and standardization of container specification formats and platforms, application portability is no longer a concern.-- **Adopt modern management with DevOps:** Helps you adopt and standardize on modern practices for management and security and transition to DevOps.
+- **Improved infrastructure utilization** - With containers, multiple applications can share resources and be hosted on the same infrastructure. This can help you consolidate infrastructure and improve utilization.
+- **Simplified management** - By hosting your applications on a modern managed platform like AKS and App Service, you can simplify your management practices. You can achieve this by retiring or reducing the infrastructure maintenance and management processes that you'd traditionally perform with owned infrastructure.
+- **Application portability** - With increased adoption and standardization of container specification formats and platforms, application portability is no longer a concern.
+- **Adopt modern management with DevOps** - Helps you adopt and standardize on modern practices for management and security and transition to DevOps.
In this tutorial, you'll learn how to:
Before you begin this tutorial, you should:
**Requirement** | **Details** |
-**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6)
-**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and Follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Window Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6)
-**ASP.NET application** | The tool currently supports <br/><br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later.<br/> - Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/> - Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support <br/><br/> - Applications requiring Windows authentication (AKS doesnΓÇÖt support gMSA currently). <br/> - Applications that depend on other Windows services hosted outside IIS.
+**Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6 GB of space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> If the application server is running Windows Server 2008 R2, ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**ASP.NET application** | The tool currently supports:<br/> - ASP.NET applications using Microsoft .NET Framework 3.5 or later. <br/>- Application servers running Windows Server 2008 R2 or later (application servers should be running PowerShell version 5.1). <br/>- Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support: <br/>- Applications requiring Windows authentication (the App Containerization tool currently doesn't support gMSA). <br/>- Applications that depend on other Windows services hosted outside IIS.
## Prepare an Azure user account If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.
-Once your subscription is set up, you'll need an Azure user account with:
-- Owner permissions on the Azure subscription-- Permissions to register Azure Active Directory apps
+Once your subscription is set up, you need an Azure user account with:
+- Owner permissions on the Azure subscription.
+- Permissions to register Azure Active Directory apps.
If you just created a free Azure account, you're the owner of your subscription. If you're not the subscription owner, work with the owner to assign the permissions as follows: 1. In the Azure portal, search for "subscriptions", and under **Services**, select **Subscriptions**.
- ![Search box to search for the Azure subscription.](./media/tutorial-discover-vmware/search-subscription.png)
+ ![Screenshot of search box to search for the Azure subscription.](./media/tutorial-discover-vmware/search-subscription.png)
1. In the **Subscriptions** page, select the subscription in which you want to create an Azure Migrate project.
If you just created a free Azure account, you're the owner of your subscription.
1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- | Setting | Value |
+ | **Setting** | **Value** |
| | | | Role | Owner | | Assign access to | User | | Members | azmigrateuser (in this example) |
- ![Add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot of add role assignment page in Azure portal.](../../includes/role-based-access-control/media/add-role-assignment-page.png)
1. Your Azure account also needs **permissions to register Azure Active Directory apps**.
If you just created a free Azure account, you're the owner of your subscription.
1. In **User settings**, verify that Azure AD users can register applications (set to **Yes** by default).
- ![Verify in User Settings that users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png)
+ ![Screenshot of verification in User Settings if users can register Active Directory apps.](./media/tutorial-discover-vmware/register-apps.png)
1. If the **App registrations** setting is set to **No**, request that the tenant/global admin assign the required permission. Alternatively, the tenant/global admin can assign the **Application Developer** role to an account to allow the registration of Azure Active Directory apps. [Learn more](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
If you just created a free Azure account, you're the owner of your subscription.
4. Select **ASP.NET web apps** as the type of application you want to containerize. 5. To specify target Azure service, select **Containers on Azure Kubernetes Service**.
- ![Default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)
+ ![Screenshot of default load-up for App Containerization tool.](./media/tutorial-containerize-apps-aks/tool-home.png)
-### Complete tool pre-requisites
+### Complete tool prerequisites
1. Accept the **license terms**, and read the third-party information. 6. In the tool web app > **Set up prerequisites**, do the following steps: - **Connectivity**: The tool checks that the Windows machine has internet access. If the machine uses a proxy:
If you just created a free Azure account, you're the owner of your subscription.
1. You'll need a device code to authenticate with Azure. Selecting **Sign in** opens a modal with the device code. 2. Select **Copy code & sign in** to copy the device code and open an Azure sign-in prompt in a new browser tab. If it doesn't appear, make sure you've disabled the pop-up blocker in the browser.
- ![Modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)
+ ![Screenshot of modal showing device code.](./media/tutorial-containerize-apps-aks/login-modal.png)
1. On the new tab, paste the device code and complete the sign in using your Azure account credentials. You can close the browser tab after sign in is complete and return to the App Containerization tool screen. 1. Select the **Azure tenant** that you want to use.
To troubleshoot any issues with the tool, you can look at the log files on the W
## Next steps -- Containerizing ASP.NET web apps and deploying them on Windows containers on App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)-- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md)-- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. [Learn more](./tutorial-app-containerization-java-app-service.md)
+- Containerizing ASP.NET web apps and deploying them on Windows containers on App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md).
+- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md).
+- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on App Service. [Learn more](./tutorial-app-containerization-java-app-service.md).
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## April 2023
+
+- **Known issues**
+
+  [Storage Auto-grow](./concepts-service-tiers-storage.md#storage-auto-grow): When the storage auto-grow feature is enabled and pre-provisioned [IOPS](./concepts-service-tiers-storage.md#iops) are increased, the storage size of the instance might increase unexpectedly. We're actively working to resolve this issue and will provide updates as soon as they're available.
++ ## March 2023 - **Azure Resource Health**
openshift Concepts Ovn Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-ovn-kubernetes.md
The OVN-Kubernetes CNI cluster network provider offers the following features:
> [!NOTE] > As of ARO 4.11, OVN-Kubernetes is the CNI for all ARO clusters. In already existing clusters, migrating from the previous SDN standard to OVN is not supported.
+>
+> If your cluster uses any part of the 100.64.0.0/16 IP address range, you cannot migrate to OVN-Kubernetes because it uses this IP address range internally.
For more information about OVN-Kubernetes CNI network provider, see [About the OVN-Kubernetes default Container Network Interface (CNI) network provider](https://docs.openshift.com/container-platform/latest/networking/ovn_kubernetes_network_provider/about-ovn-kubernetes.html).
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Azure Bot Service
| Product | Unsupported, limited, and/or modified features | Notes | ||--||
-|Azure Machine learning| See [Azure Machine Learning feature availability across Azure in China cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md#azure-china-21vianet). | |
+|Azure Machine Learning| See [Azure Machine Learning feature availability across Azure in China cloud regions](../machine-learning/reference-machine-learning-cloud-parity.md#azure-china-21vianet). | |
| Cognitive | Cognitive
This section outlines variations and considerations when using Networking servic
||--|| | Private Link| <li>For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).<li>For Private DNS zone names, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#government). |
+### Security
+
+This section outlines variations and considerations when using Security services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Microsoft Sentinel| For Microsoft Sentinel availability, see [Microsoft Sentinel availability](../sentinel/feature-availability.md). | |
+ ### Azure Container Apps This section outlines variations and considerations when using Azure Container Apps services.
For IP rangers for Azure in China, download [Azure Datacenter IP Ranges in China
| Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> | | Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn | | Sign in with PowerShell: <br>- Azure classic portal <br>- Azure Resource Manager <br>- Azure AD| - Add-AzureAccount<br>- Connect-AzureRmAccount <br> - Connect-msolservice |  - Add-AzureAccount -Environment AzureChinaCloud <br> - Connect-AzureRmAccount -Environment AzureChinaCloud <br>- Connect-msolservice -AzureEnvironment AzureChinaCloud |
-| Azure Container Apps Default Domain | \*.azurecontainerapps.io | No default domain is provided for external enviromment. The [custom domain](/azure/container-apps/custom-domains-certificates) is required. |
+| Azure Container Apps Default Domain | \*.azurecontainerapps.io | No default domain is provided for an external environment. The [custom domain](/azure/container-apps/custom-domains-certificates) is required. |
| Azure Container Apps Event Stream Endpoint | \<region\>.azurecontainerapps.dev | \<region\>.chinanorth3.azurecontainerapps-dev.cn | ### Application Insights
search Cognitive Search Incremental Indexing Conceptual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-incremental-indexing-conceptual.md
REST API version `2020-06-30-Preview` or later provides incremental enrichment t
## Limitations
-If you are using [SharePoint indexer (Preview](search-howto-index-sharepoint-online.md), it is not recommended that the Incremental enrichment feature is used. There are conditions that may rise when indexing with this preview feature that would require to reset the indexer and invalidate the cache.
+> [!CAUTION]
+> If you're using the [SharePoint Online indexer (Preview)](search-howto-index-sharepoint-online.md), you should avoid incremental enrichment. Under certain circumstances, the cache becomes invalid, and reloading it requires an [indexer reset and run](search-howto-run-reset-indexers.md).
## Next steps
search Search Howto Incremental Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-incremental-index.md
Azure Storage is used to store cached enrichments. The storage account must be [
Preview APIs or beta Azure SDKs are required for enabling cache on an indexer. The portal does not currently provide an option for caching enrichment.
+> [!CAUTION]
+> If you're using the [SharePoint Online indexer (Preview)](search-howto-index-sharepoint-online.md), you should avoid incremental enrichment. Under certain circumstances, the cache becomes invalid, and reloading it requires an [indexer reset and run](search-howto-run-reset-indexers.md).
+ ## Enable on new indexers On new indexers, add the "cache" property in the indexer definition payload when calling [Create or Update Indexer (2021-04-30-Preview)](/rest/api/searchservice/preview-api/create-or-update-indexer). You can also use the previous preview API version, 2020-06-30-Preview.
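As a rough illustration of the payload shape, here's a minimal sketch that sets the "cache" property through the preview REST API using Python; the service name, indexer definition fields, and keys are hypothetical placeholders:

```python
import requests

# Hypothetical service, indexer, and resource names.
url = ("https://<search-service>.search.windows.net/indexers/my-indexer"
       "?api-version=2021-04-30-Preview")
indexer = {
    "name": "my-indexer",
    "dataSourceName": "my-datasource",
    "targetIndexName": "my-index",
    "skillsetName": "my-skillset",
    # Enrichment cache: cached results are stored in the given storage account.
    "cache": {
        "storageConnectionString": "<azure-storage-connection-string>",
        "enableReprocessing": True,
    },
}
response = requests.put(
    url,
    json=indexer,
    headers={"api-key": "<admin-api-key>", "Content-Type": "application/json"},
)
response.raise_for_status()
```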
sentinel Domain Based Essential Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/domain-based-essential-solutions.md
These essential solutions like other Microsoft Sentinel domain solutions don't h
## Next steps -- [Find ASIM-based domain essential solutions like the Network Session Essentials](sentinel-solutions-catalog.md)-- [Using the Advanced Security Information Model (ASIM)](/azure/sentinel/normalization-about-parsers)
+- [Find ASIM-based domain essential solutions](sentinel-solutions-catalog.md) like the Network Session Essentials and DNS Essentials Solution for Microsoft Sentinel
+- [Using the Advanced Security Information Model (ASIM)](/azure/sentinel/normalization-about-parsers)
sentinel Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md
+
+ Title: Cloud feature availability in Microsoft Sentinel
+description: This article describes feature availability in Microsoft Sentinel across different Azure environments.
++++ Last updated : 02/02/2023++
+# Cloud feature availability in Microsoft Sentinel
+
+This article describes feature availability in Microsoft Sentinel across different Azure environments.
+
+## Analytics
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Analytics rules health](monitor-analytics-rule-integrity.md) |Public Preview |&#10060; |
+|[MITRE ATT&CK dashboard](mitre-coverage.md) |Public Preview |&#10060; |
+|[NRT rules](near-real-time-rules.md) |Public Preview |&#x2705; |
+|[Recommendations](detection-tuning.md) |Public Preview |&#10060; |
+|[Scheduled](detect-threats-built-in.md) and [Microsoft rules](create-incidents-from-alerts.md) |GA |&#x2705; |
+
+## Content and content management
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Content hub](sentinel-solutions.md) and [solutions](sentinel-solutions-catalog.md) |Public Preview |&#10060; |
+|[Repositories](ci-cd.md?tabs=github) |Public Preview |&#10060; |
+|[Workbooks](monitor-your-data.md) |GA |&#x2705; |
+
+## Data collection
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Amazon Web Services](connect-aws.md?tabs=ct) |GA |&#10060; |
+|[Amazon Web Services S3 (Preview)](connect-aws.md?tabs=s3) |Public Preview |&#10060; |
+|[Azure Active Directory](connect-azure-active-directory.md) |GA |&#x2705; <sup>[1](#logsavailable)</sup> |
+|[Azure Active Directory Identity Protection](connect-services-api-based.md) |GA |&#10060; |
+|[Azure Activity](data-connectors/azure-activity.md) |GA |&#x2705; |
+|[Azure DDoS Protection](connect-services-diagnostic-setting-based.md) |GA |&#10060; |
+|[Azure Firewall](data-connectors/azure-firewall.md) |GA |&#x2705; |
+|[Azure Information Protection (Preview)](data-connectors/azure-information-protection.md) |Deprecated |&#10060; |
+|[Azure Key Vault](data-connectors/azure-key-vault.md) |Public Preview |&#x2705; |
+|[Azure Kubernetes Service (AKS)](data-connectors/azure-kubernetes-service-aks.md) |Public Preview |&#x2705; |
+|[Azure SQL Databases](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/azure-sentinel-sql-solution-query-deep-dive/ba-p/2597961) |GA |&#x2705; |
+|[Azure Web Application Firewall (WAF)](data-connectors/azure-web-application-firewall-waf.md) |GA |&#x2705; |
+|[Cisco ASA](data-connectors/cisco-asa.md) |GA |&#x2705; |
+|[Codeless Connectors Platform](create-codeless-connector.md?tabs=deploy-via-arm-template%2Cconnect-via-the-azure-portal) |Public Preview |&#10060; |
+|[Common Event Format (CEF)](connect-common-event-format.md) |GA |&#x2705; |
+|[Common Event Format (CEF) via AMA (Preview)](connect-cef-ama.md) |Public Preview |&#x2705; |
+|[Data Connectors health](monitor-data-connector-health.md#use-the-sentinelhealth-data-table-public-preview) |Public Preview |&#10060; |
+|[DNS](data-connectors/dns.md) |Public Preview |&#x2705; |
+|[GCP Pub/Sub Audit Logs](connect-google-cloud-platform.md) |Public Preview |&#10060; |
+|[Microsoft 365 Defender](connect-microsoft-365-defender.md?tabs=MDE) |GA |&#10060; |
+|[Microsoft Purview Insider Risk Management (Preview)](sentinel-solutions-catalog.md#domain-solutions) |Public Preview |&#10060; |
+|[Microsoft Defender for Cloud](connect-defender-for-cloud.md) |GA |&#x2705; |
+|[Microsoft Defender for IoT](connect-services-api-based.md) |GA |&#10060; |
+|[Microsoft Power BI (Preview)](data-connectors/microsoft-powerbi.md) |Public Preview |&#10060; |
+|[Microsoft Project (Preview)](data-connectors/microsoft-project.md) |Public Preview |&#10060; |
+|[Microsoft Purview (Preview)](connect-services-diagnostic-setting-based.md) |Public Preview |&#10060; |
+|[Microsoft Purview Information Protection](connect-microsoft-purview.md) |Public Preview |&#10060; |
+|[Office 365](connect-services-api-based.md) |GA |&#x2705; |
+|[Security Events via Legacy Agent](connect-services-windows-based.md#log-analytics-agent-legacy) |GA |&#x2705; |
+|[Syslog](connect-syslog.md) |GA |&#x2705; |
+|[Windows DNS Events via AMA (Preview)](connect-dns-ama.md) |Public Preview |&#10060; |
+|[Windows Firewall](data-connectors/windows-firewall.md) |GA |&#x2705; |
+|[Windows Forwarded Events (Preview)](connect-services-windows-based.md) |Public Preview |&#x2705; |
+|[Windows Security Events via AMA](connect-services-windows-based.md) |GA |&#x2705; |
+
+<sup><a name="logsavailable"></a>1</sup> Supports only sign-in logs and audit logs.
+
+## Hunting
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Hunting blade](hunting.md) |GA |&#x2705; |
+|[Restore historical data](restore.md) |GA |&#x2705; |
+|[Search large datasets](search-jobs.md) |GA |&#x2705; |
+
+## Incidents
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Add entities to threat intelligence](add-entity-to-threat-intelligence.md?tabs=incidents) |Public Preview |&#10060; |
+|[Advanced and/or conditions](add-advanced-conditions-to-automation-rules.md) |Public Preview |&#x2705; |
+|[Automation rules](automate-incident-handling-with-automation-rules.md) |Public Preview |&#x2705; |
+|[Automation rules health](monitor-automation-health.md) |Public Preview |&#10060; |
+|[Create incidents manually](create-incident-manually.md) |Public Preview |&#x2705; |
+|[Cross-tenant/Cross-workspace incidents view](multiple-workspace-view.md) |GA |&#x2705; |
+|[Incident advanced search](investigate-cases.md#search-for-incidents) |GA |&#x2705; |
+|[Incident tasks](incident-tasks.md) |Public Preview |&#x2705; |
+|[Microsoft 365 Defender incident integration](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) |Public Preview |&#10060; |
+|[Microsoft Teams integrations](collaborate-in-microsoft-teams.md) |Public Preview |&#10060; |
+|[Playbook template gallery](use-playbook-templates.md) |Public Preview |&#10060; |
+|[Run playbooks on entities](respond-threats-during-investigation.md) |Public Preview |&#10060; |
+|[Run playbooks on incidents](automate-responses-with-playbooks.md) |Public Preview |&#x2705; |
+|[SOC incident audit metrics](manage-soc-with-incident-metrics.md) |GA |&#x2705; |
+
+## Machine Learning
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Anomalous RDP login detection - built-in ML detection](configure-connector-login-detection.md) |Public Preview |&#x2705; |
+|[Anomalous SSH login detection - built-in ML detection](connect-syslog.md#configure-the-syslog-connector-for-anomalous-ssh-login-detection) |Public Preview |&#x2705; |
+|[Bring Your Own ML (BYO-ML)](bring-your-own-ml.md) |Public Preview |&#10060; |
+|[Fusion](fusion.md) - advanced multistage attack detections <sup>[1](#partialga)</sup> |GA |&#x2705; |
+|[Fusion detection for ransomware](fusion.md#fusion-for-ransomware) |Public Preview |&#x2705; |
+|[Fusion for emerging threats](fusion.md#fusion-for-emerging-threats) |Public Preview |&#x2705; |
+
+<sup><a name="partialga"></a>1</sup> Partially GA: The ability to disable specific findings from vulnerability scans is in public preview.
+
+## Normalization
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Advanced Security Information Model (ASIM)](normalization.md) |Public Preview |&#x2705; |
+
+## Notebooks
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Notebooks](notebooks.md) |GA |&#x2705; |
+|[Notebook integration with Azure Synapse](notebooks-with-synapse.md) |Public Preview |&#x2705; |
+
+## SAP
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Threat protection for SAP](sap/deployment-overview.md)<sup>[1](#sap)</sup> |GA |&#x2705; |
+
+<sup><a name="sap"></a>1</sup> Deploy SAP security content [via GitHub](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP).
+
+## Threat intelligence support
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[GeoLocation and WhoIs data enrichment](work-with-threat-indicators.md) |Public Preview |&#10060; |
+|[Import TI from flat file](indicators-bulk-file-import.md) |Public Preview |&#x2705; |
+|[Threat intelligence matching analytics](use-matching-analytics-to-detect-threats.md) |Public Preview |&#10060; |
+|[Threat Intelligence Platform data connector](understand-threat-intelligence.md) |Public Preview |&#x2705; |
+|[Threat Intelligence Research blade](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/what-s-new-threat-intelligence-menu-item-in-public-preview/ba-p/1646597) |GA |&#x2705; |
+|[Threat Intelligence - TAXII data connector](understand-threat-intelligence.md) |GA |&#x2705; |
+|[Threat Intelligence workbook](/azure/architecture/example-scenario/data/sentinel-threat-intelligence) |GA |&#x2705; |
+|[URL detonation](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/using-the-new-built-in-url-detonation-in-azure-sentinel/ba-p/996229) |Public Preview |&#10060; |
+
+## UEBA
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Active Directory sync via MDI](enable-entity-behavior-analytics.md#how-to-enable-user-and-entity-behavior-analytics) |Public Preview |&#10060; |
+|[Azure resource entity pages](entity-pages.md) |Public Preview |&#10060; |
+|[Entity insights](identify-threats-with-entity-behavior-analytics.md) |GA |&#x2705; |
+|[Entity pages](entity-pages.md) |GA |&#x2705; |
+|[Identity info table data ingestion](investigate-with-ueba.md) |GA |&#x2705; |
+|[IoT device entity page](/azure/defender-for-iot/organizations/iot-advanced-threat-monitoring#investigate-further-with-iot-device-entities) |Public Preview |&#10060; |
+|[Peer/Blast radius enrichments](identify-threats-with-entity-behavior-analytics.md#what-is-user-and-entity-behavior-analytics-ueba) |Public Preview |&#10060; |
+|[SOC-ML anomalies](soc-ml-anomalies.md#what-are-customizable-anomalies) |GA |&#10060; |
+|[UEBA anomalies](soc-ml-anomalies.md#ueba-anomalies) |GA |&#10060; |
+|[UEBA enrichments/insights](investigate-with-ueba.md) |GA |&#x2705; |
+
+## Watchlists
+
+|Feature |Azure commercial |Azure China 21Vianet |
+||||
+|[Large watchlists from Azure Storage](watchlists.md) |Public Preview |&#10060; |
+|[Watchlists](watchlists.md) |GA |&#x2705; |
+|[Watchlist templates](watchlist-schemas.md) |Public Preview |&#10060; |
+
+## Next steps
+
+In this article, you learned about available features in Microsoft Sentinel.
+
+- [Learn about Microsoft Sentinel](overview.md)
+- [Plan your Microsoft Sentinel architecture](design-your-workspace-architecture.md)
sentinel Sentinel Solutions Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions-catalog.md
When you deploy a solution, the security content included with the solution, suc
| **[Deception Honey Tokens](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinelhoneytokens.azuresentinelhoneytokens?tab=Overview)** | [Workbooks, analytics rules, playbooks](monitor-key-vault-honeytokens.md) | Security - Threat Protection |Microsoft Sentinel community | |**[Dev 0270 Detection and Hunting](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-dev0270detectionandhunting?tab=Overview)**|[Analytic rules](https://www.microsoft.com/security/blog/2022/09/07/profiling-dev-0270-phosphorus-ransomware-operations/)|Security - Threat Protection|Microsoft| |**[Dev-0537 Detection and Hunting](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-dev0537detectionandhunting?tab=Overview)**||Security - Threat Protection|Microsoft|
+|**[DNS Essentials Solution](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-dns-domain?tab=Overview)**|[Analytics rules, hunting queries, playbooks, workbook](domain-based-essential-solutions.md)|Security - Network | Microsoft|
|**[Endpoint Threat Protection Essentials](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-endpointthreat?tab=Overview)**|Analytic rules, hunting queries|Security - Threat Protection|Microsoft| |**[Legacy IOC based Threat Protection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-ioclegacy?tab=Overview)**|Analytic rules, hunting queries|Security - Threat Protection|Microsoft| |**[Log4j Vulnerability Detection](https://azuremarketplace.microsoft.com/marketplace/apps/azuresentinel.azure-sentinel-solution-apachelog4jvulnerability?tab=Overview)**|Workbooks, analytic rules, hunting queries, watchlists, playbooks|Application, Security - Automation (SOAR), Security - Threat Protection, Security - Vulnerability Management|Microsoft|
service-bus-messaging Message Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sessions.md
Multiple applications can send their requests to a single request queue, with a
## Sequencing vs. sessions [Sequence number](message-sequencing.md) on its own guarantees the queuing order and the extractor order of messages, but not the processing order, which requires sessions.
-Say, there are three messages in the queue and two consumers. Consumer 1 picks up message 1. Consumer 2 picks up message 2. Consumer 2 finishes processing message 2 and picks up message 3 while Consumer 1 isn't done with processing message 1 yet. Consumer 2 finishes processing message 3 but consumer 1 is still not done with processing message 1 yet. Finally, consumer 1 completes processing message 1. So, the messages are processed in this order: message 2, message 3, and message 1. If you need message 1, 2, and 3 to be processed in order, you need to use sessions.
+Say there are three messages in the queue and two consumers.
+1. Consumer 1 picks up message 1.
+1. Consumer 2 picks up message 2.
+1. Consumer 2 finishes processing message 2 and picks up message 3, while Consumer 1 isn't done processing message 1 yet.
+1. Consumer 2 finishes processing message 3, but Consumer 1 still isn't done with message 1.
+1. Finally, Consumer 1 completes processing message 1.
+
+So, the messages are processed in this order: message 2, message 3, and message 1. If you need messages 1, 2, and 3 to be processed in order, you need to use sessions.
-So, if messages just need to be retrieved in order, you don't need to use sessions. If messages need to be processed in order, use sessions. The same session ID should be set on messages that belong together, which could be message 1, 4, and 8 in a set, and 2, 3, and 6 in another set.
+If messages just need to be retrieved in order, you don't need to use sessions. If messages need to be processed in order, use sessions. The same session ID should be set on messages that belong together, which could be message 1, 4, and 8 in a set, and 2, 3, and 6 in another set.
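To illustrate with the azure-servicebus Python library, here's a minimal sketch, assuming a session-enabled queue named "orders" and a placeholder connection string; messages that share a session ID are delivered to a single session receiver in order:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

with ServiceBusClient.from_connection_string("<connection-string>") as client:
    # Messages that must be processed in order share the same session ID.
    with client.get_queue_sender("orders") as sender:
        sender.send_messages([
            ServiceBusMessage(f"order-123 step {i}", session_id="order-123")
            for i in (1, 2, 3)
        ])

    # The receiver locks the session, so one consumer processes its messages in order.
    with client.get_queue_receiver("orders", session_id="order-123") as receiver:
        for message in receiver:
            receiver.complete_message(message)
```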
## Next steps You can enable message sessions while creating a queue using Azure portal, PowerShell, CLI, Resource Manager template, .NET, Java, Python, and JavaScript. For more information, see [Enable message sessions](enable-message-sessions.md).
static-web-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/overview.md
With Static Web Apps, static assets are separated from a traditional web server
## Key features - **Web hosting** for static content like HTML, CSS, JavaScript, and images.-- **Integrated API** support provided by Azure Functions with the option to link an existing Azure Functions app using a standard account.
+- **Integrated API** support provided by managed Azure Functions, with the option to link an existing function app, web app, container app, or API Management instance using a standard account. If you need your API in a region that doesn't support [managed functions](apis-functions.md), you can [bring your own functions](functions-bring-your-own.md) to your app.
- **First-class GitHub and Azure DevOps integration** that allows repository changes to trigger builds and deployments. - **Globally distributed** static content, putting content closer to your users. - **Free SSL certificates**, which are automatically renewed.
storage Storage Blob Download Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md
# Download a blob with Java
-This article shows how to download a blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can download a blob by using any of the following methods:
+This article shows how to download a blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can download blob data to various destinations, including a local file path, stream, or text string. You can also open a blob stream and read from it.
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
+ - [Get Blob](/rest/api/storageservices/get-blob#authorization)
+- The package **azure-storage-blob** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md#set-up-your-project).
+
+## Download a blob
+
+You can use any of the following methods to download a blob:
- [downloadContent](/java/api/com.azure.storage.blob.specialized.blobclientbase) - [downloadStream](/java/api/com.azure.storage.blob.specialized.blobclientbase)
This article shows how to download a blob using the [Azure Storage client librar
## Download to a file path
-The following example downloads a blob by using a file path:
+The following example downloads a blob to a local file path:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java" id="Snippet_DownloadBLobFile"::: ## Download to a stream
-The following example downloads a blob to an `OutputStream`:
+The following example downloads a blob to an `OutputStream` object:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java" id="Snippet_DownloadBLobStream"::: ## Download to a string
-The following example downloads a blob to a `String` object. This example assumes that the blob is a text file.
+The following example assumes that the blob is a text file, and downloads the blob to a `String` object:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobDownload.java" id="Snippet_DownloadBLobText":::
storage Storage Blob Download Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md
Previously updated : 11/30/2022 Last updated : 04/21/2023
# Download a blob with JavaScript
-This article shows how to download a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can download a blob by using any of the following methods:
+This article shows how to download a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can download blob data to various destinations, including a local file path, stream, or text string.
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
+ - [Get Blob](/rest/api/storageservices/get-blob#authorization)
+- The package **@azure/storage-blob** installed to your project directory. To learn more about setting up your project, see [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md).
+
+## Download a blob
+
+You can use any of the following methods to download a blob:
- [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) - [BlobClient.downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1) (only available in Node.js runtime) - [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) (only available in Node.js runtime)-
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
## Download to a file path
storage Storage Blob Download Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md
Previously updated : 01/24/2023 Last updated : 04/21/2023
# Download a blob with Python
-This article shows how to download a blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can download a blob by using the following method:
+This article shows how to download a blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can download blob data to various destinations, including a local file path, stream, or text string. You can also open a blob stream and read from it.
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
+ - [Get Blob](/rest/api/storageservices/get-blob#authorization)
+- The package **azure-storage-blob** installed to your project directory. To learn more about setting up your project, see [Get started with Azure Blob Storage and Python](storage-blob-python-get-started.md).
+
+## Download a blob
+
+You can use the following method to download a blob:
- [BlobClient.download_blob](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-download-blob)
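For context, a minimal sketch of downloading to a local file with `download_blob`, assuming placeholder account, container, and blob names:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Hypothetical account, container, and blob names.
service_client = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob_client = service_client.get_blob_client(container="sample-container", blob="sample-blob.txt")

with open("./sample-blob.txt", "wb") as local_file:
    downloader = blob_client.download_blob()  # returns a StorageStreamDownloader
    local_file.write(downloader.readall())
```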
storage Storage Blob Download Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md
Previously updated : 03/21/2023 Last updated : 04/21/2023
# Download a blob with TypeScript
-This article shows how to download a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can download a blob by using any of the following methods:
+This article shows how to download a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can download blob data to various destinations, including a local file path, stream, or text string.
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
+ - [Get Blob](/rest/api/storageservices/get-blob#authorization)
+- The package **@azure/storage-blob** installed to your project directory. To learn more about setting up your project, see [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md).
+
+## Download a blob
+
+You can use any of the following methods to download a blob:
- [BlobClient.download](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-download) - [BlobClient.downloadToBuffer](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtobuffer-1) (only available in Node.js runtime) - [BlobClient.downloadToFile](/javascript/api/@azure/storage-blob/blobclient#@azure-storage-blob-blobclient-downloadtofile) (only available in Node.js runtime)-
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and TypeScript](storage-blob-typescript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with TypeScript](storage-blob-container-create-typescript.md).
## Download to a file path
storage Storage Blob Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md
Previously updated : 03/28/2022 Last updated : 04/21/2023
# Download a blob with .NET
-This article shows how to download a blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can download a blob by using any of the following methods:
+This article shows how to download a blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can download blob data to various destinations, including a local file path, stream, or text string. You can also open a blob stream and read from it.
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform a download operation. To learn more, see the authorization guidance for the following REST API operation:
+ - [Get Blob](/rest/api/storageservices/get-blob#authorization)
+- The package **Azure.Storage.Blobs** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project).
+
+## Download a blob
+
+You can use any of the following methods to download a blob:
- [DownloadTo](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadto) - [DownloadToAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadtoasync) - [DownloadContent](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontent) - [DownloadContentAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.downloadcontentasync)
-You can also open a stream to read from a blob. The stream will only download the blob as the stream is read from. Use either of the following methods:
+You can also open a stream to read from a blob. The stream only downloads the blob as the stream is read from. You can use either of the following methods:
- [OpenRead](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.openread) - [OpenReadAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.openreadasync)-
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md) article.
## Download to a file path
-The following example downloads a blob by using a file path. If the specified directory does not exist, handle the exception and notify the user.
-
-```csharp
-public static async Task DownloadBlob(BlobClient blobClient, string localFilePath)
-{
- try
- {
- await blobClient.DownloadToAsync(localFilePath);
- }
- catch (DirectoryNotFoundException ex)
- {
- // Let the user know that the directory does not exist
- Console.WriteLine($"Directory not found: {ex.Message}");
- }
-}
-```
-
-If the file already exists at `localFilePath`, it will be overwritten by default during subsequent downloads.
+The following example downloads a blob to a local file path. If the specified directory doesn't exist, the code throws a [DirectoryNotFoundException](/dotnet/api/system.io.directorynotfoundexception). If the file already exists at `localFilePath`, it's overwritten by default during subsequent downloads.
+ ## Download to a stream
-The following example downloads a blob by creating a [Stream](/dotnet/api/system.io.stream) object and then downloads to that stream. If the specified directory does not exist, handle the exception and notify the user.
-
-```csharp
-public static async Task DownloadToStream(BlobClient blobClient, string localFilePath)
-{
- try
- {
- FileStream fileStream = File.OpenWrite(localFilePath);
- await blobClient.DownloadToAsync(fileStream);
- fileStream.Close();
- }
- catch (DirectoryNotFoundException ex)
- {
- // Let the user know that the directory does not exist
- Console.WriteLine($"Directory not found: {ex.Message}");
- }
-}
-```
+The following example downloads a blob by creating a [Stream](/dotnet/api/system.io.stream) object and then downloads to that stream. If the specified directory doesn't exist, the code throws a [DirectoryNotFoundException](/dotnet/api/system.io.directorynotfoundexception).
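As a rough sketch of this pattern, again assuming an existing `BlobClient` (names are illustrative; `File.Create` is used so an existing file is truncated before writing):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task DownloadBlobToStreamAsync(BlobClient blobClient, string localFilePath)
{
    // File.Create truncates any existing file, so stale trailing bytes aren't left behind
    using (FileStream fileStream = File.Create(localFilePath))
    {
        await blobClient.DownloadToAsync(fileStream);
    }
}
```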
+ ## Download to a string
-The following example downloads a blob to a string. This example assumes that the blob is a text file.
+The following example assumes that the blob is a text file and downloads the blob to a string:
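A minimal sketch, assuming the blob contains UTF-8 text and an existing `BlobClient` (the method name is illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task DownloadBlobToStringAsync(BlobClient blobClient)
{
    // DownloadContentAsync buffers the entire blob in memory,
    // so it's best suited to small blobs
    BlobDownloadResult downloadResult = await blobClient.DownloadContentAsync();
    string downloadedData = downloadResult.Content.ToString();
    Console.WriteLine($"Downloaded data: {downloadedData}");
}
```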
-```csharp
-public static async Task DownloadToText(BlobClient blobClient)
-{
- BlobDownloadResult downloadResult = await blobClient.DownloadContentAsync();
- string downloadedData = downloadResult.Content.ToString();
- Console.WriteLine("Downloaded data:", downloadedData);
-}
-```
## Download from a stream
-The following example downloads a blob by reading from a stream.
-
-```csharp
-public static async Task DownloadfromStream(BlobClient blobClient, string localFilePath)
-{
- using (var stream = await blobClient.OpenReadAsync())
- {
- FileStream fileStream = File.OpenWrite(localFilePath);
- await stream.CopyToAsync(fileStream);
- }
-}
+The following example downloads a blob by reading from a stream:
-```
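A minimal sketch of this pattern using `OpenReadAsync`, assuming an existing `BlobClient` (names are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task DownloadBlobFromStreamAsync(BlobClient blobClient, string localFilePath)
{
    using (Stream blobStream = await blobClient.OpenReadAsync())
    using (FileStream fileStream = File.Create(localFilePath))
    {
        // The blob is fetched incrementally as blobStream is read
        await blobStream.CopyToAsync(fileStream);
    }
}
```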
## Resources
The Azure SDK for .NET contains libraries that build on top of the Azure REST AP
- [Get Blob](/rest/api/storageservices/get-blob) (REST API)
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/DownloadBlob.cs)
+ [!INCLUDE [storage-dev-guide-resources-dotnet](../../../includes/storage-dev-guides/storage-dev-guide-resources-dotnet.md)]
+### See also
+
+- [Performance tuning for uploads and downloads](storage-blobs-tune-upload-download.md)
storage Storage Blob Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md
Previously updated : 11/16/2022 Last updated : 04/21/2023
# Upload a block blob with Java
-This article shows how to upload a block blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can upload a blob, open a blob stream and write to the stream, or upload blobs with index tags.
+This article shows how to upload a block blob using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can upload data to a block blob from a file path, a stream, a binary object, or a text string. You can also upload blobs with index tags.
-Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with Java](storage-blob-container-create-java.md).
+## Prerequisites
-To upload a blob using a stream or a binary object, use the following method:
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob](/rest/api/storageservices/put-blob#authorization)
+ - [Put Block](/rest/api/storageservices/put-block#authorization)
+- The package **azure-storage-blob** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and Java](storage-blob-java-get-started.md#set-up-your-project).
+
+## Upload data to a block blob
+
+To upload a block blob from a stream or a binary object, use the following method:
- [upload](/java/api/com.azure.storage.blob.blobclient)
-To upload a blob using a file path, use the following method:
+To upload a block blob from a file path, use the following method:
- [uploadFromFile](/java/api/com.azure.storage.blob.blobclient) Each of these methods can be called using a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object or a [BlockBlobClient](/java/api/com.azure.storage.blob.specialized.blockblobclient) object.
-## Upload data to a block blob
+## Upload a block blob from a local file path
-The following example uploads `BinaryData` to a blob using a `BlobClient` object:
+The following example uploads a file to a block blob using a `BlobClient` object:
## Upload a block blob from a stream
-The following example uploads a blob by creating a `ByteArrayInputStream` object, then uploading that stream object:
+The following example uploads a block blob by creating a `ByteArrayInputStream` object, then uploading that stream object:
:::code language="java" source="~/azure-storage-snippets/blobs/howto/Java/blob-devguide/blob-devguide-blobs/src/main/java/com/blobs/devguide/blobs/BlobUpload.java" id="Snippet_UploadBlobStream":::
-## Upload a block blob from a local file path
+## Upload a block blob from a BinaryData object
-The following example uploads a file to a blob using a `BlobClient` object:
+The following example uploads `BinaryData` to a block blob using a `BlobClient` object:
## Upload a block blob with index tags
To learn more about uploading blobs using the Azure Blob Storage client library
The Azure SDK for Java contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Java paradigms. The client library methods for uploading blobs use the following REST API operations: - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block](/rest/api/storageservices/put-block) (REST API)
### Code samples
storage Storage Blob Upload Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 07/18/2022 Last updated : 04/21/2023
# Upload a blob with JavaScript
-This article shows how to upload a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can upload a blob, open a blob stream and write to that, or upload large blobs in blocks.
+This article shows how to upload a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can upload data to a block blob from a file path, a stream, a buffer, or a text string. You can also upload blobs with index tags.
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
+## Prerequisites
-## Upload by blob client
+To work with the code examples in this article, make sure you have:
-Use the following table to find the correct upload method based on the blob client.
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob](/rest/api/storageservices/put-blob#authorization)
+ - [Put Block](/rest/api/storageservices/put-block#authorization)
+- The package **@azure/storage-blob** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and JavaScript](storage-blob-javascript-get-started.md#set-up-your-project).
-|Client|Upload method|
-|--|--|
-|[BlobClient](/javascript/api/@azure/storage-blob/blobclient)|The SDK needs to know the blob type you want to upload to. Because BlobClient is the base class for the other Blob clients, it does not have upload methods. It is mostly useful for operations that are common to the child blob classes. For uploading, create specific blob clients directly or get specific blob clients from ContainerClient.|
-|[BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient)|This is the **most common upload client**:<br>* upload()<br>* stageBlock() and commitBlockList()|
-|[AppendBlobClient](/javascript/api/@azure/storage-blob/appendblobclient)|* create()<br>* append()|
-|[PageBlobClient](/javascript/api/@azure/storage-blob/pageblobclient)|* create()<br>* appendPages()|
+## Upload data to a block blob
-## <a name="upload-by-using-a-file-path"></a>Upload with BlockBlobClient by using a file path
+You can use any of the following methods to upload data to a block blob:
+
+- [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) (non-parallel uploading method)
+- [uploadData](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploaddata)
+- [uploadFile](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadfile) (only available in Node.js runtime)
+- [uploadStream](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadstream) (only available in Node.js runtime)
+
+Each of these methods can be called using a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object.
+
+## Upload a block blob from a file path
The following example uploads a local file to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. The [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) object allows you to pass in your own metadata and [tags](storage-manage-find-blobs.md#blob-index-tags-and-data-management), used for indexing, at upload time: :::code language="javascript" source="~/azure_storage-snippets/blobs/howto/JavaScript/NodeJS-v12/dev-guide/upload-blob-from-local-file-path.js" id="Snippet_UploadBlob" highlight="14":::
-## <a name="upload-by-using-a-stream"></a>Upload with BlockBlobClient by using a Stream
+## Upload a block blob from a stream
The following example uploads a readable stream to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadStream [options](/javascript/api/@azure/storage-blob/blockblobuploadstreamoptions) to affect the upload:
const uploadOptions = {
await createBlobFromReadStream(containerClient, `my-text-file.txt`, readableStream, uploadOptions); ```
-## <a name="upload-by-using-a-binarydata-object"></a>Upload with BlockBlobClient by using a BinaryData object
+## Upload a block blob from a buffer
The following example uploads a Node.js buffer to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobParallelUpload [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) to affect the upload:
const uploadOptions = {
createBlobFromBuffer(containerClient, `daisies.jpg`, buffer, uploadOptions) ```
-## <a name="upload-a-string"></a>Upload a string with BlockBlobClient
+## Upload a block blob from a string
The following example uploads a string to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadOptions [options](/javascript/api/@azure/storage-blob/blockblobuploadoptions) to affect the upload:
To learn more about uploading blobs using the Azure Blob Storage client library
The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for uploading blobs use the following REST API operations: - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block](/rest/api/storageservices/put-block) (REST API)
### Code samples
storage Storage Blob Upload Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md
Previously updated : 01/19/2023 Last updated : 04/21/2023
# Upload a block blob with Python
-This article shows how to upload a blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can upload a blob, open a blob stream and write to the stream, or upload blobs with index tags.
+This article shows how to upload a blob using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can upload data to a block blob from a file path, a stream, a binary object, or a text string. You can also upload blobs with index tags.
-Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with Python](storage-blob-container-create-python.md).
+## Prerequisites
-To upload a blob using a stream or a binary object, use the following method:
+To work with the code examples in this article, make sure you have:
-- [BlobClient.upload_blob](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-upload-blob)
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob](/rest/api/storageservices/put-blob#authorization)
+ - [Put Block](/rest/api/storageservices/put-block#authorization)
+- The package **azure-storage-blob** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and Python](storage-blob-python-get-started.md#set-up-your-project).
-To upload a blob from a given URL, use the following method:
+## Upload data to a block blob
-- [BlobClient.upload_blob_from_url](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-upload-blob-from-url)
+To upload a blob using a stream or a binary object, use the following method:
-## Upload data to a block blob
+- [BlobClient.upload_blob](/python/api/azure-storage-blob/azure.storage.blob.blobclient#azure-storage-blob-blobclient-upload-blob)
-The following example uploads data to a block blob using a `BlobClient` object:
+## Upload a block blob from a local file path
+The following example uploads a file to a block blob using a `BlobClient` object:
+ ## Upload a block blob from a stream
The following example creates random bytes of data and uploads a `BytesIO` objec
:::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py" id="Snippet_upload_blob_stream":::
-## Upload a block blob from a local file path
+## Upload binary data to a block blob
-The following example uploads a file to a block blob using a `BlobClient` object:
+The following example uploads binary data to a block blob using a `BlobClient` object:
## Upload a block blob with index tags
To learn more about uploading blobs using the Azure Blob Storage client library
The Azure SDK for Python contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Python paradigms. The client library methods for uploading blobs use the following REST API operations: - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block](/rest/api/storageservices/put-block) (REST API)
### Code samples
storage Storage Blob Upload Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md
description: Learn how to upload a blob with TypeScript to your Azure Storage ac
Previously updated : 03/21/2023 Last updated : 04/21/2023
# Upload a blob with TypeScript
-This article shows how to upload a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can upload a blob, open a blob stream and write to that, or upload large blobs in blocks.
+This article shows how to upload a blob using the [Azure Storage client library for JavaScript](https://www.npmjs.com/package/@azure/storage-blob). You can upload data to a block blob from a file path, a stream, a buffer, or a text string. You can also upload blobs with index tags.
-> [!NOTE]
-> The examples in this article assume that you've created a [BlobServiceClient](/javascript/api/@azure/storage-blob/blobserviceclient) object by using the guidance in the [Get started with Azure Blob Storage and JavaScript](storage-blob-javascript-get-started.md) article. Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. To learn how to create a container, see [Create a container in Azure Storage with JavaScript](storage-blob-container-create.md).
+## Prerequisites
-## Upload by blob client
+To work with the code examples in this article, make sure you have:
-Use the following table to find the correct upload method based on the blob client.
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob](/rest/api/storageservices/put-blob#authorization)
+ - [Put Block](/rest/api/storageservices/put-block#authorization)
+- The package **@azure/storage-blob** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and TypeScript](storage-blob-typescript-get-started.md#set-up-your-project).
-|Client|Upload method|
-|--|--|
-|[BlobClient](/javascript/api/@azure/storage-blob/blobclient)|The SDK needs to know the blob type you want to upload to. Because BlobClient is the base class for the other Blob clients, it does not have upload methods. It is mostly useful for operations that are common to the child blob classes. For uploading, create specific blob clients directly or get specific blob clients from ContainerClient.|
-|[BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient)|This is the **most common upload client**:<br>* upload()<br>* stageBlock() and commitBlockList()|
-|[AppendBlobClient](/javascript/api/@azure/storage-blob/appendblobclient)|* create()<br>* append()|
-|[PageBlobClient](/javascript/api/@azure/storage-blob/pageblobclient)|* create()<br>* appendPages()|
+## Upload data to a block blob
-## <a name="upload-by-using-a-file-path"></a>Upload with BlockBlobClient by using a file path
+You can use any of the following methods to upload data to a block blob:
+
+- [upload](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-upload) (non-parallel uploading method)
+- [uploadData](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploaddata)
+- [uploadFile](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadfile) (only available in Node.js runtime)
+- [uploadStream](/javascript/api/@azure/storage-blob/blockblobclient#@azure-storage-blob-blockblobclient-uploadstream) (only available in Node.js runtime)
+
+Each of these methods can be called using a [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object.
+
+## Upload a block blob from a file path
The following example uploads a local file to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. The [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) object allows you to pass in your own metadata and [tags](storage-manage-find-blobs.md#blob-index-tags-and-data-management), used for indexing, at upload time: :::code language="typescript" source="~/azure-storage-snippets/blobs/howto/TypeScript/NodeJS-v12/dev-guide/src/blob-upload-from-local-file-path.ts" id="Snippet_UploadBlob" :::
-## <a name="upload-by-using-a-stream"></a>Upload with BlockBlobClient by using a Stream
+## Upload a block blob from a stream
The following example uploads a readable stream to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadStream [options](/javascript/api/@azure/storage-blob/blockblobuploadstreamoptions) to affect the upload:
const uploadOptions: BlockBlobUploadStreamOptions = {
await createBlobFromReadStream(containerClient, `my-text-file.txt`, readableStream, uploadOptions); ```
-## <a name="upload-by-using-a-binarydata-object"></a>Upload with BlockBlobClient by using a BinaryData object
+## Upload a block blob from a buffer
The following example uploads a Node.js buffer to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobParallelUpload [options](/javascript/api/@azure/storage-blob/blockblobparalleluploadoptions) to affect the upload:
const uploadOptions: BlockBlobParallelUploadOptions = {
createBlobFromBuffer(containerClient, `daisies.jpg`, buffer, uploadOptions) ```
-## <a name="upload-a-string"></a>Upload a string with BlockBlobClient
+## Upload a block blob from a string
The following example uploads a string to blob storage with the [BlockBlobClient](/javascript/api/@azure/storage-blob/blockblobclient) object. Pass in the BlockBlobUploadOptions [options](/javascript/api/@azure/storage-blob/blockblobuploadoptions) to affect the upload:
To learn more about uploading blobs using the Azure Blob Storage client library
The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for uploading blobs use the following REST API operations: - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block](/rest/api/storageservices/put-block) (REST API)
### Code samples
storage Storage Blob Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md
description: Learn how to upload a blob to your Azure Storage account using the
Previously updated : 03/28/2022 Last updated : 04/21/2023
# Upload a blob with .NET
-This article shows how to upload a blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).You can upload a blob, open a blob stream and write to that, or upload large blobs in blocks.
+This article shows how to upload a blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can upload data to a block blob from a file path, a stream, a binary object, or a text string. You can also open a blob stream and write to it, or upload large blobs in blocks.
-> [!NOTE]
-> Blobs in Azure Storage are organized into containers. Before you can upload a blob, you must first create a container. You can create a container in a storage account using methods from [BlobServiceClient](/dotnet/api/azure.storage.blobs.blobserviceclient) or [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient). To learn how to create a container in your storage account, see [Create a container in Azure Storage with .NET](storage-blob-container-create.md).
+## Prerequisites
-To upload a blob by using a file path, a stream, a binary object or a text string, use either of the following methods:
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an upload operation. To learn more, see the authorization guidance for the following REST API operations:
+ - [Put Blob](/rest/api/storageservices/put-blob#authorization)
+ - [Put Block](/rest/api/storageservices/put-block#authorization)
+- The package **Azure.Storage.Blobs** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project).
+
+## Upload data to a block blob
+
+You can use either of the following methods to upload data to a block blob:
- [Upload](/dotnet/api/azure.storage.blobs.blobclient.upload) - [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync)
-To open a stream in Blob Storage, and then write to that stream, use either of the following methods:
+To open a stream in Blob Storage and write to that stream, use either of the following methods:
- [OpenWrite](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.openwrite) - [OpenWriteAsync](/dotnet/api/azure.storage.blobs.specialized.blockblobclient.openwriteasync)
-## Upload by using a file path
-
-The following example uploads a blob by using a file path:
-
-```csharp
-public static async Task UploadFile
- (BlobContainerClient containerClient, string localFilePath)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlobClient blobClient = containerClient.GetBlobClient(fileName);
-
- await blobClient.UploadAsync(localFilePath, true);
-}
-```
-
-## Upload by using a Stream
-
-The following example uploads a blob by creating a [Stream](/dotnet/api/system.io.stream) object, and then uploading that stream.
-
-```csharp
-public static async Task UploadStream
- (BlobContainerClient containerClient, string localFilePath)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlobClient blobClient = containerClient.GetBlobClient(fileName);
+## Upload a block blob from a local file path
- FileStream fileStream = File.OpenRead(localFilePath);
- await blobClient.UploadAsync(fileStream, true);
- fileStream.Close();
-}
-```
+The following example uploads a block blob from a local file path:
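A minimal sketch of this pattern, assuming an existing `BlobContainerClient` (the method and parameter names are illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task UploadFromFileAsync(BlobContainerClient containerClient, string localFilePath)
{
    string fileName = Path.GetFileName(localFilePath);
    BlobClient blobClient = containerClient.GetBlobClient(fileName);

    // overwrite: true replaces the blob if it already exists
    await blobClient.UploadAsync(localFilePath, overwrite: true);
}
```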
-## Upload by using a BinaryData object
-The following example uploads a [BinaryData](/dotnet/api/system.binarydata) object.
+## Upload a block blob from a stream
-```csharp
-public static async Task UploadBinary
- (BlobContainerClient containerClient, string localFilePath)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlobClient blobClient = containerClient.GetBlobClient(fileName);
+The following example uploads a block blob by creating a [Stream](/dotnet/api/system.io.stream) object and uploading the stream.
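As a rough sketch, assuming an existing `BlobContainerClient` (names are illustrative; any readable `Stream` works in place of the `FileStream` shown here):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task UploadFromStreamAsync(BlobContainerClient containerClient, string localFilePath)
{
    string fileName = Path.GetFileName(localFilePath);
    BlobClient blobClient = containerClient.GetBlobClient(fileName);

    using (FileStream fileStream = File.OpenRead(localFilePath))
    {
        await blobClient.UploadAsync(fileStream, overwrite: true);
    }
}
```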
- FileStream fileStream = File.OpenRead(localFilePath);
- BinaryReader reader = new BinaryReader(fileStream);
- byte[] buffer = new byte[fileStream.Length];
+## Upload a block blob from a BinaryData object
- reader.Read(buffer, 0, buffer.Length);
+The following example uploads a block blob from a [BinaryData](/dotnet/api/system.binarydata) object.
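A minimal sketch, assuming an existing `BlobContainerClient` (reading the source file from disk is illustrative; any byte payload can be wrapped the same way):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task UploadFromBinaryDataAsync(BlobContainerClient containerClient, string localFilePath)
{
    string fileName = Path.GetFileName(localFilePath);
    BlobClient blobClient = containerClient.GetBlobClient(fileName);

    // Read the file into memory and wrap the bytes in a BinaryData object
    byte[] bytes = await File.ReadAllBytesAsync(localFilePath);
    BinaryData binaryData = BinaryData.FromBytes(bytes);

    await blobClient.UploadAsync(binaryData, overwrite: true);
}
```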
- BinaryData binaryData = new BinaryData(buffer);
- await blobClient.UploadAsync(binaryData, true);
+## Upload a block blob from a string
- fileStream.Close();
-}
-```
+The following example uploads a block blob from a string:
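A minimal sketch, assuming an existing `BlobContainerClient` (the blob name and string content are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;

public static async Task UploadFromStringAsync(BlobContainerClient containerClient, string blobName)
{
    BlobClient blobClient = containerClient.GetBlobClient(blobName);

    // BinaryData.FromString encodes the string as UTF-8
    await blobClient.UploadAsync(BinaryData.FromString("hello world"), overwrite: true);
}
```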
-## Upload a string
-
-The following example uploads a string:
-
-```csharp
-public static async Task UploadString
- (BlobContainerClient containerClient, string localFilePath)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlobClient blobClient = containerClient.GetBlobClient(fileName);
-
- await blobClient.UploadAsync(BinaryData.FromString("hello world"), overwrite: true);
-}
-```
## Upload with index tags Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data. You can perform this task by adding tags to a [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) instance, and then passing that instance into the [UploadAsync](/dotnet/api/azure.storage.blobs.blobclient.uploadasync) method.
-The following example uploads a blob with three index tags.
+The following example uploads a block blob with index tags:
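As a rough sketch of this pattern, assuming an existing `BlobContainerClient` (the tag keys and values are illustrative):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task UploadWithTagsAsync(BlobContainerClient containerClient, string localFilePath)
{
    string fileName = Path.GetFileName(localFilePath);
    BlobClient blobClient = containerClient.GetBlobClient(fileName);

    // Tags are indexed by the service and can later be used to filter blobs
    BlobUploadOptions options = new BlobUploadOptions
    {
        Tags = new Dictionary<string, string>
        {
            { "Sealed", "false" },
            { "Content", "image" },
            { "Date", "2020-04-20" }
        }
    };

    await blobClient.UploadAsync(localFilePath, options);
}
```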
-```csharp
-public static async Task UploadBlobWithTags
- (BlobContainerClient containerClient, string localFilePath)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlobClient blobClient = containerClient.GetBlobClient(fileName);
- BlobUploadOptions options = new BlobUploadOptions();
- options.Tags = new Dictionary<string, string>
- {
- { "Sealed", "false" },
- { "Content", "image" },
- { "Date", "2020-04-20" }
- };
+## Upload to a stream in Blob Storage
- await blobClient.UploadAsync(localFilePath, options);
-}
-```
+You can open a stream in Blob Storage and write to it. The following example creates a zip file in Blob Storage and writes files to it. Instead of building a zip file in local memory, only one file at a time is in memory.
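A sketch of this pattern, assuming an existing `BlobContainerClient` (deriving the zip name from the directory name via `DirectoryInfo.Name` is an illustrative choice; adjust to your own naming scheme):

```csharp
using System.IO;
using System.IO.Compression;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task UploadZipToStreamAsync(BlobContainerClient containerClient, string localDirectoryPath)
{
    string zipFileName = new DirectoryInfo(localDirectoryPath).Name + ".zip";
    BlockBlobClient blockBlobClient = containerClient.GetBlockBlobClient(zipFileName);

    using (Stream stream = await blockBlobClient.OpenWriteAsync(overwrite: true))
    using (ZipArchive zip = new ZipArchive(stream, ZipArchiveMode.Create, leaveOpen: false))
    {
        foreach (string filePath in Directory.EnumerateFiles(localDirectoryPath))
        {
            ZipArchiveEntry entry = zip.CreateEntry(Path.GetFileName(filePath), CompressionLevel.Optimal);
            using (FileStream fileStream = File.OpenRead(filePath))
            using (Stream entryStream = entry.Open())
            {
                // Only one file's contents is held in memory at a time
                await fileStream.CopyToAsync(entryStream);
            }
        }
    }
}
```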
-## Upload to a stream in Blob Storage
-You can open a stream in Blob Storage and write to that stream. The following example creates a zip file in Blob Storage and writes files to that file. Instead of building a zip file in local memory, only one file at a time is in memory.
-
-```csharp
-public static async Task UploadToStream
- (BlobContainerClient containerClient, string localDirectoryPath)
-{
- string zipFileName = Path.GetFileName
- (Path.GetDirectoryName(localDirectoryPath)) + ".zip";
-
- BlockBlobClient blockBlobClient =
-
- containerClient.GetBlockBlobClient(zipFileName);
-
- using (Stream stream = await blockBlobClient.OpenWriteAsync(true))
- {
- using (ZipArchive zip = new ZipArchive
- (stream, ZipArchiveMode.Create, leaveOpen: false))
- {
- foreach (var fileName in Directory.EnumerateFiles(localDirectoryPath))
- {
- using (var fileStream = File.OpenRead(fileName))
- {
- var entry = zip.CreateEntry(Path.GetFileName
- (fileName), CompressionLevel.Optimal);
- using (var innerFile = entry.Open())
- {
- await fileStream.CopyToAsync(innerFile);
- }
- }
- }
- }
- }
-
-}
-```
-
-## Upload by staging blocks and then committing them
-
-You can have greater control over how to divide our uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach if you want to enhance performance by uploading blocks in parallel.
-
-```csharp
-public static async Task UploadInBlocks
- (BlobContainerClient blobContainerClient, string localFilePath, int blockSize)
-{
- string fileName = Path.GetFileName(localFilePath);
- BlockBlobClient blobClient = blobContainerClient.GetBlockBlobClient(fileName);
-
- FileStream fileStream = File.OpenRead(localFilePath);
-
- ArrayList blockIDArrayList = new ArrayList();
-
- byte[] buffer;
-
- var bytesLeft = (fileStream.Length - fileStream.Position);
-
- while (bytesLeft > 0)
- {
- if (bytesLeft >= blockSize)
- {
- buffer = new byte[blockSize];
- await fileStream.ReadAsync(buffer, 0, blockSize);
- }
- else
- {
- buffer = new byte[bytesLeft];
- await fileStream.ReadAsync(buffer, 0, Convert.ToInt32(bytesLeft));
- bytesLeft = (fileStream.Length - fileStream.Position);
- }
-
- using (var stream = new MemoryStream(buffer))
- {
- string blockID = Convert.ToBase64String
- (Encoding.UTF8.GetBytes(Guid.NewGuid().ToString()));
-
- blockIDArrayList.Add(blockID);
-
- await blobClient.StageBlockAsync(blockID, stream);
- }
-
- bytesLeft = (fileStream.Length - fileStream.Position);
-
- }
-
- string[] blockIDArray = (string[])blockIDArrayList.ToArray(typeof(string));
-
- await blobClient.CommitBlockListAsync(blockIDArray);
-}
-```
+## Upload a block blob by staging blocks and committing
+
+You can have greater control over how to divide uploads into blocks by manually staging individual blocks of data. When all of the blocks that make up a blob are staged, you can commit them to Blob Storage. You can use this approach to enhance performance by uploading blocks in parallel.
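A sequential sketch of this pattern, assuming an existing `BlobContainerClient` (the block-size handling and ID scheme are illustrative; a parallel version could stage several blocks at once before committing):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task UploadInBlocksAsync(
    BlobContainerClient containerClient, string localFilePath, int blockSize)
{
    string fileName = Path.GetFileName(localFilePath);
    BlockBlobClient blockBlobClient = containerClient.GetBlockBlobClient(fileName);

    var blockIds = new List<string>();

    using (FileStream fileStream = File.OpenRead(localFilePath))
    {
        byte[] buffer = new byte[blockSize];
        int bytesRead;

        // Stage one block per chunk; block IDs must be Base64-encoded
        // and the same length for every block in the blob
        while ((bytesRead = await fileStream.ReadAsync(buffer, 0, blockSize)) > 0)
        {
            string blockId = Convert.ToBase64String(
                Encoding.UTF8.GetBytes(Guid.NewGuid().ToString("N")));
            blockIds.Add(blockId);

            using (var chunk = new MemoryStream(buffer, 0, bytesRead))
            {
                await blockBlobClient.StageBlockAsync(blockId, chunk);
            }
        }
    }

    // Committing the block list makes the staged blocks part of the blob
    await blockBlobClient.CommitBlockListAsync(blockIds);
}
```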
+ ## Resources
To learn more about uploading blobs using the Azure Blob Storage client library
The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for uploading blobs use the following REST API operations: - [Put Blob](/rest/api/storageservices/put-blob) (REST API)-- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API)
+- [Put Block](/rest/api/storageservices/put-block) (REST API)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/UploadBlob.cs)
### See also
+- [Performance tuning for uploads and downloads](storage-blobs-tune-upload-download.md)
- [Manage and find Azure Blob data with blob index tags](storage-manage-find-blobs.md) - [Use blob index tags to manage and find data on Azure Blob Storage](storage-blob-index-how-to.md)
storage Storage Blobs Tune Upload Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md
# Performance tuning for uploads and downloads with the Azure Storage client library for .NET
-When an application transfers data using the Azure Storage client library for .NET, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app will run in.
+When an application transfers data using the Azure Storage client library for .NET, there are several factors that can affect speed, memory usage, and even the success or failure of the request. To maximize performance and reliability for data transfers, it's important to be proactive in configuring client library transfer options based on the environment your app runs in.
This article walks through several considerations for tuning data transfer options, and the guidance applies to any API that accepts `StorageTransferOptions` as a parameter. When properly tuned, the client library can efficiently distribute data across multiple requests, which can result in improved operation speed, memory usage, and network stability.
The following properties of `StorageTransferOptions` can be tuned based on the n
### InitialTransferSize
-[InitialTransferSize](/dotnet/api/azure.storage.storagetransferoptions.initialtransfersize) is the size of the first range request in bytes. An HTTP range request is a partial request, with the size defined by `InitialTransferSize` in this case. Blobs smaller than this size will be transferred in a single request. Blobs larger than this size will continue being transferred in chunks of size `MaximumTransferSize`.
+[InitialTransferSize](/dotnet/api/azure.storage.storagetransferoptions.initialtransfersize) is the size of the first range request in bytes. An HTTP range request is a partial request, with the size defined by `InitialTransferSize` in this case. Blobs smaller than this size are transferred in a single request. Blobs larger than this size continue to be transferred in chunks of size `MaximumTransferSize`.
-It's important to note that the value you specify for `MaximumTransferSize` *does not* limit the value that you define for `InitialTransferSize`. `InitialTransferSize` defines a separate size limitation for an initial request to perform the entire operation at once, with no subtransfers. It's often the case that you'll want `InitialTransferSize` to be *at least* as large as the value you define for `MaximumTransferSize`, if not larger. Depending on the size of the data transfer, this approach can be more performant, as the transfer is completed with a single request and avoids the overhead of multiple requests.
+It's important to note that the value you specify for `MaximumTransferSize` *does not* limit the value that you define for `InitialTransferSize`. `InitialTransferSize` defines a separate size limitation for an initial request to perform the entire operation at once, with no subtransfers. It's often the case that you want `InitialTransferSize` to be *at least* as large as the value you define for `MaximumTransferSize`, if not larger. Depending on the size of the data transfer, this approach can be more performant, as the transfer is completed with a single request and avoids the overhead of multiple requests.
If you're unsure of what value is best for your situation, a safe option is to set `InitialTransferSize` to the same value used for `MaximumTransferSize`.
If you're unsure of what value is best for your situation, a safe option is to s
> When using a `BlobClient` object, a blob smaller than the `InitialTransferSize` is uploaded using [Put Blob](/rest/api/storageservices/put-blob), rather than [Put Block](/rest/api/storageservices/put-block). ### MaximumConcurrency
-[MaximumConcurrency](/dotnet/api/azure.storage.storagetransferoptions.maximumconcurrency) is the maximum number of workers that may be used in a parallel transfer. Currently, only asynchronous operations can parallelize transfers. Synchronous operations will ignore this value and work in sequence.
+[MaximumConcurrency](/dotnet/api/azure.storage.storagetransferoptions.maximumconcurrency) is the maximum number of workers that may be used in a parallel transfer. Currently, only asynchronous operations can parallelize transfers. Synchronous operations ignore this value and work in sequence.
The effectiveness of this value is subject to connection pool limits in .NET, which may restrict performance by default in certain scenarios. To learn more about connection pool limits in .NET, see [.NET Framework Connection Pool Limits and the new Azure SDK for .NET](https://devblogs.microsoft.com/azure-sdk/net-framework-connection-pool-limits/).
To keep data moving efficiently, the client libraries may not always reach the `
The client library includes overloads for the `Upload` and `UploadAsync` methods, which accept a [StorageTransferOptions](/dotnet/api/azure.storage.storagetransferoptions) instance as part of a [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) parameter. Similar overloads also exist for the `DownloadTo` and `DownloadToAsync` methods, using a [BlobDownloadToOptions](/dotnet/api/azure.storage.blobs.models.blobdownloadoptions) parameter.
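For downloads, a comparable sketch might look like the following, assuming a library version in which `DownloadToAsync` accepts a [BlobDownloadToOptions](/dotnet/api/azure.storage.blobs.models.blobdownloadoptions) parameter (the values shown are illustrative, not recommendations):

```csharp
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

public static async Task DownloadWithTransferOptionsAsync(BlobClient blobClient, string localFilePath)
{
    BlobDownloadToOptions downloadOptions = new BlobDownloadToOptions
    {
        TransferOptions = new StorageTransferOptions
        {
            // Size of the first range request
            InitialTransferSize = 8 * 1024 * 1024,

            // Up to 2 parallel workers for the remaining ranges
            MaximumConcurrency = 2,

            // Cap each subsequent range request at 4 MiB
            MaximumTransferSize = 4 * 1024 * 1024
        }
    };

    await blobClient.DownloadToAsync(localFilePath, downloadOptions);
}
```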
-The following code example shows how to define values for a `StorageTransferOptions` instance and pass these configuration options as a parameter to `UploadAsync`. The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you'll need to consider the specific needs of your app.
+The following code example shows how to define values for a `StorageTransferOptions` instance and pass these configuration options as a parameter to `UploadAsync`. The values provided in this sample aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
```csharp // Specify the StorageTransferOptions
BlobUploadOptions options = new BlobUploadOptions
await blobClient.UploadAsync(stream, options); ```
-In this example, we set the number of parallel transfer workers to 2, using the `MaximumConcurrency` property. This configuration opens up to 2 connections simultaneously, allowing the upload to happen in parallel. The initial HTTP range request will attempt to upload up to 8 MiB of data, as defined by the `InitialTransferSize` property. Note that `InitialTransferSize` only applies for uploads when [using a seekable stream](#initialtransfersize-on-upload). If the blob size is smaller than 8 MiB, only a single request is necessary to complete the operation. If the blob size is larger than 8 MiB, all subsequent transfer requests will have a maximum size of 4 MiB, which we set with the `MaximumTransferSize` property.
+In this example, we set the number of parallel transfer workers to 2, using the `MaximumConcurrency` property. This configuration opens up to two connections simultaneously, allowing the upload to happen in parallel. The initial HTTP range request attempts to upload up to 8 MiB of data, as defined by the `InitialTransferSize` property. Note that `InitialTransferSize` only applies for uploads when [using a seekable stream](#initialtransfersize-on-upload). If the blob size is smaller than 8 MiB, only a single request is necessary to complete the operation. If the blob size is larger than 8 MiB, all subsequent transfer requests have a maximum size of 4 MiB, which we set with the `MaximumTransferSize` property.
## Performance considerations for uploads
-During an upload, the Storage client libraries will split a given upload stream into multiple subuploads based on the values defined in the `StorageTransferOptions` instance. Each subupload has its own dedicated call to the REST operation. For a `BlobClient` object or `BlockBlobClient` object, this operation is [Put Block](/rest/api/storageservices/put-block). For a `DataLakeFileClient` object, this operation is [Append Data](/rest/api/storageservices/datalakestoragegen2/path/update). The Storage client library manages these REST operations in parallel (depending on transfer options) to complete the full upload.
+During an upload, the Storage client libraries split a given upload stream into multiple subuploads based on the values defined in the `StorageTransferOptions` instance. Each subupload has its own dedicated call to the REST operation. For a `BlobClient` object or `BlockBlobClient` object, this operation is [Put Block](/rest/api/storageservices/put-block). For a `DataLakeFileClient` object, this operation is [Append Data](/rest/api/storageservices/datalakestoragegen2/path/update). The Storage client library manages these REST operations in parallel (depending on transfer options) to complete the full upload.
-Depending on whether the upload stream is seekable or non-seekable, the client library will handle buffering and `InitialTransferSize` differently, as described in the following sections. A seekable stream is a stream that supports querying and modifying the current position within a stream. To learn more about streams in .NET, see the [Stream class](/dotnet/api/system.io.stream#remarks) reference.
+Depending on whether the upload stream is seekable or non-seekable, the client library handles buffering and `InitialTransferSize` differently, as described in the following sections. A seekable stream is a stream that supports querying and modifying the current position within a stream. To learn more about streams in .NET, see the [Stream class](/dotnet/api/system.io.stream#remarks) reference.
> [!NOTE] > Block blobs have a maximum block count of 50,000 blocks. The maximum size of your block blob, then, is 50,000 times `MaximumTransferSize`. ### Buffering during uploads
-The Storage REST layer doesn't support picking up a REST upload operation where you left off; individual transfers are either completed or lost. To ensure resiliency for non-seekable stream uploads, the Storage client libraries buffer data for each individual REST call before starting the upload. In addition to network speed limitations, this buffering behavior is a reason to consider a smaller value for `MaximumTransferSize`, even when uploading in sequence. Decreasing the value of `MaximumTransferSize` decreases the maximum amount of data that will be buffered on each request and each retry of a failed request. If you're experiencing frequent timeouts during data transfers of a certain size, reducing the value of `MaximumTransferSize` will reduce the buffering time, and may result in better performance.
+The Storage REST layer doesn't support picking up a REST upload operation where you left off; individual transfers are either completed or lost. To ensure resiliency for non-seekable stream uploads, the Storage client libraries buffer data for each individual REST call before starting the upload. In addition to network speed limitations, this buffering behavior is a reason to consider a smaller value for `MaximumTransferSize`, even when uploading in sequence. Decreasing the value of `MaximumTransferSize` decreases the maximum amount of data that is buffered on each request and each retry of a failed request. If you're experiencing frequent timeouts during data transfers of a certain size, reducing the value of `MaximumTransferSize` reduces the buffering time, and may result in better performance.
-Another scenario where buffering occurs is when you're uploading data with parallel REST calls to maximize network throughput. The client libraries need sources they can read from in parallel, and since streams are sequential, the Storage client libraries will buffer the data for each individual REST call before starting the upload. This buffering behavior occurs even if the provided stream is seekable.
+Another scenario where buffering occurs is when you're uploading data with parallel REST calls to maximize network throughput. The client libraries need sources they can read from in parallel, and since streams are sequential, the Storage client libraries buffer the data for each individual REST call before starting the upload. This buffering behavior occurs even if the provided stream is seekable.
To avoid buffering during an asynchronous upload call, you must provide a seekable stream and set `MaximumConcurrency` to 1. While this strategy should work in most situations, it's still possible for buffering to occur if your code is using other client library features that require buffering. ### InitialTransferSize on upload
-When a seekable stream is provided for upload, the stream length will be checked against the value of `InitialTransferSize`. If the stream length is less than this value, the entire stream will be uploaded as a single REST call, regardless of other `StorageTransferOptions` values. Otherwise, the upload will be done in multiple parts as described earlier. `InitialTransferSize` has no effect on a non-seekable stream and will be ignored.
+When a seekable stream is provided for upload, the stream length is checked against the value of `InitialTransferSize`. If the stream length is less than this value, the entire stream is uploaded as a single REST call, regardless of other `StorageTransferOptions` values. Otherwise, the upload is done in multiple parts as described earlier. `InitialTransferSize` has no effect on a non-seekable stream and is ignored.
## Performance considerations for downloads
-During a download, the Storage client libraries will split a given download request into multiple subdownloads based on the values defined in the `StorageTransferOptions` instance. Each subdownload has its own dedicated call to the REST operation. Depending on transfer options, the client libraries manage these REST operations in parallel to complete the full download.
+During a download, the Storage client libraries split a given download request into multiple subdownloads based on the values defined in the `StorageTransferOptions` instance. Each subdownload has its own dedicated call to the REST operation. Depending on transfer options, the client libraries manage these REST operations in parallel to complete the full download.
### Buffering during downloads
Receiving multiple HTTP responses simultaneously with body contents has implicat
### InitialTransferSize on download
-During a download, the Storage client libraries will make one download range request using `InitialTransferSize` before doing anything else. During this initial download request, the client libraries will know the total size of the resource. If the initial request successfully downloaded all of the content, the operation is complete. Otherwise, the client libraries will continue to make range requests up to `MaximumTransferSize` until the full download is complete.
+During a download, the Storage client libraries make one download range request using `InitialTransferSize` before doing anything else. During this initial download request, the client libraries know the total size of the resource. If the initial request successfully downloaded all of the content, the operation is complete. Otherwise, the client libraries continue to make range requests up to `MaximumTransferSize` until the full download is complete.
## Next steps
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
There are several ways to enable Defender for Storage on subscriptions:
- [Azure portal](#azure-portal) - [Azure built-in policy](#enable-and-configure-at-scale-with-an-azure-built-in-policy) - IaC templates, including [Bicep](#bicep-template) and [ARM](#arm-template)-- [REST API](#rest-api)
+- [REST API](#enable-and-configure-with-rest-api)
> [!TIP] > You can [override or set custom configuration settings](#override-defender-for-storage-subscription-level-settings) for specific storage accounts within protected subscriptions.
If you want to turn off the **On-upload malware scanning** or **Sensitive data t
To disable the entire Defender plan, set the `pricingTier` property value to `Free` and remove the `subPlan` and `extensions` properties.
-Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
+Learn more in the [ARM template reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
### Enable and configure with REST API
You can enable and configure Microsoft Defender for Storage on specific storage
- [Azure portal](#azure-portal-1) - IaC templates, including [Bicep](#bicep-template-1) and [ARM](#arm-template-1)-- [REST API](#rest-api-1)
+- [REST API](#rest-api)
The steps below include instructions on how to set up logging and an Event Grid for the Malware Scanning.
Microsoft Defender for Storage is now enabled on this storage account.
> To configure **On-upload malware scanning** settings, such as monthly cap, select **Settings** after Defender for Storage was enabled. > :::image type="content" source="../../defender-for-cloud/media/azure-defender-storage-configure/malware-scan-capping.png" alt-text="Screenshot showing where to configure a monthly cap for Malware Scanning.":::
-If you want to disable Defender for Storage on the storage account or disable one of the features (On-upload malware scanning or Sensitive data threat detection), selectΓÇ»**Settings**, edit the settings, and select **Save**.
+If you want to disable Defender for Storage on the storage account or disable one of the features (On-upload malware scanning or Sensitive data threat detection), select **Settings**, edit the settings, and select **Save**.
### Enable and configure with IaC templates
To enable and configure Microsoft Defender for Storage at the storage account le
```json {
- "type": "Microsoft.Storage/storageAccounts/providers/DefenderForStorageSettings",
+ "type": "Microsoft.Security/DefenderForStorageSettings",
"apiVersion": "2022-12-01-preview",
- "name": "[concat(parameters('accountName'), '/Microsoft.Security/current')]",
+ "name": "current",
"properties": { "isEnabled": true, "malwareScanning": {
To enable and configure Microsoft Defender for Storage at the storage account le
"isEnabled": true }, "overrideSubscriptionLevelSettings": true
- }
+ },
+ "scope": "[resourceId('Microsoft.Storage/storageAccounts', parameters('StorageAccountName'))]"
} ```
-To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value `-1`. The default limit is set at 5,000 GB.
-
-If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under Sensitive data discovery.
-
-To disable the entire Defender plan, set the `pricingTier` property value to `Free` and remove the `subPlan` and `extensions` properties.
+To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value `-1`. The default limit is set at 5,000 GB.
-Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
+If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
+To disable the entire Defender plan for the storage account, set the `isEnabled` property value to `false` and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
#### Bicep template To enable and configure Microsoft Defender for Storage at the storage account level using [Bicep](../../azure-resource-manager/bicep/overview.md), add the following to your Bicep template: ```bicep
-param accountName string
+resource storageAccount 'Microsoft.Storage/storageAccounts@2021-04-01' ...
-resource accountName_current 'Microsoft.Storage/storageAccounts/providers/DefenderForStorageSettings@2022-12-01-preview' = {
-  name: '${accountName}/Microsoft.Security/current'
+resource defenderForStorageSettings 'Microsoft.Security/DefenderForStorageSettings@2022-12-01-preview' = {
+ name: 'current'
+ scope: storageAccount
  properties: {
    isEnabled: true
    malwareScanning: {
resource accountName_current 'Microsoft.Storage/storageAccounts/providers/Defend
} ```
-To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value `-1`. The default limit is set at 5,000 GB.
+To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value `-1`. The default limit is set at 5,000 GB.
-If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under Sensitive data discovery.
+If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
-To disable the entire Defender plan, set the `pricingTier` property value to `Free` and remove the `subPlan` and `extensions` properties.
+To disable the entire Defender plan for the storage account, set the `isEnabled` property value to `false` and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-bicep&source=docs).
To enable and configure Microsoft Defender for Storage at the storage account level using REST API, create a PUT request with this endpoint. Replace the `subscriptionId`, `resourceGroupName`, and `accountName` in the endpoint URL with your own Azure subscription ID, resource group, and storage account names accordingly. ```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/defenderForStorageSettings/current?api-version=2022-12-01-preview
``` And add the following request body:
"sensitiveDataDiscovery": { "isEnabled": true },
- "overrideSubscriptionLevelSettings": false
+ "overrideSubscriptionLevelSettings": true
} } ```
-To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value `-1`. The default limit is set at 5,000 GB.
+To modify the monthly threshold for malware scanning in your storage accounts, adjust the `capGBPerMonth` parameter to your preferred value. This parameter caps the amount of data that can be scanned for malware each month, per storage account. To permit unlimited scanning, assign the value `-1`. The default limit is 5,000 GB.
-If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under Sensitive data discovery.
+If you want to turn off the **On-upload malware scanning** or **Sensitive data threat detection** features, you can change the `isEnabled` value to `false` under the `malwareScanning` or `sensitiveDataDiscovery` properties sections.
-To disable the entire Defender plan, set the `pricingTier` property value to `Free` and remove the `subPlan` and `extensions` properties.
+To disable the entire Defender plan for the storage account, set the `isEnabled` property value to `false` and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties.
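If you prefer to issue the PUT from a shell rather than a raw REST client, `Invoke-AzRestMethod` can wrap the same call. A sketch under the same placeholder names as the request above; the payload mirrors the request body shown in this section.
```powershell
# Sketch: enable Defender for Storage on one account via the same REST endpoint.
# The path placeholders are hypothetical; replace them with your own IDs.
$path = "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}" +
        "/providers/Microsoft.Storage/storageAccounts/{accountName}" +
        "/providers/Microsoft.Security/defenderForStorageSettings/current?api-version=2022-12-01-preview"

$body = @{
    properties = @{
        isEnabled = $true
        malwareScanning = @{ onUpload = @{ isEnabled = $true; capGBPerMonth = 5000 } }
        sensitiveDataDiscovery = @{ isEnabled = $true }
        overrideSubscriptionLevelSettings = $true
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```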
Learn more about [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update) in HTTP, Java, Go, and JavaScript.
Request URL:
```http PUT
-https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resourcegroup-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
-/providers/Microsoft.Security/antiMalwareSettings/current/providers/Microsoft.Insights/
-diagnosticSettings/service?api-version=2021-05-01-preview
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current/providers/Microsoft.Insights/diagnosticSettings/service?api-version=2021-05-01-preview
``` Request Body:
```json { "properties": {
- "workspaceId": "/subscriptions/704601a1-0ac4-4d5d-aecd-322835fbde2f/resourcegroups/demorg/providers/microsoft.operationalinsights/workspaces/malwarescanningscanresultworkspace",
+ "workspaceId": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroup}/providers/microsoft.operationalinsights/workspaces/{workspaceName}",
"logs": [ {
- "categoryGroup": "allLogs",
+ "category": "ScanResults",
"enabled": true, "retentionPolicy": { "enabled": true,
Request URL:
```http PUT
-https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resourcegroup-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>
-/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
+https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
``` Request Body:
"isEnabled": true, "capGBPerMonth": 5000 },
- "scanResultsEventGridTopicResourceId": "/subscriptions/704601a1-0ac4-4d5d-aecd-322835fbde2f/resourceGroups/DemoRG/providers/Microsoft.EventGrid/topics/ScanResultsEGCustomTopic"
+ "scanResultsEventGridTopicResourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.EventGrid/topics/{EventGridTopicName}"
}, "sensitiveDataDiscovery": { "isEnabled": true
To override Defender for Storage subscription-level settings to configure settin
```http PUT
- PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
+ https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview
``` Request Body:
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 10/04/2022 Last updated : 04/20/2023
By default, storage accounts accept connections from clients on any network. You
## Grant access from a virtual network
-You can configure storage accounts to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant.
+You can configure storage accounts to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription or a different subscription, including those belonging to a different Azure Active Directory tenant. With [cross-region service endpoints](#azure-storage-cross-region-service-endpoints), the allowed subnets can also be in different regions from the storage account.
You can enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in a VNet. Clients granted access via these network rules must continue to meet the authorization requirements of the storage account to access the data.
Storage account and the virtual networks granted access may be in different subs
> [!NOTE] > Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal.
-### Available virtual network regions
+### Azure Storage cross-region service endpoints
-By default, service endpoints work between virtual networks and service instances in the same Azure region. When using service endpoints with Azure Storage, service endpoints also work between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md). If you want to use a service endpoint to grant access to virtual networks in other regions, you must register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. This capability is currently in public preview.
+Cross-region service endpoints for Azure Storage became generally available in April 2023. They work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, IP network rules that permit traffic from those subnets will no longer have an effect on storage accounts.
-Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
+Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
When planning for disaster recovery during a regional outage, you should create the VNets in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
-### Enabling access to virtual networks in other regions (preview)
-
->
> [!IMPORTANT]
-> This capability is currently in PREVIEW.
+> Local and cross-region service endpoints cannot coexist on the same subnet.
>
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in the subscription that has the _AllowedGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
-
-> [!NOTE]
-> For updating the existing service endpoints to access a storage account in another region, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update&preserve-view=true) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update&preserve-view=true) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
--
-#### [Portal](#tab/azure-portal)
-
-During the preview you must use either PowerShell or the Azure CLI to enable this feature.
-
-#### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window.
-
-1. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-2. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the virtual network.
-
- ```powershell
- $context = Get-AzSubscription -SubscriptionId <subscription-id>
- Set-AzContext $context
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-3. Register the `AllowGlobalTagsForStorage` feature by using the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
-
- ```powershell
- Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
-
-4. To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
-
- ```powershell
- Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
- ```
-
-#### [Azure CLI](#tab/azure-cli)
-
-1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
-
-2. If your identity is associated with more than one subscription, then set your active subscription to subscription of the virtual network.
-
- ```azurecli-interactive
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-3. Register the `AllowGlobalTagsForStorage` feature by using the [az feature register](/cli/azure/feature#az-feature-register) command.
-
- ```azurecli
- az feature register --namespace Microsoft.Network --name AllowGlobalTagsForStorage
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
-
-4. To verify that the registration is complete, use the [az feature](/cli/azure/feature#az-feature-show) command.
-
- ```azurecli
- az feature show --namespace Microsoft.Network --name AllowGlobalTagsForStorage
- ```
--
+> To replace existing service endpoints with cross-region ones, delete the existing **Microsoft.Storage** endpoints and recreate them as cross-region endpoints (**Microsoft.Storage.Global**).
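A sketch of that swap with Az PowerShell follows; the virtual network and subnet names are hypothetical examples. Because `Set-AzVirtualNetworkSubnetConfig` replaces the subnet's configuration, supplying only the new endpoint removes the old **Microsoft.Storage** entry in the same update.
```powershell
# Sketch: replace the local Microsoft.Storage endpoint with the cross-region one.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mysubnet"

$vnet | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `
    -AddressPrefix $subnet.AddressPrefix `
    -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
```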
### Managing virtual network rules
-You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
+You can manage virtual network rules for storage accounts through the Azure portal, PowerShell, or CLIv2.
> [!NOTE]
-> If you registered the `AllowGlobalTagsForStorage` feature, and you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the storage account or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.
+> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
#### [Portal](#tab/azure-portal)
You can manage virtual network rules for storage accounts through the Azure port
> [!NOTE] > If a service endpoint for Azure Storage wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. >
- > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use , PowerShell, CLI or REST APIs.
- >
- > Even if you registered the `AllowGlobalTagsForStorageOnly` feature, subnets in regions other than the region of the storage account or its paired region aren't shown for selection. If you want to enable access to your storage account from a virtual network/subnet in a different region, use the instructions in the PowerShell or Azure CLI tabs.
+ > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, Azure CLI or REST APIs.
5. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**.
You can manage virtual network rules for storage accounts through the Azure port
3. Enable service endpoint for Azure Storage on an existing virtual network and subnet. ```powershell
- Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+ Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` 4. Add a network rule for a virtual network and subnet.
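For step 4, one way to add the rule with Az PowerShell looks like the sketch below; it isn't the article's exact snippet, and the storage account name is a hypothetical example.
```powershell
# Sketch: grant the subnet access to the storage account.
$subnet = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" |
    Get-AzVirtualNetworkSubnetConfig -Name "mysubnet"

Add-AzStorageAccountNetworkRule -ResourceGroupName "myresourcegroup" `
    -Name "mystorageaccount" -VirtualNetworkResourceId $subnet.Id
```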
You can manage virtual network rules for storage accounts through the Azure port
3. Enable service endpoint for Azure Storage on an existing virtual network and subnet. ```azurecli
- az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+ az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
``` 4. Add a network rule for a virtual network and subnet.
Resources of some services, **when registered in your subscription**, can access
| Azure DevTest Labs | Microsoft.DevTestLab | Custom image creation and artifact installation. [Learn more](../../devtest-labs/devtest-lab-overview.md). | | Azure Event Grid | Microsoft.EventGrid | Enable Blob Storage event publishing and allow Event Grid to publish to storage queues. Learn about [blob storage events](../../event-grid/overview.md#event-sources) and [publishing to queues](../../event-grid/event-handlers.md). | | Azure Event Hubs | Microsoft.EventHub | Archive data with Event Hubs Capture. [Learn More](../../event-hubs/event-hubs-capture-overview.md). |
-| Azure File Sync | Microsoft.StorageSync | Enables you to transform your on-prem file server to a cache for Azure File shares. Allowing for multi-site sync, fast disaster-recovery, and cloud-side backup. [Learn more](../file-sync/file-sync-planning.md) |
+| Azure File Sync | Microsoft.StorageSync | Enables you to transform your on-premises file server into a cache for Azure file shares, allowing for multi-site sync, fast disaster recovery, and cloud-side backup. [Learn more](../file-sync/file-sync-planning.md) |
| Azure HDInsight | Microsoft.HDInsight | Provision the initial contents of the default file system for a new HDInsight cluster. [Learn more](../../hdinsight/hdinsight-hadoop-use-blob-storage.md). | | Azure Import Export | Microsoft.ImportExport | Enables import of data to Azure Storage or export of data from Azure Storage using the Azure Storage Import/Export service. [Learn more](../../import-export/storage-import-export-service.md). | | Azure Monitor | Microsoft.Insights | Allows writing of monitoring data to a secured storage account, including resource logs, Azure Active Directory sign-in and audit logs, and Microsoft Intune logs. [Learn more](../../azure-monitor/roles-permissions-security.md). |
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
When you create a private endpoint, you must specify the storage account and the
You need a separate private endpoint for each storage resource that you need to access, namely [Blobs](../blobs/storage-blobs-overview.md), [Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md), [Files](../files/storage-files-introduction.md), [Queues](../queues/storage-queues-introduction.md), [Tables](../tables/table-storage-overview.md), or [Static Websites](../blobs/storage-blob-static-website.md). On the private endpoint, these storage services are defined as the **target sub-resource** of the associated storage account.
-If you create a private endpoint for the Data Lake Storage Gen2 storage resource, then you should also create one for the Blob storage resource. That's because operations that target the Data Lake Storage Gen2 endpoint might be redirected to the Blob endpoint. By creating a private endpoint for both resources, you ensure that operations can complete successfully.
+
+If you create a private endpoint for the Data Lake Storage Gen2 storage resource, then you should also create one for the Blob Storage resource. That's because operations that target the Data Lake Storage Gen2 endpoint might be redirected to the Blob endpoint. Similarly, if you add a private endpoint only for Blob Storage and not for Data Lake Storage Gen2, some operations (such as Manage ACL, Create Directory, and Delete Directory) will fail, because the Gen2 APIs require a DFS private endpoint. By creating a private endpoint for both resources, you ensure that all operations can complete successfully.
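A sketch of creating the pair with Az PowerShell; all resource names below are hypothetical, and the essential detail is the two group IDs, `blob` and `dfs`.
```powershell
# Sketch: create matching blob and dfs private endpoints for one Data Lake Storage Gen2 account.
# All resource names are hypothetical.
$vnet    = Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet"
$subnet  = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "mysubnet"
$account = Get-AzStorageAccount -ResourceGroupName "myresourcegroup" -Name "mystorageaccount"

foreach ($groupId in "blob", "dfs") {
    $connection = New-AzPrivateLinkServiceConnection -Name "pl-$groupId" `
        -PrivateLinkServiceId $account.Id -GroupId $groupId
    New-AzPrivateEndpoint -ResourceGroupName "myresourcegroup" -Name "pe-$groupId" `
        -Location $account.Location -Subnet $subnet -PrivateLinkServiceConnection $connection
}
```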
> [!TIP] > Create a separate private endpoint for the secondary instance of the storage service for better read performance on RA-GRS accounts.
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 02/22/2023 Last updated : 04/24/2023
In your virtual network, enable the Storage service endpoint on your subnet. Thi
# [Portal](#tab/azure-portal) 1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
1. Select any policies you like and the subnet you'll deploy your Elastic SAN into, then select **Add**. :::image type="content" source="media/elastic-san-create/elastic-san-service-endpoint.png" alt-text="Screenshot of the virtual network service endpoint page, adding the storage service endpoint." lightbox="media/elastic-san-create/elastic-san-service-endpoint.png":::
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Na
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
```
az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "my
Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Enabling access to virtual networks in other regions (preview)](elastic-san-networking.md#enabling-access-to-virtual-networks-in-other-regions-preview).
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
# [Portal](#tab/azure-portal)
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
description: Learn how to connect to an Azure Elastic SAN Preview volume from a
Previously updated : 02/22/2023 Last updated : 04/24/2023
In your virtual network, enable the Storage service endpoint on your subnet. Thi
# [Portal](#tab/azure-portal) 1. Navigate to your virtual network and select **Service Endpoints**.
-1. Select **+ Add** and for **Service** select **Microsoft.Storage**.
+1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**.
1. Select any policies you like and the subnet you'll deploy your Elastic SAN into, then select **Add**. :::image type="content" source="media/elastic-san-create/elastic-san-service-endpoint.png" alt-text="Screenshot of the virtual network service endpoint page, adding the storage service endpoint." lightbox="media/elastic-san-create/elastic-san-service-endpoint.png":::
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Na
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
```
az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "my
Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks.
-By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Enabling access to virtual networks in other regions (preview)](elastic-san-networking.md#enabling-access-to-virtual-networks-in-other-regions-preview).
+By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints).
# [Portal](#tab/azure-portal)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
description: An overview of Azure Elastic SAN Preview, a service that enables yo
Previously updated : 02/22/2023 Last updated : 04/24/2023
In your virtual network, enable the Storage service endpoint on your subnet. Thi
> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. # [Portal](#tab/azure-portal)- 1. Navigate to your virtual network and select **Service Endpoints**. 1. Select **+ Add** and for **Service** select **Microsoft.Storage**. 1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**.
$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Na
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName
-$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
```
-## Available virtual network regions
-
-By default, service endpoints work between virtual networks and service instances in the same Azure region. When using service endpoints with Azure Storage, service endpoints also work between virtual networks and service instances in a [paired region](../../availability-zones/cross-region-replication-azure.md). If you want to use a service endpoint to grant access to virtual networks in other regions, you must register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. This capability is currently in public preview.
-
-Service endpoints allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs.
-
-## Enabling access to virtual networks in other regions Preview
-
->
-> [!IMPORTANT]
-> This capability is currently in PREVIEW.
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network.
-
-> [!NOTE]
-> For updating the existing service endpoints to access a volume group in another region, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to go back to the old configuration, perform an [update subnet](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) operation after deregistering the subscription with the `AllowGlobalTagsForStorage` feature.
--
-### [Portal](#tab/azure-portal)
-
-During the preview you must use either PowerShell or the Azure CLI to enable this feature.
-
-### [PowerShell](#tab/azure-powershell)
--- Open a Windows PowerShell command window.--- Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+### Available virtual network regions
- ```powershell
- Connect-AzAccount
- ```
+Service endpoints for Azure Storage work between virtual networks and service instances in any region.
-- If your identity is associated with more than one subscription, then set your active subscription to the subscription of the virtual network.
+Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance.
- ```powershell
- $context = Get-AzSubscription -SubscriptionId <subscription-id>
- Set-AzContext $context
- ```
+When planning for disaster recovery during a regional outage, you should create the VNets in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts.
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+#### Azure Storage cross-region service endpoints
-- Register the `AllowGlobalTagsForStorage` feature by using the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
+Cross-region service endpoints for Azure Storage became generally available in April 2023. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account. Instead, all traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, IP network rules that permit traffic from those subnets will no longer have an effect on storage accounts.
- ```powershell
- Register-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Verify that the feature is registered before using it.
--- To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.-
- ```powershell
- Get-AzProviderFeature -ProviderNamespace Microsoft.Network -FeatureName AllowGlobalTagsForStorage
- ```
-
-### [Azure CLI](#tab/azure-cli)
--- Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.--- If your identity is associated with more than one subscription, then set your active subscription to subscription of the virtual network.-
- ```azurecli-interactive
- az account set --subscription <subscription-id>
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
--- Register the `AllowGlobalTagsForStorage` feature by using the [az feature register](/cli/azure/feature#az-feature-register) command.-
- ```azurecli
- az feature register --namespace Microsoft.Network --name AllowGlobalTagsForStorage
- ```
-
- > [!NOTE]
- > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
--- To verify that the registration is complete, use the [az feature](/cli/azure/feature#az-feature-show) command.-
- ```azurecli
- az feature show --namespace Microsoft.Network --name AllowGlobalTagsForStorage
- ```
--
+To use cross-region service endpoints, you might need to delete existing **Microsoft.Storage** endpoints and recreate them as cross-region endpoints (**Microsoft.Storage.Global**).
## Managing virtual network rules
-You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
+You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
> [!NOTE]
-> If you registered the `AllowGlobalTagsForStorage` feature, and you want to enable access to your volumes from a virtual network/subnet in another Azure AD tenant, or in a region other than the region of the SAN or its paired region, then you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants or in regions other than the region of the storage account or its paired region, and hence cannot be used to configure access rules for virtual networks in other regions.
+> If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.
### [Portal](#tab/azure-portal)
You can manage virtual network rules for volume groups through the Azure portal,
- Enable service endpoint for Azure Storage on an existing virtual network and subnet. ```azurepowershell
- Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage" | Set-AzVirtualNetwork
+ Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" -AddressPrefix "10.0.0.0/24" -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork
``` - Add a network rule for a virtual network and subnet.
You can manage virtual network rules for volume groups through the Azure portal,
- Enable service endpoint for Azure Storage on an existing virtual network and subnet. ```azurecli
- az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage"
+ az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global"
``` - Add a network rule for a virtual network and subnet.
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md
The example assumes that you have one Data Lake Storage Gen2 account named `stor
![Screenshot of a Data Lake Storage Gen2 storage account.](./media/synapse-file-mount-api/gen2-storage-account.png)
-To mount the container called `mycontainer`, `mssparkutils` first needs to check whether you have the permission to access the container. Currently, Azure Synapse Analytics supports three authentication methods for the trigger mount operation: `linkedService`, `accountKey`, and `sastoken`.
+To mount the container called `mycontainer`, `mssparkutils` first needs to check whether you have permission to access the container. Currently, Azure Synapse Analytics supports three authentication methods to trigger the mount operation: `LinkedService`, `accountKey`, and `sastoken`.
### Mount by using a linked service (recommended)
After you create linked service successfully, you can easily mount the container
mssparkutils.fs.mount( "abfss://mycontainer@<accountname>.dfs.core.windows.net", "/test",
- {"linkedService":"mygen2account"}
+ {"LinkedService":"mygen2account"}
) ```
If you mounted a Blob Storage account and want to access it by using `mssparkuti
mssparkutils.fs.mount( "wasbs://mycontainer@<blobStorageAccountName>.blob.core.windows.net", "/test",
- Map("linkedService" -> "myblobstorageaccount")
+ Map("LinkedService" -> "myblobstorageaccount")
) ```
synapse-analytics Develop Storage Files Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md
To query a file located in Azure Storage, your serverless SQL pool end point nee
- Server-level CREDENTIAL is used for ad-hoc queries executed using `OPENROWSET` function. Credential name must match the storage URL. - DATABASE SCOPED CREDENTIAL is used for external tables. External table references `DATA SOURCE` with the credential that should be used to access storage.
-To allow a user to create or drop a credential, admin can GRANT/DENY ALTER ANY CREDENTIAL permission to a user:
+To allow a user to create or drop a server-level credential, an admin can GRANT the ALTER ANY CREDENTIAL permission to the user:
```sql GRANT ALTER ANY CREDENTIAL TO [user_name]; ```
+To allow a user to create or drop a database scoped credential, an admin can GRANT CONTROL permission on the database to the user:
+
+```sql
+GRANT CONTROL ON DATABASE::[database_name] TO [user_name];
+```
+ Database users who access external storage must have permission to use credentials. ### Grant permissions to use credential
-To use the credential, a user must have `REFERENCES` permission on a specific credential. To grant a `REFERENCES` permission ON a storage_credential for a specific_user, execute:
+To use the credential, a user must have `REFERENCES` permission on a specific credential.
+
+To grant a `REFERENCES` permission ON a server-level credential for a specific_user, execute:
```sql
-GRANT REFERENCES ON CREDENTIAL::[storage_credential] TO [specific_user];
+GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [specific_user];
```
-## Server-scoped credential
+To grant a `REFERENCES` permission ON a DATABASE SCOPED CREDENTIAL for a specific_user, execute:
-Server-scoped credentials are used when SQL login calls `OPENROWSET` function without `DATA_SOURCE` to read files on some storage account. The name of server-scoped credential **must** match the base URL of Azure storage (optionally followed by a container name). A credential is added by running [CREATE CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). You'll need to provide a CREDENTIAL NAME argument.
+```sql
+GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [specific_user];
+```
+
+## Server-level credential
+
+Server-level credentials are used when a SQL login calls the `OPENROWSET` function without `DATA_SOURCE` to read files on a storage account. The name of the server-level credential **must** match the base URL of Azure storage (optionally followed by a container name). A credential is added by running [CREATE CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true). You'll need to provide a CREDENTIAL NAME argument.
> [!NOTE] > The `FOR CRYPTOGRAPHIC PROVIDER` argument is not supported.
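As an end-to-end sketch, the snippet below runs the T-SQL through `Invoke-Sqlcmd` from the SqlServer PowerShell module; the workspace endpoint, storage URL, and SAS secret are hypothetical placeholders, and the credential name matches the storage URL as required.
```powershell
# Sketch only: create a server-level credential in master whose name matches the storage URL.
# Endpoint, URL, and SAS token are hypothetical; supply the SAS without a leading '?'.
$createCredential = @"
CREATE CREDENTIAL [https://mystorageaccount.dfs.core.windows.net/mycontainer]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your-sas-token>';
"@

Invoke-Sqlcmd -ServerInstance "myworkspace-ondemand.sql.azuresynapse.net" `
    -Database "master" -Query $createCredential -Credential (Get-Credential)
```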
Server-level CREDENTIAL name must match the full path to the storage account (an
| Azure Data Lake Storage Gen1 | https | <storage_account>.azuredatalakestore.net/webhdfs/v1 | | Azure Data Lake Storage Gen2 | https | <storage_account>.dfs.core.windows.net |
-Server-scoped credentials enable access to Azure storage using the following authentication types:
+Server-level credentials enable access to Azure storage using the following authentication types:
### [User Identity](#tab/user-identity)
Optionally, you can use just the base URL of the storage account, without contai
### [Public access](#tab/public-access)
-Database scoped credential isn't required to allow access to publicly available files. Create [data source without database scoped credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage.
+A server-level credential isn't required to allow access to publicly available files. Create a [data source without a credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage.
The database scoped credential doesn't need to match the name of storage account
### [Public access](#tab/public-access)
-Database scoped credential isn't required to allow access to publicly available files. Create [data source without database scoped credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage.
+A database scoped credential isn't required to allow access to publicly available files. Create a [data source without a credential](develop-tables-external-tables.md?tabs=sql-ondemand#example-for-create-external-data-source) to access publicly available files on Azure storage.
```sql CREATE EXTERNAL DATA SOURCE mysample
CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat]
WITH ( FORMAT_TYPE = PARQUET) GO CREATE EXTERNAL DATA SOURCE publicData
-WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<public_container>/<path>' )
+WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<public_container>/<path>' )
GO CREATE EXTERNAL TABLE dbo.userPublicData ( [id] int, [first_name] varchar(8000), [last_name] varchar(8000) )
CREATE EXTERNAL FILE FORMAT [SynapseParquetFormat] WITH ( FORMAT_TYPE = PARQUET)
GO CREATE EXTERNAL DATA SOURCE mysample
-WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<container>/<path>'
+WITH ( LOCATION = 'https://<storage_account>.dfs.core.windows.net/<container>/<path>'
-- Uncomment one of these options depending on authentication method that you want to use to access data source: --,CREDENTIAL = WorkspaceIdentity --,CREDENTIAL = SasCredential
synapse-analytics Overview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/overview-features.md
Synapse SQL pools enable you to use built-in security features to secure your da
| **SQL username/password authentication**| Yes | Yes, users can access serverless SQL pool using their usernames and passwords. | | **Azure Active Directory (Azure AD) authentication**| Yes, Azure AD users | Yes, Azure AD logins and users can access serverless SQL pools using their Azure AD identities. | | **Storage Azure Active Directory (Azure AD) passthrough authentication** | Yes | Yes, [Azure AD passthrough authentication](develop-storage-files-storage-access-control.md?tabs=user-identity#supported-storage-authorization-types) is applicable to Azure AD logins. The identity of the Azure AD user is passed to the storage if a credential is not specified. Azure AD passthrough authentication is not available for the SQL users. |
-| **Storage shared access signature (SAS) token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential). |
+| **Storage shared access signature (SAS) token authentication** | No | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) or instance-level [CREDENTIAL](/sql/t-sql/statements/create-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) with [shared access signature](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential). |
| **Storage Access Key authentication** | Yes, using [DATABASE SCOPED CREDENTIAL](/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=azure-sqldw-latest&preserve-view=true) in [EXTERNAL DATA SOURCE](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | No, [use SAS token](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) instead of storage access key. | | **Storage [Managed Identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics) authentication** | Yes, using [Managed Service Identity Credential](/azure/azure-sql/database/vnet-service-endpoint-rule-overview?preserve-view=true&toc=%2fazure%2fsynapse-analytics%2ftoc.json&view=azure-sqldw-latest&preserve-view=true) | Yes, The query can access the storage using the workspace [Managed Identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) credential. | | **Storage Application identity/Service principal (SPN) authentication** | [Yes](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true) | Yes, you can create a [credential](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential) with a [service principal application ID](develop-storage-files-storage-access-control.md?tabs=service-principal#supported-storage-authorization-types) that will be used to authenticate on the storage. | | **Server roles** | No | Yes, sysadmin, public, and other server-roles are supported. |
-| **SERVER SCOPED CREDENTIAL** | No | Yes, the [server scoped credentials](develop-storage-files-storage-access-control.md?tabs=user-identity#server-scoped-credential) are used by the `OPENROWSET` function that do not uses explicit data source. |
+| **SERVER LEVEL CREDENTIAL** | No | Yes, [server-level credentials](develop-storage-files-storage-access-control.md?tabs=user-identity#server-level-credential) are used by the `OPENROWSET` function when it doesn't use an explicit data source. |
| **Permissions - [Server-level](/sql/relational-databases/security/authentication-access/server-level-roles)** | No | Yes, for example, `CONNECT ANY DATABASE` and `SELECT ALL USER SECURABLES` enable a user to read data from any databases. | | **Database roles** | Yes | Yes, you can use `db_owner`, `db_datareader` and `db_ddladmin` roles. | | **DATABASE SCOPED CREDENTIAL** | Yes, used in external data sources. | Yes, database scoped credentials can be used in external data sources to [define storage authentication method](develop-storage-files-storage-access-control.md?tabs=user-identity.md#database-scoped-credential). |
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
To improve the performance of your queries, consider specifying explicit types i
> The serverless Synapse SQL pool uses schema inference to automatically determine columns and their types. The rules for schema inference are the same used for Parquet files. > For Delta Lake type mapping to SQL native type check [type mapping for Parquet](develop-openrowset.md#type-mapping-for-parquet).
-Make sure you can access your file. If your file is protected with SAS key or custom Azure identity, you will need to set up a [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
+Make sure you can access your file. If your file is protected with a SAS key or a custom Azure identity, you will need to set up a [server-level credential for SQL login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential).
> [!IMPORTANT] > Ensure you are using a UTF-8 database collation (for example `Latin1_General_100_BIN2_UTF8`) because string values in Delta Lake files are encoded using UTF-8 encoding.
synapse-analytics Query Json Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-json-files.md
from openrowset(
) with (doc nvarchar(max)) as rows ```
-The JSON document in the preceding sample query includes an array of objects. The query returns each object as a separate row in the result set. Make sure that you can access this file. If your file is protected with SAS key or custom identity, you would need to set up [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
+The JSON document in the preceding sample query includes an array of objects. The query returns each object as a separate row in the result set. Make sure that you can access this file. If your file is protected with a SAS key or a custom identity, you will need to set up a [server-level credential for SQL login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential).
### Data source usage
synapse-analytics Query Parquet Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-parquet-files.md
from openrowset(
format = 'parquet') as rows ```
-Make sure that you can access this file. If your file is protected with SAS key or custom Azure identity, you would need to set up [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
+Make sure that you can access this file. If your file is protected with a SAS key or a custom Azure identity, you will need to set up a [server-level credential for SQL login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential).
> [!IMPORTANT] > Ensure you are using a UTF-8 database collation (for example `Latin1_General_100_BIN2_UTF8`) because string values in PARQUET files are encoded using UTF-8 encoding.
synapse-analytics Query Single Csv File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-single-csv-file.md
from openrowset(
firstrow = 2 ) as rows ```
-Option `firstrow` is used to skip the first row in the CSV file that represents header in this case. Make sure that you can access this file. If your file is protected with SAS key or custom identity, your would need to setup [server level credential for sql login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential).
+The `firstrow` option is used to skip the first row of the CSV file, which in this case is the header. Make sure that you can access this file. If your file is protected with a SAS key or a custom identity, you will need to set up a [server-level credential for SQL login](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential).
> [!IMPORTANT] > If your CSV file contains UTF-8 characters, make sure that you are using a UTF-8 database collation (for example `Latin1_General_100_CI_AS_SC_UTF8`).
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
If you moved a subscription to another Azure AD tenant, you might experience som
If you get errors while you try to access files in Azure storage, make sure that you have permission to access data. You should be able to access publicly available files. If you try to access data without credentials, make sure that your Azure Active Directory (Azure AD) identity can directly access the files.
-If you have a shared access signature key that you should use to access files, make sure that you created a [server-level](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-scoped-credential) or [database-scoped](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) credential that contains that credential. The credentials are required if you need to access data by using the workspace [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) and custom [service principal name (SPN)](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential).
+If you have a shared access signature key that you should use to access files, make sure that you created a [server-level](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#server-level-credential) or [database-scoped](develop-storage-files-storage-access-control.md?tabs=shared-access-signature#database-scoped-credential) credential that contains that credential. The credentials are required if you need to access data by using the workspace [managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity#database-scoped-credential) and custom [service principal name (SPN)](develop-storage-files-storage-access-control.md?tabs=service-principal#database-scoped-credential).
### Can't read, list, or access files in Azure Data Lake Storage
traffic-manager How To Add Endpoint Existing Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/how-to-add-endpoint-existing-profile-template.md
Previously updated : 12/13/2021 Last updated : 04/24/2023
update-center Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md
description: The article tells what update management center (preview) in Azure
Previously updated : 03/23/2023 Last updated : 04/23/2023 # About Update management center (preview)
+> [!Important]
+> - [Automation Update management](../automation/update-management/overview.md) relies on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (also known as the MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update management center (preview) is the v2 version of Automation Update management and the future of Update management in Azure. Update management center (UMC) is a native service in Azure and doesn't rely on the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
+> - Guidance for migrating from Automation Update management to Update management center will be provided once the latter is generally available. If you use Automation Update management, we recommend continuing to use the Log Analytics agent and **not** migrating to the Azure Monitor agent until migration guidance is available; otherwise, Automation Update management won't work. The Log Analytics agent also won't be deprecated before all Automation Update management customers are moved to UMC.
+ Update management center (preview) is a unified service that helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. You can also use Update management center (preview) to apply updates in real time or schedule them within a defined maintenance window. You can use the update management center (preview) in Azure to:
virtual-desktop Language Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/language-packs.md
The latter method is a lot more efficient and cost-effective. However, it's up t
You need the following things to customize your Windows 10 Enterprise multi-session images to add multiple languages: -- An Azure virtual machine (VM) with Windows 10 Enterprise multi-session, version 1903 or later
+- An Azure virtual machine (VM) with a [supported version of Windows 10 Enterprise multi-session](/lifecycle/products/windows-10-enterprise-and-education).
- The Language ISO, Feature on Demand (FOD) Disk 1, and Inbox Apps ISO of the OS version the image uses. You can download them here: - Language ISO:
- - [Windows 10, version 1903 or 1909 Language Pack ISO](https://software-download.microsoft.com/download/pr/18362.1.190318-1202.19h1_release_CLIENTLANGPACKDVD_OEM_MULTI.iso)
- - [Windows 10, version 2004 or later Language Pack ISO](https://software-download.microsoft.com/download/pr/19041.1.191206-1406.vb_release_CLIENTLANGPACKDVD_OEM_MULTI.iso)
+ - [Windows 10 Language Pack ISO (version 2004 or later)](https://software-download.microsoft.com/download/pr/19041.1.191206-1406.vb_release_CLIENTLANGPACKDVD_OEM_MULTI.iso)
- FOD Disk 1 ISO:
- - [Windows 10, version 1903 or 1909 FOD Disk 1 ISO](https://software-download.microsoft.com/download/pr/18362.1.190318-1202.19h1_release_amd64fre_FOD-PACKAGES_OEM_PT1_amd64fre_MULTI.iso)
- - [Windows 10, version 2004 or later FOD Disk 1 ISO](https://software-download.microsoft.com/download/pr/19041.1.191206-1406.vb_release_amd64fre_FOD-PACKAGES_OEM_PT1_amd64fre_MULTI.iso)
+ - [Windows 10 FOD Disk 1 ISO (version 2004 or later)](https://software-download.microsoft.com/download/pr/19041.1.191206-1406.vb_release_amd64fre_FOD-PACKAGES_OEM_PT1_amd64fre_MULTI.iso)
- Inbox Apps ISO:
- - [Windows 10, version 1903 or 1909 Inbox Apps ISO](https://software-download.microsoft.com/download/pr/18362.1.190318-1202.19h1_release_amd64fre_InboxApps.iso)
- - [Windows 10, version 2004 Inbox Apps ISO](https://software-download.microsoft.com/download/pr/19041.1.191206-1406.vb_release_amd64fre_InboxApps.iso)
- - [Windows 10, version 20H2 Inbox Apps ISO](https://software-download.microsoft.com/download/pr/19041.508.200905-1327.vb_release_svc_prod1_amd64fre_InboxApps.iso)
- - [Windows 10, version 21H1 or 21H2 Inbox Apps ISO](https://software-download.microsoft.com/download/sg/19041.928.210407-2138.vb_release_svc_prod1_amd64fre_InboxApps.iso)
+ - [Windows 10 Inbox Apps ISO (version 21H1 or later)](https://software-download.microsoft.com/download/sg/19041.928.210407-2138.vb_release_svc_prod1_amd64fre_InboxApps.iso)
- - If you use Local Experience Pack (LXP) ISO files to localize your images, you'll also need to download the appropriate LXP ISO for the best language experience
- - If you're using Windows 10, version 1903 or 1909:
- - [Windows 10, version 1903 or 1909 LXP ISO](https://software-download.microsoft.com/download/pr/Win_10_1903_32_64_ARM64_MultiLng_LngPkAll_LXP_ONLY.iso)
- - If you're using Windows 10, version 2004, 20H2, or 21H1, use the information in [Adding languages in Windows 10: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue) to figure out which of the following LXP ISOs is right for you:
+ - If you use Local Experience Pack (LXP) ISO files to localize your images, you'll also need to download the appropriate LXP ISO for the best language experience. Use the information in [Adding languages in Windows 10: Known issues](/windows-hardware/manufacture/desktop/language-packs-known-issue) to figure out which of the following LXP ISOs is right for you:
- [Windows 10, version 2004 or later 01C 2021 LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2101C.iso) - [Windows 10, version 2004 or later 02C 2021 LXP ISO](https://software-download.microsoft.com/download/pr/LanguageExperiencePack.2102C.iso) - [Windows 10, version 2004 or later 04B 2021 LXP ISO](https://software-download.microsoft.com/download/sg/LanguageExperiencePack.2104B.iso)
virtual-desktop Troubleshoot Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-teams.md
Using Teams in a virtualized environment is different from using Teams in a non-
- With per-machine installation, Teams on VDI isn't automatically updated the same way non-VDI Teams clients are. To update the client, you'll need to update the VM image by installing a new MSI. - Media optimization for Teams is only supported for the Remote Desktop client on machines running Windows 10 or later or macOS 10.14 or later.-- Use of explicit HTTP proxies defined on the client endpoint device isn't supported.
+- Use of explicit HTTP proxies defined on the client endpoint device should work, but isn't supported.
- Zoom in/zoom out of chat windows isn't supported. ### Calls and meetings
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
This example shows how to set the data disk and NIC to be deleted when the VM is
PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.Compute/virtualMachines/myVM?api-version=xx {
-"storageProfile": {
+ "storageProfile": {
"dataDisks": [
- { "diskSizeGB": 1023,
+ {
+ "diskSizeGB": 1023,
"name": "myVMdatadisk", "createOption": "Empty", "lun": 0,
- "deleteOption": ΓÇ£DeleteΓÇ¥
- } ]
-},
-"networkProfile": {
+ "deleteOption": "Delete"
+ }
+ ]
+ },
+ "networkProfile": {
"networkInterfaces": [
- { "id": "/subscriptions/.../Microsoft.Network/networkInterfaces/myNIC",
+ {
+ "id": "/subscriptions/.../Microsoft.Network/networkInterfaces/myNIC",
"properties": { "primary": true,
- "deleteOption": ΓÇ£DeleteΓÇ¥
- } }
- ]
+ "deleteOption": "Delete"
+ }
+ }
+ ]
+ }
} ```
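A similar effect can be sketched at creation time with the Azure CLI, assuming illustrative resource names; the `--*-delete-option` flags mark the disks and NIC for deletion together with the VM:

```azurecli
# Illustrative names; disks and NIC are deleted when the VM is deleted.
az vm create \
  --resource-group rg1 \
  --name myVM \
  --image Ubuntu2204 \
  --os-disk-delete-option Delete \
  --data-disk-delete-option Delete \
  --nic-delete-option Delete
```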
PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/provider
"id": "/subscriptions/../publicIPAddresses/test-ip",
-          "properties": {
-            “deleteOption”: “Delete”
- }
+          "properties": {
+            "deleteOption": "Delete"
+ }
}, "subnet": {
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
"networkProfile": { "networkInterfaces": [ {
- "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Network/networkInterfaces/nic336"
- ,
+ "id": "/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Network/networkInterfaces/nic336",
"properties": {
- "deleteOption": "Delete"
-}
-}
+ "deleteOption": "Delete"
+ }
+ }
] }
-}
+ }
} ```
virtual-machines Hb Hc Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md
InfiniBand can be configured on the SR-IOV enabled VM sizes with the OFED driver
## Duplicate MAC with cloud-init with Ubuntu on H-series and N-series VMs There's a known issue with cloud-init on Ubuntu VM images as it tries to bring up the IB interface. This can happen either on VM reboot or when trying to create a VM image after generalization. The VM boot logs may show an error like so:
-```console
+```output
“Starting Network Service...RuntimeError: duplicate mac found! both 'eth1' and 'ib0' have mac”. ```
This 'duplicate MAC with cloud-init on Ubuntu' is a known issue. This will be re
2) Install the necessary software packages to enable IB ([instruction here](https://techcommunity.microsoft.com/t5/azure-compute/configuring-infiniband-for-ubuntu-hpc-and-gpu-vms/ba-p/1221351)) 3) Edit waagent.conf to change EnableRDMA=y 4) Disable networking in cloud-init
- ```console
+ ```bash
echo network: {config: disabled} | sudo tee /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg ``` 5) Edit netplan's networking configuration file generated by cloud-init to remove the MAC
- ```console
+ ```bash
sudo bash -c "cat > /etc/netplan/50-cloud-init.yaml" <<'EOF' network: ethernets:
HB-series VMs can only expose 228 GB of RAM to guest VMs at this time. Similarly
GSS Proxy has a known bug in CentOS/RHEL 7.5 that can manifest as a significant performance and responsiveness penalty when used with NFS. This can be mitigated with:
-```console
-sed -i 's/GSS_USE_PROXY="yes"/GSS_USE_PROXY="no"/g' /etc/sysconfig/nfs
+```bash
+sudo sed -i 's/GSS_USE_PROXY="yes"/GSS_USE_PROXY="no"/g' /etc/sysconfig/nfs
``` ## Cache Cleaning
On HPC systems, it is often useful to clean up the memory after a job has finish
Using `numactl -H` will show which NUMAnode(s) the memory is buffered with (possibly all). In Linux, users can clean the caches in three ways to return buffered or cached memory to ‘free’. You need to be root or have sudo permissions.
-```console
-echo 1 > /proc/sys/vm/drop_caches [frees page-cache]
-echo 2 > /proc/sys/vm/drop_caches [frees slab objects e.g. dentries, inodes]
-echo 3 > /proc/sys/vm/drop_caches [cleans page-cache and slab objects]
+```bash
+echo 1 | sudo tee /proc/sys/vm/drop_caches   # frees page cache
+echo 2 | sudo tee /proc/sys/vm/drop_caches   # frees slab objects (for example, dentries and inodes)
+echo 3 | sudo tee /proc/sys/vm/drop_caches   # frees page cache and slab objects
``` ![Screenshot of command prompt after cleaning](./media/hpc/cache-cleaning-2.png)
echo 3 > /proc/sys/vm/drop_caches [cleans page-cache and slab objects]
You may ignore the following kernel warning messages when booting an HB-series VM under Linux. This is due to a known limitation of the Azure hypervisor that will be addressed over time.
-```console
+```output
[ 0.004000] WARNING: CPU: 4 PID: 0 at arch/x86/kernel/smpboot.c:376 topology_sane.isra.3+0x80/0x90 [ 0.004000] sched: CPU #4's llc-sibling CPU #0 is not on the same node! [node: 1 != 0]. Ignoring dependency. [ 0.004000] Modules linked in:
virtual-machines Hbv3 Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md
Previously updated : 03/04/2023 Last updated : 04/21/2023
When paired in a striped array, the NVMe SSD provides up to 7 GB/s reads and 3 G
| MPI Support | HPC-X, Intel MPI, OpenMPI, MVAPICH2, MPICH | | Additional Frameworks | UCX, libfabric, PGAS | | Azure Storage Support | Standard and Premium Disks (maximum 32 disks) |
-| OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 12 SP5+, WinServer 2016+ |
+| OS Support for SRIOV RDMA | CentOS/RHEL 7.9+, Ubuntu 18.04+, SLES 15.4, WinServer 2016+ |
| Recommended OS for Performance | CentOS 8.1, Windows Server 2019+ | Orchestrator Support | Azure CycleCloud, Azure Batch, AKS; [cluster configuration options](sizes-hpc.md#cluster-configuration-options) | > [!NOTE] > Windows Server 2012 R2 is not supported on HBv3 and other VMs with more than 64 (virtual or physical) cores. For more details, see [Supported Windows guest operating systems for Hyper-V on Windows Server](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows).
+> [!IMPORTANT]
+> This document references a release version of Linux that is nearing or at End of Life (EOL). Consider updating to a more current version.
+ ## Next steps - Read about the latest announcements, HPC workload examples, and performance results at the [Azure Compute Tech Community Blogs](https://techcommunity.microsoft.com/t5/azure-compute/bg-p/AzureCompute).
virtual-machines Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/isolation.md
Title: Isolation for VMs in Azure description: Learn how VM isolation works in Azure.-+ Last updated 04/20/2023-+
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/build-image-with-packer.md
build {
} ```
-This template builds an Ubuntu 16.04 LTS image, installs NGINX, then deprovisions the VM.
+This template builds an Ubuntu 20.04 LTS image, installs NGINX, then deprovisions the VM.
> [!NOTE] > If you expand on this template to provision user credentials, adjust the provisioner command that deprovisions the Azure agent to read `-deprovision` rather than `deprovision+user`.
virtual-network Manage Network Security Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-network-security-group.md
Previously updated : 02/14/2023 Last updated : 04/24/2023
If you don't have an Azure account with an active subscription, [create one for
If you're running Azure CLI locally, use Azure CLI version 2.0.28 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure.
-The account you log into, or connect to Azure with must be assigned to the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions listed in [Permissions](#permissions).
+Assign the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor), or a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) with the appropriate [Permissions](#permissions), to the account you connect to Azure with.
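As a sketch of such a role assignment with the Azure CLI, assuming a hypothetical user and scope:

```azurecli
# Hypothetical assignee and scope; grants Network Contributor on a resource group.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```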
## Work with network security groups
Under **Help**, you can view **Effective security rules**. For more information,
To learn more about the common Azure settings listed, see the following articles: - [Activity log](../azure-monitor/essentials/platform-logs-overview.md)+ - [Access control (IAM)](../role-based-access-control/overview.md)+ - [Tags](../azure-resource-manager/management/tag-resources.md)+ - [Locks](../azure-resource-manager/management/lock-resources.md)+ - [Automation script](../azure-resource-manager/templates/export-template-portal.md) # [**PowerShell**](#tab/network-security-group-powershell)
Get-AzNetworkSecurityGroup -Name myNSG -ResourceGroupName myResourceGroup
To learn more about the common Azure settings listed, see the following articles: - [Activity log](../azure-monitor/essentials/platform-logs-overview.md)+ - [Access control (IAM)](../role-based-access-control/overview.md)+ - [Tags](../azure-resource-manager/management/tag-resources.md)+ - [Locks](../azure-resource-manager/management/lock-resources.md) # [**Azure CLI**](#tab/network-security-group-cli)
az network nsg show --resource-group myResourceGroup --name myNSG
To learn more about the common Azure settings listed, see the following articles: - [Activity log](../azure-monitor/essentials/platform-logs-overview.md)+ - [Access control (IAM)](../role-based-access-control/overview.md)+ - [Tags](../azure-resource-manager/management/tag-resources.md)+ - [Locks](../azure-resource-manager/management/lock-resources.md)
To learn more about the common Azure settings listed, see the following articles
The most common changes to a network security group are: - [Associate or dissociate a network security group to or from a network interface](#associate-or-dissociate-a-network-security-group-to-or-from-a-network-interface)+ - [Associate or dissociate a network security group to or from a subnet](#associate-or-dissociate-a-network-security-group-to-or-from-a-subnet)+ - [Create a security rule](#create-a-security-rule)+ - [Delete a security rule](#delete-a-security-rule) ### Associate or dissociate a network security group to or from a network interface
-To associate a network security group to, or dissociate a network security group from a network interface, see [Associate a network security group to, or dissociate a network security group from a network interface](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group).
+For more information about the association and dissociation of a network security group, see [Associate or dissociate a network security group](virtual-network-network-interface.md#associate-or-dissociate-a-network-security-group).
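For a quick illustration, associating a network security group with a network interface can be sketched with the Azure CLI; the names below are placeholders:

```azurecli
# Illustrative names; attaches the NSG to the NIC.
az network nic update \
  --resource-group myResourceGroup \
  --name myNIC \
  --network-security-group myNSG
```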
### Associate or dissociate a network security group to or from a subnet
az network asg delete --resource-group myResourceGroup --name myASG
## Permissions
-To do tasks on network security groups, security rules, and application security groups, your account must be assigned to the [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate permissions as listed in the following tables:
+To manage network security groups, security rules, and application security groups, your account must be assigned the [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role. You can also use a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate permissions, as listed in the following tables:
### Network security group
To do tasks on network security groups, security rules, and application security
| Microsoft.Network/networkSecurityGroups/delete | Delete network security group | | Microsoft.Network/networkSecurityGroups/join/action | Associate a network security group to a subnet or network interface -- >[!NOTE] > To perform `write` operations on a network security group, the subscription account must have at least `read` permissions for resource group along with `Microsoft.Network/networkSecurityGroups/write` permission. -- ### Network security group rule | Action | Name |
To do tasks on network security groups, security rules, and application security
## Next steps - Add or remove [a network interface to or from an application security group](./virtual-network-network-interface.md?tabs=network-interface-portal#add-or-remove-from-application-security-groups).+ - Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks
virtual-network Manage Route Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-route-table.md
Previously updated : 12/13/2022 Last updated : 04/24/2023 # Create, change, or delete a route table
-Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change any of Azure's default routing, you do so by creating a route table. If you're new to routing in virtual networks, you can learn more about it in [virtual network traffic routing](virtual-networks-udr-overview.md) or by completing a [tutorial](tutorial-create-route-table-portal.md).
+Azure automatically routes traffic between Azure subnets, virtual networks, and on-premises networks. If you want to change Azure's default routing, you do so by creating a route table. If you're new to routing in virtual networks, you can learn more about it in [virtual network traffic routing](virtual-networks-udr-overview.md) or by completing a [tutorial](tutorial-create-route-table-portal.md).
## Before you begin
If you don't have one, set up an Azure account with an active subscription. [Cre
- **Azure CLI users**: Run the commands via either the [Azure Cloud Shell](https://shell.azure.com/bash) or the Azure CLI running locally. Use Azure CLI version 2.0.31 or later if you're running the Azure CLI locally. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Also run `az login` to create a connection with Azure.
-The account you log into, or connect to Azure with must be assigned to the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) or to a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that's assigned the appropriate actions listed in [Permissions](#permissions).
+Assign the [Network contributor role](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor), or a [Custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json) with the appropriate [Permissions](#permissions), to the account you connect to Azure with.
## Create a route table
The most common changes are to [add](#create-a-route) routes, [remove](#delete-a
## Associate a route table to a subnet
-You can optionally associate a route table to a subnet. A route table can be associated to zero or more subnets. Because route tables aren't associated to virtual networks, you must associate a route table to each subnet you want the route table associated to. Azure routes all traffic leaving the subnet based on routes you've created within route tables, [default routes](virtual-networks-udr-overview.md#default), and routes propagated from an on-premises network, if the virtual network is connected to an Azure virtual network gateway (ExpressRoute or VPN). You can only associate a route table to subnets in virtual networks that exist in the same Azure location and subscription as the route table.
+You can optionally associate a route table to a subnet. A route table can be associated to zero or more subnets. Route tables aren't associated to virtual networks. You must associate a route table to each subnet you want the route table associated to.
+
+Azure routes all traffic leaving the subnet based on the following routes:
+
+* Within route tables
+
+* [Default routes](virtual-networks-udr-overview.md#default)
+
+* Routes propagated from an on-premises network, if the virtual network is connected to an Azure virtual network gateway (ExpressRoute or VPN).
+
+You can only associate a route table to subnets in virtual networks that exist in the same Azure location and subscription as the route table.
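If you'd rather script the association than use the portal, a minimal Azure PowerShell sketch follows; the resource names and address prefix are placeholders:

```azurepowershell
# Illustrative names; associates an existing route table with a subnet.
$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup
$rt   = Get-AzRouteTable -Name myRouteTable -ResourceGroupName myResourceGroup

Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name mySubnet `
    -AddressPrefix 10.0.0.0/24 -RouteTable $rt

# Commit the updated subnet configuration.
$vnet | Set-AzVirtualNetwork
```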
1. Go to the [Azure portal](https://portal.azure.com) to manage your virtual network. Search for and select **Virtual networks**.
There's a limit to how many routes per route table can create per Azure location
1. Enter a unique **Route name** for the route within the route table.
- :::image type="content" source="./media/manage-route-table/add-route.png" alt-text="Screenshot of the add a route page for a route table.":::
+ :::image type="content" source="./media/manage-route-table/add-route.png" alt-text="Screenshot of add a route page for a route table.":::
1. Enter the **Address prefix**, in Classless Inter-Domain Routing (CIDR) notation, that you want to route traffic to. The prefix can't be duplicated in more than one route within the route table, though the prefix can be within another prefix. For example, if you defined *10.0.0.0/16* as a prefix in one route, you can still define another route with the *10.0.0.0/22* address prefix. Azure selects a route for traffic based on longest prefix match. To learn more, see [How Azure selects a route](virtual-networks-udr-overview.md#how-azure-selects-a-route).
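As a scripted counterpart to this step, a hedged Azure CLI sketch with placeholder values; because of longest prefix match, the /22 route below takes precedence over a broader /16 route for the addresses it covers:

```azurecli
# Illustrative values; routes 10.0.0.0/22 traffic through a virtual appliance.
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name myRouteTable \
  --name toNva \
  --address-prefix 10.0.0.0/22 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.100.4
```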
You can determine the next hop type between a virtual machine and the IP address
1. In the **Network Watcher | Next hop** page:
- :::image type="content" source="./media/manage-route-table/add-route.png" alt-text="Screenshot of the add a route page for a route table.":::
+ :::image type="content" source="./media/manage-route-table/add-route.png" alt-text="Screenshot of add a route page for a route table.":::
| Setting | Value | |--|--|
virtual-network Virtual Network Service Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoints-overview.md
Previously updated : 10/20/2022 Last updated : 04/20/2023
Service endpoints are available for the following Azure services and regions. Th
**Generally available** - **[Azure Storage](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json#grant-access-from-a-virtual-network)** (*Microsoft.Storage*): Generally available in all Azure regions.
+- **[Azure Storage cross-region service endpoints](../storage/common/storage-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-storage-cross-region-service-endpoints)** (*Microsoft.Storage.Global*): Generally available in all Azure regions.
- **[Azure SQL Database](/azure/azure-sql/database/vnet-service-endpoint-rule-overview?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.Sql*): Generally available in all Azure regions. - **[Azure Synapse Analytics](/azure/azure-sql/database/vnet-service-endpoint-rule-overview?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.Sql*): Generally available in all Azure regions for dedicated SQL pools (formerly SQL DW). - **[Azure Database for PostgreSQL server](../postgresql/howto-manage-vnet-using-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json)** (*Microsoft.Sql*): Generally available in Azure regions where database service is available.
Service endpoints provide the following benefits:
## Limitations - The feature is available only to virtual networks deployed through the Azure Resource Manager deployment model.-- Endpoints are enabled on subnets configured in Azure virtual networks. Endpoints can't be used for traffic from your on-premise services to Azure services. For more information, see [Secure Azure service access from on-premises](#secure-azure-services-to-virtual-networks)-- For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network's region. For Azure Storage, you can [enable access to virtual networks in other regions](../storage/common/storage-network-security.md?tabs=azure-portal) in preview.
+- Endpoints are enabled on subnets configured in Azure virtual networks. Endpoints can't be used for traffic from your on-premises services to Azure services. For more information, see [Secure Azure service access from on-premises](#secure-azure-services-to-virtual-networks).
+- For Azure SQL, a service endpoint applies only to Azure service traffic within a virtual network's region.
- For Azure Data Lake Storage (ADLS) Gen 1, the VNet Integration capability is only available for virtual networks within the same region. Also note that virtual network integration for ADLS Gen1 uses the virtual network service endpoint security between your virtual network and Azure Active Directory (Azure AD) to generate extra security claims in the access token. These claims are then used to authenticate your virtual network to your Data Lake Storage Gen1 account and allow access. The *Microsoft.AzureActiveDirectory* tag listed under services supporting service endpoints is used only for supporting service endpoints to ADLS Gen 1. Azure AD doesn't support service endpoints natively. For more information about Azure Data Lake Store Gen 1 VNet integration, see [Network security in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-network-security.md?toc=%2fazure%2fvirtual-network%2ftoc.json). ## Secure Azure services to virtual networks
Service endpoints provide the following benefits:
- Configure service endpoints on a subnet in a virtual network. Endpoints work with any type of compute instances running within that subnet. - You can configure multiple service endpoints for all supported Azure services (Azure Storage or Azure SQL Database, for example) on a subnet.-- For Azure SQL Database, virtual networks must be in the same region as the Azure service resource. For Azure Storage, you can [enable access to virtual networks in other regions](../storage/common/storage-network-security.md?tabs=azure-portal) in preview. For all other services, you can secure Azure service resources to virtual networks in any region.
+- For Azure SQL Database, virtual networks must be in the same region as the Azure service resource. For all other services, you can secure Azure service resources to virtual networks in any region.
- The virtual network where the endpoint is configured can be in the same or different subscription than the Azure service resource. For more information on permissions required for setting up endpoints and securing Azure services, see [Provisioning](#provisioning). - For supported services, you can secure new or existing resources to virtual networks using service endpoints.
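To make the subnet configuration concrete, a minimal Azure CLI sketch with placeholder names that enables a storage service endpoint on a subnet:

```azurecli
# Illustrative names; enables the Microsoft.Storage service endpoint on a subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --service-endpoints Microsoft.Storage
```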
virtual-wan Openvpn Azure Ad Tenant Multi App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-tenant-multi-app.md
Previously updated : 09/22/2020- Last updated : 04/24/2023+ # Create an Azure Active Directory (AD) tenant for P2S OpenVPN protocol connections
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-howto.md
For more information about the benefits of BGP and to understand the technical r
## Getting started
-Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts (configure BGP on the gateway, S2S connection, and VNet-to-VNet connection) you build the topology as shown in Diagram 1.
+Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts (configure BGP on the gateway, S2S connection, and VNet-to-VNet connection) you build the topology as shown in **Diagram 1**. You can combine parts together to build a more complex, multi-hop, transit network that meets your needs.
**Diagram 1** :::image type="content" source="./media/bgp-howto/vnet-to-vnet.png" alt-text="Diagram showing network architecture and settings." border="false":::
-You can combine parts together to build a more complex, multi-hop, transit network that meets your needs.
+For context, referring to Diagram 1, if BGP were disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site5. Once you enable BGP, all three networks can communicate over the S2S IPsec and VNet-to-VNet connections.
### Prerequisites
To enable or disable BGP on a VNet-to-VNet connection, you use the same steps as
> [!NOTE] > A VNet-to-VNet connection without BGP will limit the communication to the two connected VNets only. Enable BGP to allow transit routing capability to other S2S or VNet-to-VNet connections of these two VNets.
-If you completed all three parts of this exercise, you have established the following network topology:
-
-**Diagram 4**
--
-For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site 5. Once you enable BGP, as shown in the Diagram 4, all three networks will be able to communicate over the S2S IPsec and VNet-to-VNet connections.
- ## Next steps For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
description: Learn how to set up an Azure AD tenant for P2S Azure AD authenticat
Previously updated : 10/25/2022 Last updated : 04/24/2023
vpn-gateway Vpn Gateway Activeactive Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md
description: Learn how to configure active-active connections with VPN gateways
Previously updated : 09/03/2020 Last updated : 04/24/2023
This article walks you through the steps to create active-active cross-premises and VNet-to-VNet connections using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and PowerShell. You can also configure an active-active gateway in the Azure portal. ## About highly available cross-premises connections+ To achieve high availability for cross-premises and VNet-to-VNet connectivity, you should deploy multiple VPN gateways and establish multiple parallel connections between your networks and Azure. See [Highly Available Cross-Premises and VNet-to-VNet Connectivity](vpn-gateway-highlyavailable.md) for an overview of connectivity options and topology. This article provides the instructions to set up an active-active cross-premises VPN connection, and active-active connection between two virtual networks.
You can combine these together to build a more complex, highly available network
> The active-active mode is available for all SKUs except Basic. ## <a name ="aagateway"></a>Part 1 - Create and configure active-active VPN gateways
-The following steps will configure your Azure VPN gateway in active-active modes. The key differences between the active-active and active-standby gateways:
+
+The following steps configure your Azure VPN gateway in active-active mode. The key differences between active-active and active-standby gateways are:
* You need to create two Gateway IP configurations with two public IP addresses * You need to set the EnableActiveActiveFeature flag
The following steps will configure your Azure VPN gateway in active-active modes
The other properties are the same as the non-active-active gateways. ### Before you begin+ * Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-* You'll need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use Cloud Shell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
+* You need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use Cloud Shell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
### Step 1 - Create and configure VNet1+ #### 1. Declare your variables
-For this exercise, we'll start by declaring our variables. If you use the "Try It" Cloud Shell, you will automatically connect to your account. If you use PowerShell locally, use the following example to help you connect:
+For this exercise, we start by declaring our variables. If you use the "Try It" Cloud Shell, you'll automatically connect to your account. If you use PowerShell locally, use the following example to help you connect:
```powershell Connect-AzAccount Select-AzSubscription -SubscriptionName $Sub1 ```
-The example below declares the variables using the values for this exercise. Be sure to replace the values with your own when configuring for production. You can use these variables if you are running through the steps to become familiar with this type of configuration. Modify the variables, and then copy and paste into your PowerShell console.
+The following example declares the variables using the values for this exercise. Be sure to replace the values with your own when configuring for production. You can use these variables if you're running through the steps to become familiar with this type of configuration. Modify the variables, and then copy and paste into your PowerShell console.
```azurepowershell-interactive $Sub1 = "Ross"
$Connection152 = "VNet1toSite5_2"
#### 2. Create a new resource group
-Use the example below to create a new resource group:
+Use the following example to create a new resource group:
```azurepowershell-interactive New-AzResourceGroup -Name $RG1 -Location $Location1 ``` #### 3. Create TestVNet1
-The sample below creates a virtual network named TestVNet1 and three subnets, one called GatewaySubnet, one called FrontEnd, and one called Backend. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails.
+
+The following example creates a virtual network named TestVNet1 and three subnets, one called GatewaySubnet, one called FrontEnd, and one called Backend. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails.
```azurepowershell-interactive $fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubName1 -AddressPrefix $FESubPrefix1
New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Locatio
``` ### Step 2 - Create the VPN gateway for TestVNet1 with active-active mode+ #### 1. Create the public IP addresses and gateway IP configurations
-Request two public IP addresses to be allocated to the gateway you will create for your VNet. You'll also define the subnet and IP configurations required.
+
+Request two public IP addresses to be allocated to the gateway you'll create for your VNet. You'll also define the subnet and IP configurations required.
```azurepowershell-interactive $gw1pip1 = New-AzPublicIpAddress -Name $GW1IPName1 -ResourceGroupName $RG1 -Location $Location1 -AllocationMethod Dynamic
$gw1ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name $GW1IPconf2 -Subnet $sub
``` #### 2. Create the VPN gateway with active-active configuration
-Create the virtual network gateway for TestVNet1. Note that there are two GatewayIpConfig entries, and the EnableActiveActiveFeature flag is set. Creating a gateway can take a while (45 minutes or more to complete, depending on the selected SKU).
+
+Create the virtual network gateway for TestVNet1. There are two GatewayIpConfig entries, and the EnableActiveActiveFeature flag is set. Creating a gateway can take a while (45 minutes or more to complete, depending on the selected SKU).
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 -Location $Location1 -IpConfigurations $gw1ipconf1,$gw1ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet1ASN -EnableActiveActiveFeature -Debug ``` #### 3. Obtain the gateway public IP addresses and the BGP Peer IP address
-Once the gateway is created, you will need to obtain the BGP Peer IP address on the Azure VPN Gateway. This address is needed to configure the Azure VPN Gateway as a BGP Peer for your on-premises VPN devices.
+
+Once the gateway is created, you need to obtain the BGP Peer IP address on the Azure VPN Gateway. This address is needed to configure the Azure VPN Gateway as a BGP Peer for your on-premises VPN devices.
```azurepowershell-interactive $gw1pip1 = Get-AzPublicIpAddress -Name $GW1IPName1 -ResourceGroupName $RG1
PS D:\> $vnet1gw.BgpSettingsText
} ```
-The order of the public IP addresses for the gateway instances and the corresponding BGP Peering Addresses are the same. In this example, the gateway VM with public IP of 40.112.190.5 will use 10.12.255.4 as its BGP Peering Address, and the gateway with 138.91.156.129 will use 10.12.255.5. This information is needed when you set up your on premises VPN devices connecting to the active-active gateway. The gateway is shown in the diagram below with all addresses:
+The order of the public IP addresses for the gateway instances and the corresponding BGP Peering Addresses is the same. In this example, the gateway VM with the public IP of 40.112.190.5 uses 10.12.255.4 as its BGP Peering Address, and the gateway with 138.91.156.129 uses 10.12.255.5. This information is needed when you set up your on-premises VPN devices connecting to the active-active gateway. The gateway is shown in the following diagram with all addresses:
![active-active gateway](./media/vpn-gateway-activeactive-rm-powershell/active-active-gw.png) Once the gateway is created, you can use this gateway to establish an active-active cross-premises or VNet-to-VNet connection. The following sections walk through the steps to complete the exercise. ## <a name ="aacrossprem"></a>Part 2 - Establish an active-active cross-premises connection+ To establish a cross-premises connection, you need to create a Local Network Gateway to represent your on-premises VPN device, and a Connection to connect the Azure VPN gateway with the local network gateway. In this example, the Azure VPN gateway is in active-active mode. As a result, even though there is only one on-premises VPN device (local network gateway) and one connection resource, both Azure VPN gateway instances will establish S2S VPN tunnels with the on-premises device.
-Before proceeding, please make sure you have completed [Part 1](#aagateway) of this exercise.
+Before proceeding, make sure you have completed [Part 1](#aagateway) of this exercise.
### Step 1 - Create and configure the local network gateway+ #### 1. Declare your variables+ This exercise will continue to build the configuration shown in the diagram. Be sure to replace the values with the ones that you want to use for your configuration. ```azurepowershell-interactive
$BGPPeerIP51 = "10.52.255.253"
A couple of things to note regarding the local network gateway parameters: * The local network gateway can be in the same or different location and resource group as the VPN gateway. This example shows them in different resource groups but in the same Azure location.
-* If there is only one on-premises VPN device as shown above, the active-active connection can work with or without BGP protocol. This example uses BGP for the cross-premises connection.
+* If there is only one on-premises VPN device (as shown in the example), the active-active connection can work with or without BGP protocol. This example uses BGP for the cross-premises connection.
* If BGP is enabled, the prefix you need to declare for the local network gateway is the host address of your BGP Peer IP address on your VPN device. In this case, it's a /32 prefix of "10.52.255.253/32".
-* As a reminder, you must use different BGP ASNs between your on-premises networks and Azure VNet. If they are the same, you need to change your VNet ASN if your on-premises VPN device already uses the ASN to peer with other BGP neighbors.
+* As a reminder, you must use different BGP ASNs between your on-premises networks and Azure VNet. If they're the same, you need to change your VNet ASN if your on-premises VPN device already uses the ASN to peer with other BGP neighbors.
#### 2. Create the local network gateway for Site5
-Before you continue, please make sure you are still connected to Subscription 1. Create the resource group if it is not yet created.
+
+Before you continue, make sure you're still connected to Subscription 1. Create the resource group if it isn't yet created.
```azurepowershell-interactive New-AzResourceGroup -Name $RG5 -Location $Location5
New-AzLocalNetworkGateway -Name $LNGName51 -ResourceGroupName $RG5 -Location $Lo
``` ### Step 2 - Connect the VNet gateway and local network gateway+ #### 1. Get the two gateways ```azurepowershell-interactive
$lng5gw1 = Get-AzLocalNetworkGateway -Name $LNGName51 -ResourceGroupName $RG5
``` #### 2. Create the TestVNet1 to Site5 connection+ In this step, you create the connection from TestVNet1 to Site5_1 with "EnableBGP" set to $True. ```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection151 -ResourceGroupName $R
``` #### 3. VPN and BGP parameters for your on-premises VPN device
-The example below lists the parameters you will enter into the BGP configuration section on your on-premises VPN device for this exercise:
+
+The following example lists the parameters that you enter into the BGP configuration section on your on-premises VPN device for this exercise:
``` - Site5 ASN : 65050
The example below lists the parameters you will enter into the BGP configuration
- eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed ```
-The connection should be established after a few minutes, and the BGP peering session will start once the IPsec connection is established. This example so far has configured only one on-premises VPN device, resulting in the diagram shown below:
+The connection should be established after a few minutes, and the BGP peering session will start once the IPsec connection is established. This example so far has configured only one on-premises VPN device, resulting in the following diagram:
-![active-active-crossprem](./media/vpn-gateway-activeactive-rm-powershell/active-active.png)
### Step 3 - Connect two on-premises VPN devices to the active-active VPN gateway+ If you have two VPN devices at the same on-premises network, you can achieve dual redundancy by connecting the Azure VPN gateway to the second VPN device. #### 1. Create the second local network gateway for Site5+ The gateway IP address, address prefix, and BGP peering address for the second local network gateway must not overlap with the previous local network gateway for the same on-premises network. ```azurepowershell-interactive
New-AzLocalNetworkGateway -Name $LNGName52 -ResourceGroupName $RG5 -Location $Lo
``` #### 2. Connect the VNet gateway and the second local network gateway+ Create the connection from TestVNet1 to Site5_2 with "EnableBGP" set to $True ```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection152 -ResourceGroupName $R
``` #### 3. VPN and BGP parameters for your second on-premises VPN device
-Similarly, below lists the parameters you will enter into the second VPN device:
+
+Similarly, the following example lists the parameters you'll enter into the second VPN device:
``` - Site5 ASN : 65050
Similarly, below lists the parameters you will enter into the second VPN device:
- eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed ```
-Once the connection (tunnels) are established, you will have dual redundant VPN devices and tunnels connecting your on-premises network and Azure:
+Once the connection (tunnels) are established, you'll have dual redundant VPN devices and tunnels connecting your on-premises network and Azure:
-![dual-redundancy-crossprem](./media/vpn-gateway-activeactive-rm-powershell/dual-redundancy.png)
## <a name ="aav2v"></a>Part 3 - Establish an active-active VNet-to-VNet connection
-This section creates an active-active VNet-to-VNet connection with BGP.
-The instructions below continue from the previous steps listed above. You must complete [Part 1](#aagateway) to create and configure TestVNet1 and the VPN Gateway with BGP.
+This section creates an active-active VNet-to-VNet connection with BGP. The following instructions continue from the previous steps. You must complete [Part 1](#aagateway) to create and configure TestVNet1 and the VPN Gateway with BGP.
### Step 1 - Create TestVNet2 and the VPN gateway
-It is important to make sure that the IP address space of the new virtual network, TestVNet2, does not overlap with any of your VNet ranges.
-In this example, the virtual networks belong to the same subscription. You can set up VNet-to-VNet connections between different subscriptions; please refer to [Configure a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) to learn more details. Make sure you add the "-EnableBgp $True" when creating the connections to enable BGP.
+It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges.
+
+In this example, the virtual networks belong to the same subscription. You can set up VNet-to-VNet connections between different subscriptions; refer to [Configure a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) to learn more details. Make sure you add the "-EnableBgp $True" when creating the connections to enable BGP.
#### 1. Declare your variables+ Be sure to replace the values with the ones that you want to use for your configuration. ```azurepowershell-interactive
New-AzVirtualNetwork -Name $VNetName2 -ResourceGroupName $RG2 -Location $Locatio
``` #### 3. Create the active-active VPN gateway for TestVNet2
-Request two public IP addresses to be allocated to the gateway you will create for your VNet. You'll also define the subnet and IP configurations required.
+
+Request two public IP addresses to be allocated to the gateway you'll create for your VNet. You'll also define the subnet and IP configurations required.
```azurepowershell-interactive $gw2pip1 = New-AzPublicIpAddress -Name $GW2IPName1 -ResourceGroupName $RG2 -Location $Location2 -AllocationMethod Dynamic
$gw2ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GW2IPconf1 -Subnet $sub
$gw2ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name $GW2IPconf2 -Subnet $subnet2 -PublicIpAddress $gw2pip2 ```
-Create the VPN gateway with the AS number and the "EnableActiveActiveFeature" flag. Note that you must override the default ASN on your Azure VPN gateways. The ASNs for the connected VNets must be different to enable BGP and transit routing.
+Create the VPN gateway with the AS number and the "EnableActiveActiveFeature" flag. You must override the default ASN on your Azure VPN gateways. The ASNs for the connected VNets must be different to enable BGP and transit routing.
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gw2ipconf1,$gw2ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN -EnableActiveActiveFeature ``` ### Step 2 - Connect the TestVNet1 and TestVNet2 gateways+ In this example, both gateways are in the same subscription. You can complete this step in the same PowerShell session. #### 1. Get both gateways
-Make sure you log in and connect to Subscription 1.
+
+Make sure you sign in and connect to Subscription 1.
```azurepowershell-interactive $vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet2gw = Get-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2
``` #### 2. Create both connections
-In this step, you will create the connection from TestVNet1 to TestVNet2, and the connection from TestVNet2 to TestVNet1.
+
+In this step, you create the connection from TestVNet1 to TestVNet2, and the connection from TestVNet2 to TestVNet1.
```azurepowershell-interactive New-AzVirtualNetworkGatewayConnection -Name $Connection12 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet2gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' -EnableBgp $True
New-AzVirtualNetworkGatewayConnection -Name $Connection21 -ResourceGroupName $RG
> [!IMPORTANT] > Be sure to enable BGP for BOTH connections.
->
->
-After completing these steps, the connection will be establish in a few minutes, and the BGP peering session will be up once the VNet-to-VNet connection is completed with dual redundancy:
+After completing these steps, the connection will be established in a few minutes, and the BGP peering session will be up once the VNet-to-VNet connection is completed with dual redundancy:
-![active-active-v2v](./media/vpn-gateway-activeactive-rm-powershell/vnet-to-vnet.png)
## <a name ="aaupdate"></a>Update an existing VPN gateway
Add-AzVirtualNetworkGatewayIpConfig -VirtualNetworkGateway $gw -Name $GWIPconf2
#### 3. Enable active-active mode and update the gateway
-In this step, you enable active-active mode and update the gateway. In the example, the VPN gateway is currently using a legacy Standard SKU. However, active-active does not support the Standard SKU. To resize the legacy SKU to one that is supported (in this case, HighPerformance), you simply specify the supported legacy SKU that you want to use.
+In this step, you enable active-active mode and update the gateway. In the example, the VPN gateway is currently using a legacy Standard SKU. However, active-active doesn't support the Standard SKU. To resize the legacy SKU to one that is supported (in this case, HighPerformance), you simply specify the supported legacy SKU that you want to use.
* You can't change a legacy SKU to one of the new SKUs using this step. You can only resize a legacy SKU to another supported legacy SKU. For example, you can't change the SKU from Standard to VpnGw1 (even though VpnGw1 is supported for active-active) because Standard is a legacy SKU and VpnGw1 is a current SKU. For more information about resizing and migrating SKUs, see [Gateway SKUs](vpn-gateway-about-vpngateways.md#gwsku). * If you want to resize a current SKU, for example VpnGw1 to VpnGw3, you can do so using this step because the SKUs are in the same SKU family. To do so, you would use the value: ```-GatewaySku VpnGw3```
-When you are using this in your environment, if you don't need to resize the gateway, you won't need to specify the -GatewaySku. Notice that in this step, you must set the gateway object in PowerShell to trigger the actual update. This update can take 30 to 45 minutes, even if you are not resizing your gateway.
+When you're using this in your environment, if you don't need to resize the gateway, you won't need to specify the -GatewaySku. Notice that in this step, you must set the gateway object in PowerShell to trigger the actual update. This update can take 30 to 45 minutes, even if you aren't resizing your gateway.
```azurepowershell-interactive Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -EnableActiveActiveFeature -GatewaySku HighPerformance ``` ### Change an active-active gateway to an active-standby gateway+ #### 1. Declare your variables Replace the following parameters used for the examples with the settings that you require for your own configuration, then declare these variables.
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw -DisableActiveActiveFeatu
This update can take up to 30 to 45 minutes. ## Next steps+ Once your connection is complete, you can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-machines%2fwindows%2ftoc.json) for steps.
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md
description: Learn about highly available configuration options using Azure VPN
Previously updated : 05/27/2021 Last updated : 04/24/2023
This article provides an overview of Highly Available configuration options for
## <a name = "activestandby"></a>About VPN gateway redundancy
-Every Azure VPN gateway consists of two instances in an active-standby configuration. For any planned maintenance or unplanned disruption that happens to the active instance, the standby instance would take over (failover) automatically, and resume the S2S VPN or VNet-to-VNet connections. The switch over will cause a brief interruption. For planned maintenance, the connectivity should be restored within 10 to 15 seconds. For unplanned issues, the connection recovery will be longer, about 1 to 3 minutes in the worst case. For P2S VPN client connections to the gateway, the P2S connections will be disconnected and the users will need to reconnect from the client machines.
+Every Azure VPN gateway consists of two instances in an active-standby configuration. For any planned maintenance or unplanned disruption that happens to the active instance, the standby instance would take over (failover) automatically, and resume the S2S VPN or VNet-to-VNet connections. The switch over will cause a brief interruption. For planned maintenance, the connectivity should be restored within 10 to 15 seconds. For unplanned issues, the connection recovery is longer, about 1 to 3 minutes in the worst case. For P2S VPN client connections to the gateway, the P2S connections are disconnected and the users need to reconnect from the client machines.
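If you want to confirm which mode your gateway is using, one way (a sketch with hypothetical names) is to inspect the gateway object's `ActiveActive` property:

```azurepowershell-interactive
# Hypothetical gateway and resource group names.
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"
$gw.ActiveActive   # $false indicates the default active-standby configuration
```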
## Highly Available cross-premises
To provide better availability for your cross premises connections, there are a
You can use multiple VPN devices from your on-premises network to connect to your Azure VPN gateway, as shown in the following diagram:

This configuration provides multiple active tunnels from the same Azure VPN gateway to your on-premises devices in the same location. There are some requirements and constraints:
In this configuration, the Azure VPN gateway is still in active-standby mode, so
### Active-active VPN gateways
-You can create an Azure VPN gateway in an active-active configuration, where both instances of the gateway VMs will establish S2S VPN tunnels to your on-premises VPN device, as shown the following diagram:
+You can create an Azure VPN gateway in an active-active configuration, where both instances of the gateway VMs establish S2S VPN tunnels to your on-premises VPN device, as shown in the following diagram:
-In this configuration, each Azure gateway instance will have a unique public IP address, and each will establish an IPsec/IKE S2S VPN tunnel to your on-premises VPN device specified in your local network gateway and connection. Note that both VPN tunnels are actually part of the same connection. You will still need to configure your on-premises VPN device to accept or establish two S2S VPN tunnels to those two Azure VPN gateway public IP addresses.
+In this configuration, each Azure gateway instance has a unique public IP address, and each establishes an IPsec/IKE S2S VPN tunnel to your on-premises VPN device specified in your local network gateway and connection. Note that both VPN tunnels are actually part of the same connection. You'll still need to configure your on-premises VPN device to accept or establish two S2S VPN tunnels to those two Azure VPN gateway public IP addresses.
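As a sketch of where the two public IP addresses come from (all resource names here are hypothetical), an active-active gateway is created with two IP configurations, each bound to its own public IP:

```azurepowershell-interactive
# Two public IPs, one per gateway instance (hypothetical names throughout).
$pip1 = New-AzPublicIpAddress -Name "GWpip1" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard
$pip2 = New-AzPublicIpAddress -Name "GWpip2" -ResourceGroupName "TestRG1" -Location "EastUS" -AllocationMethod Static -Sku Standard

$vnet   = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

$ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf1" -Subnet $subnet -PublicIpAddress $pip1
$ipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf2" -Subnet $subnet -PublicIpAddress $pip2

# Passing both IP configurations with -EnableActiveActiveFeature creates the
# gateway in active-active mode.
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" -Location "EastUS" `
  -IpConfigurations $ipconf1,$ipconf2 -GatewayType Vpn -VpnType RouteBased `
  -GatewaySku VpnGw1 -EnableActiveActiveFeature
```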
Because the Azure gateway instances are in active-active configuration, the traffic from your Azure virtual network to your on-premises network is routed through both tunnels simultaneously, even if your on-premises VPN device favors one tunnel over the other. For a single TCP or UDP flow, Azure attempts to use the same tunnel when sending packets to your on-premises network. However, your on-premises network could use a different tunnel to send packets to Azure.
When a planned maintenance or unplanned event happens to one gateway instance, t
### Dual-redundancy: active-active VPN gateways for both Azure and on-premises networks
-The most reliable option is to combine the active-active gateways on both your network and Azure, as shown in the diagram below.
+The most reliable option is to combine the active-active gateways on both your network and Azure, as shown in the following diagram.
Here you create and set up the Azure VPN gateway in an active-active configuration, and create two local network gateways and two connections for your two on-premises VPN devices as described above. The result is full mesh connectivity of 4 IPsec tunnels between your Azure virtual network and your on-premises network.
-All gateways and tunnels are active from the Azure side, so the traffic will be spread among all 4 tunnels simultaneously, although each TCP or UDP flow will again follow the same tunnel or path from the Azure side. Even though by spreading the traffic, you may see slightly better throughput over the IPsec tunnels, the primary goal of this configuration is for high availability. And due to the statistical nature of the spreading, it is difficult to provide the measurement on how different application traffic conditions will affect the aggregate throughput.
+All gateways and tunnels are active from the Azure side, so the traffic is spread among all 4 tunnels simultaneously, although each TCP or UDP flow will again follow the same tunnel or path from the Azure side. Even though spreading the traffic may give you slightly better throughput over the IPsec tunnels, the primary goal of this configuration is high availability. And because of the statistical nature of the spreading, it's difficult to measure how different application traffic conditions affect the aggregate throughput.
-This topology will require two local network gateways and two connections to support the pair of on-premises VPN devices, and BGP is required to allow the two connections to the same on-premises network. These requirements are the same as the [above](#activeactiveonprem).
+This topology requires two local network gateways and two connections to support the pair of on-premises VPN devices, and BGP is required to allow the two connections to the same on-premises network. These requirements are the same as the [above](#activeactiveonprem).
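A sketch of those requirements in PowerShell, using hypothetical names, addresses, and ASNs, and assuming `$gw` is the active-active gateway object:

```azurepowershell-interactive
# One local network gateway per on-premises device (hypothetical values throughout).
$lng1 = New-AzLocalNetworkGateway -Name "LNG1" -ResourceGroupName "TestRG1" -Location "EastUS" `
  -GatewayIpAddress "203.0.113.1" -Asn 65010 -BgpPeeringAddress "10.52.255.253"
$lng2 = New-AzLocalNetworkGateway -Name "LNG2" -ResourceGroupName "TestRG1" -Location "EastUS" `
  -GatewayIpAddress "203.0.113.2" -Asn 65010 -BgpPeeringAddress "10.52.255.254"

# One connection per device, with BGP enabled on both so that both connections
# can advertise routes to the same on-premises network.
New-AzVirtualNetworkGatewayConnection -Name "Conn1" -ResourceGroupName "TestRG1" -Location "EastUS" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng1 -ConnectionType IPsec -SharedKey "SharedKey1" -EnableBgp $true
New-AzVirtualNetworkGatewayConnection -Name "Conn2" -ResourceGroupName "TestRG1" -Location "EastUS" `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $lng2 -ConnectionType IPsec -SharedKey "SharedKey2" -EnableBgp $true
```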
-## Highly Available VNet-to-VNet
+## Highly Available VNet-to-VNet
-The same active-active configuration can also apply to Azure VNet-to-VNet connections. You can create active-active VPN gateways for both virtual networks, and connect them together to form the same full mesh connectivity of 4 tunnels between the two VNets, as shown in the diagram below:
+The same active-active configuration can also apply to Azure VNet-to-VNet connections. You can create active-active VPN gateways for both virtual networks, and connect them together to form the same full mesh connectivity of 4 tunnels between the two VNets, as shown in the following diagram:
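A sketch of the two connections, one in each direction, with hypothetical gateway names; note the `Vnet2Vnet` connection type, and that BGP is omitted here because no transit routing is assumed:

```azurepowershell-interactive
# Hypothetical names: one connection in each direction between the two gateways.
$gw1 = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "RG1"
$gw2 = Get-AzVirtualNetworkGateway -Name "VNet2GW" -ResourceGroupName "RG2"

New-AzVirtualNetworkGatewayConnection -Name "VNet1toVNet2" -ResourceGroupName "RG1" -Location "EastUS" `
  -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -ConnectionType Vnet2Vnet -SharedKey "SharedKey"
New-AzVirtualNetworkGatewayConnection -Name "VNet2toVNet1" -ResourceGroupName "RG2" -Location "WestUS" `
  -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -ConnectionType Vnet2Vnet -SharedKey "SharedKey"
```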
This ensures there's always a pair of tunnels between the two virtual networks for any planned maintenance events, providing even better availability. Even though the same topology for cross-premises connectivity requires two connections, the VNet-to-VNet topology shown above needs only one connection for each gateway. Additionally, BGP is optional unless transit routing over the VNet-to-VNet connection is required.

## Next steps
-See [Configure active-active gateways](active-active-portal.md) using the [Azure Portal](active-active-portal.md) or [PowerShell](vpn-gateway-activeactive-rm-powershell.md).
+See [Configure active-active gateways](active-active-portal.md) using the [Azure portal](active-active-portal.md) or [PowerShell](vpn-gateway-activeactive-rm-powershell.md).