Updates from: 10/06/2023 01:15:31
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Validate User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md
Previously updated : 01/30/2023 Last updated : 10/05/2023
While the *Predicates* define the validation to check against a claim type, the
</ClaimType> ```
-1. Add a `Predicates` element as a child of `BuildingBlocks` section by using the following code:
+1. Add a `Predicates` element as a child of the `BuildingBlocks` section by using the following code. You add the `Predicates` element below the `ClaimsSchema` element:
```xml <Predicates>
While the *Predicates* define the validation to check against a claim type, the
We've defined several rules, which, when put together, describe an acceptable password. Next, you can group predicates to form a set of password policies that you can use in your policy.
-1. Add a `PredicateValidations` element as a child of `BuildingBlocks` section by using the following code:
+1. Add a `PredicateValidations` element as a child of the `BuildingBlocks` section by using the following code. You add the `PredicateValidations` element below the `Predicates` element:
```xml <PredicateValidations>
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
Follow these guidelines when you customize the interface of your application usi
- Azure AD B2C settings can be read from the `window.SETTINGS` and `window.CONTENT` objects, such as the current UI language. Don't change the value of these objects.
- To customize the Azure AD B2C error message, use localization in a policy.
- If something can be achieved by using a policy, that's generally the recommended approach.
+- We recommend that you use our existing UI controls, such as buttons, rather than hiding them and implementing click bindings on your own UI controls. This approach ensures that your user experience continues to function properly even when we release new page contract upgrades.
## JavaScript samples
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
Previously updated : 09/15/2023 Last updated : 10/05/2023 # Common errors and troubleshooting steps for Microsoft Entra Domain Services
If one or more users in your Microsoft Entra tenant can't sign in to the managed
* You've deployed, or updated to, the [latest recommended release of Microsoft Entra Connect](https://www.microsoft.com/download/details.aspx?id=47594).
* You've configured Microsoft Entra Connect to [perform a full synchronization][hybrid-phs].
* Depending on the size of your directory, it may take a while for user accounts and credential hashes to be available in the managed domain. Make sure you wait long enough before trying to authenticate against the managed domain.
- * If the issue persists after verifying the previous steps, try restarting the *Microsoft Entra ID Sync Service*. From your Microsoft Entra Connect server, open a command prompt, then run the following commands:
+ * If the issue persists after verifying the previous steps, try restarting the *Azure AD Sync Service*. From your Microsoft Entra Connect server, open a command prompt, then run the following commands:
```console net stop 'Microsoft Azure AD Sync'
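As an alternative to the `net stop`/`net start` pair, a minimal PowerShell sketch (this assumes the default service name `ADSync`, whose display name is *Microsoft Azure AD Sync*):

```powershell
# Restart the Microsoft Entra Connect sync service in one step.
Restart-Service -Name ADSync
```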
active-directory Certificate Based Authentication Federation Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md
Previously updated : 09/30/2022 Last updated : 08/14/2023
As a best practice, you should update your organization's AD FS error pages with
For more information, see [Customizing the AD FS Sign-in Pages](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn280950(v=ws.11)).
-Office apps with modern authentication enabled send '*prompt=login*' to Microsoft Entra ID in their request. By default, Microsoft Entra ID translates '*prompt=login*' in the request to AD FS as '*wauth=usernamepassworduri*' (asks AD FS to do U/P Auth) and '*wfresh=0*' (asks AD FS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, you need to modify the default Microsoft Entra behavior. Set the '*PromptLoginBehavior*' in your federated domain settings to '*Disabled*'.
-You can use the [MSOLDomainFederationSettings](/powershell/module/msonline/set-msoldomainfederationsettings) cmdlet to perform this task:
+Office apps with modern authentication enabled send '*prompt=login*' to Azure AD in their request. By default, Azure AD translates '*prompt=login*' in the request to AD FS as '*wauth=usernamepassworduri*' (asks AD FS to do U/P Auth) and '*wfresh=0*' (asks AD FS to ignore SSO state and do a fresh authentication). If you want to enable certificate-based authentication for these apps, you need to modify the default Azure AD behavior. Set the '*PromptLoginBehavior*' in your federated domain settings to '*Disabled*'.
+You can use the `Update-MgDomainFederationConfiguration` cmdlet to perform this task:
```powershell
-Set-MSOLDomainFederationSettings -domainname <domain> -PromptLoginBehavior Disabled
+Update-MgDomainFederationConfiguration -DomainId <domain> -InternalDomainFederationId <federation-configuration-id> -PromptLoginBehavior Disabled
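# A hedged lookup sketch: Update-MgDomainFederationConfiguration also requires the
# federation configuration ID. You can read it, plus the current PromptLoginBehavior,
# first (this assumes the Microsoft.Graph.Identity.DirectoryManagement module):
Get-MgDomainFederationConfiguration -DomainId <domain> | Select-Object Id, PromptLoginBehavior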
``` ## Exchange ActiveSync clients support
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
The following table outlines the security considerations for the available authe
| Certificate-based authentication | High | High | High |
| OATH hardware tokens (preview) | Medium | Medium | High |
| OATH software tokens | Medium | Medium | High |
+| Temporary Access Pass (TAP) | Medium | High | High |
| SMS | Medium | High | Medium |
| Voice | Medium | Medium | Medium |
| Password | Low | High | High |
The following table outlines when an authentication method can be used during a
| Certificate-based authentication | Yes | No |
| OATH hardware tokens (preview) | No | MFA and SSPR |
| OATH software tokens | No | MFA and SSPR |
+| Temporary Access Pass (TAP) | Yes | MFA |
| SMS | Yes | MFA and SSPR |
| Voice call | No | MFA and SSPR |
-| Password | Yes | |
+| Password | Yes | No |
> \* Windows Hello for Business, by itself, does not serve as a step-up MFA credential (for example, an MFA challenge from sign-in frequency or a SAML request containing forceAuthn=true). Windows Hello for Business can serve as a step-up MFA credential by being used in FIDO2 authentication. This requires users to be enabled for FIDO2 authentication to work successfully.
To learn more about how each authentication method works, see the following sepa
* [Certificate-based authentication](concept-certificate-based-authentication.md)
* [OATH hardware tokens (preview)](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview)
* [OATH software tokens](concept-authentication-oath-tokens.md#oath-software-tokens)
+* [Temporary Access Pass (TAP)](howto-authentication-temporary-access-pass.md)
* [SMS sign-in](howto-authentication-sms-signin.md) and [verification](concept-authentication-phone-options.md#mobile-phone-verification)
* [Voice call verification](concept-authentication-phone-options.md)
* Password
To review what authentication methods are in use, see [Microsoft Entra multifact
<!-- INTERNAL LINKS -->
[tutorial-sspr]: tutorial-enable-sspr.md
[tutorial-azure-mfa]: tutorial-enable-azure-mfa.md
+[tutorial-tap]: howto-authentication-temporary-access-pass.md
[concept-sspr]: concept-sspr-howitworks.md
[concept-mfa]: concept-mfa-howitworks.md
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
Tenant admins can use the following steps to update certificate user IDs for a u
Authorized callers can run Microsoft Graph queries to find all the users with a given certificateUserId value. On the Microsoft Graph [user](/graph/api/resources/user) object, the collection of certificateUserIds is stored in the **authorizationInfo** property.
-To retrieve all user objects that have the value 'bob@contoso.com' in certificateUserIds:
+To retrieve certificateUserIds of all user objects:
```msgraph-interactive
-GET https://graph.microsoft.com/v1.0/users?$filter=authorizationInfo/certificateUserIds/any(x:x eq 'bob@contoso.com')&$count=true
+GET https://graph.microsoft.com/v1.0/users?$select=authorizationinfo
+ConsistencyLevel: eventual
+```
+To retrieve certificateUserIds for a given user by the user's ObjectId:
+
+```msgraph-interactive
+GET https://graph.microsoft.com/v1.0/users/{user-object-id}?$select=authorizationinfo
+ConsistencyLevel: eventual
+```
+To retrieve the user object with a specific value in certificateUserIds:
+
+```msgraph-interactive
+GET https://graph.microsoft.com/v1.0/users?$select=authorizationinfo&$filter=authorizationInfo/certificateUserIds/any(x:x eq 'x509:<PN>user@contoso.com')&$count=true
ConsistencyLevel: eventual
```
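If you prefer the Microsoft Graph PowerShell SDK over raw REST calls, a minimal sketch of the filtered query might look like the following (the `x509:<PN>` value is illustrative, and `User.Read.All` is assumed to be a sufficient scope):

```powershell
# Advanced queries against authorizationInfo require eventual consistency and a count.
Connect-MgGraph -Scopes "User.Read.All"
Get-MgUser -Filter "authorizationInfo/certificateUserIds/any(x:x eq 'x509:<PN>user@contoso.com')" `
    -ConsistencyLevel eventual -CountVariable count `
    -Property Id, UserPrincipalName, AuthorizationInfo
```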
Run a PATCH request to update the certificateUserIds for a given user.
#### Request body:

```http
-PATCH https://graph.microsoft.com/v1.0/users/{id}
+PATCH https://graph.microsoft.com/v1.0/users/{user-object-id}
Content-Type: application/json
{
  "authorizationInfo": {
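For reference, a hedged PowerShell sketch of the same PATCH call (using `Invoke-MgGraphRequest` from the Microsoft Graph PowerShell SDK; the certificateUserIds value is illustrative):

```powershell
# Sketch only: replace certificateUserIds on a user via a raw Graph PATCH request.
$body = @{
    authorizationInfo = @{
        certificateUserIds = @("x509:<PN>user@contoso.com")
    }
}
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/users/{user-object-id}" `
    -Body $body
```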
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
The following tables show which transports are supported for each platform. Supp
| Chrome | &#10060; | &#10060; | &#10060; |
| Firefox | &#10060; | &#10060; | &#10060; |
-<sup>1</sup>Security key biometrics or PIN for user verficiation isn't currently supported on Android by Google. Microsoft Entra ID requires user verification for all FIDO2 authentications.
+<sup>1</sup>Security key biometrics or PIN for user verification are currently supported on Android by Google. Microsoft Entra ID requires user verification for all FIDO2 authentications.
## Minimum browser version
active-directory Howto Authentication Passwordless Phone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md
Previously updated : 09/13/2023 Last updated : 10/05/2023
To use passwordless authentication in Microsoft Entra ID, first enable the combi
Microsoft Entra ID lets you choose which authentication methods can be used during the sign-in process. Users then register for the methods they'd like to use. The **Microsoft Authenticator** authentication method policy manages both the traditional push MFA method and the passwordless authentication method. > [!NOTE]
-> If you enabled Microsoft Authenticator passwordless sign-in using Azure AD PowerShell, it was enabled for your entire directory. If you enable using this new method, it supersedes the PowerShell policy. We recommend you enable for all users in your tenant via the new **Authentication Methods** menu, otherwise users who aren't in the new policy can't sign in without a password.
+> If you enabled Microsoft Authenticator passwordless sign-in using PowerShell, it was enabled for your entire directory. If you enable using this new method, it supersedes the PowerShell policy. We recommend you enable for all users in your tenant via the new **Authentication Methods** menu, otherwise users who aren't in the new policy can't sign in without a password.
To enable the authentication method for passwordless phone sign-in, complete the following steps:
active-directory Howto Mfa Userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
If you're assigned the *Authentication Administrator* role, you can require user
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator).
1. Browse to **Identity** > **Users** > **All users**.
1. Choose the user you wish to perform an action on and select **Authentication methods**. Then, at the top of the window, choose one of the following options for the user:
- - **Reset Password** resets the user's password and assigns a temporary password that must be changed on the next sign-in.
- - **Require Re-register MFA** deactivates the user's hardware OATH tokens and deletes the following authentication methods from this user: phone numbers, Microsoft Authenticator apps and software OATH tokens. If needed, the user is requested to set up a new MFA authentication method the next time they sign in.
- - **Revoke MFA Sessions** clears the user's remembered MFA sessions and requires them to perform MFA the next time it's required by the policy on the device.
-
- :::image type="content" source="media/howto-mfa-userdevicesettings/manage-authentication-methods-in-azure.png" alt-text="Manage authentication methods from the Microsoft Entra admin center":::
+ - **Reset password** resets the user's password and assigns a temporary password that must be changed on the next sign-in.
+ - **Require re-register MFA** deactivates the user's hardware OATH tokens and deletes the following authentication methods from this user: phone numbers, Microsoft Authenticator apps and software OATH tokens. If needed, the user is requested to set up a new MFA authentication method the next time they sign in.
+ - **Revoke MFA sessions** clears the user's remembered MFA sessions and requires them to perform MFA the next time it's required by the policy on the device.
+ :::image type="content" source="media/howto-mfa-userdevicesettings/manage-authentication-methods-in-azure.png" alt-text="Manage authentication methods from the Microsoft Entra admin center":::
-## Delete users' existing app passwords
+ ## Delete users' existing app passwords
-For users that have defined app passwords, administrators can also choose to delete these passwords, causing legacy authentication to fail in those applications. These actions may be necessary if you need to provide assistance to a user, or need to reset their authentication methods. Non-browser apps that were associated with these app passwords will stop working until a new app password is created.
+ For users that have defined app passwords, administrators can also choose to delete these passwords, causing legacy authentication to fail in those applications. These actions may be necessary if you need to provide assistance to a user, or need to reset their authentication methods. Non-browser apps that were associated with these app passwords will stop working until a new app password is created.
-To delete a user's app passwords, complete the following steps:
+ To delete a user's app passwords, complete the following steps:
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Authentication Administrator](../roles/permissions-reference.md#authentication-administrator).
1. Browse to **Identity** > **Users** > **All users**.
This article showed you how to configure individual user settings. To configure
If your users need help, see the [User guide for Microsoft Entra multifactor authentication](https://support.microsoft.com/account-billing/how-to-use-the-microsoft-authenticator-app-9783c865-0308-42fb-a519-8cf666fe0acc).
+
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Provide
- To enable verbose logging, create a `REG_DWORD: "EnableLogging"`, and set it to 1.
- To disable verbose logging, change the `REG_DWORD: "EnableLogging"` to 0.
+- Review the debug logging in the Application event log under source AADPasswordResetCredentialProvider.
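A minimal PowerShell sketch for toggling this value (the full Credential Providers key path is truncated in the excerpt above, so `<credential-provider-subkey>` is a stand-in for the real subkey name):

```powershell
# Sketch only: <credential-provider-subkey> stands in for the truncated key name.
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\<credential-provider-subkey>'

# Create or update EnableLogging: 1 enables verbose logging, 0 disables it.
New-ItemProperty -Path $key -Name 'EnableLogging' -PropertyType DWord -Value 1 -Force
```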
## What do users see
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Previously updated : 01/29/2023 Last updated : 10/05/2023
To resolve connectivity issues or other transient problems with the service, com
1. As an administrator on the server that runs Microsoft Entra Connect, select **Start**. 1. Enter *services.msc* in the search field and select **Enter**.
-1. Look for the *Microsoft Entra ID Sync* entry.
+1. Look for the *Azure AD Sync* entry.
1. Right-click the service entry, select **Restart**, and wait for the operation to finish.

   :::image type="content" source="./media/troubleshoot-sspr-writeback/service-restart.png" alt-text="Restart the Azure AD Sync service using the GUI" border="false":::
active-directory How To App Protection Policy Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md
Previously updated : 09/05/2023 Last updated : 10/04/2023
# Require an app protection policy on Windows devices (preview)
-App protection policies apply mobile application management (MAM) to specific applications on a device. These policies allow for securing data within an application in support of scenarios like bring your own device (BYOD). In the preview, we support applying policy to the Microsoft Edge browser on Windows 11 devices.
+App protection policies apply [mobile application management (MAM)](/mem/intune/apps/app-management#mobile-application-management-mam-basics) to specific applications on a device. These policies allow for securing data within an application in support of scenarios like bring your own device (BYOD). In the preview, we support applying policy to the Microsoft Edge browser on Windows 11 devices.
![Screenshot of a browser requiring the user to sign in to their Microsoft Edge profile to access an application.](./media/how-to-app-protection-policy-windows/browser-sign-in-with-edge-profile.png)

## Prerequisites
-Customers interested in the public preview need to opt in using the [MAM for Windows Public Preview Sign Up Form](https://aka.ms/MAMforWindowsPublic).
+- [Windows 11 Version 22H2 (OS build 22621)](/windows/release-health/windows11-release-information#windows-11-current-versions) or newer.
+- [Configured app protection policy targeting Windows devices](/mem/intune/apps/app-protection-policy-settings-windows).
+- Currently unsupported in sovereign clouds.
## User exclusions

[!INCLUDE [active-directory-policy-exclusions](../../../includes/active-directory-policy-exclude-user.md)]
The following policy is put in to [Report-only mode](howto-conditional-access-in
### Require app protection policy for Windows devices
-The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows). The following policy includes multiple controls allowing devices to either use app protection policies for mobile application management (MAM) or be managed and compliant with mobile device management (MDM) policies.
+The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows). The following policy includes multiple controls allowing devices to either use app protection policies for mobile application management (MAM) or be managed and compliant with mobile device management (MDM) policies.
+
+> [!TIP]
+> App protection policies (MAM) support unmanaged devices:
+>
+> - If a device is already managed through mobile device management (MDM), then Intune MAM enrollment is blocked, and app protection policy settings aren't applied.
+> - If a device becomes managed after MAM enrollment, app protection policy settings are no longer applied.
1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator).
1. Browse to **Protection** > **Conditional Access**.
The following steps help create a Conditional Access policy requiring an app pro
1. Under **Include**, select **All users**.
1. Under **Exclude**, select **Users and groups** and choose at least your organization's emergency access or break-glass accounts.
1. Under **Target resources** > **Cloud apps** > **Include**, select **Office 365**.
+ > [!WARNING]
+ > Selecting **All apps** prevents users from signing in.
1. Under **Conditions**:
- 1. **Device platforms**, set **Configure** to **Yes**.
+ 1. Under **Device platforms**, set **Configure** to **Yes**.
1. Under **Include**, **Select device platforms**.
1. Choose **Windows** only.
1. Select **Done**.
- 1. **Client apps**, set **Configure** to **Yes**.
+ 1. Under **Client apps**, set **Configure** to **Yes**.
1. Select **Browser** only.
1. Under **Access controls** > **Grant**, select **Grant access**.
1. Select **Require app protection policy** and **Require device to be marked as compliant**.
After administrators confirm the settings using [report-only mode](howto-conditi
> [!TIP] > Organizations should also deploy a policy that [blocks access from unsupported or unknown device platforms](howto-policy-unknown-unsupported-device.md) along with this policy.
+In organizations with existing Conditional Access policies that target:
+
+- The **All cloud apps** resource.
+- The **Mobile apps and desktop clients** condition.
+- The **Require app protection policy** or **Block access** grant control.
+
+End users are unable to enroll their Windows device in MAM without the following policy changes.
+
+1. Register the **Microsoft Edge Auth** service principal in your tenant using the command `New-MgServicePrincipal -AppId f2d19332-a09d-48c8-a53b-c49ae5502dfc`.
+1. Add an exclusion for **Microsoft Edge Auth** to your existing policy targeting **All cloud apps**.
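A hedged sketch of the registration step above, assuming the Microsoft Graph PowerShell SDK and that the `Application.ReadWrite.All` scope is sufficient to create the service principal:

```powershell
# Register the Microsoft Edge Auth service principal; the AppId comes from this article.
Connect-MgGraph -Scopes "Application.ReadWrite.All"
New-MgServicePrincipal -AppId "f2d19332-a09d-48c8-a53b-c49ae5502dfc"
```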
+
## Sign in to Windows devices

When users attempt to sign in to a site that is protected by an app protection policy for the first time, they're prompted: To access your service, app or website, you may need to sign in to Microsoft Edge using `username@domain.com` or register your device with `organization` if you're already signed in.
This process opens a window offering to allow Windows to remember your account a
![Screenshot showing the stay signed in to all your apps window. Uncheck the allow my organization to manage my device checkbox.](./media/how-to-app-protection-policy-windows/stay-signed-in-to-all-your-apps.png)
-After selecting **OK**, you may see a progress window while policy is applied. After a few moments, you should see a window saying "you're all set", app protection policies are applied.
+After selecting **OK**, you may see a progress window while policy is applied. After a few moments, you should see a window saying **You're all set**, app protection policies are applied.
## Troubleshooting
To resolve these possible scenarios:
- Wait a few minutes and try again in a new tab. - Contact your administrator to check that Microsoft Intune MAM policies are applying to your account correctly.
+#### All apps selected
+
+If your policy for Windows devices targets **All apps**, your users aren't able to sign in. Your policy should only target **Office 365**.
+ ### Existing account
-If there's a pre-existing, unregistered account, like `user@contoso.com` in Microsoft Edge, or if a user signs in without registering using the Heads Up Page, then the account isn't properly enrolled in MAM. This configuration blocks the user from being properly enrolled in MAM. This is a known issue.
+There's a known issue: if a pre-existing, unregistered account, like `user@contoso.com`, is present in Microsoft Edge, or if a user signs in without registering using the Heads Up Page, the account isn't properly enrolled in MAM.
## Next steps
active-directory Howto Policy Approved App Or App Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md
Organizations can choose to deploy this policy using the steps outlined below or
After administrators confirm the settings using [report-only mode](howto-conditional-access-insights-reporting.md), they can move the **Enable policy** toggle from **Report-only** to **On**.
+> [!TIP]
+> Organizations should also deploy a policy that [blocks access from unsupported or unknown device platforms](howto-policy-unknown-unsupported-device.md) along with this policy.
### Block Exchange ActiveSync on all devices

This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online.
active-directory Apple Sso Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/apple-sso-plugin.md
Use the following information to enable the SSO plug-in by using MDM.
If you use Microsoft Intune as your MDM service, you can use built-in configuration profile settings to enable the Microsoft Enterprise SSO plug-in:
-1. Configure the [SSO app extension](/mem/intune/configuration/device-features-configure#single-sign-on-app-extension) settings of a configuration profile.
+1. Configure the [SSO app plug-in](/mem/intune/configuration/use-enterprise-sso-plug-in-ios-ipados-with-intune) settings of a configuration profile.
1. If the profile isn't already assigned, [assign the profile to a user or device group](/mem/intune/configuration/device-profile-assign). The profile settings that enable the SSO plug-in are automatically applied to the group's devices the next time each device checks in with Intune.
active-directory Tutorial Single Page App React Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-single-page-app-react-sign-in-users.md
-+ Last updated 09/26/2023 #Customer intent: As a React developer, I want to know how to use functional components to add sign in and sign out experiences in my React application.
active-directory Tenant Restrictions V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md
Previously updated : 09/12/2023 Last updated : 10/04/2023
The following table compares the features in each version.
| |Tenant restrictions v1 |Tenant restrictions v2 |
|---|---|---|
|**Policy enforcement** | The corporate proxy enforces the tenant restriction policy in the Microsoft Entra ID control plane. | Options: <br></br>- Universal tenant restrictions in Global Secure Access (preview), which uses policy signaling to tag all traffic, providing both authentication and data plane support on all platforms. <br></br>- Authentication plane-only protection, where the corporate proxy sets tenant restrictions v2 signals on all traffic. <br></br>- Windows device management, where devices are configured to point Microsoft traffic to the tenant restriction policy, and the policy is enforced in the cloud. |
+|**Policy enforcement limitation** | Manage corporate proxies by adding tenants to the Microsoft Entra ID traffic allowlist. The character limit of the header value in Restrict-Access-To-Tenants: `<allowed-tenant-list>` limits the number of tenants that can be added. | Managed by a cloud policy in the cross-tenant access policy. A partner policy is created for each external tenant. Currently, the configuration for all external tenants is contained in one policy with a 25KB size limit. |
|**Malicious tenant requests** | Microsoft Entra ID blocks malicious tenant authentication requests to provide authentication plane protection. | Microsoft Entra ID blocks malicious tenant authentication requests to provide authentication plane protection. |
|**Granularity** | Limited. | Tenant, user, group, and application granularity. (User-level granularity isn't supported with Microsoft Accounts.) |
|**Anonymous access** | Anonymous access to Teams meetings and file sharing is allowed. | Anonymous access to Teams meetings is blocked. Access to anonymously shared resources ("Anyone with the link") is blocked. |
The following table compares the features in each version.
|**Portal support** |No user interface in the Microsoft Entra admin center for configuring the policy. | User interface available in the Microsoft Entra admin center for setting up the cloud policy. |
|**Unsupported apps** | N/A | Block unsupported app use with Microsoft endpoints by using Windows Defender Application Control (WDAC) or Windows Firewall (for example, for Chrome, Firefox, and so on). See [Block Chrome, Firefox and .NET applications like PowerShell](#block-chrome-firefox-and-net-applications-like-powershell). |
-### Migrate tenant restrictions v1 policies to v2
-
-When using tenant restrictions v2 to manage access for your Windows device users, we recommend also configuring your corporate proxy to enforce tenant restrictions v2 to manage other devices and apps in your corporate network. Although configuring tenant restrictions on your corporate proxy doesn't provide data plane protection, it provides authentication plane protection. For details, see [Set up tenant restrictions v2 on your corporate proxy](#option-2-set-up-tenant-restrictions-v2-on-your-corporate-proxy).
### Tenant restrictions vs. inbound and outbound settings
Think of the different cross-tenant access settings this way:
When your users need access to external organizations and apps, we recommend enabling tenant restrictions to block external accounts and use B2B collaboration instead. B2B collaboration gives you the ability to:
-- Use Conditional Access and force multi-factor authentication for B2B collaboration users.
+- Use Conditional Access and force multifactor authentication for B2B collaboration users.
- Manage inbound and outbound access.
- Terminate sessions and credentials when a B2B collaboration user's employment status changes or their credentials are breached.
- Use sign-in logs to view details about the B2B collaboration user.
Universal tenant restrictions v2 as part of [Microsoft Entra Global Secure Acces
### Option 2: Set up tenant restrictions v2 on your corporate proxy
-Tenant restrictions v2 policies can't be directly enforced on non-Windows 10, Windows 11, or Windows Server 2022 devices, such as Mac computers, mobile devices, unsupported Windows applications, and Chrome browsers. To ensure sign-ins are restricted on all devices and apps in your corporate network, configure your corporate proxy to enforce tenant restrictions v2. Although configuring tenant restrictions on your corporate proxy don't provide data plane protection, it does provide authentication plane protection.
+Tenant restrictions v2 policies can't be directly enforced on non-Windows 10, Windows 11, or Windows Server 2022 devices, such as Mac computers, mobile devices, unsupported Windows applications, and Chrome browsers. To ensure sign-ins are restricted on all devices and apps in your corporate network, configure your corporate proxy to enforce tenant restrictions v2. Although configuring tenant restrictions on your corporate proxy doesn't provide data plane protection, it does provide authentication plane protection.
> [!IMPORTANT]
> If you've previously set up tenant restrictions, you'll need to stop sending `restrict-msa` to login.live.com. Otherwise, the new settings will conflict with your existing instructions to the MSA login service.
Tenant restrictions v2 policies can't be directly enforced on non-Windows 10, Wi
This header enforces your tenant restrictions v2 policy on all sign-ins on your network. This header doesn't block anonymous access to Teams meetings, SharePoint files, or other resources that don't require authentication.
+### Migrate tenant restrictions v1 policies to v2
+
+On your corporate proxy, you can move from tenant restrictions v1 to tenant restrictions v2 by changing this tenant restrictions v1 header:
+
+`Restrict-Access-To-Tenants: <allowed-tenant-list>`
+
+to this tenant restrictions v2 header:
+
+`sec-Restrict-Tenant-Access-Policy: <DirectoryID>:<policyGUID>`
+
+where `<DirectoryID>` is your Azure AD tenant ID and `<policyGUID>` is the object ID for your cross-tenant access policy.
+
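If you need to look up the two placeholder values, a minimal Microsoft Graph PowerShell sketch might look like this (assuming `Policy.Read.All` is sufficient to read the cross-tenant access policy):

```powershell
Connect-MgGraph -Scopes "Policy.Read.All"

# <DirectoryID>: your tenant (directory) ID.
(Get-MgOrganization).Id

# <policyGUID>: the object ID of your cross-tenant access policy.
(Get-MgPolicyCrossTenantAccessPolicy).Id
```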
+#### Tenant restrictions v1 settings on the corporate proxy
+
+The following example shows an existing tenant restrictions V1 setting on the corporate proxy:
+
+`Restrict-Access-To-Tenants: contoso.com, fabrikam.com, dogfood.com`
+
+`sec-Restrict-Tenant-Access-Policy: restrict-msa`
+
+[Learn more](../manage-apps/tenant-restrictions.md) about tenant restrictions v1.
+
+#### Tenant restrictions v2 settings on the corporate proxy
+
+You can configure the corporate proxy to enable client-side tagging of the tenant restrictions V2 header by using the following corporate proxy setting:
+
+`sec-Restrict-Tenant-Access-Policy: <DirectoryID>:<policyGUID>`
+
+where `<DirectoryID>` is your Azure AD tenant ID and `<policyGUID>` is the object ID for your cross-tenant access policy. For details, see [Set up tenant restrictions v2 on your corporate proxy](#option-2-set-up-tenant-restrictions-v2-on-your-corporate-proxy).
+
+You can configure server-side cloud tenant restrictions v2 policies by following the steps at [Step 2: Configure tenant restrictions v2 for specific partners](#step-2-configure-tenant-restrictions-v2-for-specific-partners). Be sure to follow these guidelines:
+
+- Keep the tenant restrictions v2 default policy that blocks all external tenant access using foreign identities (for example, `user@externaltenant.com`).
+
+- Create a partner tenant policy for each tenant listed in your v1 allowlist by following the steps at [Step 2: Configure tenant restrictions v2 for specific partners](#step-2-configure-tenant-restrictions-v2-for-specific-partners).
+
+- Allow only specific users to access specific applications. This design increases your security posture by limiting access to necessary users only.
+
+- Tenant restrictions v2 policies treat MSA as a partner tenant. Create a partner tenant configuration for MSA by following the steps in [Step 2: Configure tenant restrictions v2 for specific partners](#step-2-configure-tenant-restrictions-v2-for-specific-partners). Because user-level assignment isn't available for MSA tenants, the policy applies to all MSA users. However, application-level granularity is available, and you should limit the applications that MSA or consumer accounts can access to only those applications that are necessary.
+
+> [!NOTE]
+>Blocking the MSA tenant will not block user-less traffic for devices, including:
+>
+>- Traffic for Autopilot, Windows Update, and organizational telemetry.
+>- B2B authentication of consumer accounts, or "passthrough" authentication, where Azure apps and Office.com apps use Azure AD to sign in consumer users in a consumer context.
+
#### Tenant restrictions v2 with no support for break and inspect

For non-Windows platforms, you can break and inspect traffic to add the tenant restrictions v2 parameters into the header via proxy. However, some platforms don't support break and inspect, so tenant restrictions v2 don't work. For these platforms, the following features of Microsoft Entra ID can provide protection:
active-directory How To Connect Syncservice Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md
The synchronization feature of Microsoft Entra Connect has two components:
This topic explains how the following features of the **Microsoft Entra Connect Sync service** work and how you can configure them using PowerShell.
-These settings are configured by the [Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)). Download and install it separately from Microsoft Entra Connect. The cmdlets documented in this topic were introduced in the [2016 March release (build 9031.1)](https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx#Version_9031_1). If you do not have the cmdlets documented in this topic or they do not produce the same result, then make sure you run the latest version.
+These settings are configured by the [Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)). Download and install it separately from Microsoft Entra Connect. The cmdlets documented in this topic were introduced in the [2016 March release (build 9031.1)](https://social.technet.microsoft.com/wiki/contents/articles/28552.microsoft-azure-active-directory-powershell-module-version-release-history.aspx#Version_9031_1). If you don't have the cmdlets documented in this topic or they don't produce the same result, then make sure you run the latest version.
To see the configuration in your Microsoft Entra directory, run `Get-MsolDirSyncFeatures`.

![Get-MsolDirSyncFeatures result](./media/how-to-connect-syncservice-features/getmsoldirsyncfeatures.png)
-To see the configuration in your Microsoft Entra directory using the Graph Powershell, use the following commands:
+To see the configuration in your Microsoft Entra directory using the Graph PowerShell, use the following commands:
```powershell Connect-MgGraph -Scopes OnPremDirectorySynchronization.Read.All, OnPremDirectorySynchronization.ReadWrite.All
-Get-MgDirectoryOnPremisSynchronization | Select-Object -ExpandProperty Features | Format-List
+Get-MgDirectoryOnPremiseSynchronization | Select-Object -ExpandProperty Features | Format-List
```
The output looks similar to `Get-MsolDirSyncFeatures`:
The following settings can be configured by `Set-MsolDirSyncFeature`:
| [EnableSoftMatchOnUpn](#userprincipalname-soft-match) |Allows objects to join on userPrincipalName in addition to primary SMTP address. |
| [SynchronizeUpnForManagedUsers](#synchronize-userprincipalname-updates) |Allows the sync engine to update the userPrincipalName attribute for managed/licensed (non-federated) users. |
-After you have enabled a feature, it cannot be disabled again.
+After you have enabled a feature, it can't be disabled again.
> [!NOTE]
> From August 24, 2016 the feature *Duplicate attribute resiliency* is enabled by default for new Microsoft Entra directories. This feature will also be rolled out and enabled on directories created before this date. You will receive an email notification when your directory is about to get this feature enabled.
-The following settings are configured by Microsoft Entra Connect and cannot be modified by `Set-MsolDirSyncFeature`:
+The following settings are configured by Microsoft Entra Connect and can't be modified by `Set-MsolDirSyncFeature`:
| DirSyncFeature | Comment |
| --- | --- |
| DeviceWriteback |[Microsoft Entra Connect: Enabling device writeback](how-to-connect-device-writeback.md) |
| DirectoryExtensions |[Microsoft Entra Connect Sync: Directory extensions](how-to-connect-sync-feature-directory-extensions.md) |
-| [DuplicateProxyAddressResiliency<br/>DuplicateUPNResiliency](#duplicate-attribute-resiliency) |Allows an attribute to be quarantined when it is a duplicate of another object rather than failing the entire object during export. |
+| [DuplicateProxyAddressResiliency<br/>DuplicateUPNResiliency](#duplicate-attribute-resiliency) |Allows an attribute to be quarantined when it's a duplicate of another object rather than failing the entire object during export. |
| Password Hash Sync |[Implementing password hash synchronization with Microsoft Entra Connect Sync](how-to-connect-password-hash-synchronization.md) |
|Pass-through Authentication|[User sign-in with Microsoft Entra pass-through authentication](how-to-connect-pta.md)|
| UnifiedGroupWriteback |Group writeback|
Instead of failing to provision objects with duplicate UPNs / proxyAddresses, th
When this feature is enabled, soft-match is enabled for UPN in addition to the [primary SMTP address](https://support.microsoft.com/kb/2641663), which is always enabled. Soft-match is used to match existing cloud users in Microsoft Entra ID with on-premises users.
-If you need to match on-premises AD accounts with existing accounts created in the cloud and you are not using Exchange Online, then this feature is useful. In this scenario, you generally donΓÇÖt have a reason to set the SMTP attribute in the cloud.
+If you need to match on-premises AD accounts with existing accounts created in the cloud and you aren't using Exchange Online, then this feature is useful. In this scenario, you generally don't have a reason to set the SMTP attribute in the cloud.
This feature is on by default for newly created Microsoft Entra directories. You can see if this feature is enabled for you by running:
This feature is on by default for newly created Microsoft Entra directories. You
## Using the MSOnline module
Get-MsolDirSyncFeatures -Feature EnableSoftMatchOnUpn
-## Using the Graph Powershell module
+## Using the Graph PowerShell module
$Config = Get-MgDirectoryOnPremiseSynchronization
$Config.Features.SoftMatchOnUpnEnabled
```
-If this feature is not enabled for your Microsoft Entra directory, then you can enable it by running:
+If this feature isn't enabled for your Microsoft Entra directory, then you can enable it by running:
```powershell
Set-MsolDirSyncFeature -Feature EnableSoftMatchOnUpn -Enable $true
```

## BlockSoftMatch
-When this feature is enabled it will block the Soft Match feature. Customers are encouraged to enable this feature and keep it at enabled until Soft Matching is required again for their tenancy. This flag should be enabled again after any soft matching has completed and is no longer needed.
+When this feature is enabled, it blocks the Soft Match feature. Customers are encouraged to enable this feature and keep it enabled until Soft Matching is required again for their tenancy. This flag should be enabled again after any soft matching has completed and is no longer needed.
Example - to block soft matching in your tenant, run this cmdlet:
PS C:\> Set-MsolDirSyncFeature -Feature BlockSoftMatch -Enable $True
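For tenants moving off the MSOnline module, a hedged Graph PowerShell equivalent might look like the following (this assumes `Update-MgDirectoryOnPremiseSynchronization` accepts a features hashtable, consistent with the `Get-` cmdlet used earlier in this article):

```powershell
# Sketch only: block soft matching via the Microsoft Graph PowerShell SDK.
$config = Get-MgDirectoryOnPremiseSynchronization
Update-MgDirectoryOnPremiseSynchronization `
    -OnPremisesDirectorySynchronizationId $config.Id `
    -Features @{ BlockSoftMatchEnabled = $true }
```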
Historically, updates to the UserPrincipalName attribute using the sync service from on-premises have been blocked, unless both of these conditions were true:

* The user is managed (non-federated).
-* The user has not been assigned a license.
+* The user hasn't been assigned a license.
> [!NOTE]
> From March 2019, synchronizing UPN changes for federated user accounts is allowed.
This feature is on by default for newly created Microsoft Entra directories. You
## Using the MSOnline module
Get-MsolDirSyncFeatures -Feature SynchronizeUpnForManagedUsers
-## Using the Graph Powershell module
+## Using the Graph PowerShell module
$config = Get-MgDirectoryOnPremiseSynchronization
$config.Features.SynchronizeUpnForManagedUsersEnabled
```
-If this feature is not enabled for your Microsoft Entra directory, then you can enable it by running:
+If this feature isn't enabled for your Microsoft Entra directory, then you can enable it by running:
```powershell Set-MsolDirSyncFeature -Feature SynchronizeUpnForManagedUsers -Enable $true
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/whats-new-docs.md
Title: "What's new in Azure Active Directory application management" description: "New and updated documentation for the Azure Active Directory application management." Previously updated : 09/04/2023 Last updated : 10/05/2023
Welcome to what's new in Azure Active Directory (Azure AD) application management documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the application management service, see [What's new in Azure AD](../fundamentals/whats-new.md).
+## September
+
+### Updated articles
+
+- [Submit a request to publish your application in Microsoft Entra application gallery](v2-howto-app-gallery-listing.md) - Gallery listing updates
+- [Tutorial: Migrate Okta sync provisioning to Microsoft Entra Connect synchronization](migrate-okta-sync-provisioning.md) - Okta sync provisioning updates
+- [Review permissions granted to enterprise applications](manage-application-permissions.md) - Add clarity on limitation of revoke user consent through UI
+- [Hide an Enterprise application](hide-application-from-user-portal.md) - Review and freshness pass
+
+### Updates related to rebranding of Azure Active Directory to Microsoft Entra
+
+- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
+- [Resources for migrating applications to Microsoft Entra ID](migration-resources.md)
+- [Overview of the Microsoft Entra application gallery](overview-application-gallery.md)
+- [Integrating Microsoft Entra ID with applications getting started guide](plan-an-application-integration.md)
+- [Plan a single sign-on deployment](plan-sso-deployment.md)
+- [Secure hybrid access with Microsoft Entra partner integrations](secure-hybrid-access-integrations.md)
+- [Secure hybrid access: Protect legacy apps with Microsoft Entra ID](secure-hybrid-access.md)
+- [Tutorial: Configure Secure Hybrid Access with Microsoft Entra ID and Silverfort](silverfort-integration.md)
+- [Restrict access to a tenant](tenant-restrictions.md)
+- [Troubleshoot SAML-based single sign-on](troubleshoot-saml-based-sso.md)
+- [Tutorial: Govern and monitor applications](tutorial-govern-monitor.md)
+- [Tutorial: Manage application access and security](tutorial-manage-access-security.md)
+- [Tutorial: Manage certificates for federated single sign-on](tutorial-manage-certificates-for-federated-single-sign-on.md)
+- [Manage access to an application](what-is-access-management.md)
+- [What is application management in Microsoft Entra ID?](what-is-application-management.md)
+- [What is single sign-on in Microsoft Entra ID?](what-is-single-sign-on.md)
+- [End-user experiences for applications](end-user-experiences.md)
+- [Configure F5 BIG-IP Access Policy Manager for form-based SSO](f5-big-ip-forms-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for header-based single sign-on](f5-big-ip-header-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based SSO](f5-big-ip-headers-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for header-based and LDAP single sign-on](f5-big-ip-ldap-header-easybutton.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle EBS](f5-big-ip-oracle-enterprise-business-suite-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle JDE](f5-big-ip-oracle-jde-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to Oracle PeopleSoft](f5-big-ip-oracle-peoplesoft-easy-button.md)
+- [Tutorial: Configure F5 BIG-IP Easy Button for SSO to SAP ERP](f5-big-ip-sap-erp-easy-button.md)
+- [Integrate F5 BIG-IP with Microsoft Entra ID](f5-integration.md)
+- [Tutorial: Configure F5 BIG-IP SSL-VPN for Microsoft Entra SSO](f5-passwordless-vpn.md)
+- [Home Realm Discovery for an application](home-realm-discovery-policy.md)
+- [Configure Microsoft Entra SAML token encryption](howto-saml-token-encryption.md)
+- [Review the application activity report](migrate-adfs-application-activity.md)
+- [Plan application migration to Microsoft Entra ID](migrate-adfs-apps-phases-overview.md)
+- [Understand the stages of migrating application authentication from AD FS to Microsoft Entra ID](migrate-adfs-apps-stages.md)
+- [Phase 2: Classify apps and plan pilot](migrate-adfs-classify-apps-plan-pilot.md)
+- [Phase 1: Discover and scope apps](migrate-adfs-discover-scope-apps.md)
+- [Represent AD FS security policies in Microsoft Entra ID: Mappings and examples](migrate-adfs-represent-security-policies.md)
+- [SAML-based single sign-on: Configuration and Limitations](migrate-adfs-saml-based-sso.md)
+- [Tutorial: Migrate your applications from Okta to Microsoft Entra ID](migrate-applications-from-okta.md)
+- [Tutorial: Migrate Okta federation to Microsoft Entra ID-managed authentication](migrate-okta-federation.md)
+- [Tutorial: Migrate Okta sign-on policies to Microsoft Entra Conditional Access](migrate-okta-sign-on-policies-conditional-access.md)
+- [Enable single sign-on for an enterprise application](add-application-portal-setup-sso.md)
+- [Application Management certificates frequently asked questions](application-management-certs-faq.md)
+- [Troubleshoot application sign-in](application-sign-in-other-problem-access-panel.md)
+- [An app page shows an error message after the user signs in](application-sign-in-problem-application-error.md)
+- [Problems signing in to a Microsoft application](application-sign-in-problem-first-party-microsoft.md)
+- [Advanced certificate signing options in a SAML token](certificate-signing-options.md)
+- [Tutorial: Configure Conditional Access policies in Cloudflare Access](cloudflare-conditional-access-policies.md)
+- [Tutorial: Configure Cloudflare with Microsoft Entra ID for secure hybrid access](cloudflare-integration.md)
+- [Configure sign-in behavior using Home Realm Discovery](configure-authentication-for-federated-users-portal.md)
+- [Manage custom security attributes for an application (Preview)](custom-security-attributes-apps.md)
+- [Tutorial: Configure Secure Hybrid Access with Microsoft Entra ID and Datawiza](datawiza-configure-sha.md)
+- [Configure Datawiza for Microsoft Entra multifactor authentication and single sign-on to Oracle EBS](datawiza-sso-mfa-oracle-ebs.md)
+- [Configure Datawiza Access Proxy for Microsoft Entra single sign-on and multifactor authentication for Outlook Web Access](datawiza-sso-mfa-to-owa.md)
+- [Tutorial: Configure Datawiza to enable Microsoft Entra multifactor authentication and single sign-on to Oracle JD Edwards](datawiza-sso-oracle-jde.md)
+- [Tutorial: Configure Datawiza to enable Microsoft Entra multifactor authentication and single sign-on to Oracle PeopleSoft](datawiza-sso-oracle-peoplesoft.md)
+- [Debug SAML-based single sign-on to applications](debug-saml-sso-issues.md)
+- [Troubleshoot password-based single sign-on](troubleshoot-password-based-sso.md)
+- [Configure group and team owner consent to applications](configure-user-consent-groups.md)
+- [SAML Request Signature Verification](howto-enforce-signed-saml-authentication.md)
+- [Assign enterprise application owners](assign-app-owners.md)
+- [Create collections on the My Apps portal](access-panel-collections.md)
+- [Unexpected consent prompt when signing in to an application](application-sign-in-unexpected-user-consent-prompt.md)
+- [Manage users and groups assignment to an application](assign-user-or-group-access-portal.md)
+
## August 2023

### New articles
The following PowerShell samples were updated to use Microsoft Graph PowerShell
The following PowerShell sample was added: - [Export expiring secrets and certs (enterprise apps)](scripts/powershell-export-enterprise-apps-with-expiring-secrets.md)-
-## June 2023
-
-### Updated articles
-
-- [Manage consent to applications and evaluate consent requests](manage-consent-requests.md)
-- [Plan application migration to Azure Active Directory](migrate-adfs-apps-phases-overview.md)
-- [Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort](silverfort-integration.md)
-- [Tutorial: Migrate your applications from Okta to Azure Active Directory](migrate-applications-from-okta.md)
-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle JD Edwards](datawiza-sso-oracle-jde.md)
-- [Tutorial: Configure Datawiza to enable Azure Active Directory Multi-Factor Authentication and single sign-on to Oracle PeopleSoft](datawiza-sso-oracle-peoplesoft.md)
-- [Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access](cloudflare-integration.md)
-- [Configure Datawiza for Azure AD Multi-Factor Authentication and single sign-on to Oracle EBS](datawiza-sso-mfa-oracle-ebs.md)
-- [Tutorial: Configure F5 BIG-IP Access Policy Manager for Kerberos authentication](f5-big-ip-kerberos-advanced.md)
-- [Tutorial: Configure F5 BIG-IP Easy Button for Kerberos single sign-on](f5-big-ip-kerberos-easy-button.md)
active-directory Hypervault Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
<a name='step-2-configure-hypervault-to-support-provisioning-with-azure-ad'></a>

## Step 2: Configure Hypervault to support provisioning with Microsoft Entra ID
-Contact Hypervault support to configure Hypervault to support provisioning with Microsoft Entra ID.
+
+1. Sign in to your Hypervault account as a manager.
+1. Navigate to the **Workspace Settings** page.
+1. Under the **Connect to Microsoft Azure** section, click **Enable User Provisioning**.
+1. Copy the Domain and Token values. You will need these values in step 5.
<a name='step-3-add-hypervault-from-the-azure-ad-application-gallery'></a>
The Microsoft Entra provisioning service allows you to scope who is provisioned
* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
-
## Step 5: Configure automatic user provisioning to Hypervault
-This section guides you through the steps to configure the Microsoft Entra provisioning service to create, update, and disable users in TestApp based on user assignments in Microsoft Entra ID.
+This section guides you through the steps to configure the Microsoft Entra provisioning service to create, update, and disable users in Hypervault based on user assignments in Microsoft Entra ID.
<a name='to-configure-automatic-user-provisioning-for-hypervault-in-azure-ad'></a>
This section guides you through the steps to configure the Microsoft Entra provi
![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
-1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Microsoft Entra ID can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token (generated in step 2). Click **Test Connection** to ensure Microsoft Entra ID can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again.
![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
active-directory Workload Identities Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md
suspicious changes to accounts.
Enables delegation of reviews to the right people, focused on the most important privileged roles.
-- [App health recommendations](/azure/active-directory/reports-monitoring/howto-use-recommendations): Provides you with personalized insights with actionable guidance so you can implement best practices, improve the state of your Microsoft Entra tenant, and optimize the configurations for your scenarios.
+- [App health recommendations](/azure/active-directory/reports-monitoring/howto-use-recommendations): Provides recommendations for addressing identity hygiene gaps in your application portfolio so you can improve the security and resilience posture of a tenant.
## What do the numbers in each category on the [Workload identities - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) mean?
ai-services Install Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/install-sdk.md
description: In this guide, you learn how to install the Vision SDK for your pre
--+ Last updated 08/01/2023
ai-services Overview Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/sdk/overview-sdk.md
description: This page gives you an overview of the Azure AI Vision SDK for Imag
--+ Last updated 08/01/2023
ai-services App Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/concepts/app-architecture.md
description: Learn when to choose conversational language understanding or orche
--+ Last updated 08/15/2023
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
description: Learn about the different model capabilities that are available with Azure OpenAI. Previously updated : 09/15/2023 Last updated : 10/04/2023
These models can only be used with the Chat Completion API.
| `gpt-4-32k` (0613) | Australia East<sup>1</sup>, Canada East, East US<sup>1</sup>, East US 2<sup>1</sup>, France Central<sup>1</sup>, Japan East<sup>1</sup>, Sweden Central, Switzerland North, UK South<sup>1</sup> | N/A | 32,768 | September 2021 | <sup>1</sup> Due to high demand, availability is limited in the region<br>
-<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.<br>
+<sup>2</sup> Version `0314` of gpt-4 and gpt-4-32k will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.<br>
### GPT-3.5 models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
| `gpt-35-turbo-16k` (0613) | Australia East, Canada East, East US, East US 2, France Central, Japan East, North Central US, Sweden Central, Switzerland North, UK South | N/A | 16,384 | Sep 2021 | | `gpt-35-turbo-instruct` (0914) | East US, Sweden Central | N/A | 4,097 | Sep 2021 |
-<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](#model-updates) for model upgrade behavior.
+<sup>1</sup> Version `0301` of gpt-35-turbo will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
### Embeddings models
These models can only be used with Embedding API requests.
| | | | | | | whisper | North Central US, West Europe | N/A | 25 MB | N/A |
-## Working with models
-
-### Finding what models are available
-
-You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
-
-### Model updates
-
-Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**:
--
-### Auto update to default
-
-When **Auto-update to default** is selected your model deployment will be automatically updated within two weeks of a change in the default version.
-
-If you are still in the early testing phases for inference models, we recommend deploying models with **auto-update to default** set whenever it is available.
-
-### Specific model version
-
-As your use of Azure OpenAI evolves, and you start to build and integrate with applications you may want to manually control model updates so that you can first test and validate that model performance is remaining consistent for your use case prior to upgrade.
-
-When you select a specific model version for a deployment this version will remain selected until you either choose to manually update yourself, or once you reach the retirement date for the model. When the retirement date is reached the model will auto-upgrade to the default version at the time of retirement.
-
-### GPT-35-Turbo 0301 and GPT-4 0314 retirement
-
-The `gpt-35-turbo` (`0301`) and both `gpt-4` (`0314`) models will be retired no earlier than July 5, 2024. Upon retirement, deployments will automatically be upgraded to the default version at the time of retirement. If you would like your deployment to stop accepting completion requests rather than upgrading, then you will be able to set the model upgrade option to expire through the API. We will publish guidelines on this by September 1.
-
-### Viewing deprecation dates
-
-For currently deployed models, from Azure OpenAI Studio select **Deployments**:
--
-To view deprecation/expiration dates for all available models in a given region from Azure OpenAI Studio select **Models** > **Column options** > Select **Deprecation fine tune** and **Deprecation inference**:
--
-### Model deployment upgrade configuration
-
-There are three distinct model deployment upgrade options which are configurable via REST API:
-
-| Name | Description |
-||--|
-| `OnceNewDefaultVersionAvailable` | Once a new version is designated as the default, the model deployment will auto-upgrade to the default version within two weeks of that designation change being made. |
-`OnceCurrentVersionExpired` | Once the retirement date is reached the model deployment will auto-upgrade to the current default version. |
-`NoAutoUpgrade` | The model deployment will never auto-upgrade. Once the retirement date is reached the model deployment will stop working. You will need to update your code referencing that deployment to point to a non-expired model deployment. |
-
-To query the current model deployment settings including the deployment upgrade configuration for a given resource use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0)
-
-```http
-GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments?api-version=2023-05-01
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```acountname``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
-| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)-
-### Example response
-
-```json
-{
- "id": "/subscriptions/{Subcription-GUID}/resourceGroups/{Resource-Group-Name}/providers/Microsoft.CognitiveServices/accounts/{Resource-Name}/deployments/text-davinci-003",
- "type": "Microsoft.CognitiveServices/accounts/deployments",
- "name": "text-davinci-003",
- "sku": {
- "name": "Standard",
- "capacity": 60
- },
- "properties": {
- "model": {
- "format": "OpenAI",
- "name": "text-davinci-003",
- "version": "1"
- },
- "versionUpgradeOption": "OnceNewDefaultVersionAvailable",
- "capabilities": {
- "completion": "true",
- "search": "true"
- },
- "raiPolicyName": "Microsoft.Default",
- "provisioningState": "Succeeded",
- "rateLimits": [
- {
- "key": "request",
- "renewalPeriod": 10,
- "count": 60
- },
- {
- "key": "token",
- "renewalPeriod": 60,
- "count": 60000
- }
- ]
- }
-```
-
-You can then take the settings from this list to construct an update model REST API call as described below if you want to modify the deployment upgrade configuration.
-
-### Update & deploy models via the API
-
-```http
-PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-05-01
-```
-
-**Path parameters**
-
-| Parameter | Type | Required? | Description |
-|--|--|--|--|
-| ```acountname``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deploymentName``` | string | Required | The deployment name you chose when you deployed an existing model or the name you would like a new model deployment to have. |
-| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
-| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
-
-**Supported versions**
--- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)-
-**Request body**
-
-This is only a subset of the available request body parameters. For the full list of the parameters, you can refer to the [REST API reference documentation](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update).
-
-|Parameter|Type| Description |
-|--|--|--|
-|versionUpgradeOption | String | Deployment model version upgrade options:<br>`OnceNewDefaultVersionAvailable`<br>`OnceCurrentVersionExpired`<br>`NoAutoUpgrade`|
-|capacity|integer|This represents the amount of [quota](../how-to/quota.md) you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM)|
-
-#### Example request
-
-```Bash
-curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1?api-version=2023-05-01 \
- -H "Content-Type: application/json" \
- -H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
- -d '{"sku":{"name":"Standard","capacity":1},"properties": {"model": {"format": "OpenAI","name": "text-embedding-ada-002","version": "2"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
-```
-
-> [!NOTE]
-> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
-
-#### Example response
-
-```json
-{
- "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1",
- "type": "Microsoft.CognitiveServices/accounts/deployments",
- "name": "text-embedding-ada-002-test-1",
- "sku": {
- "name": "Standard",
- "capacity": 1
- },
- "properties": {
- "model": {
- "format": "OpenAI",
- "name": "text-embedding-ada-002",
- "version": "2"
- },
- "versionUpgradeOption": "OnceCurrentVersionExpired",
- "capabilities": {
- "embeddings": "true",
- "embeddingsMaxInputs": "1"
- },
- "provisioningState": "Succeeded",
- "ratelimits": [
- {
- "key": "request",
- "renewalPeriod": 10,
- "count": 2
- },
- {
- "key": "token",
- "renewalPeriod": 60,
- "count": 1000
- }
- ]
- },
- "systemData": {
- "createdBy": "docs@contoso.com",
- "createdByType": "User",
- "createdAt": "2023-06-13T00:12:38.885937Z",
- "lastModifiedBy": "docs@contoso.com",
- "lastModifiedByType": "User",
- "lastModifiedAt": "2023-06-13T02:41:04.8410965Z"
- },
- "etag": "\"{GUID}\""
-}
-```
- ## Next steps
+- [Learn more about working with Azure OpenAI models](../how-to/working-with-models.md)
- [Learn more about Azure OpenAI](../overview.md) - [Learn more about fine-tuning Azure OpenAI models](../how-to/fine-tuning.md)
ai-services Working With Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md
+
+ Title: Azure OpenAI Service working with models
+
+description: Learn about managing model deployment life cycle, updates, & retirement.
++ Last updated : 10/04/2023++++
+recommendations: false
+keywords:
++
+# Working with Azure OpenAI models
+
+Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. [Model availability varies by region](../concepts/models.md).
+
+You can get a list of the models that are available for both inference and fine-tuning on your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
+
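As a quick sketch of that call from the command line, a request might look like the following; the resource name and key are placeholders, and the `api-version` value here is an assumption to verify against the current REST reference:

```Bash
# Minimal sketch: list the models available to an Azure OpenAI resource.
# YOUR_RESOURCE_NAME and YOUR_API_KEY are placeholders; the api-version shown is an assumed value.
curl "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/models?api-version=2023-05-15" \
  -H "api-key: YOUR_API_KEY"
```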
+## Model updates
+
+Azure OpenAI now supports automatic updates for select model deployments. On models where automatic update support is available, a model version drop-down will be visible in Azure OpenAI Studio under **Create new deployment** and **Edit deployment**:
++
+### Auto update to default
+
+When **Auto-update to default** is selected, your model deployment is automatically updated within two weeks of a change in the default version.
+
+If you're still in the early testing phases for inference models, we recommend deploying models with **auto-update to default** set whenever it's available.
+
+### Specific model version
+
+As your use of Azure OpenAI evolves and you start to build and integrate with applications, you might want to manually control model updates so that you can first test and validate that model performance remains consistent for your use case before upgrading.
+
+When you select a specific model version for a deployment, that version remains selected until you choose to manually update it, or until you reach the model's retirement date. When the retirement date is reached, the model automatically upgrades to the default version at the time of retirement.
+
+### GPT-35-Turbo 0301 and GPT-4 0314 retirement
+
+The `gpt-35-turbo` (`0301`), `gpt-4` (`0314`), and `gpt-4-32k` (`0314`) models will be retired no earlier than July 5, 2024. Upon retirement, deployments are automatically upgraded to the default version at the time of retirement. If you would like your deployment to stop accepting completion requests rather than upgrading, you'll be able to set the model upgrade option to expire through the API.
+
+## Viewing deprecation dates
+
+For currently deployed models, from Azure OpenAI Studio select **Deployments**:
++
+To view deprecation/expiration dates for all available models in a given region from Azure OpenAI Studio select **Models** > **Column options** > Select **Deprecation fine tune** and **Deprecation inference**:
++
+## Model deployment upgrade configuration
+
+There are three distinct model deployment upgrade options that you can configure via the REST API:
+
+| Name | Description |
+||--|
+| `OnceNewDefaultVersionAvailable` | Once a new version is designated as the default, the model deployment will automatically upgrade to the default version within two weeks of that designation change being made. |
+| `OnceCurrentVersionExpired` | Once the retirement date is reached the model deployment will automatically upgrade to the current default version. |
+| `NoAutoUpgrade` | The model deployment will never automatically upgrade. Once the retirement date is reached the model deployment will stop working. You will need to update your code referencing that deployment to point to a non-expired model deployment. |
+
+To query the current model deployment settings, including the deployment upgrade configuration, for a given resource, use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0):
+
+```http
+GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments?api-version=2023-05-01
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```accountName``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
+| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
+| ```api-version``` | string | Required | The API version to use for this operation. This value follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)
+
+### Example response
+
+```json
+{
+ "id": "/subscriptions/{Subcription-GUID}/resourceGroups/{Resource-Group-Name}/providers/Microsoft.CognitiveServices/accounts/{Resource-Name}/deployments/text-davinci-003",
+ "type": "Microsoft.CognitiveServices/accounts/deployments",
+ "name": "text-davinci-003",
+ "sku": {
+ "name": "Standard",
+ "capacity": 60
+ },
+ "properties": {
+ "model": {
+ "format": "OpenAI",
+ "name": "text-davinci-003",
+ "version": "1"
+ },
+ "versionUpgradeOption": "OnceNewDefaultVersionAvailable",
+ "capabilities": {
+ "completion": "true",
+ "search": "true"
+ },
+ "raiPolicyName": "Microsoft.Default",
+ "provisioningState": "Succeeded",
+ "rateLimits": [
+ {
+ "key": "request",
+ "renewalPeriod": 10,
+ "count": 60
+ },
+ {
+ "key": "token",
+ "renewalPeriod": 60,
+ "count": 60000
+ }
+ ]
+ }
+}
+```
+
+You can then take the settings from this list to construct an update model REST API call, as described in the next section, if you want to modify the deployment upgrade configuration.
+
+## Update & deploy models via the API
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments/{deploymentName}?api-version=2023-05-01
+```
+
+**Path parameters**
+
+| Parameter | Type | Required? | Description |
+|--|--|--|--|
+| ```accountName``` | string | Required | The name of your Azure OpenAI Resource. |
+| ```deploymentName``` | string | Required | The deployment name you chose when you deployed an existing model, or the name you would like a new model deployment to have. |
+| ```resourceGroupName``` | string | Required | The name of the associated resource group for this model deployment. |
+| ```subscriptionId``` | string | Required | Subscription ID for the associated subscription. |
+| ```api-version``` | string | Required | The API version to use for this operation. This value follows the YYYY-MM-DD format. |
+
+**Supported versions**
+
+- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)
+
+**Request body**
+
+This is only a subset of the available request body parameters. For the full list of the parameters, you can refer to the [REST API reference documentation](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update).
+
+|Parameter|Type| Description |
+|--|--|--|
+|versionUpgradeOption | String | Deployment model version upgrade options:<br>`OnceNewDefaultVersionAvailable`<br>`OnceCurrentVersionExpired`<br>`NoAutoUpgrade`|
+|capacity|integer|This represents the amount of [quota](../how-to/quota.md) you are assigning to this deployment. A value of 1 equals 1,000 Tokens per Minute (TPM)|
+
+#### Example request
+
+```Bash
+curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1?api-version=2023-05-01 \
+ -H "Content-Type: application/json" \
+ -H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
+ -d '{"sku":{"name":"Standard","capacity":1},"properties": {"model": {"format": "OpenAI","name": "text-embedding-ada-002","version": "2"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
+```
+
+> [!NOTE]
+> There are multiple ways to generate an authorization token. The easiest method for initial testing is to launch the Cloud Shell from the [Azure portal](https://portal.azure.com). Then run [`az account get-access-token`](/cli/azure/account?view=azure-cli-latest#az-account-get-access-token&preserve-view=true). You can use this token as your temporary authorization token for API testing.
+
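As a sketch of that flow, you could capture the token into a shell variable and reuse it for the management calls shown earlier; the bracketed path values are placeholders for your own subscription, resource group, and account:

```Bash
# Sketch: get a temporary bearer token, then query the deployments list with it.
# The {subscriptionId}, {resourceGroupName}, and {accountName} placeholders are your own values.
TOKEN=$(az account get-access-token --query accessToken --output tsv)
curl "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments?api-version=2023-05-01" \
  -H "Authorization: Bearer $TOKEN"
```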
+#### Example response
+
+```json
+{
+ "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1",
+ "type": "Microsoft.CognitiveServices/accounts/deployments",
+ "name": "text-embedding-ada-002-test-1",
+ "sku": {
+ "name": "Standard",
+ "capacity": 1
+ },
+ "properties": {
+ "model": {
+ "format": "OpenAI",
+ "name": "text-embedding-ada-002",
+ "version": "2"
+ },
+ "versionUpgradeOption": "OnceCurrentVersionExpired",
+ "capabilities": {
+ "embeddings": "true",
+ "embeddingsMaxInputs": "1"
+ },
+ "provisioningState": "Succeeded",
+ "ratelimits": [
+ {
+ "key": "request",
+ "renewalPeriod": 10,
+ "count": 2
+ },
+ {
+ "key": "token",
+ "renewalPeriod": 60,
+ "count": 1000
+ }
+ ]
+ },
+ "systemData": {
+ "createdBy": "docs@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2023-06-13T00:12:38.885937Z",
+ "lastModifiedBy": "docs@contoso.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-06-13T02:41:04.8410965Z"
+ },
+ "etag": "\"{GUID}\""
+}
+```
+
+## Next steps
+
+- [Learn more about Azure OpenAI model regional availability](../concepts/models.md)
+- [Learn more about Azure OpenAI](../overview.md)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
- An Azure OpenAI resource with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).
- - Your chat model can use version `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, and `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./concepts/models.md#model-updates).
+ - Your chat model can use version `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, and `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./how-to/working-with-models.md#model-updates).
- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource.
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
For embedded speech, you'll need to download the speech recognition models for [
The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
-The following [text to speech](text-to-speech.md) locales and voices are available out of box. We welcome your input to help us gauge demand for additional languages and voices. Check the full text to speech language and voice list [here](language-support.md?tabs=tts).
-
-| Locale (BCP-47) | Language | Text to speech voices |
-| -- | -- | -- |
-| `de-DE` | German (Germany) | `de-DE-KatjaNeural` (Female)<br/>`de-DE-ConradNeural` (Male)|
-| `en-AU` | English (Australia) | `en-AU-AnnetteNeural` (Female)<br/>`en-AU-WilliamNeural` (Male)|
-| `en-CA` | English (Canada) | `en-CA-ClaraNeural` (Female)<br/>`en-CA-LiamNeural` (Male)|
-| `en-GB` | English (United Kingdom) | `en-GB-LibbyNeural` (Female)<br/>`en-GB-RyanNeural` (Male)|
-| `en-US` | English (United States) | `en-US-AriaNeural` (Female)<br/>`en-US-GuyNeural` (Male)<br/>`en-US-JennyNeural` (Female)|
-| `es-ES` | Spanish (Spain) | `es-ES-ElviraNeural` (Female)<br/>`es-ES-AlvaroNeural` (Male)|
-| `es-MX` | Spanish (Mexico) | `es-MX-DaliaNeural` (Female)<br/>`es-MX-JorgeNeural` (Male)|
-| `fr-CA` | French (Canada) | `fr-CA-SylvieNeural` (Female)<br/>`fr-CA-JeanNeural` (Male)|
-| `fr-FR` | French (France) | `fr-FR-DeniseNeural` (Female)<br/>`fr-FR-HenriNeural` (Male)|
-| `it-IT` | Italian (Italy) | `it-IT-IsabellaNeural` (Female)<br/>`it-IT-DiegoNeural` (Male)|
-| `ja-JP` | Japanese (Japan) | `ja-JP-NanamiNeural` (Female)<br/>`ja-JP-KeitaNeural` (Male)|
-| `ko-KR` | Korean (Korea) | `ko-KR-SunHiNeural` (Female)<br/>`ko-KR-InJoonNeural` (Male)|
-| `pt-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
-| `zh-CN` | Chinese (Mandarin, Simplified) | `zh-CN-XiaoxiaoNeural` (Female)<br/>`zh-CN-YunxiNeural` (Male)|
+All text to speech locales [listed here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of the box, each with one selected female voice, one selected male voice, or both. We welcome your input to help us gauge demand for additional languages and voices.
## Embedded speech configuration
With hybrid speech configuration for [text to speech](text-to-speech.md) (voices
For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code are the same, whether using cloud, embedded, or hybrid configuration.
+## Embedded voices capabilities
+
+For embedded voices, certain SSML tags aren't currently supported because of differences in the model structure. For details on which SSML tags are supported, refer to the following table.
+
+| Level 1 | Level 2 | Sub values | Support in embedded NTTS |
+|--|--|-|--|
+| audio | src | | No |
+| bookmark | | | Yes |
+| break | strength | | No |
+| | time | | No |
+| silence | type | Leading, Trailing, Comma-exact, etc. | No |
+| | value | | No |
+| emphasis | level | | No |
+| lang | | | No |
+| lexicon | uri | | Yes |
+| math | | | No |
+| msttsaudioduration | value | | No |
+| msttsbackgroundaudio | src | | No |
+| | volume | | No |
+| | fadein | | No |
+| | fadeout | | No |
+| msttsexpress-as | style | | No |
+| | styledegree | | No |
+| | role | | No |
+| msttssilence | | | No |
+| msttsviseme | type | redlips_front, FacialExpression | No |
+| p | | | Yes |
+| phoneme | alphabet | ipa, sapi, ups, etc. | Yes |
+| | ph | | Yes |
+| prosody | contour | Sentence-level support; word-level support for en-US and zh-CN only | Yes |
+| | pitch | | Yes |
+| | range | | Yes |
+| | rate | | Yes |
+| | volume | | Yes |
+| s | | | Yes |
+| say-as | interpret-as | characters, spell-out, number_digit, date, etc. | Yes |
+| | format | | Yes |
+| | detail | | Yes |
+| sub | alias | | Yes |
+| speak | | | Yes |
+| voice | | | No |
+++ ## Next steps
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/role-based-access-control.md
You can manage access and permissions to your Speech resources with Azure role-b
A role definition is a collection of permissions. When you create a Speech resource, the built-in roles in this table are assigned by default.
-| Role | Can list resource keys | Access to data, models, and endpoints|
-| | | |
-|**Owner** |Yes |View, create, edit, and delete |
-|**Contributor** |Yes |View, create, edit, and delete |
-|**Cognitive Services Contributor** |Yes |View, create, edit, and delete |
-|**Cognitive Services User** |Yes |View, create, edit, and delete |
-|**Cognitive Services Speech Contributor** |No | View, create, edit, and delete |
-|**Cognitive Services Speech User** |No |View only |
-|**Cognitive Services Data Reader (Preview)** |No |View only |
+| Role | Can list resource keys | Access to data, models, and endpoints in custom projects | Access to speech transcription and synthesis APIs |
+| | | | |
+|**Owner** |Yes |View, create, edit, and delete |Yes |
+|**Contributor** |Yes |View, create, edit, and delete |Yes |
+|**Cognitive Services Contributor** |Yes |View, create, edit, and delete |Yes |
+|**Cognitive Services User** |Yes |View, create, edit, and delete |Yes |
+|**Cognitive Services Speech Contributor** |No | View, create, edit, and delete |Yes |
+|**Cognitive Services Speech User** |No |View only |Yes |
+|**Cognitive Services Data Reader (Preview)** |No |View only |Yes |
> [!IMPORTANT] > Whether a role can list resource keys is important for [Speech Studio authentication](#speech-studio-authentication). To list resource keys, a role must have permission to run the `Microsoft.CognitiveServices/accounts/listKeys/action` operation. Please note that if key authentication is disabled in the Azure Portal, then none of the roles can list keys.
ai-services Speech Container Cstt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md
You can only use a license file with the appropriate container and model that yo
|-|-| | `{IMAGE}` | The container image you want to use.<br/><br/>For example: `mcr.microsoft.com/azure-cognitive-services/custom-speech-to-text:latest` | | `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
+| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/host/models:/usr/local/models` |
| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | | `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | | `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem.<br/><br/>For example: `/path/to/license/directory` |
Wherever the container is run, the license file must be mounted to the container
| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container.<br/><br/>For example: `4g` | | `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container.<br/><br/>For example: `4` | | `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted.<br/><br/>For example: `/host/license:/path/to/license/directory` |
-| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/path/to/model/` |
+| `{MODEL_PATH}` | The path where the model is located.<br/><br/>For example: `/host/models:/usr/local/models` |
| `{OUTPUT_PATH}` | The output path for logging.<br/><br/>For example: `/host/output:/path/to/output/directory`<br/><br/>For more information, see [usage records](../containers/disconnected-containers.md#usage-records) in the Azure AI services documentation. | | `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal.<br/><br/>For example: `https://<your-resource-name>.cognitiveservices.azure.com` | | `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |
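To show how the placeholders in the preceding tables fit together, here's an illustrative `docker run` sketch for downloading a license; the argument names can vary by container version, so verify them against the container documentation before relying on this:

```bash
# Illustrative sketch only: run the custom speech to text container so it downloads
# its license into the mounted license directory. All {PLACEHOLDER} values come from
# the tables above; verify the argument names for your container version.
docker run --rm -it -p 5000:5000 \
  --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
  -v {LICENSE_MOUNT} \
  -v {MODEL_PATH} \
  {IMAGE} \
  Eula=accept \
  Billing={ENDPOINT_URI} \
  ApiKey={API_KEY} \
  DownloadLicense=True \
  Mounts:License={CONTAINER_LICENSE_DIRECTORY}
```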
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
Title: API server authorized IP ranges in Azure Kubernetes Service (AKS) description: Learn how to secure your cluster using an IP address range for access to the API server in Azure Kubernetes Service (AKS) + Last updated 11/04/2022-- #Customer intent: As a cluster operator, I want to increase the security of my cluster by limiting access to the API server to only the IP addresses that I specify.
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS) Previously updated : 09/18/2023 Last updated : 10/05/2023 # Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
Kubernetes needs credentials to access the file share created in the previous st
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=myAKSStorageAccount --from-literal=azurestorageaccountkey=$STORAGE_KEY ```
-### Mount file share as an inline volume
-
-> [!NOTE]
-> Inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, instead use the [persistent volume example][persistent-volume-example].
-
-To mount the Azure Files file share into your pod, you configure the volume in the container spec.
-
-1. Create a new file named `azure-files-pod.yaml` and copy in the following contents. If you changed the name of the file share or secret name, update the `shareName` and `secretName`. You can also update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a `mountPath` using the Windows path convention, such as *'D:'*.
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: 'mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine'
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeAttributes:
- secretName: azure-secret # required
- shareName: aksshare # required
- mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock' # optional
-```
-
-2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
-
- ```bash
- kubectl apply -f azure-files-pod.yaml
- ```
-
- You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [`kubectl describe`][kubectl-describe] command.
-
- ```bash
- kubectl describe pod mypod
- ```
- ### Mount file share as a persistent volume 1. Create a new file named `azurefiles-pv.yaml` and copy in the following contents. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for `fileMode` and `dirMode` is *0777*.
spec:
kubectl apply -f azure-files-pod.yaml ```
+### Mount file share as an inline volume
+
+> [!NOTE]
+> To avoid performance issues, use a persistent volume instead of an inline volume when numerous pods access the same file share.
+> An inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, use the [persistent volume example][persistent-volume-example] instead.
+
+To mount the Azure Files file share into your pod, you configure the volume in the container spec.
+
+1. Create a new file named `azure-files-pod.yaml` and copy in the following contents. If you changed the name of the file share or secret name, update the `shareName` and `secretName`. You can also update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a `mountPath` using the Windows path convention, such as *'D:'*.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: 'mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine'
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeAttributes:
+ secretName: azure-secret # required
+ shareName: aksshare # required
+ mountOptions: 'dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock' # optional
+```
+
+2. Create the pod using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f azure-files-pod.yaml
+ ```
+
+ You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [`kubectl describe`][kubectl-describe] command.
+
+ ```bash
+ kubectl describe pod mypod
+ ```
+ ## Next steps For Azure Files CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
Title: Use the cluster autoscaler in Azure Kubernetes Service (AKS) description: Learn how to use the cluster autoscaler to automatically scale your Azure Kubernetes Service (AKS) clusters to meet application demands. + Last updated 09/26/2023
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
You can import the ingress TLS certificate to the cluster using one of the follo
spec: provider: azure secretObjects: # secretObjects defines the desired state of synced K8s secret objects
- - secretName: ingress-tls-csi
- type: kubernetes.io/tls
- data:
- - objectName: $CERT_NAME
- key: tls.key
- - objectName: $CERT_NAME
- key: tls.crt
+ - secretName: ingress-tls-csi
+ type: kubernetes.io/tls
+ data:
+ - objectName: $CERT_NAME
+ key: tls.key
+ - objectName: $CERT_NAME
+ key: tls.crt
parameters: usePodIdentity: "false" useVMManagedIdentity: "true"
Depending on your scenario, you can choose to bind the certificate to either the
1. Bind the certificate to the ingress controller using the `helm install` command. The ingress controllerΓÇÖs deployment references the Secrets Store CSI Driver's Azure Key Vault provider. > [!NOTE]
- > If not using Azure Active Directory (Azure AD) pod-managed identity as your method of access, remove the line with `--set controller.podLabels.aadpodidbinding=$AAD_POD_IDENTITY_NAME`
+ >
+ > - If not using Azure Active Directory (Azure AD) pod-managed identity as your method of access, remove the line with `--set controller.podLabels.aadpodidbinding=$AAD_POD_IDENTITY_NAME`.
+ >
+ > - Also, binding the SecretProviderClass to a pod is required for the Secrets Store CSI Driver to mount it and generate the Kubernetes secret. See [Sync mounted content with a Kubernetes secret][az-keyvault-mirror-as-secret].
+ ```bash helm install ingress-nginx/ingress-nginx --generate-name \
We can now deploy a Kubernetes ingress resource referencing the secret.
[aks-cluster-secrets-csi]: ./csi-secrets-store-driver.md [aks-akv-instance]: ./csi-secrets-store-driver.md#create-or-use-an-existing-azure-key-vault [az-key-vault-certificate-import]: /cli/azure/keyvault/certificate#az-keyvault-certificate-import
+[az-keyvault-mirror-as-secret]: ./csi-secrets-store-driver.md#sync-mounted-content-with-a-kubernetes-secret
<!-- LINKS EXTERNAL --> [kubernetes-ingress-tls]: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
description: Learn how to configure the Dapr extension specifically for your Azu
-+ Last updated 06/08/2023
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Title: Customize cluster egress with outbound types in Azure Kubernetes Service
description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS) + Last updated 06/06/2023- #Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
aks Image Integrity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-integrity.md
description: Learn how to use Image Integrity to validate signed images before d
+ Last updated 09/26/2023
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
Previously updated : 02/22/2023 Last updated : 10/04/2023 #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an internal Azure load balancer for enhanced security and without an external endpoint.
A Private Endpoint allows you to privately connect to your Kubernetes service ob
--connection-name connectToMyK8sService ```
+### PLS Customizations via Annotations
+
+The following annotations can be used to customize the PLS resource.
+
+| Annotation | Value | Description | Required | Default |
+| | - | |||
+| `service.beta.kubernetes.io/azure-pls-create` | `"true"` | Boolean indicating whether a PLS needs to be created. | Required | |
+| `service.beta.kubernetes.io/azure-pls-name` | `<PLS name>` | String specifying the name of the PLS resource to be created. | Optional | `"pls-<LB frontend config name>"` |
+| `service.beta.kubernetes.io/azure-pls-resource-group` | `Resource Group name` | String specifying the name of the Resource Group where the PLS resource will be created | Optional | `MC_ resource` |
+| `service.beta.kubernetes.io/azure-pls-ip-configuration-subnet` |`<Subnet name>` | String indicating the subnet to which the PLS will be deployed. This subnet must exist in the same VNET as the backend pool. PLS NAT IPs are allocated within this subnet. | Optional | If `service.beta.kubernetes.io/azure-load-balancer-internal-subnet` is specified, that ILB subnet is used. Otherwise, the default subnet from the config file is used. |
+| `service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count` | `[1-8]` | Total number of private NAT IPs to allocate. | Optional | 1 |
+| `service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address` | `"10.0.0.7 ... 10.0.0.10"` | A space separated list of static **IPv4** IPs to be allocated. (IPv6 is not supported right now.) The total number of IPs shouldn't be greater than the IP count specified in `service.beta.kubernetes.io/azure-pls-ip-configuration-ip-address-count`. If fewer IPs are specified, the rest are dynamically allocated. The first IP in the list is set as `Primary`. | Optional | All IPs are dynamically allocated. |
+| `service.beta.kubernetes.io/azure-pls-fqdns` | `"fqdn1 fqdn2"` | A space separated list of fqdns associated with the PLS. | Optional | `[]` |
+| `service.beta.kubernetes.io/azure-pls-proxy-protocol` | `"true"` or `"false"` | Boolean indicating whether the TCP PROXY protocol should be enabled on the PLS to pass through connection information, including the link ID and source IP address. Note that the backend service MUST support the PROXY protocol or the connections will fail. | Optional | `false` |
+| `service.beta.kubernetes.io/azure-pls-visibility` | `"sub1 sub2 sub3 … subN"` or `"*"` | A space separated list of Azure subscription IDs for which the private link service is visible. Use `"*"` to expose the PLS to all subscriptions (least restrictive). | Optional | Empty list `[]`, indicating role-based access control only: this private link service is only available to individuals with role-based access control permissions within your directory (most restrictive). |
+| `service.beta.kubernetes.io/azure-pls-auto-approval` | `"sub1 sub2 sub3 … subN"` | A space separated list of Azure subscription ids. This allows PE connection requests from the subscriptions listed to the PLS to be automatically approved. This only works when visibility is set to "*". | Optional | `[]` |
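As an example of how a few of these annotations combine in practice, here's a minimal sketch of an internal load balancer Service that requests a PLS; the Service name, selector, port, and subnet name are hypothetical:

```bash
# Sketch: an internal LoadBalancer Service that asks the cloud provider to create a PLS.
# The name, selector, port, and subnet values are hypothetical; the annotations come from the table above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-pls-create: "true"
    service.beta.kubernetes.io/azure-pls-name: myServicePLS
    service.beta.kubernetes.io/azure-pls-ip-configuration-subnet: myPLSSubnet
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
EOF
```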
+ ## Use private networks When you create your AKS cluster, you can specify advanced networking settings. These settings allow you to deploy the cluster into an existing Azure virtual network and subnets. For example, you can deploy your AKS cluster into a private network connected to your on-premises environment and run services that are only accessible internally.
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster node
Last updated 09/06/2023 -+ #Customer intent: As a cluster operator, I want to learn how to connect to virtual machines in an AKS cluster to perform maintenance or troubleshoot a problem.
aks Node Upgrade Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-upgrade-github-actions.md
Title: Handle AKS node upgrades with GitHub Actions
-description: Learn how to update AKS nodes using GitHub Actions
+description: Learn how to schedule automatic node upgrades in Azure Kubernetes Service (AKS) using GitHub Actions.
Previously updated : 11/27/2020 Last updated : 10/05/2023 #Customer intent: As a cluster administrator, I want to know how to automatically apply Linux updates and reboot nodes in AKS for security and/or compliance
-# Apply security updates to Azure Kubernetes Service (AKS) nodes automatically using GitHub Actions
+# Apply automatic security upgrades to Azure Kubernetes Service (AKS) nodes using GitHub Actions
Security updates are a key part of maintaining your AKS cluster's security and compliance with the latest fixes for the underlying OS. These updates include OS security fixes or kernel updates. Some updates require a node reboot to complete the process.
-Running `az aks upgrade` gives you a zero downtime way to apply updates. The command handles applying the latest updates to all your cluster's nodes, cordoning and draining traffic to the nodes, and restarting the nodes, then allowing traffic to the updated nodes. If you update your nodes using a different method, AKS will not automatically restart your nodes.
+This article shows you how you can automate the update process of AKS nodes using GitHub Actions and Azure CLI to create an update task based on `cron` that runs automatically.
> [!NOTE]
-> The main difference between `az aks upgrade` when used with the `--node-image-only` flag is that, when it's used, only the node images will be upgraded. If omitted, both the node images and the Kubernetes control plane version will be upgraded. You can check [the docs for managed upgrades on nodes][managed-node-upgrades-article] and [the docs for cluster upgrades][cluster-upgrades-article] for more in-depth information.
+> You can also perform node image upgrades automatically and schedule these upgrades using planned maintenance. For more information, see [Automatically upgrade node images][auto-upgrade-node-image].
-All Kubernetes' nodes run in a standard Azure virtual machine (VM). These VMs can be Windows or Linux-based. The Linux-based VMs use an Ubuntu image, with the OS configured to automatically check for updates every night.
+## Before you begin
-When you use the `az aks upgrade` command, Azure CLI creates a surge of new nodes with the latest security and kernel updates, these nodes are initially cordoned to prevent any apps from being scheduled to them until the update is finished. After completion, Azure cordons (makes the node unavailable for scheduling of new workloads) and drains (moves the existent workloads to other node) the older nodes and uncordon the new ones, effectively transferring all the scheduled applications to the new nodes.
+* This article assumes you have an existing AKS cluster. If you need an AKS cluster, create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or [the Azure portal][aks-quickstart-portal].
+* This article also assumes you have a [GitHub account][github] and a [profile repository][profile-repository] to host your actions. If you don't have a repository, create one with the same name as your GitHub username.
+* You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-This process is better than updating Linux-based kernels manually because Linux requires a reboot when a new kernel update is installed. If you update the OS manually, you also need to reboot the VM, manually cordoning and draining all the apps.
+## Update nodes with `az aks upgrade`
-This article shows you how you can automate the update process of AKS nodes. You'll use GitHub Actions and Azure CLI to create an update task based on `cron` that runs automatically.
+The `az aks upgrade` command gives you a zero downtime way to apply updates. The command performs the following actions:
-Node image upgrades can also be performed automatically, and scheduled by using planned maintenance. For more details, see [Automatically upgrade node images][auto-upgrade-node-image].
+1. Applies the latest updates to all your cluster's nodes.
+2. Cordons the nodes (makes them unavailable for scheduling new workloads) and drains them (moves the existing workloads to other nodes).
+3. Restarts the nodes.
+4. Enables the updated nodes to receive traffic again.
-## Before you begin
+AKS doesn't automatically restart your nodes if you update them using a different method.
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+> [!NOTE]
+> Running `az aks upgrade` with the `--node-image-only` flag only upgrades the node images. Running the command without the flag upgrades both the node images and the Kubernetes control plane version. For more information, see the [docs for managed upgrades on nodes][managed-node-upgrades-article] and the [docs for cluster upgrades][cluster-upgrades-article].
+
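For reference, a manual invocation matching the workflow step later in this article might look like the following; the resource group and cluster names are placeholders:

```bash
# Sketch: manually apply the latest node image to all node pools in a cluster.
# myResourceGroup and myAKSCluster are placeholder names.
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-image-only \
  --yes
```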
+All Kubernetes nodes run in a standard Windows or Linux-based Azure virtual machine (VM). The Linux-based VMs use an Ubuntu image with the OS configured to automatically check for updates every night.
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+When you use the `az aks upgrade` command, Azure CLI creates a surge of new nodes with the latest security and kernel updates. These new nodes are initially cordoned to prevent any apps from being scheduled to them until the update completes. After the update completes, Azure cordons and drains the older nodes and uncordons the new ones, transferring all the scheduled applications to the new nodes.
-This article also assumes you have a [GitHub][github] account to create your actions in.
+This process is better than updating Linux-based kernels manually because Linux requires a reboot when a new kernel update is installed. If you update the OS manually, you also need to reboot the VM and manually cordon and drain all the apps.
## Create a timed GitHub Action
-`cron` is a utility that allows you to run a set of commands, or job, on an automated schedule. To create job to update your AKS nodes on an automated schedule, you'll need a repository to host your actions. Usually, GitHub actions are configured in the same repository as your application, but you can use any repository. For this article we'll be using your [profile repository][profile-repository]. If you don't have one, create a new repository with the same name as your GitHub username.
+`cron` is a utility that allows you to run a set of commands, or *jobs*, on an automated schedule. To create a job to update your AKS nodes on an automated schedule, you need a repository to host your actions. GitHub Actions are usually configured in the same repository as your application, but you can use any repository.
-1. Navigate to your repository on GitHub
-2. Select the **Actions** tab at the top of the page.
-3. If you already set up a workflow in this repository, you'll be directed to the list of completed runs, in this case, select the **New Workflow** button. If this is your first workflow in the repository, GitHub will present you with some project templates, select the **Set up a workflow yourself** link below the description text.
-4. Change the workflow `name` and `on` tags similar to the below. GitHub Actions use the same [POSIX cron syntax][cron-syntax] as any Linux-based system. In this schedule, we're telling the workflow to run every 15 days at 3am.
+1. Navigate to your repository on GitHub.
+2. Select **Actions**.
+3. Select **New workflow** > **Set up a workflow yourself**.
+4. Create a GitHub Action named *Upgrade cluster node images* with a schedule trigger to run every 15 days at 3am. Copy the following code into the YAML:
```yml name: Upgrade cluster node images
This article also assumes you have a [GitHub][github] account to create your act
- cron: '0 3 */15 * *' ```
-5. Create a new job using the below. This job is named `upgrade-node`, runs on an Ubuntu agent, and will connect to your Azure CLI account to execute the needed steps to upgrade the nodes.
+5. Create a job named *upgrade-node* that runs on an Ubuntu agent and connects to your Azure CLI account to execute the node upgrade command. Copy the following code into the YAML under the `on` key:
```yml
- name: Upgrade cluster node images
-
- on:
- schedule:
- - cron: '0 3 */15 * *'
- jobs: upgrade-node: runs-on: ubuntu-latest
This article also assumes you have a [GitHub][github] account to create your act
## Set up the Azure CLI in the workflow
-In the `steps` key, you'll define all the work the workflow will execute to upgrade the nodes.
-
-Download and sign in to the Azure CLI.
-
-1. On the right-hand side of the GitHub Actions screen, find the *marketplace search bar* and type **"Azure Login"**.
-2. You'll get as a result, an Action called **Azure Login** published **by Azure**:
+1. In the **Search Marketplace for Actions** bar, search for **Azure Login**.
+2. Select **Azure Login**.
:::image type="content" source="media/node-upgrade-github-actions/azure-login-search.png" alt-text="Search results showing two lines, the first action is called 'Azure Login' and the second 'Azure Container Registry Login'":::
-3. Select **Azure Login**. On the next screen, select the **copy icon** in the top right of the code sample.
-
- :::image type="content" source="media/node-upgrade-github-actions/azure-login.png" alt-text="Azure Login action result pane with code sample below, red square around a copy icon highlights the select spot":::
-
-4. Paste the following under the `steps` key:
+3. Under **Installation**, select a version, such as *v1.4.6*, and copy the installation code snippet.
+4. Add the `steps` key and the following information from the installation code snippet to the YAML:
```yml
name: Upgrade cluster node images
on:
  schedule:
    - cron: '0 3 */15 * *'
jobs:
  upgrade-node:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
- uses: Azure/login@v1.4.3
+ uses: Azure/login@v1.4.6
with: creds: ${{ secrets.AZURE_CREDENTIALS }} ```
-5. From the Azure CLI, run the following command to generate a new username and password.
+## Create credentials for the Azure CLI
+
+1. In a new browser window, create a new service principal using the [`az ad sp create-for-rbac`][az-ad-sp-create-for-rbac] command. Make sure you replace `{subscriptionID}` with your own subscription ID.
> [!NOTE]
- > This example creates the `Contributor` role at the *Subscription* scope. You may provide the role and scope that meets your needs. For more information, see [Azure built-in roles][azure-built-in-roles] and [Azure RBAC scope levels][azure-rbac-scope-levels].
+ > This example creates the `Contributor` role at the *Subscription* scope. You can provide the role and scope that meets your needs. For more information, see [Azure built-in roles][azure-built-in-roles] and [Azure RBAC scope levels][azure-rbac-scope-levels].
```azurecli-interactive
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/{subscriptionID} -o json
```
- The output should be similar to the following json:
+ Your output should be similar to the following example output:
```output {
- "clientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
- "clientSecret": "xXxXxXxXx",
- "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
- "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
- "resourceManagerEndpointUrl": "https://management.azure.com/",
- "activeDirectoryGraphResourceId": "https://graph.windows.net/",
- "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
- "galleryEndpointUrl": "https://gallery.azure.com/",
- "managementEndpointUrl": "https://management.core.windows.net/"
+ "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
+ "displayName": "xxxxx-xxx-xxxx-xx-xx-xx-xx-xx",
+ "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
+ "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
```
-6. **In a new browser window** navigate to your GitHub repository and open the **Settings** tab of the repository. Select **Secrets** then, select **New Repository Secret**.
-7. For *Name*, use `AZURE_CREDENTIALS`.
-8. For *Value*, add the entire contents from the output of the previous step where you created a new username and password.
-
- :::image type="content" source="media/node-upgrade-github-actions/azure-credential-secret.png" alt-text="Form showing AZURE_CREDENTIALS as secret title, and the output of the executed command pasted as JSON":::
+2. Copy the output and navigate to your GitHub repository.
+3. Select **Settings** > **Secrets and variables** > **Actions** > **New repository secret**.
+4. For **Name**, enter `AZURE_CREDENTIALS`.
+5. For **Secret**, copy in the contents of the output you received when you created the service principal.
+6. Select **Add Secret**.
-9. Select **Add Secret**.
+## Create the steps to execute the Azure CLI commands
-The CLI used by your action will be logged to your Azure account and ready to run commands.
-
-To create the steps to execute Azure CLI commands.
-
-1. Navigate to the **search page** on *GitHub marketplace* on the right-hand side of the screen and search *Azure CLI Action*. Choose *Azure CLI Action by Azure*.
+1. Navigate to your window with the workflow YAML.
+2. In the **Search Marketplace for Actions** bar, search for **Azure CLI Action**.
+3. Select **Azure CLI Action**.
:::image type="content" source="media/node-upgrade-github-actions/azure-cli-action.png" alt-text="Search result for 'Azure CLI Action' with first result being shown as made by Azure":::
-1. Select the copy button on the *GitHub marketplace result* and paste the contents of the action in the main editor, below the *Azure Login* step, similar to the following:
+4. Under **Installation**, select a version, such as *v1.0.8*, and copy the installation code snippet.
+5. Paste the contents of the action into the YAML below the *Azure Login* step, similar to the following example:
```yml
name: Upgrade cluster node images
on:
  schedule:
    - cron: '0 3 */15 * *'
jobs:
  upgrade-node:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
-       uses: Azure/login@v1.4.3
+       uses: Azure/login@v1.4.6
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Upgrade node images
-       uses: Azure/cli@v1.0.6
+       uses: Azure/cli@v1.0.8
        with:
          inlineScript: az aks upgrade -g {resourceGroupName} -n {aksClusterName} --node-image-only --yes
```

> [!TIP]
- > You can decouple the `-g` and `-n` parameters from the command by adding them to secrets similar to the previous steps. Replace the `{resourceGroupName}` and `{aksClusterName}` placeholders by their secret counterparts, for example `${{secrets.RESOURCE_GROUP_NAME}}` and `${{secrets.AKS_CLUSTER_NAME}}`
+ > You can decouple the `-g` and `-n` parameters from the command by creating new repository secrets like you did for `AZURE_CREDENTIALS`.
+ >
+ > If you create secrets for these parameters, you need to replace the `{resourceGroupName}` and `{aksClusterName}` placeholders with their secret counterparts. For example, use `${{secrets.RESOURCE_GROUP_NAME}}` and `${{secrets.AKS_CLUSTER_NAME}}`.
-1. Rename the file to `upgrade-node-images`.
-1. Select **Start Commit**, add a message title, and save the workflow.
+6. Rename the YAML to `upgrade-node-images.yml`.
+7. Select **Commit changes...**, add a commit message, and then select **Commit changes**.
-Once you create the commit, the workflow will be saved and ready for execution.
+## Run the GitHub Action manually
+
+You can run the workflow manually in addition to the scheduled run by adding a new `on` trigger called `workflow_dispatch`.
> [!NOTE]
-> To upgrade a single node pool instead of all node pools on the cluster, add the `--name` parameter to the `az aks nodepool upgrade` command to specify the node pool name. For example:
+> If you want to upgrade a single node pool instead of all node pools on the cluster, add the `--name` parameter to the `az aks nodepool upgrade` command to specify the node pool name. For example:
+>
> ```azurecli-interactive
> az aks nodepool upgrade -g {resourceGroupName} --cluster-name {aksClusterName} --name {nodePoolName} --node-image-only
> ```
-## Run the GitHub Action manually
-
-You can run the workflow manually, in addition to the scheduled run, by adding a new `on` trigger called `workflow_dispatch`. The finished file should look like the YAML below:
-
-```yml
-name: Upgrade cluster node images
+* Add the `workflow_dispatch` trigger under the `on` key:
-on:
- schedule:
- - cron: '0 3 */15 * *'
- workflow_dispatch:
-
-jobs:
- upgrade-node:
- runs-on: ubuntu-latest
+ ```yml
+ name: Upgrade cluster node images
+ on:
+ schedule:
+ - cron: '0 3 */15 * *'
+ workflow_dispatch:
+ ```
- steps:
- - name: Azure Login
- uses: Azure/login@v1.4.3
- with:
- creds: ${{ secrets.AZURE_CREDENTIALS }}
+ The YAML should look similar to the following example:
- # Code for upgrading one or more node pools
-```
+ ```yml
+ name: Upgrade cluster node images
+ on:
+ schedule:
+ - cron: '0 3 */15 * *'
+ workflow_dispatch:
+ jobs:
+ upgrade-node:
+ runs-on: ubuntu-latest
+ steps:
+ - name: Azure Login
+ uses: Azure/login@v1.4.6
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+ - name: Upgrade node images
+ uses: Azure/cli@v1.0.8
+ with:
+ inlineScript: az aks upgrade -g {resourceGroupName} -n {aksClusterName} --node-image-only --yes
+ # Code for upgrading one or more node pools
+ ```
## Next steps

-- See the [AKS release notes](https://github.com/Azure/AKS/releases) for information about the latest node images.
-- Learn how to upgrade the Kubernetes version with [Upgrade an AKS cluster][cluster-upgrades-article].
-- Learn more about multiple node pools with [Create multiple node pools][use-multiple-node-pools].
-- Learn more about [system node pools][system-pools]
-- To learn how to save costs using Spot instances, see [add a spot node pool to AKS][spot-pools]
+For more information about AKS upgrades, see the following articles and resources:
+
+* [AKS release notes](https://github.com/Azure/AKS/releases)
+* [Upgrade an AKS cluster][cluster-upgrades-article]
<!-- LINKS - external -->
[github]: https://github.com
[profile-repository]: https://docs.github.com/en/free-pro-team@latest/github/setting-up-and-managing-your-github-profile/about-your-profile
-[cron-syntax]: https://pubs.opengroup.org/onlinepubs/9699919799/utilities/crontab.html#tag_20_25_07

<!-- LINKS - internal -->
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[install-azure-cli]: /cli/azure/install-azure-cli
[managed-node-upgrades-article]: node-image-upgrade.md
[cluster-upgrades-article]: upgrade-cluster.md
-[system-pools]: use-system-pools.md
-[spot-pools]: spot-node-pool.md
-[use-multiple-node-pools]: create-node-pools.md
[auto-upgrade-node-image]: auto-upgrade-node-image.md
[azure-built-in-roles]: ../role-based-access-control/built-in-roles.md
[azure-rbac-scope-levels]: ../role-based-access-control/scope-overview.md#scope-format
+[az-ad-sp-create-for-rbac]: /cli/azure/ad/sp#az-ad-sp-create-for-rbac
aks Scale Down Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md
Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster
description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS). + Last updated 08/21/2023
aks Update Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/update-credentials.md
Title: Update or rotate the credentials for an Azure Kubernetes Service (AKS) cluster description: Learn how update or rotate the service principal or Azure AD Application credentials for an Azure Kubernetes Service (AKS) cluster. + Last updated 03/01/2023
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS) cluster description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) + Last updated 07/26/2023
aks Vertical Pod Autoscaler Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler-api-reference.md
Title: Vertical Pod Autoscaler API reference in Azure Kubernetes Service (AKS) description: Learn about the Vertical Pod Autoscaler API reference for Azure Kubernetes Service (AKS). -+ Last updated 09/26/2023
aks Windows Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-faq.md
Title: Windows Server node pools FAQ
description: See the frequently asked questions when you run Windows Server node pools and application workloads in Azure Kubernetes Service (AKS). -+ Last updated 04/13/2023 #Customer intent: As a cluster operator, I want to see frequently asked questions when running Windows node pools and application workloads.
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
ms.assetid: 3be1f4bd-8a81-4565-8a56-528c037b24bd -+ Last updated 10/05/2022
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
description: Learn how to attach custom network share in Azure App Service. Sha
+ Last updated 8/24/2023 zone_pivot_groups: app-service-containers-code
app-service Configure Encrypt At Rest Using Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-encrypt-at-rest-using-cmk.md
Title: Encrypt your application source at rest description: Learn how to encrypt your application data in Azure Storage and deploy it as a package file. + Last updated 03/06/2020
Only the cost associated with the Azure Storage Account and any applicable egres
## Next steps

- [Key Vault references for App Service](app-service-key-vault-references.md)
-- [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)
+- [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
Title: Deploy files to App Service
description: Learn to deploy various app packages or discrete libraries, static files, or startup scripts to Azure App Service Last updated 07/21/2023-+
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
keywords: ASE, ASEv3, ftp, remote debug -+ Last updated 03/29/2022
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
| Norway East | ✅ | ✅ | ✅ | | Norway West | ✅ | | ✅ | | Poland Central | ✅ | | |
-| Qatar Central | ✅ | ✅ | |
+| Qatar Central | ✅** | ✅** | |
| South Africa North | ✅ | ✅ | ✅ | | South Africa West | ✅ | | ✅ | | South Central US | ✅ | ✅ | ✅ |
app-service Manage Automatic Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-automatic-scaling.md
Title: How to enable automatic scaling description: Learn how to scale automatically in Azure App Service with zero configuration. + Last updated 08/02/2023 - # Automatic scaling in Azure App Service
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
Title: Back up an app
description: Learn how to restore backups of your apps in Azure App Service or configure custom backups. Customize backups by including the linked database. ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 + Last updated 04/25/2023
app-service Overview App Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-app-gateway-integration.md
+
+ Title: Application Gateway integration - Azure App Service | Microsoft Learn
+description: Learn how Application Gateway integrates with Azure App Service.
+
+documentationcenter: ''
+
+editor: ''
+
+ms.assetid: 073eb49c-efa1-4760-9f0c-1fecd5c251cc
++
+ na
+ Last updated : 09/29/2023++
+ms.devlang: azurecli
++
+# Application Gateway integration
+
+Three variations of Azure App Service require slightly different configuration of the integration with Azure Application Gateway. The variations include regular App Service (also known as multitenant), an internal load balancer (ILB) App Service Environment, and an external App Service Environment.
+
+This article walks through how to configure Application Gateway with App Service (multitenant) by using service endpoints to secure traffic. The article also discusses considerations around using private endpoints and integrating with ILB and external App Service Environments. Finally, the article describes how to set access restrictions on a Source Control Manager (SCM) site.
+
+## Integration with App Service (multitenant)
+
+App Service (multitenant) has a public internet-facing endpoint. By using [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), you can allow traffic from only a specific subnet within an Azure virtual network and block everything else. In the following scenario, you use this functionality to ensure that an App Service instance can receive traffic from only a specific application gateway.
++
+There are two parts to this configuration, aside from creating the App Service instance and the application gateway. The first part is enabling service endpoints in the subnet of the virtual network where the application gateway is deployed. Service endpoints ensure that all network traffic leaving the subnet toward App Service is tagged with the specific subnet ID.
+
+The second part is to set an access restriction on the specific web app to ensure that only traffic tagged with this specific subnet ID is allowed. You can configure the access restriction by using different tools, depending on your preference.
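
As a sketch of the first part, you can enable the service endpoint on the application gateway's subnet with a single CLI command. The names `myRG`, `myVnet`, and `myAppGwSubnet` are placeholder assumptions:

```azurecli-interactive
# Tag outbound traffic from the application gateway's subnet with the Microsoft.Web service endpoint
az network vnet subnet update --resource-group myRG --vnet-name myVnet --name myAppGwSubnet --service-endpoints Microsoft.Web
```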
+
+## Set up services by using the Azure portal
+
+With the Azure portal, you follow four steps to create and configure the setup of App Service and Application Gateway. If you have existing resources, you can skip the first steps.
+
+1. Create an App Service instance by using one of the quickstarts in the App Service documentation. One example is the [.NET Core quickstart](./quickstart-dotnetcore.md).
+2. Create an application gateway by using the [portal quickstart](../application-gateway/quick-create-portal.md), but skip the section about adding back-end targets.
+3. Configure [App Service as a back end in Application Gateway](../application-gateway/configure-web-app.md), but skip the section about restricting access.
+4. Create the [access restriction by using service endpoints](../app-service/app-service-ip-restrictions.md#set-a-service-endpoint-based-rule).
+
+You can now access App Service through Application Gateway. If you try to access App Service directly, you should receive a 403 HTTP error that says the web app has blocked your access.
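
As an illustrative check with a placeholder app name:

```shell
# Called directly from outside the allowed subnet, the app's default hostname should return an HTTP 403
curl -I https://mywebapp.azurewebsites.net
```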
++
+## Set up services by using an Azure Resource Manager template
+
+The [Azure Resource Manager deployment template][template-app-gateway-app-service-complete] creates a complete scenario. The scenario consists of an App Service instance that's locked down with service endpoints and an access restriction to receive traffic only from Application Gateway. The template includes many smart defaults and unique postfixes added to the resource names to keep it simple. To override them, you have to clone the repo or download the template and edit it.
+
+To apply the template, you can use the **Deploy to Azure** button in the description of the template. Or you can use appropriate PowerShell or Azure CLI code.
+
+## Set up services by using the Azure CLI
+
+The [Azure CLI sample](../app-service/scripts/cli-integrate-app-service-with-application-gateway.md) creates an App Service instance that's locked down with service endpoints and an access restriction to receive traffic only from Application Gateway. If you only need to isolate traffic to an existing App Service instance from an existing application gateway, use the following command:
+
+```azurecli-interactive
+az webapp config access-restriction add --resource-group myRG --name myWebApp --rule-name AppGwSubnet --priority 200 --subnet mySubNetName --vnet-name myVnetName
+```
+
+In the default configuration, the command ensures setup of the service endpoint configuration in the subnet and the access restriction in App Service.
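
To confirm the resulting rules, you can list the app's access restrictions. A minimal sketch with the same placeholder names:

```azurecli-interactive
az webapp config access-restriction show --resource-group myRG --name myWebApp
```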
+
+## Considerations for using private endpoints
+
+As an alternative to service endpoints, you can use private endpoints to secure traffic between Application Gateway and App Service (multitenant). You need to ensure that Application Gateway can use DNS to resolve the private IP address of the App Service apps. Alternatively, you can use the private IP address in the back-end pool and override the host name in the HTTP settings.
++
+Application Gateway caches the DNS lookup results. If you use fully qualified domain names (FQDNs) and rely on DNS lookup to get the private IP address, you might need to restart the application gateway if the DNS update or the link to an Azure private DNS zone happened after you configured the back-end pool.
+
+To restart the application gateway, stop and start it by using the Azure CLI:
+
+```azurecli-interactive
+az network application-gateway stop --resource-group myRG --name myAppGw
+az network application-gateway start --resource-group myRG --name myAppGw
+```
+
+## Considerations for an ILB App Service Environment
+
+An ILB App Service Environment isn't exposed to the internet. Traffic between the instance and an application gateway is already isolated to the virtual network. To configure an ILB App Service Environment and integrate it with an application gateway by using the Azure portal, see the [how-to guide](./environment/integrate-with-application-gateway.md).
+
+If you want to ensure that only traffic from the Application Gateway subnet is reaching the App Service Environment, you can configure a network security group (NSG) that affects all web apps in the App Service Environment. For the NSG, you can specify the subnet IP range and optionally the ports (80/443). For the App Service Environment to function correctly, make sure you don't override the [required NSG rules](./environment/network-info.md#network-security-groups).
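
The following sketch shows such an NSG rule. The resource names and the `10.0.1.0/24` Application Gateway subnet range are placeholder assumptions; give the rule a lower priority number than any deny rule so it's evaluated first, and don't override the required rules linked above:

```azurecli-interactive
# Allow HTTP/HTTPS traffic only from the application gateway's subnet range
az network nsg rule create --resource-group myRG --nsg-name myAseNsg --name AllowAppGwInbound --priority 300 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 10.0.1.0/24 --destination-port-ranges 80 443
```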
+
+To isolate traffic to an individual web app, you need to use IP-based access restrictions, because service endpoints don't work with an App Service Environment. The IP address should be the private IP of the application gateway.
+
+## Considerations for an external App Service Environment
+
+An external App Service Environment has a public-facing load balancer like multitenant App Service. Service endpoints don't work for an App Service Environment. That's why you have to use IP-based access restrictions by using the public IP address of the application gateway. To create an external App Service Environment by using the Azure portal, you can follow [this quickstart](./environment/create-external-ase.md).
+
+[template-app-gateway-app-service-complete]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-with-app-gateway-v2/ "Azure Resource Manager template for a complete scenario"
+
+## Considerations for a Kudu/SCM site
+
+The SCM site, also known as Kudu, is an admin site that exists for every web app. It isn't possible to reverse proxy the SCM site. You most likely also want to lock it down to individual IP addresses or a specific subnet.
+
+If you want to use the same access restrictions as the main site, you can inherit the settings by using the following command:
+
+```azurecli-interactive
+az webapp config access-restriction set --resource-group myRG --name myWebApp --use-same-restrictions-for-scm-site
+```
+
+If you want to add individual access restrictions for the SCM site, you can use the `--scm-site` flag:
+
+```azurecli-interactive
+az webapp config access-restriction add --resource-group myRG --name myWebApp --scm-site --rule-name KudoAccess --priority 200 --ip-address 208.130.0.0/16
+```
+
+## Considerations for using the default domain
+
+Configuring Application Gateway to override the host name and use the default domain of App Service (typically `azurewebsites.net`) is the easiest way to configure the integration. It doesn't require configuring a custom domain and certificate in App Service.
+
+[This article](/azure/architecture/best-practices/host-name-preservation) discusses the general considerations for overriding the original host name. In App Service, there are two scenarios where you need to pay attention to this configuration.
+
+### Authentication
+
+When you use [the authentication feature](./overview-authentication-authorization.md) in App Service (also known as Easy Auth), your app typically redirects to the sign-in page. Because App Service doesn't know the original host name of the request, the redirect is done on the default domain name and usually results in an error.
+
+To work around the default redirect, you can configure authentication to inspect a forwarded header and adapt the redirect domain to the original domain. Application Gateway uses a header called `X-Original-Host`. By using [file-based configuration](./configure-authentication-file-based.md) to configure authentication, you can configure App Service to adapt to the original host name. Add this configuration to your configuration file:
+
+```json
+{
+ ...
+ "httpSettings": {
+ "forwardProxy": {
+ "convention": "Custom",
+ "customHostHeaderName": "X-Original-Host"
+ }
+ }
+ ...
+}
+```
+
+### ARR affinity
+
+In multiple-instance deployments, [ARR affinity](./configure-common.md?tabs=portal#configure-general-settings) ensures that client requests are routed to the same instance for the life of the session. ARR affinity doesn't work with host name overrides. For session affinity to work, you have to configure an identical custom domain and certificate in App Service and in Application Gateway and not override the host name.
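
If your scenario doesn't require session affinity, an alternative is to turn ARR affinity off for the app. A minimal sketch with placeholder names:

```azurecli-interactive
# Disable ARR affinity (session affinity cookies) for the web app
az webapp update --resource-group myRG --name myWebApp --client-affinity-enabled false
```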
+
+## Next steps
+
+For more information on App Service Environments, see the [App Service Environment documentation](./environment/index.yml).
+
+To further secure your web app, you can find information about Azure Web Application Firewall on Application Gateway in the [Azure Web Application Firewall documentation](../web-application-firewall/ag/ag-overview.md).
+
+To deploy a secure, resilient site with a custom domain on App Service by using either Azure Front Door or Application Gateway, see [this tutorial](https://azure.github.io/AppService/2021/03/26/Secure-resilient-site-with-custom-domain).
app-service Overview Arc Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md
Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. + Last updated 03/15/2023
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
Title: Name resolution in App Service
description: Overview of how name resolution (DNS) works for your app in Azure App Service. + Last updated 04/03/2023
app-service Overview Nat Gateway Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-nat-gateway-integration.md
+
+ Title: Azure NAT Gateway integration - Azure App Service | Microsoft Learn
+description: Learn how Azure NAT Gateway integrates with Azure App Service.
+++
+ms.assetid: 0a84734e-b5c1-4264-8d1f-77e781b28426
+++ Last updated : 04/08/2022++
+ms.devlang: azurecli
+++
+# Azure NAT Gateway integration
+
+Azure NAT Gateway is a fully managed, highly resilient service that can be associated with one or more subnets. It ensures that all outbound internet-facing traffic is routed through a network address translation (NAT) gateway. With Azure App Service, there are two important scenarios where you can use a NAT gateway.
+
+The NAT gateway gives you a static, predictable public IP address for outbound internet-facing traffic. It also significantly increases the available [source network address translation (SNAT) ports](./troubleshoot-intermittent-outbound-connection-errors.md) in scenarios where you have a high number of concurrent connections to the same public address/port combination.
++
+Here are important considerations about Azure NAT Gateway integration:
+
+* Using a NAT gateway with App Service is dependent on virtual network integration, so it requires a supported pricing tier in an App Service plan.
+* When you're using a NAT gateway together with App Service, all traffic to Azure Storage must use private endpoints or service endpoints.
+* You can't use a NAT gateway together with App Service Environment v1 or v2.
+
+For more information and pricing, see the [Azure NAT Gateway overview](../virtual-network/nat-gateway/nat-overview.md).
+
+## Configure NAT gateway integration
+
+To configure NAT gateway integration with App Service, first complete the following tasks:
+
+* Configure regional virtual network integration with your app, as described in [Integrate your app with an Azure virtual network](./overview-vnet-integration.md).
+* Ensure that [Route All](./overview-vnet-integration.md#routes) is enabled for your virtual network integration, so routes in your virtual network affect the internet-bound traffic.
+* Provision a NAT gateway with a public IP address and associate it with the subnet for virtual network integration.
+
+Then, set up Azure NAT Gateway through the Azure portal:
+
+1. In the Azure portal, go to **App Service** > **Networking**. In the **Outbound Traffic** section, select **Virtual network integration**. Ensure that your app is integrated with a subnet and that **Route All** is enabled.
+
+ :::image type="content" source="./media/overview-nat-gateway-integration/nat-gateway-route-all-enabled.png" alt-text="Screenshot of the Route All option enabled for virtual network integration.":::
+1. On the Azure portal menu or from the home page, select **Create a resource**. The **New** window appears.
+1. Search for **NAT gateway** and select it from the list of results.
+1. Fill in the **Basics** information and choose the region where your app is located.
+
+ :::image type="content" source="./media/overview-nat-gateway-integration/nat-gateway-create-basics.png" alt-text="Screenshot of the Basics tab on the page for creating a NAT gateway.":::
+1. On the **Outbound IP** tab, create a public IP address or select an existing one.
+
+ :::image type="content" source="./media/overview-nat-gateway-integration/nat-gateway-create-outbound-ip.png" alt-text="Screenshot of the Outbound IP tab on the page for creating a NAT gateway.":::
+1. On the **Subnet** tab, select the subnet that you use for virtual network integration.
+
+ :::image type="content" source="./media/overview-nat-gateway-integration/nat-gateway-create-subnet.png" alt-text="Screenshot of the Subnet tab on the page for creating a NAT gateway.":::
+1. Fill in tags if needed, and then select **Create**. After the NAT gateway is provisioned, select **Go to resource group**, and then select the new NAT gateway. The **Outbound IP** pane shows the public IP address that your app will use for outbound internet-facing traffic.
+
+ :::image type="content" source="./media/overview-nat-gateway-integration/nat-gateway-public-ip.png" alt-text="Screenshot of the Outbound IP pane for a NAT gateway in the Azure portal.":::
+
+If you prefer to use the Azure CLI to configure your environment, these are the important commands. As a prerequisite, create an app with virtual network integration configured.
+
+1. Ensure that **Route All** is configured for your virtual network integration:
+
+ ```azurecli-interactive
+ az webapp config set --resource-group [myResourceGroup] --name [myWebApp] --vnet-route-all-enabled true
+ ```
+
+1. Create a public IP address and a NAT gateway:
+
+ ```azurecli-interactive
+ az network public-ip create --resource-group [myResourceGroup] --name myPublicIP --sku standard --allocation static
+
+ az network nat gateway create --resource-group [myResourceGroup] --name myNATgateway --public-ip-addresses myPublicIP --idle-timeout 10
+ ```
+
+1. Associate the NAT gateway with the subnet for virtual network integration:
+
+ ```azurecli-interactive
+ az network vnet subnet update --resource-group [myResourceGroup] --vnet-name [myVnet] --name [myIntegrationSubnet] --nat-gateway myNATgateway
+ ```
+
+## Scale a NAT gateway
+
+You can use the same NAT gateway across multiple subnets in the same virtual network. That approach allows you to use a NAT gateway across multiple apps and App Service plans.
+
+Azure NAT Gateway supports both public IP addresses and public IP prefixes. A NAT gateway can support up to 16 IP addresses across individual IP addresses and prefixes. Each IP address allocates 64,512 ports (SNAT ports), which allows up to 1 million available ports. Learn more in [Azure NAT Gateway resource](../virtual-network/nat-gateway/nat-gateway-resource.md#scalability).
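
As a worked example, a /28 public IP prefix provides 16 addresses, for 16 x 64,512 = 1,032,192 SNAT ports. The following sketch, with placeholder resource names, attaches such a prefix to a NAT gateway; keep the combined count of individual addresses and prefix addresses within the 16-address limit:

```azurecli-interactive
# Create a /28 prefix (16 public IP addresses) and attach it to the NAT gateway
az network public-ip prefix create --resource-group myResourceGroup --name myPublicIPPrefix --length 28

az network nat gateway update --resource-group myResourceGroup --name myNATgateway --public-ip-prefixes myPublicIPPrefix
```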
+
+## Next steps
+
+For more information on Azure NAT Gateway, see the [Azure NAT Gateway documentation](../virtual-network/nat-gateway/nat-overview.md).
+
+For more information on virtual network integration, see the [documentation about virtual network integration](./overview-vnet-integration.md).
app-service Overview Patch Os Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-patch-os-runtime.md
Title: OS and runtime patching cadence
description: Learn how Azure App Service updates the OS and runtimes, what runtimes and patch level your apps has, and how you can get update announcements. Last updated 01/21/2021-+
app-service Overview Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-private-endpoint.md
+
Title: Connect privately to an App Service app using private endpoint
+description: Connect privately to an App Service app using Azure private endpoint
+
+ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c
+ Last updated : 09/29/2023++++
+# Using Private Endpoints for App Service apps
+
+> [!IMPORTANT]
+> Private endpoint is available for Windows and Linux apps, containerized or not, hosted on these App Service plans: **Basic**, **Standard**, **PremiumV2**, **PremiumV3**, **IsolatedV2**, **Functions Premium** (sometimes referred to as the Elastic Premium plan).
+
+You can use private endpoint for your App Service apps to allow clients located in your private network to securely access the app over Azure Private Link. The private endpoint uses an IP address from your Azure virtual network address space. Network traffic between a client on your private network and the app traverses over the virtual network and a Private Link on the Microsoft backbone network, eliminating exposure from the public Internet.
+
+Using private endpoint for your app enables you to:
+
+- Secure your app by configuring the private endpoint and disabling public network access to eliminate public exposure.
+- Securely connect to your app from on-premises networks that connect to the virtual network using a VPN or ExpressRoute private peering.
+- Avoid any data exfiltration from your virtual network.
+
+## Conceptual overview
+
+A private endpoint is a special network interface (NIC) for your App Service app in a subnet in your virtual network.
+When you create a private endpoint for your app, it provides secure connectivity between clients on your private network and your app. The private endpoint is assigned an IP address from the IP address range of your virtual network.
+The connection between the private endpoint and the app uses a secure [Private Link](../private-link/private-link-overview.md). Private endpoint is only used for incoming traffic to your app. Outgoing traffic won't use this private endpoint. You can inject outgoing traffic to your network in a different subnet through the [virtual network integration feature](./overview-vnet-integration.md).
+
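
As a sketch of that outbound path, virtual network integration is configured separately from the private endpoint; the names below are placeholders:

```azurecli-interactive
# Route the app's outbound traffic through a different subnet (not the private endpoint subnet)
az webapp vnet-integration add --resource-group myRG --name myWebApp --vnet myVnet --subnet myIntegrationSubnet
```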
+Each slot of an app is configured separately. You can plug up to 100 private endpoints per slot. You can't share a private endpoint between slots. The sub-resource name of a slot is `sites-<slot-name>`.
+
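
As a minimal sketch with placeholder names, the following command creates a private endpoint for a slot named *staging*; note the `sites-staging` sub-resource passed to `--group-id` (older CLI versions use `--group-ids`):

```azurecli-interactive
az network private-endpoint create --resource-group myRG --name myPrivateEndpoint --vnet-name myVnet --subnet mySubnet --private-connection-resource-id /subscriptions/{subscriptionID}/resourceGroups/myRG/providers/Microsoft.Web/sites/mywebapp --group-id sites-staging --connection-name myConnection
```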
+The subnet where you plug the private endpoint can have other resources in it; you don't need a dedicated empty subnet.
+You can also deploy the private endpoint in a different region than your app.
+
+> [!NOTE]
+> The virtual network integration feature can't use the same subnet as the private endpoint. This is a limitation of the virtual network integration feature.
+
+From a security perspective:
+
+- Private endpoint and public access can co-exist on an app. For more information, see [overview of access restrictions](./overview-access-restrictions.md#how-it-works).
+- When you enable private endpoints for your app, disable public network access to ensure isolation.
+- You can enable multiple private endpoints in other virtual networks and subnets, including virtual networks in other regions.
+- The access restrictions rules of your app aren't evaluated for traffic through the private endpoint.
+- You can eliminate the data exfiltration risk from the virtual network by removing all NSG rules where the destination is the Internet or Azure services tag.
+
+In the Web HTTP logs of your app, you find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the app. For more information, see [Getting connection Information using TCP Proxy v2](../private-link/private-link-service-overview.md#getting-connection-information-using-tcp-proxy-v2).
++
+ > [!div class="mx-imgBorder"]
+ > ![App Service app private endpoint global overview](./media/overview-private-endpoint/global-schema-web-app.png)
++
+## DNS
+
+When you use private endpoint for App Service apps, the requested URL must match the name of your app, which is mywebappname.azurewebsites.net by default.
+
+By default, without private endpoint, the public name of your web app is a canonical name (CNAME) record pointing to the cluster.
+For example, the name resolution is:
+
+|Name |Type |Value |
+|--|--|--|
+|mywebapp.azurewebsites.net|CNAME|clustername.azurewebsites.windows.net|
+|clustername.azurewebsites.windows.net|CNAME|cloudservicename.cloudapp.net|
+|cloudservicename.cloudapp.net|A|40.122.110.154|
++
+When you deploy a private endpoint, we update the DNS entry to point to the canonical name mywebapp.privatelink.azurewebsites.net.
+For example, the name resolution is:
+
+|Name |Type |Value |Remark |
+|--|--|--|--|
+|mywebapp.azurewebsites.net|CNAME|mywebapp.privatelink.azurewebsites.net| |
+|mywebapp.privatelink.azurewebsites.net|CNAME|clustername.azurewebsites.windows.net| |
+|clustername.azurewebsites.windows.net|CNAME|cloudservicename.cloudapp.net| |
+|cloudservicename.cloudapp.net|A|40.122.110.154|<--This public IP isn't your private endpoint; you receive a 403 error|
+
+You must set up a private DNS server or an Azure DNS private zone. For tests, you can modify the host entry of your test machine.
+The DNS zone that you need to create is **privatelink.azurewebsites.net**. Register the record for your app with an A record and the private endpoint IP.
+For example, the name resolution is:
+
+|Name |Type |Value |Remark |
+|--|--|--|--|
+|mywebapp.azurewebsites.net|CNAME|mywebapp.privatelink.azurewebsites.net|<--Azure creates this CNAME entry in Azure Public DNS to point the app address to the private endpoint address|
+|mywebapp.privatelink.azurewebsites.net|A|10.10.10.8|<--You manage this entry in your DNS system to point to your private endpoint IP address|
+
+After this DNS configuration, you can reach your app privately with the default name mywebappname.azurewebsites.net. You must use this name, because the default certificate is issued for `*.azurewebsites.net`.
++
+If you need to use a custom DNS name, you must add the custom name in your app and you must validate the custom name like any custom name, using public DNS resolution.
+For more information, see [custom DNS validation](./app-service-web-tutorial-custom-domain.md).
+
+For the Kudu console, or Kudu REST API (deployment with Azure DevOps self-hosted agents, for example), you must create two records pointing to the private endpoint IP in your Azure DNS private zone or your custom DNS server. The first record is for your app; the second is for the SCM of your app.
+
+| Name | Type | Value |
+|--|--|--|
+| mywebapp.privatelink.azurewebsites.net | A | PrivateEndpointIP |
+| mywebapp.scm.privatelink.azurewebsites.net | A | PrivateEndpointIP |
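
As a sketch with placeholder names and the example IP from the preceding tables, you can create the zone, link it to your virtual network, and add both A records with the Azure CLI:

```azurecli-interactive
az network private-dns zone create --resource-group myRG --name privatelink.azurewebsites.net

az network private-dns link vnet create --resource-group myRG --zone-name privatelink.azurewebsites.net --name myDnsLink --virtual-network myVnet --registration-enabled false

az network private-dns record-set a add-record --resource-group myRG --zone-name privatelink.azurewebsites.net --record-set-name mywebapp --ipv4-address 10.10.10.8

az network private-dns record-set a add-record --resource-group myRG --zone-name privatelink.azurewebsites.net --record-set-name mywebapp.scm --ipv4-address 10.10.10.8
```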
++
+## App Service Environment v3 special consideration
+
+To enable private endpoint for apps hosted in an IsolatedV2 plan (App Service Environment v3), you have to enable private endpoint support at the App Service Environment level.
+You can activate the feature through the Azure portal in the App Service Environment configuration pane, or with the following CLI command:
+
+```azurecli-interactive
+az appservice ase update --name myasename --allow-new-private-endpoint-connections true
+```
+
+## Specific requirements
+
+If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but you also automatically register the provider when you create the first web app in a subscription.
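
A minimal sketch of the explicit registration, run against the subscription that contains the virtual network:

```azurecli-interactive
az account set --subscription {subscriptionID}
az provider register --namespace Microsoft.Web

# Verify the registration; the state should eventually show "Registered"
az provider show --namespace Microsoft.Web --query registrationState
```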
+
+## Pricing
+
+For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
++
+## Limitations
+
+* When you use Azure Functions in the Elastic Premium plan with a private endpoint, you must have direct network access to run or execute the function in the Azure portal; otherwise, you receive an HTTP 403 error. In other words, your browser must be able to reach the private endpoint to execute the function from the Azure portal.
+* You can connect up to 100 private endpoints to a particular app.
+* Remote Debugging functionality isn't available through the private endpoint. The recommendation is to deploy the code to a slot and remote debug it there.
+* FTP access is provided through the inbound public IP address. Private endpoint doesn't support FTP access to the app.
+* IP-Based SSL isn't supported with private endpoints.
+* Apps that you configure with private endpoints cannot use [service endpoint-based access restriction rules](./overview-access-restrictions.md#access-restriction-rules-based-on-service-endpoints).
+* Private endpoint naming must follow the rules defined for resources of type `Microsoft.Network/privateEndpoints`. Naming rules can be found [here](../azure-resource-manager/management/resource-name-rules.md#microsoftnetwork).
+
+We're improving the Azure Private Link feature and private endpoints regularly. Check [this article](../private-link/private-endpoint-overview.md#limitations) for up-to-date information about limitations.
+
+## Next steps
+
+- To deploy private endpoint for your app through the portal, see [How to connect privately to an app with the Azure portal](../private-link/tutorial-private-endpoint-webapp-portal.md)
+- To deploy private endpoint for your app using Azure CLI, see [How to connect privately to an app with Azure CLI](./scripts/cli-deploy-privateendpoint.md)
+- To deploy private endpoint for your app using PowerShell, see [How to connect privately to an app with PowerShell](./scripts/powershell-deploy-private-endpoint.md)
+- To deploy private endpoint for your app using Azure template, see [How to connect privately to an app with Azure template](./scripts/template-deploy-private-endpoint.md)
+- For an end-to-end example of how to connect a frontend app to a secured backend app with virtual network integration and private endpoint by using an ARM template, see this [quickstart](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection)
+- For an end-to-end example of how to connect a frontend app to a secured backend app with virtual network integration and private endpoint by using Terraform, see this [sample](./scripts/terraform-secure-backend-frontend.md)
application-gateway Ingress Controller Install Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-existing.md
description: This article provides information on how to deploy an Application G
-+ Last updated 07/28/2023
azure-app-configuration Concept Point Time Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-point-time-snapshot.md
+ Last updated 05/24/2023
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
description: Learn how to import or export configuration data to or from Azure A
+ Last updated 08/24/2022
azure-app-configuration Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/overview-managed-identity.md
Last updated 02/25/2020
-+ # How to use managed identities for Azure App Configuration
azure-arc Configure Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/configure-managed-instance.md
description: Configure Azure Arc-enabled SQL managed instance
+
Trace flags can be enabled as follows:
```azurecli
az sql mi-arc update -n <NAME_OF_SQL_MI> --k8s-namespace <namespace> --use-k8s --trace-flags "3614,1234"
```
azure-arc Create Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-indirect-cli.md
description: Create an Azure Arc data controller, on a typical multi-node Kubern
+
azure-arc Create Sql Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-sql-managed-instance.md
description: Deploy Azure Arc-enabled SQL Managed Instance
+
azure-arc Delete Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-managed-instance.md
Title: Delete an Azure Arc-enabled SQL Managed Instance description: Learn how to delete an Azure Arc-enabled SQL Managed Instance and optionally, reclaim associated Kubernetes persistent volume claims (PVCs).-+
azure-arc Delete Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/delete-postgresql-server.md
description: Delete an Azure Arc-enabled Postgres Hyperscale server group
+
azure-arc Maintenance Window https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/maintenance-window.md
description: Article describes how to set a maintenance window
+
azure-arc Managed Instance Disaster Recovery Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery-cli.md
description: Describes how to configure disaster recovery with a failover group
-+
azure-arc Managed Instance High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-high-availability.md
-+ # High Availability with Azure Arc-enabled SQL Managed Instance
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
-+ Last updated 06/17/2022
azure-arc Restore Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/restore-postgresql.md
description: Explains how to restore Arc-enabled PostgreSQL server. You can rest
+
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
description: Article describes how to upgrade a directly connected Azure Arc dat
-+
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
description: Article describes how to upgrade an indirectly connected Azure Arc
+
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
Title: "Upgrade Azure Arc-enabled Kubernetes agents" Last updated 08/28/2023 + description: "Control agent upgrades for Azure Arc-enabled Kubernetes"
If you create a support request and are using a version that is outside of the s
* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).
* Already have a Kubernetes cluster connected to Azure Arc? [Create configurations on your Azure Arc-enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
-* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
+* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Diagnose Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/diagnose-connection-issues.md
Title: "Diagnose connection issues for Azure Arc-enabled Kubernetes clusters" Last updated 12/06/2022 + description: "Learn how to resolve common issues when connecting Kubernetes clusters to Azure Arc."- # Diagnose connection issues for Azure Arc-enabled Kubernetes clusters
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes
description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Last updated 08/09/2023 -+ # Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes
azure-arc Api Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/api-extended-security-updates.md
To link a license, execute the following commands:
```
PUT https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/machines/MACHINE_NAME/licenseProfiles/default?api-version=2023-06-20-preview
-{
- “location”: “SAME_REGION_AS_MACHINE”,
- “properties”: {
- “esuProfile”: {
- “assignedLicense”: “RESOURCE_ID_OF_LICENSE”
- }
- }
+{
+ "location": "SAME_REGION_AS_MACHINE",
+ "properties": {
+ "esuProfile": {
+ "assignedLicense": "RESOURCE_ID_OF_LICENSE"
+ }
+ }
}
```
To unlink a license, execute the following commands:
```
PUT https://management.azure.com/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/Microsoft.HybridCompute/machines/MACHINE_NAME/licenseProfiles/default?api-version=2023-06-20-preview

{
- “location”: “SAME_REGION_AS_MACHINE”,
- “properties”: {
- “esuProfile”: {
- “assignedLicense”: “”
+ "location": "SAME_REGION_AS_MACHINE",
+ "properties": {
+ "esuProfile": {
+ "assignedLicense": ""
}
}
}
```
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
Title: Deliver Extended Security Updates for Windows Server 2012 description: Learn how to deliver Extended Security Updates for Windows Server 2012. Previously updated : 09/14/2023 Last updated : 10/05/2023
The **Licenses** tab displays Azure Arc WS2012 licenses that are available. From
:::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-licenses.png" alt-text="Screenshot showing existing licenses." lightbox="media/deliver-extended-security-updates/extended-security-updates-licenses.png":::
-1. To create a new WS2012 license, select **Create ESUs license**, and then provide the information required to configure the license on the page.
+1. To create a new WS2012 license, select **Create**, and then provide the information required to configure the license on the page.
For details on how to complete this step, see [License provisioning guidelines for Extended Security Updates for Windows Server 2012](license-extended-security-updates.md).
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
We recommend you deploy your machines to Azure Arc in preparation for when the r
There are several at-scale onboarding options for Azure Arc-enabled servers, including running a [Custom Task Sequence](onboard-configuration-manager-custom-task.md) through Configuration Manager and deploying a [Scheduled Task through Group Policy](onboard-group-policy-powershell.md).
+> [!NOTE]
+> Delivery of ESUs through Azure Arc to virtual machines running on Virtual Desktop Infrastructure (VDI) is not supported. VDI systems should use Multiple Activation Keys (MAK) to apply ESUs. See [Access your Multiple Activation Key from the Microsoft 365 Admin Center](/windows-server/get-started/extended-security-updates-deploy) to learn more.
+>
+ ### Networking Connectivity options include public endpoint, proxy server, and private link or Azure Express Route. Review the [networking prerequisites](network-requirements.md) to prepare non-Azure environments for deployment to Azure Arc.
azure-arc Administer Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md
Last updated 08/18/2023 + # Perform ongoing administration for Arc-enabled VMware vSphere
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
The following list contains answers to commonly asked questions about Azure Cach
- [Do I lose data from my cache during scaling?](#do-i-lose-data-from-my-cache-during-scaling)
- [Can I use all the features of Premium tier after scaling?](#can-i-use-all-the-features-of-premium-tier-after-scaling)
- [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling)
-- [Is my cache be available during scaling?](#is-my-cache-be-available-during-scaling)
+- [Will my cache be available during scaling?](#will-my-cache-be-available-during-scaling)
- [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication)
- [Operations that aren't supported](#operations-that-arent-supported)
- [How long does scaling take?](#how-long-does-scaling-take)
If you configured a custom value for the `databases` setting during cache creati
While Standard, Premium, Enterprise, and Enterprise Flash caches have a SLA for availability, there's no SLA for data loss.
-### Is my cache be available during scaling?
+### Will my cache be available during scaling?
- **Standard**, **Premium**, **Enterprise**, and **Enterprise Flash** caches remain available during the scaling operation. However, connection blips can occur while scaling these caches, and also while scaling from **Basic** to **Standard** caches. These connection blips are expected to be small and redis clients can generally re-establish their connection instantly.
-- For Enterprise and Enterprise Flash caches using active geo-replication, scaling only a subset of linked caches can introduce issues over time in some cases. We recommend scaling all caches in the geo-replication group together were possible.
+- For **Enterprise** and **Enterprise Flash** caches using active geo-replication, scaling only a subset of linked caches can introduce issues over time in some cases. We recommend scaling all caches in the geo-replication group together where possible.
- **Basic** caches are offline during scaling operations to a different size. Basic caches remain available when scaling from **Basic** to **Standard** but might experience a small connection blip. If a connection blip occurs, Redis clients can generally re-establish their connection instantly.

### Are there scaling limitations with geo-replication?
azure-functions Configure Encrypt At Rest Using Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-encrypt-at-rest-using-cmk.md
Title: Encrypt your application source at rest description: Encrypt your application data in Azure Storage and deploy it as a package file. + Last updated 03/06/2020
Only the cost associated with the Azure Storage Account and any applicable egres
## Next steps - [Key Vault references for App Service](../app-service/app-service-key-vault-references.md)-- [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)
+- [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md)
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
description: Learn how to use a .NET isolated worker process to run your C# func
Last updated 07/21/2023-+ recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
description: Understand how to develop functions with Java.
Last updated 09/14/2018 ms.devlang: java-+ # Azure Functions Java developer guide
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
description: This article shows you how to upgrade your existing function apps r
Last updated 07/31/2023-+ zone_pivot_groups: programming-languages-set-functions
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Title: Migrate apps from Azure Functions version 3.x to 4.x description: This article shows you how to upgrade your existing function apps running on version 3.x of the Azure Functions runtime to be able to run on version 4.x of the runtime. -+ Last updated 07/31/2023 zone_pivot_groups: programming-languages-set-functions
azure-functions Update Java Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/update-java-versions.md
Title: Update Java versions in Azure Functions description: Learn how to update an existing function app in Azure Functions to run on a new version of Java. + Last updated 09/14/2023 zone_pivot_groups: app-service-platform-windows-linux
azure-monitor Availability Test Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-test-migration.md
The following steps walk you through the process of creating [standard tests](av
> [!IMPORTANT] >
-> On 30 September 2026, the **[URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability)** will be retired, and ping tests. Before that date, you'll need to transition to **[standard tests](/editor/availability-standard-tests.md)**.
+> On September 30th, 2026, **[URL ping tests](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability) retire**. Transition to **[standard tests](/editor/availability-standard-tests.md)** before then.
> > - A cost is associated with running **[standard tests](/editor/availability-standard-tests.md)**. Once you create a **[standard test](/editor/availability-standard-tests.md)**, you will be charged for test executions. > - Refer to **[Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/#pricing)** before starting this process.
The following steps walk you through the process of creating [standard tests](av
#### When should I use these commands?
-We recommend using these commands to migrate a URL ping test to a standard test and take advantage of the available capabilities. Remember, this migration is optional.
+Migrate URL ping tests to standard tests now to take advantage of new capabilities.
#### Do these steps work for both HTTP and HTTPS endpoints?
azure-monitor Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-ad-authentication.md
services.AddApplicationInsightsTelemetry(new ApplicationInsightsServiceOptions
}); ``` - ### [Node.js](#tab/nodejs) > [!NOTE]
appInsights.defaultClient.config.aadTokenCredential = credential;
``` - ### [Java](#tab/java) > [!NOTE]
The following example shows how to configure the Java agent to use a service pri
:::image type="content" source="media/azure-ad-authentication/client-secret-cs.png" alt-text="Screenshot that shows the Client secrets section with the client secret." lightbox="media/azure-ad-authentication/client-secret-cs.png":::
+#### Environment variable configuration
+
+The `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable lets Application Insights authenticate to Azure AD and send telemetry.
+
+ - For system-assigned identity:
+
+ | App setting | Value |
+ | -- | |
+ | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD` |
+
+ - For user-assigned identity:
+
+ | App setting | Value |
+ | - | -- |
+ | APPLICATIONINSIGHTS_AUTHENTICATION_STRING | `Authorization=AAD;ClientId={Client id of the User-Assigned Identity}` |
+
+Set the `APPLICATIONINSIGHTS_AUTHENTICATION_STRING` environment variable to the appropriate value from the table.
+
+**In Unix/Linux:**
+
+```shell
+export APPLICATIONINSIGHTS_AUTHENTICATION_STRING="Authorization=AAD"
+```
+
+**In Windows:**
+
+```shell
+set APPLICATIONINSIGHTS_AUTHENTICATION_STRING=Authorization=AAD
+```
+
+After setting it, restart your application. It now sends telemetry to Application Insights using Azure AD authentication.
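+
+For a user-assigned identity, the authentication string also carries the client ID, as the table above shows. A minimal sketch; the GUID is a placeholder that you replace with your identity's client ID:
+
+```shell
+# Hedged sketch: replace the GUID with the client ID of your user-assigned managed identity.
+export APPLICATIONINSIGHTS_AUTHENTICATION_STRING="Authorization=AAD;ClientId=00000000-0000-0000-0000-000000000000"
+```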
### [Python](#tab/python)
is included starting with beta version [opencensus-ext-azure 1.1b0](https://pypi
Construct the appropriate [credentials](/python/api/overview/azure/identity-readme#credentials) and pass them into the constructor of the Azure Monitor exporter. Make sure your connection string is set up with the instrumentation key and ingestion endpoint of your resource.
-The following types of authentication are supported by the `Opencensus` Azure Monitor exporters. We recommend using managed identities in production environments.
+The `OpenCensus` Azure Monitor exporters support these authentication types. We recommend using managed identities in production environments.
#### System-assigned managed identity
tracer = Tracer(
... ``` -- ## Disable local authentication
You can disable local authentication by using the Azure portal or Azure Policy o
1. From your Application Insights resource, select **Properties** under the **Configure** heading in the menu on the left. Select **Enabled (click to change)** if the local authentication is enabled.
- :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot that shows Properties under the Configure section and the Enabled (click to change) local authentication button.":::
+ :::image type="content" source="./media/azure-ad-authentication/enabled.png" alt-text="Screenshot that shows Properties under the Configure section and the Enabled (select to change) local authentication button.":::
1. Select **Disabled** and apply changes.
You can disable local authentication by using the Azure portal or Azure Policy o
1. After your resource has disabled local authentication, you'll see the corresponding information in the **Overview** pane.
- :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (click to change) local authentication button.":::
+ :::image type="content" source="./media/azure-ad-authentication/overview.png" alt-text="Screenshot that shows the Overview tab with the Disabled (select to change) local authentication button.":::
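
To script the same change instead of using the portal, here's a minimal sketch with the generic Azure CLI resource command. The resource names are placeholders, and the dedicated `az monitor app-insights` extension may expose an equivalent option:

```shell
# Hedged sketch: set the DisableLocalAuth property on an Application Insights component.
# Replace the resource group and component names with your own.
az resource update \
  --resource-group myResourceGroup \
  --name myAppInsights \
  --resource-type "Microsoft.Insights/components" \
  --set properties.DisableLocalAuth=true
```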
### Azure Policy
-Azure Policy for `DisableLocalAuth` will deny users the ability to create a new Application Insights resource without this property set to `true`. The policy name is `Application Insights components should block non-AAD auth ingestion`.
+Azure Policy for `DisableLocalAuth` denies users the ability to create a new Application Insights resource without this property set to `true`. The policy name is `Application Insights components should block non-AAD auth ingestion`.
To apply this policy definition to your subscription, [create a new policy assignment and assign the policy](../../governance/policy/assign-policy-portal.md).
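
As a hedged sketch of the same assignment with the Azure CLI (this assumes the built-in definition is visible to your subscription and looks it up by display name; the assignment name is a placeholder):

```shell
# Hedged sketch: find the built-in policy definition by display name,
# then assign it at the subscription scope.
definitionId=$(az policy definition list \
  --query "[?displayName=='Application Insights components should block non-AAD auth ingestion'].id" \
  --output tsv)
az policy assignment create --name "block-non-aad-ingestion" --policy "$definitionId"
```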
The following example shows the Azure Resource Manager template you can use to c
### Token audience
-When developing a custom client to obtain an access token from Azure AD for the purpose of submitting telemetry to Application Insights, refer to the table provided below to determine the appropriate audience string for your particular host environment.
+When developing a custom client to obtain an access token from Azure AD for submitting telemetry to Application Insights, refer to the following table to determine the appropriate audience string for your particular host environment.
| Azure cloud version | Token audience value | | | |
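
To see what the audience means in practice, the Azure CLI can mint such a token. A minimal sketch; `https://monitor.azure.com` is the public-cloud audience, and sovereign clouds use different values from the table:

```shell
# Hedged sketch: request a bearer token scoped to the Azure Monitor audience.
az account get-access-token --resource "https://monitor.azure.com" --query accessToken --output tsv
```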
If you're using sovereign clouds, you can find the audience information in the c
_InstrumentationKey={profile.InstrumentationKey};IngestionEndpoint={ingestionEndpoint};LiveEndpoint={liveDiagnosticsEndpoint};AADAudience={aadAudience}_
-Please note that the audience parameter, AADAudience, may vary depending on your specific environment.
+The audience parameter, AADAudience, may vary depending on your specific environment.
## Troubleshooting
This section provides distinct troubleshooting scenarios and steps that you can
### Ingestion HTTP errors
-The ingestion service will return specific errors, regardless of the SDK language. Network traffic can be collected by using a tool such as Fiddler. You should filter traffic to the ingestion endpoint set in the connection string.
+The ingestion service returns specific errors, regardless of the SDK language. Network traffic can be collected by using a tool such as Fiddler. You should filter traffic to the ingestion endpoint set in the connection string.
#### HTTP/1.1 400 Authentication not supported
You can inspect network traffic by using a tool like Fiddler. To enable the traf
Or add the following JVM args while running your application: `-Djava.net.useSystemProxies=true -Dhttps.proxyHost=localhost -Dhttps.proxyPort=8888`
-If Azure AD is enabled in the agent, outbound traffic will include the HTTP header `Authorization`.
+If Azure AD is enabled in the agent, outbound traffic includes the HTTP header `Authorization`.
#### 401 Unauthorized
If the following WARN message is seen in the log file `WARN c.m.a.TelemetryChann
If you're using Fiddler, you might see the response header `HTTP/1.1 403 Forbidden - provided credentials do not grant the access to ingest the telemetry into the component`. The root cause might be one of the following reasons:-- You've created the resource with system-assigned managed identity enabled or you might have associated the user-assigned identity with the resource but forgot to add the Monitoring Metrics Publisher role to the resource (if using SAMI) or user-assigned identity (if using UAMI).
+- You've created the resource with a system-assigned managed identity or associated a user-assigned identity with it. However, you might have forgotten to add the Monitoring Metrics Publisher role to the resource (if using SAMI) or the user-assigned identity (if using UAMI).
- You've provided the right credentials to get the access tokens, but the credentials don't belong to the right Application Insights resource. Make sure you see your resource (VM or app service) or user-assigned identity with Monitoring Metrics Publisher roles in your Application Insights resource. #### Invalid Tenant ID
If the following exception is seen in the log file `com.microsoft.aad.msal4j.Msa
If the following exception is seen in the log file `com.microsoft.aad.msal4j.MsalServiceException: Application with identifier <CLIENT_ID> was not found in the directory`, it indicates the agent wasn't successful in acquiring the access token. The probable reason is that you've provided an invalid or the wrong client ID in your client secret configuration
- This scenario can occur if the application hasn't been installed by the administrator of the tenant or consented to by any user in the tenant. You might have sent your authentication request to the wrong tenant.
+ This scenario can occur if the tenant administrator hasn't installed the application or no user in the tenant has consented to it. You might have sent your authentication request to the wrong tenant.
### [Python](#tab/python)
azure-monitor Container Insights Custom Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-custom-metrics.md
Title: Custom metrics collected by Container insights description: Describes the custom metrics collected for a Kubernetes cluster by Container insights in Azure Monitor. -+ Last updated 09/28/2022
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
Title: Monitor Azure Arc-enabled Kubernetes clusters Last updated 08/02/2023 + description: Collect metrics and logs of Azure Arc-enabled Kubernetes clusters using Azure Monitor.
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
Follow the instructions to configure an existing ConfigMap or to use a new one.
## [Azure portal](#tab/configure-portal)
+>[!NOTE]
+> DCR-based configuration isn't supported for service principal-based clusters. Please [migrate clusters that use a service principal to managed identity](./container-insights-enable-aks.md#migrate-to-managed-identity-authentication) to use this experience.
+ 1. In the Insights section of your Kubernetes cluster, select the **Monitoring Settings** button from the top toolbar ![Screenshot that shows monitoring settings.](./media/container-insights-logging-v2/container-insights-v2-monitoring-settings.png)
azure-monitor Container Insights V2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-v2-migration.md
# Migrate from ContainerLog to ContainerLogV2
-With the upgraded offering of ContainerLogV2 becoming generally available, on 30th September 2026, the ContainerLog table will be retired. If you currently ingest container insights data to the ContainerLog table, please make sure to transition to using ContainerLogV2 prior to that date.
+With the upgraded ContainerLogV2 offering now generally available, the ContainerLog table will be retired on 30th September 2026. If you currently ingest container insights data to the ContainerLog table, please transition to ContainerLogV2 before that date.
>[!NOTE] > Support for ingesting the ContainerLog table will be **retired on 30th September 2026**.
To transition to ContainerLogV2, we recommend the following approach.
1. Learn about the feature differences between ContainerLog and ContainerLogV2 2. Assess the impact migrating to ContainerLogV2 may have on your existing queries, alerts, or dashboards
-3. Enable the ContainerLogV2 schema through either the container insights data collection rules (DCRs) or ConfigMap
+3. [Enable the ContainerLogV2 schema](container-insights-logging-v2.md) through either the container insights data collection rules (DCRs) or ConfigMap
4. Validate that you are now ingesting ContainerLogV2 to your Log Analytics workspace. ## ContainerLog vs ContainerLogV2 schema The following table highlights the key differences between using ContainerLog and ContainerLogV2 schema.
+>[!NOTE]
+> DCR-based configuration isn't supported for service principal-based clusters. Please [migrate clusters that use a service principal to managed identity](./container-insights-enable-aks.md#migrate-to-managed-identity-authentication) to use this experience.
+ | Feature differences | ContainerLog | ContainerLogV2 | | - | -- | - |
-| Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR |
+| Onboarding | Only configurable through the ConfigMap | Configurable through both the ConfigMap and DCR\* |
| Pricing | Only compatible with full-priced analytics logs | Supports the low cost basic logs tier in addition to analytics logs | | Querying | Requires multiple join operations with inventory tables for standard queries | Includes additional pod and container metadata to reduce query complexity and join operations | | Multiline | Not supported, multiline entries are split into multiple rows | Support for multiline logging to allow consolidated, single entries for multiline output |
+\* DCR enablement is not supported for service principal-based clusters; it must be enabled through the ConfigMap.
+ ## Assess the impact on existing alerts
-If you are currently using ContainerLog in your alerts, then migrating to ContainerLogV2 will require updates to your alert queries for them to continue functioning as expected.
+If you are currently using ContainerLog in your alerts, then migrating to ContainerLogV2 requires updates to your alert queries for them to continue functioning as expected.
To scan for alerts that may be referencing the ContainerLog table, run the following Azure Resource Graph query:
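
A minimal sketch of such a query, run through the Azure CLI resource-graph extension; this is an assumption about the query's shape (a search over scheduled query rules), not necessarily the exact query the article ships:

```shell
# Hedged sketch: surface scheduled-query alert rules whose definition mentions ContainerLog.
# Requires: az extension add --name resource-graph
az graph query -q "
resources
| where type =~ 'microsoft.insights/scheduledqueryrules'
| where properties contains 'ContainerLog'
| project id, name, resourceGroup
"
```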
azure-monitor Prometheus Authorization Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-authorization-proxy.md
Title: Azure Active Directory authorization proxy description: Azure Active Directory authorization proxy + Last updated 07/10/2022
azure-monitor Prometheus Metrics From Arc Enabled Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-from-arc-enabled-cluster.md
description: How to configure your Azure Arc-enabled Kubernetes cluster (preview
+ Last updated 05/07/2023
azure-monitor Prometheus Remote Write Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write-active-directory.md
Title: Remote-write in Azure Monitor Managed Service for Prometheus using Azure
description: Describes how to configure remote-write to send data from self-managed Prometheus running in your Kubernetes cluster running on-premises or in another cloud using Azure Active Directory authentication. + Last updated 11/01/2022
See [Azure Monitor managed service for Prometheus remote write](prometheus-remot
- [Remote-write in Azure Monitor Managed Service for Prometheus](prometheus-remote-write.md) - [Configure remote write for Azure Monitor managed service for Prometheus using managed identity authentication](./prometheus-remote-write-managed-identity.md) - [Configure remote write for Azure Monitor managed service for Prometheus using Azure Workload Identity (preview)](./prometheus-remote-write-azure-workload-identity.md)-- [Configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity (preview)](./prometheus-remote-write-azure-ad-pod-identity.md)
+- [Configure remote write for Azure Monitor managed service for Prometheus using Azure AD pod identity (preview)](./prometheus-remote-write-azure-ad-pod-identity.md)
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
+ Last updated 03/28/2023
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Azure NetApp Files customer-managed keys is supported for the following regions:
* Australia East * Australia Southeast * Brazil South
+* Brazil Southeast
* Canada Central * Canada East * Central India
azure-resource-manager Create Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/create-resource-group.md
Title: Use Bicep to create a new resource group description: Describes how to use Bicep to create a new resource group in your Azure subscription. + Last updated 09/26/2023
To learn about other scopes, see:
* [Resource group deployments](deploy-to-resource-group.md) * [Subscription deployments](deploy-to-subscription.md) * [Management group deployments](deploy-to-management-group.md)
-* [Tenant deployments](deploy-to-tenant.md)
+* [Tenant deployments](deploy-to-tenant.md)
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Title: Use deployment scripts in Bicep | Microsoft Docs description: use deployment scripts in Bicep.--- Previously updated : 05/12/2023- Last updated : 10/04/2023 # Use deployment scripts in Bicep
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
forceUpdateTag: '1' containerSettings: { containerGroupName: 'mycustomaci'
+ subnetIds: [
+ {
+ id: '/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet'
+ }
+ ]
} storageAccountSettings: { storageAccountName: 'myStorageAccount'
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
] scriptContent: ''' param([string] $name)
- $output = \'Hello {0}. The username is {1}, the password is {2}.\' -f $name,\${Env:UserName},\${Env:Password}
+ $output = 'Hello {0}. The username is {1}, the password is {2}.' -f $name,${Env:UserName},${Env:Password}
Write-Output $output $DeploymentScriptOutputs = @{}
- $DeploymentScriptOutputs[\'text\'] = $output
+ $DeploymentScriptOutputs['text'] = $output
''' // or primaryScriptUri: 'https://raw.githubusercontent.com/Azure/azure-docs-bicep-samples/main/samples/deployment-script/inlineScript.ps1' supportingScriptUris: [] timeout: 'PT30M'
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
Property value details: -- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To login with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
+- <a id='identity'></a>`identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script or run the deployment script in a private network. For more information, see [Access private virtual network](#access-private-virtual-network). For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
- `tags`: Deployment script tags. If the deployment script service generates a storage account and a container instance, the tags are passed to both resources, which can be used to identify them. Another way to identify these resources is through their suffixes, which contain "azscripts". For more information, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). - `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**. - `forceUpdateTag`: Changing this value between Bicep file deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once).-- `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated.
+- `containerSettings`: Specify the settings to customize Azure Container Instance. Deployment script requires a new Azure Container Instance. You can't specify an existing Azure Container Instance. However, you can customize the container group name by using `containerGroupName`. If not specified, the group name is automatically generated. You can also specify `subnetIds` to run the deployment script in a private network. For more information, see [Access private virtual network](#access-private-virtual-network).
- `storageAccountSettings`: Specify the settings to use an existing storage account. If `storageAccountName` is not specified, a storage account is automatically created. See [Use an existing storage account](#use-existing-storage-account). - `azPowerShellVersion`/`azCliVersion`: Specify the module version to be used. See a list of [supported Azure PowerShell versions](https://mcr.microsoft.com/v2/azuredeploymentscripts-powershell/tags/list). The version determines which container image to use:
The identity that your deployment script uses needs to be authorized to work wit
## Access private virtual network
-The supporting resources including the container instance can't be deployed to a private virtual network. To access a private virtual network from your deployment script, you can create another virtual network with a publicly accessible virtual machine or a container instance, and create a peering from this virtual network to the private virtual network.
+With Microsoft.Resources/deploymentScripts version 2023-08-01, you can run deployment scripts in private networks with some additional configuration.
+
+- Create a user-assigned managed identity, and specify it in the `identity` property. To assign the identity, see [Identity](#identity).
+- Create a storage account in the private network, and specify the deployment script to use the existing storage account. To specify an existing storage account, see [Use existing storage account](#use-existing-storage-account). Some additional configuration is required for the storage account.
+
+ 1. Open the storage account in the [Azure portal](https://portal.azure.com).
+ 1. From the left menu, select **Access Control (IAM)**, and then select the **Role assignments** tab.
+ 1. Add the `Storage File Data Privileged Contributor` role to the user-assigned managed identity.
+ 1. From the left menu, under **Security + networking**, select **Networking**, and then select **Firewalls and virtual networks**.
+ 1. Select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="./media/deployment-script-bicep/resource-manager-deployment-script-access-vnet-config-storage.png" alt-text="Screenshot of configuring storage account for accessing private network.":::
+
+ 1. Under **Virtual networks**, add a subnet. In the screenshot, the subnet is called *dspvnVnet*.
+ 1. Under **Exceptions**, select **Allow Azure services on the trusted services list to access this storage account**.
+
+The following Bicep file shows how to configure the environment for running a deployment script:
+
+```bicep
+@maxLength(10) // required max length since the storage account has a max of 26 chars
+param prefix string
+param location string = resourceGroup().location
+param userAssignedIdentityName string = '${prefix}Identity'
+param storageAccountName string = '${prefix}stg${uniqueString(resourceGroup().id)}'
+param vnetName string = '${prefix}Vnet'
+param subnetName string = '${prefix}Subnet'
+
+resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' = {
+ name: vnetName
+ location: location
+ properties: {
+ addressSpace: {
+ addressPrefixes: [
+ '10.0.0.0/16'
+ ]
+ }
+ enableDdosProtection: false
+ subnets: [
+ {
+ name: subnetName
+ properties: {
+ addressPrefix: '10.0.0.0/24'
+ serviceEndpoints: [
+ {
+ service: 'Microsoft.Storage'
+ }
+ ]
+ delegations: [
+ {
+ name: 'Microsoft.ContainerInstance.containerGroups'
+ properties: {
+ serviceName: 'Microsoft.ContainerInstance/containerGroups'
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+
+resource subnet 'Microsoft.Network/virtualNetworks/subnets@2023-05-01' existing = {
+ parent: vnet
+ name: subnetName
+}
+resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
+ name: storageAccountName
+ location: location
+ sku: {
+ name: 'Standard_LRS'
+ }
+ kind: 'StorageV2'
+ properties: {
+ networkAcls: {
+ bypass: 'AzureServices'
+ virtualNetworkRules: [
+ {
+ id: subnet.id
+ action: 'Allow'
+ state: 'Succeeded'
+ }
+ ]
+ defaultAction: 'Deny'
+ }
+ }
+}
+
+resource userAssignedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = {
+ name: userAssignedIdentityName
+ location: location
+}
+
+resource storageFileDataPrivilegedContributor 'Microsoft.Authorization/roleDefinitions@2022-04-01' existing = {
+ name: '69566ab7-960f-475b-8e7c-b3118f30c6bd' // Storage File Data Privileged Contributor
+ scope: tenant()
+}
+
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+ scope: storageAccount
+
+ name: guid(storageFileDataPrivilegedContributor.id, userAssignedIdentity.id, storageAccount.id)
+ properties: {
+ principalId: userAssignedIdentity.properties.principalId
+ roleDefinitionId: storageFileDataPrivilegedContributor.id
+ principalType: 'ServicePrincipal'
+ }
+}
+```
+
+You can use the following Bicep file to test the deployment:
+
+```bicep
+param prefix string
+
+param location string = resourceGroup().location
+param utcValue string = utcNow()
+
+param storageAccountName string
+param vnetName string
+param subnetName string
+param userAssignedIdentityName string
+
+resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' existing = {
+ name: vnetName
+
+ resource subnet 'subnets' existing = {
+ name: subnetName
+ }
+}
+
+resource userAssignedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' existing = {
+ name: userAssignedIdentityName
+}
+
+resource dsTest 'Microsoft.Resources/deploymentScripts@2023-08-01' = {
+ name: '${prefix}DS'
+ location: location
+ identity: {
+ type: 'userAssigned'
+ userAssignedIdentities: {
+ '${userAssignedIdentity.id}': {}
+ }
+ }
+ kind: 'AzureCLI'
+ properties: {
+ forceUpdateTag: utcValue
+ azCliVersion: '2.47.0'
+ storageAccountSettings: {
+ storageAccountName: storageAccountName
+ }
+ containerSettings: {
+ subnetIds: [
+ {
+ id: vnet::subnet.id
+ }
+ ]
+ }
+ scriptContent: 'echo "Hello world!"'
+ retentionInterval: 'P1D'
+ cleanupPreference: 'OnExpiration'
+ }
+}
+```
## Next steps
azure-resource-manager Deploy Bicep Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-bicep-definition.md
Title: Use Bicep to deploy an Azure Managed Application definition description: Describes how to use Bicep to deploy an Azure Managed Application definition from your service catalog. -+ Last updated 05/12/2023
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription.-- Last updated 04/24/2023
azure-resource-manager Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md
Title: Set up preview features in Azure subscription
description: Describes how to list, register, or unregister preview features in your Azure subscription for a resource provider. Last updated 08/10/2022-+ # Customer intent: As an Azure user, I want to use preview features in my subscription so that I can expose a resource provider's preview functionality.
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager
description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Last updated 07/07/2022 -+ # Azure Resource Graph sample queries for Azure Resource Manager
azure-resource-manager Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md
Title: TLS version supported by Azure Resource Manager
description: Describes the deprecation of TLS versions prior to 1.2 in Azure Resource Manager Previously updated : 08/24/2023 Last updated : 10/05/2023 # Migrating to TLS 1.2 for Azure Resource Manager Transport Layer Security (TLS) is a security protocol that establishes encryption channels over computer networks. TLS 1.2 is the current industry standard and is supported by Azure Resource Manager. For backwards compatibility, Azure Resource Manager also supports earlier versions, such as TLS 1.0 and 1.1, but that support is ending.
-To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 on November 30, 2023.**
+To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 on September 30, 2024.**
This article provides guidance for removing dependencies on older security protocols.
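
One quick client-side check is to confirm that TLS 1.2 can be negotiated with the Azure Resource Manager endpoint. A minimal sketch, assuming `openssl` is available on the machine:

```shell
# Hedged sketch: force a TLS 1.2 handshake against the ARM endpoint and show the result.
openssl s_client -connect management.azure.com:443 -tls1_2 </dev/null 2>/dev/null \
  | grep -E 'Protocol|Cipher'
```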
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Title: Use deployment scripts in templates | Microsoft Docs description: Use deployment scripts in Azure Resource Manager templates.--- Previously updated : 05/22/2023- Last updated : 10/04/2023 # Use deployment scripts in ARM templates
The following JSON is an example. For more information, see the latest [template
Property value details: -- `identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
+- <a id='identity'></a>`identity`: For deployment script API version 2020-10-01 or later, a user-assigned managed identity is optional unless you need to perform any Azure-specific actions in the script. For the API version 2019-10-01-preview, a managed identity is required as the deployment script service uses it to execute the scripts. When the identity property is specified, the script service calls `Connect-AzAccount -Identity` before invoking the user script. Currently, only user-assigned managed identity is supported. To log in with a different identity, you can call [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) in the script.
- `tags`: Deployment script tags. If the deployment script service generates a storage account and a container instance, the tags are passed to both resources, which can be used to identify them. Another way to identify these resources is through their suffixes, which contain "azscripts". For more information, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). - `kind`: Specify the type of script. Currently, Azure PowerShell and Azure CLI scripts are supported. The values are **AzurePowerShell** and **AzureCLI**. - `forceUpdateTag`: Changing this value between template deployments forces the deployment script to re-execute. If you use the `newGuid()` or the `utcNow()` functions, both functions can only be used in the default value for a parameter. To learn more, see [Run script more than once](#run-script-more-than-once).
The identity that your deployment script uses needs to be authorized to work wit
## Access private virtual network
-The supporting resources including the container instance can't be deployed to a private virtual network. To access a private virtual network from your deployment script, you can create another virtual network with a publicly accessible virtual machine or a container instance, and create a peering from this virtual network to the private virtual network.
+With Microsoft.Resources/deploymentScripts version 2023-08-01, you can run deployment scripts in private networks with some additional configuration.
+
+- Create a user-assigned managed identity, and specify it in the `identity` property. To assign the identity, see [Identity](#identity).
+- Create a storage account in the private network, and specify the deployment script to use the existing storage account. To specify an existing storage account, see [Use existing storage account](#use-existing-storage-account). Some additional configuration is required for the storage account.
+
+ 1. Open the storage account in the [Azure portal](https://portal.azure.com).
+ 1. From the left menu, select **Access Control (IAM)**, and then select the **Role assignments** tab.
+ 1. Add the `Storage File Data Privileged Contributor` role to the user-assigned managed identity.
+ 1. From the left menu, under **Security + networking**, select **Networking**, and then select **Firewalls and virtual networks**.
+ 1. Select **Enabled from selected virtual networks and IP addresses**.
+
+ :::image type="content" source="./media/deployment-script-template/resource-manager-deployment-script-access-vnet-config-storage.png" alt-text="Screenshot of configuring storage account for accessing private network.":::
+
+ 1. Under **Virtual networks**, add a subnet. In the screenshot, the subnet is called *dspvnVnet*.
+ 1. Under **Exceptions**, select **Allow Azure services on the trusted services list to access this storage account**.
+
+The following ARM template shows how to configure the environment for running a deployment script:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "prefix": {
+ "type": "string",
+ "maxLength": 10
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "userAssignedIdentityName": {
+ "type": "string",
+ "defaultValue": "[format('{0}Identity', parameters('prefix'))]"
+ },
+ "storageAccountName": {
+ "type": "string",
+ "defaultValue": "[format('{0}stg{1}', parameters('prefix'), uniqueString(resourceGroup().id))]"
+ },
+ "vnetName": {
+ "type": "string",
+ "defaultValue": "[format('{0}Vnet', parameters('prefix'))]"
+ },
+ "subnetName": {
+ "type": "string",
+ "defaultValue": "[format('{0}Subnet', parameters('prefix'))]"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2023-05-01",
+ "name": "[parameters('vnetName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "10.0.0.0/16"
+ ]
+ },
+ "enableDdosProtection": false,
+ "subnets": [
+ {
+ "name": "[parameters('subnetName')]",
+ "properties": {
+ "addressPrefix": "10.0.0.0/24",
+ "serviceEndpoints": [
+ {
+ "service": "Microsoft.Storage"
+ }
+ ],
+ "delegations": [
+ {
+ "name": "Microsoft.ContainerInstance.containerGroups",
+ "properties": {
+ "serviceName": "Microsoft.ContainerInstance/containerGroups"
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Storage/storageAccounts",
+ "apiVersion": "2023-01-01",
+ "name": "[parameters('storageAccountName')]",
+ "location": "[parameters('location')]",
+ "sku": {
+ "name": "Standard_LRS"
+ },
+ "kind": "StorageV2",
+ "properties": {
+ "networkAcls": {
+ "bypass": "AzureServices",
+ "virtualNetworkRules": [
+ {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]",
+ "action": "Allow",
+ "state": "Succeeded"
+ }
+ ],
+ "defaultAction": "Deny"
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/virtualNetworks', parameters('vnetName'))]"
+ ]
+ },
+ {
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "apiVersion": "2023-01-31",
+ "name": "[parameters('userAssignedIdentityName')]",
+ "location": "[parameters('location')]"
+ },
+ {
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2022-04-01",
+ "scope": "[format('Microsoft.Storage/storageAccounts/{0}', parameters('storageAccountName'))]",
+ "name": "[guid(tenantResourceId('Microsoft.Authorization/roleDefinitions', '69566ab7-960f-475b-8e7c-b3118f30c6bd'), resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName')), resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')))]",
+ "properties": {
+ "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName')), '2023-01-31').principalId]",
+ "roleDefinitionId": "[tenantResourceId('Microsoft.Authorization/roleDefinitions', '69566ab7-960f-475b-8e7c-b3118f30c6bd')]",
+ "principalType": "ServicePrincipal"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
+ "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName'))]"
+ ]
+ }
+ ]
+}
+```
+
+You can use the following ARM template to test the deployment:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "prefix": {
+ "type": "string"
+ },
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "utcValue": {
+ "type": "string",
+ "defaultValue": "[utcNow()]"
+ },
+ "storageAccountName": {
+ "type": "string"
+ },
+ "vnetName": {
+ "type": "string"
+ },
+ "subnetName": {
+ "type": "string"
+ },
+ "userAssignedIdentityName": {
+ "type": "string"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Resources/deploymentScripts",
+ "apiVersion": "2023-08-01",
+ "name": "[format('{0}DS', parameters('prefix'))]",
+ "location": "[parameters('location')]",
+ "identity": {
+ "type": "userAssigned",
+ "userAssignedIdentities": {
+ "[format('{0}', resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', parameters('userAssignedIdentityName')))]": {}
+ }
+ },
+ "kind": "AzureCLI",
+ "properties": {
+ "forceUpdateTag": "[parameters('utcValue')]",
+ "azCliVersion": "2.47.0",
+ "storageAccountSettings": {
+ "storageAccountName": "[parameters('storageAccountName')]"
+ },
+ "containerSettings": {
+ "subnetIds": [
+ {
+ "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
+ }
+ ]
+ },
+ "scriptContent": "echo \"Hello world!\"",
+ "retentionInterval": "P1D",
+ "cleanupPreference": "OnExpiration"
+ }
+ }
+ ]
+}
+```
## Next steps
azure-resource-manager Deployment Tutorial Linked Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-linked-template.md
Title: Tutorial - Deploy a linked template description: Learn how to deploy a linked template Previously updated : 05/22/2023 Last updated : 10/05/2023
azure-resource-manager Deployment Tutorial Local Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-local-template.md
Title: Tutorial - Deploy a local Azure Resource Manager template description: Learn how to deploy an Azure Resource Manager template (ARM template) from your local computer Previously updated : 05/22/2023 Last updated : 10/05/2023
azure-resource-manager Template Cloud Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-cloud-consistency.md
Last updated 12/09/2018 -+ # Develop ARM templates for cloud consistency
azure-resource-manager Enable Debug Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/enable-debug-logging.md
Title: Enable debug logging description: Describes how to enable debug logging to troubleshoot Azure resources deployed with Bicep files or Azure Resource Manager templates (ARM templates). tags: top-support-issue-+ Last updated 04/05/2023
azure-resource-manager Error Register Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/error-register-resource-provider.md
Title: Resource provider registration errors description: Describes how to resolve Azure resource provider registration errors for resources deployed with a Bicep file or Azure Resource Manager template (ARM template). -+ Last updated 04/05/2023
azure-resource-manager Find Error Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/troubleshooting/find-error-code.md
Title: Find error codes
description: Describes how to find error codes to troubleshoot Azure resources deployed with Azure Resource Manager templates (ARM templates) or Bicep files. tags: top-support-issue -+ Last updated 04/05/2023
azure-signalr Howto Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md
description: Learn how Azure SignalR Service can use shared private endpoints to
+ Last updated 09/23/2022
azure-signalr Howto Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints.md
description: How to secure outbound traffic through shared private endpoints to
+ Last updated 12/09/2022
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
description: Learn how to set up and enable Arc for your Azure VMware Solution p
Last updated 08/28/2023-+
azure-web-pubsub Howto Secure Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints-key-vault.md
description: How to access key vault in private network through Shared Private Endpoints + Last updated 03/27/2023
azure-web-pubsub Howto Secure Shared Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md
description: How to secure Azure Web PubSub outbound traffic through shared private endpoints + Last updated 03/27/2023
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
description: A tutorial to walk through how to use Azure Web PubSub service and
+ Last updated 05/05/2023
bastion Connect Vm Native Client Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-linux.md
description: Learn how to connect to a VM from a Linux computer by using Bastion and a native client. + Last updated 08/08/2023
bastion Connect Vm Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-vm-native-client-windows.md
description: Learn how to connect to a VM from a Windows computer by using Bastion and a native client. + Last updated 09/21/2023
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Last updated 10/26/2022 + # Virtual network injection in Azure Chaos Studio Preview
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
Last updated 11/01/2021 -+ # Use the Chaos Studio REST APIs to run and manage chaos experiments
chaos-studio Chaos Studio Target Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-target-selection.md
Last updated 09/25/2023-+ # Target selection in Azure Chaos Studio Preview
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
All this is possible with one-click where enterprises can access a secure soluti
BYO Azure AI services can be easily integrated into any application regardless of the programming language. When creating an Azure Resource in Azure portal, enable the BYO option and provide the URL to the Azure AI services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution. > [!NOTE]
-> This integration is only supported in limited regions for Azure AI services, for more information about which regions are supported please view the limitations section at the bottom of this document. It is also recommended that when you're creating a new Azure Cognitive Service resource that you create a Multi-service Cognitive Service resource.
+> This integration is supported only in limited regions for Azure AI services. For more information about which regions are supported, see the limitations section at the bottom of this document. The integration supports only multi-service Cognitive Services resources: if you're creating a new Azure Cognitive Services resource, create a multi-service resource, and if you're connecting an existing resource, confirm that it's a multi-service resource.
## Common use cases
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
This sandbox setup is designed to help developers begin building the application
|Batch of participants - CreateThread|200 | |Batch of participants - AddParticipant|200 | |Page size - ListMessages|200 |
+|Number of Azure Communication Services resources per Azure Bot|1000 |
### Operation Limits
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
You might want to optimize further if:
| Network optimization task | Details | | :-- | :-- | | Plan your network | In this documentation, you can find minimal requirements to your network for calls. Refer to the [Teams example for planning your network](/microsoftteams/tutorial-network-planner-example). |
-| External name resolution | Be sure that all computers running the Communication Services SDKs can resolve external DNS queries to discover the services provided by communication servicers and that your firewalls aren't preventing access. Ensure that the SDKs can resolve the addresses *.skype.com, *.microsoft.com, *.azure.net, *.azureedge.net, *.office.com, and *.trouter.io. |
+| External name resolution | Be sure that all computers running the Communication Services SDKs can resolve external DNS queries to discover the services provided by communication servicers and that your firewalls aren't preventing access. Ensure that the SDKs can resolve the addresses *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, and *.office.com. |
| Maintain session persistence | Make sure your firewall doesn't change the mapped network address translation (NAT) addresses or ports for UDP. Validate NAT pool size | Validate the NAT pool size required for user connectivity. When multiple users and devices access Communication Services by using [NAT or port address translation](/office365/enterprise/nat-support-with-office-365), ensure that the devices hidden behind each publicly routable IP address don't exceed the supported number. Ensure that adequate public IP addresses are assigned to the NAT pools to prevent port exhaustion. Port exhaustion contributes to internal users and devices being unable to connect to Communication Services. | | Intrusion detection and prevention guidance | If your environment has an [intrusion detection system](../../../network-watcher/network-watcher-intrusion-detection-open-source-tools.md) or intrusion prevention system deployed for an extra layer of security for outbound connections, allow all Communication Services URLs. |
communication-services Play Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-ai-action.md
- Title: Customize voice prompts to users with Play action using Text-to-Speech-
-description: Provides a how-to guide for playing audio to participants as part of a call.
---- Previously updated : 02/15/2023---
-zone_pivot_groups: acs-csharp-java
--
-# Customize voice prompts to users with Play action using Text-to-Speech
-
->[!IMPORTANT]
->Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly.
->Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite).
-
-This guide will help you get started with playing audio to participants by using the play action provided through Azure Communication Services Call Automation SDK.
---
-## Event codes
-|Status|Code|Subcode|Message|
-|-|--|--|--|
-|PlayCompleted|200|0|Action completed successfully.|
-|PlayFailed|400|8535|Action failed, file format is invalid.|
-|PlayFailed|400|8536|Action failed, file could not be downloaded.|
-|PlayCanceled|400|8508|Action failed, the operation was canceled.|
-
-## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
-
-## Next steps
-- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)-- Learn more about [Gathering user input in a call](../../concepts/call-automation/recognize-ai-action.md)-- Learn more about [Playing audio in calls](../../concepts/call-automation/play-ai-action.md)
communication-services Recognize Ai Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-ai-action.md
- Title: Gather user voice input-
-description: Provides a how-to guide for gathering user voice input from participants on a call.
--- Previously updated : 02/15/2023---
-zone_pivot_groups: acs-csharp-java
--
-# Gather user input with Recognize action and voice input
-
-This guide helps you get started recognizing user input in the forms of DTMF or voice input provided by participants through Azure Communication Services Call Automation SDK.
---
-## Event codes
-|Status|Code|Subcode|Message|
-|-|--|--|--|
-|RecognizeCompleted|200|8531|Action completed, max digits received.|
-|RecognizeCompleted|200|8533|Action completed, DTMF option matched.|
-|RecognizeCompleted|200|8545|Action completed, speech option matched.|
-|RecognizeCompleted|200|8514|Action completed as stop tone was detected.|
-|RecognizeCompleted|200|8569|Action completed, speech was recognized.|
-|RecognizeCompleted|400|8532|Action failed, inter-digit silence time out reached.|
-|RecognizeFailed|400|8563|Action failed, speech could not be recognized.|
-|RecognizeFailed|408|8570|Action failed, speech recognition timed out.|
-|RecognizeFailed|400|8510|Action failed, initial silence time out reached.|
-|RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.|
-|RecognizeFailed|400|8547|Action failed, recognized phrase does not match a valid option.|
-|RecognizeFailed|500|8534|Action failed, incorrect tone entered.|
-|RecognizeFailed|500|9999|Unspecified error.|
-|RecognizeCanceled|400|8508|Action failed, the operation was canceled.|
-
-## Limitations
--- You must either pass a tone for every single choice if you want to enable callers to use tones or voice inputs, otherwise no tone should be sent if you're expecting only voice input from callers. -
-## Clean up resources
-
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
-
-## Next Steps
-- Learn more about [Gathering user input](../../concepts/call-automation/recognize-ai-action.md)-- Learn more about [Playing audio in call](../../concepts/call-automation/play-ai-action.md)-- Learn more about [Call Automation](../../concepts/call-automation/call-automation.md)
communication-services Lobby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/lobby.md
Last updated 06/15/2023 -
+zone_pivot_groups: acs-plat-web-ios-android-windows
# Manage Teams meeting lobby+ In this article, you will learn how to implement the Teams meetings lobby capability by using Azure Communication Service calling SDKs. This capability allows users to admit and reject participants from Teams meeting lobby, receive the join lobby notification and get the lobby participants list. ## Prerequisites
To update or check current meeting join & lobby policies in Teams admin center:
| getParticipants | ✔️ | ✔️ | ✔️ | ✔️ | | lobbyParticipantsUpdated | ✔️ | ✔️ | ✔️ | ✔️ | [!INCLUDE [Lobby Client-side JavaScript](./includes/lobby/lobby-web.md)]+++ ## Next steps
communication-services Get Started With Video Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-video-calling.md
zone_pivot_groups: acs-plat-web-ios-android-windows-unity-+ # QuickStart: Add 1:1 video calling to your app
communication-services Getting Started With Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/getting-started-with-calling.md
zone_pivot_groups: acs-plat-web-ios-android-windows-unity-+ # Quickstart: Add voice calling to your app
container-apps Containerapp Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containerapp-up.md
description: How to deploy a container app with the az containerapp up command
+ Last updated 11/08/2022
container-apps Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md
description: Learn about jobs in Azure Container Apps
-+ Last updated 08/17/2023
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
description: Using managed identities in Container Apps
-+ Last updated 09/29/2022
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
description: Build your container app from a local or GitHub source repository a
+ Last updated 03/29/2023
container-apps Service Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-connector.md
Last updated 06/16/2022-+ # Customer intent: As an app developer, I want to connect a containerized app to a storage account in the Azure portal using Service Connector.
container-apps Waf App Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/waf-app-gateway.md
On the *Configuration* tab, you connect the frontend and backend pool you create
## Add private link to your Application Gateway
-This step is required for internal only container app environments as it allows your Application Gateway to communicate with your Container App on the backend through the virtual network.
+You can establish a secure connection to internal-only container app environments by leveraging private link, as it allows your Application Gateway to communicate with your Container App on the backend through the virtual network.
1. Once the Application Gateway is created, select **Go to resource**.
container-instances Container Instances Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md
-+ Last updated 06/17/2022
container-registry Container Registry Check Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-check-health.md
Title: Check registry health description: Learn how to run a quick diagnostic command to identify common problems when using an Azure container registry, including local Docker configuration and connectivity to the registry + Last updated 10/11/2022
container-registry Container Registry Image Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-image-lock.md
Title: Lock images description: Set attributes for a container image or repository so it can't be deleted or overwritten in an Azure container registry. + Last updated 10/11/2022
container-registry Container Registry Import Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-import-images.md
Last updated 10/11/2022-+ # Import container images to a container registry
container-registry Container Registry Retention Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-retention-policy.md
Title: Policy to retain untagged manifests description: Learn how to enable a retention policy in your Premium Azure container registry, for automatic deletion of untagged manifests after a defined period. + Last updated 10/11/2022
container-registry Container Registry Tasks Authentication Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-authentication-managed-identity.md
Title: Managed identity in ACR task
description: Enable a managed identity for Azure Resources in an Azure Container Registry task to allow the task to access other Azure resources including other private container registries. +
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md
Title: YAML reference - ACR Tasks description: Reference for defining tasks in YAML for ACR Tasks, including task properties, step types, step properties, and built-in variables. + Last updated 10/11/2022
container-registry Container Registry Transfer Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-transfer-images.md
Last updated 10/11/2022-+ # ACR Transfer with ARM templates
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/materialized-views.md
Last updated 11/17/2022-+ # Materialized views in Azure Cosmos DB for Apache Cassandra (preview)
cosmos-db Quickstart Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-console.md
+ Last updated 09/27/2023 # CustomerIntent: As a developer, I want to use the Gremlin console so that I can manually create and traverse vertices and edges.
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-dotnet.md
+ Last updated 09/27/2023 # CustomerIntent: As a .NET developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-nodejs.md
+ Last updated 09/27/2023 # CustomerIntent: As a Node.js developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/quickstart-python.md
+ Last updated 09/27/2023 # CustomerIntent: As a Python developer, I want to use a library for my programming language so that I can create and traverse vertices and edges in code.
cosmos-db How To Develop Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-develop-emulator.md
Use the [Azure Cosmos DB API for NoSQL .NET SDK](nosql/quickstart-dotnet.md) to
> {
>     ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
> }),
- > ConnectionMode = ConnectionMode.Gateway
+ > ConnectionMode = ConnectionMode.Gateway,
+ > LimitToEndpoint = true
> };
>
> using CosmosClient client = new(
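
For local development, the Gateway-mode client above expects the emulator at its well-known endpoint (`https://localhost:8081`). A minimal sketch of starting the Linux emulator container with Docker; the image name and port list follow the emulator's published defaults, but treat the exact flags as an assumption to verify:

```powershell
# Run the Azure Cosmos DB Linux emulator locally; the CosmosClient shown above
# then connects to https://localhost:8081 in Gateway mode.
docker run --detach --publish 8081:8081 --publish 10250-10255:10250-10255 `
    mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```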
cosmos-db How To Setup Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-managed-identity.md
Title: Configure managed identities with Azure AD for your Azure Cosmos DB accou
description: Learn how to configure managed identities with Azure Active Directory for your Azure Cosmos DB account -+ Last updated 10/15/2021
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
-+ Last updated 03/31/2023
cosmos-db Tutorial Nodejs Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/tutorial-nodejs-web-app.md
Last updated 08/28/2023-+ # CustomerIntent: As a developer, I want to connect to Azure Cosmos DB for MongoDB vCore from my Node.js application, so I can build MERN stack applications.
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Last updated 04/26/2023-+ # Monitor Azure Cosmos DB data by using diagnostic settings in Azure
cosmos-db How To Dotnet Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-get-started.md
ms.devlang: csharp Last updated 07/06/2022-+ # Get started with Azure Cosmos DB for NoSQL using .NET
cosmos-db How To Javascript Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-get-started.md
ms.devlang: javascript Last updated 07/06/2022-+ # Get started with Azure Cosmos DB for NoSQL using JavaScript
cosmos-db How To Python Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-python-get-started.md
ms.devlang: python Last updated 12/06/2022-+ # Get started with Azure Cosmos DB for NoSQL using Python
cosmos-db Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/materialized-views.md
-+ Last updated 06/09/2023
cosmos-db Scaling Provisioned Throughput Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scaling-provisioned-throughput-best-practices.md
In general, if you have a starting number of physical partitions `P`, and want t
Increase your RU/s to: `10,000 * P * (2 ^ ROUNDUP(LOG_2(S / (10,000 * P))))`. This gives the closest RU/s to the desired value that will ensure all partitions are split evenly.

> [!NOTE]
-> When you increase the RU/s of a database or container, this can impact the minimum RU/s you can lower to in the future. Typically, the minimum RU/s is equal to MAX(400 RU/s, Current storage in GB * 10 RU/s, Highest RU/s ever provisioned / 100). For example, if the highest RU/s you've ever scaled to is 100,000 RU/s, the lowest RU/s you can set in the future is 1000 RU/s. Learn more about [minimum RU/s](concepts-limits.md#minimum-throughput-limits).
+> When you increase the RU/s of a database or container, this can impact the minimum RU/s you can lower to in the future. Typically, the minimum RU/s is equal to MAX(400 RU/s, Current storage in GB * 1 RU/s, Highest RU/s ever provisioned / 100). For example, if the highest RU/s you've ever scaled to is 100,000 RU/s, the lowest RU/s you can set in the future is 1000 RU/s. Learn more about [minimum RU/s](concepts-limits.md#minimum-throughput-limits).
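
As a worked example of both formulas (the partition count, desired throughput, and storage figures are hypothetical), a quick PowerShell sketch:

```powershell
# Closest RU/s that splits all partitions evenly:
$P = 10         # current number of physical partitions
$S = 150000     # desired RU/s
$target = 10000 * $P * [math]::Pow(2, [math]::Ceiling([math]::Log($S / (10000 * $P), 2)))
# $target -> 200000

# Minimum RU/s you can scale down to afterward, per the note above:
$storageGB   = 50
$highestEver = $target
$minRUs = [math]::Max(400, [math]::Max($storageGB * 1, $highestEver / 100))
# $minRUs -> 2000
```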
#### Step 2: Lower your RU/s to the desired RU/s
cost-management-billing Reservation Trade In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/reservation-trade-in.md
Previously updated : 07/20/2023 Last updated : 10/05/2023
Apart from [Azure Virtual Machines](https://azure.microsoft.com/pricing/details/
Although compute reservation exchanges will end on January 1, 2024, noncompute reservation exchanges are unchanged. You can continue to trade in reservations for savings plans.

- You must have owner access on the Reservation Order to trade in an existing reservation. You can [Add or change users who can manage a savings plan](manage-savings-plan.md#who-can-manage-a-savings-plan).
+- To trade in a reservation for a savings plan, you must have Azure RBAC Owner permission on the subscription you plan to use to purchase a savings plan.
+ - EA Admin write permission or Billing profile contributor and higher, which are Cost Management + Billing permissions, are supported only for direct savings plan purchases. They can't be used for savings plan purchases as part of a reservation trade-in.
- Microsoft isn't currently charging early termination fees for reservation trade-ins. We might charge such fees in the future, but we don't currently have a date for enabling them.

## How to trade in an existing reservation
defender-for-cloud Defender For Storage Classic Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-enable.md
Last updated 08/01/2023
-+ # Enable Microsoft Defender for Storage (classic)
defender-for-cloud Express Configuration Azure Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/express-configuration-azure-commands.md
Title: Express configuration Azure Command Line Interface (CLI) commands reference description: In this article, you can review the Express configuration Azure Command Line Interface (CLI) commands reference and copy example scripts to use in your environments. + Last updated 06/04/2023
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries
description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Last updated 02/14/2023 -+ # Azure Resource Graph sample queries for Microsoft Defender for Cloud
digital-twins How To Integrate Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-integrate-maps.md
-# Mandatory fields.
Title: Integrate with Azure Maps description: Learn how to use Azure Functions to create a function that can use the twin graph and Azure Digital Twins notifications to update an Azure Maps indoor map.
Last updated 09/27/2022 -+ # Optional fields. Don't forget to remove # if you need a field. #
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
-# Mandatory fields.
Title: Automanage devices using Device Provisioning Service description: Learn how to set up an automated process to provision and retire IoT devices in Azure Digital Twins using Device Provisioning Service (DPS).
Last updated 11/18/2022 + # Optional fields. Don't forget to remove # if you need a field. #
-#
#
For more information about using HTTP requests with Azure functions, see:
You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following how-to guides: * [Manage a digital twin](how-to-manage-twin.md)
-* [Query the twin graph](how-to-query-graph.md)
+* [Query the twin graph](how-to-query-graph.md)
dms Ads Sku Recommend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/ads-sku-recommend.md
Last updated 05/09/2022 -+
+ - references_regions
+ - sql-migration-content
# Get Azure recommendations to migrate your SQL Server database (Preview)
GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [assessment];
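
If you script the source setup, the grant shown above can be applied remotely; a minimal sketch using the SqlServer PowerShell module's `Invoke-Sqlcmd` (the server name is a hypothetical placeholder, and authentication is left to your environment):

```powershell
# Grant the assessment login read access to Database Mail account metadata
# on the source instance.
Invoke-Sqlcmd -ServerInstance "sqlsource.contoso.com" -Database "msdb" `
    -Query "GRANT SELECT ON [dbo].[sysmail_account] TO [assessment];"
```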
## Next steps

-- Learn how to [migrate databases by using the Azure SQL Migration extension in Azure Data Studio](migration-using-azure-data-studio.md).
+- Learn how to [migrate databases by using the Azure SQL Migration extension in Azure Data Studio](migration-using-azure-data-studio.md).
dms Concepts Migrate Azure Mysql Login Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-login-migration.md
Title: MySQL to Azure Database for MySQL Data Migration - MySQL Login Migration
-description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Login Migration
+description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Login Migration
Last updated 07/24/2023 -+
+ - references_regions
+ - sql-migration-content
# MySQL to Azure Database for MySQL Data Migration - MySQL Login Migration
dms Concepts Migrate Azure Mysql Replicate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md
Last updated 07/24/2023 -+
+ - references_regions
+ - sql-migration-content
# MySQL to Azure Database for MySQL Data Migration - MySQL Replicate Changes
To complete the replicate changes migration successfully, ensure that the follow
## Next steps

-- [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md)
+- [Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS](tutorial-mysql-azure-mysql-offline-portal.md)
dms Concepts Migrate Azure Mysql Schema Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-schema-migration.md
Title: MySQL to Azure Database for MySQL Data Migration - MySQL Schema Migration
-description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Schema Migration
+description: Learn how to use the Azure Database for MySQL Data Migration - MySQL Schema Migration
Last updated 07/23/2023 -+
+ - references_regions
+ - sql-migration-content
# MySQL to Azure Database for MySQL Data Migration - MySQL Schema Migration
dms Create Dms Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-bicep.md
Last updated 03/21/2022 -+
+ - subject-armqs
+ - mode-arm
+ - devx-track-bicep
+ - sql-migration-content
# Quickstart: Create instance of Azure Database Migration Service using Bicep
dms Create Dms Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/create-dms-resource-manager-template.md
Last updated 06/29/2020 -+
+ - subject-armqs
+ - mode-arm
+ - devx-track-arm-template
+ - sql-migration-content
# Quickstart: Create instance of Azure Database Migration Service using ARM template
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Last updated 02/08/2023 +
+ - sql-migration-content
# What is Azure Database Migration Service?
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
- mvc - ignite-2022
+ - sql-migration-content
# Services and tools available for data migration scenarios
dms Faq Mysql Single To Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/faq-mysql-single-to-flex.md
Last updated 09/17/2022 -+
+ - seo-lt-2019
+ - sql-migration-content
# Frequently Asked Questions (FAQs)
dms How To Migrate Ssis Packages Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages-managed-instance.md
Last updated 02/20/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Migrate SQL Server Integration Services packages to an Azure SQL Managed Instance
dms How To Migrate Ssis Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-migrate-ssis-packages.md
Last updated 02/20/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Redeploy SSIS packages to Azure SQL Database with Azure Database Migration Service
dms How To Monitor Migration Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/how-to-monitor-migration-activity.md
Last updated 02/20/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Monitor migration activity using the Azure Database Migration Service
dms Howto Sql Server To Azure Sql Managed Instance Powershell Offline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-offline.md
- seo-lt-2019 - fasttrack-edit - devx-track-azurepowershell
+ - sql-migration-content
# Migrate SQL Server to SQL Managed Instance offline with PowerShell & Azure Database Migration Service
dms Howto Sql Server To Azure Sql Managed Instance Powershell Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-managed-instance-powershell-online.md
Last updated 12/16/2020 -+
+ - devx-track-azurepowershell
+ - sql-migration-content
# Migrate SQL Server to SQL Managed Instance online with PowerShell & Azure Database Migration Service
dms Howto Sql Server To Azure Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/howto-sql-server-to-azure-sql-powershell.md
- seo-lt-2019 - devx-track-azurepowershell
+ - sql-migration-content
# Migrate a SQL Server database to Azure SQL Database using Azure PowerShell
dms Known Issues Azure Mysql Fs Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-mysql-fs-online.md
Last updated 10/04/2022 -+
+ - mvc
+ - sql-migration-content
# Known Issues With Migrations To Azure Database for MySQL
dms Known Issues Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-postgresql-online.md
- seo-lt-2019 - seo-dt-2019
+ - sql-migration-content
# Known issues/limitations with online migrations from PostgreSQL to Azure Database for PostgreSQL
dms Known Issues Azure Sql Db Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-db-managed-instance-online.md
Last updated 02/20/2020 -+
+ - mvc
+ - sql-migration-content
# Known issues/migration limitations with online migrations to Azure SQL Managed Instance
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Last updated 04/21/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Known issues, limitations, and troubleshooting
Migrating to SQL Server on Azure VMs by using the Azure SQL extension for Azure
- For an overview and installation of the Azure SQL migration extension, see [Azure SQL migration extension for Azure Data Studio](/sql/azure-data-studio/extensions/azure-sql-migration-extension)
- For more information on known limitations with Log Replay Service, see [Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service (Preview)](/azure/azure-sql/managed-instance/log-replay-service-migrate#limitations)
-- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
+- For more information on SQL Server on Virtual machine resource limits, see [Checklist: Best practices for SQL Server on Azure VMs](/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist)
dms Known Issues Dms Hybrid Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-dms-hybrid-mode.md
Last updated 02/20/2020 -+
+ - mvc
+ - sql-migration-content
# Known issues/migration limitations with using hybrid mode
dms Known Issues Mongo Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-mongo-cosmos-db.md
- seo-lt-2019 - kr2b-contr-experiment - ignite-2022
+ - sql-migration-content
# Known issues with migrations from MongoDB to Azure Cosmos DB
dms Known Issues Troubleshooting Dms Source Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms-source-connectivity.md
Last updated 02/20/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Troubleshoot DMS errors when connecting to source databases
dms Known Issues Troubleshooting Dms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md
- seo-lt-2019 - ignite-2022 - has-azure-ad-ps-ref
+ - sql-migration-content
# Troubleshoot common Azure Database Migration Service issues and errors
dms Migrate Azure Mysql Consistent Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-azure-mysql-consistent-backup.md
Last updated 04/19/2022 -+
+ - references_regions
+ - sql-migration-content
# MySQL to Azure Database for MySQL Data Migration - MySQL Consistent Backup
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
- seo-lt-2019 - devx-track-azurepowershell
+ - sql-migration-content
# Migrate MySQL to Azure Database for MySQL offline with PowerShell & Azure Database Migration Service
dms Migration Dms Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-dms-powershell-cli.md
Last updated 04/26/2022 - +
+ - devx-track-azurepowershell
+ - sql-migration-content
# Migrate databases at scale using automation (Preview)
dms Migration Using Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migration-using-azure-data-studio.md
Last updated 09/28/2022 -+
+ - references_regions
+ - sql-migration-content
# Migrate databases by using the Azure SQL Migration extension for Azure Data Studio
dms Pre Reqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/pre-reqs.md
Last updated 02/25/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Overview of prerequisites for using the Azure Database Migration Service
dms Quickstart Create Data Migration Service Hybrid Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-hybrid-portal.md
- seo-lt-2019 - mode-ui - subject-rbac-steps
+ - sql-migration-content
# Quickstart: Create a hybrid mode instance with Azure portal & Azure Database Migration Service
dms Quickstart Create Data Migration Service Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/quickstart-create-data-migration-service-portal.md
- seo-lt-2019 - mode-ui
+ - sql-migration-content
# Quickstart: Create an instance of the Azure Database Migration Service by using the Azure portal
dms Resource Custom Roles Mysql Database Migration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-mysql-database-migration-service.md
Last updated 08/07/2023 +
+ - sql-migration-content
# Custom roles for MySQL to Azure Database for MySQL migrations using Database Migration Service
To assign a role to users, open the Azure portal and perform the following steps:
* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
* For information about known issues and limitations when migrating to Azure Database for MySQL - Flexible Server using DMS, see [Known Issues With Migrations To Azure Database for MySQL - Flexible Server](./known-issues-azure-mysql-fs-online.md).
* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
-* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
+* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
dms Resource Custom Roles Sql Database Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-database-ads.md
Last updated 09/28/2022 +
+ - sql-migration-content
# Custom roles for SQL Server to Azure SQL Database migrations in Azure Data Studio
dms Resource Custom Roles Sql Db Managed Instance Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance-ads.md
Last updated 05/02/2022 +
+ - sql-migration-content
# Custom roles for SQL Server to Azure SQL Managed Instance migrations using ADS
dms Resource Custom Roles Sql Db Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-managed-instance.md
Last updated 02/08/2021 -+
+ - seo-lt-2019
+ - sql-migration-content
# Custom roles for SQL Server to Azure SQL Managed Instance online migrations
dms Resource Custom Roles Sql Db Virtual Machine Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-custom-roles-sql-db-virtual-machine-ads.md
Last updated 05/02/2022 +
+ - sql-migration-content
# Custom roles for SQL Server to Azure Virtual Machines migrations using ADS
dms Resource Network Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-network-topologies.md
Last updated 01/08/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Network topologies for Azure SQL Managed Instance migrations using Azure Database Migration Service
dms Resource Scenario Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/resource-scenario-status.md
Last updated 04/27/2022 -+
+ - mvc
+ - sql-migration-content
# Azure Database Migration Service supported scenarios
dms Tutorial Azure Postgresql To Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-azure-postgresql-to-azure-postgresql-online-portal.md
Last updated 07/21/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate/Upgrade Azure Database for PostgreSQL - Single Server to Azure Database for PostgreSQL - Single Server online using DMS via the Azure portal
dms Tutorial Login Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-login-migration-ads.md
Last updated 01/31/2023 +
+ - sql-migration-content
# Tutorial: Migrate SQL Server logins (preview) to Azure SQL in Azure Data Studio
The following table describes the current status of the Login migration support
- [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](./tutorial-sql-server-azure-sql-database-offline-ads.md)
- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](./tutorial-sql-server-managed-instance-online-ads.md)
-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)
dms Tutorial Mongodb Cosmos Db Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db-online.md
- seo-nov-2020 - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB online using DMS
dms Tutorial Mongodb Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mongodb-cosmos-db.md
- seo-lt-2019 - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate MongoDB to Azure Cosmos DB for MongoDB offline
dms Tutorial Mysql Azure External To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-to-flex-online-portal.md
Last updated 08/07/2023 +
+ - sql-migration-content
# Tutorial: Migrate from MySQL to Azure Database for MySQL - Flexible Server online using DMS via the Azure portal
When performing a migration, be sure to consider the following best practices.
* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
* For information about known issues and limitations when migrating to Azure Database for MySQL - Flexible Server using DMS, see [Known Issues With Migrations To Azure Database for MySQL - Flexible Server](./known-issues-azure-mysql-fs-online.md).
* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
-* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
+* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
dms Tutorial Mysql Azure Mysql Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-mysql-offline-portal.md
- seo-lt-2019 - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate MySQL to Azure Database for MySQL offline using DMS
dms Tutorial Mysql Azure Single To Flex Offline Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-offline-portal.md
Last updated 09/17/2022 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server offline using DMS via the Azure portal
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
Last updated 09/17/2022 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate Azure Database for MySQL - Single Server to Flexible Server online using DMS via the Azure portal
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
- seo-lt-2019 - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS (classic) via the Azure portal
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
- seo-lt-2019 - devx-track-azurecli - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS (classic) via the Azure CLI
dms Tutorial Rds Postgresql Server Azure Db For Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-rds-postgresql-server-azure-db-for-postgresql-online.md
Last updated 04/11/2020 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate RDS PostgreSQL to Azure DB for PostgreSQL online using DMS
dms Tutorial Sql Server Azure Sql Database Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-azure-sql-database-offline-ads.md
Last updated 06/07/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate SQL Server to Azure SQL Database offline in Azure Data Studio
dms Tutorial Sql Server Managed Instance Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-offline-ads.md
Last updated 06/07/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate SQL Server to Azure SQL Managed Instance offline in Azure Data Studio
Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azu
- Complete a quickstart to [migrate a database to SQL Managed Instance by using the T-SQL RESTORE command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
- Learn more about [SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
- Learn how to [connect apps to SQL Managed Instance](/azure/azure-sql/managed-instance/connect-application-instance).
-- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
+- To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online-ads.md
Last updated 06/07/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate SQL Server to Azure SQL Managed Instance online in Azure Data Studio
Migrating to Azure SQL Managed Instance by using the Azure SQL extension for Azu
* For a tutorial showing you how to migrate a database to SQL Managed Instance using the T-SQL RESTORE command, see [Restore a backup to SQL Managed Instance using the restore command](/azure/azure-sql/managed-instance/restore-sample-database-quickstart).
* For information about SQL Managed Instance, see [What is SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview).
* For information about connecting apps to SQL Managed Instance, see [Connect applications](/azure/azure-sql/managed-instance/connect-application-instance).
-* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
+* To troubleshoot, review [Known issues](known-issues-azure-sql-migration-azure-data-studio.md).
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
- seo-lt-2019 - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS (classic)
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
- Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"-
-description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic).
--- Previously updated : 02/08/2023---
- - seo-lt-2019
- - ignite-2022
--
-# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
--
-> [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
->
-> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
-
-You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
-
-You will learn how to:
-> [!div class="checklist"]
->
-> - Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
-> - Use the Data Migration Assistant to migrate the database sample schema.
-> - Register the Azure DataMigration resource provider.
-> - Create an instance of Azure Database Migration Service.
-> - Create a migration project by using Azure Database Migration Service.
-> - Run the migration.
-> - Monitor the migration.
--
-## Prerequisites
-
-To complete this tutorial, you need to:
-- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
-- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
-- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
-- Create a database in Azure SQL Database, which you do by following the details in the article [Create a database in Azure SQL Database using the Azure portal](/azure/azure-sql/database/single-database-create-quickstart). For purposes of this tutorial, the name of the Azure SQL Database is assumed to be **AdventureWorksAzure**, but you can provide whatever name you wish.
-
- > [!NOTE]
- > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-- Download and install the latest version of the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).
-- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
-
- > [!NOTE]
- > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
- >
- > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
- > - Storage endpoint
- > - Service bus endpoint
- >
- > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
- >
- > If you don't have site-to-site connectivity between the on-premises network and Azure or if there is limited site-to-site connectivity bandwidth, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode leverages an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
-
-- Ensure that your virtual network Network Security Group outbound security rules don't block the outbound port 443 of ServiceTag for ServiceBus, Storage, and AzureMonitor. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
-- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
-- Open your Windows firewall to allow Azure Database Migration Service to access the source SQL Server, which by default is TCP port 1433. If your default instance is listening on some other port, add that to the firewall. (A scripted sketch of these firewall rules follows this list.)
-- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
-- When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
-- Create a server-level IP [firewall rule](/azure/azure-sql/database/firewall-configure) for Azure SQL Database to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service.
-- Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.
-- Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases.
-
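
The two firewall bullets above can be scripted directly on the source server. A minimal sketch, assuming the NetSecurity module built into Windows (the rule display names are hypothetical):

```powershell
# Allow Azure Database Migration Service to reach the default SQL Server
# instance (TCP 1433) and, for named instances on dynamic ports, the
# SQL Browser service (UDP 1434).
New-NetFirewallRule -DisplayName "DMS - SQL Server default instance" `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
New-NetFirewallRule -DisplayName "DMS - SQL Browser service" `
    -Direction Inbound -Protocol UDP -LocalPort 1434 -Action Allow
```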
- > [!IMPORTANT]
- > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
- >
- > ```
- >
- > $readerActions = `
- > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
- > "Microsoft.DataMigration/*/read", `
- > "Microsoft.Resources/subscriptions/resourceGroups/read"
- >
- > $writerActions = `
- > "Microsoft.DataMigration/services/*/write", `
- > "Microsoft.DataMigration/services/*/delete", `
- > "Microsoft.DataMigration/services/*/action", `
- > "Microsoft.Network/virtualNetworks/subnets/join/action", `
- > "Microsoft.Network/virtualNetworks/write", `
- > "Microsoft.Network/virtualNetworks/read", `
- > "Microsoft.Resources/deployments/validate/action", `
- > "Microsoft.Resources/deployments/*/read", `
- > "Microsoft.Resources/deployments/*/write"
- >
- > $writerActions += $readerActions
- >
- > # TODO: replace with actual subscription IDs
- > $subScopes = ,"/subscriptions/00000000-0000-0000-0000-000000000000/","/subscriptions/11111111-1111-1111-1111-111111111111/"
- >
- > function New-DmsReaderRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Reader"
- > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function New-DmsContributorRole() {
- > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
- > $aRole.Name = "Azure Database Migration Contributor"
- > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
- > $aRole.IsCustom = $true
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- >
- > $aRole.AssignableScopes = $subScopes
- > #Create the role
- > New-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsReaderRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
- > $aRole.Actions = $readerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > function Update-DmsContributorRole() {
- > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
- > $aRole.Actions = $writerActions
- > $aRole.NotActions = @()
- > Set-AzRoleDefinition -Role $aRole
- > }
- >
- > # Invoke above functions
- > New-DmsReaderRole
- > New-DmsContributorRole
- > Update-DmsReaderRole
- > Update-DmsContributorRole
- > ```
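
Once the custom roles above exist, they still need to be assigned; a minimal sketch at subscription scope (the sign-in name and subscription ID are hypothetical placeholders):

```powershell
# Assign the DMS contributor role created by the script above to a user.
New-AzRoleAssignment -SignInName "dms-operator@contoso.com" `
    -RoleDefinitionName "Azure Database Migration Contributor" `
    -Scope "/subscriptions/00000000-0000-0000-0000-000000000000"
```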
-
-## Assess your on-premises database
-
-Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
-2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
-
- When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
-
- - Check database compatibility
- - Check feature parity
-
- Both report types are selected by default.
-
-3. In the Data Migration Assistant, on the **Options** screen, select **Next**.
-4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
-5. In the **Add sources** dialog box, select **AdventureWorks2016**, select **Add**, and then select **Start Assessment**.
-
- > [!NOTE]
- > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- When the assessment is complete, the results display as shown in the following graphic:
-
- ![Assess data migration](media/tutorial-sql-server-to-azure-sql/dma-assessments.png)
-
- For databases in Azure SQL Database, the assessments identify feature parity issues and migration blocking issues for deploying to a single database or pooled database.
-
- - The **SQL Server feature parity** category provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to help you plan the effort into your migration projects.
- - The **Compatibility issues** category identifies partially supported or unsupported features that reflect compatibility issues that might block migrating SQL Server database(s) to Azure SQL Database. Recommendations are also provided to help you address those issues.
-
-6. Review the assessment results for migration blocking issues and feature parity issues by selecting the specific options.
-
-## Migrate the sample schema
-
-After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database.
-
-> [!NOTE]
-> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites.
-
-> [!IMPORTANT]
-> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
-To migrate the **AdventureWorks2016** schema to a single database or pooled database Azure SQL Database, perform the following steps:
-
-1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
-2. Specify a project name, in the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
-3. Under **Migration Scope**, select **Schema only**.
-
- After performing the previous steps, the Data Migration Assistant interface should appear as shown in the following graphic:
-
- ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png)
-
-4. Select **Create** to create the project.
-5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2016** database.
-
- ![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
-
-6. Select **Next**, under **Connect to target server**, specify the target connection details for the Azure SQL Database, select **Connect**, and then select the **AdventureWorksAzure** database you had pre-provisioned in Azure SQL Database.
-
- ![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
-
-7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2016** database that need to be deployed to Azure SQL Database.
-
- By default, all objects are selected.
-
- ![Generate SQL Scripts](media/tutorial-sql-server-to-azure-sql/dma-assessment-source.png)
-
-8. Select **Generate SQL script** to create the SQL scripts, and then review the scripts for any errors.
-
- ![Schema Script](media/tutorial-sql-server-to-azure-sql/dma-schema-script.png)
-
-9. Select **Deploy schema** to deploy the schema to Azure SQL Database, and then after the schema is deployed, check the target server for any anomalies.
-
- ![Deploy Schema](media/tutorial-sql-server-to-azure-sql/dma-schema-deploy.png)
---
-## Create a migration project
-
-After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
-
-1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-search.png)
-
-2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
-
-3. Select **New Migration Project**.
-
- ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-instance-search.png)
-
-4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then for **Choose Migration activity type**, select **Data migration**.
-
- ![Create Database Migration Service Project](media/tutorial-sql-server-to-azure-sql/dms-create-project-2.png)
-
-5. Select **Create and run activity** to create the project and run the migration activity.
-
-## Specify source details
-
-1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
-
- Make sure to use a Fully Qualified Domain Name (FQDN) for the source SQL Server instance name. You can also use the IP address for situations in which DNS name resolution isn't possible. (A quick connectivity-check sketch appears after these steps.)
-
-2. If you have not installed a trusted certificate on your source server, select the **Trust server certificate** check box.
-
- When a trusted certificate is not installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
-
- > [!CAUTION]
- > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
-
- > [!IMPORTANT]
- > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
-
- ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details-2.png)
-
-3. Select **Next: Select databases**.
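
Before entering the source details in step 1, it can help to confirm that the source instance is reachable; a minimal sketch using the DNS and TCP test cmdlets built into Windows (the host name is a hypothetical placeholder):

```powershell
# Verify that the source SQL Server name resolves and that TCP port 1433
# is reachable from the machine running the checks.
Resolve-DnsName "sqlsource.contoso.com"
Test-NetConnection -ComputerName "sqlsource.contoso.com" -Port 1433
```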
-
-## Select databases for migration
-
-Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for the selected databases. If the migration downtimes are acceptable, continue with the migration. If they aren't acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md), or submit ideas, suggestions for improvement, and other feedback in the [Azure Community forum - Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
-
-1. Choose the database(s) you want to migrate from the list of available databases.
-1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**
-
- ![Source databases](media/tutorial-sql-server-to-azure-sql/select-database.png)
---
-## Specify target details
-
-1. On the **Select target** screen, provide authentication settings to your Azure SQL Database.
-
- ![Select target](media/tutorial-sql-server-to-azure-sql/select-target.png)
-
- > [!NOTE]
- > Currently, SQL authentication is the only supported authentication type.
-
-1. Select **Next: Map to target databases**, and then map the source and the target databases for migration.
-
- If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
-
- ![Map to target databases](media/tutorial-sql-server-to-azure-sql/dms-map-targets-activity-2.png)
-
-1. Select **Next: Configuration migration settings**, expand the table listing, and then review the list of affected fields.
-
Azure Database Migration Service automatically selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select the tables on this blade.
-
- ![Select tables](media/tutorial-sql-server-to-azure-sql/dms-configure-setting-activity-2.png)
-
-1. Select **Next: Summary**, review the migration configuration, and in the **Activity name** text box, specify a name for the migration activity.
-
- ![Choose validation option](media/tutorial-sql-server-to-azure-sql/dms-configuration-2.png)
-
-## Run the migration
--- Select **Start migration**.-
- The migration activity window appears, and the **Status** of the activity is **Pending**.
-
- ![Activity Status](media/tutorial-sql-server-to-azure-sql/dms-activity-status-1.png)
-
-## Monitor the migration
-
-1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
-
- ![Activity Status Completed](media/tutorial-sql-server-to-azure-sql/dms-completed-activity-1.png)
-
-2. Verify the target database(s) on the target **Azure SQL Database**.
-
-## Additional resources
-- For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
-- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](/azure/azure-sql/database/sql-database-paas-overview).
+
+ Title: "Tutorial: Migrate SQL Server offline to Azure SQL Database"
+
+description: Learn to migrate from SQL Server to Azure SQL Database offline by using Azure Database Migration Service (classic).
+++ Last updated : 02/08/2023+++
+ - seo-lt-2019
+ - ignite-2022
+ - sql-migration-content
++
+# Tutorial: Migrate SQL Server to Azure SQL Database using DMS (classic)
++
+> [!NOTE]
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
+>
+> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
+
+You can use Azure Database Migration Service to migrate databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you use Azure Database Migration Service to migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database, restored to an on-premises instance of SQL Server 2016 (or later), to a single database or pooled database in Azure SQL Database.
+
+You will learn how to:
+> [!div class="checklist"]
+>
+> - Assess and evaluate your on-premises database for any blocking issues by using the Data Migration Assistant.
+> - Use the Data Migration Assistant to migrate the database sample schema.
+> - Register the Azure DataMigration resource provider.
+> - Create an instance of Azure Database Migration Service.
+> - Create a migration project by using Azure Database Migration Service.
+> - Run the migration.
+> - Monitor the migration.
++
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+- Download and install [SQL Server 2016 or later](https://www.microsoft.com/sql-server/sql-server-downloads).
+- Enable the TCP/IP protocol, which is disabled by default during SQL Server Express installation, by following the instructions in the article [Enable or Disable a Server Network Protocol](/sql/database-engine/configure-windows/enable-or-disable-a-server-network-protocol#SSMSProcedure).
+- [Restore the AdventureWorks2016 database to the SQL Server instance.](/sql/samples/adventureworks-install-configure#restore-to-sql-server)
+- Create a database in Azure SQL Database by following the steps in the article [Create a database in Azure SQL Database using the Azure portal](/azure/azure-sql/database/single-database-create-quickstart). This tutorial assumes the database name is **AdventureWorksAzure**, but you can use any name you want.
+
+ > [!NOTE]
+ > If you use SQL Server Integration Services (SSIS) and want to migrate the catalog database for your SSIS projects/packages (SSISDB) from SQL Server to Azure SQL Database, the destination SSISDB will be created and managed automatically on your behalf when you provision SSIS in Azure Data Factory (ADF). For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
+
+- Download and install the latest version of the [Data Migration Assistant](https://www.microsoft.com/download/details.aspx?id=53595).
+- Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details.
+
+ > [!NOTE]
+ > During virtual network setup, if you use ExpressRoute with network peering to Microsoft, add the following service [endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) to the subnet in which the service will be provisioned:
+ >
+ > - Target database endpoint (for example, SQL endpoint, Azure Cosmos DB endpoint, and so on)
+ > - Storage endpoint
+ > - Service Bus endpoint
+ >
+ > This configuration is necessary because Azure Database Migration Service lacks internet connectivity.
+ >
+ >If you don't have site-to-site connectivity between the on-premises network and Azure, or if site-to-site connectivity bandwidth is limited, consider using Azure Database Migration Service in hybrid mode (Preview). Hybrid mode uses an on-premises migration worker together with an instance of Azure Database Migration Service running in the cloud. To create an instance of Azure Database Migration Service in hybrid mode, see the article [Create an instance of Azure Database Migration Service in hybrid mode using the Azure portal](./quickstart-create-data-migration-service-hybrid-portal.md).
+
+- Ensure that your virtual network Network Security Group (NSG) outbound security rules don't block outbound port 443 for the ServiceBus, Storage, and AzureMonitor service tags. For more detail on Azure virtual network NSG traffic filtering, see the article [Filter network traffic with network security groups](../virtual-network/virtual-network-vnet-plan-design-arm.md).
+- Configure your [Windows Firewall for database engine access](/sql/database-engine/configure-windows/configure-a-windows-firewall-for-database-engine-access).
+- Open your Windows firewall to allow Azure Database Migration Service to access the source SQL Server instance, which by default listens on TCP port 1433. If your default instance is listening on some other port, add that port to the firewall.
+- If you're running multiple named SQL Server instances using dynamic ports, you may wish to enable the SQL Browser Service and allow access to UDP port 1434 through your firewalls so that Azure Database Migration Service can connect to a named instance on your source server.
+- When using a firewall appliance in front of your source database(s), you may need to add firewall rules to allow Azure Database Migration Service to access the source database(s) for migration.
+- Create a server-level IP [firewall rule](/azure/azure-sql/database/firewall-configure) for Azure SQL Database to allow Azure Database Migration Service access to the target databases. Provide the subnet range of the virtual network used for Azure Database Migration Service. For a combined example of these firewall-related steps, see the sketch after this list.
+- Ensure that the credentials used to connect to source SQL Server instance have [CONTROL SERVER](/sql/t-sql/statements/grant-server-permissions-transact-sql) permissions.
+- Ensure that the credentials used to connect to target Azure SQL Database instance have [CONTROL DATABASE](/sql/t-sql/statements/grant-database-permissions-transact-sql) permission on the target databases.
+
+ > [!IMPORTANT]
+ > Creating an instance of Azure Database Migration Service requires access to virtual network settings that are normally not within the same resource group. As a result, the user creating an instance of DMS requires permission at subscription level. To create the required roles, which you can assign as needed, run the following script:
+ >
+ > ```powershell
+ >
+ > $readerActions = `
+ > "Microsoft.Network/networkInterfaces/ipConfigurations/read", `
+ > "Microsoft.DataMigration/*/read", `
+ > "Microsoft.Resources/subscriptions/resourceGroups/read"
+ >
+ > $writerActions = `
+ > "Microsoft.DataMigration/services/*/write", `
+ > "Microsoft.DataMigration/services/*/delete", `
+ > "Microsoft.DataMigration/services/*/action", `
+ > "Microsoft.Network/virtualNetworks/subnets/join/action", `
+ > "Microsoft.Network/virtualNetworks/write", `
+ > "Microsoft.Network/virtualNetworks/read", `
+ > "Microsoft.Resources/deployments/validate/action", `
+ > "Microsoft.Resources/deployments/*/read", `
+ > "Microsoft.Resources/deployments/*/write"
+ >
+ > $writerActions += $readerActions
+ >
+ > # TODO: replace with actual subscription IDs
+ > $subScopes = @("/subscriptions/00000000-0000-0000-0000-000000000000/", "/subscriptions/11111111-1111-1111-1111-111111111111/")
+ >
+ > function New-DmsReaderRole() {
+ > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
+ > $aRole.Name = "Azure Database Migration Reader"
+ > $aRole.Description = "Lets you perform read only actions on DMS service/project/tasks."
+ > $aRole.IsCustom = $true
+ > $aRole.Actions = $readerActions
+ > $aRole.NotActions = @()
+ >
+ > $aRole.AssignableScopes = $subScopes
+ > #Create the role
+ > New-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function New-DmsContributorRole() {
+ > $aRole = [Microsoft.Azure.Commands.Resources.Models.Authorization.PSRoleDefinition]::new()
+ > $aRole.Name = "Azure Database Migration Contributor"
+ > $aRole.Description = "Lets you perform CRUD actions on DMS service/project/tasks."
+ > $aRole.IsCustom = $true
+ > $aRole.Actions = $writerActions
+ > $aRole.NotActions = @()
+ >
+ > $aRole.AssignableScopes = $subScopes
+ > #Create the role
+ > New-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function Update-DmsReaderRole() {
+ > $aRole = Get-AzRoleDefinition "Azure Database Migration Reader"
+ > $aRole.Actions = $readerActions
+ > $aRole.NotActions = @()
+ > Set-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > function Update-DmsContributorRole() {
+ > $aRole = Get-AzRoleDefinition "Azure Database Migration Contributor"
+ > $aRole.Actions = $writerActions
+ > $aRole.NotActions = @()
+ > Set-AzRoleDefinition -Role $aRole
+ > }
+ >
+ > # Invoke above functions
+ > New-DmsReaderRole
+ > New-DmsContributorRole
+ > Update-DmsReaderRole
+ > Update-DmsContributorRole
+ > ```
+
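+The following sketch consolidates the firewall-related prerequisites above (the NSG rules, the Windows firewall rule, and the server-level SQL firewall rule). It's a hedged example, not part of the original tutorial: the resource names, subnet range, and priorities are assumptions to replace with your own values.
+
+```powershell
+# Assumed names and ranges -- replace with your own.
+$rg        = "MyResourceGroup"
+$sqlServer = "my-sql-server"   # logical server that hosts the target database
+
+# 1. Allow outbound 443 to the required service tags on the DMS subnet's NSG.
+$nsg = Get-AzNetworkSecurityGroup -Name "dms-nsg" -ResourceGroupName $rg
+$priority = 100
+foreach ($tag in "ServiceBus", "Storage", "AzureMonitor") {
+    $nsg | Add-AzNetworkSecurityRuleConfig -Name "Allow-$tag-443" -Access Allow `
+        -Protocol Tcp -Direction Outbound -Priority $priority `
+        -SourceAddressPrefix "*" -SourcePortRange "*" `
+        -DestinationAddressPrefix $tag -DestinationPortRange 443 | Out-Null
+    $priority += 10
+}
+$nsg | Set-AzNetworkSecurityGroup | Out-Null
+
+# 2. On the source SQL Server host: open the default SQL Server port.
+New-NetFirewallRule -DisplayName "Allow DMS to SQL Server" -Direction Inbound `
+    -Protocol TCP -LocalPort 1433 -Action Allow
+
+# 3. Server-level firewall rule so DMS can reach the target Azure SQL Database.
+New-AzSqlServerFirewallRule -ResourceGroupName $rg -ServerName $sqlServer `
+    -FirewallRuleName "AllowDmsSubnet" -StartIpAddress "10.1.0.0" -EndIpAddress "10.1.0.255"
+```
+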
+## Assess your on-premises database
+
+Before you can migrate data from a SQL Server instance to a single database or pooled database in Azure SQL Database, you need to assess the SQL Server database for any blocking issues that might prevent migration. Using the Data Migration Assistant, follow the steps described in the article [Performing a SQL Server migration assessment](/sql/dma/dma-assesssqlonprem) to complete the on-premises database assessment. A summary of the required steps follows:
+
+1. In the Data Migration Assistant, select the New (+) icon, and then select the **Assessment** project type.
+2. Specify a project name. From the **Assessment type** drop-down list, select **Database Engine**, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then select **Create** to create the project.
+
+ When you're assessing the source SQL Server database migrating to a single database or pooled database in Azure SQL Database, you can choose one or both of the following assessment report types:
+
+ - Check database compatibility
+ - Check feature parity
+
+ Both report types are selected by default.
+
+3. In the Data Migration Assistant, on the **Options** screen, select **Next**.
+4. On the **Select sources** screen, in the **Connect to a server** dialog box, provide the connection details to your SQL Server, and then select **Connect**.
+5. In the **Add sources** dialog box, select **AdventureWorks2016**, select **Add**, and then select **Start Assessment**.
+
+ > [!NOTE]
+ > If you use SSIS, DMA does not currently support the assessment of the source SSISDB. However, SSIS projects/packages will be assessed/validated as they are redeployed to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
+
+ When the assessment is complete, the results display as shown in the following graphic:
+
+ ![Assess data migration](media/tutorial-sql-server-to-azure-sql/dma-assessments.png)
+
+ For databases in Azure SQL Database, the assessments identify feature parity issues and migration blocking issues for deploying to a single database or pooled database.
+
+ - The **SQL Server feature parity** category provides a comprehensive set of recommendations, alternative approaches available in Azure, and mitigating steps to help you plan the effort for your migration projects.
+ - The **Compatibility issues** category identifies partially supported or unsupported features that reflect compatibility issues that might block migrating SQL Server database(s) to Azure SQL Database. Recommendations are also provided to help you address those issues.
+
+6. Review the assessment results for migration blocking issues and feature parity issues by selecting the specific options.
+
+## Migrate the sample schema
+
+After you're comfortable with the assessment and satisfied that the selected database is a viable candidate for migration to a single database or pooled database in Azure SQL Database, use DMA to migrate the schema to Azure SQL Database.
+
+> [!NOTE]
+> Before you create a migration project in Data Migration Assistant, be sure that you have already provisioned a database in Azure as mentioned in the prerequisites.
+
+> [!IMPORTANT]
+> If you use SSIS, DMA does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
+
+To migrate the **AdventureWorks2016** schema to a single database or pooled database in Azure SQL Database, perform the following steps:
+
+1. In the Data Migration Assistant, select the New (+) icon, and then under **Project type**, select **Migration**.
+2. Specify a project name, in the **Source server type** text box, select **SQL Server**, and then in the **Target server type** text box, select **Azure SQL Database**.
+3. Under **Migration Scope**, select **Schema only**.
+
+ After performing the previous steps, the Data Migration Assistant interface should appear as shown in the following graphic:
+
+ ![Create Data Migration Assistant Project](media/tutorial-sql-server-to-azure-sql/dma-create-project.png)
+
+4. Select **Create** to create the project.
+5. In the Data Migration Assistant, specify the source connection details for your SQL Server, select **Connect**, and then select the **AdventureWorks2016** database.
+
+ ![Data Migration Assistant Source Connection Details](media/tutorial-sql-server-to-azure-sql/dma-source-connect.png)
+
+6. Select **Next**. Under **Connect to target server**, specify the target connection details for the Azure SQL Database, select **Connect**, and then select the **AdventureWorksAzure** database you pre-provisioned in Azure SQL Database.
+
+ ![Data Migration Assistant Target Connection Details](media/tutorial-sql-server-to-azure-sql/dma-target-connect.png)
+
+7. Select **Next** to advance to the **Select objects** screen, on which you can specify the schema objects in the **AdventureWorks2016** database that need to be deployed to Azure SQL Database.
+
+ By default, all objects are selected.
+
+ ![Generate SQL Scripts](media/tutorial-sql-server-to-azure-sql/dma-assessment-source.png)
+
+8. Select **Generate SQL script** to create the SQL scripts, and then review the scripts for any errors.
+
+ ![Schema Script](media/tutorial-sql-server-to-azure-sql/dma-schema-script.png)
+
+9. Select **Deploy schema** to deploy the schema to Azure SQL Database, and then after the schema is deployed, check the target server for any anomalies.
+
+ ![Deploy Schema](media/tutorial-sql-server-to-azure-sql/dma-schema-deploy.png)
+++
+## Create a migration project
+
+After the service is created, locate it within the Azure portal, open it, and then create a new migration project.
+
+1. In the Azure portal menu, select **All services**. Search for and select **Azure Database Migration Services**.
+
+ ![Locate all instances of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-search.png)
+
+2. On the **Azure Database Migration Services** screen, select the Azure Database Migration Service instance that you created.
+
+3. Select **New Migration Project**.
+
+ ![Locate your instance of Azure Database Migration Service](media/tutorial-sql-server-to-azure-sql/dms-instance-search.png)
+
+4. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **SQL Server**, in the **Target server type** text box, select **Azure SQL Database**, and then for **Choose Migration activity type**, select **Data migration**.
+
+ ![Create Database Migration Service Project](media/tutorial-sql-server-to-azure-sql/dms-create-project-2.png)
+
+5. Select **Create and run activity** to create the project and run the migration activity.
+
+## Specify source details
+
+1. On the **Select source** screen, specify the connection details for the source SQL Server instance.
+
+ Make sure to use a fully qualified domain name (FQDN) for the source SQL Server instance name. You can also use the IP address for situations in which DNS name resolution isn't possible.
+
+2. If you have not installed a trusted certificate on your source server, select the **Trust server certificate** check box.
+
+ When a trusted certificate is not installed, SQL Server generates a self-signed certificate when the instance is started. This certificate is used to encrypt the credentials for client connections.
+
+ > [!CAUTION]
+ > TLS connections that are encrypted using a self-signed certificate do not provide strong security. They are susceptible to man-in-the-middle attacks. You should not rely on TLS using self-signed certificates in a production environment or on servers that are connected to the internet.
+
+ > [!IMPORTANT]
+ > If you use SSIS, DMS does not currently support the migration of source SSISDB, but you can redeploy your SSIS projects/packages to the destination SSISDB hosted by Azure SQL Database. For more information about migrating SSIS packages, see the article [Migrate SQL Server Integration Services packages to Azure](./how-to-migrate-ssis-packages.md).
+
+ ![Source Details](media/tutorial-sql-server-to-azure-sql/dms-source-details-2.png)
+
+3. Select **Next: Select databases**.
+
+## Select databases for migration
+
+Select either all databases or specific databases that you want to migrate to Azure SQL Database. DMS provides you with the expected migration time for the selected databases. If the expected downtime is acceptable, continue with the migration. If it isn't acceptable, consider migrating to [SQL Managed Instance with near-zero downtime](tutorial-sql-server-managed-instance-online.md), or submit ideas and suggestions for improvement in the [Azure Community forum - Azure Database Migration Service](https://feedback.azure.com/d365community/forum/2dd7eb75-ef24-ec11-b6e6-000d3a4f0da0).
+
+1. Choose the database(s) you want to migrate from the list of available databases.
+1. Review the expected downtime. If it's acceptable, select **Next: Select target >>**.
+
+ ![Source databases](media/tutorial-sql-server-to-azure-sql/select-database.png)
+++
+## Specify target details
+
+1. On the **Select target** screen, provide authentication settings to your Azure SQL Database.
+
+ ![Select target](media/tutorial-sql-server-to-azure-sql/select-target.png)
+
+ > [!NOTE]
+ > Currently, SQL authentication is the only supported authentication type.
+
+1. Select **Next: Map to target databases**. On the **Map to target databases** screen, map the source and the target databases for migration.
+
+ If a target database has the same name as the source database, Azure Database Migration Service selects the target database by default.
+
+ ![Map to target databases](media/tutorial-sql-server-to-azure-sql/dms-map-targets-activity-2.png)
+
+1. Select **Next: Configuration migration settings**, expand the table listing, and then review the list of affected fields.
+
+ Azure Database Migration Service automatically selects all the empty source tables that exist on the target Azure SQL Database instance. If you want to remigrate tables that already include data, you need to explicitly select the tables on this blade.
+
+ ![Select tables](media/tutorial-sql-server-to-azure-sql/dms-configure-setting-activity-2.png)
+
+1. Select **Next: Summary**, review the migration configuration, and then in the **Activity name** text box, specify a name for the migration activity.
+
+ ![Choose validation option](media/tutorial-sql-server-to-azure-sql/dms-configuration-2.png)
+
+## Run the migration
+
+- Select **Start migration**.
+
+ The migration activity window appears, and the **Status** of the activity is **Pending**.
+
+ ![Activity Status](media/tutorial-sql-server-to-azure-sql/dms-activity-status-1.png)
+
+## Monitor the migration
+
+1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Completed**.
+
+ ![Activity Status Completed](media/tutorial-sql-server-to-azure-sql/dms-completed-activity-1.png)
+
+2. Verify the target database(s) on the target **Azure SQL Database**.
+
+## Additional resources
+
+- For information about Azure Database Migration Service, see the article [What is Azure Database Migration Service?](./dms-overview.md).
+- For information about Azure SQL Database, see the article [What is the Azure SQL Database service?](/azure/azure-sql/database/sql-database-paas-overview).
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
- seo-lt-2019 - fasttrack-edit - ignite-2022
+ - sql-migration-content
# Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS (classic)
dms Tutorial Sql Server To Virtual Machine Offline Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-offline-ads.md
Last updated 06/07/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines offline in Azure Data Studio
dms Tutorial Sql Server To Virtual Machine Online Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-virtual-machine-online-ads.md
Last updated 06/07/2023 -+
+ - seo-lt-2019
+ - sql-migration-content
# Tutorial: Migrate SQL Server to SQL Server on Azure Virtual Machines online in Azure Data Studio
dms Tutorial Transparent Data Encryption Migration Ads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-transparent-data-encryption-migration-ads.md
Last updated 02/03/2023 +
+ - sql-migration-content
# Tutorial: Migrate TDE-enabled databases (preview) to Azure SQL in Azure Data Studio
The following table describes the current status of the TDE-enabled database mig
- [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
- [Tutorial: Migrate SQL Server to Azure SQL Database - Offline](./tutorial-sql-server-azure-sql-database-offline-ads.md)
- [Tutorial: Migrate SQL Server to Azure SQL Managed Instance - Online](./tutorial-sql-server-managed-instance-online-ads.md)
-- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)
+- [Tutorial: Migrate SQL Server to SQL Server On Azure Virtual Machines - Online](./tutorial-sql-server-to-virtual-machine-online-ads.md)
dns Dns Private Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-overview.md
The following restrictions hold with respect to virtual networks:
### Subnet restrictions

Subnets used for DNS resolver have the following limitations:
- A subnet must be a minimum of /28 address space or a maximum of /24 address space. A /28 subnet is sufficient to accomodate current endpoint limits. A subnet size of /27 to /24 can provide flexibility if these limits change.
+- A subnet must be a minimum of /28 address space or a maximum of /24 address space. A /28 subnet is sufficient to accommodate current endpoint limits. A subnet size of /27 to /24 can provide flexibility if these limits change.
- A subnet can't be shared between multiple DNS resolver endpoints. A single subnet can only be used by a single DNS resolver endpoint.
- All IP configurations for a DNS resolver inbound endpoint must reference the same subnet. Spanning multiple subnets in the IP configuration for a single DNS resolver inbound endpoint isn't allowed.
- The subnet used for a DNS resolver inbound endpoint must be within the virtual network referenced by the parent DNS resolver.
event-grid Event Schema Health Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-health-resources.md
The following example shows the schema of a key-value modified event:
+## Contact us
+If you have any questions or feedback on this feature, don't hesitate to reach us at [arnsupport@microsoft.com](mailto:arnsupport@microsoft.com).
+ ## Next steps
-See [Subscribe to Azure Resource Notifications - Health Resources events](subscribe-to-resource-notifications-health-resources-events.md).
+See [Subscribe to Azure Resource Notifications - Health Resources events](subscribe-to-resource-notifications-health-resources-events.md).
event-grid Event Schema Resource Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-resource-notifications.md
To enhance customer experience, a built-in role definition that encompasses all
} ```
+## Contact us
+If you have any questions or feedback on this feature, don't hesitate to reach us at [arnsupport@microsoft.com](mailto:arnsupport@microsoft.com).
+ ## Next steps
-See [Azure Resource Notifications - Health Resources events in Azure Event Grid](event-schema-health-resources.md).
+See [Azure Resource Notifications - Health Resources events in Azure Event Grid](event-schema-health-resources.md).
event-grid Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/managed-service-identity.md
Title: Event delivery, managed service identity, and private link description: This article describes how to enable managed service identity for an Azure event grid topic. Use it to forward events to supported destinations. + Last updated 03/25/2021
event-grid Query Event Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/query-event-subscriptions.md
Title: Query Azure Event Grid subscriptions
description: This article describes how to list Event Grid subscriptions in your Azure subscription. You provide different parameters based on the type of subscription. Last updated 09/28/2021 -+ # Query Event Grid subscriptions
event-grid Subscribe To Resource Notifications Health Resources Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-resource-notifications-health-resources-events.md
Title: Subscribe to Azure Resource Notifications - Health Resources events description: This article explains how to subscribe to events published by Azure Resource Notifications - Health Resources. + Last updated 09/08/2023
Value = Microsoft.Compute/virtualMachines
+## Contact us
+If you have any questions or feedback on this feature, don't hesitate to reach us at [arnsupport@microsoft.com](mailto:arnsupport@microsoft.com).
## Next steps

For detailed information about these events, see [Azure Resource Notifications - Health Resources events](event-schema-health-resources.md).
-
event-hubs Event Hubs Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-samples.md
You can find Event Hubs samples on [GitHub](https://github.com/Azure/azure-event
## Go samples
-You can find Go samples for Azure Event Hubs in the [azure-event-hubs-go](https://github.com/Azure/azure-event-hubs-go/tree/master/_examples) GitHub repository.
+You can find Go samples for Azure Event Hubs in the [azeventhubs](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/messaging/azeventhubs) folder in the [azure-sdk-for-go](https://github.com/Azure/azure-sdk-for-go/) GitHub repository.
## Azure CLI samples

You can find Azure CLI samples for Azure Event Hubs in the [azure-event-hubs](https://github.com/Azure/azure-event-hubs/tree/master/samples/Management/CLI) GitHub repository.
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
Previously updated : 09/11/2023 Last updated : 10/05/2023
Additionally, you can downgrade the virtual network gateway SKU. The following d
- High Performance to Standard
- ErGw2Az to ErGw1Az
-For all other downgrade scenarios, you'll need to delete and recreate the gateway. Recreating a gateway incurs downtime.
+For all other downgrade scenarios, you need to delete and recreate the gateway. Recreating a gateway incurs downtime.
### <a name="gatewayfeaturesupport"></a>Feature support by gateway SKU
Before you create an ExpressRoute gateway, you must create a gateway subnet. The
When you create the gateway subnet, you specify the number of IP addresses that the subnet contains. The IP addresses in the gateway subnet are allocated to the gateway VMs and gateway services. Some configurations require more IP addresses than others.
-When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Further more, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26 etc.). If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger. If you're creating a dual stack gateway subnet, we recommend that you also use an IPv6 range of /64 or larger. This set up will accommodate most configurations.
+When you're planning your gateway subnet size, refer to the documentation for the configuration that you're planning to create. For example, the ExpressRoute/VPN Gateway coexist configuration requires a larger gateway subnet than most other configurations. Furthermore, you may want to make sure your gateway subnet contains enough IP addresses to accommodate possible future configurations. While you can create a gateway subnet as small as /29, we recommend that you create a gateway subnet of /27 or larger (/27, /26, and so on). If you plan on connecting 16 ExpressRoute circuits to your gateway, you **must** create a gateway subnet of /26 or larger. If you're creating a dual stack gateway subnet, we recommend that you also use an IPv6 range of /64 or larger. This setup accommodates most configurations.
The following Resource Manager PowerShell example shows a gateway subnet named GatewaySubnet. You can see the CIDR notation specifies a /27, which allows for enough IP addresses for most configurations that currently exist.
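
A minimal sketch along those lines, assuming an existing virtual network (the names and address range are placeholders):

```powershell
# Placeholder names and address range -- substitute your own.
$vnet = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "MyResourceGroup"

# The subnet must be named GatewaySubnet for the gateway to use it.
Add-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.0.3.0/27" -VirtualNetwork $vnet
$vnet | Set-AzVirtualNetwork
```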
ExpressRoute virtual network gateway is designed to exchange network routes and
For more information about FastPath, including limitations and requirements, see [About FastPath](about-fastpath.md).
-## Connectivity to Private Endpoints
+## Connectivity to private endpoints
The ExpressRoute virtual network gateway facilitates connectivity to private endpoints deployed in the same virtual network as the virtual network gateway and across virtual network peers.
The ExpressRoute virtual network gateway facilitates connectivity to private end
> * Throughput and control plane capacity may be half compared to connectivity to non-private-endpoint resources.
> * During a maintenance period, you may experience intermittent connectivity issues to private endpoint resources.
+### Private endpoint connectivity and planned maintenance events
+
+Private endpoint connectivity is stateful. When a connection to a private endpoint is established over ExpressRoute private peering, inbound and outbound connections are routed through one of the backend instances of the gateway infrastructure. During a maintenance event, backend instances of the virtual network gateway infrastructure are rebooted one at a time. This could result in intermittent connectivity issues during the maintenance event.
+
+To prevent or reduce the impact of connectivity issues with private endpoints during maintenance activities, we recommend that you adjust the TCP time-out value to a value between 15 and 30 seconds on your on-premises applications. Examine the requirements of your application to test and configure the optimal value.
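+
+On Windows hosts, one way to apply this is to shorten the system TCP keepalive interval. This is a hedged sketch; whether keepalive tuning is the appropriate time-out adjustment for your application stack is an assumption you should validate:
+
+```powershell
+# Assumption: the on-premises application runs on Windows and honors the
+# system TCP keepalive settings. The value is in milliseconds (30 seconds here).
+Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" `
+    -Name "KeepAliveTime" -Value 30000 -Type DWord
+# Restart the host (or at least the application) for the change to take effect.
+```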
## Route Server
-When you create or delete an Azure Route Server from a virtual network that contains a Virtual Network Gateway (ExpressRoute or VPN), expect downtime until the operation gets completed.
+The creation or deletion of an Azure Route Server from a virtual network that has a Virtual Network Gateway (either ExpressRoute or VPN) may cause downtime until the operation is completed.
## <a name="resources"></a>REST APIs and PowerShell cmdlets
For more technical resources and specific syntax requirements when using REST AP
## VNet-to-VNet connectivity
-By default, connectivity between virtual networks is enabled when you link multiple virtual networks to the same ExpressRoute circuit. Microsoft recommends not using your ExpressRoute circuit for communication between virtual networks. Instead, it is recommended to use [VNet peering](../virtual-network/virtual-network-peering-overview.md). For more information about why VNet-to-VNet connectivity isn't recommended over ExpressRoute, see [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
+By default, connectivity between virtual networks is enabled when you link multiple virtual networks to the same ExpressRoute circuit. Microsoft recommends not using your ExpressRoute circuit for communication between virtual networks. Instead, it's recommended to use [virtual network peering](../virtual-network/virtual-network-peering-overview.md). For more information about why VNet-to-VNet connectivity isn't recommended over ExpressRoute, see [connectivity between virtual networks over ExpressRoute](virtual-network-connectivity-guidance.md).
### Virtual network peering
firewall Basic Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/basic-features.md
Azure Firewall Availability Zones are available in regions that support Availabi
You can limit outbound HTTP/S traffic or Azure SQL traffic to a specified list of fully qualified domain names (FQDN) including wild cards. This feature doesn't require TLS termination.
+The following video shows how to create an application rule: <br><br>
+
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=d8dbf4b9-4f75-4c88-b717-c3664b667e8b]
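+
+The following hedged sketch creates a comparable application rule with Azure PowerShell; the firewall name, resource group, source range, and target FQDN are assumptions:
+
+```powershell
+# Assumed names, source range, and FQDN -- replace with your own.
+$rule = New-AzFirewallApplicationRule -Name "Allow-Contoso" `
+    -SourceAddress "10.0.0.0/24" -Protocol "https:443" -TargetFqdn "*.contoso.com"
+
+$collection = New-AzFirewallApplicationRuleCollection -Name "App-Coll01" `
+    -Priority 200 -ActionType Allow -Rule $rule
+
+# Attach the collection to an existing firewall and push the update.
+$azFw = Get-AzFirewall -Name "MyFirewall" -ResourceGroupName "MyResourceGroup"
+$azFw.AddApplicationRuleCollection($collection)
+Set-AzFirewall -AzureFirewall $azFw
+```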
+
## Network traffic filtering rules

You can centrally create allow or deny network filtering rules by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, so it can distinguish legitimate packets for different types of connections. Rules are enforced and logged across multiple subscriptions and virtual networks.
firewall Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/features.md
Azure Firewall can scale out as much as you need to accommodate changing network
You can limit outbound HTTP/S traffic or Azure SQL traffic to a specified list of fully qualified domain names (FQDN) including wild cards. This feature doesn't require TLS termination.
+The following video shows how to create an application rule: <br><br>
+
+> [!VIDEO https://learn-video.azurefd.net/vod/player?id=d8dbf4b9-4f75-4c88-b717-c3664b667e8b]
+
## Network traffic filtering rules

You can centrally create *allow* or *deny* network filtering rules by source and destination IP address, port, and protocol. Azure Firewall is fully stateful, so it can distinguish legitimate packets for different types of connections. Rules are enforced and logged across multiple subscriptions and virtual networks.
frontdoor Front Door Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-waf.md
+ Last updated 10/01/2020
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
You must already have a user managed identity created. To create a new identity,
## Verify access
-1. Go to the Azure Front Door profile you enabled managed identity and select **Secret** from under *Settings*.
+1. Go to the Azure Front Door profile where you enabled managed identity, and select **Secrets** from under *Security*.
:::image type="content" source="./media/managed-identity/secrets.png" alt-text="Screenshot of accessing secrets from under settings of a Front Door profile.":::
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups
description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Last updated 07/07/2022 -+ # Azure Resource Graph sample queries for management groups
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
description: Azure Policy evaluations and effects determine compliance. Learn ho
Last updated 11/03/2022 -+ # Get compliance data of Azure resources
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
Start-AzPolicyRemediation -Name 'myRemediation' -PolicyAssignmentId '/subscriptio
You may also choose to adjust remediation settings through these optional parameters:
- `-FailureThreshold` - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 and 100. By default, the failure threshold is 100%.
-- `-ParallelDeploymentCount` - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources.
-- `-ResourceCount` - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
+- `-ResourceCount` - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources.
+- `-ParallelDeploymentCount` - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
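+
+For example, this hedged sketch starts a remediation task that tunes all three settings; the remediation name and policy assignment ID are placeholder assumptions:
+
+```powershell
+# Placeholder assignment ID -- replace with your policy assignment's resource ID.
+# FailureThreshold: fail the task if more than 10% of deployments fail.
+# ResourceCount: remediate up to 1,000 non-compliant resources in this task.
+# ParallelDeploymentCount: run 30 deployments at a time.
+$params = @{
+    Name                    = 'myRemediation'
+    PolicyAssignmentId      = '/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Authorization/policyAssignments/myAssignment'
+    FailureThreshold        = 10
+    ResourceCount           = 1000
+    ParallelDeploymentCount = 30
+}
+Start-AzPolicyRemediation @params
+```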
For more remediation cmdlets and examples, see the [Az.PolicyInsights](/powershell/module/az.policyinsights/#policy_insights) module.
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy
description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Last updated 08/31/2023 -+ # Azure Resource Graph sample queries for Azure Policy
governance Explore Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/explore-resources.md
Title: Explore your Azure resources
description: Learn to use the Resource Graph query language to explore your resources and discover how they're connected. Last updated 08/17/2021 -+ # Explore your Azure resources with Resource Graph
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
Title: Advanced query samples
description: Use Azure Resource Graph to run some advanced queries, including working with columns, listing tags used, and matching resources with regular expressions. Last updated 06/15/2022 -+
governance Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/starter.md
Last updated 08/31/2023 -+ # Starter Resource Graph query samples
healthcare-apis Dicom Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-services-overview.md
- Title: Overview of the DICOM service - Azure Health Data Services
-description: In this article, you'll learn concepts of DICOM and the DICOM service.
---- Previously updated : 09/01/2023---
-# What is the DICOM service?
-
-DICOM (Digital Imaging and Communications in Medicine) is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare.
-
-The DICOM service is a managed service within [Azure Health Data Services](../healthcare-apis-overview.md) that ingests and persists DICOM objects at multiple thousands of images per second. It facilitates communication and transmission of imaging data with any DICOMweb&trade; enabled systems or applications via DICOMweb Standard APIs like [Store (STOW-RS)](dicom-services-conformance-statement-v2.md#store-stow-rs), [Search (QIDO-RS)](dicom-services-conformance-statement-v2.md#search-qido-rs), [Retrieve (WADO-RS)](dicom-services-conformance-statement-v2.md#retrieve-wado-rs). It's backed by a managed Platform-as-a Service (PaaS) offering in the cloud with complete [PHI](https://www.hhs.gov/answers/hipaa/what-is-phi/https://docsupdatetracker.net/index.html) compliance that you can upload PHI data to the DICOM service and exchange it through secure networks.
--- **PHI Compliant**: Protect your PHI with unparalleled security intelligence. Your data is isolated to a unique database per API instance and protected with multi-region failover. The DICOM service implements a layered, in-depth defense and advanced threat protection for your data.-- **Extended Query Tags**: Additionally index DICOM studies, series, and instances on both standard and private DICOM tags by expanding list of tags that are already specified within [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md).-- **Change Feed**: Access ordered, guaranteed, immutable, read-only logs of all the changes that occur in DICOM service. Client applications can read these logs at any time independently, in parallel and at their own pace.-- **DICOMcast**: Via DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service, or FHIR server, as an imaging study resource allowing a single source of truth for both clinical data and imaging metadata. This feature is available as an open-source feature that can be self-hosted in Azure. Learn more about [deploying DICOMcast](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md).-- **Region availability**: DICOM service has wide-range of [availability across many regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-api-for-fhir&regions=all) with multi-region failover protection and continuously expanding.-- **Scalability**: DICOM service is designed out-of-the-box to support different workload levels at a hospital, country/region, and global scale without sacrificing any performance spec by using autoscaling features. -- **Role-based access**: You control your data. Role-based access control (RBAC) enables you to manage how your data is stored and accessed. Providing increased security and reducing administrative workload, you determine who has access to the datasets you create, based on role definitions you create for your environment.-
-[Open-source DICOM-server project](https://github.com/microsoft/dicom-server) is also constantly monitored for feature parity with managed service so that developers can deploy open source version as a set of Docker containers to speed up development and test in their environments, and contribute to potential future managed service features.
-
-## Applications for the DICOM service
-
-In order to effectively treat patients, research new treatments, diagnose solutions, or provide an effective overview of the health history of a single patient, organizations must integrate data across several sources. One of the most pressing integrations is between clinical and imaging data. DICOM service enables imaging data to securely persist in the Microsoft cloud and allows it to reside with EHR and IoT data in the same Azure subscription.
-
-FHIR&trade; is becoming an important standard for clinical data and provides extensibility to support integration of other types of data directly, or through references. By using DICOM service, organizations can store references to imaging data in FHIR&trade; and enable queries that cross clinical and imaging datasets. This can enable many different scenarios, for example:
--- **Image back-up**: Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, any team or organization can use the DICOM service to back up their images with unlimited storage and access. And there's no need to de-identify PHI data as our service is validated for PHI compliance.-- **Image exchange and collaboration**: Share an image, a sub set of images in your storage, or entire image library instantly with or without related EHR data.-- **Disaster recovery**: High availability is a resiliency characteristic of DICOM service. High availability is implemented in place (in the same region as your primary service) by designing it as a feature of the primary system.-- **Creating cohorts for research**: Often through queries for patients that match data in both clinical and imaging systems, such as this one (which triggered the effort to integrate FHIR&trade; and DICOM data): ΓÇ£Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years.ΓÇ¥-- **Finding outcomes for similar patients to understand options and plan treatments**: When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis, even when these include imaging data.-- **Providing a longitudinal view of a patient during diagnosis**: Radiologists, especially teleradiologists, often don't have complete access to a patientΓÇÖs medical history and related imaging studies. Through FHIR&trade; integration, this data can be easily provided, even to radiologists outside of the organizationΓÇÖs local network.-- **Closing the feedback loop with teleradiologists**: Ideally a radiologist has access to a hospitalΓÇÖs clinical data to close the feedback loop after making a recommendation. However for teleradiologists, this is often not the case. Instead, they're often unable to close the feedback loop after performing a diagnosis, since they don't have access to patient data after the initial read. With no (or limited) access to clinical results or outcomes, they canΓÇÖt get the feedback necessary to improve their skills. As one teleradiologist put it: ΓÇ£Take parathyroid for example. We do more than any other clinic in the country/region, and yet I have to beg and plead for surgeons to tell me what they actually found. Out of the more than 500 studies I do each month, I get direct feedback on only three or four.ΓÇ¥ Through integration with FHIR&trade;, an organization can easily create a tool that will provide direct feedback to teleradiologists, helping them to hone their skills and make better recommendations in the future.-- **Closing the feedback loop for AI/ML models**: Machine learning models do best when real-world feedback can be used to improve their models. However, third-party ML model providers rarely get the feedback they need to improve their models over time. For instance, one ISV put it this way: ΓÇ£We use a combination of machine models and human experts to recommend a treatment plan for heart surgery. However, we only rarely get feedback from physicians on how accurate our plan was. For instance, we often recommend a stent size. 
WeΓÇÖd love to get feedback on if our prediction was correct, but the only time we hear from customers is when thereΓÇÖs a major issue with our recommendations.ΓÇ¥ As with feedback for teleradiologists, integration with FHIR&trade; allows organizations to create a mechanism to provide feedback to the model retraining pipeline.-
-## Deploy DICOM service to Azure
-
-DICOM service needs an Azure subscription to configure and run the required components. These components are, by default, created inside of an existing or new Azure Resource Group to simplify management. Additionally, an Azure Active Directory account is required. For each instance of DICOM service, we create a combination of isolated and multi-tenant resource.
-
-## DICOM server
-
-The Medical Imaging Server for DICOM (hereby known as DICOM server) is an open source DICOM server that is easily deployed on Azure. It allows standards-based communication with any DICOMwebΓäó enabled systems, and injects DICOM metadata into a FHIR server to create a holistic view of patient data. See [DICOM server](https://github.com/microsoft/dicom-server).
-
-## Summary
-
-This conceptual article provided you with an overview of DICOM and the DICOM service.
-
-## Next steps
-
-To get started using the DICOM service, see
-
->[!div class="nextstepaction"]
->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
-
-For more information about how to use the DICOMweb&trade; Standard APIs with the DICOM service, see
-
->[!div class="nextstepaction"]
->[Using DICOMweb&trade;Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/overview.md
+
+ Title: Overview of the DICOM service in Azure Health Data Services
+description: The DICOM service is a cloud-based solution for storing, managing, and exchanging medical imaging data securely and efficiently with any DICOMwebΓäó-enabled systems or applications. Learn more about its benefits and use cases.
+++ Last updated : 10/06/2023+++
+# What is the DICOM service?
+
+The DICOM service is a cloud-based solution that enables healthcare organizations to store, manage, and exchange medical imaging data securely and efficiently with any DICOMweb&trade;-enabled systems or applications. The DICOM service is part of [Azure Health Data Services](../healthcare-apis-overview.md).
+
+The DICOM service offers many benefits, including:
+
+- **Global availability**. The DICOM service is available in any of the regions where Azure Health Data Services is available. Microsoft is continuously expanding availability of the DICOM service, so check [regional availability](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=health-data-services&regions=all) for updates.
+
+- **PHI compliance**. The DICOM service is designed for protected health information (PHI), meeting all regional compliance requirements including HIPAA, GDPR, and CCPA.
+
+- **Scalability**. The DICOM service scales to support everything from small imaging archives in a clinic to large imaging archives with petabytes of data and thousands of new studies added daily.
+
+- **Automatic data replication**. The DICOM service uses Azure Locally Redundant Storage (LRS) within a region. If one copy of the data fails or becomes unavailable, your data can be accessed without interruption.
+
+- **Role-based access control (RBAC)**. RBAC enables you to manage how your organization's data is stored and accessed. You determine who has access to datasets based on roles you define for your environment.
+
+## Use imaging data to enable healthcare scenarios
+
+To effectively treat patients, research treatments, diagnose illnesses, or get an overview of a patient's health history, organizations need to integrate data across several sources. The DICOM service enables imaging data to persist securely in the Microsoft cloud and allows it to reside with electronic health records (EHR) and healthcare device (IoT) data in the same Azure subscription.
+
+FHIR supports integration of other types of data directly, or through references. With the DICOM service, organizations are able to store references to imaging data in FHIR and enable queries that cross clinical and imaging datasets. This capability enables organizations to deliver better healthcare. For example:
+
+- **Image backup**. Research institutions, clinics, imaging centers, veterinary clinics, pathology institutions, retailers, or organizations can use the DICOM service to back up their images with unlimited storage and access. There's no need to deidentify PHI data because the service is validated for PHI compliance.
+
+- **Image exchange and collaboration**. Share an image, a subset of images, or an entire image library instantly with or without related EHR data.
+
+- **Create cohorts for research**. To find the right patients for clinical trials, researchers need to query for patients that match data in both clinical and imaging systems. The service allows researchers to use natural language to query across systems. For example: "Give me all the medications prescribed with all the CT scan documents and their associated radiology reports for any patient older than 45 that has had a diagnosis of osteosarcoma over the last two years."
+
+- **Plan treatment based on similar patients**. When presented with a patient diagnosis, a physician can identify patient outcomes and treatment plans for past patients with a similar diagnosis even when these include imaging data.
+
+- **Get a longitudinal view of a patient during diagnosis**. Radiologists, especially teleradiologists, often don't have complete access to a patient's medical history and related imaging studies. Through FHIR integration, this data can be provided even to radiologists outside of the organization's local network.
+
+- **Close the feedback loop with teleradiologists**. Teleradiologists are often unable to find out about the accuracy and quality of their diagnoses because they don't have access to patient data after the initial read. With limited or no access to clinical results or outcomes, they miss opportunities to improve their skills. Through integration with FHIR, an organization can create a tool that provides direct feedback to teleradiologists, helping them make better recommendations in the future.
+
+## Manage medical imaging data securely and efficiently
+
+The DICOM service enables organizations to manage medical imaging data with several key capabilities:
+
+- **Data isolation**. The DICOM service assigns a unique database to each API instance, which means your organization's data isn't mixed with other organizations' data.
+
+- **Studies Service support**. The [Studies Service](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_10) allows users to store, retrieve, and search for DICOM studies, series, and instances. Microsoft includes the nonstandard delete transaction to enable a full resource lifecycle. (A search sketch follows this list.)
+
+- **Worklist Service support**. The DICOM service supports the Push and Pull SOPs of the [Worklist Service (UPS-RS)](https://dicom.nema.org/medical/dicom/current/output/html/part18.html#chapter_11). This service provides access to one Worklist containing Workitems, each of which represents a Unified Procedure Step (UPS).
+
+- **Extended query tags**. The DICOM service allows you to expand the list of tags specified in the [DICOM Conformance Statement](dicom-services-conformance-statement-v2.md) so you can index DICOM studies, series, and instances on standard or private DICOM tags.
+
+- **Change feed**. The DICOM service enables you to access ordered, guaranteed, immutable, read-only logs of all changes that occur in the DICOM service. Client applications can read these logs at any time independently, in parallel, and at their own pace. A sketch of reading the change feed follows this list.
+
+- **DICOMcast**. DICOMcast is an [open-source capability](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md) that can be self-hosted in Azure. DICOMcast enables a single source of truth for clinical data and imaging metadata. With DICOMcast, the DICOM service can inject DICOM metadata into a FHIR service or FHIR server as an imaging study resource.
+
+- **Export files**. The DICOM service allows you to [export DICOM data](export-dicom-files.md) in a file format, simplifying the process of using medical imaging in external workflows such as AI and machine learning.
+
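To illustrate the change feed capability called out in the list above, here's a minimal sketch that reads one page of changes. The `/v1/changefeed` path, the `offset` and `limit` query parameters, and the bearer-token auth are assumptions based on the DICOM service REST API; the service URL is a placeholder for your deployment.

```python
# Minimal sketch: read one page of DICOM change feed entries.
# Endpoint shape and auth flow are assumptions; the service URL is hypothetical.
import requests

dicom_service_url = "https://<workspace>-<dicom-service>.dicom.azurehealthcareapis.com"
token = "<bearer-token>"  # acquire via your preferred Azure AD auth flow

response = requests.get(
    f"{dicom_service_url}/v1/changefeed",
    params={"offset": 0, "limit": 10},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
for change in response.json():
    print(change)  # each entry describes an action taken on a DICOM instance
```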
+## Prerequisites to deploy the DICOM service
+
+Your organization needs an Azure subscription to configure and run the components required for the DICOM service. By default, the components are created inside of an Azure resource group to simplify management. Additionally, a Microsoft Entra ID account is required. For each instance of the DICOM service, Microsoft creates a combination of isolated and multitenant resources.
+
+## Next steps
+
+[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md)
+
+[Use DICOMweb standard APIs](dicomweb-standard-apis-with-dicom-services.md)
+
+> [!NOTE]
+> FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
# Release notes: Azure Health Data Services > Azure Health Data Services is Generally Available.
->
>For more information about Azure Health Data Services Service Level Agreements, see [SLA for Azure Health Data Services](https://azure.microsoft.com/support/legal/sla/health-data-services/v1_1/). Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI.
This article provides details about the features and enhancements made to Azure
## September 2023
-Documentation navigation improvements include a new hub page for Azure Health Data
+**Retirement announcement for Azure API for FHIR**
+
+Azure API for FHIR will be retired on September 30, 2026. [Azure Health Data Services FHIR service](/azure/healthcare-apis/healthcare-apis-overview) is the evolved version of Azure API for FHIR that enables customers to manage FHIR, DICOM, and MedTech services with integrations into other Azure services. Due to retirement of Azure API for FHIR, new deployments won't be allowed beginning April 1, 2025. For more information, see [migration strategies](/azure/healthcare-apis/fhir/migration-strategies).
+
+**Documentation navigation improvements**
+
+Documentation navigation improvements include a new hub page for Azure Health Data
## August 2023 ### FHIR service
-**Incremental Import feature is Generally Available (GA)**
+**Incremental Import feature is generally available (GA)**
The $import operation supports a new "Incremental Load" mode, which is optimized for periodically loading data into the FHIR service.
For details on Incremental Import, visit [Import Documentation](./../healthcare-
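As a minimal sketch, an incremental import request might look like the following. The `Prefer: respond-async` header and the parameter names (`inputFormat`, `mode`, `input`) are assumptions drawn from the import documentation; the service and storage URLs are hypothetical.

```python
# Minimal sketch: start an incremental $import job (parameter names assumed).
import requests

fhir_url = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"
token = "<bearer-token>"  # acquire via your preferred Azure AD auth flow

import_body = {
    "resourceType": "Parameters",
    "parameter": [
        {"name": "inputFormat", "valueString": "application/fhir+ndjson"},
        {"name": "mode", "valueString": "IncrementalLoad"},
        {
            "name": "input",
            "part": [
                {"name": "type", "valueString": "Patient"},
                {"name": "url", "valueUri": "https://<storage-account>.blob.core.windows.net/fhir/Patient.ndjson"},
            ],
        },
    ],
}

response = requests.post(
    f"{fhir_url}/$import",
    json=import_body,
    headers={
        "Authorization": f"Bearer {token}",
        "Prefer": "respond-async",
        "Content-Type": "application/fhir+json",
    },
)
# A 202 Accepted response carries a Content-Location header for polling job status.
print(response.status_code, response.headers.get("Content-Location"))
```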
**Reindex operation provides job status at resource level** Reindex operation supports determining the status of the reindex operation with help of API call `GET {{FHIR_URL}}/_operations/reindex/{{reindexJobId}}`.
-Details per resource, on the number of completed reindexed resources can be obtained with help of the new field, added in the response- "resourceReindexProgressByResource". For details, visit [3286](https://github.com/microsoft/fhir-server/pull/3286).
+The new field added to the response, "resourceReindexProgressByResource", provides per-resource details on the number of completed reindexed resources. For details, see [3286](https://github.com/microsoft/fhir-server/pull/3286).
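A minimal sketch of polling this endpoint follows; the endpoint path comes from the note above, while the token acquisition and response handling are assumptions.

```python
# Minimal sketch: poll the reindex job status endpoint.
import requests

fhir_url = "https://<your-fhir-service>.fhir.azurehealthcareapis.com"  # hypothetical
reindex_job_id = "<reindex-job-id>"  # identifier returned when the job was created
token = "<bearer-token>"  # acquire via your preferred Azure AD auth flow

response = requests.get(
    f"{fhir_url}/_operations/reindex/{reindex_job_id}",
    headers={"Authorization": f"Bearer {token}", "Accept": "application/fhir+json"},
)
response.raise_for_status()
# The response includes the new resourceReindexProgressByResource field
# with per-resource progress counts.
print(response.json())
```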
**FHIR Search Query optimization of complex queries**
For more information, visit [#3207](https://github.com/microsoft/fhir-server/pul
**Fixed transient issues associated with loading custom search parameters**
-This bug fix addresses the issue, where the FHIR service wouldn't load the latest SearchParameter status in event of failure.
+This bug fix addresses the issue where the FHIR service wouldn't load the latest SearchParameter status in the event of a failure.
For more information, visit [#3222](https://github.com/microsoft/fhir-server/pull/3222) ## March 2023
General availability (GA) of Azure Health Data services in Japan East region.
**Introduction of _till parameter and 50x throughput improvement** The `_till` parameter is introduced as an optional parameter that allows you to export resources modified up until the specified time.
-This feature improvement is applicable to System export, for more information on export, visit [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/https://docsupdatetracker.net/index.html)
+This feature improvement applies to system export. For more information on export, see the [FHIR specification](https://hl7.org/fhir/uv/bulkdata/export/index.html)
-Also visit [Export your FHIR data by invoking the $export command on the FHIR service | Microsoft Learn](./../healthcare-apis/fhir/export-data.md)
+Also see [Export your FHIR data by invoking the $export command on the FHIR service](./../healthcare-apis/fhir/export-data.md)
**Fixed issue where chained search with the :contains modifier returned no resources**
For more information, visit [#2971](https://github.com/microsoft/fhir-server/pul
**Fixed issue where HTTP status code 500 was encountered when the :not modifier was used with chained searches**
-This bug fix addresses the issue. Identified resources are returned per search criteria with :contains modifier . for more information on bug fix visit [#3041](https://github.com/microsoft/fhir-server/pull/3041)
+This bug fix addresses the issue. Identified resources are returned per the search criteria with the `:contains` modifier. For more information on the bug fix, see [#3041](https://github.com/microsoft/fhir-server/pull/3041)
**Versioning policy enabled at resource level still required If-match header for transaction requests.**
The issue is fixed and querying with :not operator should provide correct result
**Provided an error message for export failures resulting from a long time span**
-With failure in export job due to a long time span, a customer will see `RequestEntityTooLarge` HTTP status code. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2790).
+When an export job fails due to a long time span, the customer sees the `RequestEntityTooLarge` HTTP status code. For more information, see [#2790](https://github.com/microsoft/fhir-server/pull/2790).
**Fixed issue in a query sort, where functionality throws an error when chained search is performed with same field value.**
Performance improvements have cut the time to deploy new instances of the DICOM
**Reduced strictness when validating STOW requests**
-Some customers have run into issues storing DICOM files that don't perfectly conform to the specification. To enable those files to be stored in the DICOM service, we have reduced the strictness of the validation performed on STOW.
+Some customers have run into issues storing DICOM files that don't perfectly conform to the specification. To enable those files to be stored in the DICOM service, we reduced the strictness of the validation performed on STOW.
-The service accepts the following:
+The service accepts:
* DICOM UIDs that contain trailing whitespace
* IS, DS, SV, and UV VRs that aren't valid numbers
* Invalid private creator tags
All REST API requests to the DICOM service must include the API version in the U
**Index the first value for DICOM tags that incorrectly specify multiple values**
-Attributes that are defined to have a single value but have specified multiple values are leniently accepted. The first value for such attributes are indexed.
+Attributes that are defined to have a single value but have specified multiple values are leniently accepted. The first value for such attributes is indexed.
## April 2022
Fixed an issue with `SearchParameter` if it had a null value for Code, the resul
**Returned `BadRequestException` with valid message when input JSON body is invalid**
-For invalid JSON body requests, the FHIR server was returning a 500 error. Now we'll return a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239)
+For invalid JSON body requests, the FHIR server was returning a 500 error. Now the server returns a `BadRequestException` with a valid message instead of 500. [#2239](https://github.com/microsoft/fhir-server/pull/2239)
**Handled SQL Timeout issue**
In the capability statement, the software name distinguishes if you're using Azu
**Compress continuation tokens**
-In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve this, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250).
+In certain instances, the continuation token was too long to be able to follow the [next link](./../healthcare-apis/fhir/overview-of-search.md#pagination) in searches and would result in a 404. To resolve the issue, we compressed the continuation token to ensure it stays below the size limit [#2279](https://github.com/microsoft/fhir-server/pull/2279). Addresses issue [#2250](https://github.com/microsoft/fhir-server/issues/2250).
**FHIR service autoscale**
Learn about:
[Release notes: Azure API for FHIR](./azure-api-for-fhir/release-notes.md) FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.+
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-device.md
Last updated 06/06/2023
-+ zone_pivot_groups: programming-languages-set-twenty-six #- id: programming-languages-set-twenty-six
iot-edge How To Retrieve Iot Edge Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-retrieve-iot-edge-logs.md
Last updated 09/01/2022
+ # Retrieve logs from IoT Edge deployments
iot-edge Tutorial Nested Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-nested-iot-edge-for-linux-on-windows.md
Last updated 05/12/2023 + # Tutorial: Create a hierarchy of IoT Edge devices using IoT Edge for Linux on Windows
To learn more about using gateways to create hierarchical layers of IoT Edge dev
> [!div class="nextstepaction"] > [Connect Azure IoT Edge devices to create a hierarchy](how-to-connect-downstream-iot-edge-device.md)-
iot-hub-device-update Create Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md
Last updated 10/31/2022 + # Prepare an update to import into Device Update for IoT Hub
iot-hub Authenticate Authorize Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/authenticate-authorize-azure-ad.md
Last updated 09/01/2023-+ # Control access to IoT Hub by using Azure Active Directory
For more information, see the [Azure IoT extension for Azure CLI release page](h
- For more information on the advantages of using Azure AD in your application, see [Integrating with Azure Active Directory](../active-directory/develop/how-to-integrate.md). - For more information on requesting access tokens from Azure AD for users and service principals, see [Authentication scenarios for Azure AD](../active-directory/develop/authentication-vs-authorization.md).
-Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
+Use the Device Provisioning Service to [Provision multiple X.509 devices using enrollment groups](../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md).
iot-hub Iot Hub Devguide Messages C2d https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-c2d.md
Last updated 12/20/2022-+ # Send cloud-to-device messages from an IoT hub
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
tags: azure-resource-manager-+
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
description: This document explains full backup/restore and selective restore
tags: azure-key-vault+
key-vault Tls Offload Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/tls-offload-library.md
-
+ Title: Azure Managed HSM TLS Offload Library description: Azure Managed HSM TLS Offload Library + Last updated 02/25/2023
kinect-dk Retrieve Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/retrieve-images.md
By default, the API will only return a capture once it has received all of the r
```C
// Capture a depth frame
+k4a_capture_t capture = NULL;
switch (k4a_device_get_capture(device, &capture, TIMEOUT_IN_MS))
{
case K4A_WAIT_RESULT_SUCCEEDED:
kubernetes-fleet Update Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/update-orchestration.md
Last updated 05/10/2023
-+ # Orchestrate updates across multiple clusters by using Azure Kubernetes Fleet Manager (Preview)
load-balancer Configure Inbound NAT Rules Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-inbound-NAT-rules-vm-scale-set.md
Last updated 12/06/2022-+ # Configure inbound NAT Rules for Virtual Machine Scale Sets
load-balancer Load Balancer Monitor Metrics Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-monitor-metrics-cli.md
Last updated 06/27/2023-+ # Get Load Balancer metrics with Azure Monitor CLI
az monitor metrics list --resource <resource_id> --metric DipAvailability --filt
## Next steps * [Review the metric definitions to better understand how each is generated](./load-balancer-standard-diagnostics.md#multi-dimensional-metrics) * [Create Connection Monitors for your Load Balancer](./load-balancer-standard-diagnostics.md)
-* [Create your own workbooks](../azure-monitor/visualize/workbooks-overview.md), you can take inspiration by clicking on the edit button in your detailed metrics dashboard
+* [Create your own workbooks](../azure-monitor/visualize/workbooks-overview.md), you can take inspiration by clicking on the edit button in your detailed metrics dashboard
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distribution across zones. Review this document to understand these concepts and fundamental scenario design guidance.
-A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. To configure the zone-related properties for your load balancer, select the appropriate type of frontend needed.
+A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. The load balancer's availability zone selection is synonymous with its frontend IP's zone selection. For public load balancers, if the public IP in the load balancer's frontend is zone redundant, then the load balancer is also zone redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer is also designated to the same zone. To configure the zone-related properties for your load balancer, select the appropriate type of frontend needed.
## Zone redundant
-In a region with Availability Zones, a Standard Load Balancer can be zone-redundant with traffic served by a single IP address. A single frontend IP address survives zone failure. The frontend IP may be used to reach all (nonimpacted) backend pool members no matter the zone. One or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
+In a region with Availability Zones, a Standard Load Balancer can be zone-redundant with traffic served by a single IP address. A single frontend IP address survives zone failure. The frontend IP may be used to reach all (non-impacted) backend pool members no matter the zone. Up to one availability zone can fail and the data path survives as long as the remaining zones in the region remain healthy.
The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone failure.
Now that you understand the zone-related properties for Standard Load Balancer,
### Tolerance to zone failure -- A **zone redundant** frontend can serve a zonal resource in any zone with a single IP address. The IP can survive one or more zone failures as long as at least one zone remains healthy within the region.
+- A **zone redundant** frontend can serve a zonal resource in any zone with a single IP address. The IP can survive one zone failure as long as the remaining zones are healthy within the region.
- A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the deployment in your zone goes down, your load balancer won't survive this failure. Members in the backend pool of a load balancer are normally associated with a single zone such as with zonal virtual machines. A common design for production workloads would be to have multiple zonal resources. For example, placing virtual machines from zone 1, 2, and 3 in the backend of a load balancer with a zone-redundant frontend meets this design principle.
load-testing How To Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md
Previously updated : 05/09/2023 Last updated : 09/18/2023
Azure Load Testing automatically encrypts all data stored in your load testing r
The keys you provide are stored securely using [Azure Key Vault](../key-vault/general/overview.md). You can create a separate key for each Azure load testing resource you enable with customer-managed keys.
+When you use customer-managed encryption keys, you need to specify a user-assigned managed identity to retrieve the keys from Azure Key Vault.
+ Azure Load Testing uses the customer-managed key to encrypt the following data in the load testing resource: - Test script and configuration files
Azure Load Testing uses the customer-managed key to encrypt the following data i
- Customer-managed keys are only available for new Azure load testing resources. You should configure the key during resource creation. -- Azure Load Testing can't automatically rotate the customer-managed key to use the latest version of the encryption key. You should update the key URI in the resource after the key is rotated in the Azure Key Vault.- - Once customer-managed key encryption is enabled on a resource, it can't be disabled. -- If the customer-managed key is stored in an Azure Key Vault behind a firewall, public access should be enabled on the firewall to allow Azure Load Testing to access the key.
+- Azure Load Testing can't automatically rotate the customer-managed key to use the latest version of the encryption key. You should update the key URI in the resource after the key is rotated in the Azure Key Vault.
## Configure your Azure key vault
-To use customer-managed encryption keys with Azure Load Testing, you need to store the key in Azure Key Vault. You can use an existing or create a new key vault. The load testing resource and key vault may be in different regions or subscriptions in the same tenant.
+To use customer-managed encryption keys with Azure Load Testing, you need to store the key in Azure Key Vault. You can use an existing key vault or create a new one. The load testing resource and key vault may be in different regions or subscriptions in the same tenant.
+
+Make sure to configure the following key vault settings when you use customer-managed encryption keys.
+
+### Configure key vault networking settings
+
+If you restricted access to your Azure key vault with a firewall or a virtual network, you need to grant Azure Load Testing access to retrieve your customer-managed keys. Follow these steps to [grant access to trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#grant-access-to-trusted-azure-services).
+
+### Configure soft delete and purge protection
You have to set the *Soft Delete* and *Purge Protection* properties on your key vault to use customer-managed keys with Azure Load Testing. Soft delete is enabled by default when you create a new key vault and can't be disabled. You can enable purge protection at any time. Learn more about [soft delete and purge protection in Azure Key Vault](/azure/key-vault/general/soft-delete-overview).
az keyvault update --subscription <subscription-id> -g <resource-group> -n <key-
-## Add a key
+## Add a customer-managed key to Azure Key Vault
Next, add a key to the key vault. Azure Load Testing encryption supports RSA keys. For more information about supported key types in Azure Key Vault, see [About keys](/azure/key-vault/keys/about-keys).
az keyvault key create \
## Add an access policy to your key vault
-The user-assigned managed identity for accessing the customer-managed keys in Azure Key Vault must have appropriate permissions to access the key vault.
+When you use customer-managed encryption keys, you have to specify a user-assigned managed identity. The user-assigned managed identity for accessing the customer-managed keys in Azure Key Vault must have appropriate permissions to access the key vault.
1. In the [Azure portal](https://portal.azure.com), go to the Azure key vault instance that you plan to use to host your encryption keys.
The user-assigned managed identity for accessing the customer-managed keys in Az
1. Select **Save** on the key vault instance to save all changes.
-## Configure customer-managed keys for a new load testing resource
+## Use customer-managed keys with Azure Load Testing
+
+You can only configure customer-managed encryption keys when you create a new Azure load testing resource. When you specify the encryption key details, you also have to select a user-assigned managed identity to retrieve the key from Azure Key Vault.
To configure customer-managed keys for a new load testing resource, follow these steps:
az deployment group create --resource-group <resource-group-name> --template-fil
-
-## Change the managed identity
+## Change the managed identity for retrieving the encryption key
You can change the managed identity for customer-managed keys for an existing load testing resource at any time.
You can change the managed identity for customer-managed keys for an existing lo
:::image type="content" source="media/how-to-configure-customer-managed-keys/change-identity-existing-azure-load-testing-resource.png" alt-text="Screenshot that shows how to change the managed identity for customer managed keys on an existing Azure load testing resource.":::
-> [!NOTE]
-> The selected managed identity should have access granted on the Azure Key Vault.
+> [!IMPORTANT]
+> Make sure that the selected [managed identity has access to the Azure Key Vault](#add-an-access-policy-to-your-key-vault).
-## Change the key
+## Update the customer-managed encryption key
You can change the key that you're using for Azure Load Testing encryption at any time. To change the key with the Azure portal, follow these steps:
You can change the key that you're using for Azure Load Testing encryption at an
1. Save your changes.
-## Key rotation
+## Rotate encryption keys
+
+You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key:
-You can rotate a customer-managed key in Azure Key Vault according to your compliance policies. To rotate a key, in Azure Key Vault, update the key version or create a new key. You can then update the load testing resource to [encrypt data using the new key URI](#change-the-key).
+1. In Azure Key Vault, update the key version or create a new key.
+1. [Update the customer-managed encryption key](#update-the-customer-managed-encryption-key) for your load testing resource.
## Frequently asked questions
You can revoke a key by disabling the latest version of the key in Azure Key Vau
When you revoke the encryption key you may be able to run tests for about 10 minutes, after which the only available operation is resource deletion. It's recommended to rotate the key instead of revoking it to manage resource security and retain your data.
-## Next steps
+## Related content
- Learn how to [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md).-- Learn how to [Parameterize a load test](./how-to-parameterize-load-tests.md).
+- Learn how to [Parameterize a load test with secrets and environment variables](./how-to-parameterize-load-tests.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
The Azure Load Testing service supports two types of parameters:
- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An Azure Load Testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
+- An Azure load testing resource. If you need to create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).
## <a name="secrets"></a> Configure load tests with secrets
You'll also need to grant Azure Load Testing access to your Azure key vault to r
1. [Add the secret value to your key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault), if you haven't already done so.
+ > [!IMPORTANT]
+ > If you restricted access to your Azure key vault by a firewall or virtual networking, follow these steps to [grant access to trusted Azure services](/azure/key-vault/general/overview-vnet-service-endpoints#grant-access-to-trusted-azure-services).
+ 1. Retrieve the key vault **secret identifier** for your secret. You'll use this secret identifier to configure your load test. :::image type="content" source="media/how-to-parameterize-load-tests/key-vault-secret.png" alt-text="Screenshot that shows the details of a secret in an Azure key vault.":::
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To configure your load test to split input CSV files:
+## Troubleshooting
+
+### Test status is failed and test log has `File {my-filename} must exist and be readable`
+
+When the load test completes with the Failed status, you can [download the test logs](./how-to-troubleshoot-failing-test.md#download-apache-jmeter-worker-logs).
+
+If the test log contains the error message `File {my-filename} must exist and be readable`, the input CSV file couldn't be found while the JMeter script was running.
+
+Azure Load Testing stores all input files alongside the JMeter script. When you reference the input CSV file in the JMeter script, make sure *not* to include the file path, but only use the filename.
+
+The following code snippet shows an extract of a JMeter file that uses a `CSVDataSet` element to read the input file. Notice that the `filename` doesn't include the file path.
++ ## Next steps - Learn how to [Set up a high-scale load test](./how-to-high-scale-load.md).
load-testing How To Test Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md
description: Learn how to deploy Azure Load Testing in a virtual network (VNET injection) to test private application endpoints and hybrid deployments. + Last updated 05/12/2023
logic-apps Logic Apps Enterprise Integration Flatfile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-flatfile.md
Last updated 02/22/2023
Before you send XML content to a business partner in a business-to-business (B2B) scenario, you might want to encode that content first. If you receive encoded XML content, you'll need to decode that content first. When you're building a logic app workflow in Azure Logic Apps, you can encode and decode flat files by using the **Flat File** built-in connector actions and a flat file schema for encoding and decoding. You can use **Flat File** actions in multi-tenant Consumption logic app workflows and single-tenant Standard logic app workflows.
-> [!NOTE]
->
-> In Standard logic app workflows, the **Flat File** actions are currently in preview.
- While no **Flat File** triggers are available, you can use any trigger or action to feed the source XML content into your workflow. For example, you can use a built-in connector trigger, a managed or Azure-hosted connector trigger available for Azure Logic Apps, or even another app. This article shows how to add the **Flat File** encoding and decoding actions to your workflow.
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
You might also want to explore other quickstart guides for Azure Logic Apps:
Learn more about the Azure Logic Apps platform with these introductory videos:
-> [!VIDEO https://learn.microsoft.com/Shows/Azure-Friday/Connect-and-extend-your-mainframe-to-the-cloud-with-Logic-Apps/player]
+> [!VIDEO https://learn-video.azurefd.net/vod/player?show=azure-friday&ep=integrate-your-mainframes-and-midranges-with-azure-logic-apps]
## Next steps
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
Previously updated : 06/28/2023 Last updated : 10/05/2023 #Customer intent: As a full-stack machine learning pro, I want to use Apache Spark in Azure Machine Learning.
Users can define resources, including instance type and the Apache Spark runtime
### Points to consider
-serverless Spark compute works well for most user scenarios that require quick access to distributed computing through Apache Spark. However, to make an informed decision, users should consider the advantages and disadvantages of this approach.
+Serverless Spark compute works well for most user scenarios that require quick access to distributed computing resources through Apache Spark. However, to make an informed decision, users should consider the advantages and disadvantages of this approach.
Advantages: -- No dependencies on other Azure resources to be created for Apache Spark (Azure Synapse infrastructure operates under the hood).
+- No dependencies on creation of other Azure resources for Apache Spark (Azure Synapse infrastructure operates under the hood).
- No required subscription permissions to create Azure Synapse-related resources. - No need for SQL pool quotas.
To use network isolation with Azure Machine Learning and serverless Spark comput
At first launch, a serverless Spark compute (*cold start*) resource might need three to five minutes to start the Spark session itself. The automated serverless Spark compute provisioning, backed by Azure Synapse, causes this delay. After the serverless Spark compute is provisioned, and an Apache Spark session starts, subsequent code executions (*warm start*) won't experience this delay.
-The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following ten minutes, resources provisioned for the serverless Spark compute will be torn down.
+The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following 10 minutes, resources provisioned for the serverless Spark compute will be torn down.
After the serverless Spark compute resource tear-down happens, submission of the next job will require a *cold start*. The next visualization shows some session inactivity period and cluster teardown scenarios. :::image type="content" source="./media/apache-spark-azure-ml-concepts/spark-session-timeout-teardown.png" lightbox="./media/apache-spark-azure-ml-concepts/spark-session-timeout-teardown.png" alt-text="Expandable diagram that shows scenarios for Apache Spark session inactivity period and cluster teardown."::: > [!NOTE]
-> For a session-level conda package:
+> For a session-level Conda package:
> - the *Cold start* will need about ten to fifteen minutes.
-> - the *Warm start*, using same conda package, will need about one minute.
-> - the *Warm start*, with a different conda package, will also need about ten to fifteen minutes.
-> - If the package that you're installing is large or takes a long time to install, it might affect the Spark instance's startup time.
+> - the *Warm start*, using same Conda package, will need about one minute.
+> - the *Warm start*, with a different Conda package, will also need about ten to fifteen minutes.
+> - If the package that you install is large or needs a long installation time, it might impact the Spark instance startup time.
> - Altering the PySpark, Python, Scala/Java, .NET, or Spark version is not supported. ### Session-level Conda Packages
-A conda dependency YAML file can define a number of session-level conda packages in a session configuration. A session will time out if it takes longer than fifteen minutes to install the conda packages defined in the YAML file. It becomes important to first check whether a required package is already available in the Azure Synapse base image. To do this, users should follow the link to determine *packages available in the base image for* the Apache Spark version in use:
+A Conda dependency YAML file can define many session-level Conda packages in a session configuration. A session will time out if it needs more than 15 minutes to install the Conda packages defined in the YAML file. It becomes important to first check whether a required package is already available in the Azure Synapse base image. To do this, users should follow the link to determine *packages available in the base image for* the Apache Spark version in use:
+- [Azure Synapse Runtime for Apache Spark 3.3](../synapse-analytics/spark/apache-spark-33-runtime.md#python-libraries-normal-vms)
- [Azure Synapse Runtime for Apache Spark 3.2](../synapse-analytics/spark/apache-spark-32-runtime.md#python-libraries-normal-vms)
+### Improving session cold start time while using session-level Conda packages
+You can improve the Spark session *cold start* time by setting the `spark.hadoop.aml.enable_cache` configuration variable to `true`. A session *cold start* with session-level Conda packages typically takes 10 to 15 minutes when the session starts for the first time. However, subsequent session *cold starts* take three to five minutes. Define the configuration variable in the **Configure session** user interface, under **Configuration settings**.
++ ## Attached Synapse Spark pool A Spark pool created in an Azure Synapse workspace becomes available in the Azure Machine Learning workspace with the attached Synapse Spark pool. This option might be suitable for users who want to reuse an existing Synapse Spark pool.
The Spark session configuration for an attached Synapse Spark pool also offers a
## Defining Spark cluster size
-In Azure Machine Learning Spark jobs, you can define Spark cluster size with three parameter values:
+In Azure Machine Learning Spark jobs, you can define the Spark cluster size, with three parameter values:
- Number of executors - Executor cores - Executor memory
-You should consider an Azure Machine Learning Apache Spark executor as equivalent to Azure Spark worker nodes. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job then has access to a cluster with 24 cores and 168 GB of memory.
+You should consider an Azure Machine Learning Apache Spark executor as equivalent to Azure Spark worker nodes. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), the number of executor cores as 4, and executor memory as 28 GB. Your Spark job then has access to a cluster with 24 cores in total, and 168 GB of memory.
## Ensuring resource access for Spark jobs
-To access data and other resources, a Spark job can use either a user identity passthrough or a managed identity. This table summarizes the mechanisms that Spark jobs use to access resources.
+To access data and other resources, a Spark job can use either a managed identity or a user identity passthrough. This table summarizes the mechanisms that Spark jobs use to access resources.
|Spark pool|Supported identities|Default identity| | - | -- | - |
To access data and other resources, a Spark job can use either a user identity p
- [Interactive data wrangling with Apache Spark in Azure Machine Learning](./interactive-data-wrangling-with-apache-spark-azure-ml.md) - [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md) - [Code samples for Spark jobs using the Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)-- [Code samples for Spark jobs using the Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using the Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Last updated 5/01/2023 -+ # Create jobs and input data for batch endpoints
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
-+ Last updated 08/01/2023 show_latex: true
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
workspace_name = '<AZUREML_WORKSPACE_NAME>'
ml_client = MLClient(credential=DefaultAzureCredential(), subscription_id=subscription_id,
- resource_group_name=resource_group)
+ resource_group_name=resource_group,
+ workspace_name=workspace_name)
# list all the datastores
datastores = ml_client.datastores.list()
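# Optionally, iterate the datastores to confirm the client can reach the workspace (sketch)
for datastore in datastores:
    print(datastore.name)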
To update Azure Machine Learning to use the new key, use the following steps:
1. To update the workspace to use the new key, use the following command. Replace `myworkspace` with your Azure Machine Learning workspace name, and replace `myresourcegroup` with the name of the Azure resource group that contains the workspace. ```azurecli-interactive
- az ml workspace sync-keys -w myworkspace -g myresourcegroup
+ az ml workspace sync-keys -n myworkspace -g myresourcegroup
``` This command automatically syncs the new keys for the Azure storage account used by the workspace.
for more information on using datastores, see [Use datastores](how-to-datastore.
:::moniker-end :::moniker range="azureml-api-1" For more information on registering datastores, see the [`Datastore`](/python/api/azureml-core/azureml.core.datastore%28class%29) class reference.
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Last updated 06/19/2023-+ # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
machine-learning How To Create Workspace Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-workspace-template.md
-+
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
Last updated 07/06/2023-+ # Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute resource, to train my machine learning models.
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
Last updated 08/31/2022 -+ # Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster
machine-learning How To Manage Imported Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-imported-data-assets.md
Last updated 06/19/2023-+ # Manage imported data assets (preview)
This Azure CLI code sample shows the data assets with certain conditions, or wit
- [Access data in a job](how-to-read-write-data-v2.md#access-data-in-a-job) - [Working with tables in Azure Machine Learning](how-to-mltable.md)-- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
+- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Manage Inputs Outputs Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md
Last updated 08/27/2023 -+ # Manage inputs and outputs of component and pipeline
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
If you have an existing workspace and want to enable managed VNet for it, there'
* Compute cluster * Compute instance
+* Kubernetes clusters
* Managed online endpoints ## Next steps
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
Last updated 10/10/2022 -+ # Configure a secure online endpoint with TLS/SSL
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
Previously updated : 06/28/2023 Last updated : 10/05/2023
These prerequisites cover the submission of a Spark job from Azure Machine Learn
> [!NOTE]
->- To learn more about resource access while using Azure Machine Learning serverless Spark compute, and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
+>- To learn more about resource access while using Azure Machine Learning serverless Spark compute and attached Synapse Spark pool, see [Ensuring resource access for Spark jobs](apache-spark-environment-configuration.md#ensuring-resource-access-for-spark-jobs).
>- Azure Machine Learning provides a [shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota) pool from which all users can access compute quota to perform testing for a limited time. When you use the serverless Spark compute, Azure Machine Learning allows you to access this shared quota for a short time. ### Attach user assigned managed identity using CLI v2
These prerequisites cover the submission of a Spark job from Azure Machine Learn
} } ```
-1. Execute the following command in the PowerShell prompt or the command prompt, to attach the user-assigned managed identity to the workspace.
+1. To attach the user-assigned managed identity to the workspace, execute the following command in the PowerShell prompt or the command prompt.
```cmd armclient PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<AML_WORKSPACE_NAME>?api-version=2022-05-01 '@<JSON_FILE_NAME>.json' ```
These prerequisites cover the submission of a Spark job from Azure Machine Learn
> - Serverless Spark compute supports Azure Machine Learning managed virtual network. If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access. ## Submit a standalone Spark job
-A Python script developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md) can be used to submit a batch job to process a larger volume of data, after making necessary changes for Python script parameterization. A simple data wrangling batch job can be submitted as a standalone Spark job.
+After making necessary changes for Python script parameterization, a Python script developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md) can be used to submit a batch job to process a larger volume of data. A simple data wrangling batch job can be submitted as a standalone Spark job.
A Spark job requires a Python script that takes arguments, which you can develop by modifying the Python code from [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md). A sample Python script is shown here.
df.to_csv(args.wrangled_data, index_col="PassengerId")
``` > [!NOTE]
-> This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
+> This Python code sample uses `pyspark.pandas`. Only the Spark runtime version 3.2 or later supports this.
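As a minimal illustrative sketch, such a parameterized script might look like the following. The wrangling step itself is an assumption; the two arguments match the ones described next.

```python
# Illustrative sketch of a parameterized wrangling script.
import argparse
import pyspark.pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument("--titanic_data", help="path of the input Titanic CSV data")
parser.add_argument("--wrangled_data", help="folder path for the wrangled output")
args = parser.parse_args()

# Read the input, fill missing ages with the mean (illustrative), and write out.
df = pd.read_csv(args.titanic_data, index_col="PassengerId")
df["Age"] = df["Age"].fillna(df["Age"].mean())
df.to_csv(args.wrangled_data, index_col="PassengerId")
```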
The above script takes two arguments `--titanic_data` and `--wrangled_data`, which pass the path of input data and output folder respectively. # [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-To create a job, a standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter. Define these properties in the YAML file as follows:
+To create a job, a standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter. Define these properties in the YAML file:
### YAML properties in the Spark job specification
To create a job, a standalone Spark job can be defined as a YAML specification f
- `standard_e32s_v3` - `standard_e64s_v3` - `runtime_version` - defines the Spark runtime version. The following Spark runtime versions are currently supported:
- - `3.1`
- `3.2` - `3.3` > [!IMPORTANT] > Azure Synapse Runtime for Apache Spark: Announcements
- > * Azure Synapse Runtime for Apache Spark 3.1:
- > * End of Life (EOLA) Announcement Date: January 26, 2023
- > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
> * Azure Synapse Runtime for Apache Spark 3.2: > * EOLA Announcement Date: July 8, 2023 > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled. > * For continued support and optimal performance, we advise migrating to Apache Spark 3.3.
- An example is shown here:
+ This is an example:
```yaml resources: instance_type: standard_e8s_v3
To create a job, a standalone Spark job can be defined as a YAML specification f
path: azureml://datastores/workspaceblobstore/paths/data/wrangled/ mode: direct ```-- `identity` - this optional property defines the identity used to submit this job. It can have `user_identity` and `managed` values. If the YAML specification does not define an identity, the Spark job uses the default identity.
+- `identity` - this optional property defines the identity used to submit this job. It can have `user_identity` and `managed` values. If the YAML specification doesn't define an identity, the Spark job uses the default identity.
### Standalone Spark job
resources:
``` > [!NOTE]
-> To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown above, instead of the `resources` property.
+> To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown earlier, instead of the `resources` property.
-The YAML files shown above can be used in the `az ml job create` command, with the `--file` parameter, to create a standalone Spark job as shown:
+The YAML files shown earlier can be used in the `az ml job create` command, with the `--file` parameter, to create a standalone Spark job as shown:
```azurecli az ml job create --file <YAML_SPECIFICATION_FILE_NAME>.yaml --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `Standard_E32S_V3` - `Standard_E64S_V3` - `runtime_version` - a key that defines the Spark runtime version. The following Spark runtime versions are currently supported:
- - `3.1.0`
- `3.2.0` - `3.3.0` > [!IMPORTANT] > Azure Synapse Runtime for Apache Spark: Announcements
- > * Azure Synapse Runtime for Apache Spark 3.1:
- > * End of Life (EOLA) Announcement Date: January 26, 2023
- > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
> * Azure Synapse Runtime for Apache Spark 3.2: > * EOLA Announcement Date: July 8, 2023 > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
To submit a standalone Spark job using the Azure Machine Learning studio UI:
2. Select **Spark runtime version**. > [!IMPORTANT] > Azure Synapse Runtime for Apache Spark: Announcements
- > * Azure Synapse Runtime for Apache Spark 3.1:
- > * End of Life (EOLA) Announcement Date: January 26, 2023
- > * End of Support Date: July 31, 2023. After this date, the runtime will be disabled.
> * Azure Synapse Runtime for Apache Spark 3.2: > * EOLA Announcement Date: July 8, 2023 > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
A Spark component offers the flexibility to use the same component in multiple [
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-YAML syntax for a Spark component resembles the [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification) in most ways. These properties are defined differently in the Spark component YAML specification:
+The YAML syntax for a Spark component resembles the [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification) in most ways. These properties are defined differently in the Spark component YAML specification:
- `name` - the name of the Spark component. - `version` - the version of the Spark component. - `display_name` - the name of the Spark component to display in the UI and elsewhere.
jobs:
resources: instance_type: standard_e8s_v3
- runtime_version: "3.2"
+ runtime_version: "3.3"
``` > [!NOTE] > To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown above, instead of `resources` property.
-The above YAML specification file can be used in `az ml job create` command, using the `--file` parameter, to create a pipeline job as shown:
+The above YAML specification file can be used in the `az ml job create` command, using the `--file` parameter, to create a pipeline job as shown:
```azurecli az ml job create --file <YAML_SPECIFICATION_FILE_NAME>.yaml --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --workspace-name <AML_WORKSPACE_NAME>
You can execute the above command from:
# [Python SDK](#tab/sdk) [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-To create an Azure Machine Learning pipeline with a Spark component, you should have familiarity with creation of [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
+To create an Azure Machine Learning pipeline with a Spark component, you should know about the creation of [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
- `name` - the name of the Spark component. - `display_name` - the name of the Spark component displayed in the UI and elsewhere.-- `inputs` - this parameter is similar to `inputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Input` class is instantiated without the `path` parameter.-- `outputs` - this parameter is similar to `outputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Output` class is instantiated without the `path` parameter.
+- `inputs` - this parameter resembles the `inputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Input` class is instantiated without the `path` parameter.
+- `outputs` - this parameter resembles the `outputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Output` class is instantiated without the `path` parameter.
> [!NOTE]
-> A Spark component created using `azure.ai.ml.spark` function does not define `identity`, `compute` or `resources` parameters. The Azure Machine Learning pipeline defines these parameters.
+> A Spark component created using `azure.ai.ml.spark` function does not define the `identity`, `compute` or `resources` parameters. The Azure Machine Learning pipeline defines these parameters.
You can submit a pipeline job with a Spark component from: - an Azure Machine Learning Notebook connected to an Azure Machine Learning compute instance.
To troubleshoot a Spark job, you can access the logs generated for that job in A
> [!NOTE] > To troubleshoot Spark jobs created during interactive data wrangling in a notebook session, select **Job details** near the top right corner of the notebook UI. Spark jobs from an interactive notebook session are created under the experiment name **notebook-runs**.
+## Improving serverless Spark session start-up time while using session-level Conda packages
+A serverless Spark session [*cold start* with session-level Conda packages](./apache-spark-azure-ml-concepts.md#inactivity-periods-and-tear-down-mechanism) typically takes 10 to 15 minutes when the session starts for the first time. You can improve the *cold start* time by setting the configuration variable `spark.hadoop.aml.enable_cache` to `true`; subsequent session *cold starts* then typically take three to five minutes.
+
+# [CLI](#tab/cli)
+
+Use the `conf` property in the standalone Spark job, or the Spark component YAML specification file, to define the configuration variable `spark.hadoop.aml.enable_cache`.
+
+```yaml
+conf:
+ spark.hadoop.aml.enable_cache: True
+```
+
+# [Python SDK](#tab/sdk)
+
+Use the `conf` parameter of the `azure.ai.ml.spark` function to define the configuration variable `spark.hadoop.aml.enable_cache`.
+
+```python
+conf={"spark.hadoop.aml.enable_cache": "true"},
+```
+
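For context, a hedged sketch of where the `conf` parameter sits in a full job definition; every other value shown (code path, entry file, sizes, runtime version) is a placeholder rather than a value from this article:

```python
from azure.ai.ml import spark

# Placeholder standalone Spark job illustrating the conf parameter placement.
spark_job = spark(
    display_name="cached-session-job",
    code="./src",
    entry={"file": "main.py"},
    driver_cores=1,
    driver_memory="2g",
    executor_cores=2,
    executor_memory="2g",
    executor_instances=2,
    # instance_type and runtime_version are assumed values, not from the article.
    resources={"instance_type": "Standard_E8S_V3", "runtime_version": "3.2"},
    # Enables caching so that subsequent session cold starts are faster.
    conf={"spark.hadoop.aml.enable_cache": "true"},
)
```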
+# [Studio UI](#tab/ui)
+
+Define configuration variable `spark.hadoop.aml.enable_cache` in the **Configure session** user interface, under **Configuration settings**. Set the value of this variable to `true`.
++++

## Next steps

- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
-- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning Interactive Data Wrangling With Apache Spark Azure Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/interactive-data-wrangling-with-apache-spark-azure-ml.md
- Previously updated : 05/22/2023+ Last updated : 10/05/2023
In this article, you'll learn how to perform data wrangling using
- (Optional): A Service Principal. See [Create a Service Principal](../active-directory/develop/howto-create-service-principal-portal.md).
- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
-Before starting data wrangling tasks, you need familiarity with the process of storing secrets
+Before you start your data wrangling tasks, learn about the process of storing secrets
- Azure Blob storage account access key
- Shared Access Signature (SAS) token
The Notebooks UI also provides options for Spark session configuration, for the
1. Select **Configure session** at the top of the screen.
2. Select **Apache Spark version** from the dropdown menu.
- > [!IMPORTANT]
- >
- > End of life announcement (EOLA) for Azure Synapse Runtime for Apache Spark 3.1 was made on January 26, 2023. In accordance, Apache Spark 3.1 will not be supported after July 31, 2023. We recommend that you use Apache Spark 3.2.
+ > [!IMPORTANT]
+ > Azure Synapse Runtime for Apache Spark: Announcements
+ > * Azure Synapse Runtime for Apache Spark 3.2:
+ > * EOLA Announcement Date: July 8, 2023
+ > * End of Support Date: July 8, 2024. After this date, the runtime will be disabled.
+ > * For continued support and optimal performance, we advise that you migrate to Apache Spark 3.3.
3. Select **Instance type** from the dropdown menu. The following instance types are currently supported:
   - `Standard_E4s_v3`
   - `Standard_E8s_v3`
The Notebooks UI also provides options for Spark session configuration, for the
6. Select the number of **Executors** for the Spark session.
7. Select **Executor size** from the dropdown menu.
8. Select **Driver size** from the dropdown menu.
-9. To use a conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the conda file with the Spark session configuration you want.
+9. To use a Conda file to configure a Spark session, check the **Upload conda file** checkbox. Then, select **Browse**, and choose the Conda file with the Spark session configuration you want.
10. Add **Configuration settings** properties, input values in the **Property** and **Value** textboxes, and select **Add**.
11. Select **Apply**.
12. Select **Stop session** in the **Configure new session?** pop-up. The session configuration changes persist and become available to another notebook session that is started using the serverless Spark compute.
+> [!TIP]
+>
+> If you use session-level Conda packages, you can [improve](./how-to-submit-spark-jobs.md#improving-serverless-spark-session-start-up-time-while-using-session-level-conda-packages) the Spark session *cold start* time if you set the configuration variable `spark.hadoop.aml.enable_cache` to true.
+
### Import and wrangle data from Azure Data Lake Storage (ADLS) Gen 2

You can access and wrangle data stored in Azure Data Lake Storage (ADLS) Gen 2 storage accounts with `abfss://` data URIs following one of the two data access mechanisms:
You can access and wrangle data stored in Azure Data Lake Storage (ADLS) Gen 2 s
- Service principal-based data access

> [!TIP]
-> Data wrangling with a serverless Spark compute, and user identity passthrough to access data in Azure Data Lake Storage (ADLS) Gen 2 storage account requires the least number of configuration steps.
+> Data wrangling with a serverless Spark compute, and user identity passthrough to access data in an Azure Data Lake Storage (ADLS) Gen 2 storage account, requires the fewest configuration steps.
To start interactive data wrangling with the user identity passthrough:

- Verify that the user identity has **Contributor** and **Storage Blob Data Contributor** [role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account.
-- To use the serverless Spark compute, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**, from the **Compute** selection menu.
+- To use the serverless Spark compute, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu.
-- To use an attached Synapse Spark pool, select an attached Synapse Spark pool under **Synapse Spark pools**, from the **Compute** selection menu.
+- To use an attached Synapse Spark pool, select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
- This Titanic data wrangling code sample shows use of a data URI in format `abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>` with `pyspark.pandas` and `pyspark.ml.feature.Imputer`.
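The sample code itself is elided from this diff. A rough, hedged sketch of what such a sample might look like, reusing the placeholder URI format above and assuming a Titanic CSV with `PassengerId` and `Age` columns:

```python
import pyspark.pandas as pd
from pyspark.ml.feature import Imputer

# Read the Titanic CSV directly from ADLS Gen 2 (placeholders as in the URI format above).
df = pd.read_csv(
    "abfss://<FILE_SYSTEM_NAME>@<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net/<PATH_TO_DATA>",
    index_col="PassengerId",
)

# Impute missing Age values with the column mean, then inspect the result.
imputer = Imputer(inputCols=["Age"], outputCols=["Age"]).setStrategy("mean")
imputed = imputer.fit(df.to_spark()).transform(df.to_spark())
imputed.show()
```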
To start interactive data wrangling with the user identity passthrough:
   ```
> [!NOTE]
- > This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
+ > This Python code sample uses `pyspark.pandas`. Only the Spark runtime version 3.2 or later supports this.
To wrangle data by access through a service principal:
To wrangle data by access through a service principal:
2. [Create Azure Key Vault secrets](./apache-spark-environment-configuration.md#store-azure-storage-account-credentials-as-secrets-in-azure-key-vault) for the service principal tenant ID, client ID and client secret values.
3. Select **Serverless Spark compute** under **Azure Machine Learning Serverless Spark** from the **Compute** selection menu, or select an attached Synapse Spark pool under **Synapse Spark pools** from the **Compute** selection menu.
4. To set the service principal tenant ID, client ID and client secret in the configuration, execute the following code sample.
- - The `get_secret()` call in the code depends on name of the Azure Key Vault, and the names of the Azure Key Vault secrets created for the service principal tenant ID, client ID and client secret. The corresponding property name/values to set in the configuration are as follows:
+ - The `get_secret()` call in the code depends on name of the Azure Key Vault, and the names of the Azure Key Vault secrets created for the service principal tenant ID, client ID and client secret. Set these corresponding property name/values in the configuration:
     - Client ID property: `fs.azure.account.oauth2.client.id.<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net`
     - Client secret property: `fs.azure.account.oauth2.client.secret.<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net`
     - Tenant ID property: `fs.azure.account.oauth2.client.endpoint.<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net`
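The sample is elided here. A hedged sketch of what step 4 looks like, assuming a `get_secret()` helper that wraps Azure Key Vault (the article's sample defines its own) and the standard ABFS OAuth provider; all names are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# get_secret() is assumed to wrap Azure Key Vault lookups; names are placeholders.
client_id = get_secret("<KEY_VAULT_NAME>", "<CLIENT_ID_SECRET_NAME>")
client_secret = get_secret("<KEY_VAULT_NAME>", "<CLIENT_SECRET_NAME>")
tenant_id = get_secret("<KEY_VAULT_NAME>", "<TENANT_ID_SECRET_NAME>")

account = "<STORAGE_ACCOUNT_NAME>.dfs.core.windows.net"
spark.conf.set(f"fs.azure.account.auth.type.{account}", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{account}",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{account}", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{account}", client_secret)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{account}",
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
)
```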
To start interactive data wrangling:
> [!NOTE]
> The `get_secret()` calls in the above code snippets require the name of the Azure Key Vault, and the names of the secrets created for the Azure Blob storage account access key or SAS token.
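Those snippets are elided here; a hedged sketch of what they typically do, assuming the same `get_secret()` helper and using the standard `fs.azure.account.key` and `fs.azure.sas` WASB properties (all names are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Access-key authentication for the whole storage account.
access_key = get_secret("<KEY_VAULT_NAME>", "<ACCESS_KEY_SECRET_NAME>")
spark.conf.set(
    "fs.azure.account.key.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net", access_key
)

# Alternatively, SAS-token authentication scoped to one container.
sas_token = get_secret("<KEY_VAULT_NAME>", "<SAS_TOKEN_SECRET_NAME>")
spark.conf.set(
    "fs.azure.sas.<BLOB_CONTAINER_NAME>.<STORAGE_ACCOUNT_NAME>.blob.core.windows.net",
    sas_token,
)
```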
-2. Execute the data wrangling code in the same notebook. Format the data URI as `wasbs://<BLOB_CONTAINER_NAME>@<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<PATH_TO_DATA>` similar to this code snippet
+2. Execute the data wrangling code in the same notebook. Format the data URI as `wasbs://<BLOB_CONTAINER_NAME>@<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<PATH_TO_DATA>`, similar to what this code snippet shows:
   ```python
   import pyspark.pandas as pd
To start interactive data wrangling:
   ```
> [!NOTE]
- > This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
+ > This Python code sample uses `pyspark.pandas`. Only the Spark runtime version 3.2 or later supports this.
### Import and wrangle data from Azure Machine Learning Datastore
To access data from [Azure Machine Learning Datastore](how-to-datastore.md), def
   ```
> [!NOTE]
- > This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
+ > This Python code sample uses `pyspark.pandas`. Only the Spark runtime version 3.2 or later supports this.
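A short, hedged sketch of reading data through a datastore URI of the form `azureml://datastores/<datastore_name>/paths/<path>`; the datastore and file names below are placeholders:

```python
import pyspark.pandas as pd

# Read through an Azure Machine Learning datastore URI (placeholder names).
df = pd.read_csv(
    "azureml://datastores/workspaceblobstore/paths/data/titanic.csv",
    index_col="PassengerId",
)
print(df.head())
```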
-The Azure Machine Learning datastores can access data using Azure storage account credentials
+The Azure Machine Learning datastores can access data using Azure storage account credentials
- access key
-- SAS token
+- SAS token
- service principal
-or provide credential-less data access. Depending on the datastore type and the underlying Azure storage account type, adopt an appropriate authentication mechanism to ensure data access. This table summarizes the authentication mechanisms to access data in the Azure Machine Learning datastores:
+or provide credential-less data access. Depending on the datastore type and the underlying Azure storage account type, select an appropriate authentication mechanism to ensure data access. This table summarizes the authentication mechanisms to access data in the Azure Machine Learning datastores:
|Storage account type|Credential-less data access|Data access mechanism|Role assignments|
|---|---|---|---|
or provide credential-less data access. Depending on the datastore type and the
|Azure Data Lake Storage (ADLS) Gen 2|No|Service principal|Service principal should have [appropriate role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account|
|Azure Data Lake Storage (ADLS) Gen 2|Yes|User identity passthrough|User identity should have [appropriate role assignments](./apache-spark-environment-configuration.md#add-role-assignments-in-azure-storage-accounts) in the Azure Data Lake Storage (ADLS) Gen 2 storage account|
-<sup><b>*</b></sup> user identity passthrough works for credential-less datastores that point to Azure Blob storage accounts, only if [soft delete](../storage/blobs/soft-delete-blob-overview.md) isn't enabled.
+<sup><b>*</b></sup> User identity passthrough works for credential-less datastores that point to Azure Blob storage accounts, only if [soft delete](../storage/blobs/soft-delete-blob-overview.md) is not enabled.
## Accessing data on the default file share
df.to_csv(output_path, index_col="PassengerId")
   ```
> [!NOTE]
-> This Python code sample uses `pyspark.pandas`, which is only supported by Spark runtime version 3.2.
+> This Python code sample uses `pyspark.pandas`. Only the Spark runtime version 3.2 or later supports this.
## Next steps
machine-learning Application Sharing Policy Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/application-sharing-policy-not-supported.md
# Known issue - The ApplicationSharingPolicy property isn't supported for compute instances

+ Configuring the `applicationSharingPolicy` property for a compute instance has no effect, as that property isn't supported.

**Status:** Open

**Problem area:** Compute
machine-learning Azure Machine Learning Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/azure-machine-learning-known-issues.md
Select the **Title** to view more information about that specific known issue.
|Compute | [Provisioning error when creating a compute instance with A10 SKU](compute-a10-sku-not-supported.md) | August 14, 2023 |
|Compute | [Idleshutdown property in Bicep template causes error](compute-idleshutdown-bicep.md) | August 14, 2023 |
|Compute | [Slowness in compute instance terminal from a mounted path](compute-slowness-terminal-mounted-path.md)| August 14, 2023|
-|Compute| [Creating compute instance after a workspace move results in an Etag conflict error.](workspace-move-compute-instance-same-name.md)| August 14, 2023 |
+|Compute| [Creating compute instance after a workspace move results in an Etag conflict error.](workspace-move-compute-instance-same-name.md)| August 14, 2023 |
+|Inferencing| [Invalid certificate error during deployment with an AKS cluster](inferencing-invalid-certificate.md)| September 26, 2023 |
+|Inferencing| [Existing Kubernetes compute can't be updated with `az ml compute attach` command](inferencing-updating-kubernetes-compute-appears-to-succeed.md) | September 26, 2023 |
## Next steps
machine-learning Compute A10 Sku Not Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-a10-sku-not-supported.md
# Known issue - Provisioning error when creating a compute instance with A10 SKU

+ While trying to create a compute instance with an A10 SKU, you'll encounter a provisioning error.

:::image type="content" source="media/compute-a10-sku-not-supported/ci-a10.png" alt-text="A screenshot showing the provisioning error message.":::

**Status:** Open
machine-learning Compute Idleshutdown Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-idleshutdown-bicep.md
# Known issue - Idleshutdown property in Bicep template causes error

+ When creating an Azure Machine Learning compute instance through Bicep compiled using [MSBuild/NuGet](../../azure-resource-manager/bicep/msbuild-bicep-file.md), using the `idleTimeBeforeShutdown` property as described in the API reference [Microsoft.MachineLearningServices workspaces/computes API reference](/azure/templates/microsoft.machinelearningservices/workspaces/computes?pivots=deployment-language-bicep) results in an error.

**Status:** Open
machine-learning Compute Slowness Terminal Mounted Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/compute-slowness-terminal-mounted-path.md
# Known issue - Slowness in compute instance terminal from a mounted path
-While using the compute instance terminal inside a mounted path of a data folder, any commands executed from the terminal result in slowness. This issue is restricted to the terminal; running the commands from SDK using a notebook works as expected.
-- [!INCLUDE [dev v2](../includes/machine-learning-dev-v2.md)]
-<! Choose the correct include >
+
+While using the compute instance terminal inside a mounted path of a data folder, any commands executed from the terminal result in slowness. This issue is restricted to the terminal; running the commands from SDK using a notebook works as expected.
**Status:** Open
machine-learning Inferencing Invalid Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/inferencing-invalid-certificate.md
+
+ Title: Known issue - Invalid certificate error during deployment
+
+description: During machine learning deployments with an AKS cluster, you may receive an invalid certificate error.
+++++ Last updated : 08/04/2023+++
+# Known issue - Invalid certificate error during deployment with an AKS cluster
++
+During machine learning deployments using an AKS cluster, you may receive an invalid certificate error, such as `{"code":"BadRequest","statusCode":400,"message":"The request is invalid.","details":[{"code":"KubernetesUnaccessible","message":"Kubernetes error: AuthenticationException. Reason: InvalidCertificate"}]`.
+
+
+**Status:** Open
+
+**Problem area:** Inferencing
+
+## Symptoms
+
+Azure Machine Learning deployments with an AKS cluster fail with the error:
+
+`{"code":"BadRequest","statusCode":400,"message":"The request is invalid.","details":[{"code":"KubernetesUnaccessible","message":"Kubernetes error: AuthenticationException. Reason: InvalidCertificate"}],`
+and the following error is shown in the MMS logs:
+
+`K8sReadNamespacedServiceAsync failed with AuthenticationException: System.Security.Authentication.AuthenticationException: The remote certificate was rejected by the provided RemoteCertificateValidationCallback. at System.Net.Security.SslStream.SendAuthResetSignal(ProtocolToken message, ExceptionDispatchInfo exception) at System.Net.Security.SslStream.CompleteHandshake(SslAuthenticationOptions sslAuthenticationOptions) at System.Net.Security.SslStream.ForceAuthenticationAsync[TIOAdapter](TIOAdapter adapter, Boolean receiveFirstByte, Byte[] reAuthenticationData, Boolean isApm) at System.Net.Http.ConnectHelper.EstablishSslConnectionAsync(SslClientAuthenticationOptions sslOptions, HttpRequestMessage request, Boolean async, Stream stream, CancellationToken cancellationToken)`
+
+## Cause
+
+This error occurs because the certificate for AKS clusters created before January 2021 does not include the `Subject Key Identifier` value, which prevents the required `Authority Key Identifier` value from being generated.
+
+## Solutions and workarounds
+
+There are two options to resolve this issue:
+- Rotate the AKS certificate for the cluster. See [Certificate Rotation in Azure Kubernetes Service (AKS) - Azure Kubernetes Service](../../aks/certificate-rotation.md) for more information.
+- Wait for 5 hours for the certificate to be automatically updated, and the issue should be resolved.
+
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Inferencing Updating Kubernetes Compute Appears To Succeed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/inferencing-updating-kubernetes-compute-appears-to-succeed.md
+
+ Title: Known issue - Existing Kubernetes compute can't be updated
+
+description: Updating a Kubernetes attached compute instance using the az ml compute attach command appears to succeed but doesn't.
+++++ Last updated : 08/04/2023+++
+# Known issue - Existing Kubernetes compute can't be updated with `az ml compute attach` command
++
+Updating a Kubernetes attached compute instance using the `az ml compute attach` command appears to succeed but doesn't.
+
+**Status:** Open
+
+**Problem area:** Inferencing
+
+## Symptoms
+
+When running the command `az ml compute attach --resource-group <resource-group-name> --workspace-name <workspace-name> --type Kubernetes --name <existing-attached-compute-name> --resource-id "<cluster-resource-id>" --namespace <kubernetes-namespace>`, the CLI returns a success message indicating that the compute has been successfully updated. However, the compute isn't updated.
+
+## Cause
+
+The `az ml compute attach` command currently does not support updating existing Kubernetes compute.
++
+## Next steps
+
+- [About known issues](azure-machine-learning-known-issues.md)
machine-learning Jupyter R Kernel Not Starting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/jupyter-r-kernel-not-starting.md
# Known issue - Jupyter R Kernel doesn't start in new compute instance images
-When trying to launch an R kernel in JupyterLab or a notebook in a new compute instance, the kernel fails to start with `Error: .onLoad failed in loadNamespace()`
- [!INCLUDE [dev v2](../includes/machine-learning-dev-v2.md)]
+When trying to launch an R kernel in JupyterLab or a notebook in a new compute instance, the kernel fails to start with `Error: .onLoad failed in loadNamespace()`.
**Status:** Open

**Problem area:** Compute

## Symptoms

After creating a new compute instance, try to launch an R kernel in JupyterLab or a Jupyter notebook. The kernel fails to launch. You'll see the following messages in the Jupyter logs:
machine-learning Workspace Move Compute Instance Same Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/known-issues/workspace-move-compute-instance-same-name.md
# Known issue - Creating compute instance after a workspace move results in an Etag conflict error.
-After a moving a workspace to a different subscription or resource group, creating a compute instance with the same name as a previous compute instance will fail with an Etag conflict error.
+After moving a workspace to a different subscription or resource group, creating a compute instance with the same name as a previous compute instance fails with an Etag conflict error.
-<! Choose the correct include >
**Status:** Open
managed-grafana How To Create Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md
Previously updated : 11/17/2022 Last updated : 10/04/2023

# Create and manage Grafana API keys in Azure Managed Grafana (Deprecated)
Last updated 11/17/2022
> [!IMPORTANT]
> This document is deprecated as the API keys feature has been replaced by [service accounts](./how-to-service-accounts.md) in Grafana 9.1. To switch to using service accounts, in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys and select Migrate to service accounts now**. Select **Yes, migrate now**. Each existing API key is automatically migrated into a service account with a token. The service account is created with the same permissions as the API key, and current API keys continue to work as before.
+> [!CAUTION]
+> Each API key is treated by Azure Managed Grafana as a single active user. Generating new API keys will therefore increase your monthly Azure invoice. Pricing per active user can be found at [Pricing Details](https://azure.microsoft.com/pricing/details/managed-grafana/#pricing).
+
In this guide, learn how to generate and manage API keys, and start making API calls to the Grafana server. Grafana API keys enable you to create integrations between Azure Managed Grafana and other services.

## Prerequisites
managed-grafana How To Create Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-dashboard.md
description: Learn how to create and configure Azure Managed Grafana dashboards.
+ Last updated 03/07/2023
managed-instance-apache-cassandra Dba Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/dba-commands.md
Title: How to run DBA commands for Azure Managed Instance for Apache Cassandra
description: Learn how to run DBA commands + Last updated 03/02/2022
mysql Concepts Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-audit-logs.md
The following sections describe the output of MySQL audit logs based on the even
| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
| `ResourceType` | `Servers` |
| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
+| `Resource` | Name of the server in upper case |
| `Category` | `MySqlAuditLogs` |
| `OperationName` | `LogEvent` |
| `LogicalServerName_s` | Name of the server |
Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and A
| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
| `ResourceType` | `Servers` |
| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
+| `Resource` | Name of the server in upper case |
| `Category` | `MySqlAuditLogs` |
| `OperationName` | `LogEvent` |
| `LogicalServerName_s` | Name of the server |
Schema below applies to GENERAL, DML_SELECT, DML_NONSELECT, DML, DDL, DCL, and A
| `ResourceProvider` | Name of the resource provider. Always `MICROSOFT.DBFORMYSQL` |
| `ResourceType` | `Servers` |
| `ResourceId` | Resource URI |
-| `Resource` | Name of the server |
+| `Resource` | Name of the server in upper case |
| `Category` | `MySqlAuditLogs` |
| `OperationName` | `LogEvent` |
| `LogicalServerName_s` | Name of the server |
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto
AzureDiagnostics
- | where Resource == '<your server name>'
+ | where Resource == '<your server name>' //Server name must be in Upper case
| where Category == 'MySqlAuditLogs' and event_class_s == "general_log" | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s | order by TimeGenerated asc nulls last
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto
AzureDiagnostics
- | where Resource == '<your server name>'
+ | where Resource == '<your server name>' //Server name must be in Upper case
| where Category == 'MySqlAuditLogs' and event_class_s == "connection_log" | project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s | order by TimeGenerated asc nulls last
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto
AzureDiagnostics
- | where Resource == '<your server name>'
+ | where Resource == '<your server name>' //Server name must be in Upper case
| where Category == 'MySqlAuditLogs'
| project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| summarize count() by event_class_s, event_subclass_s, user_s, ip_s
Once your audit logs are piped to Azure Monitor Logs through Diagnostic Logs, yo
```kusto
AzureDiagnostics
- | where Resource == '<your server name>'
+ | where Resource == '<your server name>' //Server name must be in Upper case
| where Category == 'MySqlAuditLogs'
| project TimeGenerated, Resource, event_class_s, event_subclass_s, event_time_t, user_s , ip_s , sql_text_s
| summarize count() by Resource, bin(TimeGenerated, 5m)
mysql Tutorial Power Automate With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md
Azure database for MySQL connector supports triggers for when an item is created
|Trigger|Description|
|-|-|
-|(When an item is created)[./connectors/azuremysql/#when-an-item-is-created]|Triggers a flow when an item is created in MySQL (Available only for Power Automate.)|
-|(When an item is modified)[./connectors/azuremysql/#when-an-item-is-modified]|Triggers a flow when an item is modified in MySQL. (Available only for Power Automate.)|
+|[When an item is created](/connectors/azuremysql/#when-an-item-is-created)|Triggers a flow when an item is created in MySQL (Available only for Power Automate.)|
+|[When an item is modified](/connectors/azuremysql/#when-an-item-is-modified)|Triggers a flow when an item is modified in MySQL. (Available only for Power Automate.)|
## Next steps

[Azure Database for MySQL connector](/connectors/azuremysql/) reference
mysql Quickstart Mysql Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-mysql-github-actions.md
-+ Last updated 02/15/2023
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Last updated 09/29/2023-+ #CustomerIntent: As an Azure administrator, I need to enable NSG flow logs using a Bicep file so that I can log the traffic flowing through a network security group.
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Previously updated : 08/01/2023 Last updated : 10/05/2023+
+#CustomerIntent: As an Azure administrator, I want to use Traffic analytics to analyze Network Watcher flow logs so that I can view network activity, secure my networks, and optimize performance.
-# Traffic analytics
+# Traffic analytics overview
-Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher NSG flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
+Traffic analytics is a cloud-based solution that provides visibility into user and application activity in your cloud networks. Specifically, traffic analytics analyzes Azure Network Watcher flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
- Visualize network activity across your Azure subscriptions.
- Identify hot spots.
Traffic analytics is a cloud-based solution that provides visibility into user a
- Optimize your network deployment for performance and capacity by understanding traffic flow patterns across Azure regions and the internet.
- Pinpoint network misconfigurations that can lead to failed connections in your network.
-> [!NOTE]
-> Traffic analytics now supports collecting NSG flow logs data at a frequency of every 10 minutes.
-
## Why traffic analytics?

It's vital to monitor, manage, and know your own network for uncompromised security, compliance, and performance. Knowing your own environment is of paramount importance to protect and optimize it. You often need to know the current state of the network, including the following information:
It's vital to monitor, manage, and know your own network for uncompromised secur
Cloud networks are different from on-premises enterprise networks. In on-premises networks, routers and switches support NetFlow and other, equivalent protocols. You can use these devices to collect data about IP network traffic as it enters or exits a network interface. By analyzing traffic flow data, you can build an analysis of network traffic flow and volume.
-With Azure virtual networks, NSG flow logs collect data about the network. These logs provide information about ingress and egress IP traffic through a network security group that's associated with individual network interfaces, VMs, or subnets. Traffic analytics analyzes raw NSG flow logs and combines the log data with intelligence about security, topology, and geography. Traffic analytics then provides you with insights into traffic flow in your environment.
+With Azure virtual networks, flow logs collect data about the network. These logs provide information about ingress and egress IP traffic through a network security group or a virtual network. Traffic analytics analyzes raw flow logs and combines the log data with intelligence about security, topology, and geography. Traffic analytics then provides you with insights into traffic flow in your environment.
Traffic analytics provides the following information:
Traffic analytics provides the following information:
## Key components

-- **Network security group (NSG)**: A resource that contains a list of security rules that allow or deny network traffic to or from resources that are connected to an Azure virtual network. NSGs can be associated with subnets, network interfaces (NICs) that are attached to VMs (Resource Manager), or individual VMs (classic). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md).
-
-- **NSG flow logs**: Recorded information about ingress and egress IP traffic through a network security group. NSG flow logs are written in JSON format and include:
-
- - Outbound and inbound flows on a per rule basis.
- - The NIC that the flow applies to.
- - Information about the flow, such as the source and destination IP addresses, the source and destination ports, and the protocol.
- - The status of the traffic, such as allowed or denied.
-
- For more information about NSG flow logs, see [NSG flow logs](network-watcher-nsg-flow-logging-overview.md).
-
-- **Log Analytics**: A tool in the Azure portal that you use to work with Azure Monitor Logs data. Azure Monitor Logs is an Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data that's provided through the Azure API. After this data is collected, it's available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics use Azure Monitor Logs as a foundation. For more information, see [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md). Log Analytics provides a way to edit and run queries on logs. You can also use this tool to analyze query results. For more information, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
-
-- **Log Analytics workspace**: The environment that stores Azure Monitor log data that pertains to an Azure account. For more information about Log Analytics workspaces, see [Overview of Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md).
-
-- **Network Watcher**: A regional service that you can use to monitor and diagnose conditions at a network-scenario level in Azure. You can use Network Watcher to turn NSG flow logs on and off. For more information, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md)
+To use traffic analytics, you need the following components:
+
+- **Network Watcher**: A regional service that you can use to monitor and diagnose conditions at a network-scenario level in Azure. You can use Network Watcher to turn NSG flow logs on and off. For more information, see [What is Azure Network Watcher?](network-watcher-monitoring-overview.md)
+
+- **Log Analytics**: A tool in the Azure portal that you use to work with Azure Monitor Logs data. Azure Monitor Logs is an Azure service that collects monitoring data and stores the data in a central repository. This data can include events, performance data, or custom data that's provided through the Azure API. After this data is collected, it's available for alerting, analysis, and export. Monitoring applications such as network performance monitor and traffic analytics use Azure Monitor Logs as a foundation. For more information, see [Azure Monitor Logs](../azure-monitor/logs/log-query-overview.md?toc=/azure/network-watcher/toc.json). Log Analytics provides a way to edit and run queries on logs. You can also use this tool to analyze query results. For more information, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md?toc=/azure/network-watcher/toc.json).
+
+- **Log Analytics workspace**: The environment that stores Azure Monitor log data that pertains to an Azure account. For more information about Log Analytics workspaces, see [Overview of Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md?toc=/azure/network-watcher/toc.json).
+
+- Additionally, you need a network security group enabled for flow logging if you're using traffic analytics to analyze [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) or a virtual network enabled for flow logging if you're using traffic analytics to analyze [VNet flow logs (preview)](vnet-flow-logs-overview.md):
+
+ - **Network security group (NSG)**: A resource that contains a list of security rules that allow or deny network traffic to or from resources that are connected to an Azure virtual network. Network security groups can be associated with subnets, network interfaces (NICs) that are attached to VMs (Resource Manager), or individual VMs (classic). For more information, see [Network security group overview](../virtual-network/network-security-groups-overview.md?toc=/azure/network-watcher/toc.json).
+
+ - **NSG flow logs**: Recorded information about ingress and egress IP traffic through a network security group. NSG flow logs are written in JSON format and include:
+
+ - Outbound and inbound flows on a per rule basis.
+ - The NIC that the flow applies to.
+ - Information about the flow, such as the source and destination IP addresses, the source and destination ports, and the protocol.
+ - The status of the traffic, such as allowed or denied.
+
+ For more information about NSG flow logs, see [NSG flow logs overview](network-watcher-nsg-flow-logging-overview.md).
+
+ - **Virtual network (VNet)**: A resource that enables many types of Azure resources to securely communicate with each other, the internet, and on-premises networks. For more information, see [Virtual network overview](../virtual-network/virtual-networks-overview.md?toc=/azure/network-watcher/toc.json).
+
+ - **VNet flow logs (preview)**: Recorded information about ingress and egress IP traffic through a virtual network. VNet flow logs are written in JSON format and include:
+
+ - Outbound and inbound flows.
+ - Information about the flow, such as the source and destination IP addresses, the source and destination ports, and the protocol.
+ - The status of the traffic, such as allowed or denied.
+
+ For more information about VNet flow logs, see [VNet flow logs overview](vnet-flow-logs-overview.md).
## How traffic analytics works
-Traffic analytics examines raw NSG flow logs. It then reduces the log volume by aggregating flows that have a common source IP address, destination IP address, destination port, and protocol.
+Traffic analytics examines raw flow logs. It then reduces the log volume by aggregating flows that have a common source IP address, destination IP address, destination port, and protocol.
An example might involve Host 1 at IP address 10.10.10.10 and Host 2 at IP address 10.10.20.10. Suppose these two hosts communicate 100 times over a period of one hour. The raw flow log has 100 entries in this case. If these hosts use the HTTP protocol on port 80 for each of those 100 interactions, the reduced log has one entry. That entry states that Host 1 and Host 2 communicated 100 times over a period of one hour by using the HTTP protocol on port 80.
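A toy illustration of this reduction, using pandas purely for demonstration (this isn't the service's actual pipeline):

```python
import pandas as pd

# 100 raw flow records between the same hosts over the same port and protocol.
raw = pd.DataFrame({
    "src_ip": ["10.10.10.10"] * 100,
    "dst_ip": ["10.10.20.10"] * 100,
    "dst_port": [80] * 100,
    "protocol": ["HTTP"] * 100,
})

# The reduction collapses them into a single aggregated entry with a flow count.
reduced = (
    raw.groupby(["src_ip", "dst_ip", "dst_port", "protocol"])
    .size()
    .reset_index(name="flow_count")
)
print(reduced)  # one row: 10.10.10.10 -> 10.10.20.10, port 80, HTTP, flow_count 100
```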
Reduced logs are enhanced with geography, security, and topology information and
Traffic analytics requires:

- A Network Watcher enabled subscription. For more information, see [Enable or disable Azure Network Watcher](network-watcher-create.md)
-- NSG flow logs enabled for the network security groups you want to monitor. For more information, see [Create a flow log](nsg-flow-logging.md#create-a-flow-log).
-- An Azure Log Analytics workspace with read and write access. For more information, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md)
-
-One of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md) needs to be assigned to your account:
-
-|Deployment model | Role |
-| | |
-|Resource Manager | Owner |
-| | Contributor |
-| | Network Contributor |
-
-> [!IMPORTANT]
-> [Network contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#network-contributor) does not cover `Microsoft.OperationalInsights/workspaces/*` actions.
-
-If none of the preceding built-in roles are assigned to your account, assign a [custom role](../role-based-access-control/custom-roles.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) to your account. The custom role should support the following actions at the subscription level:
-
-- `Microsoft.Network/applicationGateways/read`
-- `Microsoft.Network/connections/read`
-- `Microsoft.Network/loadBalancers/read`
-- `Microsoft.Network/localNetworkGateways/read`
-- `Microsoft.Network/networkInterfaces/read`
-- `Microsoft.Network/networkSecurityGroups/read`
-- `Microsoft.Network/publicIPAddresses/read"`
-- `Microsoft.Network/routeTables/read`
-- `Microsoft.Network/virtualNetworkGateways/read`
-- `Microsoft.Network/virtualNetworks/read`
-- `Microsoft.Network/expressRouteCircuits/read`
-- `Microsoft.OperationalInsights/workspaces/*`
-
-For information about how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml#what-are-the-prerequisites-to-use-traffic-analytics-).
-
-## Frequently asked questions
-
-To get answers to frequently asked questions about traffic analytics, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
-
-## Next steps
+- NSG flow logs enabled for the network security groups you want to monitor or VNet flow logs enabled for the virtual network you want to monitor. For more information, see [Create a flow log](nsg-flow-logging.md#create-a-flow-log) or [Enable VNet flow logs](vnet-flow-logs-powershell.md#enable-vnet-flow-logs).
+- An Azure Log Analytics workspace with read and write access. For more information, see [Create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md?toc=/azure/network-watcher/toc.json).
+
+- One of the following [Azure built-in roles](../role-based-access-control/built-in-roles.md) needs to be assigned to your account:
+
+ | Deployment model | Role |
+ | - | - |
+ | Resource Manager | [Owner](../role-based-access-control/built-in-roles.md#owner) |
+ | | [Contributor](../role-based-access-control/built-in-roles.md#contributor) |
+ | | [Network contributor](../role-based-access-control/built-in-roles.md#network-contributor) <sup>1</sup> and [Monitoring contributor](../role-based-access-control/built-in-roles.md#monitoring-contributor) <sup>2</sup> |
+
+ If none of the preceding built-in roles are assigned to your account, assign a [custom role](../role-based-access-control/custom-roles.md?toc=/azure/network-watcher/toc.json) to your account. The custom role should support the following actions at the subscription level:
+
+ - `Microsoft.Network/applicationGateways/read`
+ - `Microsoft.Network/connections/read`
+ - `Microsoft.Network/loadBalancers/read`
+ - `Microsoft.Network/localNetworkGateways/read`
+ - `Microsoft.Network/networkInterfaces/read`
+ - `Microsoft.Network/networkSecurityGroups/read`
+ - `Microsoft.Network/publicIPAddresses/read`
+ - `Microsoft.Network/routeTables/read`
+ - `Microsoft.Network/virtualNetworkGateways/read`
+ - `Microsoft.Network/virtualNetworks/read`
+ - `Microsoft.Network/expressRouteCircuits/read`
+ - `Microsoft.OperationalInsights/workspaces/*`
+ - `Microsoft.Insights/dataCollectionRules/read` <sup>2</sup>
+ - `Microsoft.Insights/dataCollectionRules/write` <sup>2</sup>
+ - `Microsoft.Insights/dataCollectionRules/delete` <sup>2</sup>
+ - `Microsoft.Insights/dataCollectionEndpoints/read` <sup>2</sup>
+ - `Microsoft.Insights/dataCollectionEndpoints/write` <sup>2</sup>
+ - `Microsoft.Insights/dataCollectionEndpoints/delete` <sup>2</sup>
+
+ <sup>1</sup> Network contributor doesn't cover `Microsoft.OperationalInsights/workspaces/*` actions.
+
+ <sup>2</sup> Only required when using traffic analytics to analyze VNet flow logs (preview). For more information, see [Data collection rules in Azure Monitor](../azure-monitor/essentials/data-collection-rule-overview.md?toc=/azure/network-watcher/toc.json) and [Data collection endpoints in Azure Monitor](../azure-monitor/essentials/data-collection-endpoint-overview.md?toc=/azure/network-watcher/toc.json).
+
+ For information about how to check user access permissions, see [Traffic analytics FAQ](traffic-analytics-faq.yml#what-are-the-prerequisites-to-use-traffic-analytics-).
+
+## Traffic analytics (FAQ)
+
+To get answers to the most frequently asked questions about traffic analytics, see [Traffic analytics FAQ](traffic-analytics-faq.yml).
+
+## Related content
- To learn how to use traffic analytics, see [Usage scenarios](usage-scenarios-traffic-analytics.md).
- To understand the schema and processing details of traffic analytics, see [Schema and data aggregation in Traffic Analytics](traffic-analytics-schema.md).
openshift Howto Service Principal Credential Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-service-principal-credential-rotation.md
description: Discover how to rotate service principal credentials in Azure Red H
+ Last updated 05/31/2021 #Customer intent: As an operator, I need to rotate service principal credentials
operator-nexus Howto Configure Isolation Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-isolation-domain.md
Last updated 07/20/2023-+ # Configure L2 and L3 isolation-domains using managed network fabric services
Use this command to enable a management L3 isolation domain:
```azurecli
az nf l3domain update-admin-state --resource-group "ResourceGroupName" --resource-name "l3mgmt" --state Enable
```
operator-nexus Howto Kubernetes Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-connect.md
Last updated 08/17/2023 -+ # Connect to Azure Operator Nexus Kubernetes cluster
operator-nexus Howto Monitor Virtualized Network Functions Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-monitor-virtualized-network-functions-virtual-machines.md
Last updated 02/01/2023-+ # Monitoring virtual machines (for virtualized network function)
operator-nexus Howto Virtual Machine Placement Hints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-virtual-machine-placement-hints.md
Last updated 07/28/2023 #Required; mm/dd/yyyy format.-+ # Working with placement hints in Azure Operator Nexus virtual machine
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
Last updated 01/25/2023-+ # Prerequisites for deploying tenant workloads
playwright-testing Quickstart Run End To End Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/playwright-testing/quickstart-run-end-to-end-tests.md
We recommend that you use the `dotenv` module to manage your environment. With `
## Add Microsoft Playwright Testing configuration
-To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to add a service configuration file alongside your Playwright configuration file. The service configuration file references the environment variables to get the workspace endpoint and your access token. In the next step, you pass this service configuration file to the Playwright CLI.
+To run your Playwright tests in your Microsoft Playwright Testing workspace, you need to add a service configuration file alongside your Playwright configuration file. The service configuration file references the environment variables to get the workspace endpoint and your access token.
To add the service configuration to your project:
To add the service configuration to your project:
## Run your tests at scale with Microsoft Playwright Testing
-You've now prepared the configuration for running your Playwright tests in the cloud with Microsoft Playwright Testing. To run your Playwright tests, you use the Playwright CLI and specify the service configuration file and number of workers on the command-line.
+You've now prepared the configuration for running your Playwright tests in the cloud with Microsoft Playwright Testing. You can either use the Playwright CLI to run your tests, or use the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright).
-Perform the following steps to run your Playwright tests:
+Perform the following steps to run your Playwright tests with Microsoft Playwright Testing.
-1. Open a terminal window and enter the following command to run your Playwright tests on remote browsers in your workspace:
+# [Playwright CLI](#tab/playwrightcli)
- Depending on the size of your test suite, the tests run on up to 20 parallel workers.
+When you use the Playwright CLI to run your tests, specify the service configuration file on the command line to connect to remote browsers.
- ```bash
- npx playwright test --config=playwright.service.config.ts --workers=20
- ```
-
- You should see a similar output when the tests complete:
+Open a terminal window and enter the following command to run your Playwright tests on remote browsers in your workspace:
- ```output
- Running 6 tests using 6 workers
- 6 passed (18.2s)
-
- To open last HTML report run:
+```bash
+npx playwright test --config=playwright.service.config.ts --workers=20
+```
- npx playwright show-report
- ```
+Depending on the size of your test suite, this command runs your tests on up to 20 parallel workers.
+
+You should see a similar output when the tests complete:
+
+```output
+Running 6 tests using 6 workers
+ 6 passed (18.2s)
+
+To open last HTML report run:
+
+ npx playwright show-report
+```
+
+# [Visual Studio Code](#tab/vscode)
+
+To run your Playwright tests in Visual Studio Code with Microsoft Playwright Testing:
+
+1. Install the [Playwright Test Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-playwright.playwright).
+
+1. Open the **Test Explorer** view in the activity bar.
+
+ The test explorer automatically detects your Playwright tests and the service configuration.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-explorer.png" alt-text="Screenshot that shows the Test Explorer view in Visual Studio Code, which lists the Playwright tests." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-explorer.png":::
+
+1. Select a service profile to run your tests with Microsoft Playwright Testing.
+
+ Notice that the service run profiles are coming from the `playwright.service.config.ts` file you added previously.
+
+ Optionally, select **Select Default Profile**, and then select your default projects. By setting a default profile, you can automatically run your tests with the service, or run multiple Playwright projects simultaneously.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-choose-run-profile.png" alt-text="Screenshot that shows the menu to choose a run profile for your tests, highlighting the projects from the service configuration file." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-choose-run-profile.png":::
+
+ > [!TIP]
+ > You can still debug your test code when you run your tests on remote browsers.
+
+1. You can view the test results directly in Visual Studio Code.
+
+ :::image type="content" source="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-results.png" alt-text="Screenshot that shows the Playwright test results in Visual Studio Code." lightbox="./media/quickstart-run-end-to-end-tests/visual-studio-code-test-results.png":::
++
-1. Go to the [Playwright portal](https://aka.ms/mpt/portal) to view your test run.
+Go to the [Playwright portal](https://aka.ms/mpt/portal) to view the test run metadata and activity log for your workspace.
- The activity log lists for each test run the following details: the total test completion time, the number of parallel workers, and the number of test minutes.
- :::image type="content" source="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png" alt-text="Screenshot that shows the activity log for a workspace in the Playwright Testing portal." lightbox="./media/quickstart-run-end-to-end-tests/playwright-testing-activity-log.png":::
+For each test run, the activity log lists the following details: the total test completion time, the number of parallel workers, and the number of test minutes.
## Optimize parallel worker configuration
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
Please use below steps to enable storage autogrow for your flexible server and a
> [!IMPORTANT]
-> Storage autogrow always trigger disk scaling operations online. In specific scenarios where disk scaling process cannot be done online storage autogrow does not trigger and you need to manually increase the storage. These scenarios include reaching, starting at, or crossing the 4,096-GiB limit.
+> Storage autogrow initiates disk scaling operations online, but there are specific situations where online scaling is not possible. In such cases, like when approaching or surpassing the 4,096-GiB limit, storage autogrow does not activate, and you must manually increase the storage. A portal informational message is displayed when this happens.
### Next steps

- Learn about [business continuity](./concepts-business-continuity.md)
- Learn about [high availability](./concepts-high-availability.md)
-- Learn about [Compute and Storage](./concepts-compute-storage.md)
+- Learn about [Compute and Storage](./concepts-compute-storage.md)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Last updated 9/20/2023
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant to Flexible Server - PostgreSQL ## Release: September 2023
-* General availability of [Storage auto-grow](./concepts-compute-storage.md#storage-auto-grow-preview) for Azure Database for PostgreSQL – Flexible Server.
+
## Release: August 2023

* Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
postgresql Troubleshoot Canceling Statement Due To Conflict With Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/troubleshoot-canceling-statement-due-to-conflict-with-recovery.md
+
+ Title: Canceling statement due to conflict with recovery - Azure Database for PostgreSQL - Flexible Server
+description: Provides resolutions for a read replica error - Canceling statement due to conflict with recovery.
+++++ Last updated : 10/5/2023++
+# Canceling statement due to conflict with recovery
+This article helps you solve a problem that occurs when you execute queries against a read replica.
++
+## Symptoms
+1. When attempting to execute a query on a read replica, the query is unexpectedly terminated.
+2. Error messages such as "Canceling statement due to conflict with recovery" appear in the logs or in the query output.
+3. There might be a noticeable delay or lag in replication from the primary to the read replica.
+
+In the provided screenshot, on the left is the primary Azure Database for PostgreSQL - Flexible Server instance, and on the right is the read replica.
++
+* **Read replica console (right side of the screenshot above)**
+ * We can observe a lengthy `SELECT` statement in progress. A vital aspect to note about SQL is its consistent view of the data. When an SQL statement is executed, it essentially "freezes" its view of the data. Throughout its execution, the SQL statement always sees a consistent snapshot of the data, even if changes are occurring concurrently elsewhere.
+* **Primary console (left side of the screenshot above)**
+ * An `UPDATE` operation has been executed. While an `UPDATE` by itself doesn't necessarily disrupt the behavior of the read replica, the subsequent operation does. After the update, a `VACUUM` operation (in this case, manually triggered for demonstration purposes, but it's noteworthy that an autovacuum process could also be initiated automatically) is executed.
+ * The `VACUUM`'s role is to reclaim space by removing old versions of rows. Given that the read replica is running a lengthy `SELECT` statement, it's currently accessing some of these rows that `VACUUM` targets for removal.
+ * These changes initiated by the `VACUUM` operation, which include the removal of rows, get logged into the Write-Ahead Log (`WAL`). As Azure Database for PostgreSQL Flexible Server read replicas utilize native PostgreSQL physical replication, these changes are later sent to the read replica.
+ * Here lies the crux of the issue: the `VACUUM` operation, unaware of the ongoing `SELECT` statement on the read replica, removes rows that the read replica still needs. This scenario results in what's known as a replication conflict.
+
+The aftermath of this scenario is that the read replica experiences a replication conflict due to the rows removed by the `VACUUM` operation. By default, the read replica attempts to resolve this conflict for a duration of 30 seconds, since the default value of `max_standby_streaming_delay` is set to 30 seconds. After this duration, if the conflict remains unresolved, the query on the read replica is canceled.
+
+## Cause
+The root cause of this issue is that a read replica in PostgreSQL is a continuously recovering system: while the replica is catching up with the primary, it's essentially in a state of constant recovery.
+If a query on a read replica tries to read a row that is simultaneously being updated by the recovery process (because the primary has made a change), PostgreSQL might cancel the query to allow the recovery to proceed without interruption.
+
+## Resolution
+1. **Adjust `max_standby_streaming_delay`**: Increase the `max_standby_streaming_delay` parameter on the read replica. Increasing the value of the setting allows the replica more time to resolve conflicts before it decides to cancel a query. However, this might also increase replication lag, so it's a trade-off. This parameter is dynamic, meaning changes take effect without requiring a server restart (see the sketch after this list).
+2. **Monitor and Optimize Queries**: Review the types and frequencies of queries run against the read replica. Long-running or complex queries might be more susceptible to conflicts. Optimizing or scheduling them differently can help.
+3. **Off-Peak Query Execution**: Consider running heavy or long-running queries during off-peak hours to reduce the chances of a conflict.
+4. **Enable `hot_standby_feedback`**: Consider setting `hot_standby_feedback` to `on` on the read replica. When enabled, it informs the primary server about the queries currently being executed by the replica. This prevents the primary from removing rows that are still needed by the replica, reducing the likelihood of a replication conflict. This parameter is dynamic, meaning changes take effect without requiring a server restart.
+
+> [!CAUTION]
+> Enabling `hot_standby_feedback` can lead to the following potential issues:
+>* This setting can prevent some necessary cleanup operations on the primary, potentially leading to table bloat (increased disk space usage due to unvacuumed old row versions).
+>* Regular monitoring of the primary's disk space and table sizes is essential. Learn more about monitoring for Azure Database for PostgreSQL - Flexible Server [here](concepts-monitoring.md).
+>* Be prepared to manage potential table bloat manually if it becomes problematic. Consider enabling [autovacuum tuning](how-to-enable-intelligent-performance-portal.md) in Azure Database for PostgreSQL - Flexible Server to help mitigate this issue.
+
+5. **Adjust `max_standby_archive_delay`**: The `max_standby_archive_delay` server parameter specifies the maximum delay that the server will allow when reading archived `WAL` data. If the replica of Azure Database for PostgreSQL - Flexible Server ever switches from streaming mode to file-based log shipping (though rare), tweaking this value can help resolve the query cancellation issue.
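If you manage server parameters with the Azure CLI, the replica-side settings from options 1 and 4 can be changed along the following lines. This is a sketch with placeholder resource names; note that `max_standby_streaming_delay` is expressed in milliseconds.

```azurecli
# Placeholder names: substitute your own resource group and replica server name.
# Allow conflicting replica queries up to 5 minutes (300000 ms) before cancellation.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name my-replica-server \
  --name max_standby_streaming_delay \
  --value 300000

# Let the replica report its oldest query to the primary so that VACUUM retains
# rows the replica still needs. Watch the primary for table bloat afterwards.
az postgres flexible-server parameter set \
  --resource-group myResourceGroup \
  --server-name my-replica-server \
  --name hot_standby_feedback \
  --value on
```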
+++++
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
The migration tool comes with a simple, wizard-based experience on the Azure por
3. In the **Overview** tab of the Flexible Server, on the left menu, scroll down to **Migration** and select it.
- :::image type="content" source="./media/concepts-single-to-flexible/azure-portal-overview-page.png" alt-text="Screenshot of the Overview page." lightbox="./media/concepts-single-to-flexible/azure-portal-overview-page.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/flexible-overview.png" alt-text="Screenshot of the flexible Overview page." lightbox="./media/concepts-single-to-flexible/flexible-overview.png":::
4. Select the **Migrate from Single Server** button to start a migration from Single Server to Flexible Server. If this is the first time you're using the migration tool, an empty grid appears with a prompt to begin your first migration.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png" alt-text="Screenshot of the Migrate from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-migrate-single-server.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-grid.png" alt-text="Screenshot of the Migration tab in flexible." lightbox="./media/concepts-single-to-flexible/flexible-migration-grid.png":::
If you've already created migrations to your Flexible Server target, the grid contains information about migrations that were attempted to this target from the Single Server(s).
Alternatively, you can initiate the migration process from the Azure Database fo
2. Upon selecting the Single Server, you can observe a migration-related banner in the Overview tab. Select **Migrate now** to get started.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-initiate-migrate-from-single-server.png" alt-text="Screenshot to initiate migration from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-to-flex-initiate-migrate-from-single-server.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/single-banner.png" alt-text="Screenshot to initiate migration from Single Server tab." lightbox="./media/concepts-single-to-flexible/single-banner.png":::
3. You're taken to a page with two options. If you've already created a Flexible Server and want to use that as the target, choose **Select existing**, and select the corresponding Subscription, Resource group and Server name details. Once the selections are made, select **Go to Migration wizard** and skip to the instructions under the **Setup tab** section in this page.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-choose-between-flexible-server.png" alt-text="Screenshot to choose existing flexible server option." lightbox="./media/concepts-single-to-flexible/single-to-flex-choose-between-flexible-server.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/single-click-banner.png" alt-text="Screenshot to choose existing flexible server option." lightbox="./media/concepts-single-to-flexible/single-click-banner.png":::
4. Should you choose to Create a new Flexible Server, select **Create new** and select **Go to Create Wizard**. This action takes you through the Flexible Server creation process and deploys the Flexible Server.
- :::image type="content" source="./media/concepts-single-to-flexible/single-to-flex-create-new.png" alt-text="Screenshot to choose new flexible server option." lightbox="./media/concepts-single-to-flexible/single-to-flex-create-new.png":::
+ :::image type="content" source="./media/concepts-single-to-flexible/single-banner-create-new.png" alt-text="Screenshot to choose new flexible server option." lightbox="./media/concepts-single-to-flexible/single-banner-create-new.png":::
After deploying the Flexible Server, follow the steps 3 to 5 under [Configure the migration task](#configure-the-migration-task)
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
>[!NOTE]
> If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server database, please raise a support request since the Single to Flex migration tool will not handle these extensions.

**Migration name** is the unique identifier for each migration to this Flexible Server target. This field accepts only alphanumeric characters and doesn't accept any special characters except a hyphen (-). The name can't start with a hyphen and should be unique for a target server. No two migrations to the same Flexible Server target can have the same name.
+The second attribute on the **Source** tab is **Migration mode**. The migration tool offers offline mode of migration as default.
+
Select the **Next** button.

### Source tab

The **Source** tab prompts you to give details related to the Single Server that is the source of the databases. After you make the **Subscription** and **Resource Group** selections, the dropdown list for server names shows Single Servers under that resource group across regions. Select the source that you want to migrate databases from. Note that you can migrate databases from a Single Server to a target Flexible Server in the same region - cross region migrations are supported only in China regions.

After you choose the Single Server source, the **Location**, **PostgreSQL version**, and **Server admin login name** boxes are populated automatically. The server admin login name is the admin username used to create the Single Server. In the **Password** box, enter the password for that admin user. The migration tool performs the migration of single server databases as the admin user.
-Under **Choose databases to migrate**, there's a list of user databases inside the Single Server. You can select and migrate up to eight databases in a single migration attempt. If there are more than eight user databases, the migration process is repeated between the source and target servers for the next set of databases.
-
-The final property on the **Source** tab is **Migration mode**. The migration tool offers offline mode of migration as default.
-
After filling out all the fields, select the **Next** button.

### Target tab

The **Target** tab displays metadata for the Flexible Server target, like subscription name, resource group, server name, location, and PostgreSQL version. For **Server admin login name**, the tab displays the admin username used during the creation of the Flexible Server target. Enter the corresponding password for the admin user.
-For **Authorize DB overwrite**:
-
-- If you select **Yes**, you give this migration tool permission to overwrite existing data if the database is already present.
-- If you select **No**, the migration tool does not overwrite the data for the database that is already present.
+>[!NOTE]
+> The migration tool overwrites existing database(s) on the target Flexible server if a database of the same name is already present on the target.
Select the **Next** button.
+### Select Database(s) for Migration tab
+
+Under this tab, there is a list of user databases inside the Single Server. You can select and migrate up to eight databases in a single migration attempt. If there are more than eight user databases, the migration process is repeated between the source and target servers for the next set of databases.
+
### Review + create tab

>[!NOTE]
Select the **Next** button.
The **Review + create** tab summarizes all the details for creating the migration. Review the details and select the **Create** button to start the migration.

## Monitor the migration

After you select the **Create** button, a notification appears in a few seconds to say that the migration creation is successful. You are redirected automatically to the **Migration** page of Flexible Server. That page has a new entry for the recently created migration.

The grid that displays the migrations has these columns: **Name**, **Status**, **Source DB server**, **Resource group**, **Region**, **Databases**, and **Start time**. The migrations are in the descending order of migration start time with the most recent migration on top. You can use the refresh button to refresh the status of the migrations. You can also select the migration name in the grid to see the details of that migration.

As soon as the migration is created, the migration moves to the **InProgress** state and **PerformingPreRequisiteSteps** substate. It takes 2-3 minutes for the migration workflow to set up the migration infrastructure and network connections. After the **PerformingPreRequisiteSteps** substate is completed, the migration moves to the substate of **Migrating Data** when the Cloning/Copying of the databases takes place. The time for migration to complete depends on the size and shape of databases that you are migrating. If the data is mostly evenly distributed across all the tables, the migration is quick. Skewed table sizes take a relatively longer time.
-When you select each of the databases in migration, a fan-out pane appears. It has all the table count - copied, queued, copying and errors apart from the database migration status.
+When you select any of the databases in migration, a fan-out pane appears. It shows the database migration status along with table counts: copied, queued, copying, and errors.
The migration moves to the **Succeeded** state as soon as the **Migrating Data** state finishes successfully. If there's an issue at the **Migrating Data** state, the migration moves into a **Failed** state. Once the migration moves to the **Succeeded** state, migration of schema and data from your Single Server to your Flexible Server target is complete. You can use the refresh button on the page to confirm the same. +
+The Migration grid gives a top-level view of the completed migration.
+ After the migration has moved to the **Succeeded** state, follow the post-migration steps in [Migrate from Azure Database for PostgreSQL Single Server to Flexible Server](./concepts-single-to-flexible.md#post-migration).
postgresql How To Deploy Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-deploy-github-action.md
Last updated 04/28/2023
-
- - github-actions-azure
- - mode-other
+ # Quickstart: Use GitHub Actions to connect to Azure PostgreSQL
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
Last updated 03/30/2023-+ zone_pivot_groups: ase-pro-version
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Last updated 12/1/2022
-+ # Use SAP Deployment Automation Framework from Azure DevOps Services
sap Manual Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/manual-deployment.md
Last updated 11/17/2021
-+ # Get started with manual deployment
sap Sap Hana High Availability Scale Out Hsr Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md
description: SAP HANA scale-out with HANA system replication (HSR) and Pacemaker
tags: azure-resource-manager+ ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
vm-windows
Last updated 09/26/2023 - # High availability of SAP HANA scale-out system on Red Hat Enterprise Linux
sap Sap Hana High Availability Scale Out Hsr Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md
+ Last updated 07/11/2023
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
description: Learn how to deploy a SAP HANA scale-out system with standby node o
tags: azure-resource-manager+ ms.assetid: 5e514964-c907-4324-b659-16dd825f6f87
search Search Get Started Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rest.md
In a few seconds, you should see an HTTP 201 response in the session list. This
If you get a 207, at least one document failed to upload. If you get a 404, you have a syntax error in either the header or body of the request: verify you changed the endpoint to include `/docs/index`.

> [!TIP]
-> For selected data sources, you can choose the alternative *indexer* approach which simplifies and reduces the amount of code required for indexing. For more information, see [Indexer operations](/rest/api/searchservice/indexer-operations).
+> For selected data sources, you can [create an indexer](/rest/api/searchservice/create-indexer), which simplifies and reduces the amount of code required for indexing.
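As a cross-check of the endpoint shape described above, a document upload request can be issued from the command line along these lines. This is a sketch only; the service name, index name, API version, key, and document fields are placeholders to substitute with your own values.

```bash
# Placeholder service name, index, API version, and admin key.
curl -X POST "https://my-service.search.windows.net/indexes/hotels-quickstart/docs/index?api-version=2020-06-30" \
  -H "Content-Type: application/json" \
  -H "api-key: MY_ADMIN_KEY" \
  -d '{"value": [{"@search.action": "upload", "HotelId": "1", "HotelName": "Example Hotel"}]}'
```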
## 3 - Search an index
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
Index aliases are also supported in the latest preview SDKs for [Java](https://s
## Send requests to an index alias
-Once you've created your alias, you're ready to start using it. Aliases can be used for all [document operations](/rest/api/searchservice/document-operations) including querying, indexing, suggestions, and autocomplete.
+Once you've created your alias, you're ready to start using it. Aliases can be used for all document operations including querying, indexing, suggestions, and autocomplete.
In the query below, instead of sending the request to `hotel-samples-index`, you can instead send the request to `my-alias` and it will be routed accordingly.
POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
If you expect to make updates to a production index, specify an alias rather than the index name in your client-side application. Scenarios that require an index rebuild are outlined in [Drop and rebuild an index](search-howto-reindex.md). > [!NOTE]
-> You can only use an alias with [document operations](/rest/api/searchservice/document-operations) or to get and update an index definition. Aliases can't be used to delete an index, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
+> You can only use an alias with document operations or to get and update an index definition. Aliases can't be used to delete an index, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
> > An update to an alias may take up to 10 seconds to propagate through the system so you should wait at least 10 seconds before performing any operation in the index that has been mapped or recently was mapped to the alias.
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
The following screenshot highlights where **Add index** and **Import data** appe
The REST API provides defaults for field attribution. For example, all `Edm.String` fields are searchable by default. Attributes are shown in full below for illustrative purposes, but you can omit attribution in cases where the default values apply.
-Refer to the [Index operations (REST)](/rest/api/searchservice/index-operations) for help with formulating index requests.
- ```json POST https://[servicename].search.windows.net/indexes?api-version=[api-version] {
search Search Howto Concurrency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-concurrency.md
Try modifying other samples to exercise ETags or AccessCondition objects.
+ [Common HTTP request and response headers](/rest/api/searchservice/common-http-request-and-response-headers-used-in-azure-search) + [HTTP status codes](/rest/api/searchservice/http-status-codes)
-+ [Index operations (REST API)](/rest/api/searchservice/index-operations)
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-create-indexers.md
Previously updated : 09/20/2023 Last updated : 10/05/2023 # Create an indexer in Azure Cognitive Search
This article focuses on the basic steps of creating an indexer. Depending on the
+ Be under the [maximum limits](search-limits-quotas-capacity.md#indexer-limits) for your service tier. The Free tier allows three objects of each type and 1-3 minutes of indexer processing, or 3-10 if there's a skillset.
-## Indexer definition at a glance
+## Indexer patterns
When you create an indexer, the definition is one of two patterns: text-based indexing or AI enrichment with skills. The patterns are the same, except that skills-based indexing has more definitions.
-### Indexer for text-based indexing
+### Indexer example for text-based indexing
Text-based indexing for full text search is the primary use case for indexers, and for this workflow, an indexer looks like this example.
By default, an indexer runs immediately when you create it on the search service
You can also [specify a schedule](search-howto-schedule-indexers.md) or set an [encryption key](search-security-manage-encryption-keys.md) for supplemental encryption of the indexer definition.
-### Indexer for skills-based indexing and AI enrichment
+### Indexer example for skills-based indexing
Indexers also drive [AI enrichment](cognitive-search-concept-intro.md). All of the above properties and parameters apply, but the following extra properties are specific to AI enrichment: `"skillSetName"`, `"cache"`, `"outputFieldMappings"`.
POST /indexers?api-version=[api-version]
There are numerous tutorials and examples that demonstrate REST clients for creating objects. [Create a search index using REST and Postman](search-get-started-rest.md) can get you started.
-Refer to the [Indexer operations (REST)](/rest/api/searchservice/Indexer-operations) for help with formulating indexer requests.
- ### [**.NET SDK**](#tab/indexer-csharp) For Cognitive Search, the Azure SDKs implement generally available features. As such, you can use any of the SDKs to create indexer-related objects. All of them provide a **SearchIndexerClient** that has methods for creating indexers and related objects, including skillsets.
search Search Howto Run Reset Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-run-reset-indexers.md
Reset APIs are used to inform the scope of the next indexer run. For actual proc
After you reset and rerun indexer jobs, you can monitor status from the search service, or obtain detailed information through resource logging.
-+ [Indexer operations (REST)](/rest/api/searchservice/indexer-operations)
+ [Monitor search indexer status](search-howto-monitor-indexers.md) + [Collect and analyze log data](monitor-azure-cognitive-search.md) + [Schedule an indexer](search-howto-schedule-indexers.md)
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
Previously updated : 12/07/2022 Last updated : 10/05/2023 # Indexers in Azure Cognitive Search
You can use an indexer as the sole means for data ingestion, or in combination w
| Multiple indexers | Multiple data sources are typically paired with multiple indexers if you need to vary run time parameters, the schedule, or field mappings. </br></br>[Cross-region scale out of Cognitive Search](search-reliability.md#data-sync) is another scenario. You might have copies of the same search index in different regions. To synchronize search index content, you could have multiple indexers pulling from the same data source, where each indexer targets a different search index in each region.</br></br>[Parallel indexing](search-howto-large-index.md#parallel-indexing) of very large data sets also requires a multi-indexer strategy, where each indexer targets a subset of the data. | | Content transformation | Indexers drive [AI enrichment](cognitive-search-concept-intro.md). Content transforms are defined in a [skillset](cognitive-search-working-with-skillsets.md) that you attach to the indexer.|
+ You should plan on creating one indexer for every target index and data source combination. You can have multiple indexers writing into the same index, and you can reuse the same data source for multiple indexers. However, an indexer can only consume one data source at a time, and can only write to a single index. As the following graphic illustrates, one data source provides input to one indexer, which then populates a single index:
++
+ Although you can only use one indexer at a time, resources can be used in different combinations. The main takeaway of the next illustration is that a data source can be paired with more than one indexer, and multiple indexers can write to the same index.
++ <a name="supported-data-sources"></a> ## Supported data sources
On an initial run, when the index is empty, an indexer will read in all of the d
For each document it receives, an indexer implements or coordinates multiple steps, from document retrieval to a final search engine "handoff" for indexing. Optionally, an indexer also drives [skillset execution and outputs](cognitive-search-concept-intro.md), assuming a skillset is defined. <a name="document-cracking"></a>
Despite the similarity in names, output field mappings and field mappings build
The next image shows a sample indexer [debug session](cognitive-search-debug-session.md) representation of the indexer stages: document cracking, field mappings, skillset execution, and output field mappings. ## Basic workflow
search Search Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md
Management REST API calls are authenticated through Azure Active Directory (Azur
> [!NOTE] > The following steps are borrowed from the [Azure REST APIs with Postman](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/) blog post.
-1. Open a command shell for Azure CLI. If you don't have Azure CLI installed, you can open [Create a service principal](/cli/azure/create-an-azure-service-principal-azure-cli#1-create-a-service-principal), select **Try It**.
+1. Open a command shell for Azure CLI.
1. Sign in to your Azure subscription.
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Customers often ask how Azure Cognitive Search compares with other search-relate
|-|--| | Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. It's offered as a ready-to-use search experience, enabled and configured by administrators, with the ability to accept external content through connectors from Microsoft and other sources. If this describes your scenario, then Microsoft Search with Microsoft 365 is an attractive option to explore.<br/><br/>In contrast, Azure Cognitive Search executes queries over an index that you define, populated with data and documents you own, often from diverse sources. Azure Cognitive Search has crawler capabilities for some Azure data sources through [indexers](search-indexer-overview.md), but you can push any JSON document that conforms to your index schema into a single, consolidated searchable resource. You can also customize the indexing pipeline to include machine learning and lexical analyzers. Because Cognitive Search is built to be a plug-in component in larger solutions, you can integrate search into almost any app, on any platform.| |Bing | [Bing family of search APIs](/bing/search-apis/bing-web-search/bing-api-comparison) search the indexes on Bing.com for matching terms you submit. Indexes are built from HTML, XML, and other web content on public sites. Built on the same foundation, [Bing Custom Search](/bing/search-apis/bing-custom-search/overview) offers the same crawler technology for web content types, scoped to individual web sites.<br/><br/>In Cognitive Search, you define and populate the search index with your content. You control data ingestion. One way is to use [indexers](search-indexer-overview.md) to crawl Azure data sources. You can also push any index-conforming JSON document to your search service. |
-|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Azure Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/synonym-map-operations), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
+|Database search | Many database platforms include a built-in search experience. SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Azure Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure Cognitive Search for specialized search features.<br/><br/>Compared to DBMS search, Azure Cognitive Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/create-synonym-map), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure Cognitive Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure Cognitive Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. Furthermore, by externalizing search, you can easily adjust scale to match query volume.|
|Dedicated search solution | Assuming you've decided on dedicated search with full spectrum functionality, a final categorical comparison is between on premises solutions or a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search. <br/><br/>A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale. <br/><br/>Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. |

Among cloud providers, Azure Cognitive Search is strongest for full text search workloads over content stores and databases on Azure, for apps that rely primarily on search for both information retrieval and content navigation.
search Search What Is Data Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-data-import.md
Indexers connect an index to a data source (usually a table, view, or equivalent
### How to pull data into an Azure Cognitive Search index
-Indexer functionality is exposed in the [Azure portal](search-import-data-portal.md), the [REST API](/rest/api/searchservice/Indexer-operations), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
+Indexer functionality is exposed in the [Azure portal](search-import-data-portal.md), the [REST API](/rest/api/searchservice/create-indexer), and the [.NET SDK](/dotnet/api/azure.search.documents.indexes.searchindexerclient).
An advantage to using the portal is that Azure Cognitive Search can usually generate a default index schema by reading the metadata of the source dataset. You can modify the generated index until the index is processed, after which the only schema edits allowed are those that do not require reindexing. If the changes affect the schema itself, you would need to rebuild the index.
sentinel Deploy Data Connector Agent Container Other Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container-other-methods.md
description: This article shows you how to manually deploy the container that ho
+ Last updated 01/18/2023
sentinel Deploy Data Connector Agent Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md
description: This article shows you how to use the UI to deploy the container th
+ Last updated 01/18/2023
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-deploy-alternate.md
description: Learn how to deploy Microsoft Sentinel for SAP data connector envir
+ Last updated 06/19/2023
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
Previously updated : 11/09/2022 Last updated : 10/02/2023 # Deploy a Service Fabric managed cluster across availability zones
The update should be done via ARM template by setting the zonalUpdateMode proper
} }] ```
-2. Add a node to the node type from a cluster by following the procedure to [modify node type](how-to-managed-cluster-modify-node-type.md).
+2. Add a node to a cluster by using the [az sf cluster node add Azure CLI command](/cli/azure/sf/cluster/node?view=azure-cli-latest#az-sf-cluster-node-add()).
-3. Remove a node to the node type from a cluster by following the procedure to [modify node type](how-to-managed-cluster-modify-node-type.md).
+3. Remove a node from a cluster by using the [az sf cluster node remove Azure CLI command](/cli/azure/sf/cluster/node?view=azure-cli-latest#az-sf-cluster-node-remove()). Both commands are sketched below.
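As a rough sketch of those two steps (the resource group, cluster, and node type names are placeholders to substitute with your own):

```azurecli
# Add two nodes of node type nt1 to the cluster.
az sf cluster node add \
  --resource-group myResourceGroup \
  --cluster-name mycluster \
  --node-type nt1 \
  --number-of-nodes-to-add 2

# Remove one nt1 node from the cluster.
az sf cluster node remove \
  --resource-group myResourceGroup \
  --cluster-name mycluster \
  --node-type nt1 \
  --number-of-nodes-to-remove 1
```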
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png
service-health Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Service Health
description: Sample Azure Resource Graph queries for Azure Service Health showing use of resource types and tables to access Azure Service Health related resources and properties. Last updated 07/07/2022 -+ # Azure Resource Graph sample queries for Azure Service Health
site-recovery Hyper V Vmm Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-disaster-recovery.md
- Title: Set up Hyper-V disaster recovery to a secondary site with Azure Site Recovery
-description: Learn how to set up disaster recovery for Hyper-V VMs between your on-premises sites with Azure Site Recovery.
-- Previously updated : 11/14/2019---
-# Set up disaster recovery for Hyper-V VMs to a secondary on-premises site
-
-The [Azure Site Recovery](site-recovery-overview.md) service contributes to your disaster recovery strategy by managing and orchestrating replication, failover, and failback of on-premises machines, and Azure virtual machines (VMs).
-
-This article shows you how to set up disaster recovery to a secondary site, for on-premises Hyper-V VMs managed in System Center Virtual Machine Manager (VMM) clouds. In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Prepare on-premises VMM servers and Hyper-V hosts
-> * Create a Recovery Services vault for Site Recovery
-> * Set up the source and target replication environments.
-> * Set up network mapping
-> * Create a replication policy
-> * Enable replication for a VM
--
-## Prerequisites
-
-To complete this scenario:
-- Review the [scenario architecture and components](hyper-v-vmm-architecture.md).
-- Make sure that VMM servers and Hyper-V hosts comply with [support requirements](hyper-v-vmm-secondary-support-matrix.md).
-- Check that VMs you want to replicate comply with [replicated machine support](hyper-v-vmm-secondary-support-matrix.md#replicated-vm-support).
-- Prepare VMM servers for network mapping.
-
-### Prepare for network mapping
-
-[Network mapping](hyper-v-vmm-network-mapping.md) maps between on-premises VMM VM networks in source and target clouds. Mapping does the following:
-- Connects VMs to appropriate target VM networks after failover.
-- Optimally places replica VMs on target Hyper-V host servers.
-- If you don't configure network mapping, replica VMs won't be connected to a VM network after failover.
-
-Prepare VMM as follows:
-
-1. Make sure you have [VMM logical networks](/system-center/vmm/network-logical) on the source and target VMM servers.
- - The logical network on the source server should be associated with the source cloud in which Hyper-V hosts are located.
- - The logical network on the target server should be associated with the target cloud.
-1. Make sure you have [VM networks](/system-center/vmm/network-virtual) on the source and target VMM servers. VM networks should be linked to the logical network in each location.
-2. Connect VMs on the source Hyper-V hosts to the source VM network.
--
-## Create a Recovery Services vault
---
-## Choose a protection goal
-
-Select what you want to replicate and where you want to replicate to.
-
-1. Click **Site Recovery** > **Step 1: Prepare Infrastructure** > **Protection goal**.
-2. Select **To recovery site**, and select **Yes, with Hyper-V**.
-3. Select **Yes** to indicate you're using VMM to manage the Hyper-V hosts.
-4. Select **Yes** if you have a secondary VMM server. If you're deploying replication between clouds on a single VMM server, click **No**. Then click **OK**.
--
-## Set up the source environment
-
-Install the Azure Site Recovery Provider on VMM servers, and discover and register servers in the vault.
-
-1. Click **Prepare Infrastructure** > **Source**.
-2. In **Prepare source**, click **+ VMM** to add a VMM server.
-3. In **Add Server**, check that **System Center VMM server** appears in **Server type**.
-4. Download the Azure Site Recovery Provider installation file.
-5. Download the registration key. You need this when you install the Provider. The key is valid for five days after you generate it.
-
- ![Screenshot of the options to download Provider and registration key.](./media/hyper-v-vmm-disaster-recovery/source-settings.png)
-
-6. Install the Provider on each VMM server. You don't need to explicitly install anything on Hyper-V hosts.
-
-### Install the Azure Site Recovery Provider
-
-1. Run the Provider setup file on each VMM server. If VMM is deployed in a cluster, install for the first time as follows:
- - Install the Provider on an active node, and finish the installation to register the VMM server in the vault.
- - Then, install the Provider on the other nodes. Cluster nodes should all run the same version of the Provider.
-2. Setup runs a few prerequisite checks, and requests permission to stop the VMM service. The VMM service will be restarted automatically when setup finishes. If you install on a VMM cluster, you're prompted to stop the Cluster role.
-3. In **Microsoft Update**, you can opt in to specify that provider updates are installed in accordance with your Microsoft Update policy.
-4. In **Installation**, accept or modify the default installation location, and click **Install**.
-5. After installation is complete, click **Register** to register the server in the vault.
-
- ![Screenshot of the Provider Installation screen including the install location.](./media/hyper-v-vmm-disaster-recovery/provider-register.png)
-6. In **Vault name**, verify the name of the vault in which the server will be registered. Click **Next**.
-7. In **Proxy Connection**, specify how the Provider running on the VMM server connects to Azure.
- - You can specify that the provider should connect directly to the internet, or via a proxy. Specify proxy settings as needed.
- - If you use a proxy, a VMM RunAs account (DRAProxyAccount) is created automatically, using the specified proxy credentials. Configure the proxy server so that this account can authenticate successfully. The RunAs account settings can be modified in the VMM console > **Settings** > **Security** > **Run As Accounts**.
- - Restart the VMM service to update changes.
-8. In **Registration Key**, select the key that you downloaded and copied to the VMM server.
-9. The encryption setting isn't relevant in this scenario.
-10. In **Server name**, specify a friendly name to identify the VMM server in the vault. In a cluster, specify the VMM cluster role name.
-11. In **Synchronize cloud metadata**, select whether you want to synchronize metadata for all clouds on the VMM server. This action only needs to happen once on each server. If you don't want to synchronize all clouds, leave this setting unchecked. You can synchronize each cloud individually, in the cloud properties in the VMM console.
-12. Click **Next** to complete the process. After registration, Site Recovery retrieves metadata from the VMM server. The server is displayed in **Servers** > **VMM Servers** in the vault.
-13. After the server appears in the vault, in **Source** > **Prepare source** select the VMM server, and select the cloud in which the Hyper-V host is located. Then click **OK**.
--
-## Set up the target environment
-
-Select the target VMM server and cloud:
-
-1. Click **Prepare infrastructure** > **Target**, and select the target VMM server.
-2. VMM clouds that are synchronized with Site Recovery are displayed. Select the target cloud.
-
- ![Screenshot of the target VMM Server and Cloud selections.](./media/hyper-v-vmm-disaster-recovery/target-vmm.png)
--
-## Set up a replication policy
-
-Before you start, make sure that all hosts using the policy have the same operating system. If hosts are running different versions of Windows Server, you need multiple replication policies.
-
-1. To create a new replication policy, click **Prepare infrastructure** > **Replication Settings** > **+Create and associate**.
-2. In **Create and associate policy**, specify a policy name. The source and target type should be **Hyper-V**.
-3. In **Hyper-V host version**, select which operating system is running on the host.
-4. In **Authentication type** and **Authentication port**, specify how traffic is authenticated between the primary and recovery Hyper-V host servers.
- - Select **Certificate** unless you have a working Kerberos environment. Azure Site Recovery will automatically configure certificates for HTTPS authentication. You don't need to do anything manually.
- - By default, port 8083 and 8084 (for certificates) will be opened in the Windows Firewall on the Hyper-V host servers.
- - If you do select **Kerberos**, a Kerberos ticket will be used for mutual authentication of the host servers. Kerberos is only relevant for Hyper-V host servers running on Windows Server 2012 R2 or later.
-1. In **Copy frequency**, specify how often you want to replicate delta data after the initial replication (every 30 seconds, 5 or 15 minutes).
-2. In **Recovery point retention**, specify how long (in hours) the retention window will be for each recovery point. Replicated machines can be recovered to any point within a window.
-3. In **App-consistent snapshot frequency**, specify how frequently (1-12 hours) recovery points containing application-consistent snapshots are created. Hyper-V uses two types of snapshots:
- - **Standard snapshot**: Provides an incremental snapshot of the entire virtual machine.
 - **App-consistent snapshot**: Takes a point-in-time snapshot of the application data inside the VM. Volume Shadow Copy Service (VSS) ensures that apps are in a consistent state when the snapshot is taken. Enabling application-consistent snapshots affects app performance on source VMs. Set a value that's less than the number of additional recovery points you configure.
-4. In **Data transfer compression**, specify whether transferred replication data should be compressed.
-5. Select **Delete replica VM**, to specify that the replica virtual machine should be deleted if you disable protection for the source VM. If you enable this setting, when you disable protection for the source VM it's removed from the Site Recovery console, Site Recovery settings for the VMM are removed from the VMM console, and the replica is deleted.
-6. In **Initial replication method**, if you're replicating over the network, specify whether to start the initial replication or schedule it. To save network bandwidth, you might want to schedule it outside your busy hours. Then click **OK**.
-
- ![Screenshot of the replication policy options.](./media/hyper-v-vmm-disaster-recovery/replication-policy.png)
-
-7. The new policy is automatically associated with the VMM cloud. In **Replication policy**, click **OK**.
--
-## Enable replication
-
-1. Click **Replicate application** > **Source**.
-2. In **Source**, select the VMM server, and the cloud in which the Hyper-V hosts you want to replicate are located. Then click **OK**.
-3. In **Target**, verify the secondary VMM server and cloud.
-4. In **Virtual machines**, select the VMs you want to protect from the list.
--
-You can track progress of the **Enable Protection** action in **Jobs** > **Site Recovery jobs**. After the **Finalize Protection** job completes, the initial replication is complete, and the VM is ready for failover.
-
-## Next steps
-
-[Run a disaster recovery drill](hyper-v-vmm-test-failover.md)
site-recovery Hyper V Vmm Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-failover-failback.md
- Title: Set up failover/failback to a secondary Hyper-V site with Azure Site Recovery
-description: Learn how to fail over Hyper-V VMs to your secondary on-premises site and fail back to primary site, during disaster recovery with Azure Site Recovery.
--- Previously updated : 11/14/2019----
-# Fail over and fail back Hyper-V VMs replicated to your secondary on-premises site
-
-The [Azure Site Recovery](site-recovery-overview.md) service manages and orchestrates replication, failover, and failback of on-premises machines, and Azure virtual machines (VMs).
-
-This article describes how to fail over a Hyper-V VM managed in a System Center Virtual Machine Manager (VMM) cloud, to a secondary VMM site. After you've failed over, you fail back to your on-premises site when it's available. In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Fail over a Hyper-V VM from a primary VMM cloud to a secondary VMM cloud
-> * Reprotect from the secondary site to the primary, and fail back
-> * Optionally start replicating from primary to secondary again
-
-## Failover and failback
-
-Failover and failback has three stages:
-
-1. **Fail over to secondary site**: Fail machines over from the primary site to the secondary.
-2. **Fail back from the secondary site**: Replicate VMs from secondary to primary, and run a planned failover to fail back.
-3. After the planned failover, optionally start replicating from the primary site to the secondary again.
--
-## Prerequisites
-
-- Make sure you've completed a [disaster recovery drill](hyper-v-vmm-test-failover.md) to check that everything's working as expected.
-- To complete failback, make sure that the primary and secondary VMM servers are connected to Site Recovery.
-
-
-## Run a failover from primary to secondary
-
-You can run a regular or planned failover for Hyper-V VMs.
-- Use a regular failover for unexpected outages. When you run this failover, Site Recovery creates a VM in the secondary site, and powers it up. Data loss can occur depending on pending data that hasn't been synchronized.
-- A planned failover can be used for maintenance, or during expected outage. This option provides zero data loss. When a planned failover is triggered, the source VMs are shut down. Unsynchronized data is synchronized, and the failover is triggered.
-
- This procedure describes how to run a regular failover.
--
-1. In **Settings** > **Replicated items** click the VM > **Failover**.
-1. Select **Shut down machine before beginning failover** if you want Site Recovery to attempt to do a shutdown of source VMs before triggering the failover. Site Recovery will also try to synchronize on-premises data that hasn't yet been sent to the secondary site, before triggering the failover. Note that failover continues even if shutdown fails. You can follow the failover progress on the **Jobs** page.
-2. You should now be able to see the VM in the secondary VMM cloud.
-3. After you verify the VM, **Commit** the failover. This deletes all the available recovery points.
-
-> [!WARNING]
-> **Don't cancel a failover in progress**: Before failover is started, VM replication is stopped. If you cancel a failover in progress, failover stops, but the VM won't replicate again.
--
-## Reverse replicate and failover
-
-Start replicating from the secondary site to the primary, and fail back to the primary site. After VMs are running in the primary site again, you can replicate them to the secondary site.
-
-
-1. Click the VM > click on **Reverse Replicate**.
-2. Once the job is complete, click the VM. In **Failover**, verify the failover direction (from secondary VMM cloud), and select the source and target locations.
-4. Initiate the failover. You can follow the failover progress on the **Jobs** tab.
-5. In the primary VMM cloud, check that the VM is available.
-6. If you want to start replicating the primary VM back to the secondary site again, click on **Reverse Replicate**.
-
-## Next steps
-[Review the step](hyper-v-vmm-disaster-recovery.md) for replicating Hyper-V VMs to a secondary site.
site-recovery Hyper V Vmm Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-networking.md
- Title: Set up IP addressing after failover to a secondary site with Azure Site Recovery
-description: Describes how to set up IP addressing for connecting to VMs in a secondary on-premises site after disaster recovery and failover with Azure Site Recovery.
-- Previously updated : 11/12/2019---
-# Set up IP addressing to connect to a secondary on-premises site after failover
-
-After you fail over Hyper-V VMs in System Center Virtual Machine Manager (VMM) clouds to a secondary site, you need to be able to connect to the replica VMs. This article helps you do that.
-
-## Connection options
-
-After failover, there are a couple of ways to handle IP addressing for replica VMs:
-
-- **Retain the same IP address after failover**: In this scenario, the replicated VM has the same IP address as the primary VM. This simplifies network related issues after failover, but requires some infrastructure work.
-- **Use a different IP address after failover**: In this scenario the VM gets a new IP address after failover.
-
-
-## Retain the IP address
-
-If you want to retain the IP addresses from the primary site, after failover to the secondary site, you can:
-
-- Deploy a stretched subnet between the primary and the secondary sites.
-- Perform a full subnet failover from the primary to secondary site. You need to update routes to indicate the new location of the IP addresses.
-
-### Deploy a stretched subnet
-
-In a stretched configuration, the subnet is available simultaneously in both the primary and secondary sites. In a stretched subnet, when you move a machine and its IP (Layer 3) address configuration to the secondary site, the network automatically routes the traffic to the new location.
-- From a Layer 2 (data link layer) perspective, you need networking equipment that can manage a stretched VLAN.
-- By stretching the VLAN, the potential fault domain extends to both sites. This becomes a single point of failure. While unlikely, in such a scenario you might not be able to isolate an incident such as a broadcast storm.
-
-### Fail over a subnet
-
-You can fail over the entire subnet to obtain the benefits of the stretched subnet, without actually stretching it. In this solution, a subnet is available in the source or target site, but not in both simultaneously.
-- To maintain the IP address space in the event of a failover, you can programmatically arrange for the router infrastructure to move subnets from one site to another.
-- When a failover occurs, subnets move with their associated VMs.
-- The main drawback of this approach is that in the event of a failure, you have to move the entire subnet.
-
-#### Example
-
-Here's an example of complete subnet failover.
-- Before failover, the primary site has applications running in subnet 192.168.1.0/24.
-- During failover, all of the VMs in this subnet are failed over to the secondary site, and retain their IP addresses.
-- Routes between all sites need to be modified to reflect the fact that all the VMs in subnet 192.168.1.0/24 have now moved to the secondary site.
-
-The following graphics illustrate the subnets before and after failover.
--
-**Before failover**
-
-![Diagram showing the subnets before failover.](./media/hyper-v-vmm-networking/network-design2.png)
-
-**After failover**
-
-![Diagram showing the subnets after failover.](./media/hyper-v-vmm-networking/network-design3.png)
-
-After failover, Site Recovery allocates an IP address for each network interface on the VM. The address is allocated from the static IP address pool in the relevant network, for each VM instance.
-- If the IP address pool in the secondary site is the same as that on the source site, Site Recovery allocates the same IP address (of the source VM) to the replica VM. The IP address is reserved in VMM, but it isn't set as the failover IP address on the Hyper-V host. The failover IP address on a Hyper-V host is set just before the failover.
-- If the same IP address isn't available, Site Recovery allocates another available IP address from the pool.
-- If VMs use DHCP, Site Recovery doesn't manage the IP addresses. You need to check that the DHCP server on the secondary site can allocate addresses from the same range as the source site.
-
-### Validate the IP address
-
-After you enable protection for a VM, you can use following sample script to verify the address assigned to the VM. This IP address is set as the failover IP address, and assigned to the VM at the time of failover:
-
-```powershell
-$vm = Get-SCVirtualMachine -Name <VM_NAME>
-$na = $vm[0].VirtualNetworkAdapters
-$ip = Get-SCIPAddress -GrantToObjectID $na[0].id
-$ip.address
-```
-
-## Use a different IP address
-
-In this scenario, the IP addresses of VMs that fail over are changed. The drawback of this solution is the maintenance required. DNS and cache entries might need to be updated. This can result in downtime, which can be mitigated as follows:
-- Use low TTL values for intranet applications.
-- Use the following script in a Site Recovery recovery plan, for a timely update of the DNS server. You don't need the script if you use dynamic DNS registration.
-
- ```powershell
- param(
 [string]$Zone,
- [string]$name,
- [string]$IP
- )
- $Record = Get-DnsServerResourceRecord -ZoneName $zone -Name $name
- $newrecord = $record.clone()
- $newrecord.RecordData[0].IPv4Address = $IP
- Set-DnsServerResourceRecord -zonename $zone -OldInputObject $record -NewInputObject $Newrecord
- ```
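-
- For example, if the script is saved in the VMM library as Update-DnsRecord.ps1 (a hypothetical name), it can be invoked like this:
-
- ```powershell
- # Example values: point the app01 record in the contoso.com zone at the post-failover IP address.
- .\Update-DnsRecord.ps1 -Zone "contoso.com" -name "app01" -IP "172.16.1.10"
- ```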
-
-### Example
-
-In this example, we have different IP addresses across the primary and secondary sites, and there's a third site from which applications hosted on the primary or recovery site can be accessed.
-
-- Before failover, apps are hosted in subnet 192.168.1.0/24 on the primary site.
-- After failover, apps are configured in subnet 172.16.1.0/24 in the secondary site.
-- All three sites can access each other.
-- After failover, apps will be restored in the recovery subnet.
-- In this scenario there's no need to fail over the entire subnet, and no changes are needed to reconfigure VPN or network routes. The failover, and some DNS updates, ensure that applications remain accessible.
-- If DNS is configured to allow dynamic updates, the VMs will register themselves using the new IP address when they start after failover.
-**Before failover**
-
-![Diagram showing different IP addresses before failover.](./media/hyper-v-vmm-networking/network-design10.png)
-
-**After failover**
-
-![Diagram showing different IP addresses after failover.](./media/hyper-v-vmm-networking/network-design11.png)
--
-## Next steps
-
-[Run a failover](hyper-v-vmm-failover-failback.md)
-
site-recovery Hyper V Vmm Performance Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-performance-results.md
- Title: Test Hyper-V VM replication to a secondary site with VMM using Azure Site Recovery
-description: This article provides information about performance testing for replication of Hyper-V VMs in VMM clouds to a secondary site using Azure Site Recovery.
---- Previously updated : 12/27/2018---
-# Test results for Hyper-V replication to a secondary site
--
-This article provides the results of performance testing when replicating Hyper-V VMs in System Center Virtual Machine Manager (VMM) clouds, to a secondary datacenter.
-
-## Test goals
-
-The goal of testing was to examine how Site Recovery performs during steady state replication.
-- Steady state replication occurs when VMs have completed initial replication, and are synchronizing delta changes.
-- It's important to measure performance using steady state, because it's the state in which most VMs remain, unless unexpected outages occur.
-- The test deployment consisted of two on-premises sites, with a VMM server in each site. This test deployment is typical of a head office/branch office deployment, with head office acting as the primary site, and the branch office as the secondary or recovery site.
-## What we did
-
-Here's what we did in the test pass:
-
-1. Created VMs using VMM templates.
-2. Started VMs, and captured baseline performance metrics over 12 hours.
-3. Created clouds on the primary and recovery VMM servers.
-4. Configured replication in Site Recovery, including mapping between source and recovery clouds.
-5. Enabled protection for VMs, and allowed them to complete initial replication.
-6. Waited a couple of hours for system stabilization.
-7. Captured performance metrics over 12 hours, where all VMs remained in an expected replication state for those 12 hours.
-8. Measured the delta between the baseline performance metrics, and the replication performance metrics.
--
-## Primary server performance
-
-* Hyper-V Replica (used by Site Recovery) asynchronously tracks changes to a log file, with minimal storage overhead on the primary server.
-* Hyper-V Replica uses a self-maintained memory cache to minimize the IOPS overhead of tracking. It stores writes to the VHDX in memory, and flushes them to the log file before the log is sent to the recovery site. A disk flush also happens if the writes hit a predetermined limit.
-* The graph below shows the steady state IOPS overhead for replication. The IOPS overhead due to replication is around 5%, which is quite low.
-
- ![Graph that shows the steady state IOPS overhead for replication.](./media/hyper-v-vmm-performance-results/IC744913.png)
-
-Hyper-V Replica uses memory on the primary server, to optimize disk performance. As shown in the following graph, memory overhead on all servers in the primary cluster is marginal. The memory overhead shown is the percentage of memory used by replication, compared to the total installed memory on the Hyper-V server.
-
-![Primary results](./media/hyper-v-vmm-performance-results/IC744914.png)
-
-Hyper-V Replica has minimal CPU overhead. As shown in the graph, replication overhead is in the range of 2-3%.
-
-![Graph that shows replication overhead is in the range of 2-3%.](./media/hyper-v-vmm-performance-results/IC744915.png)
-
-## Secondary server performance
-
-Hyper-V Replica uses a small amount of memory on the recovery server, to optimize the number of storage operations. The graph summarizes the memory usage on the recovery server. The memory overhead shown is the percentage of memory used by replication, compared to the total installed memory on the Hyper-V server.
-
-![Graph that summarizes the memory usage on the recovery server.](./media/hyper-v-vmm-performance-results/IC744916.png)
-
-The amount of I/O operations on the recovery site is a function of the number of write operations on the primary site. Let's look at the total I/O operations on the recovery site in comparison with the total I/O operations and write operations on the primary site. The graphs show that the total IOPS on the recovery site is:
-
-* Around 1.5 times the write IOPS on the primary.
-* Around 37% of the total IOPS on the primary site.
-
-![Graph that shows a comparison of IOPS on primary and secondary sites.](./media/hyper-v-vmm-performance-results/IC744917.png)
-
-![Secondary results](./media/hyper-v-vmm-performance-results/IC744918.png)
-
-## Effect on network utilization
-
-An average of 275 Mb per second of network bandwidth was used between the primary and recovery nodes (with compression enabled), against an existing bandwidth of 5 Gb per second.
-
-![Results network utilization](./media/hyper-v-vmm-performance-results/IC744919.png)
-
-## Effect on VM performance
-
-An important consideration is the impact of replication on production workloads running on the virtual machines. If the primary site is adequately provisioned for replication, there shouldn't be any impact on the workloads. Hyper-V Replica's lightweight tracking mechanism ensures that workloads running in the virtual machines are not impacted during steady-state replication. This is illustrated in the following graphs.
-
-This graph shows IOPS performed by virtual machines running different workloads, before and after replication was enabled. You can observe that there is no difference between the two.
-
-![Replica effect results](./media/hyper-v-vmm-performance-results/IC744920.png)
-
-The following graph shows the throughput of virtual machines running different workloads, before and after replication was enabled. You can observe that replication has no significant impact.
-
-![Results replica effects](./media/hyper-v-vmm-performance-results/IC744921.png)
-
-## Conclusion
-
-The results clearly show that Site Recovery, coupled with Hyper-V Replica, scales well with minimal overhead for a large cluster. Site Recovery provides simple deployment, replication, management, and monitoring. Hyper-V Replica provides the necessary infrastructure for successful replication scaling.
-
-## Test environment details
-
-### Primary site
-
-* The primary site has a cluster containing five Hyper-V servers, running 470 virtual machines.
-* The VMs run different workloads, and all have Site Recovery protection enabled.
-* Storage for the cluster node is provided by an iSCSI SAN. Model: Hitachi HUS130.
-* Each cluster server has four network cards (NICs) of one Gbps each.
-* Two of the network cards are connected to an iSCSI private network, and two are connected to an external enterprise network. One of the external networks is reserved for cluster communications only.
-
-![Primary hardware requirements](./media/hyper-v-vmm-performance-results/IC744922.png)
-
-| Server | RAM | Model | Processor | Number of processors | NIC | Software |
-| | | | | | | |
| Hyper-V servers in cluster: <br />ESTLAB-HOST11<br />ESTLAB-HOST12<br />ESTLAB-HOST13<br />ESTLAB-HOST14<br />ESTLAB-HOST25 |128<br />(ESTLAB-HOST25 has 256) |Dell™ PowerEdge™ R820 |Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz |4 |1 Gbps x 4 |Windows Server Datacenter 2012 R2 (x64) + Hyper-V role |
| VMM server |2 | | |2 |1 Gbps |Windows Server Datacenter 2012 R2 (x64) + VMM 2012 R2 |
-
-### Secondary site
-
-* The secondary site has a six-node failover cluster.
-* Storage for the cluster node is provided by an iSCSI SAN. Model: Hitachi HUS130.
-
-![Secondary hardware specification](./media/hyper-v-vmm-performance-results/IC744923.png)
-
-| Server | RAM | Model | Processor | Number of processors | NIC | Software |
-| | | | | | | |
| Hyper-V servers in cluster: <br />ESTLAB-HOST07<br />ESTLAB-HOST08<br />ESTLAB-HOST09<br />ESTLAB-HOST10 |96 |Dell™ PowerEdge™ R720 |Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz |2 |1 Gbps x 4 |Windows Server Datacenter 2012 R2 (x64) + Hyper-V role |
| ESTLAB-HOST17 |128 |Dell™ PowerEdge™ R820 |Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz |4 | |Windows Server Datacenter 2012 R2 (x64) + Hyper-V role |
| ESTLAB-HOST24 |256 |Dell™ PowerEdge™ R820 |Intel(R) Xeon(R) CPU E5-4620 0 @ 2.20GHz |2 | |Windows Server Datacenter 2012 R2 (x64) + Hyper-V role |
| VMM server |2 | | |2 |1 Gbps |Windows Server Datacenter 2012 R2 (x64) + VMM 2012 R2 |
-
-### Server workloads
-
-* For test purposes, we picked workloads commonly used in enterprise customer scenarios.
-* We use [IOMeter](http://www.iometer.org) with the workload characteristics summarized in the following table for simulation.
-* All IOMeter profiles are set to write random bytes to simulate worst-case write patterns for workloads.
-
-| Workload | I/O size (KB) | % Access | %Read | Outstanding I/Os | I/O pattern |
-| | | | | | |
-| File Server |4<br />8<br />16<br />32<br />64 |60%<br />20%<br />5%<br />5%<br />10% |80%<br />80%<br />80%<br />80%<br />80% |8<br />8<br />8<br />8<br />8 |All 100% random |
-| SQL Server (volume 1)<br />SQL Server (volume 2) |8<br />64 |100%<br />100% |70%<br />0% |8<br />8 |100% random<br />100% sequential |
-| Exchange |32 |100% |67% |8 |100% random |
-| Workstation/VDI |4<br />64 |66%<br />34% |70%<br />95% |1<br />1 |Both 100% random |
-| Web File Server |4<br />8<br />64 |33%<br />34%<br />33% |95%<br />95%<br />95% |8<br />8<br />8 |All 75% random |
-
-### VM configuration
-
-* 470 VMs on the primary cluster.
-* All VMs with VHDX disk.
-* VMs running workloads summarized in the table. All were created with VMM templates.
-
-| Workload | # VMs | Minimum RAM (GB) | Maximum RAM (GB) | Logical disk size (GB) per VM | Maximum IOPS |
-| | | | | | |
-| SQL Server |51 |1 |4 |167 |10 |
-| Exchange Server |71 |1 |4 |552 |10 |
-| File Server |50 |1 |2 |552 |22 |
-| VDI |149 |.5 |1 |80 |6 |
-| Web server |149 |.5 |1 |80 |6 |
-| TOTAL |470 | | |96.83 TB |4108 |
-
-### Site Recovery settings
-
-* Site Recovery was configured for on-premises to on-premises protection.
-* The VMM server has four clouds configured, containing the Hyper-V cluster servers and their VMs.
-
-| Primary VMM cloud | Protected VMs | Replication frequency | Additional recovery points |
-| | | | |
-| PrimaryCloudRpo15m |142 |15 mins |None |
-| PrimaryCloudRpo30s |47 |30 secs |None |
-| PrimaryCloudRpo30sArp1 |47 |30 secs |1 |
-| PrimaryCloudRpo5m |235 |5 mins |None |
-
-### Performance metrics
-
-The table summarizes the performance metrics and counters that were measured in the deployment. A sample collection command follows the table.
-
-| Metric | Counter |
-| | |
-| CPU |\Processor(_Total)\% Processor Time |
-| Available memory |\Memory\Available MBytes |
-| IOPS |\PhysicalDisk(_Total)\Disk Transfers/sec |
-| VM read (IOPS) operations/sec |\Hyper-V Virtual Storage Device(\<VHD>)\Read Operations/Sec |
| VM write (IOPS) operations/sec |\Hyper-V Virtual Storage Device(\<VHD>)\Write Operations/Sec |
-| VM read throughput |\Hyper-V Virtual Storage Device(\<VHD>)\Read Bytes/sec |
-| VM write throughput |\Hyper-V Virtual Storage Device(\<VHD>)\Write Bytes/sec |
-
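-The counters in the table can be collected with PowerShell's built-in `Get-Counter` cmdlet. This is a minimal sketch; the sample interval and count are arbitrary example values:
-
-```powershell
-# Collect 12 samples, 5 seconds apart, of the CPU, memory, and IOPS counters.
-$counters = @(
-    '\Processor(_Total)\% Processor Time',
-    '\Memory\Available MBytes',
-    '\PhysicalDisk(_Total)\Disk Transfers/sec'
-)
-Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12
-```
-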
-## Next steps
-
-[Set up replication](hyper-v-vmm-disaster-recovery.md)
site-recovery Hyper V Vmm Recovery Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-recovery-script.md
- Title: Add a script to a recovery plan in Azure Site Recovery
-description: Learn how to add a VMM script to a recovery plan for disaster recovery of Hyper-V VMs in VMM clouds.
---- Previously updated : 11/27/2018---
-# Add a VMM script to a recovery plan
-
-This article describes how to create a System Center Virtual Machine Manager (VMM) script and add it to a recovery plan in [Azure Site Recovery](site-recovery-overview.md).
-
-Post any comments or questions at the bottom of this article, or on the [Microsoft Q&A question page for Azure Recovery Services](/answers/topics/azure-site-recovery.html).
-
-## Prerequisites
-
-You can use PowerShell scripts in your recovery plans. To be accessible from a recovery plan, the script must be authored and placed in the VMM library. Keep the following considerations in mind while you write the script:
-
-* Ensure that scripts use try-catch blocks, so that exceptions are handled gracefully.
- - If an exception occurs in the script, the script stops running, and the task shows as failed.
- - If an error occurs, the remainder of the script doesn't run.
- - If an error occurs when you run an unplanned failover, the recovery plan continues.
- - If an error occurs when you run a planned failover, the recovery plan stops. Fix the script, check that it runs as expected, and then run the recovery plan again.
- - The `Write-Host` command doesn't work in a recovery plan script. If you use the `Write-Host` command in a script, the script fails. To create output, create a proxy script that in turn runs your main script. To ensure that all output is piped out, use the **\>\>** command. A proxy-script sketch follows at the end of this section.
- - The script times out if it doesn't return within 600 seconds.
- - If anything is written to STDERR, the script is classified as failed. This information is displayed in the script execution details.
-
-* Scripts in a recovery plan run in the context of the VMM service account. Ensure that this account has read permissions for the remote share on which the script is located. Test the script by running it with the same level of user rights as the VMM service account.
-* VMM cmdlets are delivered in a Windows PowerShell module. The module is installed when you install the VMM console. To load the module into your script, use the following command in the script:
-
- `Import-Module -Name virtualmachinemanager`
-
- For more information, see [Get started with Windows PowerShell and VMM](/previous-versions/system-center/system-center-2012-R2/hh875013(v=sc.12)).
-* Ensure that you have at least one library server in your VMM deployment. By default, the library share path for a VMM server is located locally on the VMM server. The folder name is MSSCVMMLibrary.
-
- If your library share path is remote (or if it's local but not shared with MSSCVMMLibrary), configure the share as follows, using \\libserver2.contoso.com\share\ as an example:
-
- 1. Open the Registry Editor, and then go to **HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\Azure Site Recovery\Registration**.
-
- 1. Change the value for **ScriptLibraryPath** to **\\\libserver2.contoso.com\share\\**. Specify the FQDN. Provide permissions to the share location. The path you specify must be the root node of the share; to confirm it, open the library in VMM and go to its root node. The path that opens is the path that you must use in the variable.
-
- 1. Test the script by using a user account that has the same level of user rights as the VMM service account. Using these user rights verifies that standalone, tested scripts run the same way that they run in recovery plans. On the VMM server, set the execution policy to bypass, as follows:
-
- a. Open the **64-bit Windows PowerShell** console as an administrator.
-
- b. Enter **Set-executionpolicy bypass**. For more information, see [Using the Set-ExecutionPolicy cmdlet](/previous-versions/windows/it-pro/windows-powershell-1.0/ee176961(v=technet.10)).
-
- > [!IMPORTANT]
- > Set **Set-executionpolicy bypass** only in the 64-bit PowerShell console. If you set it for the 32-bit PowerShell console, the scripts don't run.
-
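-A minimal sketch of the proxy-script approach mentioned earlier (the file names and the share path reuse the examples from this article; adjust them for your environment):
-
-```powershell
-# RPScriptProxy.ps1 (hypothetical name): runs the main recovery plan script and
-# pipes all output to a log file. Write-Host isn't supported in recovery plan
-# scripts, so the main script should use Write-Output instead.
-$log = "$env:TEMP\RPScript.log"
-& "\\libserver2.contoso.com\share\RPScripts\RPScript.ps1" >> $log
-```
-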
-## Add the script to the VMM library
-
-If you have a VMM source site, you can create a script on the VMM server. Then, include the script in your recovery plan.
-
-1. In the library share, create a new folder. For example, \<VMM server name>\MSSCVMMLibrary\RPScripts. Place the folder on the source and target VMM servers.
-1. Create the script. For example, name the script RPScript. Verify that the script works as expected.
-1. Place the script in the \<VMM server name>\MSSCVMMLibrary folder on the source and target VMM servers.
-
-## Add the script to a recovery plan
-
-After you've added VMs or replication groups to a recovery plan and created the plan, you can add the script to the group.
-
-1. Open the recovery plan.
-1. In the **Step** list, select an item. Then, select either **Script** or **Manual Action**.
-1. Specify whether to add the script or action before or after the selected item. To move the position of the script up or down, select the **Move Up** and **Move Down** buttons.
-1. If you add a VMM script, select **Failover to VMM script**. In **Script Path**, enter the relative path to the share. For example, enter **\RPScripts\RPScript.PS1**.
-1. If you add an Azure Automation runbook, specify the Automation account in which the runbook is located. Then, select the Azure runbook script that you want to use.
-1. To ensure that the script works as expected, do a test failover of the recovery plan.
--
-## Next steps
-* Learn more about [running failovers](site-recovery-failover.md).
-
site-recovery Hyper V Vmm Test Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-test-failover.md
- Title: Run a Hyper-V disaster recovery drill to a secondary site with Azure Site Recovery
-description: Learn how to run a DR drill for Hyper-V VMs in VMM clouds to a secondary on-premises datacenter using Azure Site Recovery.
---- Previously updated : 11/27/2018---
-# Run a DR drill for Hyper-V VMs to a secondary site
--
-This article describes how to do a disaster recovery (DR) drill for Hyper-V VMs that are managed in System Center Virtual Machine Manager (VMM) clouds, to a secondary on-premises site, using [Azure Site Recovery](site-recovery-overview.md).
-
-You run a test failover to validate your replication strategy, and perform a DR drill without any data loss or downtime. A test failover doesn't have any impact on the ongoing replication, or on your production environment.
-
-## How do test failovers work?
-
-You run a test failover from the primary to the secondary site. If you simply want to check that a VM fails over, you can run a test failover without setting anything up on the secondary site. If you want to verify that app failover works as expected, you need to set up networking and infrastructure in the secondary location.
-- You can run a test failover on a single VM, or on a [recovery plan](site-recovery-create-recovery-plans.md).
-- You can run a test failover without a network, with an existing network, or with an automatically created network. More details about these options are provided in the table below.
- - You can run a test failover without a network. This option is useful if you simply want to check that a VM was able to fail over, but you won't be able to verify any network configuration.
- - Run the failover with an existing network. We recommend you don't use a production network.
- - Run the failover and let Site Recovery automatically create a test network. In this case Site Recovery will create the network automatically, and clean it up when test failover is complete.
-- You need to select a recovery point for the test failover:
- - **Latest processed**: This option fails a VM over to the latest recovery point processed by Site Recovery. This option provides a low RTO (Recovery Time Objective), because no time is spent processing unprocessed data.
 - **Latest app-consistent**: This option fails over a VM to the latest application-consistent recovery point processed by Site Recovery.
 - **Latest**: This option first processes all the data that has been sent to the Site Recovery service, to create a recovery point for each VM before failing over to it. This option provides the lowest RPO (Recovery Point Objective), because the VM created after failover has all the data that was replicated to the Site Recovery service when the failover was triggered.
- - **Latest multi-VM processed**: Available for recovery plans that include one or more VMs that have multi-VM consistency enabled. VMs with the setting enabled fail over to the latest common multi-VM consistent recovery point. Other VMs fail over to the latest processed recovery point.
- - **Latest multi-VM app-consistent**: This option is available for recovery plans with one or more VMs that have multi-VM consistency enabled. VMs that are part of a replication group fail over to the latest common multi-VM application-consistent recovery point. Other VMs fail over to their latest application-consistent recovery point.
- - **Custom**: Use this option to fail over a specific VM to a particular recovery point.
---
-## Prepare networking
-
-When you run a test failover, you're asked to select network settings for test replica machines, as summarized in the table.
-
-| **Option** | **Details** |
-| | |
-| **None** | The test VM is created on the host on which the replica VM is located. It isn't added to the cloud, and isn't connected to any network.<br/><br/> You can connect the machine to a VM network after it has been created.|
-| **Use existing** | The test VM is created on the host on which the replica VM is located. It isn't added to the cloud.<br/><br/>Create a VM network that's isolated from your production network.<br/><br/>If you're using a VLAN-based network, we recommend that you create a separate logical network (not used in production) in VMM for this purpose. This logical network is used to create VM networks for test failovers.<br/><br/>The logical network should be associated with at least one of the network adapters of all the Hyper-V servers that are hosting virtual machines.<br/><br/>For VLAN logical networks, the network sites that you add to the logical network should be isolated.<br/><br/>If you're using a Windows Network VirtualizationΓÇôbased logical network, Azure Site Recovery automatically creates isolated VM networks. |
-| **Create a network** | A temporary test network is created automatically based on the setting that you specify in **Logical Network** and its related network sites.<br/><br/> Failover checks that VMs are created.<br/><br/> You should use this option if a recovery plan uses more than one VM network.<br/><br/> If you're using Windows Network Virtualization networks, this option can automatically create VM networks with the same settings (subnets and IP address pools) in the network of the replica virtual machine. These VM networks are cleaned up automatically after the test failover is complete.<br/><br/> The test VM is created on the host on which the replica virtual machine exists. It isn't added to the cloud.|
-
-### Best practices
-- Testing a production network causes downtime for production workloads. Ask your users not to use related apps when the disaster recovery drill is in progress.
-- The test network doesn't need to match the VMM logical network type used for test failover. But some combinations don't work:
- - If the replica uses DHCP and VLAN-based isolation, the VM network for the replica doesn't need a static IP address pool. So using Windows Network Virtualization for the test failover won't work because no address pools are available.
-
- - Test failover won't work if the replica network uses no isolation, and the test network uses Windows Network Virtualization. This is because the no-isolation network doesn't have the subnets required to create a Windows Network Virtualization network.
-
-- We recommend that you don't use the network you selected for network mapping for test failover.
-- How replica virtual machines are connected to mapped VM networks after failover depends on how the VM network is configured in the VMM console.
-### VM network configured with no isolation or VLAN isolation
-
-If a VM network is configured in VMM with no isolation, or VLAN isolation, note the following:
-- If DHCP is defined for the VM network, the replica virtual machine is connected to the VLAN ID through the settings that are specified for the network site in the associated logical network. The virtual machine receives its IP address from the available DHCP server.
-- You don't need to define a static IP address pool for the target VM network. If a static IP address pool is used for the VM network, the replica virtual machine is connected to the VLAN ID through the settings that are specified for the network site in the associated logical network.
-- The virtual machine receives its IP address from the pool that's defined for the VM network. If a static IP address pool isn't defined on the target VM network, IP address allocation will fail. Create the IP address pool on both the source and target VMM servers that you will use for protection and recovery.
-### VM network with Windows Network Virtualization
-
-If a VM network is configured in VMM with Windows Network Virtualization, note the following:
-- You should define a static pool for the target VM network, regardless of whether the source VM network is configured to use DHCP or a static IP address pool.
-- If you define DHCP, the target VMM server acts as a DHCP server and provides an IP address from the pool that's defined for the target VM network.
-- If use of a static IP address pool is defined for the source server, the target VMM server allocates an IP address from the pool. In both cases, IP address allocation will fail if a static IP address pool is not defined.
-## Prepare the infrastructure
-
-If you simply want to check that a VM can fail over, you can run a test failover without an infrastructure. If you want to do a full DR drill to test app failover, you need to prepare the infrastructure at the secondary site:
-- If you run a test failover using an existing network, prepare Active Directory, DHCP, and DNS in that network.
-- If you run a test failover with the option to create a VM network automatically, you need to add infrastructure resources to the automatically created network before you run the test failover. In a recovery plan, you can facilitate this by adding a manual step before Group-1 in the recovery plan that you're going to use for the test failover. Then, add the infrastructure resources to the automatically created network before you run the test failover.
-### Prepare DHCP
-If the virtual machines involved in test failover use DHCP, create a test DHCP server within the isolated network for the purpose of test failover.
--
-### Prepare Active Directory
-To run a test failover for application testing, you need a copy of the production Active Directory environment in your test environment. For more information, review the [test failover considerations for Active Directory](site-recovery-active-directory.md#test-failover-considerations).
-
-### Prepare DNS
-Prepare a DNS server for the test failover as follows:
-
-* **DHCP**: If virtual machines use DHCP, the IP address of the test DNS should be updated on the test DHCP server. If you're using a network type of Windows Network Virtualization, the VMM server acts as the DHCP server. Therefore, the IP address of DNS should be updated in the test failover network. In this case, the virtual machines register themselves to the relevant DNS server.
* **Static address**: If virtual machines use a static IP address, the IP address of the test DNS server should be updated in the test failover network. You might need to update DNS with the IP address of the test virtual machines. You can use the following sample script for this purpose:
-
- ```powershell
- Param(
- [string]$Zone,
- [string]$name,
- [string]$IP
- )
- # Get the existing DNS record, clone it, and update the IP address.
- $Record = Get-DnsServerResourceRecord -ZoneName $zone -Name $name
- $newrecord = $Record.Clone()
- $newrecord.RecordData[0].IPv4Address = $IP
- # Replace the old record with the updated one.
- Set-DnsServerResourceRecord -ZoneName $zone -OldInputObject $Record -NewInputObject $newrecord
- ```
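-
- To confirm that the record was updated, you can query it again. This is a minimal sketch; the zone and record names are example values:
-
- ```powershell
- # Display the host name and current IPv4 address of the updated A record.
- Get-DnsServerResourceRecord -ZoneName "contoso.com" -Name "app01" |
-     Select-Object HostName, @{ n = 'IPv4'; e = { $_.RecordData.IPv4Address } }
- ```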
-
-## Run a test failover
-
-This procedure describes how to run a test failover for a recovery plan. Alternatively, you can run the failover for a single virtual machine on the **Virtual Machines** tab.
-
-1. Select **Recovery Plans** > *recoveryplan_name*. Click **Failover** > **Test Failover**.
-2. On the **Test Failover** blade, specify how replica VMs should be connected to networks after the test failover.
-3. Track failover progress on the **Jobs** tab.
-4. After failover is complete, verify that the VMs start successfully.
-5. When you're done, click **Cleanup test failover** on the recovery plan. In **Notes**, record and save any observations associated with the test failover. This step deletes any VMs and networks that were created by Site Recovery during test failover.
-
-![Test failover](./media/hyper-v-vmm-test-failover/TestFailover.png)
-
--
-> [!TIP]
-> The IP address given to a virtual machine during test failover is the same IP address that the virtual machine would receive for a planned or unplanned failover (presuming that the IP address is available in the test failover network). If the same IP address isn't available in the test failover network, the virtual machine receives another IP address that's available in the test failover network.
---
-### Run a test failover to a production network
-
-We recommend that you don't run a test failover to the production recovery site network that you specified during network mapping. But if you do need to validate end-to-end network connectivity in a failed-over VM, note the following points:
-
-* Make sure that the primary VM is shut down when you're doing the test failover. If you don't, two virtual machines with the same identity will be running in the same network at the same time. That situation can lead to undesired consequences.
-* Any changes that you make to the test failover VMs are lost when you clean up the test failover virtual machines. These changes are not replicated back to the primary VMs.
-* Testing like this leads to downtime for your production application. Ask users of the application not to use the application when the DR drill is in progress.
--
-## Next steps
-After you have successfully run a DR drill, you can [run a full failover](site-recovery-failover.md).
---
site-recovery Quickstart Enable Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/quickstart-enable-replication.md
+
+ Title: Enable replication for VMware VM disaster recovery to Azure with Azure Site Recovery
+description: Quickly enable replication for on-premises VMware VMs with Azure Site Recovery - Modernized.
+ Last updated : 10/03/2023+++++
+# Quickstart: Set up disaster recovery to Azure for on-premises VMware VMs - Modernized
+
+This quickstart describes how to enable replication of on-premises VMware VMs for disaster recovery to Azure, using the modernized VMware/physical machine protection experience in [Azure Site Recovery](site-recovery-overview.md).
+
+## Before you start
+
+This article assumes that you've already set up disaster recovery for on-premises VMware VMs. If you haven't, follow the [Set up disaster recovery to Azure for on-premises VMware VMs - Modernized](./vmware-azure-set-up-replication-tutorial-modernized.md) tutorial.
+
+## Prerequisites
+
+To complete this quickstart, ensure that the following are in place:
+
+- Ensure that the [prerequisites](vmware-physical-azure-support-matrix.md) across storage and networking are met.
+- [Prepare an Azure account](./vmware-azure-set-up-replication-tutorial-modernized.md#grant-required-permissions-to-the-vault).
+- [Create a Recovery Services vault](./quickstart-create-vault-template.md?tabs=CLI).
++
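+As a minimal sketch, you can also create the vault with Az PowerShell instead of the template; the resource names and region below are example values:
+
+```powershell
+# Create a resource group and a Recovery Services vault in it.
+New-AzResourceGroup -Name "asr-rg" -Location "eastus2"
+New-AzRecoveryServicesVault -Name "asr-vault" -ResourceGroupName "asr-rg" -Location "eastus2"
+```
+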
+## Enable replication of VMware VMs
+
+After an Azure Site Recovery replication appliance is added to a vault, you can get started with protecting the machines.
+
+Follow these steps to enable replication:
+
+1. Select **Site Recovery** under **Getting Started** section.
+1. Select **Enable Replication (Modernized)** under the VMware section.
+
+1. Choose the machine type you want to protect through Azure Site Recovery.
+
+ > [!NOTE]
+ > In the modernized experience, support is limited to virtual machines.
+
+ ![Screenshot of Select source machines.](./media/quickstart-enable-replication/select-source.png)
+
+1. After choosing the machine type, select the vCenter server that's added to the Azure Site Recovery replication appliance registered in this vault.
+
+1. Search for the source machine name to protect it. To review the selected machines, select **Selected resources**.
+
+1. After you select the list of VMs, select **Next** to proceed to source settings. Here, select the [replication appliance](#appliance-selection) and VM credentials. The Azure Site Recovery replication appliance uses these credentials to push the Mobility Service agent onto the machine to complete enabling replication, so ensure that accurate credentials are chosen.
+
+ >[!NOTE]
+ >For Linux OS, ensure that you provide root credentials. For Windows OS, add a user account with admin privileges. These credentials are used to push the Mobility Service onto the source machine during the enable replication operation.
+
+ ![Screenshot of Source settings.](./media/quickstart-enable-replication/source-settings.png)
+
+1. Select **Next** to provide target region properties. By default, the vault subscription and vault resource group are selected. You can choose a subscription and resource group of your choice. Your source machines will be deployed in this subscription and resource group when you fail over in the future.
+
+ ![Screenshot of Target properties.](./media/quickstart-enable-replication/target-properties.png)
+
+1. Next, you can select an existing Azure network or create a new target network to be used during failover. If you select **Create new**, you're redirected to the create virtual network context pane, and you're asked to provide address space and subnet details. This network will be created in the target subscription and target resource group selected in the previous step.
+
+1. Then, provide the test failover network details.
+
+ > [!NOTE]
+ > Ensure that the test failover network is different from the failover network. This is to make sure the failover network is readily available in case of an actual disaster.
+
+1. Select the storage.
+
+ - Cache storage account:
+     Choose the cache storage account that Azure Site Recovery uses for staging purposes: caching and storing logs before writing the changes to the managed disks.
+
+     By default, Azure Site Recovery creates a new LRS v1 storage account for the first enable replication operation in a vault. For subsequent operations, the same cache storage account is reused.
+ - Managed disks
+
+     By default, Standard HDD managed disks are created in Azure. You can customize the type of managed disks by selecting **Customize**. Choose the type of disk based on the business requirement, and ensure that the [appropriate disk type is chosen](../virtual-machines/disks-types.md#disk-type-comparison) based on the IOPS of the source machine disks. For pricing information, see the [managed disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) page.
+
+ >[!NOTE]
+ > If the Mobility Service is installed manually before enabling replication, you can change the type of managed disk at a disk level. Otherwise, one managed disk type can be chosen at the machine level by default.
+
+1. Create a new replication policy if needed.
+
+    A default replication policy is created under the vault, with three days of recovery point retention and app-consistent recovery points disabled. You can create a new replication policy or modify the existing one to meet your RPO requirements.
+
+ - Select **Create new**.
+
+ - Enter the **Name**.
+
+ - Enter a value for **Retention period (in days)**. You can enter any value ranging from 0 to 15.
+
+    - Select **Enable app consistency frequency** if you wish, and enter a value for **App-consistent snapshot frequency (in hours)** as per business requirements.
+
+ - Select **OK** to save the policy.
+
+ The policy will be created and can be used for protecting the chosen source machines.
+
+1. After choosing the replication policy, select **Next**. Review the Source and Target properties. Select **Enable Replication** to initiate the operation.
+
+ ![Screenshot of Site recovery.](./media/quickstart-enable-replication/enable-replication.png)
+
+   A job is created to enable replication of the selected machines. To track the progress, navigate to Site Recovery jobs in the Recovery Services vault.
+
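+As a minimal sketch, you can also track the jobs with Az PowerShell; the vault and resource group names are example values:
+
+```powershell
+# Set the vault context, then list recent Site Recovery jobs with their state.
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "asr-rg" -Name "asr-vault"
+Set-AzRecoveryServicesAsrVaultContext -Vault $vault
+Get-AzRecoveryServicesAsrJob | Select-Object DisplayName, State, StartTime
+```
+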
+## Appliance selection
+
+- You can select any of the Azure Site Recovery replication appliances registered under a vault to protect a machine.
+- The same replication appliance can be used for both forward and backward protection operations, if it's in a non-critical state. This shouldn't impact the performance of the replications.
+
+## Next steps
+
+- Learn how to [set up disaster recovery to Azure for on-premises VMware VMs - Modernized](./vmware-azure-set-up-replication-tutorial-modernized.md).
+- Learn how to [run a disaster recovery drill](site-recovery-test-failover-to-azure.md).
+
site-recovery Site Recovery Manage Network Interfaces On Premises To Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-manage-network-interfaces-on-premises-to-azure.md
Previously updated : 4/9/2019 Last updated : 09/28/2023
For VMware and physical machines, and for Hyper-V (without System Center Virtual
3. Under **Network properties**, choose a virtual network from the list of available network interfaces.
- ![Network settings](./media/site-recovery-manage-network-interfaces-on-premises-to-azure/compute-and-network.png)
+ ![Screenshot of network settings.](./media/site-recovery-manage-network-interfaces-on-premises-to-azure/compute-and-networks.png)
Modifying the target network affects all network interfaces for that specific virtual machine.
You can modify the subnet and IP address for a replicated item's network interfa
3. Enter the desired IP address (as required).
- ![Network interface settings](./media/site-recovery-manage-network-interfaces-on-premises-to-azure/network-interface-settings.png)
+ ![Screenshot of network interface settings.](./media/site-recovery-manage-network-interfaces-on-premises-to-azure/network-interface-setting.png)
4. Select **OK** to finish editing and return to the **Compute and Network** pane.
site-recovery Site To Site Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-to-site-deprecation.md
- Title: Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery | Microsoft Docs
-description: Details about the upcoming deprecation of DR between customer-owned sites managed by SCVMM, and the alternate options available.
----- Previously updated : 02/25/2020---
-# Deprecation of disaster recovery between customer-managed sites (with VMM) using Azure Site Recovery
-
-This article describes the upcoming deprecation plan, the corresponding implications, and the alternative options available for the customers for the following scenario:
-
-DR between customer-owned sites managed by System Center Virtual Machine Manager (SCVMM) using Site Recovery.
-
-> [!IMPORTANT]
-> Customers are advised to take the remediation steps as early as possible to avoid any disruption to their environment.
-
-## What changes should you expect?
-- Starting March 2020, you will receive Azure portal notifications and email communications about the upcoming deprecation of site-to-site replication of Hyper-V VMs. The deprecation is planned for March 2023.
-- If you have an existing configuration, there will be no impact to the setup.
-- Once the scenarios are deprecated, unless you follow the alternate approaches, the existing replications may get disrupted. Customers won't be able to view, manage, or perform any DR-related operations via the Azure Site Recovery experience in the Azure portal.
-
-## Alternatives
-
-You can choose from the following alternatives to ensure that your DR strategy isn't impacted once the scenario is deprecated.
-- Option 1 (recommended): Choose to [start using Azure as the DR target](hyper-v-vmm-azure-tutorial.md).
-- Option 2: Choose to continue with site-to-site replication using the underlying [Hyper-V Replica solution](/windows-server/virtualization/hyper-v/manage/set-up-hyper-v-replica), but you will be unable to manage DR configurations using Azure Site Recovery in the Azure portal.
-## Remediation steps
-
-If you choose Option 1, complete the following steps:
-
-1. [Disable protection of all the virtual machines associated with the VMMs](site-recovery-manage-registration-and-protection.md#disable-protection-for-a-hyper-v-virtual-machine-replicating-to-secondary-vmm-server-using-the-system-center-vmm-to-vmm-scenario). Use the **Disable replication and remove** option or run the scripts mentioned to ensure the replication settings on-premises are cleaned up.
-
-2. [Unregister all the VMM servers](site-recovery-manage-registration-and-protection.md#unregister-a-vmm-server) from the site-to-site replication configuration.
-
-3. [Prepare Azure resources](tutorial-prepare-azure-for-hyperv.md) for enabling replication of your VMs.
-4. [Prepare on-premises Hyper-V servers](hyper-v-prepare-on-premises-tutorial.md)
-5. [Set up replication for the VMs in the VMM cloud](hyper-v-vmm-azure-tutorial.md)
-6. Optional but recommended: [Run a DR drill](tutorial-dr-drill-azure.md)
-
-If you choose Option 2 and use Hyper-V Replica, complete the following steps:
-
-1. In **Protected Items** > **Replicated Items**, right-click the machine > **Disable replication**.
-2. In **Disable replication**, select **Remove**.
-
- This removes the replicated item from Azure Site Recovery (billing is stopped). Replication configuration on the on-premises virtual machine **will not** be cleaned up.
-
-## Next steps
-Plan for the deprecation and choose an alternate option that's best suited for your infrastructure and business. If you have any questions, reach out to Microsoft Support.
-
site-recovery Vmware Azure Tutorial Failover Failback Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback-modernized.md
Title: Fail over VMware VMs to Azure with Site Recovery - Modernized
+ Title: Run a VMware VM failover to Azure
description: Learn how to fail over VMware VMs to Azure in Azure Site Recovery - Modernized Previously updated : 08/19/2021 Last updated : 09/29/2023
In this tutorial, you learn how to:
[Learn about](failover-failback-overview.md#types-of-failover) different types of failover. If you want to fail over multiple VMs in a recovery plan, review [this article](site-recovery-failover.md).
-## Before you start
+## Prerequisites
Complete the previous tutorials:
Complete the previous tutorials:
Before you run a failover, check the VM properties to make sure that the VMs meet [Azure requirements](vmware-physical-azure-support-matrix.md#replicated-machines).
-Verify properties as follows:
+Follow these steps to verify VM properties:
1. In **Protected Items**, select **Replicated Items**, and then select the VM you want to verify.
-2. In the **Replicated item** pane, there's a summary of VM information, health status, and the
- latest available recovery points. Select **Properties** to view more details.
+2. In the **Replicated item** pane, there's a summary of VM information, health status, and the latest available recovery points. Select **Properties** to view more details.
3. In **Compute and Network**, you can modify these properties as needed: * Azure name
Ensure the following for the VM, after it is failed over to Azure:
## Cancel planned failover
-If your on-premises environment is not ready or in case of any challenges, you can cancel the planned failover
-You can perform a planned failover any time later, once your on-premises conditions turn favorable.
+If your on-premises environment is not ready or in case of any challenges, you can cancel the planned failover. You can perform a planned failover any time later, once your on-premises conditions turn favorable.
**To cancel a planned failover**:
After successfully enabling replication and initial replication, recovery points
After failover, reprotect the Azure VMs to on-premises. After the VMs are reprotected and replicating to the on-premises site, fail back from Azure when you're ready.
-> [!div class="nextstepaction"]
-> [Reprotect Azure VMs](failover-failback-overview-modernized.md)
-> [Fail back from Azure](failover-failback-overview-modernized.md)
+- [Reprotect Azure VMs](failover-failback-overview-modernized.md)
+- [Fail back from Azure](failover-failback-overview-modernized.md)
site-recovery Vmware Azure Tutorial Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-tutorial-failover-failback.md
Title: Fail over VMware VMs to Azure with Site Recovery - Classic description: Learn how to fail over VMware VMs to Azure in Azure Site Recovery - Classic - Previously updated : 08/19/2021+ Last updated : 09/22/2023
This article describes how to fail over an on-premises VMware virtual machine (VM) to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
-For information about failover in modernized release, [see this article](vmware-azure-tutorial-failover-failback-modernized.md).
+- Learn how to [failover in modernized release](vmware-azure-tutorial-failover-failback-modernized.md).
+- Learn how to [fail over VMs and physical servers](site-recovery-failover.md).
+- Learn about the [different types of failover](failover-failback-overview.md#types-of-failover).
+- Learn how to [fail over multiple VMs in a recovery plan](site-recovery-failover.md).
-In this tutorial, you learn how to:
+## Prerequisites
-> [!div class="checklist"]
-> * Verify that the VMware VM properties conform with Azure requirements.
-> * Fail over specific VMs to Azure.
+Ensure the following:
-> [!NOTE]
-> Tutorials show you the simplest deployment path for a scenario. They use default options where possible and don't show all possible settings and paths. If you want to learn about failover in detail, see [Fail over VMs and physical servers](site-recovery-failover.md).
-
-[Learn about](failover-failback-overview.md#types-of-failover) different types of failover. If you want to fail over multiple VMs in a recovery plan, review [this article](site-recovery-failover.md).
-
-## Before you start
-
-Complete the previous tutorials:
-
-1. Make sure you've [set up Azure](tutorial-prepare-azure.md) for on-premises disaster recovery of VMware VMs, Hyper-V VMs, and physical machines to Azure.
+1. Ensure you've [set up Azure](tutorial-prepare-azure.md) for on-premises disaster recovery of VMware VMs, Hyper-V VMs, and physical machines to Azure.
2. Prepare your on-premises [VMware](vmware-azure-tutorial-prepare-on-premises.md) environment for disaster recovery.
3. Set up disaster recovery for [VMware VMs](vmware-azure-tutorial.md).
4. Run a [disaster recovery drill](tutorial-dr-drill-azure.md) to make sure that everything's working as expected.
In some scenarios, failover requires additional processing that takes around 8 t
After failover, reprotect the Azure VMs to on-premises. Then, after the VMs are reprotected and replicating to the on-premises site, fail back from Azure when you're ready.
-> [!div class="nextstepaction"]
-> [Reprotect Azure VMs](vmware-azure-reprotect.md)
-> [Fail back from Azure](vmware-azure-failback.md)
+- [Reprotect Azure VMs](vmware-azure-reprotect.md)
+- [Fail back from Azure](vmware-azure-failback.md)
site-recovery Vmware Physical Secondary Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-secondary-disaster-recovery.md
- Title: Disaster recovery of VMware VMs/physical servers to a secondary site with Azure Site Recovery
-description: Learn how to set up disaster recovery of VMware VMs, or Windows and Linux physical servers, to a secondary site with Azure Site Recovery.
--- Previously updated : 11/05/2019----
-# Set up disaster recovery of on-premises VMware virtual machines or physical servers to a secondary site
-
-InMage Scout in [Azure Site Recovery](site-recovery-overview.md) provides real-time replication between on-premises VMware sites. InMage Scout is included in Azure Site Recovery service subscriptions.
-
-## End-of-support announcement
-
-The Azure Site Recovery scenario for replication between on-premises VMware or physical datacenters is reaching end-of-support.
-- From August 2018, the scenario can't be configured in the Recovery Services vault, and the InMage Scout software can't be downloaded from the vault. Existing deployments will be supported.
-- From December 31, 2020, the scenario won't be supported.
-- Existing partners can onboard new customers to the scenario until support ends.
-During 2018 and 2019, two updates will be released:
-- Update 7: Fixes network configuration and compliance issues, and provides TLS 1.2 support.
-- Update 8: Adds support for Linux operating systems RHEL/CentOS 7.3/7.4/7.5, and for SUSE 12.
-After Update 8, no further updates will be released. There will be limited hotfix support for the operating systems added in Update 8, and bug fixes based on best effort.
-
-Azure Site Recovery continues to innovate by providing VMware and Hyper-V customers a seamless and best-in-class DRaaS solution with Azure as a disaster recovery site. Microsoft recommends that existing InMage/ASR Scout customers consider using Azure Site Recovery's VMware to Azure scenario for their business continuity needs. Azure Site Recovery's VMware to Azure scenario is an enterprise-class DR solution for VMware applications, which offers RPO and RTO of minutes, support for multi-VM application replication and recovery, seamless onboarding, comprehensive monitoring, and a significant TCO advantage.
-
-### Scenario migration
-As an alternative, we recommend setting up disaster recovery for on-premises VMware VMs and physical machines by replicating them to Azure. Do this as follows:
-
-1. Review the quick comparison below. Before you can replicate on-premises machines, you need to check that they meet the [requirements](./vmware-physical-azure-support-matrix.md#replicated-machines) for replication to Azure. If you're replicating VMware VMs, we recommend that you review the [capacity planning guidelines](./site-recovery-plan-capacity-vmware.md), and run the [Deployment Planner tool](./site-recovery-deployment-planner.md) to identify capacity requirements and verify compliance.
-2. After running the Deployment Planner, you can set up replication:
-   - For VMware VMs, follow these tutorials to [prepare Azure](./tutorial-prepare-azure.md), [prepare your on-premises VMware environment](./vmware-azure-tutorial-prepare-on-premises.md), and [set up disaster recovery](./vmware-azure-tutorial-prepare-on-premises.md).
-   - For physical machines, follow this [tutorial](./physical-azure-disaster-recovery.md).
-3. After machines are replicating to Azure, you can run a [disaster recovery drill](./site-recovery-test-failover-to-azure.md) to make sure everything's working as expected.
-
-### Quick comparison
-
-**Feature** | **Replication to Azure** |**Replication between VMware datacenters**
--- | --- | ---
-**Required components** |Mobility service on replicated machines. On-premises configuration server, process server, master target server. Temporary process server in Azure for failback.|Mobility service, Process Server, Configuration Server, and Master Target
-**Configuration and orchestration** |Recovery Services vault in the Azure portal | Using vContinuum
-**Replicated** |Disk (Windows and Linux) |Volume-Windows<br> Disk-Linux
-**Shared disk cluster** |Not supported|Supported
-**Data churn limits (average)** |10 MB/s data per disk<br> 25 MB/s data per VM<br> [Learn more](./site-recovery-vmware-deployment-planner-analyze-report.md#azure-site-recovery-limits) | > 10 MB/s data per disk <br> > 25 MB/s data per VM
-**Monitoring** |From Azure portal|From CX (Configuration Server)
-**Support Matrix** | [Click here for details](./vmware-physical-azure-support-matrix.md)|[Download ASR Scout compatible matrix](https://aka.ms/asr-scout-cm)
--
-## Prerequisites
-To complete this tutorial:
-- [Review](vmware-physical-secondary-support-matrix.md) the support requirements for all components.
-- Make sure that the machines you want to replicate comply with [replicated machine support](vmware-physical-secondary-support-matrix.md#replicated-vm-support).
-## Download and install component updates
-
- Review and install the latest [updates](#updates). Updates should be installed on servers in the following order:
-
-1. RX server (if applicable)
-2. Configuration servers
-3. Process servers
-4. Master Target servers
-5. vContinuum servers
-6. Source server (both Windows and Linux Servers)
-
-Install the updates as follows:
-
-> [!NOTE]
->All Scout components' file update versions may not be the same in the update .zip file. An older version indicates that there has been no change in the component since the previous update.
-
-Download the [update](https://aka.ms/asr-scout-update7) .zip file and the [MySQL and PHP upgrade](https://aka.ms/asr-scout-u7-mysql-php-manualupgrade) configuration files. The update .zip file contains all the base binaries and cumulative upgrade binaries of the following components:
-- InMage_ScoutCloud_RX_8.0.1.0_RHEL6-64_GA_02Mar2015.tar.gz
-- RX_8.0.7.0_GA_Update_7_2965621_28Dec18.tar.gz
-- InMage_CX_8.0.1.0_Windows_GA_26Feb2015_release.exe
-- InMage_CX_TP_8.0.1.0_Windows_GA_26Feb2015_release.exe
-- CX_Windows_8.0.7.0_GA_Update_7_2965621_28Dec18.exe
-- InMage_PI_8.0.1.0_Windows_GA_26Feb2015_release.exe
-- InMage_Scout_vContinuum_MT_8.0.7.0_Windows_GA_27Dec2018_release.exe
-- InMage_UA_8.0.7.0_Windows_GA_27Dec2018_release.exe
-- InMage_UA_8.0.7.0_OL5-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_OL5-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_OL6-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_OL6-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_RHEL5-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_RHEL5-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_RHEL6-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_RHEL6-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_RHEL7-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP1-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP1-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP2-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP2-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP3-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP3-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP4-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES10-SP4-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-64_GA_04Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP1-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP1-64_GA_04Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP2-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP2-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP3-32_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP3-64_GA_03Dec2018_release.tar.gz
-- InMage_UA_8.0.7.0_SLES11-SP4-64_GA_03Dec2018_release.tar.gz
- 1. Extract the .zip files.
- 2. **RX server**: Copy **RX_8.0.7.0_GA_Update_7_2965621_28Dec18.tar.gz** to the RX server, and extract it. In the extracted folder, run **/Install**.
 3. **Configuration server and process server**: Copy **CX_Windows_8.0.7.0_GA_Update_7_2965621_28Dec18.exe** to the configuration server and process server. Double-click the file to run it.<br>
 4. **Windows Master Target server**: To update the unified agent, copy **InMage_UA_8.0.7.0_Windows_GA_27Dec2018_release.exe** to the server. Double-click the file to run it. The same file can also be used for a fresh installation, and the same unified agent update also applies to the source server.
 The update doesn't need to be applied to a Master Target prepared with **InMage_Scout_vContinuum_MT_8.0.7.0_Windows_GA_27Dec2018_release.exe**, because that is a new GA installer that already includes all the latest changes.
- 5. **vContinuum server**: Copy **InMage_Scout_vContinuum_MT_8.0.7.0_Windows_GA_27Dec2018_release.exe** to the server. Make sure that you've closed the vContinuum wizard. Double-click on the file to run it.
- 6. **Linux master target server**: To update the unified agent, copy **InMage_UA_8.0.7.0_RHEL6-64_GA_03Dec2018_release.tar.gz** to the Linux Master Target server and extract it. In the extracted folder, run **/Install**.
- 7. **Windows source server**: To update the unified agent, copy **InMage_UA_8.0.7.0_Windows_GA_27Dec2018_release.exe** to the source server. Double-click on the file to run it.
- 8. **Linux source server**: To update the unified agent, copy the corresponding version of the unified agent file to the Linux server, and extract it. In the extracted folder, run **/Install**. Example: For RHEL 6.7 64-bit server, copy **InMage_UA_8.0.7.0_RHEL6-64_GA_03Dec2018_release.tar.gz** to the server, and extract it. In the extracted folder, run **/Install**.
 9. After upgrading the configuration server, process server, and RX server with the installers mentioned above, the PHP and MySQL libraries need to be upgraded manually by following the steps in section 7.4 of the [quick installation guide](https://aka.ms/asr-scout-quick-install-guide).
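
The Linux steps above all share the same extract-and-install flow. As a minimal sketch, assuming the RHEL 6 64-bit package and a hypothetical working directory:

```bash
# Sketch of updating the Linux unified agent. The filename matches step 6 above;
# /tmp/ua-update is a hypothetical working directory.
mkdir -p /tmp/ua-update
tar -xzf InMage_UA_8.0.7.0_RHEL6-64_GA_03Dec2018_release.tar.gz -C /tmp/ua-update
cd /tmp/ua-update
./Install   # the "/Install" script referenced in the steps above
```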
-
-## Enable replication
-
-1. Set up replication between the source and target VMware sites.
-2. Refer to the following documents to learn more about installation, protection, and recovery:
-
- * [Release notes](https://aka.ms/asr-scout-release-notes)
- * [Compatibility matrix](https://aka.ms/asr-scout-cm)
- * [User guide](https://aka.ms/asr-scout-user-guide)
- * [RX user guide](https://aka.ms/asr-scout-rx-user-guide)
- * [Quick installation guide](https://aka.ms/asr-scout-quick-install-guide)
- * [Upgrading MYSQL and PHP libraries](https://aka.ms/asr-scout-u7-mysql-php-manualupgrade)
-
-## Updates
-
-### Site Recovery Scout 8.0.1 Update 7
-Updated: December 31, 2018
-Download [Scout update 7](https://aka.ms/asr-scout-update7).
-Scout Update 7 is a full installer that can be used for a fresh installation, as well as to upgrade existing agents/Master Targets that are on previous updates (Update 1 through Update 6). It contains all fixes from Update 1 to Update 6, plus the new fixes and enhancements described below.
-
-#### New features
-* PCI compliance
-* TLS v1.2 Support
-
-#### Bug and Security Fixes
-* Fixed: Windows Cluster/Standalone Machines have incorrect IP configuration upon recovery/DR-Drill.
-* Fixed: Sometimes Add disk operation fails for V2V cluster.
-* Fixed: vContinuum Wizard gets stuck during the recovery phase if the Master Target is Windows Server 2016.
-* Fixed: MySQL security issues are mitigated by upgrading MySQL to version 5.7.23.
-
-#### Manual Upgrade for PHP and MySQL on CS, PS, and RX
-The PHP scripting platform should be upgraded to version 7.2.10 on Configuration Server, Process Server and RX Server.
-The MySQL database management system should be upgraded to version 5.7.23 on Configuration Server, Process Server and RX Server.
-Please follow the manual steps given in the [Quick installation guide](https://aka.ms/asr-scout-quick-install-guide) to upgrade PHP and MySQL versions.
-
-### Site Recovery Scout 8.0.1 Update 6
-Updated: October 12, 2017
-
-Download [Scout update 6](https://aka.ms/asr-scout-update6).
-
-Scout Update 6 is a cumulative update. It contains all fixes from Update 1 to Update 5 plus the new fixes and enhancements described below.
-
-#### New platform support
-* Support has been added for Source Windows Server 2016
-* Support has been added for following Linux operating systems:
- - Red Hat Enterprise Linux (RHEL) 6.9
- - CentOS 6.9
- - Oracle Linux 5.11
- - Oracle Linux 6.8
-* Support has been added for VMware vCenter 6.5
-
-Install the updates as follows:
-
-> [!NOTE]
->Not all Scout component files in the update .zip file may have the same version. An older version indicates that the component hasn't changed since the previous update.
-
-Download the [update](https://aka.ms/asr-scout-update6) .zip file. The file contains the following components:
-- RX_8.0.4.0_GA_Update_4_8725872_16Sep16.tar.gz
-- CX_Windows_8.0.6.0_GA_Update_6_13746667_18Sep17.exe
-- UA_Windows_8.0.5.0_GA_Update_5_11525802_20Apr17.exe
-- UA_RHEL6-64_8.0.4.0_GA_Update_4_9035261_26Sep16.tar.gz
-- vCon_Windows_8.0.6.0_GA_Update_6_11525767_21Sep17.exe
-- UA update4 bits for RHEL5, OL5, OL6, SUSE 10, SUSE 11: UA_\<Linux OS>_8.0.4.0_GA_Update_4_9035261_26Sep16.tar.gz
- 1. Extract the .zip files.
- 2. **RX server**: Copy **RX_8.0.4.0_GA_Update_4_8725872_16Sep16.tar.gz** to the RX server, and extract it. In the extracted folder, run **/Install**.
- 3. **Configuration server and process server**: Copy **CX_Windows_8.0.6.0_GA_Update_6_13746667_18Sep17.exe** to the configuration server and process server. Double-click to run it.<br>
 4. **Windows Master Target server**: To update the unified agent, copy **UA_Windows_8.0.5.0_GA_Update_5_11525802_20Apr17.exe** to the server. Double-click the file to run it. The same unified agent update is also applicable for the source server. If the source hasn't been updated to Update 4, you should update the unified agent.
 The update doesn't need to be applied to a Master Target prepared with **InMage_Scout_vContinuum_MT_8.0.1.0_Windows_GA_10Oct2017_release.exe**, because that is a new GA installer that already includes all the latest changes.
- 5. **vContinuum server**: Copy **vCon_Windows_8.0.6.0_GA_Update_6_11525767_21Sep17.exe** to the server. Make sure that you've closed the vContinuum wizard. Double-click on the file to run it.
 The update doesn't need to be applied to a Master Target prepared with **InMage_Scout_vContinuum_MT_8.0.1.0_Windows_GA_10Oct2017_release.exe**, because that is a new GA installer that already includes all the latest changes.
- 6. **Linux master target server**: To update the unified agent, copy **UA_RHEL6-64_8.0.4.0_GA_Update_4_9035261_26Sep16.tar.gz** to the master target server and extract it. In the extracted folder, run **/Install**.
- 7. **Windows source server**: To update the unified agent, copy **UA_Windows_8.0.5.0_GA_Update_5_11525802_20Apr17.exe** to the source server. Double-click on the file to run it.
 You don't need to install the Update 5 agent on the source server if it has already been updated to Update 4, or if the source agent was installed with the latest base installer **InMage_UA_8.0.1.0_Windows_GA_28Sep2017_release.exe**.
- 8. **Linux source server**: To update the unified agent, copy the corresponding version of the unified agent file to the Linux server, and extract it. In the extracted folder, run **/Install**. Example: For RHEL 6.7 64-bit server, copy **UA_RHEL6-64_8.0.4.0_GA_Update_4_9035261_26Sep16.tar.gz** to the server, and extract it. In the extracted folder, run **/Install**.
--
-> [!NOTE]
-> * The base Unified Agent (UA) installer for Windows has been refreshed to support Windows Server 2016. The new installer **InMage_UA_8.0.1.0_Windows_GA_28Sep2017_release.exe** is packaged with the base Scout GA package (**InMage_Scout_Standard_8.0.1 GA-Oct17.zip**). The same installer is used for all supported Windows versions.
-> * The base Windows vContinuum & Master Target installer has been refreshed to support Windows Server 2016. The new installer **InMage_Scout_vContinuum_MT_8.0.1.0_Windows_GA_10Oct2017_release.exe** is packaged with the base Scout GA package (**InMage_Scout_Standard_8.0.1 GA-Oct17.zip**). The same installer is used to deploy both Windows Server 2016 and Windows Server 2012 R2 Master Targets.
-> * Windows Server 2016 on a physical server isn't supported by ASR Scout; only Windows Server 2016 VMware VMs are supported.
->
-
-#### Bug fixes and enhancements
-- Failback protection fails for a Linux VM when the list of disks to be replicated is empty at the end of configuration.
-
-### Site Recovery Scout 8.0.1 Update 5
-Scout Update 5 is a cumulative update. It contains all fixes from Update 1 to Update 4, and the new fixes described below.
-- Fixes from Site Recovery Scout Update 4 to Update 5 are specifically for the master target and vContinuum components.
-- If the source servers, master target, configuration, process, and RX servers are already running Update 4, then apply Update 5 only on the master target server.
-
-#### New platform support
-* SUSE Linux Enterprise Server 11 Service Pack 4 (SP4)
-* SLES 11 SP4 64 bit **InMage_UA_8.0.1.0_SLES11-SP4-64_GA_13Apr2017_release.tar.gz** is packaged with the base Scout GA package (**InMage_Scout_Standard_8.0.1 GA.zip**). Download the GA package from the portal, as described in create a vault.
--
-#### Bug fixes and enhancements
-
-* Fixes for increased Windows cluster support reliability:
 * Fixed: Some of the P2V MSCS cluster disks become RAW after recovery.
 * Fixed: P2V MSCS cluster recovery fails due to a disk order mismatch.
 * Fixed: The MSCS cluster operation to add disks fails with a disk size mismatch error.
 * Fixed: The readiness check for the source MSCS cluster with RDM LUNs mapping fails in size verification.
 * Fixed: Single node cluster protection fails because of a SCSI mismatch issue.
 * Fixed: Re-protection of the P2V Windows cluster server fails if target cluster disks are present.
-
-* Fixed: During failback protection, if the selected master target server isn't on the same ESXi server as the protected source machine (during forward protection), then vContinuum picks up the wrong master target server during failback recovery, and the recovery operation fails.
-
-> [!NOTE]
-> * The P2V cluster fixes are applicable only to physical MSCS clusters that are newly protected with Site Recovery Scout Update 5. To install the cluster fixes on protected P2V MSCS clusters with older updates, follow the upgrade steps mentioned in section 12 of the [Site Recovery Scout Release Notes](https://aka.ms/asr-scout-release-notes).
-> * Re-protection of a physical MSCS cluster can reuse the existing target disks only if, at the time of re-protection, the same set of disks is active on each of the cluster nodes as when they were initially protected. If not, use the manual steps in section 12 of the [Site Recovery Scout Release Notes](https://aka.ms/asr-scout-release-notes) to move the target-side disks to the correct datastore path for reuse during re-protection. If you reprotect the MSCS cluster in P2V mode without following the upgrade steps, it creates a new disk on the target ESXi server, and you'll need to manually delete the old disks from the datastore.
-> * When a source SLES 11 server (with any service pack) is rebooted gracefully, manually mark the **root** disk replication pairs for resynchronization. There's no notification in the CX interface. If you don't mark the root disk for resynchronization, you might notice data integrity issues.
--
-### Azure Site Recovery Scout 8.0.1 Update 4
-Scout Update 4 is a cumulative update. It includes all fixes from Update 1 to Update 3, and the new fixes described below.
-
-#### New platform support
-
-* Support has been added for vCenter/vSphere 6.0, 6.1 and 6.2
-* Support has been added for these Linux operating systems:
- * Red Hat Enterprise Linux (RHEL) 7.0, 7.1 and 7.2
- * CentOS 7.0, 7.1 and 7.2
- * Red Hat Enterprise Linux (RHEL) 6.8
- * CentOS 6.8
-
-> [!NOTE]
-> RHEL/CentOS 7 64 bit **InMage_UA_8.0.1.0_RHEL7-64_GA_06Oct2016_release.tar.gz** is packaged with the base Scout GA package **InMage_Scout_Standard_8.0.1 GA.zip**. Download the Scout GA package from the portal as described in create a vault.
-
-#### Bug fixes and enhancements
-
-* Improved shutdown handling for the following Linux operating systems and clones, to prevent unwanted resynchronization issues:
- * Red Hat Enterprise Linux (RHEL) 6.x
- * Oracle Linux (OL) 6.x
-* For Linux, all folder access permissions in the unified agent installation directory are now restricted to the local user only.
-* On Windows, a fix for a timeout issue that occurred when issuing common distributed consistency bookmarks on heavily loaded distributed applications, such as SQL Server and SharePoint clusters.
-* A log related fix in the configuration server base installer.
-* A download link to VMware vCLI 6.0 was added to the Windows master target base installer.
-* Additional checks and logs were added, for network configuration changes during failover and disaster recovery drills.
-* A fix for an issue that caused retention information not to be reported to the configuration server.
-* For physical clusters, a fix for an issue that caused volume resizing to fail in the vContinuum wizard, when shrinking the source volume.
-* A fix for a cluster protection issue that failed with error: "Failed to find the disk signature", when the cluster disk is a PRDM disk.
-* A fix for a cxps transport server crash, caused by an out-of-range exception.
-* Server name and IP address columns are now resizable in the **Push Installation** page of the vContinuum wizard.
-* RX API enhancements:
 * The five latest common consistency points are now available (only guaranteed tags).
- * Capacity and free space details are displayed for all protected devices.
- * Scout driver state on the source server is available.
-
-> [!NOTE]
-> * **InMage_Scout_Standard_8.0.1_GA.zip** base package has:
-> * An updated configuration server base installer (**InMage_CX_8.0.1.0_Windows_GA_26Feb2015_release.exe**)
-> * A Windows master target base installer (**InMage_Scout_vContinuum_MT_8.0.1.0_Windows_GA_26Feb2015_release.exe**).
-> * For all new installations, use the new configuration server and Windows master target GA bits.
-> * Update 4 can be applied directly on 8.0.1 GA.
-> * The configuration server and RX updates can't be rolled back after they've been applied.
--
-### Azure Site Recovery Scout 8.0.1 Update 3
-
-All Site Recovery updates are cumulative. Update 3 contains all fixes from Update 1 and Update 2. Update 3 can be directly applied on 8.0.1 GA. The configuration server and RX updates can't be rolled back after they've been applied.
-
-#### Bug fixes and enhancements
-Update 3 fixes the following issues:
-
-* The configuration server and RX aren't registered in the vault when they're behind the proxy.
-* The number of hours in which the recovery point objective (RPO) wasn't reached is not updated in the health report.
-* The configuration server isn't syncing with RX when the ESX hardware details, or network details, contain any UTF-8 characters.
-* Windows Server 2008 R2 domain controllers don't start after recovery.
-* Offline synchronization isn't working as expected.
-* After VM failover, replication-pair deletion doesn't progress in the configuration server console for a long time. Users can't complete the failback or resume operations.
-* Overall snapshot operations by the consistency job have been optimized, to help reduce application disconnects such as SQL Server clients.
-* Consistency tool (VACP.exe) performance has been improved. Memory usage required for creating snapshots on Windows has been reduced.
-* The push install service crashes when the password is longer than 16 characters.
-* vContinuum doesn't check and prompt for new vCenter credentials, when credentials are modified.
-* On Linux, the master target cache manager (cachemgr) isn't downloading files from the process server. This results in replication pair throttling.
-* When the physical failover cluster (MSCS) disk order isn't the same on all nodes, replication isn't set for some of the cluster volumes. The cluster must be reprotected to take advantage of this fix.
-* SMTP functionality isn't working as expected, after RX is upgraded from Scout 7.1 to Scout 8.0.1.
-* More statistics have been added in the log for the rollback operation, to track the time taken to complete it.
-* Support has been added for Linux operating systems on the source server:
- * Red Hat Enterprise Linux (RHEL) 6 update 7
- * CentOS 6 update 7
-* The configuration server and RX consoles now show notifications for any pair that goes into bitmap mode.
-* The following security fixes have been added in RX:
- * Authorization bypass via parameter tampering: Restricted access to non-applicable users.
- * Cross-site request forgery: The page-token concept was implemented, and it generates randomly for every page. This means there's only a single sign-in instance for the same user, and page refresh doesn't work. Instead, it redirects to the dashboard.
- * Malicious file upload: Files are restricted to specific extensions: z, aiff, asf, avi, bmp, csv, doc, docx, fla, flv, gif, gz, gzip, jpeg, jpg, log, mid, mov, mp3, mp4, mpc, mpeg, mpg, ods, odt, pdf, png, ppt, pptx, pxd, qt, ram, rar, rm, rmi, rmvb, rtf, sdc, sitd, swf, sxc, sxw, tar, tgz, tif, tiff, txt, vsd, wav, wma, wmv, xls, xlsx, xml, and zip.
- * Persistent cross-site scripting: Input validations were added.
-
-### Azure Site Recovery Scout 8.0.1 Update 2 (Update 03Dec15)
-
-Fixes in Update 2 include:
-
-* **Configuration server**: Issues that prevented the 31-day free metering feature from working as expected, when the configuration server was registered to Azure Site Recovery vault.
-* **Unified agent**: Fix for an issue in Update 1 that resulted in the update not being installed on the master target server, during upgrade from version 8.0 to 8.0.1.
-
-### Azure Site Recovery Scout 8.0.1 Update 1
-Update 1 includes the following bug fixes and new features:
-
-* 31 days of free protection per server instance. This enables you to test functionality, or set up a proof-of-concept.
-* All operations on the server, including failover and failback, are free for the first 31 days. The time starts when a server is first protected with Site Recovery Scout. From the 32nd day, each protected server is charged at the standard instance rate for Site Recovery protection to a customer-owned site.
-* At any time, the number of protected servers currently being charged is available on the **Dashboard** in the vault.
-* Support was added for vSphere Command-Line Interface (vCLI) 5.5 Update 2.
-* Support was added for these Linux operating systems on the source server:
- * RHEL 6 Update 6
- * RHEL 5 Update 11
- * CentOS 6 Update 6
- * CentOS 5 Update 11
-* Bug fixes to address the following issues:
- * Vault registration fails for the configuration server, or RX server.
- * Cluster volumes don't appear as expected when clustered VMs are reprotected as they resume.
- * Failback fails when the master target server is hosted on a different ESXi server from the on-premises production VMs.
- * Configuration file permissions are changed when you upgrade to 8.0.1. This change affects protection and operations.
- * The resynchronization threshold isn't enforced as expected, causing inconsistent replication behavior.
- * The RPO settings don't appear correctly in the configuration server console. The uncompressed data value incorrectly shows the compressed value.
- * The Remove operation doesn't delete as expected in the vContinuum wizard, and replication isn't deleted from the configuration server console.
- * In the vContinuum wizard, the disk is automatically unselected when you click **Details** in the disk view, during protection of MSCS VMs.
- * In the physical-to-virtual (P2V) scenario, required HP services (such as CIMnotify and CqMgHost) aren't moved to manual in VM recovery. This issue results in additional boot time.
- * Linux VM protection fails when there are more than 26 disks on the master target server.
-
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
-+ Last updated 4/28/2022
spring-apps How To Enable Ingress To App Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enable-ingress-to-app-tls.md
Last updated 04/12/2022-+ # Enable ingress-to-app TLS for an application
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
Last updated 05/25/2023-+ # Use Tanzu Build Service
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
Last updated 5/25/2023-+ # Deploy web static files
spring-apps Quickstart Access Standard Consumption Within Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-access-standard-consumption-within-virtual-network.md
Last updated 06/21/2023-+ # Quickstart: Access applications using Azure Spring Apps Standard consumption and dedicated plan in a virtual network
static-web-apps Apis Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-container-apps.md
To link a container app to your static web app, you need to have an existing Con
## Example
-Consider an existing Azure App Service instance that exposes an endpoint via the following location.
+Consider an existing Azure Container App instance that exposes an endpoint via the following location.
```url https://my-container-app.red-river-123.eastus2.azurecontainerapps.io/api/getProducts
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
For more information about pricing, see [Block Blob pricing](https://azure.micro
- Each rule can have up to 10 case-sensitive prefixes and up to 10 blob index tag conditions.
+- If you enable firewall rules for your storage account, lifecycle management requests may be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the **Exceptions** section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
+
- A lifecycle management policy can't change the tier of a blob that uses an encryption scope.
- The delete action of a lifecycle management policy won't work with any blob in an immutable container. With an immutable policy, objects can be created and read, but not modified or deleted. For more information, see [Store business-critical blob data with immutable storage](./immutable-storage-overview.md).
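
Relating to the firewall note above, a minimal sketch of granting the trusted-services exception with the Azure CLI, assuming hypothetical account and resource group names:

```bash
# Sketch: allow trusted Microsoft services (which covers lifecycle management
# requests) through the storage account firewall. Names are placeholders.
az storage account update \
  --name mystorageacct \
  --resource-group myresourcegroup \
  --bypass AzureServices
```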
storage Storage Blob Index How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-index-how-to.md
Last updated 07/21/2022
ms.devlang: csharp-+ # Use blob index tags to manage and find data on Azure Blob Storage
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Last updated 03/22/2023 -+ # Manage storage account access keys
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Each type of failover has a unique set of use cases, corresponding expectations
| Type | Failover Scope | Use case | Expected data loss | HNS supported |
|--|--|--|--|--|
-| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes *(In preview)*](#azure-data-lake-storage-gen2) |
-| Microsoft-managed | Entire region, datacenter or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
+| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) *([In preview](#azure-data-lake-storage-gen2))* |
+| Microsoft-managed | Entire region or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
### Customer-managed failover
In extreme circumstances where the original primary region is deemed unrecoverab
> [!IMPORTANT]
> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
->
-> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region, datacenter or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
-
+> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
### Anticipate data loss and inconsistencies

> [!CAUTION]
Microsoft also recommends that you design your application to prepare for the po
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
- [Azure Storage redundancy](storage-redundancy.md)
- [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md)
+
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
We'll use an example to illustrate how to estimate the amount of free space woul
1. NTFS allocates a cluster size for each of the tiered files. 1 million files * 4 KiB cluster size = 4,000,000 KiB (4 GiB)
 > [!Note]
- > The space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
+ > To fully benefit from cloud tiering, it is recommended to use smaller NTFS cluster sizes (less than 64 KiB), since each tiered file occupies a cluster. Also, the space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
1. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KiB cluster size = 4,400,000 KiB (4.4 GiB)
1. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB)
1. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB
storage Storage Files Enable Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-enable-soft-delete.md
Last updated 04/05/2021 -+
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
description: Learn how to mount a Network File System (NFS) Azure file share on
Previously updated : 02/06/2023 Last updated : 10/03/2023
Azure file shares can be mounted in Linux distributions using either the Server
:::image type="content" source="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png" alt-text="Screenshot of storage account configuration screen with secure transfer disabled." lightbox="media/storage-files-how-to-mount-nfs-shares/disable-secure-transfer.png":::
+## Mount options
+
+The following mount options are recommended or required when mounting NFS Azure file shares.
+
+| **Mount option** | **Recommended value** | **Description** |
+|---|---|---|
+| `vers` | 4 | Required. Specifies which version of the NFS protocol to use. Azure Files only supports NFS v4.1. |
+| `minorversion` | 1 | Required. Specifies the minor version of the NFS protocol. Some Linux distros don't recognize minor versions on the `vers` parameter. So instead of `vers=4.1`, use `vers=4,minorversion=1`. |
+| `sec` | sys | Required. Specifies the type of security to use when authenticating an NFS connection. Setting `sec=sys` uses the local UNIX UIDs and GIDs that use AUTH_SYS to authenticate NFS operations. |
+| `rsize` | 1048576 | Recommended. Sets the maximum number of bytes to be transferred in a single NFS read operation. Specifying the maximum level of 1048576 bytes will usually result in the best performance. |
+| `wsize` | 1048576 | Recommended. Sets the maximum number of bytes to be transferred in a single NFS write operation. Specifying the maximum level of 1048576 bytes will usually result in the best performance. |
+| `noresvport` | n/a | Recommended. Tells the NFS client to use a non-privileged source port when communicating with an NFS server for the mount point. Using the `noresvport` mount option helps ensure that your NFS share has uninterrupted availability after a reconnection. Using this option is strongly recommended for achieving high availability. |
+| `actimeo` | 30-60 | Recommended. Specifying `actimeo` sets all of `acregmin`, `acregmax`, `acdirmin`, and `acdirmax` to the same value. Using a value lower than 30 seconds can cause performance degradation because attribute caches for files and directories expire too quickly. We recommend setting `actimeo` between 30 and 60 seconds. |
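
Putting the table together, a minimal sketch of a mount command that combines the required and recommended options (the storage account name, share name, and mount point are placeholders):

```bash
# Sketch: mount an NFS Azure file share with the required and recommended options.
sudo mkdir -p /mnt/myshare
sudo mount -t nfs mystorageaccount.file.core.windows.net:/mystorageaccount/myshare /mnt/myshare \
  -o vers=4,minorversion=1,sec=sys,rsize=1048576,wsize=1048576,noresvport,actimeo=30
```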
## Mount an NFS share using the Azure portal

> [!NOTE]
-> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
+> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md#nconnect).
1. Once the file share is created, select the share and select **Connect from Linux**.
1. Enter the mount path you'd like to use, then copy the script.
-1. Connect to your client and use the provided mounting script.
+1. Connect to your client and use the provided mounting script. Only the required mount options are included in the script, but you can add other [recommended mount options](#mount-options).
:::image type="content" source="media/storage-files-how-to-create-mount-nfs-shares/mount-nfs-file-share-script.png" alt-text="Screenshot of file share connect blade.":::
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
Title: Mount SMB Azure file share on Linux
description: Learn how to mount an Azure file share over SMB on Linux and review SMB security considerations on Linux clients. -+ Last updated 01/10/2023
synapse-analytics Security White Paper Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/guidance/security-white-paper-access-control.md
In addition to securing SQL tables in Azure Synapse, dedicated SQL pool (formerl
[Column-level security](../sql-data-warehouse/column-level-security.md) allows security administrators to set permissions that limit who can access sensitive columns in tables. It's set at the database level and can be implemented without the need to change the design of the data model or application tier. > [!NOTE]
-> Column-level security is supported in Azure Synapse and dedicated SQL pool (formerly SQL DW), but it's not supported for Apache Spark pool and serverless SQL pool.
+> Column-level security is supported in Azure Synapse, serverless SQL pool views, and dedicated SQL pool (formerly SQL DW), but it's not supported for serverless SQL pool external tables or Apache Spark pools. For serverless SQL pool external tables, a workaround can be applied by creating a view on top of the external table.
## Dynamic data masking
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
Private Link with Azure Virtual Desktop has the following limitations:
- Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't currently supported.
+- If you're using the [Remote Desktop client for Windows](./users/connect-windows.md) on a network without public internet access, you aren't able to subscribe to a workspace with a private endpoint if you're also assigned to a workspace that doesn't have a private endpoint configured.
- Early in the preview of Private Link with Azure Virtual Desktop, the private endpoint for the initial feed discovery (for the *global* sub-resource) shared the private DNS zone name of `privatelink.wvd.microsoft.com` with other private endpoints for workspaces and host pools. In this configuration, users are unable to establish private endpoints exclusively for host pools and workspaces. Starting September 1, 2023, sharing the private DNS zone in this configuration will no longer be supported. You need to create a new private endpoint for the *global* sub-resource to use the private DNS zone name of `privatelink-global.wvd.microsoft.com`. For the steps to do this, see [Initial feed discovery](private-link-setup.md#initial-feed-discovery).
- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
Title: Set up Private Link with Azure Virtual Desktop - Azure
description: Learn how to set up Private Link with Azure Virtual Desktop to privately connect to your remote resources. -+ Last updated 07/17/2023
virtual-desktop Troubleshoot Set Up Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-set-up-overview.md
To report issues or suggest features for Azure Virtual Desktop with Azure Resour
When you make a post asking for help or propose a new feature, make sure you describe your topic in as much detail as possible. Detailed information can help other users answer your question or understand the feature you're proposing a vote for.
+## Help with application issues
+
+If you encounter issues with your applications running in Azure Virtual Desktop, App Assure is a service from Microsoft designed to help you resolve them at no additional cost. For more information, go to [App Assure](/microsoft-365/fasttrack/windows-and-other-services#app-assure).
## Escalation tracks

Before doing anything else, make sure to check the [Azure status page](https://azure.status.microsoft/status) and [Azure Service Health](https://azure.microsoft.com/features/service-health/) to make sure your Azure service is running properly.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 09/07/2023 Last updated : 10/05/2023
A rollout may take several weeks before the agent is available in all environmen
| Release | Latest version |
|--|--|
-| Production | 1.0.7255.1400 |
-| Validation | 1.0.7539.5800 |
+| Production | 1.0.7539.8300 |
+| Validation | 1.0.7755.1100 |
The agent is automatically installed when adding session hosts in most scenarios. If you need to download the agent, you can find it at [Register session hosts to a host pool](add-session-hosts-host-pool.md#register-session-hosts-to-a-host-pool), together with the steps to install it.
-## Version 1.0.7539.5800 (validation)
+## Version 1.0.7755.1100 (validation)
-This update was released at the beginning of September 2023 and includes the following changes:
+This update was released at the end of September 2023 and includes the following changes:
- Security improvements and bug fixes.
+## Version 1.0.7539.8300
+
+This update was released at the end of September 2023 and includes the following changes:
+
+- Security improvements and bug fixes.
+
+## Version 1.0.7539.5800
+
+This update was released at the beginning of September 2023 and includes the following changes:
+
+- Security improvements and bug fixes.
+
+## Version 1.0.7255.1400
+
+This update was released at the end of August 2023 and includes the following changes:
+
+- Security improvements and bug fixes.
## Version 1.0.7255.800

This update was released at the end of July 2023 and includes the following changes:
virtual-machine-scale-sets Virtual Machine Scale Sets Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md
Last updated 11/22/2022 -+ # Networking for Azure Virtual Machine Scale Sets
virtual-machines B Series Cpu Credit Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/b-series-cpu-credit-model/b-series-cpu-credit-model.md
While traditional Azure virtual machines provide fixed CPU performance, B-series
The credit accumulation and consumption rates are set such that a VM running at exactly its base performance level will have neither a net accumulation nor a net consumption of bursting credits. A VM has a net credit increase whenever it's running below its base CPU performance level, and a net decrease in credits whenever it's utilizing the CPU above its base CPU performance level. To calculate credit accumulation and consumption, customers can utilize the holistic 'credits banked per minute' formula =>
-`((Base CPU performance * number of vCPU)/2 - (Percentage CPU * number of vCPU)/2)/100`.
+`((Base CPU performance * number of vCPU) - (Percentage CPU * number of vCPU))/100`.
-Putting this calculation into action, let's say that a customer deploys the Standard_B2ts_v2 VM size and their workload demands 10% of the 'Percentage CPU' or CPU performance, then the 'credits banked per minute' calculation will be as follows: `((20%*2)/2 - (10%*2)/2)/100 = 0.1 credits/minute`. In such a scenario, a B-series VM is accumulating credits given the 'Percentage CPU'/ CPU performance requirement is below the 'Base CPU performance' of the Standard_B2ts_v2.
+Putting this calculation into action, let's say that a customer deploys the Standard_B2ts_v2 VM size and their workload demands 10% of the 'Percentage CPU', or CPU performance. The 'credits banked per minute' calculation will be as follows: `((20*2) - (10*2))/100 = 0.2 credits/minute`. In such a scenario, a B-series VM is accumulating credits, because the 'Percentage CPU'/CPU performance requirement is below the 'Base CPU performance' of the Standard_B2ts_v2.
-Similarly, utilizing the example of a Standard_B32as_v2 VM size, if the workload demands 60% of the CPU performance for a measurement of time - then the 'credits banked per minute' calculation will be as follows: `((40%*32)/2 - (60%*32)/2)/100 = (6.4 - 9.6)/100 = -3.2 credits per minute`. Here the negative result implies the B-series VM is consuming credits given the 'Percentage CPU'/CPU performance requirement is above the 'Base CPU performance' of the Standard_B32as_v2.
+Similarly, utilizing the example of a Standard_B32as_v2 VM size, if the workload demands 60% of the CPU performance for a period of time, then the 'credits banked per minute' calculation will be as follows: `((40*32) - (60*32))/100 = -6.4 credits/minute`. Here the negative result implies that the B-series VM is consuming credits, because the 'Percentage CPU'/CPU performance requirement is above the 'Base CPU performance' of the Standard_B32as_v2.
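
As a quick shell sanity check of the formula, using the Standard_B2ts_v2 numbers from the first example above:

```bash
# Sketch: 'credits banked per minute' for a Standard_B2ts_v2
# (base CPU performance 20%, 2 vCPUs) running at 10% 'Percentage CPU'.
base=20; vcpus=2; pct=10
echo "scale=2; (($base * $vcpus) - ($pct * $vcpus)) / 100" | bc
# Prints .20, that is, 0.2 credits banked per minute.
```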
## Credit monitoring
virtual-machines Bpsv2 Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bpsv2-arm.md
Bpsv2 VMs offer up to 16 vCPU and 64 GiB of RAM and are optimized for scale-out
| Size | vCPU | RAM | Base CPU Performance / vCPU (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) (up to) | Max NICs |
|--|--|--|--|--|--|--|--|--|--|--|--|
| Standard_B2pts_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 |
-| Standard_B2pls_v2 | 2 | 4 | 30% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 |
-| Standard_B2ps_v2 | 2 | 8 | 40% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 |
-| Standard_B4pls_v2 | 4 | 8 | 30% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
-| Standard_B4ps_v2 | 4 | 16 | 40% | 120 | 48 | 1152 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
-| Standard_B8pls_v2 | 8 | 16 | 30% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 |
-| Standard_B8ps_v2 | 8 | 32 | 40% | 240 | 96 | 2304 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 |
-| Standard_B16pls_v2 | 16 | 32 | 30% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
-| Standard_B16ps_v2 | 16 | 64 | 40% | 480 | 192 | 4608 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
+| Standard_B2pls_v2 | 2 | 4 | 30% | 60 | 36 | 864 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 |
+| Standard_B2ps_v2 | 2 | 8 | 40% | 60 | 48 | 1152 | 3750/85 | 10,000/960 | 4 | 6.250 | 2 |
+| Standard_B4pls_v2 | 4 | 8 | 30% | 120 | 72 | 1728 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
+| Standard_B4ps_v2 | 4 | 16 | 40% | 120 | 96 | 2304 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
+| Standard_B8pls_v2 | 8 | 16 | 30% | 240 | 144 | 3456 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 |
+| Standard_B8ps_v2 | 8 | 32 | 40% | 240 | 192 | 4608 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 |
+| Standard_B16pls_v2 | 16 | 32 | 30% | 480 | 288 | 6912 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
+| Standard_B16ps_v2 | 16 | 64 | 40% | 480 | 384 | 9216 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
<sup>*</sup> Accelerated networking is required and turned on by default on all Dpsv5 machines <br>
virtual-machines Hpccompute Gpu Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md
This extension installs NVIDIA GPU drivers on Linux N-series virtual machines (V
Instructions on manual installation of the drivers and the current supported versions are available. An extension is also available to install NVIDIA GPU drivers on [Windows N-series VMs](hpccompute-gpu-windows.md). > [!NOTE]
-> With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) must be signed by trusted publishers (key trusted by the system). Secure Boot is not supported using Windows or Linux extensions. For more information on manually installing GPU drivers with Secure Boot enabled, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md).
+> With Secure Boot enabled, all OS boot components (boot loader, kernel, kernel drivers) must be signed by trusted publishers (key trusted by the system). Secure Boot is not supported using Windows or Linux extensions. For more information on manually installing GPU drivers with Secure Boot enabled, see [Azure N-series GPU driver setup for Linux](../linux/n-series-driver-setup.md).
+>
+> [!Note]
+> The GPU driver extensions do not automatically update the driver after the extension is installed. If you need to move to a newer driver version, then either manually download and install the driver, or remove and add the extension again.
+>
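
As a sketch of the remove-and-re-add route mentioned in the note above, using the Azure CLI (the VM and resource group names are placeholders, and the extension name assumes the Linux variant):

```bash
# Sketch: remove and re-add the GPU driver extension to pick up a newer driver.
az vm extension delete --vm-name myVM --resource-group myResourceGroup \
  --name NvidiaGpuDriverLinux
az vm extension set --vm-name myVM --resource-group myResourceGroup \
  --publisher Microsoft.HpcCompute --name NvidiaGpuDriverLinux
```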
## Prerequisites
virtual-machines Hpccompute Gpu Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-windows.md
The instructions for manual installation of the drivers, and the list of current
The NVIDIA GPU Driver Extension can also be deployed on Linux N-series VMs. For more information, see [NVIDIA GPU Driver Extension for Linux](hpccompute-gpu-linux.md).
+> [!Note]
+> The GPU driver extensions do not automatically update the driver after the extension is installed. If you need to move to a newer driver version, then either manually download and install the driver, or remove and add the extension again.
+>
+
## Prerequisites

Confirm your virtual machine satisfies the prerequisites for using the NVIDIA GPU Driver Extension.
virtual-machines Network Watcher Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-update.md
Last updated 08/30/2023-+ # Update Azure Network Watcher extension to the latest version
virtual-machines Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-hybrid-benefit-linux.md
Last updated 05/02/2023 -+ # Azure Hybrid Benefit for Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines
If you use Azure Hybrid Benefit BYOS to PAYG capability for SLES and want more i
* [Learn how to create and update virtual machines and add license types (RHEL_BYOS, SLES_BYOS) for Azure Hybrid Benefit by using the Azure CLI](/cli/azure/vm) * [Learn about Azure Hybrid Benefit on Virtual Machine Scale Sets for RHEL and SLES and how to use it](../../virtual-machine-scale-sets/azure-hybrid-benefit-linux.md)--
virtual-machines Disk Encryption Linux Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux-aad.md
Last updated 01/04/2023-+ # Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)
virtual-machines Disk Encryption Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-linux.md
Last updated 07/07/2023-+ # Azure Disk Encryption scenarios on Linux VMs
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Last updated 09/18/2023
-+ # Create an Azure Image Builder Bicep or ARM template JSON template
virtual-machines Move Virtual Machines Regional Zonal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/move-virtual-machines-regional-zonal-powershell.md
Title: Move Azure single instance Virtual Machines from regional to zonal availa
description: Move single instance Azure virtual machines from a regional configuration to a target Availability Zone within the same Azure region using PowerShell and CLI. + Last updated 09/25/2023
virtual-machines Np Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/np-series.md
VM Generation Support: Generation 1<br>
**Q:** How to request quota for NP VMs?
-**A:** Follow this page [Increase VM-family vCPU quotas](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US2, West Europe, SouthEast Asia, and SouthCentral US.
+**A:** Follow the steps in [Increase VM-family vCPU quotas](../azure-portal/supportability/per-vm-quota-requests.md). NP VMs are available in East US, West US2, SouthCentral US, West Europe, SouthEast Asia, Japan East, and Canada Central.
**Q:** What version of Vitis should I use?
virtual-machines Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resource-graph-samples.md
-+ # Azure Resource Graph sample queries for Azure Virtual Machines
virtual-machines Disk Encryption Windows Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows-aad.md
Last updated 03/15/2019-+ # Azure Disk Encryption with Azure AD for Windows VMs (previous release)
virtual-machines Disk Encryption Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-windows.md
Last updated 07/07/2023-+ # Azure Disk Encryption scenarios on Windows VMs
virtual-network Manage Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-public-ip-address-prefix.md
The following section details the parameters when creating a public IP prefix.
| Setting | Required? | Details |
|---|---|---|
- | Subscription|Yes|Must exist in the same [subscription](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) as the resource you want to associate the public IP address to. |
- | Resource group|Yes|Can exist in the same, or different, [resource group](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) as the resource you want to associate the public IP address to. |
+ | Subscription|Yes|Must exist in the same [subscription](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription) as the resource you want to associate the public IP address prefix to. |
+ | Resource group|Yes|Can exist in the same, or different, [resource group](../../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) as the resource you want to associate the public IP address prefix to. |
| Name | Yes | The name must be unique within the resource group you select. |
| Region | Yes | Must exist in the same [region](https://azure.microsoft.com/regions) as the public IP addresses assigned from the range. |
| IP version | Yes | IP version of the prefix (v4 or v6). |
- | Prefix size | Yes | The size of the prefix you need. A range with 16 IP addresses (/28 for v4 or /124 for v6) is the default. |
+ | Prefix ownership | Yes | Specify whether the IP ranges are owned by Microsoft or by you. For more information on the latter case, see [Custom IP Prefix](custom-ip-address-prefix.md). |
+ | Prefix size | Yes | The size of the prefix you need. A range with 16 IP addresses (/28 for v4 or /124 for v6) is the default limit for Microsoft-owned ranges. |
Alternatively, you may use the following CLI and PowerShell commands to create a public IP address prefix.
Alternatively, you may use the following CLI and PowerShell commands to create a
>[!NOTE] >In regions with availability zones, you can use PowerShell or CLI commands to create a public IP address prefix as either: non-zonal, associated with a specific zone, or to use zone-redundancy. For API version 2020-08-01 or later, if a zone parameter is not provided, a non-zonal public IP address prefix is created. For versions of the API older than 2020-08-01, a zone-redundant public IP address prefix is created.
+>[!NOTE]
+>For more information about deriving a Public IP Prefix from an onboarded Custom IP Prefix (BYOIP range), please refer to [Manage Custom IP Address Prefix](manage-custom-ip-address-prefix.md#create-a-public-ip-prefix-from-a-custom-ip-prefix).
+
## Create a static public IP address from a prefix

The following section details the parameters required when creating a static public IP address from a prefix.
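
As a sketch of this section and the parameter table above with the Azure CLI (the names and region are placeholders; `--length 28` requests the default 16-address IPv4 size):

```bash
# Sketch: create a /28 public IP prefix, then allocate a static public IP from it.
az network public-ip prefix create --name myPublicIpPrefix \
  --resource-group myResourceGroup --location eastus --length 28

az network public-ip create --name myPublicIp \
  --resource-group myResourceGroup \
  --public-ip-prefix myPublicIpPrefix --sku Standard
```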
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
The following public IP prefix sizes are available:
Prefix size is specified as a Classless Inter-Domain Routing (CIDR) mask size.
+>[!NOTE]
+>If you are [deriving a Public IP Prefix from a Custom IP Prefix (BYOIP range)](manage-custom-ip-address-prefix.md#create-a-public-ip-prefix-from-a-custom-ip-prefix), the prefix size can be as large as the Custom IP Prefix.
There isn't a limit on the number of prefixes you can create in a subscription, but the ranges you create can't contain more static public IP addresses than your subscription allows. For more information, see [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).

## Scenarios
Resource|Scenario|Steps|
## Limitations

-- You can't specify the set of IP addresses for the prefix (though you can specify which IP you want from the prefix). Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription.
+- You can't specify the set of IP addresses for the prefix (though you can [specify which IP you want from the prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix)). Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription.
-- You can create a prefix of up to 16 IP addresses. Review [Network limits increase requests](../../azure-portal/supportability/networking-quota-requests.md) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information.
+- You can create a Microsoft-owned prefix of up to 16 IP addresses. Review [Network limits increase requests](../../azure-portal/supportability/networking-quota-requests.md) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information.
- The size of the range can't be modified after the prefix has been created.
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
Last updated 08/24/2023 -+ # Create, change, or delete a virtual network peering
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
The route limit for OpenVPN clients is 1000.
Virtual WAN is a networking-as-a-service platform that has a 99.95% SLA. However, Virtual WAN combines many different components such as Azure Firewall, site-to-site VPN, ExpressRoute, point-to-site VPN, and Virtual WAN Hub/Integrated Network Virtual Appliances. The SLA for each component is calculated individually. For example, if ExpressRoute has a 10 minute downtime, the availability of ExpressRoute would be calculated as (Maximum Available Minutes - downtime) / Maximum Available Minutes * 100.
+### Can you change the VNet address space in a spoke VNet connected to the hub?
+
+Yes, you can change the address space of a spoke VNet automatically, with no update or reset required on the peering connection. For more information on how to change the VNet address space, see [Create, change, or delete a virtual network](https://learn.microsoft.com/azure/virtual-network/manage-virtual-network).
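
As a sketch of such a change with the Azure CLI (the VNet name, resource group, and address prefixes are placeholders):

```bash
# Sketch: update the address space of a spoke VNet. The hub peering picks up
# the change without requiring a reset.
az network vnet update --name mySpokeVnet --resource-group myResourceGroup \
  --address-prefixes 10.1.0.0/16 10.2.0.0/16
```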
## Next steps
vpn-gateway Azure Vpn Client Optional Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-optional-configurations.md
description: Learn how to configure optional configuration settings for the Azur
Previously updated : 07/27/2023 Last updated : 10/05/2023
You can configure forced tunneling in order to direct all traffic to the VPN tun
```

> [!NOTE]
-> - The default status for the clientconfig tag is `<clientconfig i:nil="true" />`, which can be modified based on the requirement.
-> - A duplicate clientconfig tag is not supported on macOS, so make sure the clientconfig tag is not duplicated in the XML file.
+> * The default status for the clientconfig tag is `<clientconfig i:nil="true" />`, which can be modified based on the requirement.
+> * A duplicate clientconfig tag is not supported on macOS, so make sure the clientconfig tag is not duplicated in the XML file.
### Add custom routes
The ability to completely block routes isn't supported by the Azure VPN Client.
```

> [!NOTE]
-> - To include/exclude multiple destination routes, put each destination address under a separate route tag _(as shown in the above examples)_, because multiple destination addresses in a single route tag won't work.
-> - If you encounter the error "_Destination cannot be empty or have more than one entry inside route tag_", check the profile XML file and ensure that the includeroutes/excluderoutes section has only one destination address inside a route tag.
+> * To include/exclude multiple destination routes, put each destination address under a separate route tag _(as shown in the above examples)_, because multiple destination addresses in a single route tag won't work.
+> * If you encounter the error "_Destination cannot be empty or have more than one entry inside route tag_", check the profile XML file and ensure that the includeroutes/excluderoutes section has only one destination address inside a route tag.
>
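
For example, here's a sketch of an `includeroutes` section that follows the one-destination-per-route-tag rule; the addresses are illustrative:

```xml
<!-- Each destination sits in its own route tag; putting two destination
     addresses inside one route tag triggers the error quoted above. -->
<includeroutes>
    <route>
        <destination>192.168.1.0</destination><mask>24</mask>
    </route>
    <route>
        <destination>10.10.0.0</destination><mask>16</mask>
    </route>
</includeroutes>
```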
-## Version Information
+## Azure VPN Client version information
-Version 3.2.0.0
+For Azure VPN Client version information, see [Azure VPN Client versions](azure-vpn-client-versions.md).
-New in this Release:
- - AAD Authentication is now available from the settings page.
- - Server High Availability (HA), releasing on a rolling basis until October 20.
- - Accessibility improvements
- - Connection logs in UTC
- - Minor bug fixes
-
## Next steps

For more information about P2S VPN, see the following articles:
vpn-gateway Azure Vpn Client Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/azure-vpn-client-versions.md
+
+title: 'Azure VPN Client versions'
+description: This article shows the Azure VPN Client versions.
+Last updated : 10/05/2023
+# Azure VPN Client versions
+
+This article lists the versions of the Azure VPN Client. As new client versions become available, they're added to this article.
+
+## Client versions
+
+Each version is listed in the following sections.
+
+### Version 3.2.0.0
+
+New in this release:
+
+* Microsoft Entra ID (Azure AD) Authentication is now available from the settings page.
+* Server High Availability (HA), releasing on a rolling basis until October 20.
+* Accessibility improvements
+* Connection logs in UTC
+* Minor bug fixes
+
+## Next steps
+
+For more information about VPN point-to-site, see [About point-to-site configurations](point-to-site-about.md).
vpn-gateway Openvpn Azure Ad Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-client.md
Once you have a working profile and need to distribute it to other users, you ca
You can configure the Azure VPN Client with optional settings such as additional DNS servers, custom DNS, forced tunneling, custom routes, and more. For a description of the available optional settings and configuration steps, see [Azure VPN Client optional settings](azure-vpn-client-optional-configurations.md).
-## Azure VPN Client Version Information
+## Azure VPN Client version information
-Version 3.2.0.0
-
-New in this Release:
-
-- AAD Authentication is now available from the settings page.
-- Server High Availability (HA), releasing on a rolling basis until October 20.
-- Accessibility improvements
-- Connection logs in UTC
-- Minor bug fixes
+For Azure VPN Client version information, see [Azure VPN Client versions](azure-vpn-client-versions.md).
## Next steps
-For more information, see [Create an Azure AD tenant for P2S Open VPN connections that use Azure AD authentication](openvpn-azure-ad-tenant.md).
-
+For more information, see [Create an Azure AD tenant for P2S Open VPN connections that use Azure AD authentication](openvpn-azure-ad-tenant.md).
vpn-gateway Tutorial Create Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-create-gateway-portal.md
Previously updated : 09/26/2023 Last updated : 10/05/2023
Create a virtual network gateway using the following values (a CLI sketch follows the list):
* **Name:** VNet1GW
* **Region:** East US
* **Gateway type:** VPN
-* **VPN type:** Route-based
* **SKU:** VpnGw2
* **Generation:** Generation 2
* **Virtual network:** VNet1
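For readers who prefer scripting the same values, here's a rough Azure CLI equivalent; the resource group and public IP names are hypothetical, and the portal steps in the tutorial remain the authoritative path.

```azurecli
# Sketch: create the gateway with the values listed above.
# Gateway creation can take 45 minutes or more.
az network vnet-gateway create \
  --name VNet1GW \
  --resource-group myResourceGroup \
  --location eastus \
  --vnet VNet1 \
  --gateway-type Vpn \
  --sku VpnGw2 \
  --vpn-gateway-generation Generation2 \
  --public-ip-addresses VNet1GWpip
```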
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 08/10/2023 Last updated : 10/05/2023
Create a virtual network gateway (VPN gateway) using the following values:
* **Name:** VNet1GW
* **Region:** East US
* **Gateway type:** VPN
-* **VPN type:** Route-based
* **SKU:** VpnGw2
* **Generation:** Generation 2
* **Virtual network:** VNet1
web-application-firewall Application Gateway Waf Request Size Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-waf-request-size-limits.md
description: This article provides information on Web Application Firewall reque
Previously updated : 07/26/2022 Last updated : 10/05/2023
web-application-firewall Rate Limiting Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md
description: Learn how to configure rate limit custom rules for Application Gate
Last updated : 08/16/2023