Updates from: 04/12/2023 01:08:14
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md
If you haven't already done so, create the following encryption keys. To automat
```xml
<UserJourney Id="HelloWorldJourney">
- <OrchestrationStep Order="1" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
- </UserJourney>
+ <OrchestrationSteps>
+ <OrchestrationStep Order="1" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
+ </OrchestrationSteps>
+</UserJourney>
```

We've added a [UserJourney](userjourneys.md). The user journey specifies the business logic the end user goes through as Azure AD B2C processes a request. This user journey has only one step, which issues a JWT token with the claims that you'll define in the next step.
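For context, the relying party section of the policy is what selects this journey at runtime. The following is a minimal sketch (element names follow the custom policy starter pack; the `PolicyProfile` contents and the output claim shown are illustrative):

```xml
<!-- Sketch: a relying party that runs the HelloWorldJourney defined above -->
<RelyingParty>
  <DefaultUserJourney ReferenceId="HelloWorldJourney" />
  <TechnicalProfile Id="PolicyProfile">
    <DisplayName>PolicyProfile</DisplayName>
    <Protocol Name="OpenIdConnect" />
    <OutputClaims>
      <!-- Claims listed here end up in the issued JWT -->
      <OutputClaim ClaimTypeReferenceId="objectId" PartnerClaimType="sub" />
    </OutputClaims>
    <SubjectNamingInfo ClaimType="sub" />
  </TechnicalProfile>
</RelyingParty>
```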
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Previously updated : 04/04/2022 Last updated : 04/11/2023

# Plan an automatic user provisioning deployment in Azure Active Directory
-Many organizations rely on software as a service (SaaS) applications such as ServiceNow, Zscaler, and Slack for end-user productivity. Historically IT staff have relied on manual provisioning methods such as uploading CSV files, or using custom scripts to securely manage user identities in each SaaS application. These processes are error prone, insecure, and hard to manage.
+Many organizations rely on software as a service (SaaS) applications such as ServiceNow, Zscaler, and Slack for end-user productivity. Historically IT staff has relied on manual provisioning methods such as uploading CSV files, or using custom scripts to securely manage user identities in each SaaS application. These processes are error prone, insecure, and hard to manage.
Azure Active Directory (Azure AD) automatic user provisioning simplifies this process by securely automating the creation, maintenance, and removal of user identities in SaaS applications based on business rules. This automation allows you to effectively scale your identity management systems on both cloud-only and hybrid environments as you expand their dependency on cloud-based solutions.
The key benefits of enabling automatic user provisioning are:
* **Manage risk**. You can increase security by automating changes based on employee status or group memberships that define roles and/or access.
-* **Address compliance and governance**. Azure AD supports native audit logs for every user provisioning request. Requests are executed in both the source and target systems. This enables you to track who has access to applications from a single screen.
+* **Address compliance and governance**. Azure AD supports native audit logs for every user provisioning request. Requests are executed in both the source and target systems. Audit logs let you track who has access to applications from a single screen.
* **Reduce cost**. Automatic user provisioning reduces costs by avoiding inefficiencies and human error associated with manual provisioning. It reduces the need for custom-developed user provisioning solutions, scripts, and audit logs.
Azure AD provides self-service integration of any application using templates pr
#### Application licensing
-You'll need the appropriate licenses for the application(s) you want to automatically provision. Discuss with the application owners whether the users assigned to the application have the proper licenses for their application roles. If Azure AD manages automatic provisioning based on roles, the roles assigned in Azure AD must align to application licenses. Incorrect licenses owned in the application may lead to errors during the provisioning/updating of a user.
+You need the appropriate licenses for the application(s) you want to automatically provision. Discuss with the application owners whether the users assigned to the application have the proper licenses for their application roles. If Azure AD manages automatic provisioning based on roles, the roles assigned in Azure AD must align to application licenses. Incorrect licenses owned in the application may lead to errors during the provisioning/updating of a user.
### Terms
In this example, user creation occurs in Azure AD and the Azure AD provisioning
#### Automatic user provisioning for cloud HR applications
-In this example, the users and or groups are created in a cloud HR application like such as Workday and SuccessFactors. The Azure AD provisioning service and Azure AD Connect provisioning agent provisions the user data from the cloud HR app tenant into AD. Once the accounts are updated in AD, it is synced with Azure AD through Azure AD Connect, and the email addresses and username attributes can be written back to the cloud HR app tenant.
+In this example, the users or groups are created in a cloud HR application such as Workday or SuccessFactors. The Azure AD provisioning service and Azure AD Connect provisioning agent provision the user data from the cloud HR app tenant into AD. Once the accounts are updated in AD, they're synced with Azure AD through Azure AD Connect, and the email addresses and username attributes can be written back to the cloud HR app tenant.
![Picture 2](./media/plan-auto-user-provisioning/workdayprovisioning.png)
Communication is critical to the success of any new service. Proactively communi
### Plan a pilot
-We recommend that the initial configuration of automatic user provisioning be in a test environment with a small subset of users before scaling it to all users in production. See [best practices](../fundamentals/active-directory-deployment-plans.md#best-practices-for-a-pilot) for running a pilot.
+We recommend that the initial configuration of automatic user provisioning is in a test environment with a small subset of users before scaling it to all users in production. See [best practices](../fundamentals/active-directory-deployment-plans.md#best-practices-for-a-pilot) for running a pilot.
#### Best practices for a pilot  
Choose the steps that align to your solution requirements.
When the Azure AD provisioning service runs for the first time, the initial cycle against the source system and target systems creates a snapshot of all user objects for each target system.
-When enabling automatic provisioning for an application, the initial cycle can take anywhere from 20 minutes to several hours. The duration depends on the size of the Azure AD directory and the number of users in scope for provisioning.
+When you enable automatic provisioning for an application, the initial cycle takes anywhere from 20 minutes to several hours. The duration depends on the size of the Azure AD directory and the number of users in scope for provisioning.
The provisioning service stores the state of both systems after the initial cycle, improving performance of subsequent incremental cycles.
active-directory Concept Authentication Passwordless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-passwordless.md
The following considerations apply:
- Administrators can enable passwordless authentication methods for their tenant.
-- Administrators can target all users or select users/groups within their tenant for each method.
+- Administrators can target all users or select users/Security groups within their tenant for each method.
- Users can register and manage these passwordless authentication methods in their account portal.
active-directory Concept Authentication Phone Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-phone-options.md
To work properly, phone numbers must be in the format *+CountryCode PhoneNumber*
> [!NOTE]
> There needs to be a space between the country/region code and the phone number.
>
-> Password reset and Azure AD Multi-Factor Authentication don't support phone extensions. Even in the *+1 4251234567X12345* format, extensions are removed before the call is placed.
+> Password reset and Azure AD Multi-Factor Authentication support phone extensions only for office phones.
## Mobile phone verification
active-directory Concept Mfa Authprovider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-authprovider.md
Previously updated : 01/29/2023 Last updated : 04/10/2023
Note the SDK has been deprecated and will only continue to work until November 1
## What is an MFA provider?
-There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if some users authenticate only occasionally. The per-user option calculates the number of users who are eligible to perform MFA, which is all users in Azure AD, and all enabled users in MFA Server. This option is best if some users have licenses but you need to extend MFA to more users beyond your licensing limits.
+There are two types of Auth providers, and the distinction is around how your Azure subscription is charged. The per-authentication option calculates the number of authentications performed against your tenant in a month. This option is best if some accounts authenticate only occasionally. The per-user option calculates the number of accounts that are eligible to perform MFA, which is all accounts in Azure AD, and all enabled accounts in MFA Server. This option is best if some users have licenses but you need to extend MFA to more users beyond your licensing limits.
## Manage your MFA provider
active-directory Concept Mfa Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-licensing.md
After you have purchased the required Azure AD tier, [plan and deploy Azure AD M
### Azure AD Free tier
-All users in an Azure AD Free tenant can use Azure AD Multi-Factor Authentication by using security defaults. The mobile authentication app is the only method that can be used for Azure AD Multi-Factor Authentication when using Azure AD Free security defaults.
+All users in an Azure AD Free tenant can use Azure AD Multi-Factor Authentication by using security defaults. The mobile authentication app and SMS methods can be used for Azure AD Multi-Factor Authentication when using Azure AD Free security defaults.
* [Learn more about Azure AD security defaults](../fundamentals/concept-fundamentals-security-defaults.md)
* [Enable security defaults for users in Azure AD Free](../fundamentals/concept-fundamentals-security-defaults.md#enabling-security-defaults)
active-directory How To Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-methods-manage.md
For each method, note whether or not it's enabled for the tenant. The following
### Review the legacy SSPR policy
-To get the authentication methods available in the legacy SSPR policy, go to **Azure Active Directory** > **Password reset** > **Authentication methods**. The following table lists the available methods in the legacy SSPR policy and corresponding methods in the Authentication method policy.
+To get the authentication methods available in the legacy SSPR policy, go to **Azure Active Directory** > **Users** > **Password reset** > **Authentication methods**. The following table lists the available methods in the legacy SSPR policy and corresponding methods in the Authentication method policy.
:::image type="content" border="false" source="media/how-to-authentication-methods-manage/legacy-sspr-policy.png" alt-text="Screenshot that shows the legacy Azure AD SSPR policy." lightbox="media/how-to-authentication-methods-manage/legacy-sspr-policy.png":::
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 04/05/2023 Last updated : 04/10/2023
Number match will be enabled for all users of Microsoft Authenticator push notif
Relevant services will begin deploying these changes after May 8, 2023 and users will start to see number match in approval requests. As services deploy, some may see number match while others don't. To ensure consistent behavior for all your users, we highly recommend you use the Azure portal or Graph API to roll out number match for all Microsoft Authenticator users.
-### Will the changes after May 8th, 2023, override number matching settings that are configured for a group in the Authentication methods policy?
-
-No, the changes after May 8th won't affect the **Enable and Target** tab for Microsoft Authenticator in the Authentication methods policy. Administrators can continue to target specific users and groups or **All Users** for Microsoft Authenticator **Push** or **Any** authentication mode.
+### What happens to number matching settings that are currently configured for a group in the Authentication methods policy after number matching is enabled for Authenticator push notifications after May 8th, 2023?
When Microsoft begins protecting all organizations by enabling number matching after May 8th, 2023, administrators will see the **Require number matching for push notifications** setting on the **Configure** tab of the Microsoft Authenticator policy is set to **Enabled** for **All users** and can't be disabled. In addition, the **Exclude** option for this setting will be removed.
active-directory Howto Mfa Mfasettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md
Previously updated : 02/13/2023 Last updated : 04/10/2023
Users can have a combination of up to five OATH hardware tokens or authenticator
If users receive phone calls for MFA prompts, you can configure their experience, such as caller ID or the voice greeting they hear.
-In the United States, if you haven't configured MFA caller ID, voice calls from Microsoft come from the following number. Uses with spam filters should exclude this number.
+In the United States, if you haven't configured MFA caller ID, voice calls from Microsoft come from the following number. Users with spam filters should exclude this number.
* *+1 (855) 330-8653*
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
When you install the extension, you need the *Tenant ID* and admin credentials f
The NPS server must be able to communicate with the following URLs over TCP port 443:
-* `https:\//login.microsoftonline.com`
-* `https:\//credentials.azure.com`
+* `https://login.microsoftonline.com`
+* `https://login.microsoftonline.us` (Azure Government)
+* `https://login.chinacloudapi.cn` (Azure China 21Vianet)
+* `https://credentials.azure.com`
+* `https://strongauthenticationservice.auth.microsoft.com`
+* `https://strongauthenticationservice.auth.microsoft.us` (Azure Government)
+* `https://strongauthenticationservice.auth.microsoft.cn` (Azure China 21Vianet)
+* `https://adnotifications.windowsazure.com`
+* `https://adnotifications.windowsazure.us` (Azure Government)
+* `https://adnotifications.windowsazure.cn` (Azure China 21Vianet)
Additionally, connectivity to the following URLs is required to complete the [setup of the adapter using the provided PowerShell script](#run-the-powershell-script):
-* `https:\//login.microsoftonline.com`
-* `https:\//provisioningapi.microsoftonline.com`
-* `https:\//aadcdn.msauth.net`
-* `https:\//www.powershellgallery.com`
-* `https:\//go.microsoft.com`
-* `https:\//aadcdn.msftauthimages.net`
+* `https://login.microsoftonline.com`
+* `https://provisioningapi.microsoftonline.com`
+* `https://aadcdn.msauth.net`
+* `https://www.powershellgallery.com`
+* `https://go.microsoft.com`
+* `https://aadcdn.msftauthimages.net`
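To spot-check that the NPS server can actually reach these endpoints over TCP port 443, a quick PowerShell probe works (a sketch; substitute any of the URLs above for the host names shown):

```powershell
# Sketch: verify outbound TCP 443 connectivity from the NPS server
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
Test-NetConnection -ComputerName adnotifications.windowsazure.com -Port 443
```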
## Prepare your environment
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Azure [GOV endpoints](../../azure-government/compare-azure-government-global-azu
* *\*.passwordreset.microsoftonline.us*
* *\*.servicebus.usgovcloudapi.net*
-If you need more granularity, see the [list of Microsoft Azure Datacenter IP Ranges](https://www.microsoft.com/download/details.aspx?id=41653). This list is updated every Wednesday and goes into effect the next Monday.
+If you need more granularity, see the [list of Microsoft Azure IP Ranges and Service Tags for Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519).
+
+For Azure GOV, see the [list of Microsoft Azure IP Ranges and Service Tags for US Government Cloud](https://www.microsoft.com/download/details.aspx?id=57063).
+
+These files are updated weekly.
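If you script against these downloads, the JSON in each file can be filtered by service tag. A minimal sketch (the file name is illustrative of the download; `AzureActiveDirectory` is one of the tags it contains):

```powershell
# Sketch: extract the address prefixes for a single service tag
$tags = Get-Content .\ServiceTags_Public.json -Raw | ConvertFrom-Json
($tags.values | Where-Object { $_.name -eq 'AzureActiveDirectory' }).properties.addressPrefixes
```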
To determine if access to a URL and port are restricted in an environment, run the following cmdlet:
active-directory Scenario Desktop Acquire Token Wam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-desktop-acquire-token-wam.md
# Desktop app that calls web APIs: Acquire a token using WAM
-MSAL is able to call Web Account Manager, a Windows 10 component that ships with the OS. This component acts as an authentication broker and users of your app benefit from integration with accounts known from Windows, such as the account you signed-in with in your Windows session.
-
-## Availability
-
-MSAL 4.25+ supports WAM on UWP, and .NET 5.
-
-For .NET 5, target `net5.0-windows10.0.17763.0` (or higher) and not just `net5.0`. Your app will still run on older versions of Windows if you add `<SupportedOSPlatformVersion>7</SupportedOSPlatformVersion>` in the *.csproj* file. MSAL will use a browser when WAM isn't available.
+MSAL is able to call Web Account Manager (WAM), a Windows 10+ component that ships with the OS. This component acts as an authentication broker and users of your app benefit from integration with accounts known from Windows, such as the account you signed-in with in your Windows session.
## WAM value proposition

Using an authentication broker such as WAM has numerous benefits.

-- Enhanced security (your app doesn't have to manage the powerful refresh token)
+- Enhanced security. See [token protection](https://learn.microsoft.com/azure/active-directory/conditional-access/concept-token-protection)
- Better support for Windows Hello, Conditional Access and FIDO keys
- Integration with Windows' "Email and Accounts" view
-- Better Single Sign-On (users don't have to reenter passwords)
+- Better Single Sign-On
+- Ability to sign in silently with the current Windows account
- Most bug fixes and enhancements will be shipped with Windows

## WAM limitations
+- Available on Windows 10 and later and on Windows Server 2019 and later. On Mac, Linux, and earlier versions of Windows, MSAL will automatically fall back to a browser.
- B2C and ADFS authorities aren't supported. MSAL will fall back to a browser.
-- Available on Win10+ and Win Server 2019+. On Mac, Linux, and earlier versions of Windows, MSAL will fall back to a browser.
-- Not available on Xbox.
-## WAM calling pattern
+## WAM integration package
-You can use the following pattern to use WAM.
+Most apps need to reference the `Microsoft.Identity.Client.Broker` package to use this integration. MAUI apps don't need to do this; the functionality is included in MSAL when the target is `net6-windows` or later.
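For non-MAUI apps, adding the package is a one-liner with the .NET CLI:

```console
dotnet add package Microsoft.Identity.Client.Broker
```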
-```csharp
-// 1. Configuration - read below about redirect URI
-var pca = PublicClientApplicationBuilder.Create("client_id")
- .WithBroker()
- .Build();
-
-// Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+## WAM calling pattern
-// 2. GetAccounts
-var accounts = await pca.GetAccountsAsync();
-var accountToLogin = // choose an account, or null, or use PublicClientApplication.OperatingSystemAccount for the default OS account
+You can use the following pattern to use WAM.
-try
-{
- // 3. AcquireTokenSilent
- var authResult = await pca.AcquireTokenSilent(new[] { "User.Read" }, accountToLogin)
- .ExecuteAsync();
-}
-catch (MsalUiRequiredException) // no change in the pattern
-{
- // 4. Specific: Switch to the UI thread for next call . Not required for console apps.
- await SwitchToUiThreadAsync(); // not actual code, this is different on each platform / tech
-
- // 5. AcquireTokenInteractive
- var authResult = await pca.AcquireTokenInteractive(new[] { "User.Read" })
- .WithAccount(accountToLogin) // this already exists in MSAL, but it is more important for WAM
- .WithParentActivityOrWindow(myWindowHandle) // to be able to parent WAM's windows to your app (optional, but highly recommended; not needed on UWP)
- .ExecuteAsync();
-}
+```csharp
+ // 1. Configuration - read below about redirect URI
+ var pca = PublicClientApplicationBuilder.Create("client_id")
+ .WithBroker(new BrokerOptions(BrokerOptions.OperatingSystems.Windows))
+ .Build();
+
+ // Add a token cache, see https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
+
+    // 2. Find an account for silent login
+
+ // is there an account in the cache?
+ IAccount accountToLogin = (await pca.GetAccountsAsync()).FirstOrDefault();
+ if (accountToLogin == null)
+ {
+ // 3. no account in the cache, try to login with the OS account
+ accountToLogin = PublicClientApplication.OperatingSystemAccount;
+ }
+
+ try
+ {
+ // 4. Silent authentication
+ var authResult = await pca.AcquireTokenSilent(new[] { "User.Read" }, accountToLogin)
+ .ExecuteAsync();
+ }
+ // cannot login silently - most likely AAD would like to show a consent dialog or the user needs to re-enter credentials
+ catch (MsalUiRequiredException)
+ {
+ // 5. Interactive authentication
+ var authResult = await pca.AcquireTokenInteractive(new[] { "User.Read" })
+ .WithAccount(accountToLogin)
+ // this is mandatory so that WAM is correctly parented to your app, read on for more guidance
+ .WithParentActivityOrWindow(myWindowHandle)
+ .ExecuteAsync();
+
+ // consider allowing the user to re-authenticate with a different account, by calling AcquireTokenInteractive again
+ }
```
-Call `.WithBroker(true)`. If a broker isn't present (for example, Win8.1, Mac, or Linux), then MSAL will fall back to a browser, where redirect URI rules apply.
+If a broker isn't present (for example, Win8.1, Mac, or Linux), then MSAL will fall back to a browser, where redirect URI rules apply.
-## Redirect URI
+### Redirect URI
-WAM redirect URIs don't need to be configured in MSAL, but they must be configured in the app registration.
-
-### Win32 (.NET framework / .NET 5)
+WAM redirect URIs don't need to be configured in MSAL, but they must be configured in the app registration.
```
ms-appx-web://microsoft.aad.brokerplugin/{client_id}
```
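If you script your app registration, something like the following Azure CLI call can set this redirect URI (a sketch; it assumes the `--public-client-redirect-uris` parameter available in recent CLI versions, and `<app-id>`/`<client_id>` are placeholders):

```azurecli
az ad app update --id <app-id> \
    --public-client-redirect-uris "ms-appx-web://microsoft.aad.brokerplugin/<client_id>"
```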
-### UWP
-```csharp
- // returns smth like S-1-15-2-2601115387-131721061-1180486061-1362788748-631273777-3164314714-2766189824
- string sid = WebAuthenticationBroker.GetCurrentApplicationCallbackUri().Host.ToUpper();
+### Token cache persistence
- // the redirect uri you need to register
- string redirectUri = $"ms-appx-web://microsoft.aad.brokerplugin/{sid}";
-```
+It's important to persist MSAL's token cache because MSAL continues to store id tokens and account metadata there. See https://learn.microsoft.com/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=desktop
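A minimal sketch of wiring up cache persistence on Windows desktop, assuming the `Microsoft.Identity.Client.Extensions.Msal` package (the cache file name is illustrative):

```csharp
using Microsoft.Identity.Client.Extensions.Msal;

// Sketch: persist the token cache to a file under the user profile
var storageProperties = new StorageCreationPropertiesBuilder(
        "msal_cache.dat",                   // illustrative cache file name
        MsalCacheHelper.UserRootDirectory)  // default per-user cache directory
    .Build();

var cacheHelper = await MsalCacheHelper.CreateAsync(storageProperties);
cacheHelper.RegisterCache(pca.UserTokenCache); // pca is the PublicClientApplication built earlier
```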
-## Token cache persistence
+### Find an account for silent login
-It's important to persist MSAL's token cache because MSAL needs to save internal WAM account IDs there. Without it, restarting the app means that `GetAccounts` API will miss some of the accounts. On UWP, MSAL knows where to save the token cache.
+The recommended pattern is:
-## GetAccounts
+1. If the user previously logged in, use that account.
+2. If not, use `PublicClientApplication.OperatingSystemAccount`, which is the current Windows account.
+3. Allow the end-user to change to a different account by logging in interactively.
-`GetAccounts` returns accounts of users who have previously logged in interactively into the app.
+## Parent Window Handles
-In addition, WAM can list the OS-wide Work and School accounts configured in Windows (for Win32 apps but not for UWP apps). To opt-into this feature, set `ListWindowsWorkAndSchoolAccounts` in `WindowsBrokerOptions` to **true**. You can enable it as below.
+You must configure MSAL with the window that the interactive experience should be parented to, using the `WithParentActivityOrWindow` APIs.
-```csharp
-.WithWindowsBrokerOptions(new WindowsBrokerOptions()
-{
- // GetAccounts will return Work and School accounts from Windows
- ListWindowsWorkAndSchoolAccounts = true,
+### UI applications
+For UI apps like WinForms, WPF, and WinUI 3, see https://learn.microsoft.com/windows/apps/develop/ui-input/retrieve-hwnd, and the sketch below.
- // Legacy support for 1st party apps only
- MsaPassthrough = true
-})
-```
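For example, in WPF the handle can come from `WindowInteropHelper`; a minimal sketch, assuming this code runs inside a `System.Windows.Window`:

```csharp
using System.Windows.Interop;

// Sketch (WPF): get the HWND for the current window and parent the broker UI to it
var hwnd = new WindowInteropHelper(this).EnsureHandle();

var authResult = await pca.AcquireTokenInteractive(new[] { "User.Read" })
    .WithParentActivityOrWindow(hwnd)
    .ExecuteAsync();
```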
+### Console applications
->[!NOTE]
-> Microsoft (outlook.com etc.) accounts will not be listed in Win32 nor UWP for privacy reasons.
+For console applications it is a bit more involved, because of the terminal window and its tabs. Use the following code:
-Applications cannot remove accounts from Windows!
-
-## RemoveAsync
-- Removes all account information from MSAL's token cache (this includes MSA, that is, personal accounts information copied by MSAL into its cache).
-- Removes app-only (not OS-wide) accounts.
-
->[!NOTE]
-> Only users can remove OS accounts, whereas apps themselves cannot. If an OS account is passed into `RemoveAsync`, and then `GetAccounts` is called with `ListWindowsWorkAndSchoolAccounts` enabled, the same OS accounts will still be returned.
-
-## Other considerations
-- WAM's interactive operations require being on the UI thread. MSAL throws a meaningful exception when not on UI thread. This doesn't apply to console apps.
-- `WithAccount` provides an accelerated authentication experience if the MSAL account was originally obtained via WAM, or, WAM can find a work and school account in Windows.
-- WAM isn't able to pre-populate the username field with a login hint, unless a Work and School account with the same username is found in Windows.
-- If WAM is unable to offer an accelerated authentication experience, it will show an account picker. Users can add new accounts.
-
-!["WAM account picker"](media/scenario-desktop-acquire-token-wam/wam-account-picker.png)
+```csharp
+enum GetAncestorFlags
+{
+ GetParent = 1,
+ GetRoot = 2,
+ /// <summary>
+ /// Retrieves the owned root window by walking the chain of parent and owner windows returned by GetParent.
+ /// </summary>
+ GetRootOwner = 3
+}
-- New accounts are automatically remembered by Windows. Work and School have the option of joining the organization's directory or opting out completely, in which case the account won't appear under "Email & Accounts". Microsoft accounts are automatically added to Windows. Apps can't list these accounts programmatically (but only through the Account Picker).
+/// <summary>
+/// Retrieves the handle to the ancestor of the specified window.
+/// </summary>
+/// <param name="hwnd">A handle to the window whose ancestor is to be retrieved.
+/// If this parameter is the desktop window, the function returns NULL. </param>
+/// <param name="flags">The ancestor to be retrieved.</param>
+/// <returns>The return value is the handle to the ancestor window.</returns>
+[DllImport("user32.dll", ExactSpelling = true)]
+static extern IntPtr GetAncestor(IntPtr hwnd, GetAncestorFlags flags);
+
+[DllImport("kernel32.dll")]
+static extern IntPtr GetConsoleWindow();
+
+// This is your window handle!
+public IntPtr GetConsoleOrTerminalWindow()
+{
+ IntPtr consoleHandle = GetConsoleWindow();
+ IntPtr handle = GetAncestor(consoleHandle, GetAncestorFlags.GetRootOwner );
+
+ return handle;
+}
+```
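With that helper in place, the interactive call from the pattern above can be parented to the console window:

```csharp
// Parent the account picker to the console/terminal window
var authResult = await pca.AcquireTokenInteractive(new[] { "User.Read" })
    .WithParentActivityOrWindow(GetConsoleOrTerminalWindow())
    .ExecuteAsync();
```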
## Troubleshooting
-### "Either the user canceled the authentication or the WAM Account Picker crashed because the app is running in an elevated process" error message
-
-When an app that uses MSAL is run as an elevated process, some of these calls within WAM may fail due to different process security levels. Internally MSAL.NET uses native Windows methods ([COM](/windows/win32/com/the-component-object-model)) to integrate with WAM. Starting with version 4.32.0, MSAL will display a descriptive error message when it detects that the app process is elevated and WAM returned no accounts.
-
-One solution is to not run the app as elevated, if possible. Another solution is for the app developer to call `WindowsNativeUtils.InitializeProcessSecurity` method when the app starts up. This will set the security of the processes used by WAM to the same levels. See [this sample app](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/master/tests/devapps/WAM/NetCoreWinFormsWam/Program.cs#L18-L21) for an example. However, note, that this solution isn't guaranteed to succeed to due external factors like the underlying CLR behavior. In that case, an `MsalClientException` will be thrown. For more information, see issue [#2560](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/issues/2560).
- ### "WAM Account Picker did not return an account" error message This message indicates that either the application user closed the dialog that displays accounts, or the dialog itself crashed. A crash might occur if AccountsControl, a Windows control, is registered incorrectly in Windows. To resolve this issue:
This message indicates that either the application user closed the dialog that d
```powershell
if (-not (Get-AppxPackage Microsoft.AccountsControl)) { Add-AppxPackage -Register "$env:windir\SystemApps\Microsoft.AccountsControl_cw5n1h2txyewy\AppxManifest.xml" -DisableDevelopmentMode -ForceApplicationShutdown }
Get-AppxPackage Microsoft.AccountsControl
```
-### Connection issues
-
-The application user sees an error message similar to "Please check your connection and try again". If this issue occurs regularly, see the [troubleshooting guide for Office](/office365/troubleshoot/authentication/connection-issue-when-sign-in-office-2016), which also uses WAM.
-
## Sample

[WPF sample that uses WAM](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2)
-[UWP sample that uses WAM, along Xamarin](https://github.com/Azure-Samples/active-directory-xamarin-native-v2/tree/master/2-With-broker)
-
## Next steps
active-directory Single Page App Tutorial 03 Sign In Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-tutorial-03-sign-in-users.md
reactspalocal/
### Adding the sign in experience
-1. Open *SignInButton.jsx* and add the following code, which creates a button that signs in the user using either a popup or redirect.
+1. Open *SignInButton.jsx* and add the following code, which creates a button that signs in the user using either a pop-up or redirect.
```javascript import React from "react";
reactspalocal/
<!-- ::: zone-end -->

> [!div class="nextstepaction"]
-> [Tutorial: Call an API from a React single-page app](single-page-app-tutorial-04-call-api.md)
+> [Tutorial: Call an API from a React single-page app](single-page-app-tutorial-04-call-api.md)
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
In the next steps, you'll create a new folder for the JavaScript SPA and set up
<!-- msal.js with a fallback to backup CDN -->
<script src="https://alcdn.msauth.net/browser/2.30.0/js/msal-browser.js"
- integrity="sha384-L8LyrNcolaRZ4U+N06atid1fo+kBo8hdlduw0yx+gXuACcdZjjquuGZTA5uMmUdS"
+ integrity="sha384-o4ufwq3oKqc7IoCcR08YtZXmgOljhTggRwxP2CLbSqeXGtitAxwYaUln/05nJjit"
crossorigin="anonymous"></script> <!-- adding Bootstrap 4 for UI components -->
- <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-o4ufwq3oKqc7IoCcR08YtZXmgOljhTggRwxP2CLbSqeXGtitAxwYaUln/05nJjit" crossorigin="anonymous">
+ <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous">
</head>
<body>
<nav class="navbar navbar-expand-lg navbar-dark bg-primary">
The Microsoft Graph API requires the `User.Read` scope to read a user's profile.
Delve deeper into SPA development on the Microsoft identity platform in the first part of a scenario series:

> [!div class="nextstepaction"]
-> [Scenario: Single-page application](scenario-spa-overview.md)
+> [Scenario: Single-page application](scenario-spa-overview.md)
active-directory Clean Up Unmanaged Azure Ad Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/clean-up-unmanaged-azure-ad-accounts.md
Run the following cmdlets:
To identify unmanaged Azure AD accounts, run:
-* `Connect-MgGraph -Scope User.ReadAll`
+* `Connect-MgGraph -Scope User.Read.All`
* `Get-MsIdUnmanagedExternalUser`

To reset unmanaged Azure AD account redemption status, run:
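For the identification step above, a minimal end-to-end sketch (it assumes the Microsoft.Graph and MSIdentityTools modules are installed from the PowerShell Gallery):

```powershell
# Sketch: list unmanaged (viral) external users in the tenant
# One-time setup: Install-Module Microsoft.Graph, MSIdentityTools -Scope CurrentUser

Connect-MgGraph -Scopes 'User.Read.All'
Get-MsIdUnmanagedExternalUser
```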
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about Microsoft cloud settings for B2B collaboration, see:
### Modernizing Terms of Use Experiences

**Type:** Plan for Change
-**Service category:** Access Reviews
+**Service category:** Terms of use
**Product capability:** AuthZ/Access Delegation

Starting July 2023, we're modernizing the following Terms of Use end user experiences with an updated PDF viewer, and moving the experiences from https://account.activedirectory.windowsazure.com to https://myaccount.microsoft.com:
active-directory Beable Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/beable-tutorial.md
Previously updated : 02/09/2023 Last updated : 04/11/2023
In this article, you learn how to integrate Beable with Azure Active Directory (
* Enable your users to be automatically signed-in to Beable with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-You'll configure and test Azure AD single sign-on for Beable in a test environment. Beable supports **IDP** initiated single sign-on.
+You configure and test Azure AD single sign-on for Beable in a test environment. Beable supports **IDP** initiated single sign-on.
## Prerequisites
Complete the following steps to enable Azure AD single sign-on in the Azure port
`https://<SUBDOMAIN>.beable.com`

b. In the **Reply URL** textbox, type a URL using the following pattern:
- `https://prod-literacy-backend-alb-12049610218161332941.beable.com/login/ssoVerification/?providerId=1466658d-11ae-11ed-b1a0-b9e58c7ef6cc&identifier=<DOMAIN>`
+ `https://prod-literacy-backend-alb-<ID>.beable.com/login/ssoVerification/?providerId=<ProviderID>&identifier=<DOMAIN>`
> [!Note]
> These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Beable support team](https://beable.com/contact/) to get these values. You can also refer to the patterns shown in the Basic SAML Configuration section in the Azure portal.
active-directory Qradar Soar Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/qradar-soar-tutorial.md
Previously updated : 12/14/2022 Last updated : 04/11/2023

# Azure Active Directory SSO integration with QRadar SOAR
-In this article, you'll learn how to integrate QRadar SOAR with Azure Active Directory (Azure AD). QRadar SOAR enhances the analyst experience through accelerated incident response with simple automation, process standardization, and integration with your existing security tools. When you integrate QRadar SOAR with Azure AD, you can:
+In this article, you learn how to integrate QRadar SOAR with Azure Active Directory (Azure AD). QRadar SOAR enhances the analyst experience through accelerated incident response with simple automation, process standardization, and integration with your existing security tools. When you integrate QRadar SOAR with Azure AD, you can:
* Control in Azure AD who has access to QRadar SOAR.
* Enable your users to be automatically signed-in to QRadar SOAR with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal.
-You'll configure and test Azure AD single sign-on for QRadar SOAR in a test environment. QRadar SOAR supports both **SP** and **IDP** initiated single sign-on.
+You configure and test Azure AD single sign-on for QRadar SOAR in a test environment. QRadar SOAR supports both **SP** and **IDP** initiated single sign-on.
## Prerequisites
Complete the following steps to enable Azure AD single sign-on in the Azure port
| **Identifier** |
|-|
- | `https://<UPS>.domain.extension/<ID>` |
- | `https://<SOAR>.domain.extension` |
+ | `https://<CustomerName>.domain.extension/<ID>` |
+ | `https://<CustomerName>.domain.extension` |
b. In the **Reply URL** textbox, type a URL using one of the following patterns:

| **Reply URL** |
|-|
- | `https://<UPS>.domain.extension/<ID>` |
- | `https://<SOAR>.domain.extension` |
+ | `https://<CustomerName>.domain.extension/<ID>` |
+ | `https://<CustomerName>.domain.extension` |
1. If you want to configure **SP** initiated SSO, then perform the following step:
Complete the following steps to enable Azure AD single sign-on in the Azure port
| **Sign on URL** |
|-|
- | `https://<UPS>.domain.extension/<ID>` |
- | `https://<SOAR>.domain.extension` |
+ | `https://<CustomerName>.domain.extension/<ID>` |
+ | `https://<CustomerName>.domain.extension` |
> [!Note]
> These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [QRadar SOAR Client support team](mailto:mysphelp@us.ibm.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
active-directory Servicenow Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-provisioning-tutorial.md
For more information on the Azure AD automatic user provisioning service, see [A
- A [ServiceNow Express instance](https://www.servicenow.com) of Helsinki or higher.
- A user account in ServiceNow with the admin role.
+> [!NOTE]
+> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud.
+
## Step 1: Plan your provisioning deployment

- Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Title: Use GPUs on Azure Kubernetes Service (AKS)
-description: Learn how to use GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS)
+description: Learn how to use GPUs for high performance compute or graphics-intensive workloads on Azure Kubernetes Service (AKS).
Previously updated : 08/06/2021 Last updated : 04/10/2023

#Customer intent: As a cluster administrator or developer, I want to create an AKS cluster that can use high-performance GPU-based VMs for compute-intensive workloads.

# Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS)
-Graphical processing units (GPUs) are often used for compute-intensive workloads such as graphics and visualization workloads. AKS supports the creation of GPU-enabled node pools to run these compute-intensive workloads in Kubernetes. For more information on available GPU-enabled VMs, see [GPU optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6*. Note that the NVv4 series (based on AMD GPUs) are not yet supported with AKS.
+Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6*. The NVv4 series (based on AMD GPUs) aren't supported with AKS.
+
+This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters.
> [!NOTE]
> GPU-enabled VMs contain specialized hardware subject to higher pricing and region availability. For more information, see the [pricing][azure-pricing] tool and [region availability][azure-availability].
-Currently, using GPU-enabled node pools is only available for Linux node pools.
-
## Before you begin
-This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters. This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* This article assumes you have an existing AKS cluster. If you don't have a cluster, create one using the [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
+* You also need the Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
## Get the credentials for your cluster
-Get the credentials for your AKS cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group.
+* Get the credentials for your AKS cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group:
-```azurecli-interactive
-az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
-```
+ ```azurecli-interactive
+ az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
+ ```
## Add the NVIDIA device plugin
-There are two options for adding the NVIDIA device plugin:
+There are two ways to add the NVIDIA device plugin:
-* Use the AKS GPU image
-* Manually install the NVIDIA device plugin
+1. [Using the AKS GPU image](#update-your-cluster-to-use-the-aks-gpu-image-preview)
+2. [Manually installing the NVIDIA device plugin](#manually-install-the-nvidia-device-plugin)
> [!WARNING]
-> You can use either of the above options, but you shouldn't manually install the NVIDIA device plugin daemon set with clusters that use the AKS GPU image.
+> We don't recommend manually installing the NVIDIA device plugin daemon set with clusters using the AKS GPU image.
### Update your cluster to use the AKS GPU image (preview)
-AKS provides a fully configured AKS image that already contains the [NVIDIA device plugin for Kubernetes][nvidia-github].
+AKS provides a fully configured AKS image containing the [NVIDIA device plugin for Kubernetes][nvidia-github].
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-First, install the aks-preview Azure CLI extension by running the following command:
+1. Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.
-```azurecli
-az extension add --name aks-preview
-```
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
-Run the following command to update to the latest version of the extension released:
+2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
-```azurecli
-az extension update --name aks-preview
-```
+ ```azurecli-interactive
+ az extension update --name aks-preview
+ ```
-Then, register the `GPUDedicatedVHDPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+3. Register the `GPUDedicatedVHDPreview` feature flag using the [`az feature register`][az-feature-register] command.
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
-```
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+ It takes a few minutes for the status to show *Registered*.
-```azurecli-interactive
-az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
-```
+4. Verify the registration status using the [`az feature show`][az-feature-show] command.
-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "GPUDedicatedVHDPreview"
+ ```
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
+5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-## Add a node pool for GPU nodes
+ ```azurecli-interactive
+ az provider register --namespace Microsoft.ContainerService
+ ```
-To add a node pool with to your cluster, use [az aks nodepool add][az-aks-nodepool-add].
+#### Add a node pool for GPU nodes
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name gpunp \
- --node-count 1 \
- --node-vm-size Standard_NC6 \
- --node-taints sku=gpu:NoSchedule \
- --aks-custom-headers UseGPUDedicatedVHD=true \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3
-```
+Now that you've updated your cluster to use the AKS GPU image, you can add a node pool for GPU nodes to your cluster.
-The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the node in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, specifies a specialized AKS GPU image nodes on your new node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
+* Add a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command.
-> [!NOTE]
-> A taint and VM size can only be set for node pools during node pool creation, but the autoscaler settings can be updated at any time.
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --node-vm-size Standard_NC6 \
+ --node-taints sku=gpu:NoSchedule \
+ --aks-custom-headers UseGPUDedicatedVHD=true \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
-> [!NOTE]
-> If your GPU sku requires generation two VMs use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true*. For example:
->
-> ```azurecli
-> az aks nodepool add \
-> --resource-group myResourceGroup \
-> --cluster-name myAKSCluster \
-> --name gpunp \
-> --node-count 1 \
-> --node-vm-size Standard_NC6 \
-> --node-taints sku=gpu:NoSchedule \
-> --aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true \
-> --enable-cluster-autoscaler \
-> --min-count 1 \
-> --max-count 3
-> ```
+ The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
+
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+ * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU sku requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+
+ > [!NOTE]
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
### Manually install the NVIDIA device plugin
-Alternatively, you can deploy a DaemonSet for the NVIDIA device plugin. This DaemonSet runs a pod on each node to provide the required drivers for the GPUs.
+You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on each node to provide the required drivers for the GPUs.
-Add a node pool with to your cluster using [az aks nodepool add][az-aks-nodepool-add].
+1. Add a node pool to your cluster using the [`az aks nodepool add`][az-aks-nodepool-add] command.
-```azurecli-interactive
-az aks nodepool add \
- --resource-group myResourceGroup \
- --cluster-name myAKSCluster \
- --name gpunp \
- --node-count 1 \
- --node-vm-size Standard_NC6 \
- --node-taints sku=gpu:NoSchedule \
- --enable-cluster-autoscaler \
- --min-count 1 \
- --max-count 3
-```
+ ```azurecli-interactive
+ az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --name gpunp \
+ --node-count 1 \
+ --node-vm-size Standard_NC6 \
+ --node-taints sku=gpu:NoSchedule \
+ --enable-cluster-autoscaler \
+ --min-count 1 \
+ --max-count 3
+ ```
-The above command adds a node pool named *gpunp* to the *myAKSCluster* in the *myResourceGroup* resource group. The command also sets the VM size for the nodes in the node pool to *Standard_NC6*, enables the cluster autoscaler, configures the cluster autoscaler to maintain a minimum of one node and a maximum of three nodes in the node pool, and specifies a *sku=gpu:NoSchedule* taint for the node pool.
+ The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
-> [!NOTE]
-> A taint and VM size can only be set for node pools during node pool creation, but the autoscaler settings can be updated at any time.
-
-Create a namespace using the [kubectl create namespace][kubectl-create] command, such as *gpu-resources*:
-
-```console
-kubectl create namespace gpu-resources
-```
-
-Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest. This manifest is provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github].
-
-```yaml
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
- name: nvidia-device-plugin-daemonset
- namespace: gpu-resources
-spec:
- selector:
- matchLabels:
- name: nvidia-device-plugin-ds
- updateStrategy:
- type: RollingUpdate
- template:
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*.
+ * `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool.
+ * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
+ * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
+ * `--max-count`: Configures the cluster autoscaler to maintain a maximum of three nodes in the node pool.
+
+ > [!NOTE]
+ > Taints and VM sizes can only be set for node pools during node pool creation, but you can update autoscaler settings at any time.
+
+2. Create a namespace using the [`kubectl create namespace`][kubectl-create] command.
+
+ ```console
+ kubectl create namespace gpu-resources
+ ```
+
+3. Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github]:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: DaemonSet
metadata:
- # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
- # reserves resources for critical add-on pods so that they can be rescheduled after
- # a failure. This annotation works in tandem with the toleration below.
- annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ""
- labels:
- name: nvidia-device-plugin-ds
+ name: nvidia-device-plugin-daemonset
+ namespace: gpu-resources
spec:
- tolerations:
- # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
- # This, along with the annotation above marks this pod as a critical add-on.
- - key: CriticalAddonsOnly
- operator: Exists
- - key: nvidia.com/gpu
- operator: Exists
- effect: NoSchedule
- - key: "sku"
- operator: "Equal"
- value: "gpu"
- effect: "NoSchedule"
- containers:
- - image: mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.11
- name: nvidia-device-plugin-ctr
- securityContext:
- allowPrivilegeEscalation: false
- capabilities:
- drop: ["ALL"]
- volumeMounts:
- - name: device-plugin
- mountPath: /var/lib/kubelet/device-plugins
- volumes:
- - name: device-plugin
- hostPath:
- path: /var/lib/kubelet/device-plugins
-```
-
-Use [kubectl apply][kubectl-apply] to create the DaemonSet and confirm the NVIDIA device plugin is created successfully, as shown in the following example output:
-
-```console
-$ kubectl apply -f nvidia-device-plugin-ds.yaml
-
-daemonset "nvidia-device-plugin" created
-```
+ selector:
+ matchLabels:
+ name: nvidia-device-plugin-ds
+ updateStrategy:
+ type: RollingUpdate
+ template:
+ metadata:
+ # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
+ # reserves resources for critical add-on pods so that they can be rescheduled after
+ # a failure. This annotation works in tandem with the toleration below.
+ annotations:
+ scheduler.alpha.kubernetes.io/critical-pod: ""
+ labels:
+ name: nvidia-device-plugin-ds
+ spec:
+ tolerations:
+ # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
+ # This, along with the annotation above marks this pod as a critical add-on.
+ - key: CriticalAddonsOnly
+ operator: Exists
+ - key: nvidia.com/gpu
+ operator: Exists
+ effect: NoSchedule
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
+ containers:
+ - image: mcr.microsoft.com/oss/nvidia/k8s-device-plugin:1.11
+ name: nvidia-device-plugin-ctr
+ securityContext:
+ allowPrivilegeEscalation: false
+ capabilities:
+ drop: ["ALL"]
+ volumeMounts:
+ - name: device-plugin
+ mountPath: /var/lib/kubelet/device-plugins
+ volumes:
+ - name: device-plugin
+ hostPath:
+ path: /var/lib/kubelet/device-plugins
+ ```
+
+4. Create the DaemonSet and confirm the NVIDIA device plugin is created successfully using the [`kubectl apply`][kubectl-apply] command.
+
+ ```console
+ kubectl apply -f nvidia-device-plugin-ds.yaml
+ ```
## Confirm that GPUs are schedulable
-With your AKS cluster created, confirm that GPUs are schedulable in Kubernetes. First, list the nodes in your cluster using the [kubectl get nodes][kubectl-get] command:
+After creating your cluster, confirm that GPUs are schedulable in Kubernetes.
-```console
-$ kubectl get nodes
+1. List the nodes in your cluster using the [`kubectl get nodes`][kubectl-get] command.
-NAME STATUS ROLES AGE VERSION
-aks-gpunp-28993262-0 Ready agent 13m v1.20.7
-```
+ ```console
+ kubectl get nodes
+ ```
-Now use the [kubectl describe node][kubectl-describe] command to confirm that the GPUs are schedulable. Under the *Capacity* section, the GPU should list as `nvidia.com/gpu: 1`.
+ Your output should look similar to the following example output:
-The following condensed example shows that a GPU is available on the node named *aks-nodepool1-18821093-0*:
+ ```console
+ NAME STATUS ROLES AGE VERSION
+ aks-gpunp-28993262-0 Ready agent 13m v1.20.7
+ ```
-```console
-$ kubectl describe node aks-gpunp-28993262-0
+2. Confirm the GPUs are schedulable using the [`kubectl describe node`][kubectl-describe] command.
-Name: aks-gpunp-28993262-0
-Roles: agent
-Labels: accelerator=nvidia
+ ```console
+ kubectl describe node aks-gpunp-28993262-0
+ ```
-[...]
+ Under the *Capacity* section, the GPU should list as `nvidia.com/gpu: 1`. Your output should look similar to the following condensed example output:
-Capacity:
-[...]
- nvidia.com/gpu: 1
-[...]
-```
+ ```console
+ Name: aks-gpunp-28993262-0
+ Roles: agent
+ Labels: accelerator=nvidia
+
+ [...]
+
+ Capacity:
+ [...]
+ nvidia.com/gpu: 1
+ [...]
+ ```
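If you run several node pools, a one-line variant using kubectl's custom-columns output can list GPU capacity across all nodes (the backslash escapes the dot in the `nvidia.com/gpu` resource name; nodes without GPUs show `<none>`):

```console
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPU:.status.capacity.nvidia\.com/gpu'
```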
## Run a GPU-enabled workload
-To see the GPU in action, schedule a GPU-enabled workload with the appropriate resource request. In this example, let's run a [Tensorflow](https://www.tensorflow.org/) job against the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
+To see the GPU in action, you can schedule a GPU-enabled workload with the appropriate resource request. In this example, we'll run a [TensorFlow](https://www.tensorflow.org/) job against the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
-Create a file named *samples-tf-mnist-demo.yaml* and paste the following YAML manifest. The following job manifest includes a resource limit of `nvidia.com/gpu: 1`:
+1. Create a file named *samples-tf-mnist-demo.yaml* and paste the following YAML manifest, which includes a resource limit of `nvidia.com/gpu: 1`:
-> [!NOTE]
-> If you receive a version mismatch error when calling into drivers, such as, CUDA driver version is insufficient for CUDA runtime version, review the NVIDIA driver matrix compatibility chart - [https://docs.nvidia.com/deploy/cuda-compatibility/index.html](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
-
-```yaml
-apiVersion: batch/v1
-kind: Job
-metadata:
- labels:
- app: samples-tf-mnist-demo
- name: samples-tf-mnist-demo
-spec:
- template:
+ > [!NOTE]
+    > If you receive a version mismatch error when calling into drivers, such as "CUDA driver version is insufficient for CUDA runtime version", review the [NVIDIA driver matrix compatibility chart](https://docs.nvidia.com/deploy/cuda-compatibility/index.html).
+
+ ```yaml
+ apiVersion: batch/v1
+ kind: Job
metadata:
  labels:
    app: samples-tf-mnist-demo
+ name: samples-tf-mnist-demo
spec:
- containers:
- - name: samples-tf-mnist-demo
- image: mcr.microsoft.com/azuredocs/samples-tf-mnist-demo:gpu
- args: ["--max_steps", "500"]
- imagePullPolicy: IfNotPresent
- resources:
- limits:
- nvidia.com/gpu: 1
- restartPolicy: OnFailure
- tolerations:
- - key: "sku"
- operator: "Equal"
- value: "gpu"
- effect: "NoSchedule"
-```
-
-Use the [kubectl apply][kubectl-apply] command to run the job. This command parses the manifest file and creates the defined Kubernetes objects:
-
-```console
-kubectl apply -f samples-tf-mnist-demo.yaml
-```
-
-## View the status and output of the GPU-enabled workload
-
-Monitor the progress of the job using the [kubectl get jobs][kubectl-get] command with the `--watch` argument. It may take a few minutes to first pull the image and process the dataset. When the *COMPLETIONS* column shows *1/1*, the job has successfully finished. Exit the `kubetctl --watch` command with *Ctrl-C*:
-
-```console
-$ kubectl get jobs samples-tf-mnist-demo --watch
-
-NAME COMPLETIONS DURATION AGE
-
-samples-tf-mnist-demo 0/1 3m29s 3m29s
-samples-tf-mnist-demo 1/1 3m10s 3m36s
-```
-
-To look at the output of the GPU-enabled workload, first get the name of the pod with the [kubectl get pods][kubectl-get] command:
-
-```console
-$ kubectl get pods --selector app=samples-tf-mnist-demo
-
-NAME READY STATUS RESTARTS AGE
-samples-tf-mnist-demo-mtd44 0/1 Completed 0 4m39s
-```
-
-Now use the [kubectl logs][kubectl-logs] command to view the pod logs. The following example pod logs confirm that the appropriate GPU device has been discovered, `Tesla K80`. Provide the name for your own pod:
-
-```console
-$ kubectl logs samples-tf-mnist-demo-smnr6
-
-2019-05-16 16:08:31.258328: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
-2019-05-16 16:08:31.396846: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
-name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
-pciBusID: 2fd7:00:00.0
-totalMemory: 11.17GiB freeMemory: 11.10GiB
-2019-05-16 16:08:31.396886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: 2fd7:00:00.0, compute capability: 3.7)
-2019-05-16 16:08:36.076962: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally
-Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
-Extracting /tmp/tensorflow/input_data/train-images-idx3-ubyte.gz
-Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
-Extracting /tmp/tensorflow/input_data/train-labels-idx1-ubyte.gz
-Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
-Extracting /tmp/tensorflow/input_data/t10k-images-idx3-ubyte.gz
-Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
-Extracting /tmp/tensorflow/input_data/t10k-labels-idx1-ubyte.gz
-Accuracy at step 0: 0.1081
-Accuracy at step 10: 0.7457
-Accuracy at step 20: 0.8233
-Accuracy at step 30: 0.8644
-Accuracy at step 40: 0.8848
-Accuracy at step 50: 0.8889
-Accuracy at step 60: 0.8898
-Accuracy at step 70: 0.8979
-Accuracy at step 80: 0.9087
-Accuracy at step 90: 0.9099
-Adding run metadata for 99
-Accuracy at step 100: 0.9125
-Accuracy at step 110: 0.9184
-Accuracy at step 120: 0.922
-Accuracy at step 130: 0.9161
-Accuracy at step 140: 0.9219
-Accuracy at step 150: 0.9151
-Accuracy at step 160: 0.9199
-Accuracy at step 170: 0.9305
-Accuracy at step 180: 0.9251
-Accuracy at step 190: 0.9258
-Adding run metadata for 199
-Accuracy at step 200: 0.9315
-Accuracy at step 210: 0.9361
-Accuracy at step 220: 0.9357
-Accuracy at step 230: 0.9392
-Accuracy at step 240: 0.9387
-Accuracy at step 250: 0.9401
-Accuracy at step 260: 0.9398
-Accuracy at step 270: 0.9407
-Accuracy at step 280: 0.9434
-Accuracy at step 290: 0.9447
-Adding run metadata for 299
-Accuracy at step 300: 0.9463
-Accuracy at step 310: 0.943
-Accuracy at step 320: 0.9439
-Accuracy at step 330: 0.943
-Accuracy at step 340: 0.9457
-Accuracy at step 350: 0.9497
-Accuracy at step 360: 0.9481
-Accuracy at step 370: 0.9466
-Accuracy at step 380: 0.9514
-Accuracy at step 390: 0.948
-Adding run metadata for 399
-Accuracy at step 400: 0.9469
-Accuracy at step 410: 0.9489
-Accuracy at step 420: 0.9529
-Accuracy at step 430: 0.9507
-Accuracy at step 440: 0.9504
-Accuracy at step 450: 0.951
-Accuracy at step 460: 0.9512
-Accuracy at step 470: 0.9539
-Accuracy at step 480: 0.9533
-Accuracy at step 490: 0.9494
-Adding run metadata for 499
-```
+ template:
+ metadata:
+ labels:
+ app: samples-tf-mnist-demo
+ spec:
+ containers:
+ - name: samples-tf-mnist-demo
+ image: mcr.microsoft.com/azuredocs/samples-tf-mnist-demo:gpu
+ args: ["--max_steps", "500"]
+ imagePullPolicy: IfNotPresent
+ resources:
+ limits:
+ nvidia.com/gpu: 1
+ restartPolicy: OnFailure
+ tolerations:
+ - key: "sku"
+ operator: "Equal"
+ value: "gpu"
+ effect: "NoSchedule"
+ ```
+
+2. Run the job using the [`kubectl apply`][kubectl-apply] command, which parses the manifest file and creates the defined Kubernetes objects.
+
+ ```console
+ kubectl apply -f samples-tf-mnist-demo.yaml
+ ```
+
+## View the status of the GPU-enabled workload
+
+1. Monitor the progress of the job using the [`kubectl get jobs`][kubectl-get] command with the `--watch` flag. It may take a few minutes to first pull the image and process the dataset.
+
+ ```console
+ kubectl get jobs samples-tf-mnist-demo --watch
+ ```
+
+ When the *COMPLETIONS* column shows *1/1*, the job has successfully finished, as shown in the following example output:
+
+ ```console
+ NAME COMPLETIONS DURATION AGE
+
+ samples-tf-mnist-demo 0/1 3m29s 3m29s
+ samples-tf-mnist-demo 1/1 3m10s 3m36s
+ ```
+
+2. Exit the `kubectl get jobs --watch` process with *Ctrl-C*.
+
+3. Get the name of the pod using the [`kubectl get pods`][kubectl-get] command.
+
+ ```console
+ kubectl get pods --selector app=samples-tf-mnist-demo
+ ```
+
+4. View the output of the GPU-enabled workload using the [`kubectl logs`][kubectl-logs] command, providing the pod name from the previous step.
+
+ ```console
+ kubectl logs samples-tf-mnist-demo-smnr6
+ ```
+
+ The following condensed example output of the pod logs confirms that the appropriate GPU device, `Tesla K80`, has been discovered:
+
+ ```console
+ 2019-05-16 16:08:31.258328: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
+ 2019-05-16 16:08:31.396846: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
+ name: Tesla K80 major: 3 minor: 7 memoryClockRate(GHz): 0.8235
+ pciBusID: 2fd7:00:00.0
+ totalMemory: 11.17GiB freeMemory: 11.10GiB
+ 2019-05-16 16:08:31.396886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: Tesla K80, pci bus id: 2fd7:00:00.0, compute capability: 3.7)
+ 2019-05-16 16:08:36.076962: I tensorflow/stream_executor/dso_loader.cc:139] successfully opened CUDA library libcupti.so.8.0 locally
+ Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
+ Extracting /tmp/tensorflow/input_data/train-images-idx3-ubyte.gz
+ Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
+ Extracting /tmp/tensorflow/input_data/train-labels-idx1-ubyte.gz
+ Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
+ Extracting /tmp/tensorflow/input_data/t10k-images-idx3-ubyte.gz
+ Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
+ Extracting /tmp/tensorflow/input_data/t10k-labels-idx1-ubyte.gz
+ Accuracy at step 0: 0.1081
+ Accuracy at step 10: 0.7457
+ Accuracy at step 20: 0.8233
+ Accuracy at step 30: 0.8644
+ Accuracy at step 40: 0.8848
+ Accuracy at step 50: 0.8889
+ Accuracy at step 60: 0.8898
+ Accuracy at step 70: 0.8979
+ Accuracy at step 80: 0.9087
+ Accuracy at step 90: 0.9099
+ Adding run metadata for 99
+ Accuracy at step 100: 0.9125
+ Accuracy at step 110: 0.9184
+ Accuracy at step 120: 0.922
+ Accuracy at step 130: 0.9161
+ Accuracy at step 140: 0.9219
+ Accuracy at step 150: 0.9151
+ Accuracy at step 160: 0.9199
+ Accuracy at step 170: 0.9305
+ Accuracy at step 180: 0.9251
+ Accuracy at step 190: 0.9258
+ Adding run metadata for 199
+ [...]
+ Adding run metadata for 499
+ ```
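Because the pod name suffix is generated, a small sketch like the following can look the name up for you instead of copying it by hand:

```console
# Capture the demo pod's name with a JSONPath query, then fetch its logs.
POD_NAME=$(kubectl get pods --selector app=samples-tf-mnist-demo --output jsonpath='{.items[0].metadata.name}')
kubectl logs $POD_NAME
```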
## Use Container Insights to monitor GPU usage
-The following metrics are available for [Container Insights with AKS][aks-container-insights] to monitor GPU usage.
+[Container Insights with AKS][aks-container-insights] monitors the following GPU usage metrics:
| Metric name | Metric dimension (tags) | Description |
|-|-|-|
-| containerGpuDutyCycle | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor` | Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
+| containerGpuDutyCycle | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor`| Percentage of time over the past sample period (60 seconds) during which GPU was busy/actively processing for a container. Duty cycle is a number between 1 and 100. |
| containerGpuLimits | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName` | Each container can specify limits as one or more GPUs. It is not possible to request or limit a fraction of a GPU. |
| containerGpuRequests | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName` | Each container can request one or more GPUs. It is not possible to request or limit a fraction of a GPU. |
| containerGpumemoryTotalBytes | `container.azm.ms/clusterId`, `container.azm.ms/clusterName`, `containerName`, `gpuId`, `gpuModel`, `gpuVendor` | Amount of GPU Memory in bytes available to use for a specific container. |
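To spot-check the per-container values that `containerGpuRequests` and `containerGpuLimits` report, a kubectl sketch (pods without GPU limits show `<none>`):

```console
kubectl get pods --all-namespaces -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,GPU:.spec.containers[*].resources.limits.nvidia\.com/gpu'
```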
The following metrics are available for [Container Insights with AKS][aks-contai
## Clean up resources
-To remove the associated Kubernetes objects created in this article, use the [kubectl delete job][kubectl delete] command as follows:
+* Remove the associated Kubernetes objects you created in this article using the [`kubectl delete job`][kubectl delete] command.
-```console
-kubectl delete jobs samples-tf-mnist-demo
-```
+ ```console
+ kubectl delete jobs samples-tf-mnist-demo
+ ```
## Next steps
-To run Apache Spark jobs, see [Run Apache Spark jobs on AKS][aks-spark].
-
-For more information about running machine learning (ML) workloads on Kubernetes, see [Kubeflow Labs][kubeflow-labs].
-
-For more information on features of the Kubernetes scheduler, see [Best practices for advanced scheduler features in AKS][advanced-scheduler-aks].
-
-For information on using Azure Kubernetes Service with Azure Machine Learning, see the following articles:
-
-* [Configure a Kubernetes cluster for ML model training or deployment][azureml-aks].
-* [Deploy a model with an online endpoint][azureml-deploy].
-* [High-performance serving with Triton Inference Server][azureml-triton].
+* To run Apache Spark jobs, see [Run Apache Spark jobs on AKS][aks-spark].
+* For more information on features of the Kubernetes scheduler, see [Best practices for advanced scheduler features in AKS][advanced-scheduler-aks].
+* For more information on Azure Kubernetes Service and Azure Machine Learning, see:
+ * [Configure a Kubernetes cluster for ML model training or deployment][azureml-aks].
+ * [Deploy a model with an online endpoint][azureml-deploy].
+ * [High-performance serving with Triton Inference Server][azureml-triton].
+ * [Labs for Kubernetes and Kubeflow][kubeflow].
<!-- LINKS - external -->
[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubeflow-labs]: https://github.com/Azure/kubeflow-labs
+[kubeflow]: https://github.com/Azure/kubeflow-labs
[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
[kubectl-logs]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs
[kubectl delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[nvidia-github]: https://github.com/NVIDIA/k8s-device-plugin
<!-- LINKS - internal -->
-[az-group-create]: /cli/azure/group#az_group_create
-[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
For information on using Azure Kubernetes Service with Azure Machine Learning, s
[az-provider-register]: /cli/azure/provider#az-provider-register
[az-feature-register]: /cli/azure/feature#az-feature-register
[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-extension-add]: /cli/azure/extension#az-extension-add
+[az-extension-update]: /cli/azure/extension#az-extension-update
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
az group create --name $RG --location $LOC
Create a virtual network with two subnets to host the AKS cluster and the Azure Firewall. Each resource will have its own subnet. Let's start with the AKS network.
-```
+```azurecli
# Dedicated virtual network with AKS subnet
az network vnet create \
If you used authorized IP ranges for the cluster on the previous step, you must
Add another IP address to the approved ranges with the following command:
-```bash
+```azurecli
# Retrieve your IP address
CURRENT_IP=$(dig @resolver1.opendns.com ANY myip.opendns.com +short)
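You can then append that address to the cluster's authorized ranges — a sketch assuming the resource group and cluster name variables defined earlier in this article (shown here as `$RG` and `$AKSNAME`):

```azurecli
# Add the retrieved public IP (as a /32) to the API server's authorized ranges.
az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32
```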
api-management Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md
The `cache-store` policy caches responses according to the specified cache setti
| Attribute | Description | Required | Default |
| --------- | ----------- | -------- | ------- |
| duration | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes | N/A |
-| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted or set to `false`, only HTTP responses with the status code `200 OK` are cached. Policy expressions are allowed. | No | `false` |
+| cache-response | Set to `true` to cache the current HTTP response. If the attribute is omitted, only HTTP responses with the status code `200 OK` are cached. Policy expressions are allowed. | No | `false` |
## Usage
For more information, see [Policy expressions](api-management-policy-expressions
* [API Management caching policies](api-management-caching-policies.md)
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
Last updated 03/24/2023 - + # Export APIs from Azure API Management to the Power Platform Citizen developers using the Microsoft [Power Platform](https://powerplatform.microsoft.com) often need to reach the business capabilities that are developed by professional developers and deployed in Azure. [Azure API Management](https://aka.ms/apimrocks) enables professional developers to publish their backend service as APIs, and easily export these APIs to the Power Platform ([Power Apps](/powerapps/powerapps-overview) and [Power Automate](/power-automate/getting-started)) as custom connectors for discovery and consumption by citizen developers.
-This article walks through the steps in the Azure portal to create a custom Power Platform connector to an API in API Management. With this capability, citizen developers can use the Power Platform to create and distribute apps that are based on internal and external APIs managed by API Management.
-
+This article walks through the steps in the Azure portal to create a Power Platform [custom connector](/connectors/custom-connectors/) to an API in API Management. With this capability, citizen developers can use the Power Platform to create and distribute apps that are based on internal and external APIs managed by API Management.
## Prerequisites + Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)
From API Management, you can update a connector to target a different API or Pow
## Next steps
-* [Learn more about the Power Platform](https://powerplatform.microsoft.com/)
+* [Learn more about the Power Platform](https://powerplatform.microsoft.com/) and [licensing](/power-platform/admin/pricing-billing-skus)
* [Learn more about creating and using custom connectors](/connectors/custom-connectors/)
* [Learn common tasks in API Management by following the tutorials](./import-and-publish.md)
api-management Virtual Network Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/virtual-network-reference.md
When an API Management service instance is hosted in a VNet, the ports in the fo
| * / 443 | Outbound | TCP | VirtualNetwork / AzureActiveDirectory | [Azure Active Directory](api-management-howto-aad.md) and Azure Key Vault dependency (optional) | External & Internal |
| * / 1433 | Outbound | TCP | VirtualNetwork / Sql | **Access to Azure SQL endpoints** | External & Internal |
| * / 443 | Outbound | TCP | VirtualNetwork / AzureKeyVault | **Access to Azure Key Vault** | External & Internal |
-| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / EventHub | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and monitoring agent (optional) | External & Internal |
+| * / 5671, 5672, 443 | Outbound | TCP | VirtualNetwork / EventHub | Dependency for [Log to Azure Event Hubs policy](api-management-howto-log-event-hubs.md) and [Azure Monitor](api-management-howto-use-azure-monitor.md) (optional) | External & Internal |
| * / 445 | Outbound | TCP | VirtualNetwork / Storage | Dependency on Azure File Share for [GIT](api-management-configuration-repository-git.md) (optional) | External & Internal |
| * / 1886, 443 | Outbound | TCP | VirtualNetwork / AzureMonitor | Publish [Diagnostics Logs and Metrics](api-management-howto-use-azure-monitor.md), [Resource Health](../service-health/resource-health-overview.md), and [Application Insights](api-management-howto-app-insights.md) (optional) | External & Internal |
| * / 6380 | Inbound & Outbound | TCP | VirtualNetwork / VirtualNetwork | Access external Azure Cache for Redis service for [caching](api-management-caching-policies.md) policies between machines (optional) | External & Internal |
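As an illustration of how one of these rows translates into configuration — the rule name, priority, and resource names below are assumptions — the Azure SQL dependency could be allowed with an NSG rule such as:

```azurecli
az network nsg rule create --resource-group myResourceGroup --nsg-name apim-subnet-nsg \
    --name AllowSqlOutbound --priority 200 --direction Outbound --access Allow \
    --protocol Tcp --destination-port-ranges 1433 --destination-address-prefixes Sql
```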
app-service Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md
+
+ Title: Getting started with Azure App Service
+description: Take the first steps toward working with Azure App Service.
+++ Last updated : 4/10/2023
+zone_pivot_groups: app-service-getting-started-stacks
++
+# Getting started with Azure App Service
+
+## Introduction
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with .NET.
+
+| Action | Resources |
+| | |
+| **Create your first .NET app** | Using one of the following tools:<br><br>- [Visual Studio](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vs)<br>- [Visual Studio Code](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-vscode)<br>- [Command line](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-cli)<br>- [Azure PowerShell](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-ps)<br>- [Azure portal](./quickstart-dotnetcore.md?tabs=net60&pivots=development-environment-azure-portal) |
+| **Deploy your app** | - [Configure ASP.NET](./configure-language-dotnet-framework.md)<br>- [Configure ASP.NET Core](./configure-language-dotnetcore.md?pivots=platform-linux)<br>- [GitHub Actions](./deploy-github-actions.md) |
+| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
+| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
+| **Connect to a database** | - [.NET with Azure SQL Database](./app-service-web-tutorial-dotnet-sqldatabase.md)<br>- [.NET Core with Azure SQL DB](./tutorial-dotnetcore-sqldb-app.md)|
+| **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=dotnet&pivots=container-linux-vscode)<br>- [Windows - Visual Studio](./quickstart-custom-container.md?tabs=dotnet&pivots=container-windows-vs)|
+| **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Python.
+
+| Action | Resources |
+| | |
+| **Create your first Python app** | Using one of the following tools:<br><br>- [Flask - CLI](./quickstart-python.md?tabs=flask%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli)<br>- [Flask - Visual Studio Code](./quickstart-python.md?tabs=flask%2Cwindows%2Cvscode-aztools%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli)<br>- [Django - CLI](./quickstart-python.md?tabs=django%2Cwindows%2Cazure-cli%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli)<br>- [Django - Visual Studio Code](./quickstart-python.md?tabs=django%2Cwindows%2Cvscode-aztools%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli)<br>- [Django - Azure portal](./quickstart-python.md?tabs=django%2Cwindows%2Cazure-portal%2Cvscode-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli) |
+| **Deploy your app** | - [Configure Python](configure-language-python.md)<br>- [GitHub Actions](./deploy-github-actions.md) |
+| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
+| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
+| **Connect to a database** | - [Postgres - CLI](./tutorial-python-postgresql-app.md?tabs=flask%2Cwindows&pivots=deploy-azd)<br>- [Postgres - Azure portal](./tutorial-python-postgresql-app.md?tabs=flask%2Cwindows&pivots=deploy-portal)|
+| **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=python&pivots=container-linux-vscode)|
+| **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Node.js.
+
+| Action | Resources |
+| | |
+| **Create your first Node app** | Using one of the following tools:<br><br>- [Visual Studio Code](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-vscode)<br>- [CLI](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-cli)<br>- [Azure portal](./quickstart-nodejs.md?tabs=linux&pivots=development-environment-azure-portal) |
+| **Deploy your app** | - [Configure Node](./configure-language-nodejs.md?pivots=platform-linux)<br>- [GitHub Actions](./deploy-github-actions.md) |
+| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
+| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
+| **Connect to a database** | - [MongoDB](./tutorial-nodejs-mongodb-app.md)|
+| **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=node&pivots=container-linux-vscode)|
+| **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with Java.
+
+| Action | Resources |
+| | |
+| **Create your first Java app** | Using one of the following tools:<br><br>- [Linux - Maven](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven)<br>- [Linux - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-azure-portal)<br>- [Windows - Maven](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-maven)<br>- [Windows - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-azure-portal) |
+| **Deploy your app** | - [Configure Java](./configure-language-java.md?pivots=platform-linux)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [GitHub Actions](./deploy-github-actions.md) |
+| **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
+| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
+| **Connect to a database** |- [Java Spring with Cosmos DB](./tutorial-java-spring-cosmosdb.md)|
+| **Custom containers** |- [Linux - Visual Studio Code](./quickstart-custom-container.md?tabs=python&pivots=container-linux-vscode)|
+| **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|
+[Azure App Service](./overview.md) is a fully managed platform as a service (PaaS) for hosting web applications. Use the following resources to get started with PHP.
+
+| Action | Resources |
+| | |
+| **Create your first PHP app** | Using one of the following tools:<br><br>- [Linux - CLI](./quickstart-php.md?tabs=cli&pivots=platform-linux)<br>- [Linux - Azure portal](./quickstart-php.md?tabs=portal&pivots=platform-linux) |
+| **Deploy your app** | - [Configure PHP](./configure-language-php.md?pivots=platform-linux)<br>- [Deploy via FTP](./deploy-ftp.md?tabs=portal)|
+| **Monitor your app**|- [Troubleshoot with Azure Monitor](./tutorial-troubleshoot-monitor.md)<br>- [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)|
+| **Add domains & certificates** |- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)|
+| **Connect to a database** | - [MySQL with PHP](./tutorial-php-mysql-app.md)|
+| **Custom containers** |- [Multi-container](./quickstart-multi-container.md)|
+| **Review best practices** | - [Scale your app](./manage-scale-up.md)<br>- [Deployment](./deploy-best-practices.md)<br>- [Security](/security/benchmark/azure/baselines/app-service-security-baseline?toc=/azure/app-service/toc.json)<br>- [Virtual Network](./configure-vnet-integration-enable.md)|
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about App Service](./overview.md)
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
Previously updated : 04/19/2022 Last updated : 04/04/2023 # HTTP response codes in Application Gateway
-This article lists some HTTP response codes that can be returned by Azure Application Gateway. Common causes and troubleshooting steps are provided to help you determine the root cause. HTTP response codes can be returned to a client request whether or not a connection was initiated to a backend target.
+This article describes the HTTP response codes that Azure Application Gateway can return, along with common causes and troubleshooting steps to help you determine the root cause of an error response code. HTTP response codes can be returned to a client request whether or not a connection was initiated to a backend target.
## 3XX response codes (redirection)
HTTP 400 response codes are commonly observed when:
- Non-HTTP / HTTPS traffic is initiated to an application gateway with an HTTP or HTTPS listener.
- HTTP traffic is initiated to a listener with HTTPS, with no redirection configured.
- Mutual authentication is configured and unable to properly negotiate.
+- The request isn't RFC-compliant.
+
+Some common reasons for a request being non-RFC-compliant are listed in the following table. Review the URLs and requests coming from your clients and ensure they're RFC-compliant.
+
+| Category | Examples |
+| - | - |
+| Invalid Host in request line | Host containing two colons (example.com:**8090:8080**) |
+| Missing Host Header | Request doesn't have Host Header |
+| Presence of malformed or illegal characters | Reserved characters include **&** and **!**. The workaround is to percent-encode them, for example `%26` for `&`. |
+| Invalid HTTP version | Get /content.css HTTP/**0.3** |
+| Header field name and URI contain non-ASCII characters | GET /**«úü¡»¿**.doc HTTP/1.1 |
+| Missing Content-Length header for POST request | A POST request that doesn't include a Content-Length header |
+| Invalid HTTP Method | **GET123** /index.html HTTP/1.1 |
+| Duplicate Headers | Authorization:\<base64 encoded content\>,Authorization: \<base64 encoded content\> |
+| Invalid value in Content-Length | Content-Length: **abc**,Content-Length: **-10**|
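As a quick illustration, the missing Host header case from the table can be reproduced from a client; `<app-gateway-address>` is a placeholder for your gateway's frontend:

```console
# curl normally adds a Host header, so explicitly unset it to trigger the 400.
curl -v -H "Host:" http://<app-gateway-address>/
```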
++ For cases when mutual authentication is configured, several scenarios can lead to an HTTP 400 response being returned to the client, such as: - Client certificate isn't presented, but mutual authentication is enabled.
For more information about troubleshooting mutual authentication, see [Error cod
#### 403 – Forbidden
-HTTP 403 Forbidden is presented when customers are utilizing WAF skus and have WAF configured in Prevention mode. If enabled WAF rulesets or custom deny WAF rules match the characteristics of an inbound request, the client will be presented a 403 forbidden response.
+HTTP 403 Forbidden is presented when customers use WAF SKUs and have WAF configured in Prevention mode. If enabled WAF rule sets or custom deny WAF rules match the characteristics of an inbound request, the client is presented with a 403 Forbidden response.
#### 404 ΓÇô Page not found
An HTTP 404 response can be returned if a request is sent to an application gate
#### 408 – Request Timeout
-An HTTP 408 response can be observed when client requests to the frontend listener of application gateway do not respond back within 60 seconds. This error can be observed due to traffic congestion between on-premises networks and Azure, when traffic is inspected by virtual appliances, or the client itself becomes overwhelmed.
+An HTTP 408 response can be observed when client requests to the frontend listener of application gateway aren't completed within 60 seconds. This error can occur due to traffic congestion between on-premises networks and Azure, when a virtual appliance inspects the traffic, or when the client itself becomes overwhelmed.
#### 499 – Client closed the connection
-An HTTP 499 response is presented if a client request that is sent to application gateways using v2 sku is closed before the server finished responding. This error can be observed when a large response is returned to the client, but the client may have closed or refreshed their browser/application before the server had a chance to finish responding. In application gateways using v1 sku, an HTTP 0 response code may be raised for the client closing the connection before the server has finished responding as well.
+An HTTP 499 response is presented if a client request sent to an application gateway using the v2 SKU is closed before the server finishes responding. This error can be observed in two scenarios: when a large response is returned to the client but the client closes or refreshes the application before the server finishes sending it, and when the timeout on the client side is too low to wait for the server's response. In the second case, it's better to increase the timeout on the client. In application gateways using the v1 SKU, an HTTP 0 response code may be raised when the client closes the connection before the server finishes responding.
## 5XX response codes (server error)
An HTTP 499 response is presented if a client request that is sent to applicatio
#### 500 – Internal Server Error
-Azure Application Gateway shouldn't exhibit 500 response codes. Please open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+Azure Application Gateway shouldn't exhibit 500 response codes. Open a support request if you see this code, because this issue is an internal error to the service. For information on how to open a support case, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
#### 502 – Bad Gateway
For information about scenarios where 502 errors occur, and how to troubleshoot
#### 504 – Gateway timeout
-HTTP 504 errors are presented if a request is sent to application gateways using v2 sku, and the backend response time exceeds the time-out value configured in the Backend Setting.
+The Azure Application Gateway v2 SKU sends HTTP 504 errors if the backend response time exceeds the timeout value configured in the Backend Setting.
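If the backend legitimately needs more time, the timeout can be raised in the Backend Setting — a CLI sketch with illustrative names:

```azurecli
# Raise the backend request timeout (in seconds) on a v2 gateway.
az network application-gateway http-settings update --resource-group myResourceGroup \
    --gateway-name myAppGateway --name myBackendSettings --timeout 120
```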
## Next steps
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress`
sed -i "s|<applicationGatewayName>|${applicationGatewayName}|g" helm-config.yaml
sed -i "s|<identityResourceId>|${identityResourceId}|g" helm-config.yaml
sed -i "s|<identityClientId>|${identityClientId}|g" helm-config.yaml
-
- # You can further modify the helm config to enable/disable features
- nano helm-config.yaml
```
+
> [!NOTE]
> **For deploying to Sovereign Clouds (e.g., Azure Government)**, the `appgw.environment` configuration parameter must be added and set to the appropriate value as documented below.
application-gateway Ssl Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-certificate-management.md
Title: Listener SSL certificate management in Application Gateway
+ Title: Listener TLS certificate management in Application Gateway
description: Understand listener certificate management through portal.
Last updated 03/01/2023
-# SSL certificate management for listeners
+# TLS certificate management for listeners
-Listener SSL certificates in Application Gateway are used for terminating client TLS connection at the gateway. This function is analogous to uploading a certificate on a web server to support TLS/HTTPS connections from clients/browsers.
+Listener TLS/SSL certificates in Application Gateway are used for terminating client TLS connections at the gateway. This function is analogous to uploading a certificate on a web server to support TLS/HTTPS connections from clients/browsers.
-## SSL Certificate structure
+## TLS Certificate structure
-The SSL certificates on application gateway are stored in local certificate objects or containers. This certificate containerΓÇÖs reference is then supplied to listeners to support TLS connections for clients. Refer to this illustration for better understanding.
+The TLS/SSL certificates on application gateway are stored in local certificate objects or containers. This certificate container's reference is then supplied to listeners to support TLS connections for clients. Refer to the following illustration for a better understanding.
![Diagram that shows how certificates are linked to a listener.](media/ssl-certificate-management/cert-reference.png) Here is a sample application gateway configuration. The SSLCertificates property includes certificate object "contoso-agw-cert" linked to a key vault. The "listener1" references that certificate object. ## Understanding the portal section (Preview)
-
+
+> [!IMPORTANT]
+> The **TLS certificate for Listeners** (TLS termination/End-to-end TLS) is a **Generally available** feature. Only its Portal management experience ([released in March 2023](https://azure.microsoft.com/updates/public-preview-listener-tls-certificates-management-available-in-the-azure-portal/)) is referred to as Preview.
+ ### Listener SSL certificates This section allows you to list all the SSL certificate objects that are present on your application gateway. This view is the equivalent of running the PowerShell command `Get-AzApplicationGatewaySslCertificate -ApplicationGateway $AppGW` or the CLI command `az network application-gateway ssl-cert list --gateway-name --resource-group`.
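For example, the CLI variant filled in with illustrative names:

```azurecli
az network application-gateway ssl-cert list --gateway-name myAppGateway --resource-group myResourceGroup
```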
applied-ai-services Form Recognizer Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md
Previously updated : 02/10/2023 Last updated : 03/02/2023 monikerRange: 'form-recog-2.1.0' recommendations: false
Before attempting to run a Docker container in an offline environment, make sure
## Request access to use containers in disconnected environments
-Complete and submit the [request form](https://aka.ms/csdisconnectedcontainers) to request access to the containers disconnected from the Internet.
--
-Access is limited to customers that meet the following requirements:
-
-* Your organization should be identified as strategic customer or partner with Microsoft.
-* Disconnected containers are expected to run fully offline, hence your use cases must meet one of the following or similar requirements:
- * Environment or device(s) with zero connectivity to internet.
- * Remote location that occasionally has internet access.
- * Organization under strict regulation of not sending any kind of data back to cloud.
-* Application completed as instructed - Pay close attention to guidance provided throughout the application to ensure you provide all the necessary information required for approval.
-
-## Create a new resource and purchase a commitment plan
-
-1. Create a new [Form Recognizer resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) in the Azure portal.
-
-1. Enter the applicable information to create your resource. Be sure to select **Commitment tier disconnected containers** as your pricing tier.
-
- > [!NOTE]
- >
- > * You will only see the option to purchase a commitment tier if you have been approved by Microsoft.
-
- :::image type="content" source="../media/create-resource-offline-container.png" alt-text="A screenshot showing resource creation on the Azure portal.":::
-
-1. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
+Before you can use Form Recognizer containers in disconnected environments, you must first fill out and [submit a request form](../../../cognitive-services/containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments) and [purchase a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-commitment-plan-to-use-containers-in-disconnected-environments).
## Gather required parameters
This usage-logs endpoint returns a JSON response similar to the following exampl
} ```
-### Purchase a different commitment plan for disconnected containers
-
-Commitment plans for disconnected containers have a calendar year commitment period. When you purchase a plan, you're charged the full price immediately. During the commitment period, you can't change your commitment plan, however you can purchase more unit(s) at a pro-rated price for the remaining days in the year. You have until midnight (UTC) on the last day of your commitment, to end a commitment plan.
-
-You can choose a different commitment plan in the **Commitment tier pricing** settings of your resource under the **Resource Management** section.
-
-### End a commitment plan
-
-If you decide that you don't want to continue purchasing a commitment plan, you can set your resource's auto-renewal to **Do not auto-renew**. Your commitment plan expires on the displayed commitment end date. After this date, you won't be charged for the commitment plan. You can continue using the Azure resource to make API calls, charged at pay-as-you-go pricing. You have until midnight (UTC) on the last day of the year to end a commitment plan for disconnected containers. If you cancel at or before that time, there are no charges for the next year.
- ## Troubleshooting Run the container with an output mount and logging enabled. These settings enable the container generates log files that are helpful for troubleshooting issues that occur while starting or running the container.
Run the container with an output mount and logging enabled. These settings enabl
## Next steps
-[Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)
+* [Deploy the Sample Labeling tool to an Azure Container Instance (ACI)](../deploy-label-tool.md#deploy-with-azure-container-instances-aci)
+* [Change or end a commitment plan](../../../cognitive-services/containers/disconnected-containers.md#purchase-a-different-commitment-plan-for-disconnected-containers)
automanage Virtual Machines Custom Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/virtual-machines-custom-profile.md
The `azureSecurityBaselineAssignmentType` is the audit mode that you can choose
You can also specify an existing log analytics workspace by adding this setting to the configuration section of properties below: * "LogAnalytics/Workspace": "/subscriptions/**subscriptionId**/resourceGroups/**resourceGroupName**/providers/Microsoft.OperationalInsights/workspaces/**workspaceName**"
-* "LogAnalytics/Reprovision": false
-Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Reprovision` setting to true if you would like this log analytics workspace to be used in all cases. This means that any machine with this custom profile will use this workspace, even it is already connected to one. By default, the `LogAnalytics/Reprovision` is set to false. If your machine is already connected to a workspace, then that workspace will continue to be used. If it's not connected to a workspace, then the workspace specified in `LogAnalytics\Workspace` will be used.
+* "LogAnalytics/Behavior": false
+Specify your existing workspace in the `LogAnalytics/Workspace` line. Set the `LogAnalytics/Behavior` setting to true if you would like this Log Analytics workspace to be used in all cases, meaning that any machine with this custom profile will use this workspace even if it's already connected to one. By default, `LogAnalytics/Behavior` is set to false. If your machine is already connected to a workspace, that workspace will continue to be used. If it's not connected to a workspace, the workspace specified in `LogAnalytics/Workspace` will be used.
Also, you can add tags to resources specified in the custom profile like below:
automation Automation Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-availability-zones.md
description: This article provides an overview of Azure availability zones and r
keywords: automation availability zones. Previously updated : 03/16/2023 Last updated : 04/10/2023
Automation accounts currently support the following regions:
- Canada Central
- Central US
- China North 3
+- East Asia
- East US
- East US 2
- France Central
- Germany West Central
- Japan East
+- Korea Central
- North Europe
+- Norway East
- Qatar Central
- South Africa North
- South Central US
- South East Asia
+- Sweden Central
- UK South
- West Europe
- West US 2
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
description: This article provides information about deploying the extension-bas
Previously updated : 04/05/2023 Last updated : 04/10/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently deploy Hybrid Runbook Workers.
New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Locati
#### [Bicep template](#tab/bicep-template)
-You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md)
+You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md).
+
+Follow these steps as an example:
+
+1. Create a Hybrid Worker Group.
+1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
+1. Connect the Azure VM or Arc-enabled server to the Hybrid Worker Group you created above.
+1. Generate a new GUID and pass it as the name of the Hybrid Worker.
+1. Enable System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
```Bicep param automationAccount string
output output1 string = automationAccount_resource.properties.automationHybridSe
You can use an Azure Resource Manager (ARM) template to create a new Azure Windows VM and connect it to an existing Automation account and Hybrid Worker Group. To learn more about ARM templates, see [What are ARM templates?](../azure-resource-manager/templates/overview.md)
+Follow these steps as an example:
+
+1. Create a Hybrid Worker Group.
+1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
+1. Connect the Azure VM or Arc-enabled server to the Hybrid Worker Group you created above.
+1. Generate a new GUID and pass it as the name of the Hybrid Worker.
+1. Enable System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
++ **Review the template** ```json
To install and use Hybrid Worker extension using REST API, follow these steps. T
#### [Azure CLI](#tab/cli)
+You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension. Learn more about [Azure CLI](https://learn.microsoft.com/cli/azure/what-is-azure-cli).
+
+Follow these steps as an example:
+
+1. Create a Hybrid Worker Group.
+ ```azurecli-interactive
+ az automation hrwg create --automation-account-name accountName --resource-group groupName --name hybridrunbookworkergroupName
+ ```
+1. Create an Azure VM or Arc-enabled server and add it to the Hybrid Worker Group you created above. Use the following command to add an existing Azure VM or Arc-enabled server to the Hybrid Worker Group. Generate a new GUID and pass it as the `hybrid-runbook-worker-id` (see the sketch after these steps). To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal.
+
+ ```azurecli-interactive
+    az automation hrwg hrw create --automation-account-name accountName --resource-group groupName --hybrid-runbook-worker-group-name hybridRunbookWorkerGroupName --hybrid-runbook-worker-id hybridRunbookWorkerId --vm-resource-id vmResourceId
+ ```
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM
+
+ ```azurecli-interactive
+ az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
+    --settings '{"AutomationAccountURL": "<registration-url>"}' --enable-auto-upgrade true
+ ```
+1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
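Both the worker GUID from step 2 and the `<registration-url>` value used in the extension settings can be fetched from the CLI instead of the portal. A minimal sketch with illustrative resource names; the `automationHybridServiceUrl` query mirrors the Bicep output shown earlier and is an assumption about the CLI's response shape:

```azurecli-interactive
# Generate a GUID to use as the hybrid worker ID (step 2).
hybridRunbookWorkerId=$(uuidgen)

# Look up the VM resource ID without the portal.
vmResourceId=$(az vm show --resource-group groupName --name vmName --query id --output tsv)

# Fetch the registration URL for the extension settings (property name assumed
# to match the Bicep output's automationHybridServiceUrl).
registrationUrl=$(az automation account show --resource-group groupName --name accountName --query automationHybridServiceUrl --output tsv)
```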
+ **Manage Hybrid Worker Extension** - To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg)
After creating new Hybrid Runbook Worker, you must install the extension on the
#### [PowerShell](#tab/ps)
+You can use PowerShell cmdlets to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker Group and install the Hybrid Worker extension.
+
+Follow these steps as an example:
+
+1. Create a Hybrid Worker Group.
+
+ ```powershell-interactive
+ New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+ ```
+1. Create an Azure VM or Arc-enabled server and add it to the Hybrid Worker Group you created above. Use the following command to add an existing Azure VM or Arc-enabled server to the Hybrid Worker Group. Generate a new GUID and pass it as the name of the Hybrid Worker. To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal.
+
+ ```azurepowershell
+ New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
+ ```
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+
+1. Install Hybrid Worker Extension on the VM.
+
+ **Hybrid Worker extension settings**
+
+ ```powershell-interactive
+ $settings = @{
+ "AutomationAccountURL" = "<registrationurl>";
+ };
+ ```
+
+ **Azure VMs**
+
+ ```powershell
+ Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true/$false
+ ```
+ **Azure Arc-enabled VMs**
+
+ ```powershell
+ New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade
+ ```
+
+1. To confirm if the extension has been successfully installed on the VM, in **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension installed on the VM.
++
+**Manage Hybrid Worker Extension**
+ You can use the following PowerShell cmdlets to manage Hybrid Runbook Worker and Hybrid Runbook Worker groups: | PowerShell cmdlet | Description |
automation Migrate Existing Agent Based Hybrid Worker To Extension Based Workers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md
Title: Migrate an existing agent-based hybrid workers to extension-based-workers
description: This article provides information on how to migrate an existing agent-based hybrid worker to extension based workers. Previously updated : 04/05/2023 Last updated : 04/11/2023 #Customer intent: As a developer, I want to learn about extension so that I can efficiently migrate agent based hybrid workers to extension based workers.
For at-scale migration of multiple Agent based Hybrid Workers, you can also use
You can use the Bicep template to create a new Hybrid Worker group, create a new Azure Windows VM and add it to an existing Hybrid Worker Group. Learn more about [Bicep](../azure-resource-manager/bicep/overview.md).
+Follow these steps as an example:
+
+1. Create a Hybrid Worker Group.
+1. Create either an Azure VM or Arc-enabled server. Alternatively, you can also use an existing Azure VM or Arc-enabled server.
+1. Connect the Azure VM or Arc-enabled server to the Hybrid Worker Group you created above.
+1. Generate a new GUID and pass it as the name of the Hybrid Worker.
+1. Enable System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+1. To confirm that the extension is successfully installed, in the **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension.
+ ```Bicep
+ param automationAccount string
+ param automationAccountLocation string
output output1 string = automationAccount_resource.properties.automationHybridSe
You can use an Azure Resource Manager (ARM) template to create a new Azure Windows VM and connect it to an existing Automation account and Hybrid Worker Group. To learn more about ARM templates, see [What are ARM templates?](../azure-resource-manager/templates/overview.md)
+As an example, follow these steps:
+
+1. Create a Hybrid Worker Group.
+1. Create either an Azure VM or Arc-enabled server. Alternatively, use an existing Azure VM or Arc-enabled server.
+1. Connect the Azure VM or Arc-enabled server to the Hybrid Worker group created in step 1.
+1. Generate a new GUID and pass it as the name of the Hybrid Worker.
+1. Enable System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+1. To confirm that the extension is successfully installed, in the **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension.
+ **Review the template**
+
+ ```json
To install and use Hybrid Worker extension using REST API, follow these steps. T
#### [Azure CLI](#tab/cli)
+You can use Azure CLI to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker group, and install the Hybrid Worker extension. Learn more about [Azure CLI](https://learn.microsoft.com/cli/azure/what-is-azure-cli).
+
+As an example, follow these steps:
+
+1. Create a Hybrid Worker Group.
+ ```azurecli-interactive
+ az automation hrwg create --automation-account-name accountName --resource-group groupName --name hybridrunbookworkergroupName
+ ```
+1. Create an Azure VM or Arc-enabled server and add it to the Hybrid Worker group created above. Use the following command to add an existing Azure VM or Arc-enabled server to the Hybrid Worker group. Generate a new GUID and pass it as the `hybridRunbookWorkerId`. To fetch `vmResourceId`, go to the **Properties** tab of the VM in the Azure portal.
+
+ ```azurecli-interactive
+ az automation hrwg hrw create --automation-account-name accountName --resource-group groupName --hybrid-runbook-worker-group-name hybridRunbookWorkerGroupName --hybrid-runbook-worker-id hybridRunbookWorkerId
+ ```
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+
+ ```azurecli-interactive
+ az vm extension set --name HybridWorkerExtension --publisher Microsoft.Azure.Automation.HybridWorker --version 1.1 --vm-name <vmname> -g <resourceGroupName> \
+ --settings '{"AutomationAccountURL": "<registration-url>"}' --enable-auto-upgrade true
+ ```
+1. To confirm that the extension is successfully installed, in the **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension.
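+
+ As an alternative to the portal, you can query the extension's provisioning state from the CLI. This is a minimal sketch; the VM and resource group names are placeholders:
+
+ ```azurecli-interactive
+ # Shows the extension's provisioning state; expect "Succeeded"
+ az vm extension show --resource-group <resourceGroupName> --vm-name <vmname> --name HybridWorkerExtension --query provisioningState
+ ```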
+ **Manage Hybrid Worker Extension**
+
+ - To create, delete, and manage extension-based Hybrid Runbook Worker groups, see [az automation hrwg | Microsoft Docs](/cli/azure/automation/hrwg?view=azure-cli-latest)
After creating new Hybrid Runbook Worker, you must install the extension on the
#### [PowerShell](#tab/ps)
+You can use PowerShell cmdlets to create a new Hybrid Worker group, create a new Azure VM, add it to an existing Hybrid Worker group, and install the Hybrid Worker extension.
+
+As an example, follow these steps:
+
+1. Create a Hybrid Worker Group.
+
+ ```powershell-interactive
+ New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+ ```
+1. Create an Azure VM or Arc-enabled server and add it to the Hybrid Worker group created above. Use the following command to add an existing Azure VM or Arc-enabled server to the Hybrid Worker group. Generate a new GUID and pass it as the name of the Hybrid Worker. To fetch `vmResourceId`, go to the **Properties** tab of the VM in the Azure portal.
+
+ ```azurepowershell
+ New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01"
+ ```
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+1. Install Hybrid Worker Extension on the VM.
+
+ **Hybrid Worker extension settings**
+
+ ```powershell-interactive
+ $settings = @{
+ "AutomationAccountURL" = "<registrationurl>";
+ };
+ ```
+
+ **Azure VMs**
+
+ ```powershell
+ Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true # or $false to opt out of automatic upgrades
+ ```
+ **Azure Arc-enabled VMs**
+
+ ```powershell
+ New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForWindows -TypeHandlerVersion 1.1 -Setting $settings -NoWait -EnableAutomaticUpgrade
+ ```
+
+1. To confirm that the extension is successfully installed, in the **Azure portal**, go to the VM > **Extensions** tab and check the status of the Hybrid Worker extension.
+
+**Manage Hybrid Worker Extension**
+ You can use the following PowerShell cmdlets to manage Hybrid Runbook Workers and Hybrid Runbook Worker groups:
+
+ | PowerShell cmdlet | Description |
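+
+ As a sketch of what such management can look like with the Az.Automation module (account, group, and resource group names reuse the placeholders from the steps above):
+
+ ```powershell
+ # List the Hybrid Runbook Worker groups in an Automation account
+ Get-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -ResourceGroupName "ResourceGroup01"
+
+ # List the workers in a specific group
+ Get-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+
+ # Remove a worker group that's no longer needed
+ Remove-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01"
+ ```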
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
+
+ Title: Quickstart for using Azure App Configuration in Azure Kubernetes Service (preview) | Microsoft Docs
+description: "In this quickstart, create an Azure Kubernetes Service with an ASP.NET Core web app workload and use the Azure App Configuration Kubernetes Provider to load key-values from an App Configuration store."
+++
+ms.devlang: csharp
++ Last updated : 04/06/2023+
+#Customer intent: As an Azure Kubernetes Service user, I want to manage all my app settings in one place using Azure App Configuration.
++
+# Quickstart: Use Azure App Configuration in Azure Kubernetes Service (preview)
+
+In Kubernetes, you set up pods to consume configuration from ConfigMaps. ConfigMaps let you decouple configuration from your container images, making your applications easily portable. [Azure App Configuration Kubernetes Provider](https://mcr.microsoft.com/product/azure-app-configuration/kubernetes-provider/about) can construct ConfigMaps and Secrets from your key-values and Key Vault references in Azure App Configuration. It enables you to take advantage of Azure App Configuration for the centralized storage and management of your configuration without any changes to your application code.
+
+In this quickstart, you incorporate Azure App Configuration Kubernetes Provider in an Azure Kubernetes Service workload where you run a simple ASP.NET Core app consuming configuration from environment variables.
+
+## Prerequisites
+
+* An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).
+* An Azure Container Registry. [Create a registry](/azure/aks/tutorial-kubernetes-prepare-acr#create-an-azure-container-registry).
+* An Azure Kubernetes Service (AKS) cluster that is granted permission to pull images from your Azure Container Registry. [Create an AKS cluster](/azure/aks/tutorial-kubernetes-deploy-cluster#create-a-kubernetes-cluster).
+* [.NET Core SDK](https://dotnet.microsoft.com/download)
+* [Azure CLI](/cli/azure/install-azure-cli)
+* [Docker Desktop](https://www.docker.com/products/docker-desktop/)
+* [helm](https://helm.sh/docs/intro/install/)
+* [kubectl](https://kubernetes.io/docs/tasks/tools/)
+
+> [!TIP]
+> The Azure Cloud Shell is a free, interactive shell that you can use to run the command line instructions in this article. It has common Azure tools preinstalled, including the .NET Core SDK. If you're logged in to your Azure subscription, launch your [Azure Cloud Shell](https://shell.azure.com) from shell.azure.com. You can learn more about Azure Cloud Shell by [reading our documentation](../cloud-shell/overview.md).
+>
+
+## Create an application running in AKS
+In this section, you will create a simple ASP.NET Core web application running in Azure Kubernetes Service (AKS). The application reads configuration from the environment variables defined in a Kubernetes deployment. In the next section, you will enable it to consume configuration from Azure App Configuration without changing the application code. If you already have an AKS application that reads configuration from environment variables, you can skip this section and go to [Use App Configuration Kubernetes Provider](#use-app-configuration-kubernetes-provider).
+
+### Create an application
+
+1. Use the .NET Core command-line interface (CLI) and run the following command to create a new ASP.NET Core web app project in a new *MyWebApp* directory:
+
+ ```dotnetcli
+ dotnet new webapp --output MyWebApp --framework net6.0
+ ```
+
+1. Open *Index.cshtml* in the Pages directory, and update the content with the following code.
+
+ ```html
+ @page
+ @model IndexModel
+ @using Microsoft.Extensions.Configuration
+ @inject IConfiguration Configuration
+ @{
+ ViewData["Title"] = "Home page";
+ }
+
+ <style>
+ h1 {
+ color: @Configuration["Settings:FontColor"];
+ }
+ </style>
+
+ <div class="text-center">
+ <h1>@Configuration["Settings:Message"]</h1>
+ </div>
+ ```
+
+### Containerize the application
+
+1. Run the [dotnet publish](/dotnet/core/tools/dotnet-publish) command to build the app in release mode and create the assets in the *published* folder.
+
+ ```dotnetcli
+ dotnet publish -c Release -o published
+ ```
+
+1. Create a file named *Dockerfile* at the root of your project directory, open it in a text editor, and enter the following content. A Dockerfile is a text file that doesn't have an extension and that is used to create a container image.
+
+ ```dockerfile
+ FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS runtime
+ WORKDIR /app
+ COPY published/ ./
+ ENTRYPOINT ["dotnet", "MyWebApp.dll"]
+ ```
+
+1. Build a container image named *aspnetapp* by running the following command.
+
+ ```docker
+ docker build --tag aspnetapp .
+ ```
+
+### Push the image to Azure Container Registry
+
+1. Run the [az acr login](/cli/azure/acr#az-acr-login) command to log in to your container registry. The following example logs in to a registry named *myregistry*. Replace the registry name with yours.
+
+ ```azurecli
+ az acr login --name myregistry
+ ```
+
+ The command returns `Login Succeeded` once login is successful.
+
+1. Use [docker tag](https://docs.docker.com/engine/reference/commandline/tag/) to create a tag *myregistry.azurecr.io/aspnetapp:v1* for the image *aspnetapp*.
+
+ ```docker
+ docker tag aspnetapp myregistry.azurecr.io/aspnetapp:v1
+ ```
+
+ > [!TIP]
+ > To review the list of your existing docker images and tags, run `docker image ls`. In this scenario, you should see at least two images: `aspnetapp` and `myregistry.azurecr.io/aspnetapp`.
+
+1. Use [docker push](https://docs.docker.com/engine/reference/commandline/push/) to upload the image to the container registry. For example, the following command pushes the image to a repository named *aspnetapp* with tag *v1* under the registry *myregistry*.
+
+ ```docker
+ docker push myregistry.azurecr.io/aspnetapp:v1
+ ```
+
+### Deploy the application
+
+1. Create a *Deployment* directory in the root directory of your project.
+
+1. Add a *deployment.yaml* file to the *Deployment* directory with the following content to create a deployment. Replace the value of `template.spec.containers.image` with the image you created in the previous step.
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: aspnetapp-demo
+ labels:
+ app: aspnetapp-demo
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: aspnetapp-demo
+ template:
+ metadata:
+ labels:
+ app: aspnetapp-demo
+ spec:
+ containers:
+ - name: aspnetapp
+ image: myregistry.azurecr.io/aspnetapp:v1
+ ports:
+ - containerPort: 80
+ env:
+ - name: Settings__Message
+ value: "Message from the local configuration"
+ - name: Settings__FontColor
+ value: "Black"
+ ```
+
+1. Add a *service.yaml* file to the *Deployment* directory with the following content to create a LoadBalancer service.
+
+ ```yaml
+ apiVersion: v1
+ kind: Service
+ metadata:
+ name: aspnetapp-demo-service
+ spec:
+ type: LoadBalancer
+ ports:
+ - port: 80
+ selector:
+ app: aspnetapp-demo
+ ```
+
+1. Run the following command to deploy the application to the AKS cluster.
+
+ ```console
+ kubectl create namespace appconfig-demo
+ kubectl apply -f ./Deployment -n appconfig-demo
+ ```
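+
+ The double underscore in the environment variable names (for example, `Settings__Message`) is how hierarchical configuration keys such as `Settings:Message` are represented in environment variables for ASP.NET Core. To spot-check the variables inside the running pod, you can use something like the following sketch, which assumes the deployment and namespace names used in this quickstart:
+
+ ```console
+ kubectl exec -n appconfig-demo deploy/aspnetapp-demo -- printenv Settings__Message Settings__FontColor
+ ```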
+
+1. Run the following command to get the external IP address exposed by the LoadBalancer service.
+
+ ```console
+ kubectl get service aspnetapp-demo-service -n appconfig-demo
+ ```
+
+1. Open a browser window, and navigate to the IP address obtained in the previous step. The web page looks like this:
+
+ ![Screenshot showing Kubernetes Provider before using configMap.](./media/quickstarts/kubernetes-provider-app-launch-before.png)
+
+## Use App Configuration Kubernetes Provider
+
+Now that you have an application running in AKS, you'll deploy the App Configuration Kubernetes Provider to your AKS cluster running as a Kubernetes controller. The provider retrieves data from your App Configuration store and creates a ConfigMap, which is consumable as environment variables by your application.
+
+### Set up the Azure App Configuration store
+
+1. Add the following key-values to the App Configuration store and leave **Label** and **Content Type** with their default values. For more information about how to add key-values to a store using the Azure portal or the CLI, go to [Create a key-value](./quickstart-azure-app-configuration-create.md#create-a-key-value).
+
+ |**Key**|**Value**|
+ |---|---|
+ |Settings__FontColor|*Green*|
+ |Settings__Message|*Hello from Azure App Configuration*|
+
+1. [Enable the system-assigned managed identity on the Virtual Machine Scale Sets of your AKS cluster](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss#enable-system-assigned-managed-identity-on-an-existing-virtual-machine-scale-set). This allows the App Configuration Kubernetes Provider to use the managed identity to connect to your App Configuration store.
+
+1. Grant read access to your App Configuration store by [assigning the managed identity the App Configuration Data Reader role](/azure/azure-app-configuration/howto-integrate-azure-managed-service-identity#grant-access-to-app-configuration).
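+
+ If you prefer the CLI for the role assignment, a minimal sketch follows; the identity object ID and store resource ID are placeholders you look up first:
+
+ ```azurecli
+ # Assign the App Configuration Data Reader role to the cluster's managed identity
+ az role assignment create --assignee <managed-identity-object-id> --role "App Configuration Data Reader" --scope <app-configuration-store-resource-id>
+ ```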
+
+### Install App Configuration Kubernetes Provider to AKS cluster
+1. Run the following command to get access credentials for your AKS cluster. Replace the value of the `name` and `resource-group` parameters with your AKS instance:
+
+ ```console
+ az aks get-credentials --name <your-aks-instance-name> --resource-group <your-aks-resource-group>
+ ```
+
+1. Install Azure App Configuration Kubernetes Provider to your AKS cluster using `helm`:
+
+ ```console
+ helm install azureappconfiguration.kubernetesprovider \
+ oci://mcr.microsoft.com/azure-app-configuration/helmchart/kubernetes-provider \
+ --version 1.0.0-preview \
+ --namespace azappconfig-system \
+ --create-namespace
+ ```
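+
+ Before you continue, you can verify that the provider is running by listing the pods in its namespace (pod names may vary):
+
+ ```console
+ kubectl get pods -n azappconfig-system
+ ```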
+
+1. Add an *appConfigurationProvider.yaml* file to the *Deployment* directory with the following content to create an `AzureAppConfigurationProvider` resource. `AzureAppConfigurationProvider` is a custom resource that defines what data to download from an Azure App Configuration store and creates a ConfigMap.
+
+ Replace the value of the `endpoint` field with the endpoint of your Azure App Configuration store.
+
+ ```yaml
+ apiVersion: azconfig.io/v1beta1
+ kind: AzureAppConfigurationProvider
+ metadata:
+ name: appconfigurationprovider-sample
+ spec:
+ endpoint: <your-app-configuration-store-endpoint>
+ target:
+ configMapName: configmap-created-by-appconfig-provider
+ ```
+
+ > [!NOTE]
+ > `AzureAppConfigurationProvider` is a declarative API object. It defines the desired state of the ConfigMap created from the data in your App Configuration store with the following behavior:
+ >
+ > - The ConfigMap will fail to be created if a ConfigMap with the same name already exists in the same namespace.
+ > - The ConfigMap will be reset based on the present data in your App Configuration store if it's deleted or modified by any other means.
+ > - The ConfigMap will be deleted if the App Configuration Kubernetes Provider is uninstalled.
+
+2. Update the *deployment.yaml* file in the *Deployment* directory to use the ConfigMap `configmap-created-by-appconfig-provider` for environment variables.
+
+ Replace the `env` section
+ ```yaml
+ env:
+ - name: Settings__Message
+ value: "Message from the local configuration"
+ - name: Settings__FontColor
+ value: "Black"
+ ```
+ with
+ ```yaml
+ envFrom:
+ - configMapRef:
+ name: configmap-created-by-appconfig-provider
+ ```
+
+3. Run the following command to deploy the changes. Replace the namespace if you are using your existing AKS application.
+
+ ```console
+ kubectl apply -f ./Deployment -n appconfig-demo
+ ```
+
+4. Refresh the browser. The page shows updated content.
+
+ ![Screenshot showing Kubernetes Provider after using configMap.](./media/quickstarts/kubernetes-provider-app-launch-after.png)
+
+### Troubleshooting
+
+If you don't see your application picking up the data from your App Configuration store, run the following command to validate that the ConfigMap is created properly.
+
+```console
+kubectl get configmap configmap-created-by-appconfig-provider -n appconfig-demo
+```
+
+If the ConfigMap is not created properly, run the following command to get the data retrieval status.
+
+```console
+kubectl get AzureAppConfigurationProvider appconfigurationprovider-sample -n appconfig-demo -o yaml
+```
+
+If the Azure App Configuration Kubernetes Provider retrieved data from your App Configuration store successfully, the `phase` property under the status section of the output should be `COMPLETE`, as shown in the following example.
+
+```console
+$ kubectl get AzureAppConfigurationProvider appconfigurationprovider-sample -n appconfig-demo -o yaml
+
+apiVersion: azconfig.io/v1beta1
+kind: AzureAppConfigurationProvider
+ ... ... ...
+status:
+ lastReconcileTime: "2023-04-06T06:17:06Z"
+ lastSyncTime: "2023-04-06T06:17:06Z"
+ message: Complete sync settings to ConfigMap or Secret
+ phase: COMPLETE
+```
+
+If the phase is not `COMPLETE`, the data wasn't downloaded from your App Configuration store properly. Run the following command to show the logs of the Azure App Configuration Kubernetes Provider.
+
+```console
+kubectl logs deployment/az-appconfig-k8s-provider -n azappconfig-system
+```
+
+Use the logs for further troubleshooting. For example, if requests to your App Configuration store are rejected with *RESPONSE 403: 403 Forbidden*, it may indicate that the App Configuration Kubernetes Provider doesn't have the necessary permission to access your App Configuration store. Follow the instructions in [Set up the Azure App Configuration store](#set-up-the-azure-app-configuration-store) to ensure the managed identity is enabled and assigned the proper permission.
+
+## Clean up resources
+
+If you want to keep the AKS cluster, uninstall only the App Configuration Kubernetes Provider:
+
+```console
+helm uninstall azureappconfiguration.kubernetesprovider --namespace azappconfig-system
+```
++
+## Summary
+
+In this quickstart, you:
+
+* Created an application running in Azure Kubernetes Service (AKS).
+* Connected your AKS cluster to your App Configuration store using the App Configuration Kubernetes Provider.
+* Created a ConfigMap with data from your App Configuration store.
+* Ran the application with configuration from your App Configuration store without changing your application code.
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Previously updated : 06/13/2022 Last updated : 04/04/2023
Azure failover groups use the same distributed availability groups technology th
> [!NOTE]
> - The Azure Arc-enabled SQL Managed Instances in both geo-primary and geo-secondary sites need to be identical in terms of their compute and capacity, as well as the service tiers they are deployed in.
-> - Distributed availability groups can be setup for either General Purpose or Business Critical service tiers.
+> - Distributed availability groups can be set up for either General Purpose or Business Critical service tiers.
-To configure an Azure failover group:
+## Prerequisites
+
+The following prerequisites must be met before setting up failover groups between two Azure Arc-enabled SQL managed instances:
+
+- An Azure Arc data controller and an Arc-enabled SQL managed instance provisioned at the primary site with `--license-type` as one of `BasePrice` or `LicenseIncluded`.
+- An Azure Arc data controller and an Arc-enabled SQL managed instance provisioned at the secondary site with a configuration identical to the primary in terms of:
+ - CPU
+ - Memory
+ - Storage
+ - Service tier
+ - Collation
+ - Other instance settings
+- The instance at the secondary site requires `--license-type` set to `DisasterRecovery`.
+
+> [!NOTE]
+> - It is important to specify the `--license-type` **during** the Azure Arc-enabled SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
+
+## Deployment process
+
+To set up an Azure failover group between two Azure Arc-enabled SQL managed instances, complete the following steps:
1. Create custom resource for distributed availability group at the primary site
1. Create custom resource for distributed availability group at the secondary site
-1. Copy the binary data from the mirroring certificates
+1. Copy the binary data from the mirroring certificates
1. Set up the distributed availability group between the primary and secondary sites
+ either in `sync` mode or `async` mode
The following image shows a properly configured distributed availability group:
-![A properly configured distributed availability group](.\media\business-continuity\dag.png)
+![Diagram showing a properly configured distributed availability group](.\media\business-continuity\distributed-availability-group.png)
+
+## Synchronization modes
+
+Failover groups in Azure Arc data services support two synchronization modes - `sync` and `async`. The synchronization mode directly impacts how the data is synchronized between the Azure Arc-enabled SQL managed instances, and potentially the performance on the primary managed instance.
+
+If the primary and secondary sites are within a few miles of each other, use `sync` mode. Otherwise, use `async` mode to avoid any performance impact on the primary site.
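+
+For example, the synchronization mode can be specified when the failover group is created. The following is a sketch; it assumes the `--partner-sync-mode` parameter (used later in this article for failover operations) is also accepted at creation time:
+
+```azurecli
+az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name --partner-sync-mode sync
+```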
-### Configure Azure failover group
+## Configure Azure failover group - direct mode
+
+Follow the steps below if the Azure Arc data services are deployed in directly connected mode.
+
+Once the prerequisites are met, run the following command to set up an Azure failover group between the two Azure Arc-enabled SQL managed instances:
+
+```azurecli
+az sql instance-failover-group-arc create --name <name of failover group> --mi <primary SQL MI> --partner-mi <Partner MI> --resource-group <name of RG> --partner-resource-group <name of partner MI RG>
+```
+
+Example:
+
+```azurecli
+az sql instance-failover-group-arc create --name sql-fog --mi sql1 --partner-mi sql2 --resource-group rg-name --partner-resource-group rg-name
+```
+
+The above command:
+
+1. Creates the required custom resources on both primary and secondary sites
+1. Copies the mirroring certificates and configures the failover group between the instances
+
+## Configure Azure failover group - indirect mode
+
+Follow the steps below if Azure Arc data services are deployed in indirectly connected mode.
1. Provision the managed instance in the primary site.
The following image shows a properly configured distributed availability group:
2. Switch context to the secondary cluster by running `kubectl config use-context <secondarycluster>` and provision the managed instance in the secondary site that will be the disaster recovery instance. At this point, the system databases are not part of the contained availability group.
-> [!NOTE]
-> - It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
--
+ > [!NOTE]
+ > It is important to specify `--license-type DisasterRecovery` **during** the Azure Arc-enabled SQL MI creation. This will allow the DR instance to be seeded from the primary instance in the primary data center. Updating this property post deployment will not have the same effect.
```azurecli
az sql mi-arc create --name <secondaryinstance> --tier bc --replicas 3 --license-type DisasterRecovery --k8s-namespace <namespace> --use-k8s
```
-3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Arc SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
+3. Mirroring certificates - The binary data inside the Mirroring Certificate property of the Azure Arc-enabled SQL MI is needed for the Instance Failover Group CR (Custom Resource) creation.
This can be achieved in a few ways:
- (a) If using ```az``` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed post FOG creation.
+ (a) If using `az` CLI, generate the mirroring certificate file first, and then point to that file while configuring the Instance Failover Group so the binary data is read from the file and copied over into the CR. The cert files are not needed after failover group creation.
- (b) If using ```kubectl```, directly copy and paste the binary data from the Arc SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
+ (b) If using `kubectl`, directly copy and paste the binary data from the Azure Arc-enabled SQL MI CR into the yaml file that will be used to create the Instance Failover Group.
Using (a) above:
The following image shows a properly configured distributed availability group:
> Ensure the SQL instances have different names for both primary and secondary sites, and the `shared-name` value should be identical on both sites. ```azurecli
- az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary DAG resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for primary failover group resource> --mi <local SQL managed instance name> --role primary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<secondary IP> --partner-mirroring-cert-file <secondary.pem> --k8s-namespace <namespace> --use-k8s
```

Example:
The following image shows a properly configured distributed availability group:
az sql instance-failover-group-arc create --shared-name myfog --name primarycr --mi sqlinstance1 --role primary --partner-mi sqlinstance2 --partner-mirroring-url tcp://10.20.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance2.pem --k8s-namespace my-namespace --use-k8s
```
- On the secondary instance, run the following command to setup the FOG CR. The ```--partner-mirroring-cert-file``` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
+ On the secondary instance, run the following command to set up the failover group custom resource. The `--partner-mirroring-cert-file` in this case should point to a path that has the mirroring certificate file generated from the primary instance as described in 3(a) above.
```azurecli
- az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary DAG resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
+ az sql instance-failover-group-arc create --shared-name <name of failover group> --name <name for secondary failover group resource> --mi <local SQL managed instance name> --role secondary --partner-mi <partner SQL managed instance name> --partner-mirroring-url tcp://<primary IP> --partner-mirroring-cert-file <primary.pem> --k8s-namespace <namespace> --use-k8s
```

Example:
The following image shows a properly configured distributed availability group:
az sql instance-failover-group-arc create --shared-name myfog --name secondarycr --mi sqlinstance2 --role secondary --partner-mi sqlinstance1 --partner-mirroring-url tcp://10.10.5.20:970 --partner-mirroring-cert-file $HOME/sqlcerts/sqlinstance1.pem --k8s-namespace my-namespace --use-k8s
```
-## Manual failover from primary to secondary instance
+## Retrieve Azure failover group health state
+
+Information about the failover group such as primary role, secondary role, and the current health status can be viewed on the custom resource on either primary or secondary site.
+
+Run the following command on the primary and/or the secondary site to list the failover group custom resources:
+
+```console
+kubectl get fog -n <namespace>
+```
+
+Describe the custom resource to retrieve the failover group status, as follows:
+
+```console
+kubectl describe fog <failover group cr name> -n <namespace>
+```
+
+## Failover group operations
+
+Once the failover group is set up between the managed instances, different failover operations can be performed depending on the circumstances.
+
+Possible failover scenarios are:
+
+- The Azure Arc-enabled SQL managed instances at both sites are in healthy state and a failover needs to be performed:
+ + Perform a manual failover from primary to secondary without data loss by setting `role=secondary` on the primary SQL MI.
+
+- Primary site is unhealthy/unreachable and a failover needs to be performed:
+
+ + The primary Azure Arc-enabled SQL managed instance is down/unhealthy/unreachable.
+ + The secondary Azure Arc-enabled SQL managed instance needs to be force-promoted to primary with potential data loss.
+ + When the original primary Azure Arc-enabled SQL managed instance comes back online, it reports as `Primary` role and unhealthy state, and needs to be forced into a `secondary` role so it can join the failover group and data can be synchronized.
+
+
+## Manual failover (without data loss)
-Use `az sql instance-failover-group-arc ...` to initiate a failover from primary to secondary. The following command initiates a failover from the primary instance to the secondary instance. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
+Use the `az sql instance-failover-group-arc update` command to initiate a failover from primary to secondary. Any pending transactions on the geo-primary instance are replicated over to the geo-secondary instance before the failover.
+
+### Directly connected mode
+Run the following command to initiate a manual failover in directly connected mode using ARM APIs:
+
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <primary Azure Arc-enabled SQL MI> --role secondary --resource-group <resource group>
+```
+Example:
+
+```azurecli
+az sql instance-failover-group-arc update --name myfog --mi sqlmi1 --role secondary --resource-group myresourcegroup
+```
+### Indirectly connected mode
+Run the following command to initiate a manual failover in indirectly connected mode using Kubernetes APIs:
```azurecli
-az sql instance-failover-group-arc update --name <name of DAG resource> --role secondary --k8s-namespace <namespace> --use-k8s
+az sql instance-failover-group-arc update --name <name of failover group resource> --role secondary --k8s-namespace <namespace> --use-k8s
``` Example:
Example:
az sql instance-failover-group-arc update --name myfog --role secondary --k8s-namespace my-namespace --use-k8s
```
-## Forced failover
+## Forced failover with data loss
If the geo-primary instance becomes unavailable, run the following commands on the geo-secondary DR instance to promote it to primary with a forced failover, incurring potential data loss.
-Run the below command on geo-primary, if available:
+On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss.
+
+> [!NOTE]
+> If the `--partner-sync-mode` was configured as `sync`, it needs to be reset to `async` when the secondary is promoted to primary.
+
+### Directly connected mode
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <secondary Azure Arc-enabled SQL MI> --role force-primary-allow-data-loss --resource-group <resource group> --partner-sync-mode async
+```
+Example:
```azurecli
-az sql instance-failover-group-arc update --k8s-namespace my-namespace --name primarycr --use-k8s --role force-secondary
+az sql instance-failover-group-arc update --name myfog --mi sqlmi2 --role force-primary-allow-data-loss --resource-group myresourcegroup --partner-sync-mode async
```
-On the geo-secondary DR instance, run the following command to promote it to primary role, with data loss.
+### Indirectly connected mode
+```azurecli
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss --partner-sync-mode async
+```
+
+When the geo-primary Azure Arc-enabled SQL MI instance becomes available, run the following command to bring it into the failover group and synchronize the data:
+
+### Directly connected mode
+```azurecli
+az sql instance-failover-group-arc update --name <shared name of failover group> --mi <old primary Azure Arc-enabled SQL MI> --role force-secondary --resource-group <resource group>
+```
+### Indirectly connected mode
```azurecli
-az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-primary-allow-data-loss
+az sql instance-failover-group-arc update --k8s-namespace my-namespace --name secondarycr --use-k8s --role force-secondary
```
-## Limitation
+Optionally, you can configure `--partner-sync-mode` back to `sync` mode.
+
+At this point, if you plan to continue running the production workload from the secondary site, the `--license-type` needs to be updated to either `BasePrice` or `LicenseIncluded` to initiate billing for the vCores consumed.
+
+## Next steps
-When you use [SQL Server Management Studio Object Explorer to create a database](/sql/relational-databases/databases/create-a-database#SSMSProcedure), the application returns an error. You can [create new databases with T-SQL](/sql/relational-databases/databases/create-a-database#TsqlProcedure).
+[Overview: Azure Arc-enabled SQL Managed Instance business continuity](managed-instance-business-continuity-overview.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## April 12, 2023
+
+### Image tag
+
+`v1.18.0_2023-04-11`
+
+For complete release version information, see [Version log](version-log.md#april-11-2023).
+
+New for this release:
+
+- Azure Arc-enabled SQL Managed Instance
+ - Direct mode for failover groups is generally available in the az CLI
+
+- Arc PostgreSQL
+ - Ensure PostgreSQL extensions work per database/role
+ - Upload metrics/logs to Azure Monitor
## March 14, 2023

### Image tag
azure-arc Sizing Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/sizing-guidance.md
## Overview of sizing guidance
-When planning for the deployment of Azure Arc data services you should plan for the correct amount of compute, memory, and storage that will be required to run the Azure Arc data controller and for the number of SQL managed instance and PostgreSQL servers that you will be deploying. Because Azure Arc-enabled data services is deployed on Kubernetes, you have the flexibility of adding additional capacity to your Kubernetes cluster over time by adding additional compute nodes or storage. This guide will provide guidance on minimum requirements as well as provide guidance on recommended sizes for some common requirements.
+When planning for the deployment of Azure Arc data services, plan the correct amount of:
+
+- Compute
+- Memory
+- Storage
+
+These resources are required for:
+
+- The data controller
+- SQL managed instances
+- PostgreSQL servers
+
+Because Azure Arc-enabled data services deploy on Kubernetes, you have the flexibility of adding more capacity to your Kubernetes cluster over time by adding compute nodes or storage. This guide explains minimum requirements and recommends sizes for some common requirements.
## General sizing requirements
When planning for the deployment of Azure Arc data services you should plan for
Core counts must be integer values greater than or equal to one.
-When using Azure CLI (az) for deployment the memory values should be specified in a power of two number - i.e. using the suffixes: Ki, Mi, or Gi.
+When you deploy with Azure CLI (az), use a power of two number to set the memory values. Specifically, use the suffixes:
+
+- `Ki`
+- `Mi`
+- `Gi`
Limit values, if specified, must always be greater than the request value.
Limit values for cores are the billable metric on SQL managed instance and Postg
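
As an illustration of these units, a request and limit specification at instance creation might look like the following sketch. It assumes the `az sql mi-arc create` sizing parameters (`--cores-request`, `--cores-limit`, `--memory-request`, `--memory-limit`); the instance name and namespace are placeholders:

```azurecli
az sql mi-arc create --name sqlinstance1 --cores-request 2 --cores-limit 4 --memory-request 4Gi --memory-limit 8Gi --k8s-namespace arc --use-k8s
```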
## Minimum deployment requirements
-A minimum size Azure Arc-enabled data services deployment could be considered to be the Azure Arc data controller plus one SQL managed instance plus one PostgreSQL server. For this configuration, you need at least 16 GB of RAM and 4 cores of _available_ capacity on your Kubernetes cluster. You should ensure that you have a minimum Kubernetes node size of 8 GB RAM and 4 cores and a sum total capacity of 16 GB RAM available across all of your Kubernetes nodes. For example, you could have 1 node at 32 GB RAM and 4 cores or you could have 2 nodes with 16GB RAM and 4 cores each.
+A minimum size Azure Arc-enabled data services deployment consists of the Azure Arc data controller plus one SQL managed instance and one PostgreSQL server. For this configuration, you need at least 16-GB RAM and 4 cores of _available_ capacity on your Kubernetes cluster. Ensure that you have a minimum Kubernetes node size of 8-GB RAM and 4 cores, and a sum total capacity of 16-GB RAM available across all of your Kubernetes nodes. For example, you could have 1 node at 32-GB RAM and 4 cores, or 2 nodes with 16-GB RAM and 4 cores each.
See the [storage-configuration](storage-configuration.md) article for details on storage sizing.
See the [storage-configuration](storage-configuration.md) article for details on
The data controller is a collection of pods that are deployed to your Kubernetes cluster to provide an API, the controller service, the bootstrapper, and the monitoring databases and dashboards. This table describes the default values for memory and CPU requests and limits.
-|Pod name|CPU request|Memory request|CPU limit|Memory limit|Notes|
-|||||||
-|**bootstrapper**|100m|100Mi|200m|200Mi||
-|**control**|400m|2Gi|1800m|2Gi||
-|**controldb**|200m|3Gi|800m|6Gi||
-|**logsdb**|200m|1600Mi|2|1600Mi||
-|**logsui**|100m|500Mi|2|2Gi||
-|**metricsdb**|200m|800Mi|400m|2Gi||
-|**metricsdc**|100m|200Mi|200m|300Mi|Metricsdc is a daemonset which is created on each of the Kubernetes nodes in your cluster. The numbers in the table here are _per node_. If you set allowNodeMetricsCollection = false in your deployment profile file before creating the data controller, the metricsdc daemonset will not be created.|
-|**metricsui**|20m|200Mi|500m|200Mi||
+|Pod name|CPU request|Memory request|CPU limit|Memory limit|
+||||||
+|**`bootstrapper`**|`100m`|`100Mi`|`200m`|`200Mi`|
+|**`control`**|`400m`|`2Gi`|`1800m`|`2Gi`|
+|**`controldb`**|`200m`|`4Gi`|`800m`|`6Gi`|
+|**`logsdb`**|`200m`|`1600Mi`|`2`|`1600Mi`|
+|**`logsui`**|`100m`|`500Mi`|`2`|`2Gi`|
+|**`metricsdb`**|`200m`|`800Mi`|`400m`|`2Gi`|
+|**`metricsdc`**|`100m`|`200Mi`|`200m`|`300Mi`|
+|**`metricsui`**|`20m`|`200Mi`|`500m`|`200Mi`|
-You can override the default settings for the controldb and control pods in your deployment profile file or datacontroller YAML file. Example:
+`metricsdc` is a `daemonset`, which is created on each of the Kubernetes nodes in your cluster. The numbers in the table are _per node_. If you set `allowNodeMetricsCollection = false` in your deployment profile file before you create the data controller, this `daemonset` isn't created.
+
+You can override the default settings for the `controldb` and control pods in your data controller YAML file. Example:
```yaml resources:
Each SQL managed instance must have the following minimum resource requests and
|Service tier|General Purpose|Business Critical| ||||
-|CPU request|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4|
-|CPU limit|Minimum: 1; Maximum: 24; Default: 2|Minimum: 1; Maximum: unlimited; Default: 4|
-|Memory request|Minimum: 2Gi; Maxium: 128Gi; Default: 4Gi|Minimum: 2Gi; Maxium: unlimited; Default: 4Gi|
-|Memory limit|Minimum: 2Gi; Maxium: 128Gi; Default: 4Gi|Minimum: 2Gi; Maxium: unlimited; Default: 4Gi|
+|CPU request|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 1<br/> Maximum: unlimited<br/> Default: 4|
+|CPU limit|Minimum: 1<br/> Maximum: 24<br/> Default: 2|Minimum: 1<br/> Maximum: unlimited<br/> Default: 4|
+|Memory request|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`|
+|Memory limit|Minimum: `2Gi`<br/> Maximum: `128Gi`<br/> Default: `4Gi`|Minimum: `2Gi`<br/> Maximum: unlimited<br/> Default: `4Gi`|
Each SQL managed instance pod that is created has three containers:

|Container name|CPU Request|Memory Request|CPU Limit|Memory Limit|Notes|
|---|---|---|---|---|---|
-|fluentbit|100m|100Mi|Not specified|Not specified|The fluentbit container resource requests are _in addition to_ the requests specified for the SQL managed instance.|
-|arc-sqlmi|User specified or not specified.|User specified or not specified.|User specified or not specified.|User specified or not specified.|
-|collectd|Not specified|Not specified|Not specified|Not specified|
+|`fluentbit`|`100m`|`100Mi`|Not specified|Not specified|The `fluentbit` container resource requests are _in addition to_ the requests specified for the SQL managed instance.|
+|`arc-sqlmi`|User specified or not specified.|User specified or not specified.|User specified or not specified.|User specified or not specified.|
+|`collectd`|Not specified|Not specified|Not specified|Not specified|
-The default volume size for all persistent volumes is 5Gi.
+The default volume size for all persistent volumes is `5Gi`.
## PostgreSQL server sizing details

Each PostgreSQL server node must have the following minimum resource requests:
-- Memory: 256Mi
+- Memory: `256Mi`
- Cores: 1

Each PostgreSQL server pod that is created has three containers:

|Container name|CPU Request|Memory Request|CPU Limit|Memory Limit|Notes|
|---|---|---|---|---|---|
-|fluentbit|100m|100Mi|Not specified|Not specified|The fluentbit container resource requests are _in addition to_ the requests specified for the PostgreSQL server.|
-|postgres|User specified or not specified.|User specified or 256Mi (default).|User specified or not specified.|User specified or not specified.||
-|arc-postgresql-agent|Not specified|Not specified|Not specified|Not specified||
+|`fluentbit`|`100m`|`100Mi`|Not specified|Not specified|The `fluentbit` container resource requests are _in addition to_ the requests specified for the PostgreSQL server.|
+|`postgres`|User specified or not specified.|User specified or `256Mi` (default).|User specified or not specified.|User specified or not specified.||
+|`arc-postgresql-agent`|Not specified|Not specified|Not specified|Not specified||
## Cumulative sizing
-The overall size of an environment required for Azure Arc-enabled data services is primarily a function of the number and size of the database instances that will be created. The overall size can be difficult to predict ahead of time knowing that the number of instances will grow and shrink and the amount of resources that are required for each database instance will change.
+The overall size of an environment required for Azure Arc-enabled data services is primarily a function of the number and size of the database instances. The overall size can be difficult to predict ahead of time because the number of instances may grow and shrink, and the amount of resources required for each database instance can change.
-The baseline size for a given Azure Arc-enabled data services environment is the size of the data controller which requires 4 cores and 16 GB of RAM. From there you can add on top the cumulative total of cores and memory required for the database instances. For SQL managed instance the number of pods is equal to the number of SQL managed instances that are created. For PostgreSQL servers the number of pods is equal to the number of PostgreSQL servers that are created.
+The baseline size for a given Azure Arc-enabled data services environment is the size of the data controller, which requires 4 cores and 16-GB RAM. From there, add the cumulative total of cores and memory required for the database instances. SQL Managed Instance requires one pod for each instance. PostgreSQL server creates one pod for each server.
-In addition to the cores and memory you request for each database instance, you should add 250m of cores and 250Mi of RAM for the agent containers.
+In addition to the cores and memory you request for each database instance, you should add `250m` of cores and `250Mi` of RAM for the agent containers.
-The following is an example sizing calculation.
+### Example sizing calculation
Requirements:
-- **"SQL1"**: 1 SQL managed instance with 16 GB of RAM, 4 cores
-- **"SQL2"**: 1 SQL managed instance with 256 GB of RAM, 16 cores
-- **"Postgres1"**: 1 PostgreSQL server at 12 GB of RAM, 4 cores
+- **"SQL1"**: 1 SQL managed instance with 16-GB RAM, 4 cores
+- **"SQL2"**: 1 SQL managed instance with 256-GB RAM, 16 cores
+- **"Postgres1"**: 1 PostgreSQL server at 12-GB RAM, 4 cores
Sizing calculations:
-- The size of "SQL1" is: 1 pod * ([16 Gi RAM, 4 cores] + [250Mi RAM, 250m cores]) for the agents per pod = 16.25 Gi RAM, 4.25 cores.
-- The size of "SQL2" is: 1 pod * ([256 Gi RAM, 16 cores] + [250Mi RAM, 250m cores]) for the agents per pod = 256.25 Gi RAM, 16.25 cores.
-- The total size of SQL 1 and SQL 2 is: (16.25 GB + 256.25 Gi) = 272.5 GB RAM, (4.25 cores + 16.25 cores) = 20.5 cores.
-
-- The size of "Postgres1" is: 1 pod * ([12 GB RAM, 4 cores] + [250Mi RAM, 250m cores]) for the agents per pod = 12.25 GB RAM, 4.25 cores.
-
-- The total capacity required for the database instances is: 272.5 GB RAM, 20.5 cores for SQL + 12.25 GB RAM, 4.25 cores for PostgreSQL server = 284.75 GB RAM, 24.75 cores.
-
-- The total capacity required for the database instances plus the data controller is: 284.75 GB RAM, 24.75 cores for the database instances + 16 GB RAM, 4 cores for the data controller = 300.75 GB RAM, 28.75 cores.
+- The size of "SQL1" is: `1 pod * ([16Gi RAM, 4 cores] + [250Mi RAM, 250m cores] for the agents per pod)` = `16.25Gi` RAM, 4.25 cores.
+- The size of "SQL2" is: `1 pod * ([256Gi RAM, 16 cores] + [250Mi RAM, 250m cores] for the agents per pod)` = `256.25Gi` RAM, 16.25 cores.
+- The total size of SQL 1 and SQL 2 is:
+ - `(16.25 GB + 256.25 GB) = 272.5-GB RAM`
+ - `(4.25 cores + 16.25 cores) = 20.5 cores`
+
+- The size of "Postgres1" is: `1 pod * ([12Gi RAM, 4 cores] + [250Mi RAM, 250m cores] for the agents per pod)` = `12.25Gi` RAM, 4.25 cores.
+
+- The total capacity required for the database instances:
+ - For SQL: 272.5-GB RAM, 20.5 cores
+ - For PostgreSQL server: 12.25-GB RAM, 4.25 cores
+ - In total: 284.75-GB RAM, 24.75 cores
+
+- The total capacity required for the database instances plus the data controller:
+ - For the database instances: 284.75-GB RAM, 24.75 cores
+ - For the data controller: 16-GB RAM, 4 cores
+ - In total: 300.75-GB RAM, 28.75 cores
See the [storage-configuration](storage-configuration.md) article for details on storage sizing. ## Other considerations
-Keep in mind that a given database instance size request for cores or RAM cannot exceed the available capacity of the Kubernetes nodes in the cluster. For example, if the largest Kubernetes node you have in your Kubernetes cluster is 256 GB of RAM and 24 cores, you will not be able to create a database instance with a request of 512 GB of RAM and 48 cores.
+Keep in mind that a given database instance size request for cores or RAM cannot exceed the available capacity of the Kubernetes nodes in the cluster. For example, if the largest Kubernetes node you have in your Kubernetes cluster is 256-GB RAM and 24 cores, you can't create a database instance with a request of 512-GB RAM and 48 cores.
+
+Maintain at least 25% of available capacity across the Kubernetes nodes. This capacity allows Kubernetes to:
-It is a good idea to maintain at least 25% of available capacity across the Kubernetes nodes to allow Kubernetes to efficiently schedule pods to be created and to allow for elastic scaling, allow for rolling upgrades of the Kubernetes nodes, and longer term growth on demand.
+- Efficiently schedule pods to be created
+- Enable elastic scaling
+- Support rolling upgrades of the Kubernetes nodes
+- Facilitate longer-term growth on demand
-In your sizing calculations, don't forget to add in the resource requirements of the Kubernetes system pods and any other workloads which may be sharing capacity with Azure Arc-enabled data services on the same Kubernetes cluster.
+In your sizing calculations, add the resource requirements of the Kubernetes system pods and any other workloads that may be sharing capacity with Azure Arc-enabled data services on the same Kubernetes cluster.
-To maintain high availability during planned maintenance and disaster continuity, you should plan for at least one of the Kubernetes nodes in your cluster to be unavailable at any given point in time. Kubernetes will attempt to reschedule the pods that were running on a given node that was taken down for maintenance or due to a failure. If there is no available capacity on the remaining nodes those pods will not be rescheduled for creation until there is available capacity again. Be extra careful with large database instances. For example, if there is only one Kubernetes node big enough to meet the resource requirements of a large database instance and that node fails then Kubernetes will not be able to schedule that database instance pod onto another Kubernetes node.
+To maintain high availability during planned maintenance and disaster continuity, plan for at least one of the Kubernetes nodes in your cluster to be unavailable at any given point in time. Kubernetes attempts to reschedule the pods that were running on a given node that was taken down for maintenance or due to a failure. If there's no available capacity on the remaining nodes, those pods won't be rescheduled for creation until there's available capacity again. Be extra careful with large database instances. For example, if there's only one Kubernetes node big enough to meet the resource requirements of a large database instance and that node fails, then Kubernetes won't schedule that database instance pod onto another Kubernetes node.
Keep the [maximum limits for a Kubernetes cluster size](https://kubernetes.io/docs/setup/best-practices/cluster-large/) in mind.
-Your Kubernetes administrator may have set up [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) on your namespace/project. Keep these in mind when planning your database instance sizes.
+Your Kubernetes administrator may have set up [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) on your namespace/project. Keep these quotas in mind when planning your database instance sizes.
azure-arc Troubleshoot Managed Instance Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-managed-instance-configuration.md
+
+ Title: Troubleshoot configuration - Azure Arc-enabled SQL Managed Instance
+description: Describes how to troubleshoot configuration. Includes steps to provide configuration files for Azure Arc-enabled SQL Managed Instance in Azure Arc-enabled data services.
+++ Last updated : 04/10/2023++
+# User-provided configuration files
+
+Arc data services provide management of configuration settings and files in the system. The system generates configuration files such as `mssql.conf`, `mssql.json`, `krb5.conf` using the user-provided settings in the custom resource spec and some system-determined settings. The scope of what settings are supported and what changes can be made to the configuration files using the custom resource spec evolves over time. You may need to try changes in the configuration files that aren't possible through the settings on the custom resource spec.
+
+To alleviate this problem, you can provide configuration file content for a selected set of files through a Kubernetes `ConfigMap`. The information in the `ConfigMap` effectively overrides the file content that the system would have otherwise generated. This content allows you to try some configuration settings.
+
+For Arc SQL Managed Instance, the supported configuration files that you can override using this method are:
+
+- `mssql.conf`
+- `mssql.json`
+- `krb5.conf`
+
+## Steps to provide override configuration files
+
+1. Prepare the content of the configuration file
+
+ Prepare the content of the file that you would like to provide an override for.
+
+1. Create a `ConfigMap`
+
+ Create a `ConfigMap` spec to store the content of the configuration file. The key in the `ConfigMap` dictionary should be the name of the file, and the value should be the content.
+
+ You can provide file overrides for multiple configuration files in one `ConfigMap`.
+
+ The `ConfigMap` must be in the same namespace as the SQL Managed Instance.
+
+ The following spec shows an example of how to provide an override for the `mssql.conf` file:
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: sqlmifo-cm
+ namespace: test
+ data:
+ mssql.conf: "[language]\r\nlcid = 1033\r\n\r\n[licensing]\r\npid = GeneralPurpose\r\n\r\n[network]\r\nforceencryption = 0\r\ntlscert = /var/run/secrets/managed/certificates/mssql/mssql-certificate.pem\r\ntlsciphers = ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384\r\ntlskey = /var/run/secrets/managed/certificates/mssql/mssql-privatekey.pem\r\ntlsprotocols = 1.2\r\n\r\n[sqlagent]\r\nenabled = False\r\n\r\n[telemetry]\r\ncustomerfeedback = false\r\n\r\n"
+ ```
+
+ Apply the `ConfigMap` in Kubernetes using `kubectl apply -f <filename>`.
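+
+   A minimal sketch, assuming the spec above is saved in a file named `sqlmifo-cm.yaml` (the file name is arbitrary):
+
+   ```bash
+   # Apply the ConfigMap; the target namespace comes from the metadata in the spec.
+   kubectl apply -f sqlmifo-cm.yaml
+   ```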
+
+1. Provide the name of the `ConfigMap` in the SQL Managed Instance spec
+
+   In the SQL Managed Instance spec, provide the name of the `ConfigMap` in the `spec.fileOverrideConfigMap` field.
+
+ The SQL Managed Instance `apiVersion` must be at least v12 (released in April 2023).
+
+   The following SQL Managed Instance spec shows an example of how to provide the name of the `ConfigMap`:
+
+   ```yaml
+ apiVersion: sql.arcdata.microsoft.com/v12
+ kind: SqlManagedInstance
+ metadata:
+ name: sqlmifo
+ namespace: test
+ spec:
+ fileOverrideConfigMap: sqlmifo-cm
+ ...
+ ```
+
+   Apply the SQL Managed Instance spec in Kubernetes. This action delivers the provided configuration files to the Arc SQL Managed Instance container.
+
+1. Check that the files are downloaded in the `arc-sqlmi` container (a verification sketch follows the list below).
+
+ The locations of supported files in the container are:
+
+ - `mssql.conf`: `/var/run/config/mssql/mssql.conf`
+ - `mssql.json`: `/var/run/config/mssql/mssql.json`
+ - `krb5.conf`: `/etc/krb5.conf`
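+
+   A minimal verification sketch, assuming the instance pod is named `sqlmifo-0` (run `kubectl get pods -n test` to find the actual name):
+
+   ```bash
+   # Print the delivered mssql.conf from inside the arc-sqlmi container.
+   kubectl exec -n test sqlmifo-0 -c arc-sqlmi -- cat /var/run/config/mssql/mssql.conf
+   ```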
+
+## Next steps
+
+[Get logs to troubleshoot Azure Arc-enabled data services](troubleshooting-get-logs.md)
azure-arc Upload Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-logs.md
Once your logs are uploaded, you should be able to query them using the log quer
2. Select Logs in the left panel. 3. Select Get Started (or select the links on the Getting Started page to learn more about Log Analytics if you are new to it). 4. Follow the tutorial to learn more about Log Analytics if this is your first time using Log Analytics.
-5. Expand Custom Logs at the bottom of the list of tables and you will see a table called 'sql_instance_logs_CL'.
+5. Expand Custom Logs at the bottom of the list of tables and you will see a table called 'sql_instance_logs_CL' or 'postgresInstances_postgresql_logs_CL'.
6. Select the 'eye' icon next to the table name. 7. Select the 'View in query editor' button. 8. You'll now have a query in the query editor that will show the most recent 10 events in the log.
Once your logs are uploaded, you should be able to query them using the log quer
If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
-In your favorite text/code editor, add the following script to the file and save as a script executable file - such as .sh for Linux/Mac, or .cmd, .bat, or .ps1 for Windows.
+In your favorite text/code editor, add the following script to the file and save as a script executable file - such as `.sh` (Linux/Mac), `.cmd`, `.bat`, or `.ps1` (Windows).
```azurecli az arcdata dc export --type logs --path logs.json --force --k8s-namespace arc
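A scheduling sketch, assuming the finished script is saved as `/opt/arcdata/upload-logs.sh` (a hypothetical path) and marked executable:

```bash
# Crontab entry: export and upload logs every 20 minutes.
*/20 * * * * /opt/arcdata/upload-logs.sh
```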
azure-arc Upload Metrics And Logs To Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics-and-logs-to-azure-monitor.md
Periodically, you can export usage information for billing purposes, monitoring metrics, and logs and then upload it to Azure. The export and upload of any of these three types of data also creates and updates the data controller and SQL managed instance resources in Azure.
-> [!NOTE]
-> At this time, you can't upload usage data, metrics, or logs for Azure Arc-enabled PostgreSQL server preview.
- Before you can upload usage data, metrics, or logs you need to: * Install tools
azure-arc Upload Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upload-metrics.md
If you have multiple sites that have Azure Arc data services, you can use Azure
## Upload metrics for Azure Arc data controller in **direct** mode
-In the **direct** connected mode, metrics upload can only be setup in **automatic** mode. This automatic upload of metrics can be setup either during deployment of Azure Arc data controller or post deployment.
+In the **direct** connected mode, metrics upload can only be set up in **automatic** mode. This automatic upload of metrics can be set up either during deployment of Azure Arc data controller or post deployment.
The Arc data services extension managed identity is used for uploading metrics. The managed identity needs to have the **Monitoring Metrics Publisher** role assigned to it. > [!NOTE]
To view your metrics, navigate to the [Azure portal](https://portal.azure.com).
You can view CPU utilization on the Overview page, or select **Metrics** in the left navigation panel for more detailed metrics.
-Choose sql server as the metric namespace:
+Choose sql server or postgres as the metric namespace.
-Select the metric you want to visualize (you can also select multiple):
+Select the metric you want to visualize (you can also select multiple).
-Change the frequency to last 30 minutes:
+Change the frequency to last 30 minutes.
> [!NOTE] > You can only upload metrics only for the last 30 minutes. Azure Monitor rejects metrics older than 30 minutes.
Change the frequency to last 30 minutes:
If you want to upload metrics and logs on a scheduled basis, you can create a script and run it on a timer every few minutes. Below is an example of automating the uploads using a Linux shell script.
-In your favorite text/code editor, add the following script to the file and save as a script executable file such as .sh (Linux/Mac) or .cmd, .bat, .ps1.
+In your favorite text/code editor, add the following script to the file and save as a script executable file such as `.sh` (Linux/Mac), `.cmd`, `.bat`, or `.ps1`.
```azurecli az arcdata dc export --type metrics --path metrics.json --force --k8s-namespace arc
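A scheduling sketch, assuming the finished script is saved as `/opt/arcdata/upload-metrics.sh` (a hypothetical path). Because Azure Monitor rejects metrics older than 30 minutes, the timer should fire well inside that window:

```bash
# Crontab entry: export and upload metrics every 25 minutes,
# staying inside the 30-minute window that Azure Monitor accepts.
*/25 * * * * /opt/arcdata/upload-metrics.sh
```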
azure-arc Using Extensions In Postgresql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/using-extensions-in-postgresql-server.md
PostgreSQL is at its best when you use it with extensions.
[!INCLUDE [azure-arc-data-preview](../../../includes/azure-arc-data-preview.md)] ## Supported extensions
-For this preview, the following standard [`contrib`](https://www.postgresql.org/docs/14/contrib.html) extensions are already deployed in the containers of your Azure Arc-enabled PostgreSQL server:
+The following extensions are deployed by default in the containers of your Azure Arc-enabled PostgreSQL server; some of them are standard [`contrib`](https://www.postgresql.org/docs/14/contrib.html) extensions:
- `address_standardizer_data_us` 3.3.1 - `adminpack` 2.1 - `amcheck` 1.3
For this preview, the following standard [`contrib`](https://www.postgresql.org/
Updates to this list will be posted as it evolves over time.
-## Create an Arc-enabled PostgreSQL server with extensions enabled
-You can create an Arc-enabled PostgreSQL server with any of the supported extensions enabled by passing a comma separated list of extensions to the `--extensions` parameter of the `create` command. *NOTE:* Extensions are enabled on the database for the admin user that was supplied when the server was created:
+## Enable extensions in Arc-enabled PostgreSQL server
+You can create an Arc-enabled PostgreSQL server with any of the supported extensions enabled by passing a comma-separated list of extensions to the `--extensions` parameter of the `create` command.
+ ```azurecli az postgres server-arc create -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pg_partman" --use-k8s ```
+*NOTE*: Enabled extensions are added to the `shared_preload_libraries` configuration. Extensions must be installed in your database before you can use them. To install a particular extension, run the [`CREATE EXTENSION`](https://www.postgresql.org/docs/current/sql-createextension.html) command. This command loads the packaged objects into your database.
+
+For example, connect to your database and issue the following PostgreSQL command to install the `pgaudit` extension:
+
+```SQL
+CREATE EXTENSION pgaudit;
+```
-## Add or remove extensions
+## Update extensions
You can add or remove extensions from an existing Arc-enabled PostgreSQL server.
-First describe the server to get the current list of extensions:
+You can run the `kubectl describe` command to get the current list of enabled extensions:
```console kubectl describe postgresqls <server-name> -n <namespace> ```
If there are extensions enabled, the output contains a section like this:
config: postgreSqlExtensions: pgaudit,pg_partman ```
-Add new extensions by appending them to the existing list, or remove extensions by removing them from the existing list. Pass the desired list to the update command. For example, to add `pgcrypto` and remove `pg_partman` from the server in the example above:
+
+Check whether the extension is installed after connecting to the database by running the following PostgreSQL command:
+```SQL
+select * from pg_extension;
+```
+
+Enable new extensions by appending them to the existing list, or remove extensions by deleting them from the list. Pass the desired list to the update command. For example, to add `pgcrypto` and remove `pg_partman` from the server in the example above:
+ ```azurecli az postgres server-arc update -n <name> --k8s-namespace <namespace> --extensions "pgaudit,pgcrypto" --use-k8s ```
-## Show the list of enabled extensions
-Connect to your server with the client tool of your choice and run the standard PostgreSQL query:
-```console
+Once the allowed extensions list is updated, connect to the database and install the newly added extension with the following command:
+
+```SQL
+CREATE EXTENSION pgcrypto;
+```
+
+Similarly, to remove an extension from an existing database, issue the [`DROP EXTENSION`](https://www.postgresql.org/docs/current/sql-dropextension.html) command:
+
+```SQL
+DROP EXTENSION pg_partman;
+```
+
+## Show the list of installed extensions
+Connect to your database with the client tool of your choice and run the standard PostgreSQL query:
+```SQL
select * from pg_extension; ```
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## April 11, 2023
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.18.0_2023-04-11`|
+|**CRD names and version:**| |
+|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1|
+|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5|
+|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2|
+|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4|
+|`monitors.arcdata.microsoft.com`| v1beta1, v1, v2|
+|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6|
+|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v12|
+|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2|
+|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1|
+|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|`redis.arcdata.microsoft.com`| v1beta1|
+|Azure Resource Manager (ARM) API version|2023-01-15-preview|
+|`arcdata` Azure CLI extension version|1.4.13 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.18.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))|
+ ## March 14, 2023 |Component|Value|
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.24 - November 2022
+
+Download for [Windows](https://download.microsoft.com/download/f/9/d/f9d60cc9-7c2a-4077-b890-f6a54cc55775/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- `azcmagent logs` improvements:
+ - Only the most recent log file for each component is collected by default. To collect all log files, use the new `--full` flag.
+ - Journal logs for the agent services are now collected on Linux operating systems
+ - Logs from extensions are now collected
+- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You may be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it.
+- Failed extension installs can now be retried without removing the old extension as long as the extension settings are different
+- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Azure Update Management Center extension on Linux to reduce downtime during update operations
+
+### Fixed
+
+- Improved logic for detecting machines running on Azure Stack HCI to reduce false positives
+- Auto-registration of required resource providers only happens when they are unregistered
+- Agent will now detect drift between the proxy settings of the command line tool and background services
+- Fixed a bug with proxy bypass feature that caused the agent to incorrectly use the proxy server for bypassed URLs
+- Improved error handling when extensions don't download successfully, fail validation, or have corrupt state files
+ ## Version 1.23 - October 2022
+Download for [Windows](https://download.microsoft.com/download/3/9/8/398f6036-958d-43c4-ad7d-4576f1d860a#installing-a-specific-version-of-the-agent)
+ ### New features - The minimum PowerShell version required on Windows Server has been reduced to PowerShell 4.0
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.22 - September 2022
+Download for [Windows](https://download.microsoft.com/download/1/3/5/135f1f2b-7b14-40f6-bceb-3af4ebadf434/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Known issues - The 'connect' command uses the value of the last tag for all tags. You will need to fix the tags after onboarding to use the correct values.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- The agent now supports Red Hat Enterprise Linux 8 servers that have FIPS mode enabled. - Agent telemetry uses the proxy server when configured. - Improved accuracy of network connectivity checks-- The agent retains extension allow and blocklists when switching the agent from monitoring mode to full mode. Use [azcmagent clear](manage-agent.md#config) to reset individual configuration settings to the default state.
+- The agent retains extension allow and blocklists when switching the agent from monitoring mode to full mode. Use [azcmagent config clear](manage-agent.md#config) to reset individual configuration settings to the default state.
## Version 1.21 - August 2022
+Download for [Windows](https://download.microsoft.com/download/#installing-a-specific-version-of-the-agent)
+ ### New features - `azcmagent connect` usability improvements:
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.20 - July 2022
+Download for [Windows](https://download.microsoft.com/download/f/b/1/fb143ada-1b82-4d19-a125-40f2b352e257/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Known issues - Some systems may incorrectly report their cloud provider as Azure Stack HCI.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.19 - June 2022
+Download for [Windows](https://download.microsoft.com/download/8/9/f/89f80a2b-32c3-43e8-b3b8-fce6cea8e2cf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Known issues - Agents configured to use private endpoints incorrectly download extensions from a public endpoint. [Upgrade the agent](manage-agent.md#upgrade-the-agent) to version 1.20 or later to restore correct functionality.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.18 - May 2022
+Download for [Windows](https://download.microsoft.com/download/2/5/6/25685d0f-2895-4b80-9b1d-5ba53a46097f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - You can configure the agent to operate in [monitoring mode](security-overview.md#agent-modes), which simplifies configuration of the agent for scenarios where you only want to use Arc for monitoring and security scenarios. This mode disables other agent functionality and prevents use of extensions that could make changes to the system (for example, the Custom Script Extension).
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.17 - April 2022
+Download for [Windows](https://download.microsoft.com/download/#installing-a-specific-version-of-the-agent)
+ ### New features - The default resource name for AWS EC2 instances is now the instance ID instead of the hostname. To override this behavior, use the `--resource-name PreferredResourceName` parameter to specify your own resource name when connecting a server to Azure Arc.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.16 - March 2022
+Download for [Windows](https://download.microsoft.com/download/e/#installing-a-specific-version-of-the-agent)
+ ### Known issues - `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](agent-overview.md#agent-resources).
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.15 - February 2022
+Download for [Windows](https://download.microsoft.com/download/0/7/4/074a7a9e-1d86-4588-8297-b4e587ea0307/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Known issues - The "Arc" proxy bypass feature on Linux includes some endpoints that belong to Azure Active Directory. As a result, if you only specify the "Arc" bypass rule, traffic destined for Azure Active Directory endpoints will not use the proxy server as expected.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.14 - January 2022
+Download for [Windows](https://download.microsoft.com/download/e/8/1/e816ff18-251b-4160-b421-a4f8ab9c2bfe/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - Fixed a state corruption issue in the extension manager that could cause extension operations to get stuck in transient states. Customers running agent version 1.13 are encouraged to upgrade to version 1.14 as soon as possible. If you continue to have issues with extensions after upgrading the agent, [submit a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). ## Version 1.13 - November 2021
+Download for [Windows](https://download.microsoft.com/download/8/#installing-a-specific-version-of-the-agent)
+ ### Known issues - Extensions may get stuck in transient states (creating, deleting, updating) on Windows machines running the 1.13 agent in certain conditions. Microsoft recommends upgrading to agent version 1.14 as soon as possible to resolve this issue.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.12 - October 2021
+Download for [Windows](https://download.microsoft.com/download/9/e/e/9eec9acb-53f1-4416-9e10-afdd8e5281ad/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - Improved reliability when validating signatures of extension packages. - `azcmagent_proxy remove` command on Linux now correctly removes environment variables on Red Hat Enterprise Linux and related distributions. - `azcmagent logs` now includes the computer name and timestamp to help disambiguate log files.+ ## Version 1.11 - September 2021
+Download for [Windows](https://download.microsoft.com/download/6/d/b/6dbf7141-0bf0-4b18-93f5-20de4018369d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - The agent now supports on Windows systems with the [System objects: Require case insensitivity for non-Windows subsystems](/windows/security/threat-protection/security-policy-settings/system-objects-require-case-insensitivity-for-non-windows-subsystems) policy set to Disabled.
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.10 - August 2021
+Download for [Windows](https://download.microsoft.com/download/1/c/4/1c4a0bde-0b6c-4c52-bdaf-04851c567f43/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - The guest configuration policy agent can now configure and remediate system settings. Existing policy assignments continue to be audit-only. Learn more about the Azure Policy [guest configuration remediation options](../../governance/machine-configuration/machine-configuration-policy-effects.md).
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
## Version 1.9 - July 2021
+Download for [Windows](https://download.microsoft.com/download/5/1/d/51d4340b-c927-4fc9-a0da-0bb8556338d0/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features Added support for the Indonesian language
Fixed a bug that prevented extension management in the West US 3 region
## Version 1.8 - July 2021
+Download for [Windows](https://download.microsoft.com/download/1/7/5/1758f4ea-3114-4a20-9113-6bc5fff1c3e8/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - Improved reliability when installing the Azure Monitor Agent extension on Red Hat and CentOS systems
Fixed a bug that prevented extension management in the West US 3 region
## Version 1.7 - June 2021
+Download for [Windows](https://download.microsoft.com/download/6/1/c/61c69f31-8e22-4298-ac9d-47cd2090c81d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - Improved reliability during onboarding:
Fixed a bug that prevented extension management in the West US 3 region
## Version 1.6 - May 2021
+Download for [Windows](https://download.microsoft.com/download/d/3/d/d3df034a-d231-4ca6-9199-dbaa139b1eaf/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - Added support for SUSE Enterprise Linux 12
Fixed a bug that prevented extension management in the West US 3 region
## Version 1.5 - April 2021
+Download for [Windows](https://download.microsoft.com/download/1/d/4/1d44ef2e-dcc9-42e4-b76c-2da6a6e852af/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - Added support for Red Hat Enterprise Linux 8 and CentOS Linux 8.
Fixed a bug that prevented extension management in the West US 3 region
## Version 1.4 - March 2021
+Download for [Windows](https://download.microsoft.com/download/e/b/1/eb128465-8830-47b0-b89e-051eefd33f7c/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features - Added support for private endpoints, which is currently in limited preview.
Network endpoint checks are now faster.
## Version 1.3 - December 2020
+Download for [Windows](https://download.microsoft.com/download/5/4/c/54c2afd8-e559-41ab-8aa2-cc39bc13156b/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### New features Added support for Windows Server 2008 R2 SP1.
Resolved issue preventing the Custom Script Extension on Linux from installing s
## Version 1.2 - November 2020
+Download for [Windows](https://download.microsoft.com/download/4/c/2/4c287d81-6657-4cd8-9254-881ae6a2d1f4/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed Resolved issue where proxy configuration resets after upgrade on RPM-based distributions.
This version is the first generally available release of the Azure Connected Mac
- Before evaluating or enabling Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods. -- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
+- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 03/10/2023 Last updated : 04/06/2023
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.29 - April 2023
+
+Download for [Windows](https://download.microsoft.com/download/2/7/0/27063536-949a-4b16-a29a-3d1dcb29cff7/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- The agent now compares the time on the local system and Azure service when checking network connectivity and creating the resource in Azure. If the clocks are offset by more than 120 seconds (2 minutes), a non-blocking error will be printed to the console. You may encounter TLS connection errors if the time of your computer does not match the time in Azure.
+- `azcmagent show` now supports an `--os` flag to print additional OS information to the console
+
+### Fixed
+
+- Resolved a rare condition under which the guest configuration service (gc_service) could consume excessive CPU resources
+- Removed "sudo" calls in internal install script that could be blocked if SELinux is enabled
+- Reduced how long network checks wait before determining a network endpoint is unreachable
+- Stopped writing error messages in "himds.log" referring to a missing certificate key file for the ATS agent, an inactive component reserved for future use.
+ ## Version 1.28 - March 2023
+Download for [Windows](https://download.microsoft.com/download/5/9/7/59789af8-5833-4c91-8dc5-91c46ad4b54f/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - Improved reliability of delete requests for extensions
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.27 - February 2023
+Download for [Windows](https://download.microsoft.com/download/8/4/5/845d5e04-bb09-4ed2-9ca8-bb51184cddc9/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ ### Fixed - The extension service now correctly restarts when the Azure Connected Machine agent is upgraded by Update Management Center
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.26 - January 2023
+Download for [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+ > [!NOTE] > Version 1.26 is only available for Linux operating systems.
This page is updated monthly, so revisit it regularly. If you're looking for ite
## Version 1.25 - January 2023
+Download for [Windows](https://download.microsoft.com/download/2/#installing-a-specific-version-of-the-agent)
+ ### New features - Red Hat Enterprise Linux (RHEL) 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Improved error messages in the Windows MSI installer - Additional improvements to the detection logic for machines running on Azure Stack HCI
-## Version 1.24 - November 2022
-
-### New features
--- `azcmagent logs` improvements:
- - Only the most recent log file for each component is collected by default. To collect all log files, use the new `--full` flag.
- - Journal logs for the agent services are now collected on Linux operating systems
- - Logs from extensions are now collected
-- Agent telemetry is no longer sent to `dc.services.visualstudio.com`. You may be able to remove this URL from any firewall or proxy server rules if no other applications in your environment require it.-- Failed extension installs can now be retried without removing the old extension as long as the extension settings are different-- Increased the [resource limits](agent-overview.md#agent-resource-governance) for the Azure Update Management Center extension on Linux to reduce downtime during update operations-
-### Fixed
--- Improved logic for detecting machines running on Azure Stack HCI to reduce false positives-- Auto-registration of required resource providers only happens when they are unregistered-- Agent will now detect drift between the proxy settings of the command line tool and background services-- Fixed a bug with proxy bypass feature that caused the agent to incorrectly use the proxy server for bypassed URLs-- Improved error handling when extensions don't download successfully, fail validation, or have corrupt state files- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Title: Managing the Azure Arc-enabled servers agent description: This article describes the different management tasks that you will typically perform during the lifecycle of the Azure Connected Machine agent. Previously updated : 03/10/2023 Last updated : 04/07/2023
To clear a configuration property's value, run the following command:
`azcmagent config clear <propertyName>`
+## Installing a specific version of the agent
+
+Microsoft recommends using the most recent version of the Azure Connected Machine agent for the best experience. However, if you need to run an older version of the agent for any reason, you can follow these instructions to install a specific version of the agent.
+
+### [Windows](#tab/windows)
+
+Links to the current and previous releases of the Windows agents are available below the heading of each [release note](agent-release-notes.md). If you're looking for an agent version that's more than 6 months old, check out the [release notes archive](agent-release-notes-archive.md).
+
+### [Linux - apt](#tab/linux-apt)
+
+1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software).
+1. Search for available agent versions with `apt-cache`:
+
+ ```bash
+ sudo apt-cache madison azcmagent
+ ```
+
+1. Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent:
+
+ ```bash
+ sudo apt install azcmagent=VERSION
+ ```
+
+ For example, to install version 1.28, the install command is:
+
+ ```bash
+ sudo apt install azcmagent=1.28.02260.736
+ ```
+
+### [Linux - yum](#tab/linux-yum)
+
+1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software).
+1. Search for available agent versions with `yum list`:
+
+ ```bash
+ sudo yum list azcmagent --showduplicates
+ ```
+
+1. Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent:
+
+ ```bash
+ sudo yum install azcmagent-VERSION
+ ```
+
+ For example, to install version 1.28, the install command would look like:
+
+ ```bash
+ sudo yum install azcmagent-1.28.02260-755
+ ```
+
+### [Linux - zypper](#tab/linux-zypper)
+
+1. If you haven't already, configure your package manager with the [Linux Software Repository for Microsoft Products](/windows-server/administration/linux-package-repository-for-microsoft-software).
+1. Search for available agent versions with `zypper search`:
+
+ ```bash
+ sudo zypper search -s azcmagent
+ ```
+
+1. Find the version you want to install, replace `VERSION` in the following command with the full (4-part) version number, and run the command to install the agent:
+
+ ```bash
+ sudo zypper install -f azcmagent-VERSION
+ ```
+
+ For example, to install version 1.28, the install command would look like:
+
+ ```bash
+ sudo zypper install -f azcmagent-1.28.02260-755
+ ```
+++ ## Upgrade the agent The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of the machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
azure-cache-for-redis Cache Best Practices Development https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-development.md
description: Learn how to develop code for Azure Cache for Redis.
Previously updated : 04/15/2022 Last updated : 04/10/2023
Microsoft is updating Azure services to use TLS server certificates from a diffe
#### Does this change affect me?
-We expect that most Azure Cache for Redis customers aren't affected by the change. Your application may be impacted if it explicitly specifies a list of acceptable certificates, a practice known as ΓÇ£certificate pinningΓÇ¥. If it's pinned to an intermediate or leaf certificate instead of the Baltimore CyberTrust Root, you should **take immediate actions** to change the certificate configuration.
+We expect that most Azure Cache for Redis customers aren't affected by the change. Your application might be affected if it explicitly specifies a list of acceptable certificates, a practice known as "certificate pinning". If it's pinned to an intermediate or leaf certificate instead of the Baltimore CyberTrust Root, you should **take immediate action** to change the certificate configuration.
+
+Azure Cache for Redis doesn't support [OCSP stapling](https://docs.redis.com/latest/rs/security/certificates/ocsp-stapling/).
The following table provides information about the certificates that are being rolled. Depending on which certificate your application uses, you might need to update it to prevent loss of connectivity to your Azure Cache for Redis instance.
azure-cache-for-redis Quickstart Create Redis Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/quickstart-create-redis-enterprise.md
Previously updated : 04/07/2023 Last updated : 04/10/2023 # Quickstart: Create a Redis Enterprise cache
Azure Cache for Redis is continually expanding into new regions. To check the av
1. Select **Next: Advanced**.
- Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however.
+ Enable **Non-TLS access only** if you plan to connect to the new cache without using TLS. Disabling TLS is **not** recommended, however. You can't change the eviction policy or clustering policy of an Enterprise cache instance after you create it. If you're using this cache instance in a replication group, be sure to know the policies of your primary nodes before you create the cache. For more information on replication, see [Active geo-replication prerequisites](cache-how-to-active-geo-replication.md#active-geo-replication-prerequisites).
Set **Clustering policy** to **Enterprise** for a nonclustered cache. For more information on choosing **Clustering policy**, see [Clustering Policy](#clustering-policy).
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
Title: How to create Azure Maps applications using the C# REST SDK description: How to develop applications that incorporate Azure Maps using the C# SDK Developers Guide.--++ Last updated 11/11/2021
# C# REST SDK Developers Guide
-The Azure Maps C# SDK supports functionality available in the [Azure Maps Rest API][Rest API], like searching for an address, routing between different coordinates, and getting the geo-location of a specific IP address. This article introduces the C# REST SDK with examples to help you get started building location-aware applications in C# that incorporate the power of Azure Maps.
+The Azure Maps C# SDK supports functionality available in the Azure Maps [Rest API], like searching for an address, routing between different coordinates, and getting the geo-location of a specific IP address. This article introduces the C# REST SDK with examples to help you get started building location-aware applications in C# that incorporate the power of Azure Maps.
> [!NOTE]
-> Azure Maps C# SDK supports any .NET version that is compatible with [.NET standard 2.0][.NET standard]. For an interactive table, seeΓÇ»[.NET Standard versions][.NET Standard versions].
+> Azure Maps C# SDK supports any .NET version that is compatible with [.NET standard] version 2.0 or higher. For an interactive table, seeΓÇ»[.NET Standard versions].
## Prerequisites -- [Azure Maps account][Azure Maps account].-- [Subscription key][Subscription key] or other form of [authentication][authentication].-- [.NET standard][.NET standard] version 2.0 or higher.
+- [Azure Maps account].
+- [Subscription key] or other form of [Authentication with Azure Maps].
+- [.NET standard] version 2.0 or higher.
> [!TIP] > You can create an Azure Maps account programmatically. Here's an example using the Azure CLI:
dotnet add package Azure.Maps.Geolocation --prerelease
## Create and authenticate a MapsSearchClient
-The client object used to access the Azure Maps Search APIs require either an `AzureKeyCredential` object to authenticate when using an Azure Maps subscription key or a `TokenCredential` object with the Azure Maps client ID when authenticating using Azure Active Directory (Azure AD). For more information on authentication, see [Authentication with Azure Maps][authentication].
+The client object used to access the Azure Maps Search APIs requires either an `AzureKeyCredential` object to authenticate when using an Azure Maps subscription key, or a `TokenCredential` object with the Azure Maps client ID when authenticating using Azure Active Directory (Azure AD). For more information on authentication, see [Authentication with Azure Maps].
### Using an Azure AD credential
-You can authenticate with Azure AD using the [Azure Identity library][Identity library .NET]. To use the [DefaultAzureCredential][defaultazurecredential.NET] provider, you'll need to install the Azure Identity client library for .NET:
+You can authenticate with Azure AD using the [Azure Identity library][Identity library .NET]. To use the [DefaultAzureCredential][defaultazurecredential.NET] provider, you need to install the Azure Identity client library for .NET:
```powershell dotnet add package Azure.Identity ```
-You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. The Application (client) ID, a Directory (tenant) ID, and a client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resourceΓÇÖs client ID as environment variables:
var client = new MapsSearchClient(credential, clientId);
``` > [!IMPORTANT]
-> The other environment variables created above, while not used in the code sample here, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
+> The other environment variables created in the previous code snippet, while not used in the code sample, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
### Using a subscription key credential
foreach (var result in searchResult.Results)
} ```
-The above code snippet demonstrates how to create a `MapsSearchClient` object using your Azure credentials, then uses its [FuzzySearch][FuzzySearch] method, passing in the point of interest (POI) name "_Starbucks_" and coordinates _GeoPosition(-122.31, 47.61)_. This all gets wrapped up by the SDK and sent to the Azure Maps REST endpoints. When the search results are returned, they're written out to the screen using `Console.WriteLine`.
+The above code snippet demonstrates how to create a `MapsSearchClient` object using your Azure credentials, then uses its [FuzzySearch] method, passing in the point of interest (POI) name "_Starbucks_" and coordinates _GeoPosition(-122.31, 47.61)_. The SDK packages the request and sends it to the Azure Maps REST endpoints. When the search results are returned, they're written out to the screen using `Console.WriteLine`.
The following libraries are used:
if (searchResult.Results.Count > 0)
} ```
-Results returned by the `SearchAddress` method are ordered by confidence score and because `searchResult.Results.First()` is used, only the coordinates of the first result will be returned.
+The `SearchAddress` method returns results ordered by confidence score and since `searchResult.Results.First()` is used, only the coordinates of the first result are returned.
## Batch reverse search
-Azure Maps Search also provides some batch query methods. These methods will return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically. The example below demonstrates how to call the batched reverse search methods:
+Azure Maps Search also provides some batch query methods. These methods return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically. The following example demonstrates how to call the batched reverse search methods:
```csharp var queries = new List<ReverseSearchAddressQuery>()
var queries = new List<ReverseSearchAddressQuery>()
}; ```
-In the above example, two queries are passed to the batched reverse search request. To get the LRO results, you have few options. The first option is to pass `WaitUntil.Completed` to the method. The request will wait until all requests are finished and return the results:
+In the above example, two queries are passed to the batched reverse search request. To get the LRO results, you have a few options. The first option is to pass `WaitUntil.Completed` to the method. The request waits until all requests are finished and returns the results:
```csharp // Wait until the LRO return batch results
Response<ReverseSearchAddressBatchOperation> waitUntilCompletedResults = client.
printReverseBatchAddresses(waitUntilCompletedResults.Value); ```
-Another option is to pass `WaitUntil.Started`. The request will return immediately, and you'll need to manually poll the results:
+Another option is to pass `WaitUntil.Started`. The request returns immediately, and you need to manually poll the results:
```csharp // Manual polling the batch results
Response<ReverseSearchAddressBatchOperation> manualPollingResult = manualPolling
printReverseBatchAddresses(manualPollingResult.Value); ```
-The third method requires the operation ID to get the results, which will be cached on the server side for 14 days:
+The third method requires the operation ID to get the results, which is cached on the server side for 14 days:
```csharp ReverseSearchAddressBatchOperation longRunningOperation = client.ReverseSearchAddressBatch(WaitUntil.Started, queries);
void printReverseBatchAddresses(ReverseSearchAddressBatchResult batchResult)
## Additional information
-The [Azure.Maps Namespace][Azure.Maps Namespace] in the .NET documentation.
+The [Azure.Maps Namespace] in the .NET documentation.
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-
-[authentication]: azure-maps-authentication.md
-[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
[.NET standard]: /dotnet/standard/net-standard?tabs=net-standard-2-0
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure.Maps Namespace]: /dotnet/api/azure.maps
+[defaultazurecredential.NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet#defaultazurecredential
+[FuzzySearch]: /dotnet/api/azure.maps.search.mapssearchclient.fuzzysearch
+[geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
+[geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples
+[geolocation package]: https://www.nuget.org/packages/Azure.Maps.geolocation
+[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[Identity library .NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet
+[rendering package]: https://www.nuget.org/packages/Azure.Maps.Rendering
+[rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md
+[rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples
[Rest API]: /rest/api/maps/
-[.NET Standard versions]: https://dotnet.microsoft.com/platform/dotnet-standard#versions
-[search package]: https://www.nuget.org/packages/Azure.Maps.Search
-[search readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Search/README.md
-[search sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Search/samples
[routing package]: https://www.nuget.org/packages/Azure.Maps.Routing [routing readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Routing/README.md [routing sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Routing/samples
-[rendering package]: https://www.nuget.org/packages/Azure.Maps.Rendering
-[rendering readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Rendering/README.md
-[rendering sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Rendering/samples
-[geolocation package]: https://www.nuget.org/packages/Azure.Maps.geolocation
-[geolocation readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Geolocation/README.md
-[geolocation sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Geolocation/samples
-[FuzzySearch]: /dotnet/api/azure.maps.search.mapssearchclient.fuzzysearch
-[Azure.Maps Namespace]: /dotnet/api/azure.maps
-[search-api]: /dotnet/api/azure.maps.search
-[Identity library .NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet
-[defaultazurecredential.NET]: /dotnet/api/overview/azure/identity-readme?view=azure-dotnet#defaultazurecredential
-[NuGet]: https://www.nuget.org/
+[search package]: https://www.nuget.org/packages/Azure.Maps.Search
+[search readme]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/maps/Azure.Maps.Search/README.md
+[search sample]: https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/maps/Azure.Maps.Search/samples
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Title: How to create Azure Maps applications using the Java REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Java REST SDK Developers Guide.--++ Last updated 01/25/2023
The Azure Maps Java SDK can be integrated with Java applications and libraries t
## Create a Maven project
-The following PowerShell code snippet demonstrates how to use PowerShell to create a maven project. First we'll run maven command to create maven project:
+The following PowerShell code snippet demonstrates how to create a Maven project with PowerShell. First, run the Maven command to create the project:
```powershell mvn archetype:generate "-DgroupId=groupId" "-DartifactId=DemoProject" "-DarchetypeArtifactId=maven-archetype-quickstart" "-DarchetypeVersion=1.4" "-DinteractiveMode=false" 
mvn archetype:generate "-DgroupId=groupId" "-DartifactId=DemoProject" "-Darchety
| Parameter | Description | |-|--| | `-DGroupId` | Group ID uniquely identifies your project across all projects|
-| `-DartifactId` | Project name. It will be created as a new folder. |
+| `-DartifactId` | Project name. It's created as a new folder. |
| `-DarchetypeArtifactId` | Project type. `maven-archetype-quickstart` results in a sample project. | | `-DinteractiveMode` | Setting to `false` results in a blank Java project with default options. | ### Install the packages
-To use the Azure Maps Java SDK, you will need to install all required packages. Each service in Azure Maps is available in its own package. Services include Search, Render, Traffic, Weather, etc. You only need to install the packages for the service or services you will be using in your project.
+To use the Azure Maps Java SDK, you need to install all required packages. Each service in Azure Maps is available in its own package. The Azure Maps services include Search, Render, Traffic, Weather, etc. You only need to install the packages for the service or services used in your project.
-After creating the maven project, there should be a `pom.xml` file with basic information such as group ID, name, artifact ID. This is where you will add a dependency for each of the Azure Maps services, as shown below:
+Once the Maven project is created, there should be a `pom.xml` file with basic information such as group ID, name, and artifact ID. Next, add a dependency for each of the Azure Maps services, as the following example demonstrates:
```xml <dependency> 
The client object used to access the Azure Maps Search APIs require either an `A
### Using an Azure AD credential
-You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential] provider, you'll need to add the mvn dependency in the `pom.xml` file:
+You can authenticate with Azure AD using the [Azure Identity library][Identity library]. To use the [DefaultAzureCredential] provider, you need to add the mvn dependency in the `pom.xml` file:
```xml <dependency>
You can authenticate with Azure AD using the [Azure Identity library][Identity l
</dependency> ```
-You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources][Host daemon]. The Application (client) ID, a Directory (tenant) ID, and a client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
public class Demo {
``` > [!IMPORTANT]
-> The other environment variables created above, while not used in the code sample here, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
+> The other environment variables created in the previous code snippet, while not used in the code sample, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
### Using a subscription key credential
In this sample, the `client.SearchAddress` method returns results ordered by con
## Batch reverse search
-Azure Maps Search also provides some batch query methods. These methods will return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically as demonstrated in the batch reverse search method:
+Azure Maps Search also provides some batch query methods. These methods return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically as demonstrated in the batch reverse search method:
```java import java.util.ArrayList;
public class Demo{
} ```
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[authentication]: azure-maps-authentication.md-
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[defaultazurecredential]: /azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential
+[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable
[Java Standard Versions]: https://www.oracle.com/java/technologies/downloads/ [Java Version 8]: /azure/developer/java/fundamentals/?view=azure-java-stable [maven]: /azure/developer/java/sdk/get-started-maven
-[Identity library]: /java/api/overview/azure/identity-readme?source=recommendations&view=azure-java-stable
-[defaultazurecredential]: /azure/developer/java/sdk/identity-azure-hosted-auth#default-azure-credential
-[Host daemon]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
<!-- Java SDK Developers Guide -->
-[java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search
-[java search readme]: https://github.com/Azure/azure-sdk-for-jav
-[java search sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-search/src/samples/java/com/azure/maps/search/samples
-[java routing package]: https://repo1.maven.org/maven2/com/azure/azure-maps-route
-[java routing readme]: https://github.com/Azure/azure-sdk-for-jav
-[java routing sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-route/src/samples/java/com/azure/maps/route/samples
+[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation
+[java elevation readme]: https://github.com/Azure/azure-sdk-for-java
+[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples
+[java geolocation readme]: https://github.com/Azure/azure-sdk-for-java
+[java geolocation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-geolocation/src/samples/java/com/azure/maps/geolocation/samples
+[java geolocation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-geolocation
[java rendering package]: https://repo1.maven.org/maven2/com/azure/azure-maps-render [java rendering readme]: https://github.com/Azure/azure-sdk-for-java [java rendering sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-render/src/samples/java/com/azure/maps/render/samples
-[java geolocation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-geolocation
-[java geolocation readme]: https://github.com/Azure/azure-sdk-for-jav
-[java geolocation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-geolocation/src/samples/java/com/azure/maps/geolocation/samples
-[java timezone package]: https://repo1.maven.org/maven2/com/azure/azure-maps-timezone
+[java routing package]: https://repo1.maven.org/maven2/com/azure/azure-maps-route
+[java routing readme]: https://github.com/Azure/azure-sdk-for-java
+[java routing sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-route/src/samples/java/com/azure/maps/route/samples
+[java search package]: https://repo1.maven.org/maven2/com/azure/azure-maps-search
+[java search readme]: https://github.com/Azure/azure-sdk-for-java
+[java search sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-search/src/samples/java/com/azure/maps/search/samples
[java timezone readme]: https://github.com/Azure/azure-sdk-for-java [java timezone sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-timezone/src/samples/java/com/azure/maps/timezone/samples
-[java elevation package]: https://repo1.maven.org/maven2/com/azure/azure-maps-elevation
-[java elevation readme]: https://github.com/Azure/azure-sdk-for-jav
-[java elevation sample]: https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/maps/azure-maps-elevation/src/samples/java/com/azure/maps/elevation/samples
+[java timezone package]: https://repo1.maven.org/maven2/com/azure/azure-maps-timezone
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
Title: How to create Azure Maps applications using the JavaScript REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the JavaScript SDK Developers Guide.--++ Last updated 11/15/2021
# JavaScript/TypeScript REST SDK Developers Guide (preview)
-The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the Azure Maps [Search service], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article will help you get started building location-aware applications that incorporate the power of Azure Maps.
+The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searching using the Azure Maps [Search service], like searching for an address, fuzzy searching for a point of interest (POI), and searching by coordinates. This article helps you get started building location-aware applications that incorporate the power of Azure Maps.
> [!NOTE] > Azure Maps JavaScript SDK supports the LTS version of Node.js. For more information, see [Node.js Release Working Group].
The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searchin
## Prerequisites - [Azure Maps account].-- [Subscription key] or other form of [authentication].
+- [Subscription key] or other form of [Authentication with Azure Maps].
- [Node.js]. > [!TIP]
The Azure Maps JavaScript/TypeScript REST SDK (JavaScript SDK) supports searchin
## Create a Node.js project
-The example below creates a new directory then a Node.js program named _mapsDemo_ using npm:
+The following example creates a new directory and then a Node.js program named _mapsDemo_ using npm:
```powershell mkdir mapsDemo
npm init
## Install the search package
-To use Azure Maps JavaScript SDK, you'll need to install the search package. Each of the Azure Maps services including search, routing, rendering and geolocation are each in their own package.
+To use the Azure Maps JavaScript SDK, you need to install the search package. The Azure Maps services, including search, routing, rendering, and geolocation, are each in their own package.
```powershell npm install @azure-rest/maps-search
mapsDemo
## Create and authenticate a MapsSearchClient
-You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication].
+You need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps].
> [!TIP] > The `MapsSearchClient` is the primary interface for developers using the Azure Maps search library. See [Azure Maps Search client library][JS-SDK] to learn more about the search methods available. ### Using an Azure AD credential
-You can authenticate with Azure AD using the [Azure Identity library]. To use the [DefaultAzureCredential] provider, you'll need to install the `@azure/identity` package:
+You can authenticate with Azure AD using the [Azure Identity library]. To use the [DefaultAzureCredential] provider, you need to install the `@azure/identity` package:
```powershell npm install @azure/identity ```
-You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. The Application (client) ID, Directory (tenant) ID, and client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
Set the values of the Application (client) ID, Directory (tenant) ID, and client
| AZURE_TENANT_ID | Directory (tenant) ID in your registered application | | MAPS_CLIENT_ID | The client ID in your Azure Map account |
-You can use a `.env` file for these variables. You'll need to install the [dotenv] package:
+You can use a `.env` file for these variables. You need to install the [dotenv] package:
```powershell npm install dotenv
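For reference, a minimal sketch of wiring these pieces together might look like the following. It assumes the environment variable names from the preceding table and the default `MapsSearch` export of `@azure-rest/maps-search`:

```javascript
const MapsSearch = require("@azure-rest/maps-search").default;
const { DefaultAzureCredential } = require("@azure/identity");
require("dotenv").config();

// DefaultAzureCredential reads AZURE_CLIENT_ID, AZURE_TENANT_ID,
// and AZURE_CLIENT_SECRET from the environment.
const credential = new DefaultAzureCredential();

// The Azure Maps client ID identifies the Maps account to call.
const client = MapsSearch(credential, process.env.MAPS_CLIENT_ID);
```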
You can authenticate with your Azure Maps subscription key. Your subscription ke
You need to pass the subscription key to the `AzureKeyCredential` class provided by the [Azure Core Authentication Package]. For security reasons, it's better to specify the key as an environment variable than to include it in your source code.
-You can accomplish this by using a `.env` file to store the subscription key variable. You'll need to install the [dotenv] package to retrieve the value:
+To accomplish this, use a `.env` file to store the subscription key variable. You need to install the [dotenv] package to retrieve the value:
```powershell npm install dotenv
main().catch((err) => {
```
-The code snippet above shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Azure AD credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult] object and writes them to the console. For more details, see the [FuzzySearchRequest] documentation.
+This code snippet shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Azure AD credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult] object and writes them to the console. For more information, see the [FuzzySearchRequest] documentation.
Run `search.js` with Node.js:
The results are ordered by confidence score and in this example only the first r
## Batch reverse search
-Azure Maps Search also provides some batch query methods. These methods will return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can use the poller to wait until completion. The example below demonstrates how to call batched reverse search method:
+Azure Maps Search also provides some batch query methods. These methods return Long Running Operations (LRO) objects. The requests might not return all the results immediately, so you can use the poller to wait until completion. The following example demonstrates how to call the batched reverse search method:
```JavaScript const batchItems = await createBatchItems([
Azure Maps Search also provides some batch query methods. These methods will ret
}); ```
-In this example, three queries are passed into the helper function `createBatchItems` which is imported from `@azure-rest/maps-search`. This helper function composed the body of the batch request. The first query is invalid, see [Handing failed requests](#handing-failed-requests) for an example showing how to handle the invalid query.
+In this example, three queries are passed into the helper function `createBatchItems`, which is imported from `@azure-rest/maps-search`. This helper function composes the body of the batch request. The first query is invalid; see [Handling failed requests](#handling-failed-requests) for an example showing how to handle the invalid query.
Use the `getLongRunningPoller` method with the `initialResponse` to get the poller. Then you can use `pollUntilDone` to get the final result:
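A sketch of that pattern, assuming the `client` and `batchItems` from the earlier snippets (the reverse-search batch path is taken from the Search REST API):

```javascript
const { getLongRunningPoller } = require("@azure-rest/maps-search");

async function runReverseSearchBatch(client, batchItems) {
  // Kick off the batch request; the service returns the initial LRO state.
  const initialResponse = await client
    .path("/search/address/reverse/batch/{format}", "json")
    .post({ body: { batchItems } });

  // Wrap the initial response in a poller and wait for the batch to complete.
  const poller = getLongRunningPoller(client, initialResponse);
  return poller.pollUntilDone();
}
```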
function logResponseBody(resBody) {
### Handling failed requests
-Handle failed requests by checking for the `error` property in the response batch item. See the `logResponseBody` function in the completed batch reverse search example below.
+Handle failed requests by checking for the `error` property in the response batch item. The following sketch illustrates the check; see the `logResponseBody` function in the completed batch reverse search example that follows.
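As a minimal illustration, assuming each batch item's `response` carries either the search results or an `error` object:

```javascript
function logResponseBody(resBody) {
  for (const item of resBody.batchItems) {
    if (item.response.error) {
      // Failed request: surface the service's error message.
      console.error(item.response.error.message);
    } else {
      // Successful request: log the resolved addresses.
      console.log(item.response.addresses);
    }
  }
}
```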
### Completed batch reverse search example
main().catch(console.error);
- The [Azure Maps Search client library for JavaScript/TypeScript][JS-SDK].
-[JS-SDK]: /javascript/api/@azure-rest/maps-search
-
+[Authentication with Azure Maps]: azure-maps-authentication.md
+[Azure Core Authentication Package]: /javascript/api/@azure/core-auth/
+[Azure Identity library]: /javascript/api/overview/azure/identity-readme
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
[DefaultAzureCredential]: https://github.com/Azure/azure-sdk-for-js/tree/@azure/maps-search_1.0.0-beta.1/sdk/identity/identity#defaultazurecredential
-[searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress
-
+[dotenv]: https://github.com/motdotla/dotenv#readme
[FuzzySearchRequest]: /javascript/api/@azure-rest/maps-search/fuzzysearch
-
[FuzzySearchResult]: /javascript/api/@azure-rest/maps-search/searchfuzzysearch200response
-[Search service]: /rest/api/maps/search
+[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
+[js geolocation package]: https://www.npmjs.com/package/@azure-rest/maps-geolocation
+[js geolocation readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-geolocation-rest/README.md
+[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/v1-beta
+[js render package]: https://www.npmjs.com/package/@azure-rest/maps-render
+[js render readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-render-rest/README.md
+[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/v1-beta
+[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
+[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
+[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
+[JS-SDK]: /javascript/api/@azure-rest/maps-search
[Node.js Release Working Group]: https://github.com/nodejs/release#release-schedule [Node.js]: https://nodejs.org/en/download/
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-
-[authentication]: azure-maps-authentication.md
-[Azure Identity library]: /javascript/api/overview/azure/identity-readme
-[Azure Core Authentication Package]: /javascript/api/@azure/core-auth/
-
-[Host a daemon on non-Azure resources]: ./how-to-secure-daemon-app.md#host-a-daemon-on-non-azure-resources
-[dotenv]: https://github.com/motdotla/dotenv#readme
- [search package]: https://www.npmjs.com/package/@azure-rest/maps-search [search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md [search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v1-beta
-[js route readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-route-rest/README.md
-[js route package]: https://www.npmjs.com/package/@azure-rest/maps-route
-[js route sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-route-rest/samples/v1-beta
-
-[js render readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-render-rest/README.md
-[js render package]: https://www.npmjs.com/package/@azure-rest/maps-render
-[js render sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-render-rest/samples/v1-beta
-
-[js geolocation readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-geolocation-rest/README.md
-[js geolocation package]: https://www.npmjs.com/package/@azure-rest/maps-geolocation
-[js geolocation sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-geolocation-rest/samples/v1-beta
+[Search service]: /rest/api/maps/search
+[searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress
+[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
Title: How to create Azure Maps applications using the Python REST SDK (preview) description: How to develop applications that incorporate Azure Maps using the Python SDK Developers Guide.--++ Last updated 01/15/2021
The Azure Maps Python SDK can be integrated with Python applications and librari
## Prerequisites - [Azure Maps account].-- [Subscription key] or other form of [authentication].
+- [Subscription key] or other form of [Authentication with Azure Maps].
- Python 3.7 or later. It's recommended to use the [latest release]. For more information, see [Azure SDK for Python version support policy]. > [!TIP]
Azure Maps Python SDK supports Python version 3.7 or later. For more information
## Create and authenticate a MapsSearchClient
-You'll need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps][authentication].
+You need a `credential` object for authentication when creating the `MapsSearchClient` object used to access the Azure Maps search APIs. You can use either an Azure Active Directory (Azure AD) credential or an Azure subscription key to authenticate. For more information on authentication, see [Authentication with Azure Maps].
> [!TIP] > The `MapsSearchClient` is the primary interface for developers using the Azure Maps search library. See [Azure Maps Search package client library] to learn more about the search methods available. ### Using an Azure AD credential
-You can authenticate with Azure AD using the [Azure Identity package]. To use the [DefaultAzureCredential] provider, you'll need to install the Azure Identity client package:
+You can authenticate with Azure AD using the [Azure Identity package]. To use the [DefaultAzureCredential] provider, you need to install the Azure Identity client package:
```powershell pip install azure-identity ```
-You'll need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. During this process you'll get an Application (client) ID, a Directory (tenant) ID, and a client secret. Copy these values and store them in a secure place. You'll need them in the following steps.
+You need to register the new Azure AD application and grant access to Azure Maps by assigning the required role to your service principal. For more information, see [Host a daemon on non-Azure resources]. The Application (client) ID, a Directory (tenant) ID, and a client secret are returned. Copy these values and store them in a secure place. You need them in the following steps.
-Next you'll need to specify the Azure Maps account you intend to use by specifying the maps’ client ID. The Azure Maps account client ID can be found in the Authentication sections of the Azure Maps account. For more information, see [View authentication details].
+Next, specify the Azure Maps account you intend to use by providing the map account's client ID. You can find the Azure Maps account client ID in the Authentication section of the Azure Maps account. For more information, see [View authentication details].
Set the values of the Application (client) ID, Directory (tenant) ID, and client secret of your Azure AD application, and the map resource's client ID as environment variables:
$Env:AZURE_TENANT_ID="your Directory (tenant) ID"
$Env:MAPS_CLIENT_ID="your Azure Maps client ID" ```
-After setting up the environment variables, you can use them in your program to instantiate the `AzureMapsSearch` client. Create a file named *demo.py* and add the following:
+After setting up the environment variables, you can use them in your program to instantiate the `AzureMapsSearch` client. Create a file named *demo.py* and add the following code:
```Python import os
maps_search_client = MapsSearchClient(
``` > [!IMPORTANT]
-> The other environment variables created above, while not used in the code sample here, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
+> The other environment variables created in the previous code snippet, while not used in the code sample, are required by `DefaultAzureCredential()`. If you do not set these environment variables correctly, using the same naming conventions, you will get run-time errors. For example, if your `AZURE_CLIENT_ID` is missing or invalid you will get an `InvalidAuthenticationTokenTenant` error.
### Using a subscription key credential
Now you can create environment variables in PowerShell to store the subscription
$Env:SUBSCRIPTION_KEY="your subscription key" ```
-Once your environment variable is created, you can access it in your code. Create a file named *demo.py* and add the following:
+Once your environment variable is created, you can access it in your code. Create a file named *demo.py* and add the following code:
```Python import os
if __name__ == '__main__':
    fuzzy_search() ```
-The sample code above instantiates `AzureKeyCredential` with the Azure Maps subscription key, then it to instantiate the `MapsSearchClient` object. The methods provided by `MapsSearchClient` forward the request to the Azure Maps REST endpoints. In the end, the program iterates through the results and prints the address and coordinates for each result.
+This sample code instantiates `AzureKeyCredential` with the Azure Maps subscription key, then uses it to instantiate the `MapsSearchClient` object. The methods provided by `MapsSearchClient` forward the request to the Azure Maps REST endpoints. In the end, the program iterates through the results and prints the address and coordinates for each result.
After finishing the program, run `python demo.py` from the project folder in PowerShell:
if __name__ == '__main__':
    search_address() ```
-Results returned by the `search_address()` method are ordered by confidence score and print the coordinates of the first result.
+The `search_address` method returns results ordered by confidence score, and this example prints the coordinates of the first result.
## Batch reverse search
-Azure Maps Search also provides some batch query methods. These methods will return long-running operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically. The examples below demonstrate how to call the batched reverse search method.
+Azure Maps Search also provides some batch query methods. These methods return long-running operations (LRO) objects. The requests might not return all the results immediately, so users can choose to wait until completion or query the result periodically. The following examples demonstrate how to call the batched reverse search method.
-Since these return LRO objects, you'll need the `asyncio` method included in the `aiohttp` package:
+Since these methods return LRO objects, you need `asyncio` and the async transport provided by the `aiohttp` package:
```powershell pip install aiohttp
if __name__ == '__main__':
    asyncio.run(begin_reverse_search_address_batch()) ```
-In the above example, three queries are passed to the batched reverse search request. To get the LRO results, the request will create a batch request with a batch ID as result that can be used to fetch batch response later. The LRO results will be cached on the server side for 14 days.
+In the previous example, three queries are passed to the batched reverse search request. The request returns a batch ID that can be used to fetch the batch response later. The LRO results are cached on the server side for 14 days.
The following example demonstrates the process of calling the batch ID and retrieving the operation results of the batch request:
The [Azure Maps Search package client library] in the *Azure SDK for Python Prev
<!> [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[authentication]: azure-maps-authentication.md
+[Authentication with Azure Maps]: azure-maps-authentication.md
[Azure Maps Search package client library]: /python/api/overview/azure/maps-search-readme?view=azure-python-preview [latest release]: https://www.python.org/downloads/
azure-monitor Azure Monitor Agent Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-performance.md
Last updated 4/07/2023 -
+
#customer-intent: As a deployment engineer, I can scope the resources required to scale my gateway data collectors that use the Azure Monitor Agent.
azure-monitor Javascript Feature Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md
const clickPluginConfig = {
}; // Application Insights Configuration const configObj = {
- instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ connectionString: "YOUR CONNECTION STRING",
extensions: [clickPluginInstance], extensionConfig: { [clickPluginInstance.identifier]: clickPluginConfig
Ignore this setup if you use the npm setup.
} // Application Insights configuration var configObj = {
- instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ connectionString: "YOUR CONNECTION STRING",
extensions: [ clickPluginInstance ],
Ignore this setup if you use the npm setup.
> After `parentDataTag` is used, the SDK begins looking for parent tags across your entire application and not just the HTML element where you used it. 1. The `customDataPrefix` provided by the user should always start with `data-`. An example is `data-sample-`. In HTML, the `data-*` global attributes are called custom data attributes that allow proprietary information to be exchanged between the HTML and its DOM representation by scripts. Older browsers like Internet Explorer and Safari drop attributes they don't understand, unless they start with `data-`.
- The asterisk (`*`) in `data-*` can be replaced by any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions:
+ You can replace the asterisk (`*`) in `data-*` with any name following the [production rule of XML names](https://www.w3.org/TR/REC-xml/#NT-Name) with the following restrictions (a configuration sketch follows the list):
- The name must not start with "xml," whatever case is used for the letters.
- The name must not contain a colon (U+003A).
- The name must not contain capital letters.
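For instance, a sketch of a Click Analytics configuration that uses such a prefix might look like this (illustrative values only; `data-sample-` is the example prefix from above):

```javascript
// Illustrative sketch: a custom data-* prefix for the Click Analytics plug-in.
const clickPluginConfig = {
  autoCapture: true,
  dataTags: {
    useDefaultContentNameOrId: true,
    // Must start with "data-"; the rest follows the XML name rules above
    // (no "xml" prefix, no colon, no capital letters).
    customDataPrefix: "data-sample-"
  }
};
```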
The following key properties are captured by default when the plug-in is enabled
| metaDataPrefix | String | Null | N/A | Automatic capture HTML Head's meta element name and content with provided prefix when captured. For example, `custom-` can be used in the HTML meta tag. | | captureAllMetaDataContent | Boolean | False | N/A | Automatic capture all HTML Head's meta element names and content. Default is false. If enabled, it overrides provided `metaDataPrefix`. | | parentDataTag | String | Null | N/A | Stops traversing up the DOM to capture content name and value of elements when encountered with this tag. For example, `data-<yourparentDataTag>` can be used in the HTML tags.|
-| dntDataTag | String | `ai-dnt` | `data-ai-dnt`| HTML elements with this attribute are ignored by the plug-in for capturing telemetry data.|
+| dntDataTag | String | `ai-dnt` | `data-ai-dnt`| The plug-in for capturing telemetry data ignores HTML elements with this attribute.|
### behaviorValidator
var behaviorMap = {
// Application Insights Configuration var configObj = {
- instrumentationKey: "YOUR INSTRUMENTATION KEY",
+ connectionString: "YOUR CONNECTION STRING",
extensions: [clickPluginInstance], extensionConfig: { [clickPluginInstance.identifier]: {
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
## Get started The Application Insights JavaScript SDK is implemented with a runtime snippet for out-of-the-box web analytics.
+The JavaScript snippet can be added to your webpages manually or via automatic snippet injection.
-### Enable Application Insights SDK for JavaScript
+### Enable Application Insights SDK for JavaScript automatically
+
+The automatic snippet injection feature, available in the Application Insights .NET Core SDK and the Application Insights Node.js SDK (preview),
+allows you to automatically inject the Application Insights JavaScript SDK into every webpage of your web application.
+For more information, see [Application Insights .NET Core SDK snippet injection](./asp-net-core.md?tabs=netcorenew%2Cnetcore6#enable-client-side-telemetry-for-web-applications)
+and [Application Insights Node.js SDK snippet injection (preview)](./nodejs.md#automatic-web-instrumentation-preview).
+However, if you want more control over which pages get the Application Insights JavaScript SDK,
+or if you're using a programming language other than .NET or Node.js, follow the manual configuration steps in the next section.
+
+### Enable Application Insights SDK for JavaScript manually
Only two steps are required to enable the Application Insights SDK for JavaScript.
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
To manually configure Live Metrics:
1. Install the NuGet package [Microsoft.ApplicationInsights.PerfCounterCollector](https://www.nuget.org/packages/Microsoft.ApplicationInsights.PerfCounterCollector). 1. The following sample console app code shows setting up Live Metrics:
- ```csharp
- using Microsoft.ApplicationInsights;
- using Microsoft.ApplicationInsights.Extensibility;
- using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
- using System;
- using System.Threading.Tasks;
-
- namespace LiveMetricsDemo
+# [.NET 6.0+](#tab/dotnet6)
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+
+// Create a TelemetryConfiguration instance.
+TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
+config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
+QuickPulseTelemetryProcessor quickPulseProcessor = null;
+config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
+ .Use((next) =>
+ {
+ quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
+ return quickPulseProcessor;
+ })
+ .Build();
+
+var quickPulseModule = new QuickPulseTelemetryModule();
+
+// Secure the control channel.
+// This is optional, but recommended.
+quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
+quickPulseModule.Initialize(config);
+quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
+
+// Create a TelemetryClient instance. It is important
+// to use the same TelemetryConfiguration here as the one
+// used to set up Live Metrics.
+TelemetryClient client = new TelemetryClient(config);
+
+// This sample runs indefinitely. Replace with actual application logic.
+while (true)
+{
+ // Send dependency and request telemetry.
+ // These will be shown in Live Metrics.
+ // CPU/Memory Performance counter is also shown
+ // automatically without any additional steps.
+ client.TrackDependency("My dependency", "target", "http://sample",
+ DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
+ client.TrackRequest("My Request", DateTimeOffset.Now,
+ TimeSpan.FromMilliseconds(230), "200", true);
+ Task.Delay(1000).Wait();
+}
+```
+
+# [.NET 5.0](#tab/dotnet5)
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+using System;
+using System.Threading.Tasks;
+
+namespace LiveMetricsDemo
+{
+ internal class Program
{
- class Program
+ static void Main(string[] args)
{
- static void Main(string[] args)
+ // Create a TelemetryConfiguration instance.
+ TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
+ config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
+ QuickPulseTelemetryProcessor quickPulseProcessor = null;
+ config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
+ .Use((next) =>
+ {
+ quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
+ return quickPulseProcessor;
+ })
+ .Build();
+
+ var quickPulseModule = new QuickPulseTelemetryModule();
+
+ // Secure the control channel.
+ // This is optional, but recommended.
+ quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
+ quickPulseModule.Initialize(config);
+ quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
+
+ // Create a TelemetryClient instance. It is important
+ // to use the same TelemetryConfiguration here as the one
+ // used to set up Live Metrics.
+ TelemetryClient client = new TelemetryClient(config);
+
+ // This sample runs indefinitely. Replace with actual application logic.
+ while (true)
{
- // Create a TelemetryConfiguration instance.
- TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
- config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
- QuickPulseTelemetryProcessor quickPulseProcessor = null;
- config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
- .Use((next) =>
- {
- quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
- return quickPulseProcessor;
- })
- .Build();
-
- var quickPulseModule = new QuickPulseTelemetryModule();
-
- // Secure the control channel.
- // This is optional, but recommended.
- quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
- quickPulseModule.Initialize(config);
- quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
-
- // Create a TelemetryClient instance. It is important
- // to use the same TelemetryConfiguration here as the one
- // used to set up Live Metrics.
- TelemetryClient client = new TelemetryClient(config);
-
- // This sample runs indefinitely. Replace with actual application logic.
- while (true)
+ // Send dependency and request telemetry.
+ // These will be shown in Live Metrics.
+ // CPU/Memory Performance counter is also shown
+ // automatically without any additional steps.
+ client.TrackDependency("My dependency", "target", "http://sample",
+ DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
+ client.TrackRequest("My Request", DateTimeOffset.Now,
+ TimeSpan.FromMilliseconds(230), "200", true);
+ Task.Delay(1000).Wait();
+ }
+ }
+ }
+}
+
+```
+
+# [.NET Framework](#tab/dotnet-framework)
+
+```csharp
+using Microsoft.ApplicationInsights;
+using Microsoft.ApplicationInsights.Extensibility;
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
+using System;
+using System.Threading.Tasks;
+
+namespace LiveMetricsDemo
+{
+ class Program
+ {
+ static void Main(string[] args)
+ {
+ // Create a TelemetryConfiguration instance.
+ TelemetryConfiguration config = TelemetryConfiguration.CreateDefault();
+ config.InstrumentationKey = "INSTRUMENTATION-KEY-HERE";
+ QuickPulseTelemetryProcessor quickPulseProcessor = null;
+ config.DefaultTelemetrySink.TelemetryProcessorChainBuilder
+ .Use((next) =>
{
- // Send dependency and request telemetry.
- // These will be shown in Live Metrics.
- // CPU/Memory Performance counter is also shown
- // automatically without any additional steps.
- client.TrackDependency("My dependency", "target", "http://sample",
- DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
- client.TrackRequest("My Request", DateTimeOffset.Now,
- TimeSpan.FromMilliseconds(230), "200", true);
- Task.Delay(1000).Wait();
- }
+ quickPulseProcessor = new QuickPulseTelemetryProcessor(next);
+ return quickPulseProcessor;
+ })
+ .Build();
+
+ var quickPulseModule = new QuickPulseTelemetryModule();
+
+ // Secure the control channel.
+ // This is optional, but recommended.
+ quickPulseModule.AuthenticationApiKey = "YOUR-API-KEY-HERE";
+ quickPulseModule.Initialize(config);
+ quickPulseModule.RegisterTelemetryProcessor(quickPulseProcessor);
+
+ // Create a TelemetryClient instance. It is important
+ // to use the same TelemetryConfiguration here as the one
+ // used to set up Live Metrics.
+ TelemetryClient client = new TelemetryClient(config);
+
+ // This sample runs indefinitely. Replace with actual application logic.
+ while (true)
+ {
+ // Send dependency and request telemetry.
+ // These will be shown in Live Metrics.
+ // CPU/Memory Performance counter is also shown
+ // automatically without any additional steps.
+ client.TrackDependency("My dependency", "target", "http://sample",
+ DateTimeOffset.Now, TimeSpan.FromMilliseconds(300), true);
+ client.TrackRequest("My Request", DateTimeOffset.Now,
+ TimeSpan.FromMilliseconds(230), "200", true);
+ Task.Delay(1000).Wait();
} } }
- ```
+}
+```
++ The preceding sample is for a console app, but the same code can be used in any .NET application. If any other telemetry modules are enabled to autocollect telemetry, it's important to ensure that the same configuration used for initializing those modules is used for the Live Metrics module.
It's possible to try custom filters without having to set up an authenticated ch
You can add an API key to configuration for ASP.NET, ASP.NET Core, WorkerService, and Azure Functions apps.
-#### ASP.NET
+# [.NET 6.0+](#tab/dotnet6)
-In the *applicationinsights.config* file, add `AuthenticationApiKey` to `QuickPulseTelemetryModule`:
+In the *Program.cs* file, add the following namespace:
-```xml
-<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
- <AuthenticationApiKey>YOUR-API-KEY-HERE</AuthenticationApiKey>
-</Add>
+```csharp
+using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
```
-#### ASP.NET Core
+Then add the following service registration:
-For [ASP.NET Core](./asp-net-core.md) applications, follow these instructions.
+```csharp
+// Existing code which includes services.AddApplicationInsightsTelemetry() to enable Application Insights.
+builder.Services.ConfigureTelemetryModule<QuickPulseTelemetryModule>((module, o) => module.AuthenticationApiKey = "YOUR-API-KEY-HERE");
+```
-Modify `ConfigureServices` of your *Startup.cs* file as shown.
+# [.NET 5.0](#tab/dotnet5)
-Add the following namespace:
+In the *Startup.cs* file, add the following namespace:
```csharp using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse;
public void ConfigureServices(IServiceCollection services)
} ```
+# [.NET Framework](#tab/dotnet-framework)
+
+In the *applicationinsights.config* file, add `AuthenticationApiKey` to `QuickPulseTelemetryModule`:
+
+```xml
+<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPulse.QuickPulseTelemetryModule, Microsoft.AI.PerfCounterCollector">
+ <AuthenticationApiKey>YOUR-API-KEY-HERE</AuthenticationApiKey>
+</Add>
+```
++++ For more information on how to configure ASP.NET Core applications, see [Configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configure-or-remove-default-telemetrymodules). #### WorkerService
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
appInsights.defaultClient.context.tags[appInsights.defaultClient.context.keys.cl
appInsights.start(); ```
-### Automatic web snippet injection (preview)
+### Automatic web instrumentation (preview)
-You can use automatic web snippet injection to enable [Application Insights usage experiences](usage-overview.md) and browser diagnostic experiences with a simple configuration. It's an easier alternative to manually adding the JavaScript snippet or npm package to your JavaScript web code.
+Automatic web instrumentation can be enabled for the Node.js server via configuration:
-For node server with configuration, set `enableAutoWebSnippetInjection` to `true`. Alternatively, set the environment variable as `APPLICATIONINSIGHTS_WEB_SNIPPET_ENABLED = true`. Automatic web snippet injection is available in Application Insights Node.js SDK version 2.3.0 or greater. For more information, see [Application Insights Node.js GitHub Readme](https://github.com/microsoft/ApplicationInsights-node.js#automatic-web-snippet-injectionpreview).
+```javascript
+let appInsights = require("applicationinsights");
+appInsights.setup("<connection_string>")
+ .enableWebInstrumentation(true)
+ .start();
+```
+
+or by setting the environment variable `APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_ENABLED = true`.
+
+Web instrumentation is enabled on Node.js server responses when all of the following requirements are met:
+
+- The response has status code `200`.
+- The request method is `GET`.
+- The server response has an HTML `Content-Type`.
+- The server response contains both `<head>` and `</head>` tags.
+- If the response is compressed, it must have only one `Content-Encoding` type, and the encoding type must be one of `gzip`, `br`, or `deflate`.
+- The response doesn't already contain a current or backup web instrumentation CDN endpoint. (See the [current and backup web instrumentation CDN endpoints](https://github.com/microsoft/ApplicationInsights-JS#active-public-cdn-endpoints).)
+
+The web instrumentation CDN endpoint can be changed by setting the environment variable `APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_SOURCE = "web Instrumentation CDN endpoints"`.
+The web instrumentation connection string can be changed by setting the environment variable `APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_CONNECTION_STRING = "web Instrumentation connection string"`.
+
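For example, a sketch of driving both settings from the environment (placeholder values; the SDK is assumed to read these variables when it starts):

```javascript
// Placeholder values: set these before the SDK is set up and started.
process.env.APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_ENABLED = "true";
process.env.APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_SOURCE = "<web instrumentation CDN endpoint>";
process.env.APPLICATIONINSIGHTS_WEB_INSTRUMENTATION_CONNECTION_STRING = "<web instrumentation connection string>";

let appInsights = require("applicationinsights");
appInsights.setup("<connection_string>").start();
```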
+> [!Note]
+> Web instrumentation may slow down server response time, especially when the response size is large or the response is compressed. If some middleware layers are applied, web instrumentation may not work, and the original response is returned.
### Automatic third-party instrumentation
azure-monitor Metrics Store Custom Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md
Save the access token from the response for use in the following HTTP requests.
-d @custommetric.json ```
-1. Change the timestamp and values in the JSON file.
+1. Change the timestamp and values in the JSON file. The `time` value in the JSON file is expected to be in UTC, as shown in the sketch that follows.
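    As an illustration, the body of *custommetric.json* generally follows this shape (the metric, namespace, and dimension values here are placeholders):

    ```json
    {
        "time": "2023-04-12T18:25:00Z",
        "data": {
            "baseData": {
                "metric": "QueueDepth",
                "namespace": "QueueProcessing",
                "dimNames": [ "QueueName" ],
                "series": [
                    {
                        "dimValues": [ "ImagesToProcess" ],
                        "min": 3,
                        "max": 20,
                        "sum": 28,
                        "count": 3
                    }
                ]
            }
        }
    }
    ```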
1. Repeat the previous two steps a few times to create data for several minutes. ## Troubleshooting
azure-monitor App Insights Azure Ad Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/app-insights-azure-ad-api.md
+
+ Title: Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication
+description: Learn how to authenticate and access the Azure Monitor Application Insights APIs using Azure AD
Last updated : 04/11/2023++++
+# Application Insights API Access with Microsoft Azure Active Directory (Azure AD) Authentication
+
+You can submit a query request to a workspace by using the Azure Monitor Log Analytics endpoint `https://api.loganalytics.azure.com`. To access the endpoint, you must authenticate through Azure Active Directory (Azure AD).
+
+>[!Note]
+> The `api.loganalytics.io` endpoint is being replaced by `api.loganalytics.azure.com`. The `api.loganalytics.io` endpoint will continue to be supported for the foreseeable future.
+
+## Authenticate with a demo API key
+
+To quickly explore the API without Azure AD authentication, use the demonstration workspace with sample data, which supports API key authentication.
+
+To authenticate and run queries against the sample workspace, use `DEMO_WORKSPACE` as the {workspace-id} and pass in the API key `DEMO_KEY`.
+
+If either the Application ID or the API key is incorrect, the API service returns a [403](https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error) (Forbidden) error.
+
+The API key `DEMO_KEY` can be passed in three different ways, depending on whether you want to use a header, the URL, or basic authentication:
+
+- **Custom header**: Provide the API key in the custom header `X-Api-Key`.
+- **Query parameter**: Provide the API key in the URL parameter `api_key`.
+- **Basic authentication**: Provide the API key as either username or password. If you provide both, the API key must be in the username.
+
+This example uses the workspace ID and API key in the header:
+
+```
+ POST https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query
+ X-Api-Key: DEMO_KEY
+ Content-Type: application/json
+
+ {
+ "query": "AzureActivity | summarize count() by Category"
+ }
+```
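The same request, sketched in JavaScript with the Fetch API (assumes a runtime with a global `fetch`, such as Node.js 18 or later, in an ES module so top-level `await` is available):

```javascript
// Query the demo workspace, passing the demo API key in the X-Api-Key header.
const response = await fetch(
  "https://api.loganalytics.azure.com/v1/workspaces/DEMO_WORKSPACE/query",
  {
    method: "POST",
    headers: {
      "X-Api-Key": "DEMO_KEY",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({ query: "AzureActivity | summarize count() by Category" })
  }
);
console.log(await response.json());
```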
+
+## Public API endpoint
+
+The public API endpoint is:
+
+```
+ https://api.loganalytics.azure.com/{api-version}/workspaces/{workspaceId}
+```
+where:
+ - **api-version**: The API version. The current version is "v1."
+ - **workspaceId**: Your workspace ID.
+
+The query is passed in the request body.
+
+For example:
+ ```
+ https://api.loganalytics.azure.com/v1/workspaces/1234abcd-def89-765a-9abc-def1234abcde
+
+ Body:
+ {
+ "query": "Usage"
+ }
+```
+
+## Set up authentication
+
+To access the API, you register a client app with Azure AD and request a token.
+
+1. [Register an app in Azure AD](./register-app-for-token.md).
+
+1. On the app's overview page, select **API permissions**.
+1. Select **Add a permission**.
+1. On the **APIs my organization uses** tab, search for **Log Analytics** and select **Log Analytics API** from the list.
+
+ :::image type="content" source="../media/api-register-app/request-api-permissions.png" alt-text="A screenshot that shows the Request API permissions page.":::
+
+1. Select **Delegated permissions**.
+1. Select the **Data.Read** checkbox.
+1. Select **Add permissions**.
+
+ :::image type="content" source="../media/api-register-app/add-requested-permissions.png" alt-text="A screenshot that shows the continuation of the Request API permissions page.":::
+
+Now that your app is registered and has permissions to use the API, grant your app access to your Log Analytics workspace.
+
+1. From your **Log Analytics workspace** overview page, select **Access control (IAM)**.
+1. Select **Add role assignment**.
+
+ :::image type="content" source="../media/api-register-app/workspace-access-control.png" alt-text="A screenshot that shows the Access control page for a Log Analytics workspace.":::
+
+1. Select the **Reader** role and then select **Members**.
+
+ :::image type="content" source="../media/api-register-app/add-role-assignment.png" alt-text="A screenshot that shows the Add role assignment page for a Log Analytics workspace.":::
+
+1. On the **Members** tab, choose **Select members**.
+1. Enter the name of your app in the **Select** box.
+1. Select your app and choose **Select**.
+1. Select **Review + assign**.
+
+ :::image type="content" source="../media/api-register-app/select-members.png" alt-text="A screenshot that shows the Select members pane on the Add role assignment page for a Log Analytics workspace.":::
+
+1. After you finish the Active Directory setup and workspace permissions, request an authorization token.
+
+>[!Note]
+> For this example, we applied the Reader role. This role is one of many built-in roles and might include more permissions than you require. More granular roles and permissions can be created. For more information, see [Manage access to Log Analytics workspaces](../../logs/manage-access.md).
+
+## Request an authorization token
+
+Before you begin, make sure you have all the values required to make the request successfully. All requests require:
+- Your Azure AD tenant ID.
+- Your workspace ID.
+- Your Azure AD client ID for the app.
+- An Azure AD client secret for the app.
+
+The Log Analytics API supports Azure AD authentication with three different [Azure AD OAuth2](/azure/active-directory/develop/active-directory-protocols-oauth-code) flows:
+- Client credentials
+- Authorization code
+- Implicit
+
+### Client credentials flow
+
+In the client credentials flow, the token is used with the Log Analytics endpoint. A single request is made to receive a token by using the credentials provided for your app in the previous step when you [register an app in Azure AD](./register-app-for-token.md).
+
+Use the `https://api.loganalytics.azure.com` endpoint.
+
+#### Client credentials token URL (POST request)
+
+```http
+ POST /<your-tenant-id>/oauth2/token
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ grant_type=client_credentials
+ &client_id=<app-client-id>
+ &resource=https://api.loganalytics.io
+ &client_secret=<app-client-secret>
+```
+
+A successful request receives an access token in the response:
+
+```http
+ {
+ token_type": "Bearer",
+ "expires_in": "86399",
+ "ext_expires_in": "86399",
+ "access_token": ""eyJ0eXAiOiJKV1QiLCJ.....Ax"
+ }
+```
+
+Use the token in requests to the Log Analytics endpoint:
+
+```http
+ POST /v1/workspaces/<workspace-id>/query?timespan=P1D
+ Host: https://api.loganalytics.azure.com
+ Content-Type: application/json
+ Authorization: Bearer <your access token>
+
+ Body:
+ {
+ "query": "AzureActivity |summarize count() by Category"
+ }
+```
+
+Example response:
+
+```http
+ {
+ "tables": [
+ {
+ "name": "PrimaryResult",
+ "columns": [
+ {
+ "name": "OperationName",
+ "type": "string"
+ },
+ {
+ "name": "Level",
+ "type": "string"
+ },
+ {
+ "name": "ActivityStatus",
+ "type": "string"
+ }
+ ],
+ "rows": [
+ [
+ "Metric Alert",
+ "Informational",
+ "Resolved",
+ ...
+ ],
+ ...
+ ]
+ },
+ ...
+ ]
+ }
+```
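Put together, a minimal Node.js sketch of this flow might look like the following (it assumes Node.js 18 or later for the global `fetch`, and that the tenant ID, client ID, client secret, and workspace ID come from environment variables you define):

```javascript
async function queryLogs() {
  // Exchange the app's credentials for a bearer token.
  const tokenResponse = await fetch(
    `https://login.microsoftonline.com/${process.env.AZURE_TENANT_ID}/oauth2/token`,
    {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: new URLSearchParams({
        grant_type: "client_credentials",
        client_id: process.env.AZURE_CLIENT_ID,
        resource: "https://api.loganalytics.io",
        client_secret: process.env.AZURE_CLIENT_SECRET
      })
    }
  );
  const { access_token } = await tokenResponse.json();

  // Use the token to run a query against the workspace.
  const queryResponse = await fetch(
    `https://api.loganalytics.azure.com/v1/workspaces/${process.env.WORKSPACE_ID}/query`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${access_token}`
      },
      body: JSON.stringify({ query: "AzureActivity | summarize count() by Category" })
    }
  );
  console.log(await queryResponse.json());
}

queryLogs().catch(console.error);
```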
+
+### Authorization code flow
+
+The main OAuth2 flow supported is through [authorization codes](/azure/active-directory/develop/active-directory-protocols-oauth-code). This method requires two HTTP requests to acquire a token with which to call the Azure Monitor Log Analytics API. There are two URLs, with one endpoint per request. Their formats are described in the following sections.
+
+#### Authorization code URL (GET request)
+
+```http
+ GET https://login.microsoftonline.com/YOUR_AAD_TENANT/oauth2/authorize?
+ client_id=<app-client-id>
+ &response_type=code
+ &redirect_uri=<app-redirect-uri>
+ &resource=https://api.loganalytics.io
+```
+
+When a request is made to the authorize URL, the `client_id` is the application ID from your Azure AD app, copied from the app's properties menu. The `redirect_uri` is the homepage/login URL from the same Azure AD app. When a request is successful, this endpoint redirects you to the sign-in page you provided at sign-up, with the authorization code appended to the URL. See the following example:
+
+```http
+ http://<app-client-id>/?code=AUTHORIZATION_CODE&session_state=STATE_GUID
+```
+
+At this point, you've obtained an authorization code, which you now need to request an access token.
+
+#### Authorization code token URL (POST request)
+
+```http
+ POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ grant_type=authorization_code
+ &client_id=<app client id>
+ &code=<auth code from GET request>
+ &redirect_uri=<app-redirect-uri>
+ &resource=https://api.loganalytics.io
+ &client_secret=<app-client-secret>
+```
+
+All values are the same as before, with some additions. The authorization code is the same code you received in the previous request after a successful redirect. The code is combined with the key obtained from the Azure AD app. If you didn't save the key, you can delete it and create a new one from the keys tab of the Azure AD app menu. The response is a JSON string that contains the token with the following schema. Types are indicated for the token values.
+
+Response example:
+
+```http
+ {
+ "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax",
+ "expires_in": "3600",
+ "ext_expires_in": "1503641912",
+ "id_token": "not_needed_for_log_analytics",
+ "not_before": "1503638012",
+ "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az",
+ "resource": "https://api.loganalytics.io",
+ "scope": "Data.Read",
+ "token_type": "bearer"
+ }
+```
+
+The access token portion of this response is what you present to the Log Analytics API in the `Authorization: Bearer` header. You can also use the refresh token in the future to acquire a new `access_token` and `refresh_token` when yours have gone stale. For this request, the format and endpoint are:
+
+```http
+ POST /YOUR_AAD_TENANT/oauth2/token HTTP/1.1
+ Host: https://login.microsoftonline.com
+ Content-Type: application/x-www-form-urlencoded
+
+ client_id=<app-client-id>
+ &refresh_token=<refresh-token>
+ &grant_type=refresh_token
+ &resource=https://api.loganalytics.io
+ &client_secret=<app-client-secret>
+```
+
+Response example:
+
+```http
+ {
+ "token_type": "Bearer",
+ "expires_in": "3600",
+ "expires_on": "1460404526",
+ "resource": "https://api.loganalytics.io",
+ "access_token": "eyJ0eXAiOiJKV1QiLCJ.....Ax",
+ "refresh_token": "eyJ0esdfiJKV1ljhgYF.....Az"
+ }
+```
+
+### Implicit code flow
+
+The Log Analytics API supports the OAuth2 [implicit flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant). For this flow, only a single request is required, but no refresh token can be acquired.
+
+#### Implicit code authorize URL
+
+```http
+ GET https://login.microsoftonline.com/YOUR_AAD_TENANT/oauth2/authorize?
+ client_id=<app-client-id>
+ &response_type=token
+ &redirect_uri=<app-redirect-uri>
+ &resource=https://api.loganalytics.io
+```
+
+A successful request produces a redirect to your redirect URI with the token in the URL:
+
+```http
+ http://YOUR_REDIRECT_URI/#access_token=YOUR_ACCESS_TOKEN&token_type=Bearer&expires_in=3600&session_state=STATE_GUID
+```
+
+This `access_token` can be used as the `Authorization: Bearer` header value when it's passed to the Log Analytics API to authorize requests.
+
+## More information
+
+You can find documentation about OAuth2 with Azure AD here:
+ - [Azure AD authorization code flow](/azure/active-directory/develop/active-directory-protocols-oauth-code)
+ - [Azure AD implicit grant flow](/azure/active-directory/develop/active-directory-dev-understanding-oauth2-implicit-grant)
+ - [Azure AD S2S client credentials flow](/azure/active-directory/develop/active-directory-protocols-oauth-service-to-service)
+
+## Next steps
+
+- [Request format](./request-format.md)
+- [Response format](./response-format.md)
+- [Querying logs for Azure resources](./azure-resource-queries.md)
+- [Batch queries](./batch-queries.md)
azure-monitor Profiler Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-data.md
Title: Generate load and view Application Insights Profiler data
description: Generate load to your Azure service to view the Profiler data ms.contributor: charles.weininger Previously updated : 07/15/2022- Last updated : 04/11/2023+ # View Application Insights Profiler data
Methods such as **SqlCommand.Execute** indicate that the code is waiting for a d
However, logically, the thread that did the **AWAIT** is "blocked", waiting for the operation to finish. The **AWAIT\_TIME** statement indicates the blocked time, waiting for the task to finish.
+If the **AWAIT_TIME** appears to be in framework code instead of your code, the Profiler could be showing:
+- The framework code used to execute the **AWAIT**
+- Code used for recording telemetry about the **AWAIT**
+
+You can uncheck the **Framework dependencies** checkbox at the top of the page to show only your code and make it easier to see where the **AWAIT** originates.
+ ### Blocked time **BLOCKED_TIME** indicates that the code is waiting for another resource to be available. For example, it might be waiting for:
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/snapshot-debugger/snapshot-debugger.md
reviewer: cweining Previously updated : 08/18/2022 Last updated : 04/10/2023 # Debug snapshots on exceptions in .NET apps
You can view debug snapshots in the portal to see the call stack and inspect var
Snapshot collection is available for:
-* .NET Framework and ASP.NET applications running .NET Framework [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) or later.
-* .NET Core and ASP.NET Core applications running .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) on Windows.
-* .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) applications on Windows.
+- .NET Framework and ASP.NET applications running .NET Framework 4.6.2 and newer versions.
+- .NET and ASP.NET applications running .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer versions on Windows.
+- .NET [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) (and newer versions) applications on Windows.
We don't recommend using .NET Core versions prior to LTS since they're out of support.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
## March 2023
-* [Disable showmount](disable-showmount.md) (Preview)
+* [Disable `showmount`](disable-showmount.md) (Preview)
- By default, Azure NetApp Files enables [showmount functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients tp use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use showmount to see what is being returned. In those scenarios, you might want to disable showmount on Azure NetApp Files. This setting allows you to enable/disable showmount for your NFS-enabled storage endpoints.
+ By default, Azure NetApp Files enables [`showmount` functionality](/windows-server/administration/windows-commands/showmount) to show NFS exported paths. The setting allows NFS clients to use the `showmount -e` command to see a list of exports available on the Azure NetApp Files NFS-enabled storage endpoint. This functionality might cause security scanners to flag the Azure NetApp Files NFS service as having a vulnerability because these scanners often use `showmount` to see what is being returned. In those scenarios, you might want to disable `showmount` on Azure NetApp Files. This setting allows you to enable/disable `showmount` for your NFS-enabled storage endpoints.
* [Active Directory support improvement](create-active-directory-connections.md#preferred-server-ldap) (Preview)
Azure NetApp Files is updated regularly. This article provides a summary about t
* [Cross region replication enhancement: snapshot revert on replication source volume](snapshots-revert-volume.md)
- When using cross-region replication, reverting a snapshot in a source or destination volume with an active replication configuration was not initially supported. Restoring a snapshot on the source volume from the latest local snapshot was not possible. Instead you had to use either client copy using the .snapshot directory, single file snapshot restore, or needed to break the replication in order to apply a volume revert. With this new feature, a snapshot revert on a replication source volume is possible provided you select a snapshot that is newer than the latest SnapMirror snapshot. This enables data recovery (revert) from a snapshot while cross region replication stays active, improving data protection SLA.
+    When using cross-region replication, reverting a snapshot on a source or destination volume with an active replication configuration wasn't initially supported, and restoring the source volume from the latest local snapshot wasn't possible. Instead, you had to copy data on the client by using the `.snapshot` directory, use single-file snapshot restore, or break the replication to apply a volume revert. With this new feature, a snapshot revert on a replication source volume is possible, provided you select a snapshot that's newer than the latest SnapMirror snapshot. This enables data recovery (revert) from a snapshot while cross-region replication stays active, improving the data protection SLA.
* [Access-based enumeration](azure-netapp-files-create-volumes-smb.md#access-based-enumeration) (Preview)
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
Azure VMware Solution private clouds are provisioned with a vCenter Server and N
### View the vCenter Server privileges
-Use the steps to view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
+Use the following steps to view the privileges granted to the Azure VMware Solution CloudAdmin role on your Azure VMware Solution private cloud vCenter.
1. Sign in to the vSphere Client and go to **Menu** > **Administration**. 1. Under **Access Control**, select **Roles**.
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
There are several networking considerations to review before you set up your Azu
## Azure VMware Solution compatibility with AS-Path Prepend
-Azure VMware Solution is compatible with AS-Path Prepend for redundant ExpressRoute configurations, with the caveat of not honoring the outbound path selection from Azure toward on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and you don't meet the listed [prerequisites](#prerequisites), you might experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution.
+Azure VMware Solution has considerations relating to the use of AS-Path Prepend for redundant ExpressRoute configurations. If you're running two or more ExpressRoute paths between on-premises and Azure, consider the following guidance for influencing traffic out of Azure VMware Solution towards your on-premises location via ExpressRoute GlobalReach.
-The connectivity problem happens when Azure VMware Solution doesn't notice AS-Path Prepend and uses equal-cost multipath (ECMP) routing to send traffic toward your environment over both ExpressRoute circuits. That action causes problems with stateful firewall inspection.
+Due to asymmetric routing, connectivity issues can occur when Azure VMware Solution doesn't observe AS-Path Prepend and therefore uses equal-cost multipath (ECMP) routing to send traffic toward your environment over both ExpressRoute circuits. This behavior can cause problems with stateful firewall inspection devices placed behind existing ExpressRoute circuits.
### Prerequisites
-For AS-Path Prepend, verify that all of the following listed connections are true:
+For AS-Path Prepend, consider the following:
> [!div class="checklist"]
+> * The key point is that you must prepend **public** ASNs to influence how Azure VMware Solution routes traffic back to on-premises. If you prepend using a _private_ ASN, Azure VMware Solution ignores the prepend, and the ECMP behavior described above occurs. Even if you operate a private BGP ASN on-premises, you can still configure your on-premises devices to use a public ASN when prepending routes outbound, to ensure compatibility with Azure VMware Solution.
> * Both or all circuits are connected to Azure VMware Solution through ExpressRoute Global Reach. > * The same netblocks are being advertised from two or more circuits.
-> * Stateful firewalls are in the network path.
-> * You're using AS-Path Prepend to force Azure to prefer one path over others.
-
-Use either 2-byte or 4-byte public ASN numbers, and make sure that they're compatible with Azure VMware Solution. If you don't own a public ASN for prepending, open a [Microsoft support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to view options.
+> * You wish to use AS-Path Prepend to force Azure VMware Solution to prefer one circuit over another.
+> * Use either 2-byte or 4-byte public ASNs. If you don't own a public ASN for prepending, open a [Microsoft support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to explore further options.
## Management VMs and default routes from on-premises
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Azure VMware Solution SLA guarantees that Azure VMware management tools (vCenter Server and NSX Manager) will be available at least 99.9% of the time. Previously updated : 4/6/2023 Last updated : 4/11/2023
The following table provides a detailed list of roles and responsibilities betwe
| -- | - | | Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | | Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, RiverMeadow, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
+| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solutions/products:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI</li></ul> |
## Next steps
azure-vmware Plan Private Cloud Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/plan-private-cloud-deployment.md
description: Learn how to plan your Azure VMware Solution deployment.
Previously updated : 10/26/2022 Last updated : 4/11/2023 # Plan the Azure VMware Solution deployment
Azure VMware Solution requires a /22 CIDR network, for example, `10.0.0.0/22`. T
## Define the IP address segment for VM workloads
-Like with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there's often a combination of L2 extended segments from on-premises and local NSX-T network segments.
+As with any VMware vSphere environment, the VMs must connect to a network segment. As the production deployment of Azure VMware Solution expands, there's often a combination of L2 extended segments from on-premises and local NSX-T Data Center network segments.
For the initial deployment, identify a single network segment (IP network), for example, `10.0.4.0/24`. This network segment is used primarily for testing purposes during the initial deployment. The address block shouldn't overlap with any network segments on-premises or within Azure and shouldn't be within the /22 network segment already defined.
azure-vmware Tutorial Access Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-access-private-cloud.md
Title: Tutorial - Access your private cloud
description: Learn how to access an Azure VMware Solution private cloud Previously updated : 4/6/2023 Last updated : 4/11/2023
In this tutorial, you learn how to:
1. From the jump box, sign in to vSphere Client with VMware vCenter Server SSO using a cloudadmin username and verify that the user interface displays successfully.
-1. In the Azure portal, select your private cloud, and then **Manage** > **Identity**.
+1. In the Azure portal, select your private cloud, and then **Manage** > **VMware credentials**.
The URLs and user credentials for private cloud vCenter Server and NSX-T Manager display.
In this tutorial, you learned how to:
> [!div class="checklist"] > * Create a Windows VM to use to connect to vCenter Server > * Login to vCenter Server from your VM
+> * Login to NSX-T Manager from your VM
Continue to the next tutorial to learn how to create a virtual network to set up local management for your private cloud clusters.
backup Blob Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-restore.md
Title: Restore Azure Blobs description: Learn how to restore Azure Blobs. Previously updated : 02/20/2023 Last updated : 04/11/2023
To initiate a restore through the Backup center, follow these steps:
- For vaulted backup, choose a recovery point from which you want to perform the restore.
+ :::image type="content" source="./media/blob-restore/select-backup-type-for-restore-inline.png" alt-text="Screenshot shows the restore options for blob backup." lightbox="./media/blob-restore/select-backup-type-for-restore-expanded.png":::
+ >[!NOTE] > The time mentioned here is your local time.
To initiate a restore through the Backup center, follow these steps:
>[!Note] >The vault must have the *Storage account backup contributor* role assigned on the target storage account. Select **Validate** to ensure that the required permissions to perform the restore are assigned. Once done, proceed to the next tab.
+ :::image type="content" source="./media/blob-restore/choose-options-for-vaulted-backup.png" alt-text="Screenshot shows the option to choose for vaulted backup." lightbox="./media/blob-restore/choose-options-for-vaulted-backup.png":::
+ 1. Once you finish specifying what blobs to restore, continue to the **Review + restore** tab, and select **Restore** to initiate the restore. 1. **Track restore**: Use the **Backup Jobs** view to track the details and status of restores. To do this, navigate to **Backup Center** > **Backup Jobs**. The status will show **In progress** while the restore is being performed.
baremetal-infrastructure Concepts Baremetal Infrastructure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/concepts-baremetal-infrastructure-overview.md
Title: What is BareMetal Infrastructure on Azure?
description: Provides an overview of the BareMetal Infrastructure on Azure. Previously updated : 09/27/2021 Last updated : 04/01/2023 # What is BareMetal Infrastructure on Azure?
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
Title: Connect BareMetal Infrastructure instances in Azure
description: Learn how to identify and interact with BareMetal instances in the Azure portal or Azure CLI. Previously updated : 07/13/2021 Last updated : 04/01/2023 # Connect BareMetal Infrastructure instances in Azure
baremetal-infrastructure Know Baremetal Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/know-baremetal-terms.md
Title: Know the terms of Azure BareMetal Infrastructure description: Know the terms of Azure BareMetal Infrastructure. Previously updated : 07/13/2021 Last updated : 04/01/2023 # Know the terms for BareMetal Infrastructure
baremetal-infrastructure About Nc2 On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/about-nc2-on-azure.md
description: Learn about Nutanix Cloud Clusters on Azure and the benefits it off
Previously updated : 10/13/2022 Last updated : 04/01/2023 # About Nutanix Cloud Clusters on Azure
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/architecture.md
description: Learn about the architecture of several configurations of BareMetal
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Architecture of BareMetal Infrastructure for Nutanix
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/faq.md
description: Questions frequently asked about NC2 on Azure
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Frequently asked questions about NC2 on Azure
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/get-started.md
description: Learn how to sign up, set up, and use Nutanix Cloud Clusters on Azu
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Getting started with Nutanix Cloud Clusters on Azure
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/nc2-baremetal-overview.md
description: Learn about the features BareMetal Infrastructure offers for NC2 wo
Previously updated : 10/13/2022 Last updated : 04/01/2023 # What is BareMetal Infrastructure for Nutanix Cloud Clusters on Azure?
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/requirements.md
description: Learn what you need to run NC2 on Azure, including Azure, Nutanix,
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Requirements
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/skus.md
description: Learn about SKU options for NC2 on Azure, including core, RAM, stor
Previously updated : 10/13/2022 Last updated : 04/01/2023 # SKUs
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md
description: Learn about topologies and constraints for NC2 on Azure.
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Solution design
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/supported-instances-and-regions.md
description: Learn about instances and regions supported for NC2 on Azure.
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Supported instances and regions
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/use-cases-and-supported-scenarios.md
description: Learn about use cases and supported scenarios for NC2 on Azure, inc
Previously updated : 10/13/2022 Last updated : 04/01/2023 # Use cases and supported scenarios
bastion Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/diagnostic-logs.md
As users connect to workloads using Azure Bastion, Bastion can log diagnostics of the remote sessions. You can then use the diagnostics to view which users connected to which workloads, at what time, from where, and other such relevant logging information. In order to use the diagnostics, you must enable diagnostics logs on Azure Bastion. This article helps you enable diagnostics logs, and then view the logs.
+>[!NOTE]
+>To view all resource logs available for Bastion, select each of the resource log categories. If you don't select the **All Logs** setting, you won't see all the available resource logs.
+ ## <a name="enable"></a>Enable the resource log 1. In the [Azure portal](https://portal.azure.com), go to your Azure Bastion resource and select **Diagnostics settings** from the Azure Bastion page.
batch Batch Job Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-job-schedule.md
Title: Schedule your jobs
-description: Use job scheduling to manage your tasks.
+ Title: Schedule Batch jobs for efficiency
+description: Learn how to schedule Batch jobs to manage your tasks, prioritize jobs to run first, and minimize resource usage.
Previously updated : 07/16/2021 Last updated : 04/10/2023
-# Schedule jobs for efficiency
+# Schedule Batch jobs for efficiency
-Scheduling Batch jobs enables you to prioritize the jobs you want to run first, while taking into account [tasks that have dependencies on other tasks](batch-task-dependencies.md). By scheduling your jobs, you can make sure you use the least amount of resources. Nodes can be decommissioned when not needed, and tasks that are dependent on other tasks are spun up just in time optimizing the workflows. You can also set jobs to autocomplete, since only one job at a time runs, and a new one won't start until the previous one completes.
+Scheduling Batch jobs lets you prioritize the jobs you want to run first, while taking into account [task dependencies](batch-task-dependencies.md). Scheduling also helps you minimize resource usage: nodes can be decommissioned when not needed, and tasks that depend on other tasks are spun up just in time, optimizing the workflows. Because only one job runs at a time, jobs can be set to autocomplete, and a new job doesn't start until the previous one completes.
-The tasks you schedule using the job manager task are associated with a job. The job manager task will create tasks for the job. To do so, the job manager task needs to authenticate with the Batch account. Use the the AZ_BATCH_AUTHENTICATION_TOKEN access token. The token will allow access to the rest of the job.
+The tasks you schedule by using the job manager task are associated with a job. The job manager task creates the tasks for the job. To do so, it needs to authenticate with the Batch account by using the *AZ_BATCH_AUTHENTICATION_TOKEN* access token, which allows access to the rest of the job.
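+
+For example, you can create a job schedule with a job manager task from the command line. The following is a minimal sketch; the schedule ID, task ID, command line, and pool name (`mypool`, assumed to already exist) are placeholders:
+
+```azurecli
+az batch job-schedule create \
+    --id myjobschedule \
+    --pool-id mypool \
+    --recurrence-interval PT1H \
+    --job-manager-task-id myjobmanager \
+    --job-manager-task-command-line "/bin/bash -c 'echo schedule run'"
+```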
To manage a job using the Azure CLI, see [az batch job-schedule](/cli/azure/batch/job-schedule). You can also create job schedules in the Azure portal. ## Schedule a job in the Azure portal
-1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select the Batch account you want to schedule jobs in. 1. In the left navigation pane, select **Job schedules**. 1. Select **Add** to create a new job schedule.
-1. In the **Basic form**, enter the following information:
+
+ :::image type="content" source="media/batch-job-schedule/add-job-schedule.png" alt-text="Screenshot of the Add button for job schedules.":::
+
+1. Under **Basic form**, enter the following information:
- **Job schedule ID**: A unique identifier for this job schedule. - **Display name**: This name is optional and doesn't have to be unique. It has a maximum length of 1024 characters.+
+ :::image type="content" source="media/batch-job-schedule/add-job-schedule-01.png" alt-text="Screenshot of the Basic form section of the job schedule options.":::
+
+1. In the **Schedule** section, enter the following information:
- **Do not run until**: Specifies the earliest time the job will run. If you don't set this, the schedule becomes ready to run jobs immediately.
- - **Do not run after**: No jobs will run after the time you enter here. If you don't specify a time, then you are creating a recurring job schedule, which remains active until you explicitly terminate it.
- - **Recurrence interval**: Select **Enabled** if you want to specify the amount of time between jobs. You can have only one job at a time scheduled, so if it is time to create a new job under a job schedule, but the previous job is still running, the Batch service won't create the new job until the previous job finishes.
+ - **Do not run after**: No jobs will run after the time you enter here. If you don't specify a time, then you're creating a recurring job schedule, which remains active until you explicitly terminate it.
+ - **Recurrence interval**: Select **Enabled** if you want to specify the amount of time between jobs. You can have only one job at a time scheduled, so if it's time to create a new job under a job schedule but the previous job is still running, the Batch service won't create the new job until the previous job finishes.
- **Start window**: Select **Custom** if you'd like to specify the time interval within which a job must be created. If a job isn't created within this window, no new job will be created until the next recurrence of the schedule.
- :::image type="content" source="media/batch-job-schedule/add-job-schedule-02.png" alt-text="Screenshot of the Add job schedule options in the Azure portal.":::
+ :::image type="content" source="media/batch-job-schedule/add-job-schedule-02.png" alt-text="Screenshot of the Schedule section of the job schedule options.":::
+
+1. In the **Job Specification** section, enter the following information:
+ - **Pool ID**: Select the pool where you want the job to run. To choose from a list of pools in your Batch account, select **Update**.
+ - **Job configuration task**: Select **Update** to name and configure the job manager task, as well as the job preparation task and job release tasks, if you're using them.
+
+ :::image type="content" source="media/batch-job-schedule/add-job-schedule-03.png" alt-text="Screenshot of the job specification options for a new job schedule.":::
-1. At the bottom of the basic form, specify the pool on which you want the job to run. To choose from a list of pools in your Batch account, select **Update**.
-1. Along with the **Pool ID**, enter the following information:
- - **Job configuration task**: Select **Update** to name and configure the job manager task, as well as the job preparation task and job release tasks, if you are using them.
+1. In the **Advanced settings** section, enter the following information:
- **Display name**: This name is optional and doesn't have to be unique. It has a maximum length of 1024 characters. - **Priority**: Use the slider to set a priority for the job, or enter a value in the box. - **Max wall clock time**: Select **Custom** if you want to set a maximum amount of time for the job to run. If you do so, Batch will terminate the job if it doesn't complete within that time frame.
- - **Max task retry count**: Select **Custom** if you want to specify the number of times a task can be retried, or **Unlimited** if you want the task to be tried for as many times as is needed. This is not the same as the number of retries an API call might have.
- - **When all tasks complete**: The default is NoAction, but you can select **TerminateJob** if you prefer to terminate the job when all tasks have been completed (or if there are no tasks in the job).
- - **When a task fails**: A task fails if the retry count is exhausted or there was an error when starting the task. The default is NoAction, but you can select **PerformExitOptionsJobAction** if you prefer to take the action associated with the task's exit condition if it fails.
+ - **Max task retry count**: Select **Custom** if you want to specify the number of times a task can be retried, or **Unlimited** if you want the task to be tried for as many times as is needed. This isn't the same as the number of retries an API call might have.
+ - **When all tasks complete**: The default is *NoAction*, but you can select *TerminateJob* if you prefer to terminate the job when all tasks have been completed (or if there are no tasks in the job).
+ - **When a task fails**: A task fails if the retry count is exhausted or there's an error when starting the task. The default is *NoAction*, but you can select *PerformExitOptionsJobAction* if you prefer to take the action associated with the task's exit condition if it fails.
- :::image type="content" source="media/batch-job-schedule/add-job-schedule-03.png" alt-text="Screenshot of the job specification options for a new job schedule in the Azure portal.":::
+ :::image type="content" source="media/batch-job-schedule/add-job-schedule-04.png" alt-text="Screenshot of the Advanced settings for a new job schedule.":::
1. Select **Save** to create your job schedule.
batch Quick Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-terraform.md
+
+ Title: 'Quickstart: Create an Azure Batch account using Terraform'
+description: 'In this article, you create an Azure Batch account using Terraform'
++ Last updated : 4/1/2023+++++
+# Quickstart: Create an Azure Batch account using Terraform
+
+Get started with [Azure Batch](/azure/batch/batch-technical-overview) by using Terraform to create a Batch account, including storage. You need a Batch account to create compute resources (pools of compute nodes) and Batch jobs. You can link an Azure Storage account with your Batch account. This pairing is useful to deploy applications and store input and output data for most real-world workloads.
+
+After completing this quickstart, you'll understand the key concepts of the Batch service and be ready to try Batch with more realistic workloads at larger scale.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random value for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random value using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create an Azure Storage account using [azurerm_storage_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/storage_account)
+> * Create an Azure Batch account using [azurerm_batch_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/batch_account)
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-batch-account-with-storage). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-batch-account-with-storage/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-batch-account-with-storage/providers.tf)]
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-batch-account-with-storage/main.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-batch-account-with-storage/variables.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-batch-account-with-storage/outputs.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
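+
+If you prefer to run the steps yourself rather than follow the sections above, the standard Terraform workflow is, as a minimal sketch:
+
+```bash
+# Initialize the working directory and download the required providers.
+terraform init -upgrade
+
+# Create an execution plan and save it to a file.
+terraform plan -out main.tfplan
+
+# Apply the saved execution plan to create the resources.
+terraform apply main.tfplan
+```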
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the Azure resource group name.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Batch account name.
+
+ ```console
+ batch_name=$(terraform output -raw batch_name)
+ ```
+
+1. Run [az batch account show](/cli/azure/batch/account#az-batch-account-show) to display information about the new Batch account.
+
+ ```azurecli
+ az batch account show \
+ --resource-group $resource_group_name \
+ --name $batch_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the Azure resource group name.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Batch account name.
+
+ ```console
+ $batch_name=$(terraform output -raw batch_name)
+ ```
+
+1. Run [Get-AzBatchAccount](/powershell/module/az.batch/get-azbatchaccount) to display information about the new service.
+
+ ```azurepowershell
+ Get-AzBatchAccount -ResourceGroupName $resource_group_name `
+ -Name $batch_name
+ ```
+++
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Run your first Batch job with the Azure CLI](/azure/batch/quick-create-cli)
cognitive-services Computer Vision How To Install Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md
Previously updated : 01/18/2023 Last updated : 03/02/2023 keywords: on-premises, OCR, Docker, container
The JSON response object has the same object graph as the asynchronous version.
For an example use-case, see the <a href="https://aka.ms/ts-read-api-types" target="_blank" rel="noopener noreferrer">TypeScript sandbox here </a> and select **Run** to visualize its ease-of-use.
+## Run the container disconnected from the internet
++ ## Stop the container [!INCLUDE [How to stop the container](../../../includes/cognitive-services-containers-stop.md)]
cognitive-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-container-howto.md
Previously updated : 01/18/2023 Last updated : 03/02/2023 keywords: on-premises, Docker, container
After the log is uploaded, [review the endpoint](./luis-concept-review-endpoint-
[!INCLUDE [Container API documentation](../../../includes/cognitive-services-containers-api-documentation.md)]
+## Run the container disconnected from the internet
++ ## Stop the container To shut down the container, in the command-line environment where the container is running, press **Ctrl+C**.
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
Previously updated : 01/18/2023 Last updated : 03/02/2023 keywords: on-premises, Docker, container
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/)
# [Speech-to-text](#tab/stt)
+### Run the container connected to the internet
+ To run the standard speech-to-text container, execute the following `docker run` command: ```bash
This command:
> To install GStreamer in a container, > follow Linux instructions for GStreamer in [Use codec compressed audio input with the Speech SDK](how-to-use-codec-compressed-audio-input-streams.md).
-#### Diarization on the speech-to-text output
+### Run the container disconnected from the internet
++
+The speech-to-text container provides default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the local machine directory ownership (`user:group`) is set to `nonroot:nonroot` before running the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+
+### Diarization on the speech-to-text output
Diarization is enabled by default. To get diarization in your response, use `diarize_speech_config.set_service_property`.
Diarization is enabled by default. To get diarization in your response, use `dia
> "Identity" mode returns `"SpeakerId": "Customer"` or `"SpeakerId": "Agent"`. > "Anonymous" mode returns `"SpeakerId": "Speaker 1"` or `"SpeakerId": "Speaker 2"`.
-#### Analyze sentiment on the speech-to-text output
+### Analyze sentiment on the speech-to-text output
Starting in v2.6.0 of the speech-to-text container, you should use Language service 3.0 API endpoint instead of the preview one. For example:
This command:
* Performs the same steps as the preceding command. * Stores a Language service API endpoint and key, for sending sentiment analysis requests.
-#### Phraselist v2 on the speech-to-text output
+### Phraselist v2 on the speech-to-text output
Starting in v2.6.0 of the speech-to-text container, you can get the output with your own phrases, either the whole sentence or phrases in the middle. For example, *the tall man* in the following sentence:
ApiKey={API_KEY}
Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container. +
+### Run the container disconnected from the internet
+
+To use this container disconnected from the internet, you must first request access by filling out an application and purchasing a commitment plan. For more information, see [Use Docker containers in disconnected environments](../containers/disconnected-containers.md).
+
+To prepare and configure the Custom Speech-to-Text container, you need two separate Speech service resources:
+
+1. A regular Azure Speech Service resource that's configured to use either the "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This resource is used to train, download, and configure your custom speech models for use in your container.
+1. An Azure Speech Service resource that's configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This resource is used to download the disconnected container license file that's required to run the container in disconnected mode.
+
+Download the Docker container and run it to get the required speech model, as [described above](#get-the-container-image-with-docker-pull), using the regular Azure Speech resource. Next, download your disconnected license file.
+
+The `DownloadLicense=True` parameter in your `docker run` command downloads a license file that enables your Docker container to run when it isn't connected to the internet. The license file also contains an expiration date, after which it's no longer valid for running the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a Form Recognizer container.
+
+| Placeholder | Value | Format or example |
+|-|-|-|
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text` |
+| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` |
+| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
+| `{API_KEY}` | The key for your Speech resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 \
+-v {LICENSE_MOUNT} \
+{IMAGE} \
+eula=accept \
+billing={ENDPOINT_URI} \
+apikey={API_KEY} \
+DownloadLicense=True \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY}
+```
+
+Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
+
+Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
+
+| Placeholder | Value | Format or example |
+|-|-|-|
+| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text` |
+| `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
+| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
+| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` |
+| `{OUTPUT_PATH}` | The output path for logging [usage records](../containers/disconnected-containers.md#usage-records). | `/host/output:/path/to/output/directory` |
+| `{MODEL_PATH}` | The path where the model is located. | `/path/to/model/` |
+| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
+| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
+
+```bash
+docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
+-v {LICENSE_MOUNT} \
+-v {OUTPUT_PATH} \
+-v {MODEL_PATH} \
+{IMAGE} \
+eula=accept \
+Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
+Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
+```
+
+The [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) container provides default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the local machine directory ownership (`user:group`) is set to `nonroot:nonroot` before running the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
+ # [Neural text-to-speech](#tab/ntts) To run the neural text-to-speech container, execute the following `docker run` command:
This command:
* Exposes TCP port 5000 and allocates a pseudo-TTY for the container. * Automatically removes the container after it exits. The container image is still available on the host computer. +
+### Run the container disconnected from the internet
+++
+The neural text-to-speech container provides default directories for writing the license file and billing log at runtime: `/license` and `/output`, respectively.
+
+When you mount these directories to the container with the `docker run -v` command, make sure the local machine directory ownership (`user:group`) is set to `nonroot:nonroot` before running the container.
+
+Below is a sample command to set file/directory ownership.
+
+```bash
+sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
+```
++ # [Speech language identification](#tab/lid) To run the Speech language identification container, execute the following `docker run` command:
Increasing the number of concurrent calls can affect reliability and latency. Fo
> [!IMPORTANT] > The `Eula`, `Billing`, and `ApiKey` options must be specified to run the container. Otherwise, the container won't start. For more information, see [Billing](#billing).
-## Run the container in disconnected environments
-
-You must request access to use containers disconnected from the internet. For more information, see [Request access to use containers in disconnected environments](../containers/disconnected-containers.md#request-access-to-use-containers-in-disconnected-environments).
-
-For Speech Service container configuration, see [Disconnected containers](../containers/disconnected-containers.md#speech-containers).
- ## Query the container's prediction endpoint > [!NOTE]
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Previously updated : 02/27/2022 Last updated : 04/06/2023
Containers enable you to run Cognitive Services APIs in your own environment, an
* [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md) * [Language Detection](../language-service/language-detection/how-to/use-containers.md) * [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md)-
-Disconnected container usage is also available for the following Applied AI service:
- * [Form Recognizer](../../applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md) Before attempting to run a Docker container in an offline environment, make sure you know the steps to successfully download and use the container. For example:
Access is limited to customers that meet the following requirements:
> * You will only see the option to purchase a commitment tier if you have been approved by Microsoft. > * Pricing details are for example only.
- :::image type="content" source="media/offline-container-signup.png" alt-text="A screenshot showing resource creation on the Azure portal." lightbox="media/offline-container-signup.png":::
- 3. Select **Review + Create** at the bottom of the page. Review the information, and select **Create**.
-## Gather required parameters
-
-There are three primary parameters for all Cognitive Services' containers that are required. The end-user license agreement (EULA) must be present with a value of *accept*. Additionally, both an endpoint URL and API key are needed when you first run the container, to configure it for disconnected usage.
-
-You can find the key and endpoint on the **Key and endpoint** page for your resource.
-
-> [!IMPORTANT]
-> You will only use your key and endpoint to configure the container to be run in a disconnected environment. After you configure the container, you won't need them to send API requests. Store them securely, for example, using Azure Key Vault. Only one key is necessary for this process.
-
-## Download a Docker container with `docker pull`
-
-After you have a license file, download the Docker container you have approval to run in a disconnected environment. For example:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:latest
-```
-
-## Configure the container to be run in a disconnected environment
-
-Now that you've downloaded your container, you'll need to run the container with the `DownloadLicense=True` parameter in your `docker run` command. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container. Please do not rename or modify the license file as this will prevent the container from running successfully.
-
-> [!IMPORTANT]
->
-> * [**Translator container only**](../translator/containers/translator-how-to-install-container.md):
-> * You must include a parameter to download model files for the [languages](../translator/language-support.md) you want to translate. For example: `-e Languages=en,es`
-> * The container will generate a `docker run` template that you can use to run the container, containing parameters you will need for the downloaded models and configuration file. Make sure you save this template.
-
-The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-| Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| `{LICENSE_MOUNT}` | The path where the license will be downloaded, and mounted. | `/host/license:/path/to/license/directory` |
-| `{ENDPOINT_URI}` | The endpoint for authenticating your service request. You can find it on your resource's **Key and endpoint** page, on the Azure portal. | `https://<your-custom-subdomain>.cognitiveservices.azure.com` |
-| `{API_KEY}` | The key for your Text Analytics resource. You can find it on your resource's **Key and endpoint** page, on the Azure portal. |`xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`|
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 \
--v {LICENSE_MOUNT} \
-{IMAGE} \
-eula=accept \
-billing={ENDPOINT_URI} \
-apikey={API_KEY} \
-DownloadLicense=True \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-```
+4. See the following documentation for steps on downloading and configuring the container for disconnected usage:
-After you've configured the container, use the next section to run the container in your environment with the license, and appropriate memory and CPU allocations.
+ * [Computer Vision - Read](../computer-vision/computer-vision-how-to-install-containers.md#run-the-container-disconnected-from-the-internet)
+ * [Language Understanding (LUIS)](../LUIS/luis-container-howto.md#run-the-container-disconnected-from-the-internet)
+ * [Text Translation (Standard)](../translator/containers/translator-disconnected-containers.md)
+   * [Form Recognizer](../../applied-ai-services/form-recognizer/containers/form-recognizer-disconnected-containers.md)
-## Run the container in a disconnected environment
-
-> [!IMPORTANT]
-> If you're using the Translator, Neural text-to-speech, or Speech-to-text containers, read the **Additional parameters** section below for information on commands or additional parameters you will need to use.
-
-Once the license file has been downloaded, you can run the container in a disconnected environment. The following example shows the formatting of the `docker run` command you'll use, with placeholder values. Replace these placeholder values with your own values.
-
-Wherever the container is run, the license file must be mounted to the container and the location of the license folder on the container's local filesystem must be specified with `Mounts:License=`. An output mount must also be specified so that billing usage records can be written.
-
-Placeholder | Value | Format or example |
-|-|-||
-| `{IMAGE}` | The container image you want to use. | `mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
- `{MEMORY_SIZE}` | The appropriate size of memory to allocate for your container. | `4g` |
-| `{NUMBER_CPUS}` | The appropriate number of CPUs to allocate for your container. | `4` |
-| `{LICENSE_MOUNT}` | The path where the license will be located and mounted. | `/host/license:/path/to/license/directory` |
-| `{OUTPUT_PATH}` | The output path for logging [usage records](#usage-records). | `/host/output:/path/to/output/directory` |
-| `{CONTAINER_LICENSE_DIRECTORY}` | Location of the license folder on the container's local filesystem. | `/path/to/license/directory` |
-| `{CONTAINER_OUTPUT_DIRECTORY}` | Location of the output folder on the container's local filesystem. | `/path/to/output/directory` |
-
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \ --v {OUTPUT_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY}
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-### Additional parameters and commands
-
-See the following sections for additional parameters and commands you may need to run the container.
-
-#### Translator container
-
-If you're using the [Translator container](../translator/containers/translator-how-to-install-container.md), you'll need to add parameters for the downloaded translation models and container configuration. These values are generated and displayed in the container output when you [configure the container](#configure-the-container-to-be-run-in-a-disconnected-environment) as described above. For example:
-
-```bash
--e MODELS= /path/to/model1/, /path/to/model2/--e TRANSLATORSYSTEMCONFIG=/path/to/model/config/translatorsystemconfig.json
-```
+ **Speech service**
-#### Speech containers
+ * [Speech-to-Text](../speech-service/speech-container-howto.md?tabs=stt#run-the-container-disconnected-from-the-internet)
+ * [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt#run-the-container-disconnected-from-the-internet-1)
+ * [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts#run-the-container-disconnected-from-the-internet-2)
-# [Speech-to-text](#tab/stt)
+ **Language service**
-The [Speech-to-Text](../speech-service/speech-container-howto.md?tabs=stt) container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
-
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory's ownership is set to `user:group nonroot:nonroot` before running the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
-
-# [Neural Text-to-Speech](#tab/ntts)
-
-The [Neural Text-to-Speech](../speech-service/speech-container-howto.md?tabs=ntts) container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
-
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory's ownership is set to `user:group nonroot:nonroot` before running the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
-
-# [Custom Speech-to-Text](#tab/cstt)
-
-In order to prepare and configure the Custom Speech-to-Text container you will need two separate speech resources:
-
-1. A regular Azure Speech Service resource which is either configured to use a "**S0 - Standard**" pricing tier or a "**Speech to Text (Custom)**" commitment tier pricing plan. This will be used to train, download, and configure your custom speech models for use in your container.
-1. An Azure Speech Service resource which is configured to use the "**DC0 Commitment (Disconnected)**" pricing plan. This is used to download your disconnected container license file required to run the container in disconnected mode.
-
-To download all the required models into your Custom Speech-to-Text container follow the instructions for Custom Speech-to-Text containers on the [Install and run Speech containers](../speech-service/speech-container-howto.md?tabs=cstt) page and use the speech resource in step 1.
-
-After all required models have been downloaded to your host computer, you need to download the disconnected license file using the instructions in the above chapter, titled [Configure the container to be run in a disconnected environment](./disconnected-containers.md#configure-the-container-to-be-run-in-a-disconnected-environment), using the Speech resource from step 2.
-
-To run the container in disconnected mode, follow the instructions from above chapter titled [Run the container in a disconnected environment](./disconnected-containers.md#run-the-container-in-a-disconnected-environment) and add an additional `-v` parameter to mount the directory containing your custom speech model.
-
-Example for running a Custom Speech-to-Text container in disconnected mode:
-```bash
-docker run --rm -it -p 5000:5000 --memory {MEMORY_SIZE} --cpus {NUMBER_CPUS} \
--v {LICENSE_MOUNT} \
--v {OUTPUT_PATH} \
--v {MODEL_PATH} \
-{IMAGE} \
-eula=accept \
-Mounts:License={CONTAINER_LICENSE_DIRECTORY} \
-Mounts:Output={CONTAINER_OUTPUT_DIRECTORY}
-```
-
-The [Custom Speech-to-Text](../speech-service/speech-container-howto.md?tabs=cstt) container provides a default directory for writing the license file and billing log at runtime. The default directories are /license and /output respectively.
-
-When you're mounting these directories to the container with the `docker run -v` command, make sure the local machine directory's ownership is set to `user:group nonroot:nonroot` before running the container.
-
-Below is a sample command to set file/directory ownership.
-
-```bash
-sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PATH_2> ...
-```
--
+ * [Sentiment Analysis](../language-service/sentiment-opinion-mining/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+ * [Key Phrase Extraction](../language-service/key-phrase-extraction/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+ * [Language Detection](../language-service/language-detection/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+
## Usage records
cognitive-services Create Account Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-terraform.md
+
+ Title: 'Quickstart: Create an Azure Cognitive Services resource using Terraform'
+description: 'In this article, you create an Azure Cognitive Services resource using Terraform'
+keywords: cognitive services, cognitive solutions, cognitive intelligence, cognitive artificial intelligence
+++ Last updated : 3/29/2023+++++
+# Quickstart: Create an Azure Cognitive Services resource using Terraform
+
+This article shows how to create a [Cognitive Services account](/azure/cognitive-services/cognitive-services-apis-create-account) using [Terraform](/azure/developer/terraform/quickstart-configure).
+
+Azure Cognitive Services are cloud-based artificial intelligence (AI) services that help developers build cognitive intelligence into applications without having direct AI or data science skills or knowledge. They are available through REST APIs and client library SDKs in popular development languages. Azure Cognitive Services enables developers to easily add cognitive features into their applications with cognitive solutions that can see, hear, speak, and analyze.
++
+In this article, you learn how to:
+
+> [!div class="checklist"]
+> * Create a random pet name for the Azure resource group name using [random_pet](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/pet)
+> * Create an Azure resource group using [azurerm_resource_group](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group)
+> * Create a random string using [random_string](https://registry.terraform.io/providers/hashicorp/random/latest/docs/resources/string)
+> * Create a Cognitive Services account using [azurerm_cognitive_account](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cognitive_account)
++
+## Prerequisites
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-cognitive-services-account/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `main.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/main.tf)]
+
+1. Create a file named `outputs.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/outputs.tf)]
+
+1. Create a file named `providers.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/providers.tf)]
+
+1. Create a file named `variables.tf` and insert the following code:
+
+ [!code-terraform[master](~/terraform_samples/quickstart/101-cognitive-services-account/variables.tf)]
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
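The includes above expand to the standard Terraform workflow commands. As a hedged sketch of what they run (the plan file name is only a convention, not a requirement):

```bash
# Initialize the working directory and download the azurerm and random providers
terraform init -upgrade

# Create an execution plan and persist it to a file
terraform plan -out main.tfplan

# Apply exactly the persisted plan; applying a saved plan doesn't prompt for confirmation
terraform apply main.tfplan
```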
+## Verify the results
+
+#### [Azure CLI](#tab/azure-cli)
+
+1. Get the name of the Azure resource group in which the Cognitive Services account was created.
+
+ ```console
+ resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Cognitive Services account name.
+
+ ```console
+ azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
+ ```
+
+1. Run [az cognitiveservices account show](/cli/azure/cognitiveservices/account#az-cognitiveservices-account-show) to show the Cognitive Services account you created in this article.
+
+ ```azurecli
+ az cognitiveservices account show --name $azurerm_cognitive_account_name \
+ --resource-group $resource_group_name
+ ```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+1. Get the name of the Azure resource group in which the Cognitive Services account was created.
+
+ ```console
+ $resource_group_name=$(terraform output -raw resource_group_name)
+ ```
+
+1. Get the Cognitive Services account name.
+
+ ```console
+ $azurerm_cognitive_account_name=$(terraform output -raw azurerm_cognitive_account_name)
+ ```
+
+1. Run [Get-AzCognitiveServicesAccount](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccount) to display information about the new service.
+
+ ```azurepowershell
+ Get-AzCognitiveServicesAccount -ResourceGroupName $resource_group_name `
+ -Name $azurerm_cognitive_account_name
+ ```
+++
+## Clean up resources
++
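The include above expands to the teardown steps; a minimal sketch, assuming the same working directory and plan-file convention as above:

```bash
# Plan the destroy step, save it, then apply the saved destroy plan
terraform plan -destroy -out main.destroy.tfplan
terraform apply main.destroy.tfplan
```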
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Recover deleted Cognitive Services resources](manage-resources.md)
cognitive-services Tag Utterances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/tag-utterances.md
In CLU, use Azure OpenAI to suggest utterances to add to your project using GPT
In the Data Labeling page:

1. Click on the **Suggest utterances** button. A pane will open up on the right side prompting you to select your Azure OpenAI resource and deployment.
-2. On selection of an Azure OpenAI resource, click **Connect**, which allows your Language resource to have direct access to your Azure OpenAI resource. It assigns your Language resource the role of `Cognitive Services User` to your Azure OpenAI resource, which allows your current Language resource to have access to Azure OpenAI's service.
+2. On selection of an Azure OpenAI resource, click **Connect**, which allows your Language resource to have direct access to your Azure OpenAI resource. This assigns the `Cognitive Services User` role to your Language resource on your Azure OpenAI resource, allowing your Language resource to access the Azure OpenAI service. If the connection fails, follow [these steps](#add-required-configurations-to-azure-openai-resource) to add the required role to your Azure OpenAI resource manually.
3. Once the resource is connected, select the deployment. The recommended model for the Azure OpenAI deployment is `text-davinci-002`.
4. Select the intent you'd like to get suggestions for. Make sure the intent you have selected has at least five saved utterances to be enabled for utterance suggestions. The suggestions provided by Azure OpenAI are based on the **most recent utterances** you've added for that intent.
5. Click on **Generate utterances**. Once complete, the suggested utterances will show up with a dotted line around them, with the note *Generated by AI*. Those suggestions need to be accepted or rejected. Accepting a suggestion simply adds it to your project, as if you had added it yourself. Rejecting it deletes the suggestion entirely. Only accepted utterances will be part of your project and used for training or testing. You can accept or reject by clicking on the green check or red cancel buttons beside each utterance. You can also use the `Accept all` and `Reject all` buttons in the toolbar.
In the Data Labeling page:
Using this feature entails a charge to your Azure OpenAI resource for a similar number of tokens to the suggested utterances generated. Details for Azure OpenAI's pricing can be found [here](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
+### Add required configurations to Azure OpenAI resource
+
+If connecting your Language resource to an Azure OpenAI resource fails, follow these steps:
+
+Enable identity management for your Language resource using the following options:
+
+### [Azure portal](#tab/portal)
+
+Your Language resource must have a managed identity. To enable it using the [Azure portal](https://portal.azure.com/):
+
+1. Go to your Language resource.
+2. From the left-hand menu, under the **Resource Management** section, select **Identity**.
+3. On the **System assigned** tab, set **Status** to **On**.
+
+### [Language Studio](#tab/studio)
+
+Your Language resource must have a managed identity. To enable it using [Language Studio](https://aka.ms/languageStudio):
+
+1. Click the settings icon in the top right corner of the screen
+2. Select **Resources**
+3. Select the check box **Managed Identity** for your Language resource.
+++
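If you prefer scripting to the portal or Language Studio, recent Azure CLI versions expose identity subcommands for Cognitive Services accounts. A hedged sketch (resource names are placeholders):

```bash
# Enable a system-assigned managed identity on the Language resource
az cognitiveservices account identity assign \
  --name <your-language-resource> \
  --resource-group <your-resource-group>
```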
+After enabling managed identity, assign the `Cognitive Services User` role on your Azure OpenAI resource to your Language resource's managed identity:
+
+ 1. Go to the [Azure portal](https://portal.azure.com/) and navigate to your Azure OpenAI resource.
+ 2. Click on the Access Control (IAM) tab on the left.
+ 3. Click on Add > Add role assignment.
+ 4. Select "Job function roles" and click Next.
+ 5. Select `Cognitive Services User` from the list of roles and click Next.
+ 6. Select Assign access to "Managed identity" and click on "Select members".
+ 7. Under "Managed identity" select "Language".
+ 8. Search for your resource and select it. Then click the **Select** button at the bottom of the pane, followed by **Next**, to complete the process.
+ 9. Review the details and click on Review + Assign.
++
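As a scripted alternative to the portal steps above, the same role assignment can be made with Azure CLI; a hedged sketch in which all resource names are placeholders:

```bash
# Look up the principal ID of the Language resource's system-assigned identity
principal_id=$(az cognitiveservices account show \
  --name <your-language-resource> \
  --resource-group <your-resource-group> \
  --query identity.principalId --output tsv)

# Resolve the Azure OpenAI resource ID to use as the assignment scope
openai_id=$(az cognitiveservices account show \
  --name <your-openai-resource> \
  --resource-group <your-resource-group> \
  --query id --output tsv)

# Grant the identity the Cognitive Services User role on the Azure OpenAI resource
az role assignment create \
  --assignee "$principal_id" \
  --role "Cognitive Services User" \
  --scope "$openai_id"
```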
+After a few minutes, refresh the Language Studio and you will be able to successfully connect to Azure OpenAI.
+ ## Next Steps * [Train Model](./train-model.md)
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/use-containers.md
Previously updated : 01/18/2023 Last updated : 04/11/2023 keywords: on-premises, Docker, container, natural language processing
Use the host, `http://localhost:5000`, for container APIs.
[!INCLUDE [Container's API documentation](../../../../../includes/cognitive-services-containers-api-documentation.md)] +
+## Run the container disconnected from the internet
++
## Stop the container

[!INCLUDE [How to stop the container](../../../../../includes/cognitive-services-containers-stop.md)]
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/use-containers.md
Previously updated : 01/18/2023 Last updated : 04/11/2023 keywords: on-premises, Docker, container
Use the host, `http://localhost:5000`, for container APIs.
[!INCLUDE [Container's API documentation](../../../../../includes/cognitive-services-containers-api-documentation.md)]
+## Run the container disconnected from the internet
++
## Stop the container

[!INCLUDE [How to stop the container](../../../../../includes/cognitive-services-containers-stop.md)]
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/use-containers.md
Previously updated : 01/18/2023 Last updated : 04/11/2023 keywords: on-premises, Docker, container, sentiment analysis, natural language processing
Use the host, `http://localhost:5000`, for container APIs.
[!INCLUDE [Container's API documentation](../../../../../includes/cognitive-services-containers-api-documentation.md)]
+## Run the container disconnected from the internet
++
## Stop the container

[!INCLUDE [How to stop the container](../../../../../includes/cognitive-services-containers-stop.md)]
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/language-support.md
Use this article to learn which natural languages are supported by document and
# [Document summarization](#tab/document-summarization)
-## Languages supported by extractive document summarization
-
-| Language | Language code | Notes |
-|--|--|--|
-| Chinese-Simplified | `zh-hans` | `zh` also accepted |
-| English | `en` | |
-| French | `fr` | |
-| German | `de` | |
-| Italian | `it` | |
-| Japanese | `ja` | |
-| Korean | `ko` | |
-| Spanish | `es` | |
-| Portuguese (Brazil) | `pt-BR` | |
-| Portuguese (Portugal) | `pt-PT` | `pt` also accepted |
-
-## Languages supported by abstractive document summarization (preview)
+## Languages supported by extractive and abstractive document summarization
| Language | Language code | Notes |
|--|--|--|
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/whats-new.md
# What's new in Azure Cognitive Service for Language? Azure Cognitive Service for Language is updated on an ongoing basis. To stay up-to-date with recent developments, this article provides you with information about new releases and features.+
+## April 2023
+
+* You can now use Azure OpenAI to automatically label or generate data during authoring. Learn more with the links below.
+ * Auto-label your documents in [Custom text classification](./custom-text-classification/how-to/use-autolabeling.md) or [Custom named entity recognition](./custom-named-entity-recognition/how-to/use-autolabeling.md).
+ * Generate suggested utterances in [Conversational language understanding](./conversational-language-understanding/how-to/tag-utterances.md#suggest-utterances-with-azure-openai).
+ ## March 2023 * New model version ('2023-01-01-preview') for Personally Identifiable Information (PII) detection with quality updates and new [language support](./personally-identifiable-information/language-support.md)
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
Title: Use direct routing to connect existing telephony service description: Learn how to add a Session Border Controller and configure voice routing for Azure Communication Services direct routing. -+ Previously updated : 05/26/2022 Last updated : 03/11/2023
For information about whether Azure Communication Services direct routing is the
## Connect the SBC with Azure Communication Services
-### Configure using Azure portal
-1. In the left navigation, select Direct routing under Voice Calling - PSTN and then select Configure from the Session Border Controller tab.
+### Validate domain ownership
+Follow [these instructions](../../how-tos/telephony/domain-validation.md) to validate domain ownership of your SBC.
-2. Enter a fully qualified domain name and signaling port for the SBC.
- - SBC certificate must match the name; wildcard certificates are supported.
- - The *.onmicrosoft.com domain can't be used for the FQDN of the SBC.
-
- For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md).
-
- :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Screenshot of Adding Session Border Controller.":::
-
-3. When you're done, select Next.
-
- If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller. Use your SBC monitoring/logs to validate the connection.
+## Configure outbound voice routing
+Refer to [Voice routing quickstart](../../quickstarts/telephony/voice-routing-sdk-config.md) to add an SBC and configure outbound voice routing rules.
## Outbound voice routing considerations Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific SBC based on the called number pattern.
-When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource.
+When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource.
- If there's a match, the call goes through the direct routing trunk. - If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method. - If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked. - If the `alternateCallerId` matches a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure. -- If `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail.
+- If the `alternateCallerId` parameter doesn't match any of the purchased numbers, the call fails.
-The diagram below demonstrates the Azure Communication Services voice routing logic.
+The diagram demonstrates the Azure Communication Services voice routing logic.
:::image type="content" source="../media/direct-routing-provisioning/voice-routing-diagram.png" alt-text="Diagram of outgoing voice routing flowchart.":::
The following examples display voice routing in a call flow.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, then when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If neither SBC is available, the call is dropped. ### Two routes example:
-If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority will be tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the call is dropped.
+Suppose you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern using `sbc3.contoso.biz` and `sbc4.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority is tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the call is dropped.
### Three routes example:
-If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`, and created a third route with `^+1(\d[10])$` with `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 nor sbc2 are unavailable, the route with lower priority will be tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of a second route are available, the third route will be tried. If sbc5 is also not available, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`, and it isn't available, the call is dropped.
+Suppose you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, then created a second route with the same pattern using `sbc3.contoso.biz` and `sbc4.contoso.biz`, and then created a third route with the pattern `^\+1(\d{10})$` using `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority is tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the third route is tried. If sbc5 is also not available, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`, and if it isn't available, the call is dropped.
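To sanity-check a route pattern before saving it, you can test it locally. A quick sketch using GNU `grep -P` with placeholder phone numbers:

```bash
pattern='^\+1(425|206)(\d{7})$'

# Matches: handled by the first two routes
echo '+14255550123' | grep -P "$pattern"

# No match: falls through to the ten-digit route (sbc5)
echo '+13215550123' | grep -P "$pattern"
```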
> [!NOTE] > Failover to the next SBC in voice routing works only for response codes 408, 503, and 504.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
> [!NOTE] > In all the examples, if the dialed number does not match the pattern, the call will be dropped unless there is a purchased number exist for the communication resource, and this number was used as `alternateCallerId` in the application.
-## Configure outbound voice routing
-
-### Configure using Azure portal
--
-Give your voice route a name, specify the number pattern using regular expressions, and select SBC for that pattern.
-Here are some examples of basic regular expressions:
-- `^\+\d+$` - matches a telephone number with one or more digits that start with a plus
-- `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1`
-- `^\+1(425|206)(\d{7})$` - matches a telephone number that starts with `+1425` or with `+1206` followed by seven digits
-- `^\+0?1234$` - matches both `+01234` and `+1234` telephone numbers.
-
-For more information about regular expressions, see [.NET regular expressions overview](/dotnet/standard/base-types/regular-expressions).
-
-You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row will have higher priority, and if all SBCs associated with that row aren't available next row will be selected. This way, you create complex routing scenarios.
- ## Managing inbound calls-
-For general inbound call management use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listen for and manage inbound calls placed to a phone number or received via ACS direct routing.
-Omnichannel for Customer Service customers please refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
-
-## Delete direct routing configuration
-
-### Delete using Azure portal
-
-#### To delete a voice route:
-1. In the left navigation, go to Direct routing under Voice Calling - PSTN and then select the Voice Routes tab.
-1. Select route or routes you want to delete using a checkbox.
-1. Select Remove.
-
-#### To delete an SBC:
-1. In the left navigation, go to Direct routing under Voice Calling - PSTN.
-1. On a Session Border Controllers tab, select Configure.
-1. Clear the FQDN and port fields for the SBC that you want to remove, select Next.
-1. On a Voice Routes tab, review voice routing configuration, make changes if needed. select Save.
-
-> [!NOTE]
-> When you remove SBC associated with a voice route, you can choose a different SBC for the route on the Voice Routes tab. The voice route without an SBC will be deleted.
+For general inbound call management, use [Call Automation SDKs](../call-automation/incoming-call-notification.md) to build an application that listens for and manages inbound calls placed to a phone number or received via ACS direct routing.
+If you use Omnichannel for Customer Service, refer to [these instructions](/dynamics365/customer-service/voice-channel-inbound-calling).
## Next steps
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call Recording supports multiple media outputs and content types to address your
### Video
-| Channel Type | Content Format | Resolution | Sampling Rate | Output | Description |
-| :-- | :-- | :-- | :-- | :-- | :-- |
-| mixed | mp4 | 1920x1080, 16 FPS (frames per second) | 16 kHz | single file, single channel | mixed video in a default 3x3 (most active speakers) tile arrangement with display name support |
+| Channel Type | Content Format | Resolution | Sampling Rate | Bit rate | Data rate | Output | Description |
+| :-- | :-- | :-- | :-- | :-- | :-- | :-- | :-- |
+| mixed | mp4 | 1920x1080, 16 FPS (frames per second) | 16 kHz | 1 Mbps | 7.5 MB/min* | single file, single channel | mixed video in a default 3x3 (most active speakers) tile arrangement with display name support |
### Audio
-| Channel Type | Content Format | Sampling Rate | Output | Description |
-| :-- | :- | :-- | :- | :- |
-| mixed | mp3 & wav | 16 kHz | single file, single channel | mixed audio of all participants |
-| unmixed | wav | 16 kHz | single file, up to 5 wav channels | unmixed audio, one participant per channel, up to five channels |
+| Channel Type | Content Format | Sampling Rate | Bit rate | Data rate | Output | Description |
+| :-- | :-- | :-- | :-- | :-- | :-- | :-- |
+| mixed | mp3 | 16 kHz | 48 kbps | 0.36 MB/min* | single file, single channel | mixed audio of all participants |
+| mixed | wav | 16 kHz | 256 kbps | 1.92 MB/min | single file, single channel | mixed audio of all participants |
+| unmixed | wav | 16 kHz | 256 kbps | 1.92 MB/min* per channel | single file, up to 5 wav channels | unmixed audio, one participant per channel, up to five channels |
+> [!NOTE]
+> MP3 and MP4 formats use lossy compression, which results in a variable bit rate; the data rate values in the tables above therefore reflect the theoretical maximum. The WAV format is uncompressed and its bit rate is fixed, so its data rate calculations are exact.
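A quick check of the data-rate figures above using plain shell arithmetic (the mp4 figure is the theoretical maximum noted in the tables):

```bash
# WAV: 256 kbit/s / 8 bits per byte * 60 s / 1,000,000 = MB/min
echo "scale=2; 256000 / 8 * 60 / 1000000" | bc    # prints 1.92

# Mixed mp4 video at 1 Mbit/s
echo "scale=2; 1000000 / 8 * 60 / 1000000" | bc   # prints 7.50
```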
## Get full control over your recordings with our Call Recording APIs
A `recordingId` is returned when recording is started, which is then used for fo
Call Recording uses [Azure Event Grid](../../../event-grid/event-schema-communication-services.md) to provide you with notifications related to media and metadata.
-
> [!NOTE]
> Azure Communication Services provides short term media storage for recordings. **Recordings will be available to download for 48 hours.** After 48 hours, recordings will no longer be available.
-An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (for example, meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
+
+An Event Grid notification `Microsoft.Communication.RecordingFileStatusUpdated` is published when a recording is ready for retrieval, typically a few minutes after the recording process has completed (for example, meeting ended, recording stopped). Recording event notifications include `contentLocation` and `metadataLocation`, which are used to retrieve both recorded media and a recording metadata file.
### Notification Schema Reference
communication-services User Facing Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/user-facing-diagnostics.md
Last updated 10/21/2021
+zone_pivot_groups: acs-plat-web-ios-android-windows
# User Facing Diagnostics
-When working with calls in Azure Communication Services, problems may arise that cause issues for your customers. To help with this scenario, we have a feature called "User Facing Diagnostics" that you can use to examine various properties of a call to determine what the issue might be.
-
-> [!NOTE]
-> User-facing diagnostics is currently supported only for our JavaScript / Web SDK.
-
-## Accessing diagnostics
-
-User-facing diagnostics is an extended feature of the core `Call` API and allows you to diagnose an active call.
-
-```js
-const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
-```
+When working with calls in Azure Communication Services, problems may arise that cause issues for your customers. To help with this scenario, we have a feature called "User Facing Diagnostics" that you can use to examine various properties of a call and determine what the issue might be.
## Diagnostic values
The following user-facing diagnostics are available:
| Name | Description | Possible values | Use cases | Mitigation steps |
| -- | -- | -- | -- | -- |
-| noNetwork | There is no network available. | - Set to`True` when a call fails to start because there is no network available. <br/> - Set to `False` when there are ICE candidates present. | Device is not connected to a network. | Ensure that the call has a reliable internet connection that can sustain a voice call. See the [Network optimization](network-requirements.md#network-optimization) section for more details. |
-| networkRelaysNotReachable | Problems with a network. | - Set to`True` when the network has some constraint that is not allowing you to reach Azure Communication Services relays. <br/> - Set to `False` upon making a new call. | During a call when the WiFi signal goes on and off. | Ensure that firewall rules and network routing allow client to reach Microsoft turn servers. See the [Firewall configuration](network-requirements.md#firewall-configuration) section for more details. |
-| networkReconnect | The connection was lost and we are reconnecting to the network. | - Set to`Bad` when the network is disconnected <br/> - Set to `Poor`when the media transport connectivity is lost <br/> - Set to `Good` when a new session is connected. | Low bandwidth, no internet | Ensure that the call has a reliable internet connection that can sustain a voice call. See the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section for more details. |
-| networkReceiveQuality | An indicator regarding incoming stream quality. | - Set to`Bad` when there is a severe problem with receiving the stream. <br/> - Set to `Poor` when there is a mild problem with receiving the stream. <br/> - Set to `Good` when there is no problem with receiving the stream. | Low bandwidth | Ensure that the call has a reliable internet connection that can sustain a voice call. See the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section for more details. Also consider suggesting to end user to turn off their camera to conserve available internet bandwidth. |
-| networkSendQuality | An indicator regarding outgoing stream quality. | - Set to`Bad` when there is a severe problem with sending the stream. <br/> - Set to `Poor` when there is a mild problem with sending the stream. <br/> - Set to `Good` when there is no problem with sending the stream. | Low bandwidth | Ensure that the call has a reliable internet connection that can sustain a voice call. See the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section for more details. Also consider suggesting to end user to turn off their camera to conserve available internet bandwidth. |
+| noNetwork | There's no network available. | - Set to `True` when a call fails to start because there's no network available. <br/> - Set to `False` when there are ICE candidates present. | Device isn't connected to a network. | Ensure that the call has a reliable internet connection that can sustain a voice call. For more information, see the [Network optimization](network-requirements.md#network-optimization) section. |
+| networkRelaysNotReachable | Problems with a network. | - Set to `True` when the network has some constraint that isn't allowing you to reach Azure Communication Services relays. <br/> - Set to `False` upon making a new call. | During a call when the WiFi signal goes on and off. | Ensure that firewall rules and network routing allow client to reach Microsoft turn servers. For more information, see the [Firewall configuration](network-requirements.md#firewall-configuration) section. |
+| networkReconnect | The connection was lost and we are reconnecting to the network. | - Set to `Bad` when the network is disconnected <br/> - Set to `Poor` when the media transport connectivity is lost <br/> - Set to `Good` when a new session is connected. | Low bandwidth, no internet | Ensure that the call has a reliable internet connection that can sustain a voice call. For more information, see the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section. |
+| networkReceiveQuality | An indicator regarding incoming stream quality. | - Set to `Bad` when there's a severe problem with receiving the stream. <br/> - Set to `Poor` when there's a mild problem with receiving the stream. <br/> - Set to `Good` when there's no problem with receiving the stream. | Low bandwidth | Ensure that the call has a reliable internet connection that can sustain a voice call. For more information, see the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section. Suggest that the end user turn off their camera to conserve available internet bandwidth. |
+| networkSendQuality | An indicator regarding outgoing stream quality. | - Set to `Bad` when there's a severe problem with sending the stream. <br/> - Set to `Poor` when there's a mild problem with sending the stream. <br/> - Set to `Good` when there's no problem with sending the stream. | Low bandwidth | Ensure that the call has a reliable internet connection that can sustain a voice call. For more information, see the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section. Also, suggest that the end user turn off their camera to conserve available internet bandwidth. |
### Audio values

| Name | Description | Possible values | Use cases | Mitigation steps |
| -- | -- | -- | -- | -- |
-| noSpeakerDevicesEnumerated | There is no audio output device (speaker) on the user's system. | - Set to`True` when there are no speaker devices on the system, and speaker selection is supported. <br/> - Set to `False` when there is a least 1 speaker device on the system, and speaker selection is supported. | All speakers are unplugged | When value set to`True` consider giving visual notification to end user that their current call session does not have any speakers available. |
-| speakingWhileMicrophoneIsMuted | Speaking while being on mute. | - Set to`True` when local microphone is muted and the local user is speaking. <br/> - Set to `False` when local user either stops speaking, or unmutes the microphone. <br/> \* Note: Currently, this option isn't supported on Safari because the audio level samples are taken from WebRTC stats. | During a call, mute your microphone and speak into it. | When value set to`True` consider giving visual notification to end user that they might be talking and not realizing that their audio is muted. |
-| noMicrophoneDevicesEnumerated | No audio capture devices (microphone) on the user's system | - Set to`True` when there are no microphone devices on the system. <br/> - Set to `False` when there is at least 1 microphone device on the system. | All microphones are unplugged during the call. | When value set to`True` consider giving visual notification to end user that their current call session does not have a microphone. See how to [enable microphone from device manger](../best-practices.md#plug-in-microphone-or-enable-microphone-from-device-manager-when-azure-communication-services-call-in-progress) for more details. |
-| microphoneNotFunctioning | Microphone is not functioning. | - Set to`True` when we fail to start sending local audio stream because the microphone device may have been disabled in the system or it is being used by another process. This UFD takes about 10 seconds to get raised. <br/> - Set to `False` when microphone starts to successfully send audio stream again. | No microphones available, microphone access disabled in a system | When value set to`True` give visual notification to end user that there is a problem with their microphone. |
-| microphoneMuteUnexpectedly | Microphone is muted | - Set to`True` when microphone enters muted state unexpectedly. <br/> - Set to `False` when microphone starts to successfully send audio stream | Microphone is muted from the system. Most cases happen when user is on an Azure Communication Services call on a mobile device and a phone call comes in. In most cases the operating system will mute the Azure Communication Services call so a user can answer the phone call. | When value is set to`True` give visual notification to end user that their call was muted because a phone call came in. See how to best handle [OS muting an Azure Communication Services call](../best-practices.md#handle-os-muting-call-when-phone-call-comes-in) section for more details. |
-| microphonePermissionDenied | There is low volume from device or itΓÇÖs almost silent on macOS. | - Set to`True` when audio permission is denied by system settings (audio). <br/> - Set to `False` on successful stream acquisition. <br/> Note: This diagnostic only works on macOS. | Microphone permissions are disabled in the Settings. | When value is set to`True` give visual notification to end user that they did not enable permission to use microphone for an Azure Communication Services call. |
+| noSpeakerDevicesEnumerated | There's no audio output device (speaker) on the user's system. | - Set to `True` when there are no speaker devices on the system, and speaker selection is supported. <br/> - Set to `False` when there's at least one speaker device on the system, and speaker selection is supported. | All speakers are unplugged | When value set to `True`, consider giving visual notification to end user that their current call session doesn't have any speakers available. |
+| speakingWhileMicrophoneIsMuted | Speaking while being on mute. | - Set to `True` when local microphone is muted and the local user is speaking. <br/> - Set to `False` when local user either stops speaking, or unmutes the microphone. <br/> \* Note: Currently, this option isn't supported on Safari because the audio level samples are taken from WebRTC stats. | During a call, mute your microphone and speak into it. | When value set to `True` consider giving visual notification to end user that they might be talking and not realizing that their audio is muted. |
+| noMicrophoneDevicesEnumerated | No audio capture devices (microphone) on the user's system | - Set to `True` when there are no microphone devices on the system. <br/> - Set to `False` when there's at least one microphone device on the system. | All microphones are unplugged during the call. | When value set to `True` consider giving visual notification to end user that their current call session doesn't have a microphone. For more information, see the [enable microphone from device manager](../best-practices.md#plug-in-microphone-or-enable-microphone-from-device-manager-when-azure-communication-services-call-in-progress) section. |
+| microphoneNotFunctioning | Microphone isn't functioning. | - Set to `True` when we fail to start sending local audio stream because the microphone device may have been disabled in the system or it is being used by another process. This UFD takes about 10 seconds to get raised. <br/> - Set to `False` when microphone starts to successfully send audio stream again. | No microphones available, microphone access disabled in a system | When value set to `True` give visual notification to end user that there's a problem with their microphone. |
+| microphoneMuteUnexpectedly | Microphone is muted | - Set to `True` when microphone enters muted state unexpectedly. <br/> - Set to `False` when microphone starts to successfully send audio stream | Microphone is muted from the system. Most cases happen when user is on an Azure Communication Services call on a mobile device and a phone call comes in. In most cases, the operating system mutes the Azure Communication Services call so a user can answer the phone call. | When value is set to `True`, give visual notification to end user that their call was muted because a phone call came in. For more information, see the [OS muting an Azure Communication Services call](../best-practices.md#handle-os-muting-call-when-phone-call-comes-in) section. |
+| microphonePermissionDenied | There's low volume from the device, or it's almost silent on macOS. | - Set to `True` when audio permission is denied from the system settings (audio). <br/> - Set to `False` on successful stream acquisition. <br/> Note: This diagnostic only works on macOS. | Microphone permissions are disabled in the Settings. | When value is set to `True`, give visual notification to end user that they didn't enable permission to use microphone for an Azure Communication Services call. |
### Camera values

| Name | Description | Possible values | Use cases | Mitigation steps |
| -- | -- | -- | -- | -- |
-| cameraFreeze | Camera stops producing frames for more than 5 seconds. | - Set to`True` when the local video stream is frozen. This means the remote side is seeing your video frozen on their screen or it means that the remote participants are not rendering your video on their screen. <br/> - Set to `False` when the freeze ends and users can see your video as per normal. | The Camera was lost during the call or bad network caused the camera to freeze. | When value is set to`True` consider giving notification to end user that the remote participant network might be bad - possibly suggest that they turn off their camera to conserve bandwidth. See the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section for more details on needed internet abilities for an Azure Communication Services call. |
-| cameraStartFailed | Generic camera failure. | - Set to`True` when we fail to start sending local video because the camera device may have been disabled in the system or it is being used by another process~. <br/> - Set to `False` when selected camera device successfully sends local video again. | Camera failures | When value is set to`True` give visual notification to end user that their camera failed to start. |
-| cameraStartTimedOut | Common scenario where camera is in bad state. | - Set to`True` when camera device times out to start sending video stream. <br/> - Set to `False` when selected camera device successfully sends local video again. | Camera failures | When value is set to`True` give visual notification to end user that their camera is possibly having problems. (When value is set back to `False` remove notification). |
-| cameraPermissionDenied | Camera permissions were denied in settings. | - Set to`True` when camera permission is denied by system settings (video). <br/> - Set to `False` on successful stream acquisition. <br> Note: This diagnostic only works on macOS Chrome. | Camera permissions are disabled in the settings. | When value is set to`True` give visual notification to end user that they did not enable permission to use camera for an Azure Communication Services call. |
-| cameraStoppedUnexpectedly | Camera malfunction | - Set to`True` when camera enters stopped state unexpectedly. <br/> - Set to `False` when camera starts to successfully send video stream again. | Check camera is functioning correctly. | When value is set to`True` give visual notification to end user that their camera is possibly having problems. (When value is set back to `False` remove notification). |
+| cameraFreeze | Camera stops producing frames for more than 5 seconds. | - Set to `True` when the local video stream is frozen. This diagnostic means that the remote side sees your video frozen on their screen, or that the remote participants are not rendering your video on their screen. <br/> - Set to `False` when the freeze ends and users can see your video as per normal. | The camera was lost during the call or bad network caused the camera to freeze. | When value is set to `True`, consider giving notification to end user that the remote participant network might be bad - possibly suggest that they turn off their camera to conserve bandwidth. For more information, see the [Network bandwidth requirement](network-requirements.md#network-bandwidth) section on needed internet abilities for an Azure Communication Services call. |
+| cameraStartFailed | Generic camera failure. | - Set to `True` when we fail to start sending local video because the camera device may have been disabled in the system or it is being used by another process. <br/> - Set to `False` when selected camera device successfully sends local video again. | Camera failures | When value is set to `True`, give visual notification to end user that their camera failed to start. |
+| cameraStartTimedOut | Common scenario where camera is in bad state. | - Set to `True` when the camera device times out while starting to send the video stream. <br/> - Set to `False` when selected camera device successfully sends local video again. | Camera failures | When value is set to `True`, give visual notification to end user that their camera is possibly having problems. (When value is set back to `False`, remove notification). |
+| cameraPermissionDenied | Camera permissions were denied in settings. | - Set to `True` when camera permission is denied from the system settings (video). <br/> - Set to `False` on successful stream acquisition. <br> Note: This diagnostic only works on macOS Chrome. | Camera permissions are disabled in the settings. | When value is set to `True`, give visual notification to end user that they did not enable permission to use camera for an Azure Communication Services call. |
+| cameraStoppedUnexpectedly | Camera malfunction | - Set to `True` when camera enters stopped state unexpectedly. <br/> - Set to `False` when camera starts to successfully send video stream again. | Check camera is functioning correctly. | When value is set to `True`, give visual notification to end user that their camera is possibly having problems. (When value is set back to `False` remove notification). |
### Misc values

| Name | Description | Possible values | Use cases | Mitigation Steps |
| -- | -- | -- | :- | -- |
-| screenshareRecordingDisabled | System screen sharing was denied by preferences in Settings. | - Set to`True` when screen sharing permission is denied by system settings (sharing). <br/> - Set to `False` on successful stream acquisition. <br/> Note: This diagnostic only works on macOS.Chrome. | Screen recording is disabled in Settings. | When value is set to`True` give visual notification to end user that they did not enable permission to share their screen for an Azure Communication Services call. |
-| capturerStartFailed | System screen sharing failed. | - Set to`True` when we fail to start capturing the screen. <br/> - Set to `False` when capturing the screen can start successfully. | | When value is set to`True` give visual notification to end user that there was possibly a problem sharing their screen. (When value is set back to `False` remove notification). |
-| capturerStoppedUnexpectedly | System screen sharing malfunction | - Set to`True` when screen capturer enters stopped state unexpectedly. <br/> - Set to `False` when screen capturer starts to successfully capture again. | Check screen sharing is functioning correctly | When value is set to`True` give visual notification to end user that there possibly a problem that causes sharing their screen to stop. (When value is set back to `False` remove notification). |
-
-## User Facing Diagnostic events
--- Subscribe to the `diagnosticChanged` event to monitor when any user-facing diagnostic changes.-
-```js
-/**
- * Each diagnostic has the following data:
- * - diagnostic is the type of diagnostic, e.g. NetworkSendQuality, DeviceSpeakWhileMuted, etc...
- * - value is DiagnosticQuality or DiagnosticFlag:
- * - DiagnosticQuality = enum { Good = 1, Poor = 2, Bad = 3 }.
- * - DiagnosticFlag = true | false.
- * - valueType = 'DiagnosticQuality' | 'DiagnosticFlag'
- */
-const diagnosticChangedListener = (diagnosticInfo: NetworkDiagnosticChangedEventArgs | MediaDiagnosticChangedEventArgs) => {
- console.log(`Diagnostic changed: ` +
- `Diagnostic: ${diagnosticInfo.diagnostic}` +
- `Value: ${diagnosticInfo.value}` +
- `Value type: ${diagnosticInfo.valueType}`);
-
- if (diagnosticInfo.valueType === 'DiagnosticQuality') {
- if (diagnosticInfo.value === DiagnosticQuality.Bad) {
- console.error(`${diagnosticInfo.diagnostic} is bad quality`);
-
- } else if (diagnosticInfo.value === DiagnosticQuality.Poor) {
- console.error(`${diagnosticInfo.diagnostic} is poor quality`);
- }
-
- } else if (diagnosticInfo.valueType === 'DiagnosticFlag') {
- if (diagnosticInfo.value === true) {
- console.error(`${diagnosticInfo.diagnostic}`);
- }
- }
-};
-
-userFacingDiagnostics.network.on('diagnosticChanged', diagnosticChangedListener);
-userFacingDiagnostics.media.on('diagnosticChanged', diagnosticChangedListener);
-```
-
-## Get the latest User Facing Diagnostics
--- Get the latest diagnostic values that were raised. If a diagnostic is undefined, that is because it was never raised.-
-```js
-const latestNetworkDiagnostics = userFacingDiagnostics.network.getLatest();
-
-console.log(
- `noNetwork: ${latestNetworkDiagnostics.noNetwork.value}, ` +
- `value type = ${latestNetworkDiagnostics.noNetwork.valueType}`
-);
-
-console.log(
- `networkReconnect: ${latestNetworkDiagnostics.networkReconnect.value}, ` +
- `value type = ${latestNetworkDiagnostics.networkReconnect.valueType}`
-);
-
-console.log(
- `networkReceiveQuality: ${latestNetworkDiagnostics.networkReceiveQuality.value}, ` +
- `value type = ${latestNetworkDiagnostics.networkReceiveQuality.valueType}`
-);
-
-const latestMediaDiagnostics = userFacingDiagnostics.media.getLatest();
-
-console.log(
- `speakingWhileMicrophoneIsMuted: ${latestMediaDiagnostics.speakingWhileMicrophoneIsMuted.value}, ` +
- `value type = ${latestMediaDiagnostics.speakingWhileMicrophoneIsMuted.valueType}`
-);
-
-console.log(
- `cameraStartFailed: ${latestMediaDiagnostics.cameraStartFailed.value}, ` +
- `value type = ${latestMediaDiagnostics.cameraStartFailed.valueType}`
-);
-
-console.log(
- `microphoneNotFunctioning: ${latestMediaDiagnostics.microphoneNotFunctioning.value}, ` +
- `value type = ${latestMediaDiagnostics.microphoneNotFunctioning.valueType}`
-);
-```
+| screenshareRecordingDisabled | System screen sharing was denied from the preferences in Settings. | - Set to `True` when screen sharing permission is denied from the system settings (sharing). <br/> - Set to `False` on successful stream acquisition. <br/> Note: This diagnostic only works on macOS Chrome. | Screen recording is disabled in Settings. | When value is set to `True`, give visual notification to end user that they didn't enable permission to share their screen for an Azure Communication Services call. |
+| capturerStartFailed | System screen sharing failed. | - Set to `True` when we fail to start capturing the screen. <br/> - Set to `False` when capturing the screen can start successfully. | | When value is set to `True`, give visual notification to end user that there was possibly a problem sharing their screen. (When value is set back to `False`, remove notification). |
+| capturerStoppedUnexpectedly | System screen sharing malfunction | - Set to `True` when screen capturer enters stopped state unexpectedly. <br/> - Set to `False` when screen capturer starts to successfully capture again. | Check screen sharing is functioning correctly | When value is set to `True`, give visual notification to end user that there's possibly a problem causing their screen sharing to stop. (When value is set back to `False`, remove notification). |
+
+### Native only
++
+| Name | Description | Possible values | Use cases | Mitigation Steps |
+| -- | -- | -- | :- | -- |
+| speakerVolumeIsZero | Zero volume on a device (speaker). | - Set to `True` when the speaker volume is zero. <br/> - Set to `False` when the speaker volume isn't zero. | Not hearing audio from participants on call. | When value is set to `True`, you may have accidentally set the volume to its lowest level (zero). |
+| speakerMuted | Speaker device is muted. | - Set to `True` when the speaker device is muted. <br/> - Set to `False` when the speaker device isn't muted. | Not hearing audio from participants on call. | When value is set to `True`, you may have accidentally muted the speaker. |
+| speakerNotFunctioningDeviceInUse | Speaker is already in use. Either the device is being used in exclusive mode, or the device is being used in shared mode and the caller asked to use the device in exclusive mode. | - Set to `True` when the speaker device stream acquisition times out (audio). <br/> - Set to `False` when speaker acquisition is successful. | Not hearing audio from participants on call through speaker. | When value is set to `True`, give visual notification to end user so they can check if another application is using the speaker and try closing it. |
+| speakerNotFunctioning | Speaker isn't functioning (failed to initialize the audio device client, or the device became inactive for more than 5 seconds). | - Set to `True` when the speaker is unavailable, or the device stream acquisition times out (audio). <br/> - Set to `False` when speaker acquisition is successful. | Not hearing audio from participants on call through speaker. | Try checking the state of the speaker device. |
+| microphoneNotFunctioningDeviceInUse | Microphone is already in use. Either the device is being used in exclusive mode, or the device is being used in shared mode and the caller asked to use the device in exclusive mode. | - Set to `True` when the microphone device stream acquisition times out (audio). <br/> - Set to `False` when microphone acquisition is successful. | Your audio not reaching other participants in the call. | When value is set to `True`, give visual notification to end user so they can check if another application is using the microphone and try closing it. |
++++
communication-services Domain Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/telephony/domain-validation.md
+
+ Title: Azure Communication Services direct routing domain validation
+description: A how-to page about domain validation for direct routing.
+++++ Last updated : 03/11/2023+++++
+# Domain validation
+
+This page describes the process of domain name ownership validation. A Fully Qualified Domain Name (FQDN) consists of two parts: a host name and a domain name. For example, if your session border controller (SBC) name is `sbc1.contoso.com`, then `sbc1` is the host name, while `contoso.com` is the domain name. If there's an SBC with an FQDN of `acs.sbc1.testing.contoso.com`, `acs` is the host name, and `sbc1.testing.contoso.com` is the domain name. To use direct routing, you need to validate that you own the domain part of your FQDN.
+
+Azure Communication Services direct routing configuration consists of the following steps:
+
+- Verify domain ownership for your SBC FQDN
+- Configure SBC FQDN and port number
+- Create voice routing rules
+
+## Domain ownership validation
+
+Make sure to add and verify the domain name portion of the FQDN, and keep in mind that the `*.onmicrosoft.com` and `*.azure.com` domain names aren't supported for the SBC FQDN domain name. For example, if you have two domain names, `contoso.com` and `contoso.onmicrosoft.com`, use `sbc.contoso.com` as the SBC name. If you use a subdomain, make sure the subdomain is also added and verified. For example, if you want to use `sbc.acs.contoso.com`, then `acs.contoso.com` needs to be registered.
+
+### Domain verification using Azure portal
+
+#### Add new domain name
+
+1. Open Azure portal and navigate to your [Communication Service resource](../../quickstarts/create-communication-resource.md).
+1. In the left navigation pane, select Direct routing under Voice Calling - PSTN.
+1. Select Connect domain from the Domains tab.
+1. Enter the domain part of the SBC's fully qualified domain name.
+1. Reenter the domain name.
+1. Select Confirm and then select Add.
+
+[ ![Screenshot of adding a custom domain.](./media/direct-routing-add-domain.png)](./media/direct-routing-add-domain.png#lightbox)
+
+#### Verify domain ownership
+
+1. Select Verify next to the new domain that is now visible in the list of domains.
+1. Azure portal generates a value for a TXT record; you need to add that record to your domain's DNS zone.
+
+[ ![Screenshot of verifying a custom domain.](./media/direct-routing-verify-domain-2.png)](./media/direct-routing-verify-domain-2.png#lightbox)
+
+>[!Note]
+>It might take up to 30 minutes for the new DNS record to propagate on the Internet.
+
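+While you wait, you can check from a terminal whether the TXT record is visible yet. A minimal sketch, assuming a hypothetical domain `contoso.com` (substitute your own domain and the value shown in the portal):
+
+```bash
+# Query public DNS for TXT records on the domain (domain name is a placeholder).
+nslookup -type=TXT contoso.com
+
+# Or, where dig is available:
+dig TXT contoso.com +short
+```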
+3. Select Next. If everything is set up correctly, you should see the Domain status change to *Verified* next to the added domain.
+
+[ ![Screenshot of a verified domain.](./media/direct-routing-domain-verified.png)](./media/direct-routing-domain-verified.png#lightbox)
+
+#### Remove domain from Azure Communication Services
+
+If you want to remove a domain from your Azure Communication Services direct routing configuration, select the checkbox for the corresponding domain name, and then select *Remove*.
+
+[ ![Screenshot of removing a custom domain.](./media/direct-routing-remove-domain.png)](./media/direct-routing-remove-domain.png#lightbox)
+
+## Next steps
+
+### Conceptual documentation
+
+- [Telephony in Azure Communication Services](../../concepts/telephony/telephony-concept.md)
+- [Direct routing infrastructure requirements](../../concepts/telephony/direct-routing-infrastructure.md)
+- [Pricing](../../concepts/pricing.md)
+
+### Quickstarts
+
+- [Outbound call to a phone number](../../quickstarts/telephony/pstn-call.md)
+- [Redirect inbound telephony calls with Call Automation](../../quickstarts/call-automation/redirect-inbound-telephony-calls.md)
communication-services Voice Routing Sdk Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/voice-routing-sdk-config.md
+
+ Title: Quickstart - Configure voice routing using SDK
+
+description: In this quickstart, you learn how to configure Azure Communication Services direct routing programmatically.
++ Last updated : 03/11/2023+++
+zone_pivot_groups: acs-azp-java-python-csharp-js
+++
+# Quickstart: Configure voice routing programmatically
+
+Configure outbound voice routing rules for Azure Communication Services direct routing.
++++++
+## Clean up resources
+
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md#clean-up-resources).
+
+## Next steps
+
+For more information, see the following articles:
+
+- Learn about [Calling SDK capabilities](../voice-video-calling/getting-started-with-calling.md).
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md).
+- Call to a telephone number [quickstart](./pstn-call.md).
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
Previously updated : 11/14/2022 Last updated : 04/11/2023
The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during ledger creation or a ledger update can be used to call the confidential ledger Functional APIs.
-You will need a certificate in PEM format. You can create more than one certificate and add or delete them using ledger Update API.
+You need a certificate in PEM format. You can create more than one certificate, and add or delete them by using the ledger Update API.
## OpenSSL We recommend using OpenSSL to generate certificates. If you have git installed, you can run OpenSSL in the git shell. Otherwise, you can install OpenSSL for your OS. -- **Windows**: Install [chocolatey for Windows](https://chocolatey.org/install), open a PowerShell terminal windows in admin mode, and run `choco install openssl`. Alternatively, you can install OpenSSL for Windows directly from [here](http://gnuwin32.sourceforge.net/packages/openssl.htm).-- **Linux**: Run `sudo apt-get install openssl`
+- **Windows**: Install [Chocolatey for Windows](https://chocolatey.org/install), open a PowerShell terminal window in admin mode, and run `choco install openssl`. Alternatively, you can install OpenSSL for Windows directly from [here](http://gnuwin32.sourceforge.net/packages/openssl.htm).
+- **Linux**:
+ - Ubuntu:
+
+ ```bash
+ sudo apt-get install openssl
+ ```
+
+ - RHEL/CentOS:
+
+ ```bash
+ sudo yum install openssl -y
+ ```
+
+ - SUSE:
+
+ ```bash
+ sudo zypper install openssl
+ ```
You can then generate a certificate by running `openssl` in a Bash or PowerShell terminal window:
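For example, here's a minimal sketch that creates an ECDSA key pair and a self-signed certificate in PEM format; the file names and validity period are illustrative placeholders, so adjust them as needed:

```bash
# Generate an ECDSA private key on the secp384r1 curve (file name is a placeholder).
openssl ecparam -out "privatekey.pem" -name "secp384r1" -genkey

# Create a self-signed certificate in PEM format, valid for one year.
openssl req -new -key "privatekey.pem" -x509 -nodes -days 365 -out "cert.pem"
```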
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Azure Container Apps uses [availability zones](../availability-zones/az-overview
Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. To ensure resiliency, there's a minimum of three separate zones in all enabled regions. You can build high availability into your application architecture by co-locating your compute, storage, networking, and data resources within a zone and replicating in other zones.
-By enabling Container Apps' zone redundancy feature, replicas are automatically randomly distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic will automatically be routed to the replicas in the remaining zones.
+By enabling Container Apps' zone redundancy feature, replicas are automatically distributed across the zones in the region. Traffic is load balanced among the replicas. If a zone outage occurs, traffic will automatically be routed to the replicas in the remaining zones.
+
+> [!NOTE]
+> There's no extra charge for enabling zone redundancy, but it only provides benefits when you have two or more replicas. Three or more replicas are ideal, because most regions that support zone redundancy have three zones.
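If you create environments with the Azure CLI, zone redundancy can be enabled at creation time. The following is a hedged sketch, assuming placeholder resource names and a virtual network subnet that spans the region's zones:

```bash
# Create a zone-redundant Container Apps environment (names and subnet ID are placeholders).
az containerapp env create \
  --name my-environment \
  --resource-group my-resource-group \
  --location eastus \
  --infrastructure-subnet-resource-id "<SUBNET_ID>" \
  --zone-redundant
```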
In the unlikely event of a full region outage, you have the option of using one of two strategies:
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
There are two architectures in Container Apps: the Consumption only architecture
| Architecture Type | Description | |--|-|
-| Workload profiles architecture (preview) | Supports user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /27. |
+| Workload profiles architecture (preview) | Supports user defined routes (UDR) and egress through NAT Gateway when using a custom virtual network. The minimum required subnet size is /27. |
| Consumption only architecture | Doesn't support user defined routes (UDR) and egress through NAT Gateway. The minimum required subnet size is /23. | ## Accessibility Levels
As you create a custom VNet, keep in mind the following situations:
- You can define the subnet range used by the Container Apps environment. - You can restrict inbound requests to the environment exclusively to the VNet by deploying the environment as [internal](vnet-custom-internal.md).
+> [!NOTE]
+> When you provide your own virtual network, additional [managed resources](networking.md#managed-resources) are created, which incur billing.
+ As you begin to design the network around your container app, refer to [Plan virtual networks](../virtual-network/virtual-network-vnet-plan-design-arm.md) for important concerns surrounding running virtual networks on Azure. :::image type="content" source="media/networking/azure-container-apps-virtual-network.png" alt-text="Diagram of how Azure Container Apps environments use an existing V NET, or you can provide your own.":::
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The *Is Configurable* column in the following tables denotes a feature maximum m
| Environments | Region | Up to 15 | Yes | Limit up to 15 environments per subscription, per region.<br><br>For example, if you deploy to three regions you can get up to 45 environments for a single subscription. | | Container Apps | Environment | Unlimited | n/a | | | Revisions | Container app | 100 | No | |
-| Replicas | Revision | 30 | Yes | |
+| Replicas | Revision | 300 | Yes | |
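The replica maximum applies through a container app's scale settings. As an illustration only, with placeholder names and limits, scale bounds can be set with the Azure CLI:

```bash
# Set minimum and maximum replica counts for a container app (values are placeholders).
az containerapp update \
  --name my-container-app \
  --resource-group my-resource-group \
  --min-replicas 1 \
  --max-replicas 300
```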
## Consumption plan
container-apps Workload Profiles Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/workload-profiles-manage-cli.md
Previously updated : 03/28/2023 Last updated : 04/10/2023
Use the following commands to create an environment with a workload profile.
--resource-group "<RESOURCE_GROUP>" \ --name "<NAME>" \ --location "<LOCATION>" \
- --infrastructure-subnet-resource-id "<SUBNET_ID>"
- --internal--only
+ --infrastructure-subnet-resource-id "<SUBNET_ID>" \
+ --internal-only true
```
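After the command completes, you can verify the environment's configuration. For example (placeholders match the command above):

```bash
# Inspect the created environment, including its virtual network settings.
az containerapp env show \
  --name "<NAME>" \
  --resource-group "<RESOURCE_GROUP>"
```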
cosmos-db Continuous Backup Restore Resource Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-resource-model.md
[!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)]
-This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB API for SQL and the Azure Cosmos DB API for MongoDB. Currently, this feature is in preview for Azure Cosmos DB API for Gremlin and Table accounts.
+This article explains the resource model for the Azure Cosmos DB point-in-time restore feature. It explains the parameters that support the continuous backup and resources that can be restored. This feature is supported in Azure Cosmos DB API for SQL, API for Gremlin, API for Table, and API for MongoDB.
## Database account's resource model
The database account's resource model is updated with a few extra properties to
A new property in the account level backup policy named ``Type`` under the ``backuppolicy`` parameter enables continuous backup and point-in-time restore. This mode is referred to as **continuous backup**. You can set this mode when creating the account or while [migrating an account from periodic to continuous mode](migrate-continuous-backup.md). After continuous mode is enabled, all the containers and databases created within this account will have point-in-time restore and continuous backup enabled by default. The continuous backup tier can be set to ``Continuous7Days`` or ``Continuous30Days``. By default, if no tier is provided, ``Continuous30Days`` is applied on the account. > [!NOTE]
-> Currently the point-in-time restore feature is available for Azure Cosmos DB for MongoDB and API for NoSQL accounts. It is also available for API for Table and API for Gremlin in preview. After you create an account with continuous mode you can't switch it to a periodic mode. The ``Continuous7Days`` tier is in preview.
+> Currently the point-in-time restore feature is available for Azure Cosmos DB for NoSQL, API for MongoDB, API for Table, and API for Gremlin accounts. After you create an account with continuous mode, you can't switch it to a periodic mode. The ``Continuous7Days`` tier is in preview.
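For illustration, the following Azure CLI sketch creates an account with continuous backup and an explicit tier; the account and resource group names are placeholders:

```bash
# Create an account in continuous backup mode; the tier can be Continuous7Days or Continuous30Days.
az cosmosdb create \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --backup-policy-type Continuous \
  --continuous-tier Continuous7Days
```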
### CreateMode
cosmos-db Get Latest Restore Timestamp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/get-latest-restore-timestamp.md
Previously updated : 04/08/2022 Last updated : 03/31/2023
This article describes how to get the [latest restorable timestamp](latest-restore-timestamp-continuous-backup.md) for accounts with continuous backup mode. It explains how to get the latest restorable time using Azure PowerShell and Azure CLI, and provides the request and response format for the PowerShell and CLI commands.
-This feature is supported for Azure Cosmos DB API for NoSQL containers and Azure Cosmos DB API for MongoDB collections. This feature is in preview for API for Table tables and API for Gremlin graphs.
+This feature is supported for Azure Cosmos DB API for NoSQL containers, API for MongoDB collections, API for Table tables, and API for Gremlin graphs.
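For example, for an API for NoSQL container, you can fetch the latest restorable timestamp with the Azure CLI; the names and location below are placeholders:

```bash
# Get the latest restorable timestamp for a NoSQL container (names and location are placeholders).
az cosmosdb sql container retrieve-latest-backup-time \
  --resource-group my-resource-group \
  --account-name my-cosmos-account \
  --database-name my-database \
  --container-name my-container \
  --location "West US"
```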
## SQL container
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
To enroll in the preview, your Azure Cosmos DB account must meet all the followi
- Logic Apps - Azure Functions - Azure Search
- - Azure Cosmos DB Spark connector
+ - Azure Cosmos DB Spark connector < 4.18.0
- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 ### Account resources and configuration
If you enroll in the preview, the following connectors fail.
- Logic Apps ┬╣ - Azure Functions ┬╣ - Azure Search ┬╣-- Azure Cosmos DB Spark connector ┬╣
+- Azure Cosmos DB Spark connector < 4.18.0
- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET v3 SDK >= v3.27.0 or Java v4 SDK >= 4.42.0 ┬╣ Support for these connectors is planned for the future.
cosmos-db Migrate Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/migrate-continuous-backup.md
Previously updated : 08/24/2022 Last updated : 03/31/2023
The following are the key reasons to migrate into continuous mode:
* The ability to choose the events on the container, database, or account and decide when to initiate the restore. > [!IMPORTANT]
-> Support for 7-day continous backup in both provisioning and migration scenarios is still in preview. Please use PowerShell and Azure CLI to migrate or provision an account with continous backup configured at the 7-day tier.
+> Support for 7-day continuous backup in both provisioning and migration scenarios is still in preview.
> [!NOTE] > The migration capability is one-way only and it's an irreversible action. Which means once you migrate from periodic mode to continuous mode, you canΓÇÖt switch back to periodic mode. > > You can migrate an account to continuous backup mode only if the following conditions are true. Also checkout the [point in time restore limitations](continuous-backup-restore-introduction.md#current-limitations) before migrating your account: >
-> * If the account is of type API for NoSQL or MongoDB.
-> * If the account is of type API for Table or Gremlin. Support for these two APIs is in preview.
+> * If the account is of type API for NoSQL, API for Table, API for Gremlin, or API for MongoDB.
> * If the account has a single write region. > * If the account isn't enabled with analytical store. >
Yes.
### Which accounts can be targeted for backup migration?
-Currently, API for NoSQL and MongoDB accounts with single write region that have shared, provisioned, or autoscale provisioned throughput support migration. Support for API for Table and Gremlin is in preview.
+Currently, API for NoSQL, API for Table, API for Gremlin, and API for MongoDB accounts with a single write region that have shared, provisioned, or autoscale provisioned throughput support migration.
Accounts enabled with analytical storage and multiple-write regions aren't supported for migration.
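As an illustration, migrating an eligible account with the Azure CLI can look like the following sketch; the account and resource group names are placeholders, and remember that the migration is irreversible:

```bash
# Migrate an existing account from periodic to continuous backup mode (one-way operation).
az cosmosdb update \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --backup-policy-type Continuous
```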
cosmos-db Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md
Previously updated : 03/09/2023 Last updated : 04/11/2023 # MongoDB compatibility and feature support with Azure Cosmos DB for MongoDB vCore
Azure Cosmos DB for MongoDB supports documents encoded in MongoDB BSON format.
Azure Cosmos DB for MongoDB vCore supports the following indexes and index properties:
+> [!NOTE]
+> Creating a **unique index** obtains an exclusive lock on the collection for the entire duration of the build process. This blocks read and write operations on the collection until the operation is completed.
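+For context, this lock is taken by an ordinary unique-index build. A sketch using mongosh, with a placeholder connection string and a hypothetical collection:
+
+```bash
+# Build a unique index; the collection is locked for reads and writes until the build completes.
+mongosh "<CONNECTION_STRING>" --eval 'db.orders.createIndex({ orderId: 1 }, { unique: true })'
+```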
+ ### Indexes | Command | Supported |
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-changefeed-functions.md
Previously updated : 04/07/2023 Last updated : 04/11/2023
This scenario can have multiple causes. Consider trying any or all of the follow
### Some changes are repeated in my trigger
-The concept of a *change* is an operation on a document. The most common scenarios where events for the same document are received are:
+The concept of a *change* is an operation on an item. The most common scenarios where events for the same item are received are:
-* The account is using the *eventual consistency* model. While it's consuming the change feed at an eventual consistency level, there could be duplicate events in-between subsequent change feed read operations. That is, the *last* event of one read operation might appear as the *first* event of the next.
+* Your Function is failing during execution. If your Function has enabled [retry policies](../../azure-functions/functions-bindings-error-pages.md#retries), or if your Function execution is exceeding the allowed execution time, the same batch of changes can be delivered again to your Function. This behavior is expected and by design. Look at your Function logs for indications of failures, and make sure you have enabled [trigger logs](how-to-configure-cosmos-db-trigger.md#enabling-trigger-specific-logs) for further details.
-* The document is being updated. The change feed can contain multiple operations for the same documents. If the document is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If the properties don't match, the changes are different.
+* There's load balancing of leases across instances. When instances increase or decrease, [load balancing](change-feed-processor.md#dynamic-scaling) can cause the same batch of changes to be delivered to multiple Function instances. This behavior is expected and by design, and should be transient. The [trigger logs](how-to-configure-cosmos-db-trigger.md#enabling-trigger-specific-logs) include the events when an instance acquires and releases leases.
-* If you're identifying documents only by `id`, remember that the unique identifier for a document is the `id` and its partition key. (Two documents can have the same `id` but a different partition key.)
+* The item is being updated. The change feed can contain multiple operations for the same item. If the item is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same item is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If the properties don't match, the changes are different.
+
+* If you're identifying items only by `id`, remember that the unique identifier for an item is the `id` and its partition key. (Two items can have the same `id` but a different partition key).
### Some changes are missing in your trigger
cosmos-db Concepts Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-customer-managed-keys.md
Last updated 04/06/2023
-# Customer-managed keys in Azure Cosmos DB for PostgreSQL
+# Data Encryption with Customer Managed Keys Preview
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
cosmos-db How To Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-customer-managed-keys.md
Last updated 04/06/2023
-# Enable data encryption with customer-managed keys in Azure Cosmos DB for PostgreSQL
+# Enable data encryption with customer-managed keys (preview) in Azure Cosmos DB for PostgreSQL
[!INCLUDE [PostgreSQL](../includes/appliesto-postgresql.md)]
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
description: Learn how to identify the restore time and restore a live or delete
Previously updated : 04/18/2022 Last updated : 03/31/2023
Azure Cosmos DB's point-in-time restore feature helps you to recover from an acc
This article describes how to identify the restore time and restore a live or deleted Azure Cosmos DB account. It shows how to restore the account using [Azure portal](#restore-account-portal), [PowerShell](#restore-account-powershell), [CLI](#restore-account-cli), or an [Azure Resource Manager template](#restore-arm-template).
-> [!NOTE]
-> Currently in preview, the restore action for API for Table and Gremlin is supported via PowerShell and the Azure CLI.
+ ## <a id="restore-account-portal"></a>Restore an account using Azure portal
Use the following steps to get the restore details from Azure portal:
## <a id="restore-account-powershell"></a>Restore an account using Azure PowerShell
-Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps) or version higher than 6.2.0. Next connect to your Azure account and select the required subscription with the following commands:
+Before restoring the account, install the [latest version of Azure PowerShell](/powershell/azure/install-az-ps), or any version higher than 9.6.0. Next, connect to your Azure account and select the required subscription with the following commands:
1. Sign into Azure using the following command:
To restore Customer Managed Key (CMK) continuous account please refer to the ste
### <a id="get-the-restore-details-powershell"></a>Get the restore details from the restored account
-Import the `Az.CosmosDB` module and run the following command to get the restore details. The restoreTimestamp will be under the restoreParameters object:
+Import the `Az.CosmosDB` module version 1.10.0 and run the following command to get the restore details. The `restoreTimestamp` value is under the `restoreParameters` object:
```azurepowershell Get-AzCosmosDBAccount -ResourceGroupName MyResourceGroup -Name MyCosmosDBDatabaseAccount
Before restoring the account, install Azure CLI with the following steps:
1. Install the latest version of Azure CLI
- * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli) or version higher than 2.26.0.
- * If you have already installed CLI, run `az upgrade` command to update to the latest version. This command will only work with CLI version higher than 2.11. If you have an earlier version, use the above link to install the latest version.
+ * Install the latest version of [Azure CLI](/cli/azure/install-azure-cli), or any version higher than 2.46.0.
+ * If you have already installed the CLI, run the `az upgrade` command to update to the latest version. This command only works with CLI versions higher than 2.46.0. If you have an earlier version, use the above link to install the latest version.
1. Sign in and select your subscription
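Once signed in, you can start a point-in-time restore from the CLI. The following is a hedged sketch; the account names, timestamp, and location are placeholders:

```bash
# Restore a live account into a new target account at a given UTC timestamp.
az cosmosdb restore \
  --account-name my-source-account \
  --target-database-account-name my-restored-account \
  --resource-group my-resource-group \
  --restore-timestamp "2023-03-30T10:00:00Z" \
  --location "West US"
```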
Use the following ARM template to restore an account for the Azure Cosmos DB API
"restoreSource": "/subscriptions/2296c272-5d55-40d9-bc05-4d56dc2d7588/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/5cb9d82e-ec71-430b-b977-cd6641db85bc", "restoreMode": "PointInTime", "restoreTimestampInUtc": "2021-10-27T23:20:46Z",
- "gremlinDatabasesToRestore": {
+ "gremlinDatabasesToRestore": [{
"databaseName": "db1", "graphNames": [ "graph1", "graph2" ]
- }
+ }]
} } }
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 04/04/2023 Last updated : 04/11/2023
After you cancel, your services are disabled. That means your virtual machines a
After your subscription is canceled, Microsoft waits 30 - 90 days before permanently deleting your data in case you need to access it, or if you want to reactivate the subscription. We don't charge you for keeping the data. To learn more, see [Microsoft Trust Center - How we manage your data](https://go.microsoft.com/fwLink/p/?LinkID=822930&clcid=0x409).
+>[!NOTE]
+> You must manually cancel your SaaS subscriptions before you cancel your Azure subscription. Only pay-as-you-go SaaS subscriptions are canceled automatically by the Azure subscription cancellation process.
+ ## Delete subscriptions The **Delete subscription** option isn't available until at least 15 minutes after you cancel your subscription.
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-access.md
Account administrator can grant others access to Azure billing information by as
- Owner - Contributor - Reader-- Billing reader
+- Billing Reader
These roles have access to billing information in the [Azure portal](https://portal.azure.com/). People that are assigned these roles can also use the [Cost Management APIs](../automate/automation-overview.md) to programmatically get invoices and usage details.
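For example, the subscription-scope roles can also be assigned with the Azure CLI; a sketch with placeholder values:

```bash
# Assign the Billing Reader role at subscription scope (assignee and subscription ID are placeholders).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Billing Reader" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>"
```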
If you have questions or need help, [create a support request](https://go.micro
## Next steps - Users in other roles, such as Owner or Contributor, can access not just billing information, but Azure services as well. To manage these roles, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).-- For more information about roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
+- For more information about roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md).
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/troubleshoot-azure-sign-up.md
tags: billing
Previously updated : 12/06/2022 Last updated : 04/11/2023 # Troubleshoot issues when you sign up for a new account in the Azure portal
-You may experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide will walk you through the sign-up process and discuss some common issues at each step.
+You may experience an issue when you try to sign up for a new account in the Microsoft Azure portal. This short guide walks you through the sign-up process and discusses some common issues at each step.
> [!NOTE] > If you already have an existing account and are looking for guidance to troubleshoot sign-in issues, see [Troubleshoot Azure subscription sign-in issues](./troubleshoot-sign-in-issue.md). ## Before you begin
-Before beginning sign-up, verify the following:
+Before beginning sign-up, verify the following information:
- The information for your Azure profile (including contact email address, street address, and telephone number) is correct. - Your credit card information is correct.
This walkthrough provides examples of the correct information to sign up for an
When you initially sign up for Azure, you have to provide some information about yourself, including: -- Your country or region
+- Your country/region
- First name - Last name - Email address
If you continue to receive the message, try to sign up by using a different brow
How about InPrivate browsing?
-#### Free trial is not available
+#### Free trial isn't available
Have you used an Azure subscription in the past? The Azure Terms of Use agreement limits free trial activation only for a user that's new to Azure. If you have had any other type of Azure subscription, you can't activate a free trial. Consider signing up for a [Pay-As-You-Go subscription](https://azure.microsoft.com/offers/ms-azr-0003p/).
To resolve this issue, double-check whether the following items are true:
#### You see the message 'Your current account type is not supported'
-This issue can occur if the account is registered in an [unmanaged Azure AD directory](../../active-directory/enterprise-users/directory-self-service-signup.md), and it is not in your organization's Azure AD directory.
-To resolve this issue, sign up the Azure account by using another account, or take over the unmanaged AD directory. For more information, see [Take over an unmanaged directory as administrator in Azure Active Directory](../../active-directory/enterprise-users/domains-admin-takeover.md).
+This issue can occur if the account is registered in an [unmanaged Azure AD directory](../../active-directory/enterprise-users/directory-self-service-signup.md), and it isn't in your organization's Azure AD directory. To resolve this issue, sign up the Azure account by using another account, or take over the unmanaged AD directory. For more information, see [Take over an unmanaged directory as administrator in Azure Active Directory](../../active-directory/enterprise-users/domains-admin-takeover.md).
+
+The issue can also occur if the account was created using the Microsoft 365 Developer Program. Microsoft doesn't allow purchasing other paid services using your Microsoft 365 Developer Program subscription. For more information, see [Does the subscription also include a subscription to Azure?](/office/developer-program/microsoft-365-developer-program-faq#does-the-subscription-also-include-a-subscription-to-azure-)
## Identity verification by phone
When you get the text message or telephone call, enter the code that you receive
Although the sign-up verification process is typically quick, it may take up to four minutes for a verification code to be delivered.
-Here are some additional tips:
+Here are some other tips:
- You can use any phone number for verification as long as it meets the requirements. The phone number that you enter for verification isn't stored as a contact number for the account. - A Voice-over-IP (VoiP) phone number can't be used for the phone verification process.
Here are some additional tips:
#### Credit card declined or not accepted
-Virtual or pre-paid credit cards aren't accepted as payment for Azure subscriptions. To see what else may cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
+Virtual or prepaid credit cards aren't accepted as payment for Azure subscriptions. To see what else may cause your card to be declined, see [Troubleshoot a declined card at Azure sign-up](./troubleshoot-declined-card.md).
#### Credit card form doesn't support my billing address
Use the following steps to update your browser's cookie settings.
### I saw a charge on my free trial account
-You may see a small, temporary verification hold on your credit card account after you sign up. This hold is removed within three to five days. If you are worried about managing costs, read more about [Analyzing unexpected charges](../understand/analyze-unexpected-charges.md).
+You may see a small, temporary verification hold on your credit card account after you sign up. This hold is removed within three to five days. If you're worried about managing costs, read more about [Analyzing unexpected charges](../understand/analyze-unexpected-charges.md).
## Agreement
Check that you're using the correct sign-in credentials. Then, check the benefit
- Sign in to the [Microsoft for Startups portal](https://startups.microsoft.com/#start-two) to verify your eligibility status for Microsoft for Startups. - If you can't verify your status, you can get help on the [Microsoft for Startups forums](https://www.microsoftpartnercommunity.com/t5/Microsoft-for-Startups/ct-p/Microsoft_Startups). - Cloud Partner Program
- - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for additional benefits.
+ - Sign in to the [Cloud Partner Program portal](https://mspartner.microsoft.com/Pages/Locale.aspx) to verify your eligibility status. If you have the appropriate [Cloud Platform Competencies](https://mspartner.microsoft.com/pages/membership/cloud-platform-competency.aspx), you may be eligible for other benefits.
- If you can't verify your status, contact [Cloud Partner Program Support](https://mspartner.microsoft.com/Pages/Support/Premium/contact-support.aspx). ### Can't activate new Azure In Open subscription To create an Azure In Open subscription, you must have a valid Online Service Activation (OSA) key that has at least one Azure In Open token associated with it. If you don't have an OSA key, contact one of the Microsoft Partners that are listed in [Microsoft Pinpoint](https://pinpoint.microsoft.com/).
-## Additional help resources
+## Other help resources
Other troubleshooting articles for Azure Billing and Subscriptions
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
To learn more about Azure Compute Gallery and how to create galleries, see:
## Prerequisites -- A dev center. If you don't have one available, follow the steps in [1. Create a dev center](quickstart-configure-dev-box-service.md#1-create-a-dev-center).-- A compute gallery. For you to use a gallery to configure dev box definitions, it must have at least [one image definition and one image version](../virtual-machines/image-version.md):
- - The image version must meet the [Windows 365 image requirements](/windows-365/enterprise/device-images#image-requirements):
- - Generation 2.
- - Hyper-V v2.
- - Windows OS.
- - Generalized image.
- - Single-session virtual machine (VM) images. (Multiple-session VM images aren't supported.)
- - No recovery partition.
- - Default 64-GB OS disk size. The OS disk size is automatically adjusted to the size specified in the SKU description of the Windows 365 license.
-
- - The image definition must have [trusted launch enabled as the security type](../virtual-machines/trusted-launch.md). You configure the security type when you create the image definition.
+- A dev center. If you don't have one available, follow the steps in [Create a dev center](quickstart-configure-dev-box-service.md#1-create-a-dev-center).
+- A compute gallery. Images stored in a compute gallery can be used in a dev box definition, provided they meet the requirements listed in the [Compute gallery image requirements](#compute-gallery-image-requirements) section.
+
+> [!NOTE]
+> Microsoft Dev Box Preview doesn't support community galleries.
+
+## Compute gallery image requirements
+
+A gallery used to configure dev box definitions must have at least [one image definition and one image version](../virtual-machines/image-version.md).
+
+The image version must meet the following requirements:
+- Generation 2.
+- Hyper-V v2.
+- Windows OS.
+ - Windows 10 Enterprise version 20H2 or later.
+ - Windows 11 Enterprise 21H2 or later.
+- Generalized VM image.
+ - You must create the image using the sysprep `/mode:vm` flag: `Sysprep /generalize /oobe /mode:vm`. </br>
+ For more information, see: [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).
+ - To speed up the Dev Box creation time, you can disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. </br>
+ For more information, see: [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true).
+- Single-session virtual machine (VM) images. (Multiple-session VM images aren't supported.)
+- No recovery partition.
+ - For information about how to remove a recovery partition, see the [Windows Server command: delete partition](/windows-server/administration/windows-commands/delete-partition).
+- Default 64-GB OS disk size. The OS disk size is automatically adjusted to the size specified in the SKU description of the Windows 365 license.
+- The image definition must have [trusted launch enabled as the security type](../virtual-machines/trusted-launch.md). You configure the security type when you create the image definition.
:::image type="content" source="media/how-to-configure-azure-compute-gallery/image-definition.png" alt-text="Screenshot that shows Windows 365 image requirement settings."::: > [!NOTE]
-> - If you have existing images that don't meet the Windows 365 image requirements, those images won't be listed for image creation.
-> - Microsoft Dev Box Preview doesn't support community galleries.
+> - Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance.
+> - Images that do not meet Windows 365 requirements will not be listed for creation.
## Provide permissions for services to access a gallery
Use the following steps to manually assign each role.
| **Assign access to** | Select **Managed Identity**. | | **Members** | Search for and select the user-assigned managed identity that you created when you [added a user-assigned identity to the dev center](#add-a-user-assigned-identity-to-the-dev-center). |
-You can use the same managed identity in multiple dev centers and compute galleries. Any dev center with the managed identity added will have the necessary permissions to the images in the gallery that you've added the Owner role assignment to.
+You can use the same managed identity in multiple dev centers and compute galleries. Any dev center with the managed identity added has the necessary permissions to the images in the gallery that you've added the Owner role assignment to.
## Attach a gallery to a dev center
dms Tutorial Postgresql Azure Postgresql Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online-portal.md
description: Learn to perform an online migration from PostgreSQL on-premises to
Previously updated : 04/11/2020 Last updated : 03/31/2023
- ignite-2022
-# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS via the Azure portal
+# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS (classic) via the Azure portal
-You can use Azure Database Migration Service to migrate the databases from an on-premises PostgreSQL instance to [Azure Database for PostgreSQL](../postgresql/index.yml) with minimal downtime to the application. In this tutorial, you migrate the **DVD Rental** sample database from an on-premises instance of PostgreSQL 9.6 to Azure Database for PostgreSQL by using the online migration activity in Azure Database Migration Service.
+You can use Azure Database Migration Service to migrate the databases from an on-premises PostgreSQL instance to [Azure Database for PostgreSQL](../postgresql/index.yml) with minimal downtime to the application. In this tutorial, you migrate the **listdb** sample database from an on-premises instance of PostgreSQL 13.10 to Azure Database for PostgreSQL by using the online migration activity in Azure Database Migration Service.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
To complete this tutorial, you need to:
-* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/) 9.4, 9.5, 9.6, or 10. The source PostgreSQL Server version must be 9.4, 9.5, 9.6, 10, 11, 12, or 13. For more information, see [Supported PostgreSQL database versions](../postgresql/concepts-supported-versions.md).
+* Download and install [PostgreSQL community edition](https://www.postgresql.org/download/). The source PostgreSQL Server version must be >= 9.4. For more information, see [Supported PostgreSQL database versions](../postgresql/flexible-server/concepts-supported-versions.md).
- Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 9.6 can migrate to Azure Database for PostgreSQL 9.6, 10, or 11, but not to Azure Database for PostgreSQL 9.5.
+ Also note that the target Azure Database for PostgreSQL version must be equal to or later than the on-premises PostgreSQL version. For example, PostgreSQL 12 can migrate to Azure Database for PostgreSQL version 12 or later, but not to Azure Database for PostgreSQL 11.
+
+* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md).
-* [Create an Azure Database for PostgreSQL server](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server](../postgresql/hyperscale/quickstart-create-portal.md).
* Create a Microsoft Azure Virtual Network for Azure Database Migration Service by using the Azure Resource Manager deployment model, which provides site-to-site connectivity to your on-premises source servers by using either [ExpressRoute](../expressroute/expressroute-introduction.md) or [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md). For more information about creating a virtual network, see the [Virtual Network Documentation](../virtual-network/index.yml), and especially the quickstart articles with step-by-step details. > [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
pg_dump -O -h hostname -U db_username -d db_name -s > your_schema.sql ```
- For example, to create a schema dump file for the **dvdrental** database:
+ For example, to create a schema dump file for the **listdb** database:
```
- pg_dump -O -h localhost -U postgres -d dvdrental -s -x > dvdrentalSchema.sql
+ pg_dump -O -h localhost -U postgres -d listdb -s -x > listdbSchema.sql
``` For more information about using the pg_dump utility, see the examples in the [pg-dump](https://www.postgresql.org/docs/9.6/static/app-pgdump.html#PG-DUMP-EXAMPLES) tutorial. 2. Create an empty database in your target environment, which is Azure Database for PostgreSQL.
- For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md) or [Create an Azure Database for PostgreSQL - Hyperscale (Citus) server in the Azure portal](../postgresql/hyperscale/quickstart-create-portal.md).
+ For details on how to connect and create a database, see the article [Create an Azure Database for PostgreSQL server in the Azure portal](../postgresql/quickstart-create-server-database-portal.md).
- > [!NOTE]
- > An instance of Azure Database for PostgreSQL - Hyperscale (Citus) has only a single database: **citus**.
3. Import the schema into the target database you created by restoring the schema dump file.
To complete all the database objects like table schemas, indexes and stored proc
For example: ```
- psql -h mypgserver-20170401.postgres.database.azure.com -U postgres -d dvdrental citus < dvdrentalSchema.sql
+ psql -h mypgserver-20170401.postgres.database.azure.com -U postgres -d migratedb < listdbSchema.sql
``` > [!NOTE]
To complete all the database objects like table schemas, indexes and stored proc
[!INCLUDE [resource-provider-register](../../includes/database-migration-service-resource-provider-register.md)]
-## Create a DMS instance
-
-1. In the Azure portal, select + **Create a resource**, search for Azure Database Migration Service, and then select **Azure Database Migration Service** from the drop-down list.
-
- ![Azure Marketplace](media/tutorial-postgresql-to-azure-postgresql-online-portal/portal-marketplace.png)
-
-2. On the **Azure Database Migration Service** screen, select **Create**.
-
- ![Create Azure Database Migration Service instance](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-create1.png)
-
-3. On the **Create Migration Service** screen, specify a name, the subscription, a new or existing resource group, and the location for the service.
-
-4. Select an existing virtual network or create a new one.
-
- The virtual network provides Azure Database Migration Service with access to the source PostgreSQL server and the target Azure Database for PostgreSQL instance.
-
- For more information about how to create a virtual network in the Azure portal, see the article [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-5. Select a pricing tier.
-
- For more information on costs and pricing tiers, see the [pricing page](https://aka.ms/dms-pricing).
-
- ![Configure Azure Database Migration Service instance settings](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-settings4.png)
-
-6. Select **Review + create** to create the service.
-
- Service creation will complete within about 10 to 15 minutes.
## Create a migration project After the service is created, locate it within the Azure portal, open it, and then create a new migration project. 1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.-
- ![Locate all instances of Azure Database Migration Service](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-search.png)
+
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-search-classic.png" alt-text="Screenshot of a Search Azure Database Migration Service.":::
2. On the **Azure Database Migration Services** screen, search for the name of Azure Database Migration Service instance that you created, select the instance, and then select + **New Migration Project**.
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-instance-search-classic.png" alt-text="Screenshot of a Searching the Azure Database Migration Service instance.":::
+ 3. On the **New migration project** screen, specify a name for the project, in the **Source server type** text box, select **PostgreSQL**, in the **Target server type** text box, select **Azure Database for PostgreSQL**.
-4. In the **Choose type of activity** section, select **Online data migration**.
+4. In the **Migration activity type** section, select **Online data migration**.
- ![Create Azure Database Migration Service project](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-create-project.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-create-project-classic.png" alt-text="Screenshot of a Create a new migration project.":::
> [!NOTE] > Alternately, you can choose **Create project only** to create the migration project now and execute the migration later.
-5. Select **Save**, note the requirements to successfully use Azure Database Migration Service to migrate data, and then select **Create and run activity**.
+5. Select **Create and run activity** to use Azure Database Migration Service to migrate the data.
## Specify source details 1. On the **Add Source Details** screen, specify the connection details for the source PostgreSQL instance.
- ![Add Source Details screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-add-source-details.png)
-
-2. Select **Save**.
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-add-source-details-classic.png" alt-text="Screenshot of an Add source details screen.":::
## Specify target details
-1. On the **Target details** screen, specify the connection details for the target Hyperscale (Citus) server, which is the pre-provisioned instance of Hyperscale (Citus) to which the **DVD Rentals** schema was deployed by using pg_dump.
+1. On the **Target details** screen, specify the connection details for the target Azure Database for PostgreSQL - Flexible server, which is the preprovisioned instance to which the schema was deployed by using pg_dump.
- ![Target details screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-add-target-details.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-add-target-details-classic.png" alt-text="Screenshot of an Add target details screen.":::
-2. Select **Save**, and then on the **Map to target databases** screen, map the source and the target database for migration.
+2. Select **Next: Select databases**, and then on the **Select databases** screen, map the source and the target database for migration.
If the target database contains the same database name as the source database, Azure Database Migration Service selects the target database by default.
- ![Map to target databases screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-map-target-databases.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-map-target-databases-classic.png" alt-text="Screenshot of a map databases with the target screen.":::
+
+3. Select **Next: Select tables**, and then on the **Select tables** screen, select the tables that need to be migrated.
-3. Select **Save**, and then on the **Migration settings** screen, accept the default values.
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-select-tables-classic.png" alt-text="Screenshot of a selecting the tables for migration screen.":::
+
+4. Select **Next: Configure migration settings**, and then on the **Configure migration settings** screen, accept the default values.
- ![Migration settings screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-migration-settings.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-migration-settings-classic.png" alt-text="Screenshot of configuring migration setting screen.":::
-4. Select **Save**, on the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+5. On the **Migration summary** screen, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
- ![Migration summary screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-migration-summary.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-migration-summary-classic.png" alt-text="Screenshot of migration summary screen.":::
## Run the migration
-* Select **Run migration**.
+* Select **Start migration**.
The migration activity window appears, and the **Status** of the activity should update to show as **Backup in Progress**.
After the service is created, locate it within the Azure portal, open it, and th
1. On the migration activity screen, select **Refresh** to update the display until the **Status** of the migration shows as **Complete**.
- ![Monitor migration process](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-monitor-migration.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-monitor-migration-classic.png" alt-text="Screenshot of migration monitoring screen.":::
2. When the migration is complete, under **Database Name**, select a specific database to get to the migration status for **Full data load** and **Incremental data sync** operations. > [!NOTE] > **Full data load** shows the initial load migration status, while **Incremental data sync** shows change data capture (CDC) status.
- ![Full data load details](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-full-data-load-details.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-full-data-load-details-classic.png" alt-text="Screenshot of migration full load details screen.":::
- ![Incremental data sync details](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-incremental-data-sync-details.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-incremental-data-sync-details-classic.png" alt-text="Screenshot of migration incremental load details screen.":::
## Perform migration cutover
After the initial Full load is completed, the databases are marked **Ready to cu
2. Wait until the **Pending changes** counter shows **0** to ensure that all incoming transactions to the source database are stopped, select the **Confirm** checkbox, and then select **Apply**.
- ![Complete cutover screen](media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-complete-cutover.png)
+ :::image type="content" source="media/tutorial-postgresql-to-azure-postgresql-online-portal/dms-complete-cutover-classic.png" alt-text="Screenshot of cutover completion screen.":::
3. When the database migration status shows **Completed**, [recreate sequences](https://wiki.postgresql.org/wiki/Fixing_Sequences) (if applicable), and connect your applications to the new target instance of Azure Database for PostgreSQL.
dms Tutorial Postgresql Azure Postgresql Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-postgresql-azure-postgresql-online.md
- ignite-2022
-# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS via the Azure CLI
+# Tutorial: Migrate PostgreSQL to Azure Database for PostgreSQL online using DMS (classic) via the Azure CLI
You can use Azure Database Migration Service to migrate the databases from an on-premises PostgreSQL instance to [Azure Database for PostgreSQL](../postgresql/index.yml) with minimal downtime to the application. In this tutorial, you migrate the **DVD Rental** sample database from an on-premises instance of PostgreSQL 9.6 to Azure Database for PostgreSQL by using the online migration activity in Azure Database Migration Service.
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
Title: Partner Events overview for system owners who want to become partners description: Provides an overview of the concepts and general steps to become a partner. Previously updated : 04/28/2022 Last updated : 04/10/2023 # Partner Events overview for partners - Azure Event Grid
You have two options:
## References
- * [Swagger](https://github.com/ahamad-MS/azure-rest-api-specs/blob/main/specification/eventgrid/resource-manager/Microsoft.EventGrid/stable/2022-06-15/EventGrid.json)
+ * [Swagger](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/eventgrid/resource-manager/Microsoft.EventGrid)
* [ARM template](/azure/templates/microsoft.eventgrid/allversions) * [ARM template schema](https://github.com/Azure/azure-resource-manager-schemas/blob/main/schemas/2022-06-15/Microsoft.EventGrid.json) * [REST APIs](/rest/api/eventgrid/controlplane-version2021-10-15-preview/partner-namespaces)
firewall Easy Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/easy-upgrade.md
# Azure Firewall easy upgrade/downgrade
-You can now easily upgrade your existing Firewall Standard SKU to Premium SKU and downgrade from Premium to Standard SKU. The process is fully automated and has no service impact (zero service downtime).
+You can now easily upgrade your existing Firewall Standard SKU to Premium SKU and downgrade from Premium to Standard SKU.
+
+> [!IMPORTANT]
+> Always perform any upgrade/downgrade operations during off-business hours and scheduled maintenance times.
## Policies
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
+
+ Title: Use connections
+
+description: Learn how to use connections to connect to external data sources for training with Azure Machine Learning.
+++++++ Last updated : 04/11/2023++
+# Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
++
+# Create connections
++
+In this article, learn how to connect to data sources located outside of Azure, to make that data available to Azure Machine Learning services. Azure Machine Learning supports connections to these external sources:
+- Snowflake DB
+- Amazon S3
+- Azure SQL DB
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- The [Azure Machine Learning SDK for Python](https://aka.ms/sdk-v2-install).
+
+- An Azure Machine Learning workspace.
+
+> [!NOTE]
+> An Azure Machine Learning connection securely stores the credentials passed during connection creation in the workspace Azure Key Vault. A connection references the credentials from the key vault storage location for further use, so you don't need to handle them directly after they're stored. You can optionally store the credentials in the YAML file, and a CLI command or the SDK can override them. We recommend that you **avoid** credential storage in a YAML file, because a security breach could lead to a credential leak.
+
+## Create a Snowflake DB connection
+
+# [CLI: Username/password](#tab/cli-username-password)
+This YAML file creates a Snowflake DB connection. Be sure to update the appropriate values:
+
+```yaml
+# my_snowflakedb_connection.yaml
+$schema: http://azureml/sdk-2-0/Connection.json
+type: snowflake
+name: my_snowflakedb_connection # add your datastore name here
+
+target: jdbc:snowflake://<myaccount>.snowflakecomputing.com/?db=<mydb>&warehouse=<mywarehouse>&role=<myrole>
+# add the Snowflake account, database, warehouse name, and role name here. If no role name is provided, it defaults to PUBLIC
+credentials:
+ type: username_password
+ username: <username> # add the Snowflake database user name here or leave this blank and type in CLI command line
+ password: <password> # add the Snowflake database password here or leave this blank and type in CLI command line
+```
+
+Create the Azure Machine Learning connection in the CLI:
+
+### Option 1: Use the username and password in the YAML file
+
+```azurecli
+az ml connection create --file my_snowflakedb_connection.yaml
+```
+
+### Option 2: Override the username and password at the command line
+
+```azurecli
+az ml connection create --file my_snowflakedb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
+```
+
+# [Python SDK: username/password](#tab/sdk-username-password)
+
+### Option 1: Load connection from YAML file
+
+```python
+from azure.ai.ml import MLClient, load_workspace_connection
+
+ml_client = MLClient.from_config()
+
+wps_connection = load_workspace_connection(source="./my_snowflakedb_connection.yaml")
+wps_connection.credentials.username="XXXXX"
+wps_connection.credentials.password="XXXXXXXX"
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
+
+### Option 2: Use WorkspaceConnection() in a Python script
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import WorkspaceConnection
+from azure.ai.ml.entities import UsernamePasswordConfiguration
+
+target= "jdbc:snowflake://<myaccount>.snowflakecomputing.com/?db=<mydb>&warehouse=<mywarehouse>&role=<myrole>"
+# add the Snowflake account, database, warehouse name and role name here. If no role name provided it will default to PUBLIC
+
+wps_connection = WorkspaceConnection(type="snowflake",
+target= target,
+credentials= UsernamePasswordConfiguration(username="XXXXX", password="XXXXXX")
+)
+
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
+++
+## Create an Azure SQL DB connection
+
+# [CLI: Username/password](#tab/cli-sql-username-password)
+
+This YAML file creates an Azure SQL DB connection. Be sure to update the appropriate values:
+
+```yaml
+# my_sqldb_connection.yaml
+$schema: http://azureml/sdk-2-0/Connection.json
+
+type: azuresqldb
+name: my_sqldb_connection
+
+target: Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30
+# add the SQL server name, port, and database name
+credentials:
+ type: sql_auth
+ username: <username> # add the sql database user name here or leave this blank and type in CLI command line
+ password: <password> # add the sql database password here or leave this blank and type in CLI command line
+```
+
+Create the Azure Machine Learning connection in the CLI:
+
+### Option 1: Use the username and password from the YAML file
+
+```azurecli
+az ml connection create --file my_sqldb_connection.yaml
+```
+
+### Option 2: Override the username and password at the command line
+
+```azurecli
+az ml connection create --file my_sqldb_connection.yaml --set credentials.username="XXXXX" credentials.password="XXXXX"
+```
+
+# [Python SDK: username/password](#tab/sdk-sql-username-password)
+
+### Option 1: Load connection from YAML file
+
+```python
+from azure.ai.ml import MLClient, load_workspace_connection
+
+ml_client = MLClient.from_config()
+
+wps_connection = load_workspace_connection(source="./my_sqldb_connection.yaml")
+wps_connection.credentials.username="XXXXXX"
+wps_connection.credentials.password="XXXXXxXXX"
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
+
+### Option 2: Use WorkspaceConnection() in a Python script
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import WorkspaceConnection
+from azure.ai.ml.entities import UsernamePasswordConfiguration
+
+target= "Server=tcp:<myservername>,<port>;Database=<mydatabase>;Trusted_Connection=False;Encrypt=True;Connection Timeout=30"
+# add the sql servername, port addresss and database
+
+wps_connection = WorkspaceConnection(type="azure_sql_db",
+target= target,
+credentials= UsernamePasswordConfiguration(username="XXXXX", password="XXXXXX")
+)
+
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
+++
+## Create an Amazon S3 connection
+
+# [CLI: Access key](#tab/cli-s3-access-key)
+
+Create an Amazon S3 connection with the following YAML file. Be sure to update the appropriate values:
+
+```yaml
+# my_s3_connection.yaml
+$schema: http://azureml/sdk-2-0/Connection.json
+
+type: s3
+name: my_s3_connection
+
+target: https://<mybucket>.amazonaws.com # add the s3 bucket details
+credentials:
+ type: access_key
+ access_key_id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX # add access key id
+ secret_access_key: XxXxXxXXXXXXXxXxXxxXxxXXXXXXXXxXxxXXxXXXXXXXxxxXxXXxXXXXXxXXxXXXxXxXxxxXXxXXxXXXXXxXxxXX # add access key secret
+```
+
+Create the Azure Machine Learning connection in the CLI:
+
+```azurecli
+az ml connection create --file my_s3_connection.yaml
+```
+
+# [Python SDK: Access key](#tab/sdk-s3-access-key)
+
+### Option 1: Load connection from YAML file
+
+```python
+from azure.ai.ml import MLClient, load_workspace_connection
+
+ml_client = MLClient.from_config()
+
+wps_connection = load_workspace_connection(source="./my_s3_connection.yaml")
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
+
+### Option 2: Use WorkspaceConnection() in a Python script
+
+```python
+from azure.ai.ml import MLClient
+from azure.ai.ml.entities import WorkspaceConnection
+from azure.ai.ml.entities import AccessKeyConfiguration
+
+target = "https://<mybucket>.amazonaws.com" # add the s3 bucket details
+wps_connection = WorkspaceConnection(type="s3",
+target= target,
+credentials= AccessKeyConfiguration(access_key_id="XXXXXX",acsecret_access_key="XXXXXXXX")
+)
+
+ml_client.connections.create_or_update(workspace_connection=wps_connection)
+
+```
++
+## Next steps
+
+- [Import data assets](how-to-import-data-assets.md#import-data-assets)
machine-learning How To Import Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-import-data-assets.md
+
+ Title: Import Data
+
+description: Learn how to import data from external sources onto the Azure Machine Learning platform.
+++++++ Last updated : 04/11/2023+++
+# Import data assets
+
+> [!div class="op_single_selector" title1="Select the version of Azure Machine Learning SDK you are using:"]
+> * [v2](how-to-import-data-assets.md)
+
+In this article, learn how to import data into the Azure Machine Learning platform from external sources. A successful import automatically creates and registers an Azure Machine Learning data asset with the name provided during the import. An Azure Machine Learning data asset resembles a web browser bookmark (favorites). You don't need to remember long storage paths (URIs) that point to your most-frequently used data. Instead, you can create a data asset, and then access that asset with a friendly name.
+
+A data import creates a cache of the source data, along with metadata, for faster and more reliable data access in Azure Machine Learning training jobs. The data cache avoids network and connection constraints. The cached data is versioned to support reproducibility (which provides versioning capabilities for data imported from SQL Server sources). Additionally, the cached data provides data lineage for auditability. A data import uses Azure Data Factory (ADF) pipelines behind the scenes, so users can avoid complex interactions with ADF. Azure Machine Learning also handles management of the ADF compute resource pool size, compute resource provisioning, and tear-down to optimize data transfer by determining the proper parallelization.
+
+The transferred data is partitioned and securely stored as Parquet files in Azure storage, which enables faster processing during training. ADF compute costs are incurred only for the time used for data transfers. Storage costs are incurred only while the cached data is retained, because the cached data is a copy of the data imported from an external source, and that copy is hosted in Azure storage.
+
+The caching feature involves upfront compute and storage costs. However, it pays for itself, and can save money, because it reduces recurring training compute costs compared to direct connections to external source data during training. It caches data as parquet files, which makes job training faster and more reliable against connection timeouts for larger data sets. This leads to fewer reruns, and fewer training failures.
+
+You can now import data from Snowflake, Amazon S3 and Azure SQL.
+
+## Prerequisites
+
+To create and work with data assets, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
+
+* The [Azure Machine Learning CLI/SDK installed](how-to-configure-cli.md).
+
+* [Workspace connections created](how-to-connection.md)
+
+## Import from external database sources to create an mltable data asset
+
+> [!NOTE]
+> External database sources include Snowflake and Azure SQL.
+
+The following code samples can import data from external databases. The `connection` that handles the import action determines the external database data source metadata. In this sample, the code imports data from a Snowflake resource; with a little modification, the connection can point to an Azure SQL Database source instead. The imported asset `type` from an external database source is `mltable`.
+
+# [Azure CLI](#tab/cli)
+
+Create a `YAML` file `<file-name>.yml`:
+
+```yaml
+$schema: http://azureml/sdk-2-0/DataImport.json
+# Supported connections include:
+# Connection: azureml:<workspace_connection_name>
+# Supported paths include:
+# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+
+type: mltable
+name: <name>
+source:
+ type: database
+ query: <query>
+ connection: <connection>
+path: <path>
+```
+
+Next, run the following command in the CLI:
+
+```cli
+> az ml data import -f <file-name>.yml
+```
+
+# [Python SDK](#tab/Python-SDK)
+```python
+
+from azure.ai.ml.entities import DataImport
+from azure.ai.ml.data_transfer import Database
+from azure.ai.ml import MLClient
+
+# Supported connections include:
+# Connection: azureml:<workspace_connection_name>
+# Supported paths include:
+# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+
+ml_client = MLClient.from_config()
+
+data_import = DataImport(
+ name="<name>",
+ source=Database(connection="<connection>", query="<query>"),
+ path="<path>"
+ )
+ml_client.data.import_data(data_import=data_import)
+
+```
+++
+## Import data from external file system resources to create a uri_folder data asset
+
+> [!NOTE]
+> An Amazon S3 data resource can serve as an external file system resource.
+
+The `connection` that handles the data import action determines the details of the external data source. In this sample, the connection defines an Amazon S3 bucket as the target and expects a valid `path` value. An asset imported from an external file system source has a `type` of `uri_folder`.
+
+The next code sample imports data from an Amazon S3 resource.
+
+# [Azure CLI](#tab/cli)
+
+Create a `YAML` file `<file-name>.yml`:
+
+```yaml
+$schema: http://azureml/sdk-2-0/DataImport.json
+# Supported connections include:
+# Connection: azureml:<workspace_connection_name>
+# Supported paths include:
+# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+
+type: uri_folder
+name: <name>
+source:
+ type: file_system
+ path: <path_on_source>
+ connection: <connection>
+path: <path>
+```
+
+Next, execute this command in the CLI:
+
+```cli
+> az ml data import -f <file-name>.yml
+```
+
+# [Python SDK](#tab/Python-SDK)
+```python
+
+from azure.ai.ml.entities import DataImport
+from azure.ai.ml.data_transfer import FileSystem
+from azure.ai.ml import MLClient
+
+# Supported connections include:
+# Connection: azureml:<workspace_connection_name>
+# Supported paths include:
+# Datastore: azureml://datastores/<data_store_name>/paths/<my_path>/${{name}}
+
+ml_client = MLClient.from_config()
+
+data_import = DataImport(
+ name="<name>",
+ source=FileSystem(connection="<connection>", path="<path_on_source>"),
+ path="<path>"
+ )
+ml_client.data.import_data(data_import=data_import)
+
+```
+++
+## Check the import status of external data sources
+
+The data import action is an asynchronous action, and it can take a long time. After you submit a data import action via the CLI or SDK, the Azure Machine Learning service might need several minutes to connect to the external data source. Then, the service starts the data import and handles data caching and registration. The time needed for a data import also depends on the size of the source data set.
+
+The next example returns the status of the submitted data import activity. The command or method uses the data asset name as the input to determine the status of the data materialization.
+
+# [Azure CLI](#tab/cli)
++
+```cli
+> az ml data list-materialization-status --name <name>
+```
+
+# [Python SDK](#tab/Python-SDK)
+
+```python
+from azure.ai.ml.entities import DataImport
+from azure.ai.ml import MLClient
+
+ml_client = MLClient.from_config()
+
+ml_client.data.show_materialization_status(name="<name>")
+
+```
+++
+## Next steps
+
+- [Read data in a job](how-to-read-write-data-v2.md#read-data-in-a-job)
+- [Working with tables in Azure Machine Learning](how-to-mltable.md)
+- [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
-# Troubleshooting environment image builds using troubleshooting log error messages
+# Troubleshooting environment issues
-In this article, learn how to troubleshoot common problems you may encounter with environment image builds.
+In this article, learn how to troubleshoot common problems you may encounter with environment image builds and learn about AzureML environment vulnerabilities.
We are actively seeking your feedback! If you navigated to this page via your Environment Definition or Build Failure Analysis logs, we'd like to know if the feature was helpful to you, or if you'd like to report a failure scenario that isn't yet covered by our analysis. You can also leave feedback on this documentation. Leave your thoughts [here](https://aka.ms/azureml/environment/log-analysis-feedback).
Multiple environments with the same definition may result in the same cached image.
Running a training script remotely requires the creation of a Docker image.
-### Reproducibility and vulnerabilities
-
-#### *Vulnerabilities*
+## Vulnerabilities in AzureML Environments
You can address vulnerabilities by upgrading to a newer version of a dependency (base image, Python package, etc.) or by migrating to a different dependency that satisfies security requirements. Mitigating vulnerabilities is time consuming and costly since it can require refactoring of code and infrastructure. With the prevalence
There are some ways to decrease the impact of vulnerabilities:
- Compartmentalize your environment so you can scope and fix issues in one place.
- Understand flagged vulnerabilities and their relevance to your scenario.
-#### *Vulnerabilities vs Reproducibility*
+### Scan for Vulnerabilities
+
+You can monitor and maintain environment hygiene with [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md) to help scan images for vulnerabilities.
+
+To automate this process based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
+
+### Vulnerabilities vs Reproducibility
Reproducibility is one of the foundations of software development. When you're developing production code, a repeated operation must guarantee the same result. Mitigating vulnerabilities can disrupt reproducibility by changing dependencies.
result. Mitigating vulnerabilities can disrupt reproducibility by changing depen
Azure Machine Learning's primary focus is to guarantee reproducibility. Environments fall under three categories: curated, user-managed, and system-managed.
-**Curated environments** are pre-created environments that Azure Machine Learning manages and are available by default in every Azure Machine Learning workspace provisioned.
+### *Curated Environments*
-They contain collections of Python packages and settings to help you get started with various machine learning frameworks. You're meant to use them as is.
-These pre-created environments also allow for faster deployment time.
+Curated environments are pre-created environments that Azure Machine Learning manages and are available by default in every Azure Machine Learning workspace provisioned. New versions are released by Azure Machine Learning to address vulnerabilities. Whether you use the latest image may be a tradeoff between reproducibility and vulnerability management.
+
+Curated environments contain collections of Python packages and settings to help you get started with various machine learning frameworks. You're meant to use them as is. These pre-created environments also allow for faster deployment time.
-In **user-managed environments**, you're responsible for setting up your environment and installing every package that your training script needs on the
+### *User-managed Environments*
+
+In user-managed environments, you're responsible for setting up your environment and installing every package that your training script needs on the
compute target and for model deployment. These types of environments have two subtypes:

- BYOC (bring your own container): the user provides a Docker image to Azure Machine Learning
- Docker build context: Azure Machine Learning materializes the image from the user-provided content
-Once you install more dependencies on top of a Microsoft-provided image, or bring your own base image, vulnerability
-management becomes your responsibility.
+Once you install more dependencies on top of a Microsoft-provided image, or bring your own base image, vulnerability management becomes your responsibility.
+
+### *System-managed Environments*
-You use **system-managed environments** when you want conda to manage the Python environment for you. Azure Machine Learning creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. While Azure Machine Learning patches base images with each release, whether you use the
+You use system-managed environments when you want conda to manage the Python environment for you. Azure Machine Learning creates a new isolated conda environment by materializing your conda specification on top of a base Docker image. While Azure Machine Learning patches base images with each release, whether you use the
latest image may be a tradeoff between reproducibility and vulnerability management. So, it's your responsibility to choose the environment version used for your jobs or model deployments while using system-managed environments.
+### Vulnerabilities: Common Issues
+
+### *Vulnerabilities in Base Docker Images*
+
+System vulnerabilities in an environment are usually introduced from the base image. For example, vulnerabilities marked as "Ubuntu" or "Debian" are from the system level of the environment: the base Docker image. If the base image is from a third-party issuer, check whether the latest version has fixes for the flagged vulnerabilities. The most common sources for the base images in Azure Machine Learning are:
+
+- Microsoft Artifact Registry (MAR) aka Microsoft Container Registry (mcr.microsoft.com).
+ - Images can be listed from the MAR homepage, by calling the `_catalog` API, or from [/tags/list](https://mcr.microsoft.com/v2/azureml/openmpi4.1.0-ubuntu20.04/tags/list)
+ - Source and release notes for training base images from AzureML can be found in [Azure/AzureML-Containers](https://github.com/Azure/AzureML-Containers)
+- Nvidia (nvcr.io, or [nvidia's Profile](https://hub.docker.com/u/nvidia/#!))
+
+If the latest version of your base image doesn't resolve your vulnerabilities, you can address them by installing the versions recommended by a vulnerability scan:
+
+```bash
+apt-get install -y library_name
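+# to pin the patched version recommended by the scan, a hypothetical form is:
+# apt-get install -y library_name=<fixed-version>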
+```
+
+### *Vulnerabilities in Python Packages*
+
+Vulnerabilities can also come from Python packages installed on top of the system-managed base image. These Python-related vulnerabilities should be resolved by updating your Python dependencies. Python (pip) vulnerabilities in the image usually come from user-defined dependencies.
+
+To search for known Python vulnerabilities and solutions, see the [GitHub Advisory Database](https://github.com/advisories). To address Python vulnerabilities, update the package to the version that has fixes for the flagged issue:
+
+```bash
+pip install -U my_package=={good.version}
+```
+
+If you're using a conda environment, update the reference in the conda dependencies file.
+
+In some cases, Python packages are automatically installed during conda's setup of your environment on top of a base Docker image. Mitigation steps for those are the same as those for user-introduced packages. Conda installs necessary dependencies for every environment it materializes. Packages like cryptography, setuptools, and wheel are automatically installed from conda's default channels. There's a known issue with the default anaconda channel missing the latest package versions, so it's recommended to prioritize the community-maintained conda-forge channel. Otherwise, explicitly specify packages and versions, even if you don't reference them in the code you plan to execute on that environment.
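+
+For example, a minimal conda dependencies file (a sketch; the package names and versions are illustrative, not prescriptive) that prioritizes conda-forge and pins explicit versions:
+
+```yaml
+# conda_dependencies.yaml - hypothetical example
+channels:
+  - conda-forge   # prioritize the community-maintained channel
+dependencies:
+  - python=3.9
+  - cryptography=39.0.2   # pin a version that includes the fix for the flagged issue
+  - pip
+  - pip:
+      - wheel==0.40.0     # pin pip-installed packages explicitly too
+```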
+
+### *Cache issues*
Associated with your Azure Machine Learning workspace is an Azure Container Registry instance that's a cache for container images. Any image materialized is pushed to the container registry and used if you trigger experimentation or deployment for the corresponding environment. Azure
-Machine Learning doesn't delete images from your container registry, and it's your responsibility to evaluate which images you need to maintain over time. You
-can monitor and maintain environment hygiene with [Microsoft Defender for Container Registry](../defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md)
-to help scan images for vulnerabilities. To
-automate this process based on triggers from Microsoft Defender, see [Automate responses to Microsoft Defender for Cloud triggers](../defender-for-cloud/workflow-automation.md).
+Machine Learning doesn't delete images from your container registry, and it's your responsibility to evaluate which images you need to maintain over time.
+
+## Troubleshooting environment image builds
+
+Learn how to troubleshoot issues with environment image builds and package installations.
## **Environment definition problems**
Ensure that you've spelled all listed packages correctly and that you've pinned
### Missing command <!--issueDescription-->
-This issue can happen when a command isn't recognized during an image build.
+This issue can happen when a command isn't recognized during an image build or in the specified Python package requirement.
**Potential causes:**
* You didn't spell the command correctly
pip install --ignore-installed [package]
Try creating a separate environment using conda
+### Invalid operator
+<!--issueDescription-->
+This issue can happen when pip fails to install a Python package due to an invalid operator found in the requirement.
+
+**Potential causes:**
+* There's an invalid operator found in the Python package requirement
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Ensure that you've spelled the package correctly and that the specified version exists
+* Ensure that your package version specifier is formatted correctly and that you're using valid comparison operators. See [Version specifiers](https://peps.python.org/pep-0440/#version-specifiers)
+* Replace the invalid operator with the operator recommended in the error message
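+
+As a quick reference, here's a hypothetical requirements snippet that contrasts valid PEP 440 specifiers with a common invalid one:
+
+```
+numpy==1.24.2       # valid: exact version
+pandas>=1.5,<2.0    # valid: bounded range
+scipy~=1.10         # valid: compatible release
+# invalid: scipy=1.10 (a single '=' isn't a valid comparison operator for pip)
+```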
+
+### No matching distribution
+<!--issueDescription-->
+This issue can happen when there's no package found that matches the version you specified.
+
+**Potential causes:**
+* You spelled the package name incorrectly
+* The package and version can't be found on the channels or feeds that you specified
+* The version you specified doesn't exist
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Ensure that you've spelled the package correctly and that it exists
+* Ensure that the version you specified for the package exists
+* Run `pip install --upgrade pip` and then run the original command again
+* Ensure the pip you're using can install packages for the desired Python version. See [Should I use pip or pip3?](https://stackoverflow.com/questions/61664673/should-i-use-pip-or-pip3)
+
+**Resources**
+* [Running Pip](https://pip.pypa.io/en/stable/user_guide/#running-pip)
+* [pypi](https://aka.ms/azureml/environment/pypi)
+* [Installing Python Modules](https://docs.python.org/3/installing/index.html)
+ ## *Make issues* ### No targets specified and no makefile found <!--issueDescription-->
This issue can happen when Docker fails to find and copy a file.
* [Docker COPY](https://docs.docker.com/engine/reference/builder/#copy)
* [Docker Build Context](https://docs.docker.com/engine/context/working-with-contexts/)
+## *Apt-Get Issues*
+### Failed to run apt-get command
+<!--issueDescription-->
+This issue can happen when apt-get fails to run.
+
+**Potential causes:**
+* Network connection issue, which could be temporary
+* Broken dependencies related to the package you're running apt-get on
+* You don't have the correct permissions to use the apt-get command
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+* Check your network connection and DNS settings
+* Run `apt-get check` to check for broken dependencies
+* Run `apt-get update` and then run your original command again
+* Run the command with the `-f` flag, which tries to resolve any issues arising from the broken dependencies
+* Run the command with `sudo` permissions, such as `sudo apt-get install <package-name>`
+
+**Resources**
+* [Package management with APT](https://help.ubuntu.com/community/AptGet/Howto)
+* [Ubuntu Apt-Get](https://manpages.ubuntu.com/manpages/xenial/man8/apt-get.8.html)
+* [What to do when apt-get fails](https://www.linux.com/news/what-do-when-apt-get-fails/)
+* [apt-get command in Linux with Examples](https://www.geeksforgeeks.org/apt-get-command-in-linux-with-examples/)
+ ## *Docker push issues* ### Failed to store Docker image <!--issueDescription-->
This issue can happen when Docker doesn't recognize an instruction in the Docker
**Resources**
* [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)
+## *Command Not Found*
+### Command not recognized
+<!--issueDescription-->
+This issue can happen when the command being run isn't recognized.
+
+**Potential causes:**
+* You haven't installed the command via your Dockerfile before you try to execute the command
+* You haven't included the command in your path, or you haven't added it to your path
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because Azure Machine Learning implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+Ensure that you have an installation step for the command in your Dockerfile before trying to execute the command
+* Review this [example](https://stackoverflow.com/questions/67186341/make-install-in-dockerfile)
+
+If you've tried installing the command and are experiencing this issue, ensure that you've added the command to your path
+* Review this [example](https://stackoverflow.com/questions/27093612/in-a-dockerfile-how-to-update-path-environment-variable)
+* Review how to set [environment variables in a Dockerfile](https://docs.docker.com/engine/reference/builder/#env)
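+
+As an illustration, here's a minimal Dockerfile sketch (the base image tag, tool, and install path are assumptions for the example) that installs a command before it's used and extends the path:
+
+```dockerfile
+FROM mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest
+
+# install the tool before any later step invokes it
+RUN apt-get update && apt-get install -y --no-install-recommends curl \
+    && rm -rf /var/lib/apt/lists/*
+
+# make a custom install location visible to later RUN steps and at runtime
+ENV PATH="/opt/mytool/bin:${PATH}"
+```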
+ ## *Miscellaneous build issues* ### Build log unavailable <!--issueDescription-->
managed-grafana How To Api Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-api-calls.md
Previously updated : 03/23/2023 Last updated : 04/05/2023
You now need to gather some information, which you'll use to get a Grafana API access token.
## Get an access token
-To access Grafana APIs, you need to get an access token. Follow the example below to call Azure AD and retrieve a token. Replace `<tenant-id>`, `<client-id>`, and `<client-secret>` with the tenant ID, application (client) ID, and client secret collected in the previous step.
+To access Grafana APIs, you need to get an access token. You can get the access token using the Azure CLI or by making a POST request.
+
+### [Azure CLI](#tab/azure-cli)
+
+Sign in to the Azure CLI by running the [az login](/cli/azure/reference-index#az-login) command and replace `<client-id>`, `<client-secret>`, and `<tenant-id>` with the application (client) ID, client secret, and tenant ID collected in the previous step:
+
+```
+az login --service-principal --username "<client-id>" --password "<client-secret>" --tenant "<tenant-id>"
+```
+
+Use the command [az grafana api-key create](/cli/azure/grafana/api-key#az-grafana-api-key-create) to create a key. Here's an example output:
+
+```
+az grafana api-key create --key keyname --name <name> --resource-group <rg> --role editor --output json
+
+{
+ "id": 3,
+ "key": "<redacted>",
+ "name": "keyname"
+}
+```
+
+> [!NOTE]
+> You can only view this key here once. Save it in a secure place.
+
+### [POST request](#tab/post)
+
+Follow the example below to call Azure AD and retrieve a token. Replace `<tenant-id>`, `<client-id>`, and `<client-secret>` with the tenant ID, application (client) ID, and client secret collected in the previous step.
```bash curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
Here's an example of the response:
} ``` ++ ## Call Grafana APIs You can now call Grafana APIs using the access token retrieved in the previous step as the Authorization header. For example:
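A minimal sketch (the instance URL is a placeholder; substitute your Azure Managed Grafana endpoint and the token from the previous step):

```bash
curl -X GET \
  -H 'Accept: application/json' \
  -H 'Authorization: Bearer <access-token>' \
  'https://<my-grafana-instance>.<region>.grafana.azure.com/api/user'
```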
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
VPN Gateway helps you create encrypted cross-premises connections to your virtua
The following diagram illustrates multiple site-to-site VPN connections to the same virtual network. To view more connection diagrams, see [VPN Gateway - design](../../vpn-gateway/design.md). For more information about VPN Gateway, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md) ### <a name="virtualwan"></a>Virtual WAN Azure Virtual WAN is a networking service that brings many networking, security, and routing functionalities together to provide a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. Some of the main features include:
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
Once you've authenticated against the Active Directory, you then retrieve a toke
## Other considerations - Multiple Azure AD principals (a user, group, service principal or managed identity) can be configured as Azure AD Administrator for an Azure Database for PostgreSQL server at any time.-- Azure AD groups must be a mail enabled security group for authentication to work. - Only an Azure AD administrator for PostgreSQL can initially connect to the Azure Database for PostgreSQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users. - If an Azure AD principal is deleted from Azure AD, it still remains as PostgreSQL role, but it will no longer be able to acquire new access token. In this case, although the matching role still exists in the database it won't be able to authenticate to the server. Database administrators need to transfer ownership and drop roles manually.
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
Title: Query Store - Azure Database for PostgreSQL - Flex Server
-description: This article describes the Query Store feature in Azure Database for PostgreSQL - Flex Server.
+ Title: Query Store - Azure Database for PostgreSQL - Flexible Server
+description: This article describes the Query Store feature in Azure Database for PostgreSQL - Flexible Server.
The following options apply specifically to wait statistics.
|||||
| pgms_wait_sampling.query_capture_mode | Sets which statements are tracked for wait stats. | none | none, all |
| pgms_wait_sampling.history_period | Sets the frequency, in milliseconds, at which wait events are sampled. | 100 | 1-600000 |
-> [!NOTE]
-> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect.
+
+> [!NOTE]
+> **pg_qs.query_capture_mode** supersedes **pgms_wait_sampling.query_capture_mode**. If pg_qs.query_capture_mode is NONE, the pgms_wait_sampling.query_capture_mode setting has no effect.
Use the [Azure portal](howto-configure-server-parameters-using-portal.md) to get or set a different value for a parameter.
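For example, a sketch using the Azure CLI (the resource group and server names are placeholders):

```azurecli
az postgres flexible-server parameter set \
  --resource-group <resource-group> \
  --server-name <server-name> \
  --name pgms_wait_sampling.query_capture_mode \
  --value all
```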
This view returns the query plan that was used to execute a query. There is one
## Limitations and known issues - If a PostgreSQL server has the parameter `default_transaction_read_only` on, Query Store will not capture any data.-- + ## Next steps - Learn more about [scenarios where Query Store can be especially helpful](concepts-query-store-scenarios.md). - Learn more about [best practices for using Query Store](concepts-query-store-best-practices.md).
postgresql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md
You can have a primary server in any [Azure Database for PostgreSQL region](http
[//]: # (If you are using cross-region replicas for disaster recovery planning, we recommend you create the replica in the paired region instead of one of the other regions. Paired regions avoid simultaneous updates and prioritize physical isolation and data residency.) ## Create a replica
+A primary server for Azure Database for PostgreSQL - Flexible Server can be deployed in [any region that supports the service](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=postgresql&regions=all). You can create replicas of the primary server within the same region or across different global Azure regions where Azure Database for PostgreSQL - Flexible Server is available. However, it is important to note that replicas cannot be created in [special Azure regions](../../virtual-machines/regions.md#special-azure-regions), regardless of whether they are in-region or cross-region.
When you start the create replica workflow, a blank Azure Database for PostgreSQL server is created. The new server is filled with the data that was on the primary server. For replicas in the same region, a snapshot approach is used, so the creation time doesn't depend on the size of the data. Geo-replicas are created using a base backup of the primary instance, which is then transmitted over the network; creation time might range from minutes to several hours, depending on the size of the primary.
It is essential to monitor storage usage and replication lag closely, and take n
### Server parameters
-You are free to change server parameters on your read replica server and set different values than on the primary server. The only exception are parameters that might affect recovery of the replica, mentioned also in the "Scaling" section below: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes. Please ensure these parameters ale always [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the standby does not run out of shared memory during recovery.
+You are free to change server parameters on your read replica server and set different values than on the primary server. The only exceptions are parameters that might affect recovery of the replica, also mentioned in the "Scaling" section below: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes. Please ensure these parameters are always [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the replica does not run out of shared memory during recovery. A quick way to compare the settings is shown below.
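+
+A minimal sketch to compare these settings (run on both the primary and the replica, then compare the output):
+
+```sql
+SELECT name, setting
+FROM pg_settings
+WHERE name IN ('max_connections', 'max_prepared_transactions',
+               'max_locks_per_transaction', 'max_wal_senders', 'max_worker_processes');
+```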
### Scaling
You are free to scale up and down compute (vCores), changing the service tier fr
For compute scaling:
-* PostgreSQL requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the standby does not run out of shared memory during recovery. The parameters affected are: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes.
+* PostgreSQL requires several parameters on replicas to be [greater than or equal to the setting on the primary](https://www.postgresql.org/docs/current/hot-standby.html#HOT-STANDBY-ADMIN) to ensure that the replica does not run out of shared memory during recovery. The parameters affected are: max_connections, max_prepared_transactions, max_locks_per_transaction, max_wal_senders, max_worker_processes.
* **Scaling up**: First scale up a replica's compute, then scale up the primary.
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | South India | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | Southeast Asia | :heavy_check_mark: | :x: $ | :heavy_check_mark: | :heavy_check_mark: |
-| Sweden Central | :heavy_check_mark: | :x: | :heavy_check_mark: | :x: |
+| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: | :x: | :heavy_check_mark: | :heavy_check_mark: | | UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
Title: Azure Database for PostgreSQL - Flexible Server Release notes description: Release notes of Azure Database for PostgreSQL - Flexible Server.--++ Previously updated : 11/05/2022 Last updated : 4/10/2023 # Release notes - Azure Database for PostgreSQL - Flexible Server
Last updated 11/05/2022
This page provides latest news and updates regarding feature additions, engine versions support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL
+## Release: April 2023
+* Public preview of [Query Performance Insight](./concepts-query-performance-insight.md) for Azure Database for PostgreSQL - Flexible Server.
## Release: March 2023
* General availability of [Read Replica](concepts-read-replicas.md) for Azure Database for PostgreSQL - Flexible Server.
* Public preview of [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) for Azure Database for PostgreSQL - Flexible Server.
postgresql Whats Happening To Postgresql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/whats-happening-to-postgresql-single-server.md
Learn how to migrate from Azure Database for PostgreSQL - Single Server to Azure
**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
-**A.** We aren't stopping the ability to create new single servers immediately, so you can continue to create new single servers through CLI to meet your business needs for all PostgresSQL versions supported on Azure Database for PostgreSQL ΓÇô Single Server. We strongly encourage you to explore Flexible Server and see if that will meet your needs. Don't hesitate to contact us if necessary so we can guide you and suggest the best path forward for you.
+**A.** We aren't stopping the ability to create new single servers immediately, so you can continue to create new single servers through CLI to meet your business needs for all PostgreSQL versions supported on Azure Database for PostgreSQL - Single Server. We strongly encourage you to explore Flexible Server and see if that will meet your needs. Don't hesitate to contact us if necessary so we can guide you and suggest the best path forward for you.
**Q. Are there any additional costs associated with performing the migration?**
We know migrating services can be a frustrating experience, and we apologize in
## Next steps - [Migration tool](../migrate/concepts-single-to-flexible.md)-- [What is flexible server?](../flexible-server/overview.md)
+- [What is flexible server?](../flexible-server/overview.md)
private-5g-core Azure Private 5G Core Release Notes 2303 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-private-5g-core-release-notes-2303.md
Last updated 03/29/2023
The following release notes identify the new features, critical open issues, and resolved issues for the 2303 release of Azure Private 5G Core (AP5GC). The release notes are continuously updated, with critical issues requiring a workaround added as they're discovered. Before deploying this new version, please review the information contained in these release notes.
-This article applies to the AP5GC 2303 release (PMN-2303-0). This release is compatible with the ASE Pro 1 GPU and ASE Pro 2 running the ASE 2301 release, and is supported by the 2022-04-01-preview and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
+This article applies to the AP5GC 2303 release (PMN-2303-0). This release is compatible with the ASE Pro 1 GPU and ASE Pro 2 running the ASE 2301 and ASE 2303 releases, and is supported by the 2022-04-01-preview and 2022-11-01 [Microsoft.MobileNetwork](/rest/api/mobilenetwork) API versions.
## Support
private-5g-core Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/whats-new.md
To help you stay up to date with the latest developments, this article covers:
This page is updated regularly with the latest developments in Azure Private 5G Core.
+## March 2023
+
+### Packet core 2303
+
+**Type:** New release
+
+**Date available:** March 30, 2023
+
+The 2303 release for the Azure Private 5G Core packet core is now available. For more information, see [Azure Private 5G Core 2303 release notes](azure-private-5g-core-release-notes-2303.md).
+ ## February 2023 ### Packet core 2302
purview How To Managed Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-managed-attributes.md
Previously updated : 12/16/2022 Last updated : 04/11/2023 # Managed attributes in the Microsoft Purview Data Catalog
Once you have created managed attributes, you can refine your [data catalog sear
Below are the known limitations of the managed attribute feature as it currently exists in Microsoft Purview. - Managed attributes can only be expired, not deleted.-- Managed attributes get matched to search keywords, but there's no user-facing filter in the search results. Managed attributes can be filtered using the Search APIs. - Managed attributes can't be applied via the bulk edit experience. - After creating an attribute group, you can't edit the name of the attribute group. - After creating a managed attribute, you can't update the attribute name, attribute group or the field type.
reliability Asm Retirement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/asm-retirement.md
Below is a list of classic resources being retired, their retirement dates, and
|[Integration Services Environment](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) | Aug 24 |[Migrate Integration Services Environment to ARM](/azure/logic-apps/export-from-ise-to-standard-logic-app?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |[Microsoft HPC Pack](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |Aug 24| [Migrate Microsoft HPC Pack to ARM](/powershell/high-performance-computing/burst-to-cloud-services-retirement-guide)| |[Virtual WAN](/azure/virtual-wan/virtual-wan-faq#update-router?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | Aug 24 | [Migrate Virtual WAN to ARM](/azure/virtual-wan/virtual-wan-faq?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#update-router) |
-|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Storage to ARM](/azure/storage/common/storage-account-migrate-classic?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+|[Classic Storage](https://azure.microsoft.com/updates/classic-azure-storage-accounts-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Storage to ARM](/azure/storage/common/classic-account-migration-overview?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
|[Classic Virtual Network](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Virtual Network to ARM]( /azure/virtual-network/migrate-classic-vnet-powershell?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |[Classic Application Gateway](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) | Aug 24 | [Migrate Classic Application Gateway to ARM](/azure/application-gateway/classic-to-resource-manager?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |[Classic Reserved IP addresses](https://azure.microsoft.com/updates/five-azure-classic-networking-services-will-be-retired-on-31-august-2024/) |Aug 24| [Migrate Classic Reserved IP addresses to ARM](/azure/virtual-network/ip-services/public-ip-upgrade-classic?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
[Azure Site Recovery](../site-recovery/site-recovery-overview.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure SQL](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Storage Mover](reliability-azure-storage-mover.md)|
[Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Virtual Machines](../virtual-machines/virtual-machines-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
This section outlines variations and considerations when using Networking servic
||--|| | Private Link| <li>For Private Link services availability, see [Azure Private Link availability](../private-link/availability.md).<li>For Private DNS zone names, see [Azure Private Endpoint DNS configuration](../private-link/private-endpoint-dns.md#government). |
+### Azure Container Apps
+
+This section outlines variations and considerations when using Azure Container Apps services.
+
+| Product | Unsupported, limited, and/or modified features | Notes |
+||--||
+| Azure Monitor | The Azure Monitor integration isn't supported in Azure China. | |
+ ## Azure in China REST endpoints
For IP rangers for Azure in China, download [Azure Datacenter IP Ranges in China
| Azure Bot Services | <\*.botframework.com> | <\*.botframework.azure.cn> | | Azure Key Vault API | \*.vault.azure.net | \*.vault.azure.cn | | Sign in with PowerShell: <br>- Azure classic portal <br>- Azure Resource Manager <br>- Azure AD| - Add-AzureAccount<br>- Connect-AzureRmAccount <br> - Connect-msolservice |  - Add-AzureAccount -Environment AzureChinaCloud <br> - Connect-AzureRmAccount -Environment AzureChinaCloud <br>- Connect-msolservice -AzureEnvironment AzureChinaCloud |
+| Azure Container Apps Default Domain | \*.azurecontainerapps.io | No default domain is provided for external environments. A [custom domain](/azure/container-apps/custom-domains-certificates) is required. |
+| Azure Container Apps Event Stream Endpoint | \<region\>.azurecontainerapps.dev | \<region\>.chinanorth3.azurecontainerapps-dev.cn |
### Application Insights
sap Manage Virtual Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/manage-virtual-instance.md
To configure your VIS in the Azure portal:
:::image type="content" source="media/configure-virtual-instance/select-vis.png" lightbox="media/configure-virtual-instance/select-vis.png" alt-text="Screenshot of Azure portal, showing the VIS page in the Azure Center for SAP solutions service with a table of available VIS resources.":::
+> [!IMPORTANT]
+> Each VIS resource has a unique managed resource group associated with it. This resource group contains resources such as a storage account and a key vault, which are critical for the Azure Center for SAP solutions service to provide capabilities like deploying infrastructure for a new system, installing SAP software, registering existing systems, and all other SAP system management functions. Don't delete this resource group or any resources within it. If they're deleted, you have to re-register the VIS to use any capabilities of Azure Center for SAP solutions (ACSS).
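If you want to review what this managed resource group contains, a read-only listing is safe. The following Azure CLI sketch uses a placeholder resource group name:

```azurecli
# Inspect (but don't delete) the resources in the VIS managed resource group
az resource list --resource-group <vis-managed-resource-group> --output table
```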
++ ## Monitor VIS To see infrastructure-based metrics for the VIS, [open the VIS in the Azure portal](#open-vis-in-portal). On the **Overview** pane, select the **Monitoring** tab. You can see the following metrics:
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
Previously updated : 12/06/2022 Last updated : 04/10/2023
[2002167]:https://launchpad.support.sap.com/#/notes/2002167 [2772999]:https://launchpad.support.sap.com/#/notes/2772999
+[3108316]:https://launchpad.support.sap.com/#/notes/3108316
[2009879]:https://launchpad.support.sap.com/#/notes/2009879 [1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553
The following items are prefixed with either **[A]** - applicable to all nodes,
8. **[A]** RHEL configuration
- Configure RHEL as described in SAP Note [2002167] for RHEL 7.x or SAP Note [2772999] for RHEL 8.x
+ Configure RHEL as described in SAP Note [2002167] for RHEL 7.x, SAP Note [2772999] for RHEL 8.x, or SAP Note [3108316] for RHEL 9.x.
### Installing SAP NetWeaver ASCS/ERS 1. **[1]** Configure cluster default properties ```bash
- # If using RHEL 7.X
+ # If using RHEL 7.x
pcs resource defaults resource-stickiness=1 pcs resource defaults migration-threshold=3
- # If using RHEL 8.X
+ # If using RHEL 8.x or later
pcs resource defaults update resource-stickiness=1 pcs resource defaults update migration-threshold=3 ```
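To confirm that the values took effect, you can print the currently configured defaults; running `pcs resource defaults` without arguments lists them:

```bash
# Display the currently configured resource defaults
sudo pcs resource defaults
```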
sap High Availability Guide Rhel Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md
vm-windows Previously updated : 04/03/2022 Last updated : 04/10/2023
The following items are prefixed with either **[A]** - applicable to all nodes,
1. **[A]** Install RHEL HA Add-On
- ```sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
+ ```bash
+ sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat
``` > [!IMPORTANT]
The following items are prefixed with either **[A]** - applicable to all nodes,
1. If deploying on RHEL 9, also install the resource agents for cloud deployment:
- ```
+ ```bash
sudo yum install -y resource-agents-cloud ```
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
Previously updated : 12/28/2022 Last updated : 04/10/2023
Other built-in functionality of ANF storage:
> [!IMPORTANT] > Specifically for database deployments, you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below. -- You can use a public preview functionality that allows you to create the NFS share in the same Azure Availability Zone as you placed the VM that should mount the NFS shares. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). For most deployment cases, the colocation of the NFS volume in the same zone as the virtual machine should be able to deliver a latency of less than 1 millisecond for smaller writes. The advantage of this method is that you don't need to go through a manual pinning process, as is the case today, and that you're flexible in changing VM sizes and families within all the VM types and families offered in the Availability Zone you deployed in, so that you can react flexibly to changing conditions or move faster to more cost-efficient VM sizes or families.
+> [!IMPORTANT]
+> Even for non-DBMS usage, you should use the preview functionality that allows you to create the NFS share in the same Azure Availability Zone as the VM(s) that mount the NFS shares. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation for this type of Availability Zone alignment is to reduce the risk surface that comes with having the NFS shares in yet another Availability Zone where you don't run VMs.
++ - You go for the closest proximity between VM and NFS share that can be arranged by using [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Groups, besides allocating the best proximity and with that creating the lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. The disadvantage of this method is that you need to go through a pinning process again, a process that ends up restricting your VM deployment to a single datacenter instead of an Availability Zone, as in the first method introduced. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted. - The current process of not using Application Volume Groups, which so far are available for SAP HANA only. This process also uses the same manual pinning process as Application Volume Groups do, and it's the method that has been used for the last three years. It has the same flexibility restrictions as the process with Application Volume Groups.
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
Previously updated : 02/24/2023 Last updated : 04/10/2023
The goal of a geo-distributed set of search services is to have two or more inde
![Cross-tab of services by region][1]
-You can implement this architecture by creating multiple services and designing a strategy for data synchronization. Optionally, you can include a resource like Azure Traffic Manager for routing requests. For more information, see [Create a search service](search-create-service-portal.md).
+You can implement this architecture by creating multiple services and designing a strategy for data synchronization. Optionally, you can include a resource like Azure Traffic Manager for routing requests.
+
+> [!TIP]
+> For help in deploying multiple search services across multiple regions, see this [Bicep sample on GitHub](https://github.com/Azure-Samples/azure-search-multiple-regions) that deploys a fully configured, multi-regional search solution. The sample gives you two options for index synchronization, and it uses Traffic Manager for request redirection.
<a name="data-sync"></a>
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 03/21/2023 Last updated : 04/10/2023
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
> [!NOTE] > Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place.
+## April 2023
+
+| Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
+|--||--|
+| [**Multi-region deployment of Azure Cognitive Search for business continuity and disaster recovery**](https://github.com/Azure-Samples/azure-search-multiple-regions) | Sample | Deployment scripts that fully configure a multi-regional solution for Azure Cognitive Search, with options for synchronizing content and request redirection if an endpoint fails.|
+ ## March 2023 | Item&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Type | Description |
service-bus-messaging Deprecate Service Bus Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/deprecate-service-bus-management.md
Title: Azure messaging services - Service Manager to Resource Manager description: This article provides mapping of deprecated Azure Service Manager REST API & PowerShell cmdlets to Resource Manager REST API & PowerShell cmdlets. Previously updated : 08/31/2021 Last updated : 04/10/2023 # Deprecation of Azure Service Manager support for Azure Service Bus, Relay, and Event Hubs Resource Manager, our next-generation cloud infrastructure stack, is fully replacing the "classic" Azure Service Management model (classic deployment model). As a result, classic deployment model REST APIs for Service Bus, Relay, and Event Hubs will be retired on December 1, 2021. This deprecation was first announced on a [Microsoft Tech Community announcement](https://techcommunity.microsoft.com/t5/Service-Bus-blog/Deprecating-Service-Management-support-for-Azure-Service-Bus/ba-p/370909). For easy identification, these APIs have `management.core.windows.net` in their URI. Refer to the following table for a list of the deprecated APIs and their Azure Resource Manager API version that you should now use.
-To continue using Service Bus, Relay, and Event Hubs, move to Resource Manager by November 30, 2021. We encourage all customers who are still using old APIs to make the switch soon to take advantage of the additional benefits of Resource Manager, which include resource grouping, tags, a streamlined deployment and management process, and fine-grained access control using Azure role-based access control (Azure RBAC).
+To continue using Service Bus, Relay, and Event Hubs, move to Resource Manager by November 30, 2021. We encourage all customers who are still using old APIs to make the switch soon to take advantage of the extra benefits of Resource Manager, which include resource grouping, tags, a streamlined deployment and management process, and fine-grained access control using Azure role-based access control (Azure RBAC).
For more information on Service Manager and Resource Manager APIs for Azure Service Bus, Relay and Event Hubs, see our REST API documentation:
For more information on Service Manager and Resource Manager APIs for Azure Serv
## Service Manager REST API - Resource Manager REST API
-| Service Manager APIs (Deprecated) | Resource Manager - Service Bus API | Resource Manager - Event Hub API | Resource Manager - Relay API |
+| Service Manager APIs (Deprecated) | Resource Manager - Service Bus API | Resource Manager - Event Hubs API | Resource Manager - Relay API |
| | -- | -- | -- |
-| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hub Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) |
+| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) |
| **ConnectionDetails-GetConnectionDetails**<br/>Service Bus/Event Hubs/Relay GetConnectionDetails<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/ConnectionDetails``` | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) | [listkeys](/rest/api/eventhub/stable/authorization-rules-event-hubs/list-keys) | [listkeys](/rest/api/relay/namespaces/listkeys) | | **Topics-GetTopicsAsync**<br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics? $skip={skip}&$top={top}``` | [list](/rest/api/servicebus/stable/topics/listbynamespace) | &nbsp; | &nbsp; | | **Queues-GetQueueAsync** <br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/queues/{queueName}``` | [get](/rest/api/servicebus/stable/queues/get) | &nbsp; | &nbsp; |
Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subs
## Service Manager PowerShell - Resource Manager PowerShell | Service Manager PowerShell command (Deprecated) | New Resource Manager Commands | Newer Resource Manager Command | | -- | -- | -- |
-| [Get-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/get-azuresbauthorizationrule) | [Get-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/get-azurermservicebusauthorizationrule) | [Get-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/get-azservicebusauthorizationrule) |
-| [Get-AzureSBLocation](/powershell/module/servicemanagement/azure.service/get-azuresblocation) | [Get-AzureRmServiceBusGeoDRConfiguration](/powershell/module/azurerm.servicebus/get-azurermservicebusgeodrconfiguration) | [Get-AzServiceBusGeoDRConfiguration](/powershell/module/az.servicebus/get-azservicebusgeodrconfiguration) |
-| [Get-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/get-azuresbnamespace) | [Get-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/get-azurermservicebusnamespace) | [Get-AzServiceBusNamespace](/powershell/module/az.servicebus/get-azservicebusnamespace) |
-| [New-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/new-azuresbauthorizationrule) | [New-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/new-azurermservicebusauthorizationrule) | [New-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/new-azservicebusauthorizationrule) |
-| [New-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/new-azuresbnamespace) | [New-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/new-azurermservicebusnamespace) | [New-AzServiceBusNamespace](/powershell/module/az.servicebus/new-azservicebusnamespace) |
-| [Remove-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/remove-azuresbauthorizationrule) | [Remove-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/remove-azurermservicebusauthorizationrule) | [Remove-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/remove-azservicebusauthorizationrule) |
-| [Remove-AzureSBNamespace](/powershell/module/servicemanagement/azure.service/remove-azuresbnamespace) | [Remove-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/remove-azurermservicebusnamespace) | [Remove-AzServiceBusNamespace](/powershell/module/az.servicebus/remove-azservicebusnamespace) |
-| [Set-AzureSBAuthorizationRule](/powershell/module/servicemanagement/azure.service/set-azuresbauthorizationrule) | [Set-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/set-azurermservicebusauthorizationrule) | [Set-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/set-azservicebusauthorizationrule) |
+| Get-AzureSBAuthorizationRule | [Get-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/get-azurermservicebusauthorizationrule) | [Get-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/get-azservicebusauthorizationrule) |
+| Get-AzureSBLocation | [Get-AzureRmServiceBusGeoDRConfiguration](/powershell/module/azurerm.servicebus/get-azurermservicebusgeodrconfiguration) | [Get-AzServiceBusGeoDRConfiguration](/powershell/module/az.servicebus/get-azservicebusgeodrconfiguration) |
+| Get-AzureSBNamespace | [Get-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/get-azurermservicebusnamespace) | [Get-AzServiceBusNamespace](/powershell/module/az.servicebus/get-azservicebusnamespace) |
+| New-AzureSBAuthorizationRule | [New-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/new-azurermservicebusauthorizationrule) | [New-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/new-azservicebusauthorizationrule) |
+| New-AzureSBNamespace | [New-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/new-azurermservicebusnamespace) | [New-AzServiceBusNamespace](/powershell/module/az.servicebus/new-azservicebusnamespace) |
+| Remove-AzureSBAuthorizationRule | [Remove-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/remove-azurermservicebusauthorizationrule) | [Remove-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/remove-azservicebusauthorizationrule) |
+| Remove-AzureSBNamespace | [Remove-AzureRmServiceBusNamespace](/powershell/module/azurerm.servicebus/remove-azurermservicebusnamespace) | [Remove-AzServiceBusNamespace](/powershell/module/az.servicebus/remove-azservicebusnamespace) |
+| Set-AzureSBAuthorizationRule | [Set-AzureRmServiceBusAuthorizationRule](/powershell/module/azurerm.servicebus/set-azurermservicebusauthorizationrule) | [Set-AzServiceBusAuthorizationRule](/powershell/module/az.servicebus/set-azservicebusauthorizationrule) |
## Next steps See the following documentation:
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
The following two types of errors are classified as **user errors**:
> [!IMPORTANT] > Values for messages, active, dead-lettered, scheduled, completed, and abandoned messages are point-in-time values. Incoming messages that were consumed immediately after that point-in-time may not be reflected in these metrics.
+> [!NOTE]
+> When a client tries to get information about a queue or topic, the Service Bus service returns static information, such as the name, last updated time, created time, and whether sessions are required, along with dynamic information, such as message counts. If the request gets throttled, the service returns the static information and empty dynamic information. That's why message counts are shown as 0 when the namespace is being throttled. This behavior is by design.
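As an illustration, a point-in-time reading of these counts can be taken with the Azure CLI; under throttling, the dynamic `countDetails` portion of the response may come back as zeros. The resource names below are placeholders:

```azurecli
# Show point-in-time message counts for a queue
az servicebus queue show \
    --resource-group <resource-group> \
    --namespace-name <namespace> \
    --name <queue> \
    --query countDetails
```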
+ ### Connection metrics | Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
Azure Service Bus supports the following dimensions for metrics in Azure Monitor
|Dimension name|Description| | - | -- |
-|Entity Name| Service Bus supports messaging entities under the namespace. With the 'Incoming Requests' metric, the Entity Name dimension will see a value of '-NamespaceOnlyMetric-' in addition to all your queues and topics. This represents request which were made at the namespace level. Examples include a request to list all queues/topics under the namespace or requests to entities which failed authentication or authorization.|
+|Entity Name| Service Bus supports messaging entities under the namespace. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your queues and topics. This value represents requests made at the namespace level. Examples include a request to list all queues/topics under the namespace or requests to entities that failed authentication or authorization.|
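As a sketch, you can split a metric by this dimension with the Azure CLI; '-NamespaceOnlyMetric-' then appears as one of the Entity Name values. The resource ID is a placeholder, and `IncomingRequests` is assumed to be the metric's ID form of 'Incoming Requests':

```azurecli
# Split Incoming Requests by entity, including the namespace-level bucket
az monitor metrics list \
    --resource <service-bus-namespace-resource-id> \
    --metric IncomingRequests \
    --aggregation Total \
    --dimension EntityName
```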
## Resource logs This section lists the types of resource logs you can collect for Azure Service Bus.
service-bus-messaging Service Bus End To End Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-end-to-end-tracing.md
In the presence of multiple `DiagnosticSource` listeners for the same source, it's e
## Next steps
-* [Application Insights Correlation](../azure-monitor/app/correlation.md)
+* [Application Insights Correlation](../azure-monitor/app/distributed-tracing-telemetry-correlation.md)
* [Application Insights Monitor Dependencies](../azure-monitor/app/asp-net-dependencies.md) to see if REST, SQL, or other external resources are slowing you down. * [Track custom operations with Application Insights .NET SDK](../azure-monitor/app/custom-operations-tracking.md)
service-fabric Service Fabric Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-security.md
To [set up an encryption certificate and encrypt secrets on Linux clusters](./se
Generate a self-signed certificate for encrypting your secrets: ```bash
-user@linux:~$ openssl req -newkey rsa:2048 -nodes -keyout TestCert.prv -x509 -days 365 -out TestCert.pem
-user@linux:~$ cat TestCert.prv >> TestCert.pem
+openssl req -newkey rsa:2048 -nodes -keyout TestCert.prv -x509 -days 365 -out TestCert.pem
+cat TestCert.prv >> TestCert.pem
``` Use the instructions in [Deploy Key Vault certificates to Service Fabric cluster virtual machine scale sets](#deploy-key-vault-certificates-to-service-fabric-cluster-virtual-machine-scale-sets) to deploy the certificate to your Service Fabric cluster's virtual machine scale sets.
Use the instructions in [Deploy Key Vault certificates to Service Fabric cluster
Encrypt your secret using the following commands, and then update your Service Fabric Application Manifest with the encrypted value: ```bash
-user@linux:$ echo "Hello World!" > plaintext.txt
-user@linux:$ iconv -f ASCII -t UTF-16LE plaintext.txt -o plaintext_UTF-16.txt
-user@linux:$ openssl smime -encrypt -in plaintext_UTF-16.txt -binary -outform der TestCert.pem | base64 > encrypted.txt
+echo "Hello World!" > plaintext.txt
+iconv -f ASCII -t UTF-16LE plaintext.txt -o plaintext_UTF-16.txt
+openssl smime -encrypt -in plaintext_UTF-16.txt -binary -outform der TestCert.pem | base64 > encrypted.txt
``` After encrypting your protected values, [specify encrypted secrets in Service Fabric Application](./service-fabric-application-secret-management.md#specify-encrypted-secrets-in-an-application), and [decrypt encrypted secrets from service code](./service-fabric-application-secret-management.md#decrypt-encrypted-secrets-from-service-code).
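Before wiring the encrypted value into the application manifest, you can optionally sanity-check the round trip locally. This sketch assumes the files produced by the preceding steps and the private key *TestCert.prv*:

```bash
# Decode and decrypt locally to confirm the ciphertext round-trips
base64 -d encrypted.txt > encrypted.der
openssl smime -decrypt -in encrypted.der -inform DER -inkey TestCert.prv | iconv -f UTF-16LE -t ASCII
```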
Before your Service Fabric application can make use of a managed identity, permi
The following commands grant access to an Azure Resource: ```bash
-principalid=$(az resource show --id /subscriptions/<YOUR SUBSCRIPTON>/resourceGroups/<YOUR RG>/providers/Microsoft.Compute/virtualMachineScaleSets/<YOUR SCALE SET> --api-version 2018-06-01 | python -c "import sys, json; print(json.load(sys.stdin)['identity']['principalId'])")
+PRINCIPAL_ID=$(az resource show --id /subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/Microsoft.Compute/virtualMachineScaleSets/<YOUR SCALE SET> --api-version 2018-06-01 | python -c "import sys, json; print(json.load(sys.stdin)['identity']['principalId'])")
-az role assignment create --assignee $principalid --role 'Contributor' --scope "/subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/<PROVIDER NAME>/<RESOURCE TYPE>/<RESOURCE NAME>"
+az role assignment create --assignee $PRINCIPAL_ID --role 'Contributor' --scope "/subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/<PROVIDER NAME>/<RESOURCE TYPE>/<RESOURCE NAME>"
``` In your Service Fabric application code, [obtain an access token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http) for Azure Resource Manager by making a REST call similar to the following: ```bash
-access_token=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | python -c "import sys, json; print json.load(sys.stdin)['access_token']")
+ACCESS_TOKEN=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fmanagement.azure.com%2F' -H Metadata:true | python -c "import sys, json; print(json.load(sys.stdin)['access_token'])")
```
Your Service Fabric app can then use the access token to authenticate to Azure R
The following example shows how to do this for an Azure Cosmos DB resource: ```bash
-cosmos_db_password=$(curl 'https://management.azure.com/subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/Microsoft.DocumentDB/databaseAccounts/<YOUR ACCOUNT>/listKeys?api-version=2016-03-31' -X POST -d "" -H "Authorization: Bearer $access_token" | python -c "import sys, json; print(json.load(sys.stdin)['primaryMasterKey'])")
+COSMOS_DB_PASSWORD=$(curl 'https://management.azure.com/subscriptions/<YOUR SUBSCRIPTION>/resourceGroups/<YOUR RG>/providers/Microsoft.DocumentDB/databaseAccounts/<YOUR ACCOUNT>/listKeys?api-version=2016-03-31' -X POST -d "" -H "Authorization: Bearer $ACCESS_TOKEN" | python -c "import sys, json; print(json.load(sys.stdin)['primaryMasterKey'])")
``` ## Windows security baselines [We recommend that you implement an industry-standard configuration that is broadly known and well-tested, such as Microsoft security baselines, as opposed to creating a baseline yourself](/windows/security/threat-protection/windows-security-baselines); an option for provisioning these on your Virtual Machine Scale Sets is to use the Azure Desired State Configuration (DSC) extension handler to configure the VMs as they come online, so that they're running the production software.
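As a sketch of that option, the DSC extension can be attached to a scale set with the Azure CLI. The publisher and extension name are the standard DSC extension values; the settings payload (package URL, script, and configuration function) is an assumed shape with placeholders you'd fill in from your own baseline package:

```azurecli
# Attach the DSC extension handler to a virtual machine scale set
az vmss extension set \
    --resource-group <resource-group> \
    --vmss-name <scale-set-name> \
    --name DSC \
    --publisher Microsoft.Powershell \
    --settings '{"wmfVersion": "latest", "configuration": {"url": "<package-url>.zip", "script": "<Baseline.ps1>", "function": "<BaselineConfig>"}}'
```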
spring-apps Concept Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/concept-metrics.md
You can also use the **Apply splitting** option, which will draw multiple lines
## User metrics options > [!NOTE]
-> For spring boot applications, you need to [add spring-boot-starter-actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) to see metrics from spring boot actuator.
+> For Spring Boot applications, to see metrics from Spring Boot Actuator, add the `spring-boot-starter-actuator` dependency. For more information, see the [Add actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) section of [Manage and monitor app with Spring Boot Actuator](concept-manage-monitor-app-spring-boot-actuator.md).
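As a quick local check, once the actuator is on the classpath (and assuming the metrics endpoint is exposed over the web with `management.endpoints.web.exposure.include=metrics`, since Spring Boot exposes only health by default), the actuator serves metric names and values:

```bash
# List the available metric names, then read one of them
curl http://localhost:8080/actuator/metrics
curl http://localhost:8080/actuator/metrics/jvm.memory.used
```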
The following tables show the available metrics and details.
spring-apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/faq.md
zone_pivot_groups: programming-languages-spring-apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article answers frequently asked questions about Azure Spring Apps.
Security and privacy are among the top priorities for Azure and Azure Spring App
Each service instance in Azure Spring Apps is backed by Azure Kubernetes Service with multiple worker nodes. Azure Spring Apps manages the underlying Kubernetes cluster for you, including high availability, scalability, Kubernetes version upgrade, and so on.
-Azure Spring Apps intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Apps distributes applications with 2 or more instances on different nodes.
+Azure Spring Apps intelligently schedules your applications on the underlying Kubernetes worker nodes. To provide high availability, Azure Spring Apps distributes applications with two or more instances on different nodes.
### In which regions is Azure Spring Apps Basic/Standard tier available?
-East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2 (Mooncake), China North 2 (Mooncake), and China North 3 (Mooncake). [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
+East US, East US 2, Central US, South Central US, North Central US, West US, West US 2, West US 3, West Europe, North Europe, UK South, Southeast Asia, Australia East, Canada Central, Canada East, UAE North, Central India, Korea Central, East Asia, Japan East, South Africa North, Brazil South, France Central, Germany West Central, Switzerland North, China East 2, China North 2, and China North 3. [Learn More](https://azure.microsoft.com/global-infrastructure/services/?products=spring-cloud)
### In which regions is Azure Spring Apps Enterprise tier available?
Azure Spring Apps is a regional service. All customer data in Azure Spring Apps
Azure Spring Apps has the following known limitations:
-* `spring.application.name` will be overridden by the application name that's used to create each application.
-* `server.port` defaults to port 1025. If any other value is applied, it will be overridden. Please also respect this setting and not specify server port in your code.
-* The Azure portal, Azure Resource Manager templates, and Terraform do not support uploading application packages. You can upload application packages by deploying the application using the Azure CLI, Azure DevOps, Maven Plugin for Azure Spring Apps, Azure Toolkit for IntelliJ, and the Visual Studio Code extension for Azure Spring Apps.
+* `spring.application.name` is overridden by the application name that's used to create each application.
+* `server.port` defaults to port 1025. If any other value is applied, it's overridden, so don't specify a server port in your code.
+* The Azure portal, Azure Resource Manager templates, and Terraform don't support uploading application packages. You can upload application packages by deploying the application using the Azure CLI, Azure DevOps, Maven Plugin for Azure Spring Apps, Azure Toolkit for IntelliJ, and the Visual Studio Code extension for Azure Spring Apps.
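For instance, a minimal Azure CLI deployment of a built artifact looks like the following sketch; the resource names and artifact path are placeholders:

```azurecli
# Deploy a packaged application with the Azure CLI
az spring app deploy \
    --resource-group <resource-group> \
    --service <service-instance> \
    --name <app-name> \
    --artifact-path target/<app>.jar
```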
### What pricing tiers are available?
Which one should I use and what are the limits within each tier?
### What's the difference between Service Binding and Service Connector?
-We are not actively developing additional capabilities for Service Binding in favor of the new Azure-wise solution named [Service Connector](../service-connector/overview.md). On the one hand, the new solution brings you consistent integration experience across App hosting services on Azure like App Service. On the other hand, it covers your needs better by starting with supporting 10+ most used target Azure services including MySQL, SQL DB, Azure Cosmos DB, Postgres DB, Redis, Storage and more. Service Connector is currently in Public Preview, we invite you to try out the new experience.
+We're not actively developing more capabilities for Service Binding. Instead, there's a new Azure-wide solution named [Service Connector](../service-connector/overview.md). On the one hand, the new solution brings you a consistent integration experience across app hosting services on Azure, like App Service. On the other hand, it covers your needs better, starting with support for more than 10 of the most used target Azure services, including MySQL, SQL DB, Azure Cosmos DB, Postgres DB, Redis, Storage, and more. Service Connector is currently in public preview. We invite you to try out the new experience.
### How can I provide feedback and report issues?
If you encounter any issues with Azure Spring Apps, create an [Azure Support Req
Enterprise tier has built-in VMware Spring Runtime Support, so you can open support tickets to [VMware](https://aka.ms/ascevsrsupport) if you think your issue is in the scope of VMware Spring Runtime Support. To better understand VMware Spring Runtime Support itself, see the [VMware Spring Runtime](https://tanzu.vmware.com/spring-runtime). To understand the details about how to register and use this support service, see the Support section in the [Enterprise tier FAQ from VMware](https://aka.ms/EnterpriseTierFAQ). For any other issues, open support tickets with Microsoft. > [!IMPORTANT]
-> After you create an Enterprise tier instance, your entitlement will be ready within ten business days. If you encounter any exceptions, raise a support ticket with Microsoft to get help with it.
+> After you create an Enterprise tier instance, your entitlement is ready within ten business days. If you encounter any exceptions, raise a support ticket with Microsoft to get help with it.
## Development
-### I am a Spring developer but new to Azure. What is the quickest way for me to learn how to develop an application in Azure Spring Apps?
+### I'm a Spring developer but new to Azure. What's the quickest way for me to learn how to develop an application in Azure Spring Apps?
For the quickest way to get started with Azure Spring Apps, follow the instructions in [Quickstart: Launch an application in Azure Spring Apps by using the Azure portal](./quickstart.md). ::: zone pivot="programming-language-java"+ ### Is Spring Boot 2.4.x supported?
-We've identified an issue with Spring Boot 2.4 and are currently working with the Spring community to resolve it. In the meantime, please include these two dependencies to enable TLS authentication between your apps and Eureka.
+
+We've identified an issue with Spring Boot 2.4 and are currently working with the Spring community to resolve it. In the meantime, include these two dependencies to enable TLS authentication between your apps and Eureka.
```xml <dependency>
We've identified an issue with Spring Boot 2.4 and are currently working with th
Find metrics in the App Overview tab and the [Azure Monitor](../azure-monitor/essentials/data-platform-metrics.md#metrics-explorer) tab.
-Azure Spring Apps supports exporting Spring application logs and metrics to Azure Storage, Event Hub, and [Log Analytics](../azure-monitor/logs/data-platform-logs.md). The table name in Log Analytics is *AppPlatformLogsforSpring*. To learn how to enable it, see [Diagnostic services](diagnostic-services.md).
+Azure Spring Apps supports exporting Spring application logs and metrics to Azure Storage, Event Hubs, and [Log Analytics](../azure-monitor/logs/data-platform-logs.md). The table name in Log Analytics is *AppPlatformLogsforSpring*. To learn how to enable it, see [Diagnostic services](diagnostic-services.md).
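Once diagnostics are enabled, you can query that table from the command line; the workspace GUID is a placeholder:

```azurecli
# Query the exported Spring application logs in Log Analytics
az monitor log-analytics query \
    --workspace <workspace-guid> \
    --analytics-query "AppPlatformLogsforSpring | take 10"
```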
### Does Azure Spring Apps support distributed tracing?
-Yes. For more information, see [Tutorial: Use Distributed Tracing with Azure Spring Apps](./how-to-distributed-tracing.md).
+Yes. For more information, see [Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md).
::: zone pivot="programming-language-java"+ ### What resource types does Service Binding support? Three services are currently supported:
The number of outbound public IP addresses may vary according to the tiers and o
Yes, you can open a [support ticket](https://azure.microsoft.com/support/faq/) to request more outbound public IP addresses.
-### When I delete/move an Azure Spring Apps service instance, will its extension resources be deleted/moved as well?
+### When I delete/move an Azure Spring Apps service instance, are its extension resources deleted/moved as well?
-It depends on the logic of resource providers that own the extension resources. The extension resources of a `Microsoft.AppPlatform` instance do not belong to the same namespace, so the behavior varies by resource provider. For example, the delete/move operation won't cascade to the **diagnostics settings** resources. If a new Azure Spring Apps instance is provisioned with the same resource ID as the deleted one, or if the previous Azure Spring Apps instance is moved back, the previous **diagnostics settings** resources continue extending it.
+It depends on the logic of resource providers that own the extension resources. The extension resources of a `Microsoft.AppPlatform` instance don't belong to the same namespace, so the behavior varies by resource provider. For example, the delete/move operation won't cascade to the **diagnostics settings** resources. If a new Azure Spring Apps instance is provisioned with the same resource ID as the deleted one, or if the previous Azure Spring Apps instance is moved back, the previous **diagnostics settings** resources continue extending it.
You can delete the Azure Spring Apps diagnostic settings by using Azure CLI:
You can delete the Azure Spring Apps diagnostic settings by using Azure CLI:
``` ::: zone pivot="programming-language-java"+ ## Java runtime and OS versions ### Which versions of Java runtime are supported in Azure Spring Apps? Azure Spring Apps supports Java LTS versions with the most recent builds. Currently, Java 8, Java 11, and Java 17 are supported.
-### For how long will Java 8, Java 11, and Java 17 LTS versions be supported?
+### How long are Java 8, Java 11, and Java 17 LTS versions supported?
See [Java long-term support for Azure and Azure Stack](/azure/developer/java/fundamentals/java-support-on-azure). ### What is the retirement policy for older Java runtimes?
-Public notice will be sent out at 12 months before any old runtime version is retired. You will have 12 months to migrate to a later version.
+Public notice is sent out 12 months before any old runtime version is retired. You have 12 months to migrate to a later version.
-* Subscription admins will get email notification when we will retire a Java version.
-* The retire information will be published in the documentation.
+* Subscription admins get email notification when we retire a Java version.
+* The retirement information is published in the documentation.
### How can I get support for issues at the Java runtime level?
Azure Spring Apps continuously probes port 1025 for customers' applications. The
>[!NOTE] > Because of these probes, you currently can't launch applications in Azure Spring Apps without exposing port 1025.
-### Whether and when will my application be restarted?
+### Is my application restarted, and if so, when?
Yes. For more information, see [Monitor app lifecycle events using Azure Activity log and Azure Service Health](./monitor-app-lifecycle-events.md).
For more information, see [Migrate Spring applications to Azure Spring Apps](/az
::: zone-end ::: zone pivot="programming-language-csharp"+ ## .NET Core versions ### Which .NET Core versions are supported? .NET Core 3.1 and later versions.
-### How long will .NET Core 3.1 be supported?
+### How long is .NET Core 3.1 supported?
-Until Dec 3, 2022. See [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
+Until December 13, 2022. See [.NET Core Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
::: zone-end ## Troubleshooting ### What are the impacts of the service registry rarely being unavailable?
-In some rarely happened scenario, you may see some errors like the following one from your application logs:
+In some rare scenarios, you may see errors like the following from your application logs:
```output RetryableEurekaHttpClient: Request execution failure with status code 401; retrying on another server if available ```
-This issue is introduced by the Spring framework with very low rate due to network instability or other network issues.
-
-There should be no impacts to user experience, eureka client has both heartbeat and retry policy to take care of this. You could consider it as one transient error and skip it safely.
-
-We will enhance this part and avoid this error from usersΓÇÖ applications in short future.
+The Spring framework raises this issue at a low rate due to network instability or other network issues. There should be no impact to the user experience. The Eureka client has both a heartbeat and a retry policy to take care of this problem. You can consider it a transient error and skip it safely.
## Next steps
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
zone_pivot_groups: spring-apps-tier-selection
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard tier ❌️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ❌️ Enterprise
This article explains how to monitor applications by using the Application Insights Java agent in Azure Spring Apps.
Application Insights can provide many observable perspectives, including:
When the **Application Insights** feature is enabled, you can:
-* In the left navigation pane, select **Application Insights** to view the **Overview** page of Application Insights. The **Overview** page shows you an overview of all running applications.
+* In the navigation pane, select **Application Insights** to view the **Overview** page of Application Insights. The **Overview** page shows you an overview of all running applications.
* Select **Application Map** to see the status of calls between applications. :::image type="content" source="media/how-to-application-insights/insights-process-agent-map.png" alt-text="Screenshot of Azure portal Application Insights with Application map page showing." lightbox="media/how-to-application-insights/insights-process-agent-map.png":::
When the **Application Insights** feature is enabled, you can:
* Select the link between customers-service and `petclinic` to see more details such as a query from SQL. * Select an endpoint to see all the applications making requests to the endpoint.
-* In the left navigation pane, select **Performance** to see the performance data of all applications' operations, dependencies, and roles.
+* In the navigation pane, select **Performance** to see the performance data of all applications' operations, dependencies, and roles.
:::image type="content" source="media/how-to-application-insights/insights-process-agent-performance.png" alt-text="Screenshot of Azure portal Application Insights with Performance page showing." lightbox="media/how-to-application-insights/insights-process-agent-performance.png":::
-* In the left navigation pane, select **Failures** to see any unexpected failures or exceptions from your applications.
+* In the navigation pane, select **Failures** to see any unexpected failures or exceptions from your applications.
:::image type="content" source="media/how-to-application-insights/insights-process-agent-failures.png" alt-text="Screenshot of Azure portal Application Insights with Failures page showing." lightbox="media/how-to-application-insights/insights-process-agent-failures.png":::
-* In the left navigation pane, select **Metrics** and select the namespace to see both Spring Boot metrics and custom metrics, if any.
+* In the navigation pane, select **Metrics** and select the namespace to see both Spring Boot metrics and custom metrics, if any.
:::image type="content" source="media/how-to-application-insights/insights-process-agent-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Metrics page showing." lightbox="media/how-to-application-insights/insights-process-agent-metrics.png":::
-* In the left navigation pane, select **Live Metrics** to see the real-time metrics for different dimensions.
+* In the navigation pane, select **Live Metrics** to see the real-time metrics for different dimensions.
:::image type="content" source="media/how-to-application-insights/petclinic-microservices-live-metrics.png" alt-text="Screenshot of Azure portal Application Insights with Live Metrics page showing." lightbox="media/how-to-application-insights/petclinic-microservices-live-metrics.png":::
-* In the left navigation pane, select **Availability** to monitor the availability and responsiveness of Web apps by creating [Availability tests in Application Insights](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability).
+* In the navigation pane, select **Availability** to monitor the availability and responsiveness of Web apps by creating [Availability tests in Application Insights](/previous-versions/azure/azure-monitor/app/monitor-web-app-availability).
:::image type="content" source="media/how-to-application-insights/petclinic-microservices-availability.png" alt-text="Screenshot of Azure portal Application Insights with Availability page showing." lightbox="media/how-to-application-insights/petclinic-microservices-availability.png":::
-* In the left navigation pane, select **Logs** to view all applications' logs, or one application's logs when filtering by `cloud_RoleName`.
+* In the navigation pane, select **Logs** to view all applications' logs, or one application's logs when filtering by `cloud_RoleName`.
:::image type="content" source="media/how-to-application-insights/application-insights-application-logs.png" alt-text="Screenshot of Azure portal Application Insights with Logs page showing." lightbox="media/how-to-application-insights/application-insights-application-logs.png":::
Enable the Java In-Process Agent by using the following procedure.
1. Select **Save** to save the change. > [!NOTE]
-> Don't use the same Application Insights instance in different Azure Spring Apps instances, or you'll see mixed data.
+> Don't use the same Application Insights instance in different Azure Spring Apps instances, or you see mixed data.
::: zone-end
resource "azurerm_spring_cloud_service" "example" {
::: zone pivot="sc-enterprise"
-Automation in Enterprise tier is pending support. Documentation will be added as soon as it's available.
+Automation in the Enterprise tier is pending support. Documentation will be added as soon as it's available.
::: zone-end
When data is stored in Application Insights, it contains the history of Azure Sp
## Next steps
-* [Use distributed tracing with Azure Spring Apps](./how-to-distributed-tracing.md)
+* [Use Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md)
* [Analyze logs and metrics](diagnostic-services.md) * [Stream logs in real time](./how-to-log-streaming.md) * [Application Map](../azure-monitor/app/app-map.md)
spring-apps How To Built In Persistent Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-built-in-persistent-storage.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ❌ Enterprise
Azure Spring Apps provides two types of built-in storage for your application: persistent and temporary.
By default, Azure Spring Apps provides temporary storage for each application in
> [!WARNING] > If you restart an application instance, the associated temporary storage is permanently deleted.
-Persistent storage is a file-share container managed by Azure and allocated per application. Data stored in persistent storage is shared by all instances of an application. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is */persistent*.
-
-> [!WARNING]
-> If you disable an applications's persistent storage, all of that storage is deallocated and all of the stored data is lost.
+Persistent storage is a file-share container managed by Azure and allocated per application. All instances of an application share data stored in persistent storage. An Azure Spring Apps instance can have a maximum of 10 applications with persistent storage enabled. Each application is allocated 50 GB of persistent storage. The default mount path for persistent storage is */persistent*.
## Enable or disable built-in persistent storage
Other operations:
- To create an app with built-in persistent storage enabled:
- ```azurecli
- az spring app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
- ```
+ ```azurecli
+ az spring app create -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
+ ```
- To enable built-in persistent storage for an existing app:
- ```azurecli
- az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
- ```
+ ```azurecli
+ az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage true
+ ```
- To disable built-in persistent storage in an existing app:
- ```azurecli
- az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false
- ```
+ ```azurecli
+ az spring app update -n <app> -g <resource-group> -s <service-name> --enable-persistent-storage false
+ ```
> [!WARNING]
-> If you disable an applications's persistent storage, all of that storage is deallocated and all of the stored data is permanently lost.
+> If you disable an application's persistent storage, all of that storage is deallocated and all of the stored data is permanently lost.
## Next steps
spring-apps How To Circuit Breaker Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-circuit-breaker-metrics.md
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article shows you how to collect Spring Cloud Resilience4j Circuit Breaker metrics with the Application Insights Java in-process agent. With this feature, you can monitor the metrics of the Resilience4j circuit breaker from Application Insights with Micrometer.
Use the following steps to build and deploy the sample applications.
## Next steps * [Application insights](./how-to-application-insights.md)
-* [Distributed tracing](./how-to-distributed-tracing.md)
* [Circuit breaker dashboard](./tutorial-circuit-breaker.md)
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md
This table shows the maximum number of app instances Azure Spring Apps supports
For subnets, five IP addresses are reserved by Azure, and at least three IP addresses are required by Azure Spring Apps. At least eight IP addresses are required, so /29 and /30 are nonoperational.
-For a service runtime subnet, the minimum size is /28. This size has no bearing on the number of app instances.
+For a service runtime subnet, the minimum size is /28.
## Bring your own route table
spring-apps How To Distributed Tracing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-distributed-tracing.md
- Title: "Use Distributed Tracing with Azure Spring Apps"
-description: Learn how to use Azure Spring Apps distributed tracing through Azure Application Insights
--- Previously updated : 10/06/2019--
-zone_pivot_groups: programming-languages-spring-apps
--
-# Use distributed tracing with Azure Spring Apps (deprecated)
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-> [!NOTE]
-> Distributed Tracing is deprecated. For more information, see [Application Insights Java In-Process Agent in Azure Spring Apps](./how-to-application-insights.md).
-
-With the distributed tracing tools in Azure Spring Apps, you can easily debug and monitor complex issues. Azure Spring Apps integrates [Spring Cloud Sleuth](https://spring.io/projects/spring-cloud-sleuth) with Azure's [Application Insights](../azure-monitor/app/app-insights-overview.md). This integration provides powerful distributed tracing capability from the Azure portal.
-
-In this article, you learn how to enable a .NET Core Steeltoe app to use distributed tracing.
-
-## Prerequisites
-
-To follow these procedures, you need a Steeltoe app that is already [prepared for deployment to Azure Spring Apps](how-to-prepare-app-deployment.md).
-
-## Dependencies
-
-For Steeltoe 2.4.4, add the following NuGet packages:
-
-* [Steeltoe.Management.TracingCore](https://www.nuget.org/packages/Steeltoe.Management.TracingCore/)
-* [Steeltoe.Management.ExporterCore](https://www.nuget.org/packages/Microsoft.Azure.SpringCloud.Client/)
-
-For Steeltoe 3.0.0, add the following NuGet package:
-
-* [Steeltoe.Management.TracingCore](https://www.nuget.org/packages/Steeltoe.Management.TracingCore/)
-
-## Update Startup.cs
-
-1. For Steeltoe 2.4.4, call `AddDistributedTracing` and `AddZipkinExporter` in the `ConfigureServices` method.
-
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddDistributedTracing(Configuration);
- services.AddZipkinExporter(Configuration);
- }
- ```
-
- For Steeltoe 3.0.0, call `AddDistributedTracing` in the `ConfigureServices` method.
-
- ```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddDistributedTracing(Configuration, builder => builder.UseZipkinWithTraceOptions(services));
- }
- ```
-
-1. For Steeltoe 2.4.4, call `UseTracingExporter` in the `Configure` method.
-
- ```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- app.UseEndpoints(endpoints =>
- {
- endpoints.MapControllers();
- });
- app.UseTracingExporter();
- }
- ```
-
- For Steeltoe 3.0.0, no changes are required in the `Configure` method.
-
-## Update configuration
-
-Add the following settings to the configuration source that will be used when the app runs in Azure Spring Apps:
-
-1. Set `management.tracing.alwaysSample` to true.
-
-2. If you want to see tracing spans sent between the Eureka server, the Configuration server, and user apps: set `management.tracing.egressIgnorePattern` to "/api/v2/spans|/v2/apps/.*/permissions|/eureka/.*|/oauth/.*".
-
-For example, *appsettings.json* would include the following properties:
-
-```json
-"management": {
- "tracing": {
- "alwaysSample": true,
- "egressIgnorePattern": "/api/v2/spans|/v2/apps/.*/permissions|/eureka/.*|/oauth/.*"
- }
- }
-```
-
-For more information about distributed tracing in .NET Core Steeltoe apps, see [Distributed tracing](https://docs.steeltoe.io/api/v3/tracing/) in the Steeltoe documentation.
-In this article, you learn how to:
-
-> [!div class="checklist"]
-> * Enable distributed tracing in the Azure portal.
-> * Add Spring Cloud Sleuth to your application.
-> * View dependency maps for your Spring applications.
-> * Search tracing data with different filters.
-
-## Prerequisites
-
-To follow these procedures, you need an Azure Spring Apps service that is already provisioned and running. Complete the [Deploy your first Spring Boot app in Azure Spring Apps](./quickstart.md) quickstart to provision and run an Azure Spring Apps service.
-
-## Add dependencies
-
-1. Add the following line to the application.properties file:
-
- ```xml
- spring.zipkin.sender.type = web
- ```
-
- After this change, the Zipkin sender can send to the web.
-
-1. Skip this step if you followed our [guide to preparing an application in Azure Spring Apps](how-to-prepare-app-deployment.md). Otherwise, go to your local development environment and edit your pom.xml file to include the following Spring Cloud Sleuth dependency:
-
- * Spring boot version < 2.4.x.
-
- ```xml
- <dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-sleuth</artifactId>
- <version>${spring-cloud-sleuth.version}</version>
- <type>pom</type>
- <scope>import</scope>
- </dependency>
- </dependencies>
- </dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-sleuth</artifactId>
- </dependency>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-zipkin</artifactId>
- </dependency>
- </dependencies>
- ```
-
- * Spring boot version >= 2.4.x.
-
- ```xml
- <dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-sleuth</artifactId>
- <version>${spring-cloud-sleuth.version}</version>
- <type>pom</type>
- <scope>import</scope>
- </dependency>
- </dependencies>
- </dependencyManagement>
- <dependencies>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-starter-sleuth</artifactId>
- </dependency>
- <dependency>
- <groupId>org.springframework.cloud</groupId>
- <artifactId>spring-cloud-sleuth-zipkin</artifactId>
- </dependency>
- </dependencies>
- ```
-
-1. Build and deploy again for your Azure Spring Apps service to reflect these changes.
-
-## Modify the sample rate
-
-You can change the rate at which your telemetry is collected by modifying the sample rate. For example, if you want to sample half as often, open your application.properties file, and change the following line:
-
-```xml
-spring.sleuth.sampler.probability=0.5
-```
-
-If you have already built and deployed an application, you can modify the sample rate. Do so by adding the previous line as an environment variable in the Azure CLI or the Azure portal.
-
-## Enable Application Insights
-
-1. Go to your Azure Spring Apps service page in the Azure portal.
-1. On the **Monitoring** page, select **Distributed Tracing**.
-1. Select **Edit setting** to edit or add a new setting.
-1. Create a new Application Insights query, or select an existing one.
-1. Choose which logging category you want to monitor, and specify the retention time in days.
-1. Select **Apply** to apply the new tracing.
-
-## View the application map
-
-Return to the **Distributed Tracing** page and select **View application map**. Review the visual representation of your application and monitoring settings. To learn how to use the application map, see [Application Map: Triage distributed applications](../azure-monitor/app/app-map.md).
-
-## Use search
-
-Use the search function to query for other specific telemetry items. On the **Distributed Tracing** page, select **Search**. For more information on how to use the search function, see [Using Search in Application Insights](../azure-monitor/app/diagnostic-search.md).
-
-## Use Application Insights
-
-Application Insights provides monitoring capabilities in addition to the application map and search function. Search the Azure portal for your application's name, and then open an Application Insights page to find monitoring information. For more guidance on how to use these tools, check out [Azure Monitor log queries](/azure/data-explorer/kusto/query/).
-
-## Disable Application Insights
-
-1. Go to your Azure Spring Apps service page in the Azure portal.
-1. On **Monitoring**, select **Distributed Tracing**.
-1. Select **Disable** to disable Application Insights.
-
-## Next steps
-
-In this article, you learned how to enable and understand distributed tracing in Azure Spring Apps. To learn about binding services to an application, see [Bind an Azure Cosmos DB database to an application in Azure Spring Apps](./how-to-bind-cosmos.md).
spring-apps How To Dynatrace One Agent Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-dynatrace-one-agent-monitor.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard tier ❌️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ❌️ Enterprise
This article shows you how to use Dynatrace OneAgent to monitor Spring Boot applications in Azure Spring Apps.
You can find **Backtrace** from **Databases/Details/Backtrace**:
## View Dynatrace OneAgent logs
-By default, Azure Spring Apps will print the *info* level logs of the Dynatrace OneAgent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
+By default, Azure Spring Apps prints the *info* level logs of the Dynatrace OneAgent to `STDOUT`. The logs are mixed with the application logs. You can find the explicit agent version from the application logs.
You can also get the logs of the Dynatrace agent from the following locations:
You can also get the logs of the Dynatrace agent from the following locations:
You can apply some environment variables provided by Dynatrace to configure logging for the Dynatrace OneAgent. For example, `DT_LOGLEVELCON` controls the level of logs. > [!CAUTION]
-> We strongly recommend that you do not override the default logging behavior provided by Azure Spring Apps for Dynatrace. If you do, the logging scenarios above will be blocked, and the log file(s) may be lost. For example, you should not output the `DT_LOGLEVELFILE` environment variable to your applications.
+> We strongly recommend that you don't override the default logging behavior provided by Azure Spring Apps for Dynatrace. If you do, the logging scenarios previously described are blocked, and the log file(s) may be lost. For example, you shouldn't output the `DT_LOGLEVELFILE` environment variable to your applications.
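Within those limits, you can set a supported variable such as `DT_LOGLEVELCON` as an app environment variable. The following sketch uses placeholder names; note that `az spring app update --env` replaces the app's existing environment variables, so include any others your app needs:

```azurecli
az spring app update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --env DT_LOGLEVELCON=info
```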
## Dynatrace OneAgent upgrade
-The Dynatrace OneAgent auto-upgrade is disabled and will be upgraded quarterly with the JDK. Agent upgrade may affect the following scenarios:
+Auto-upgrade is disabled for the Dynatrace OneAgent; instead, the agent is upgraded quarterly with the JDK. An agent upgrade may affect the following scenarios:
-* Existing applications using Dynatrace OneAgent before upgrade will be unchanged, but will require restart or redeploy to engage the new version of Dynatrace OneAgent.
-* Applications created after upgrade will use the new version of Dynatrace OneAgent.
+* Existing applications using Dynatrace OneAgent before upgrade are unchanged, but require restart or redeploy to engage the new version of Dynatrace OneAgent.
+* Applications created after upgrade use the new version of Dynatrace OneAgent.
## Virtual network injection instance outbound traffic configuration
For information about limitations when deploying Dynatrace OneAgent in applicati
## Next steps
-* [Use distributed tracing with Azure Spring Apps](how-to-distributed-tracing.md)
+* [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
spring-apps How To Enterprise Service Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-service-registry.md
This article shows you how to use VMware Tanzu® Service Registry with Azure Spring Apps Enterprise tier.
-The [Tanzu Service Registry](https://docs.vmware.com/en/Spring-Cloud-Services-for-VMware-Tanzu/2.1/spring-cloud-services/GUID-service-registry-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. This component helps you apply the *service discovery* design pattern to your applications.
+Tanzu Service Registry is one of the commercial VMware Tanzu components. This component helps you apply the *service discovery* design pattern to your applications.
Service discovery is one of the main ideas of the microservices architecture. Without service discovery, you'd have to hand-configure each client of a service or adopt some form of access convention. This process can be difficult, and the configurations and conventions can be brittle in production. Instead, you can use the Tanzu Service Registry to dynamically discover and invoke registered services in your application.
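To illustrate the pattern (this sketch isn't specific to Tanzu Service Registry's setup steps), a Spring Boot app can resolve registered instances by logical service name through Spring Cloud's `DiscoveryClient` instead of relying on hand-configured URLs; the service name `customers-service` is a placeholder:

```java
import java.util.List;

import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ServiceInstanceController {

    private final DiscoveryClient discoveryClient;

    public ServiceInstanceController(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    // Resolve the live instances registered under a logical service name,
    // rather than hand-configuring each client with a fixed URL.
    @GetMapping("/instances")
    public List<ServiceInstance> instances() {
        return discoveryClient.getInstances("customers-service"); // placeholder service name
    }
}
```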
spring-apps How To New Relic Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-new-relic-monitor.md
ms.devlang: azurecli
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Standard consumption (Preview) ✔️ Basic/Standard ✔️ Enterprise
This article shows you how to monitor Spring Boot applications in Azure Spring Apps with the New Relic Java agent.
Use the following procedure to access the agent:
--env NEW_RELIC_APP_NAME=appName NEW_RELIC_LICENSE_KEY=newRelicLicenseKey ```
-Azure Spring Apps pre-installs the New Relic Java agent to */opt/agents/newrelic/java/newrelic-agent.jar*. Customers can activate the agent from applications' **JVM options**, as well as configure the agent using the [New Relic Java agent environment variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+Azure Spring Apps preinstalls the New Relic Java agent to */opt/agents/newrelic/java/newrelic-agent.jar*. Customers can activate the agent from applications' **JVM options**, and configure the agent using the [New Relic Java agent environment variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
## Azure portal
To configure the environment variables in an ARM template, add the following cod
## View New Relic Java Agent logs
-By default, Azure Spring Apps will print the logs of the New Relic Java agent to `STDOUT`. The logs will be mixed with the application logs. You can find the explicit agent version from the application logs.
+By default, Azure Spring Apps prints the logs of the New Relic Java agent to `STDOUT`. The logs are mixed with the application logs. You can find the explicit agent version from the application logs.
You can also get the logs of the New Relic agent from the following locations:
You can also get the logs of the New Relic agent from the following locations:
* Azure Spring Apps Application Insights * Azure Spring Apps LogStream
-You can leverage some environment variables provided by New Relic to configure the logging of the New Agent, such as, `NEW_RELIC_LOG_LEVEL` to control the level of logs. For more information, see [New Relic Environment Variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+You can use some environment variables provided by New Relic to configure logging for the New Relic Java agent. For example, `NEW_RELIC_LOG_LEVEL` controls the level of logs. For more information, see [New Relic Environment Variables](https://docs.newrelic.com/docs/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
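For example, the following sketch raises the agent log level on an existing app; the names are placeholders, and `--env` replaces the app's existing environment variables, so include any others your app needs:

```azurecli
az spring app update \
    --resource-group <resource-group-name> \
    --service <Azure-Spring-Apps-instance-name> \
    --name <app-name> \
    --env NEW_RELIC_LOG_LEVEL=finer
```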
> [!CAUTION]
-> We strongly recommend that you *do not* override the logging default behavior provided by Azure Spring Apps for New Relic. If you do, the logging scenarios in above scenarios will be blocked, and the log file(s) may be lost. For example, you should not pass the following environment variables to your applications. Log file(s) may be lost after restart or redeployment of application(s).
+> We strongly recommend that you don't override the default logging behavior provided by Azure Spring Apps for New Relic. If you do, the logging scenarios previously described are blocked, and the log file(s) may be lost. For example, you shouldn't pass the following environment variables to your applications. Log file(s) may be lost after restart or redeployment of application(s).
> > * NEW_RELIC_LOG > * NEW_RELIC_LOG_FILE_PATH ## New Relic Java Agent update/upgrade
-The New Relic Java agent will update/upgrade the JDK regularly. The agent update/upgrade may impact following scenarios.
+The New Relic Java agent is updated or upgraded regularly along with the JDK. The agent update/upgrade may affect the following scenarios:
-* Existing applications that use the New Relic Java agent before update/upgrade will be unchanged.
+* Existing applications that use the New Relic Java agent before update/upgrade are unchanged.
* Existing applications that use the New Relic Java agent before update/upgrade require restart or redeploy to engage the new version of the New Relic Java agent.
-* New applications created after update/upgrade will use the new version of the New Relic Java agent.
+* New applications created after update/upgrade use the new version of the New Relic Java agent.
## Vnet injection instance outbound traffic configuration
For a vnet injection instance of Azure Spring Apps, you need to make sure the ou
## Next steps
-* [Distributed tracing and App Insights](how-to-distributed-tracing.md)
+* [Use Application Insights Java In-Process Agent in Azure Spring Apps](how-to-application-insights.md)
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-staging-environment.md
**This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
This article explains how to set up a staging deployment by using the blue-green deployment pattern in Azure Spring Apps. Blue-green deployment is an Azure DevOps continuous delivery pattern that relies on keeping an existing (blue) version live while a new (green) one is deployed. This article shows you how to put that staging deployment into production without changing the production deployment.
az extension add --name spring
To build the application, follow these steps:
-1. Generate the code for the sample app by using Spring Initializr with [this configuration](https://start.spring.io/#!type=maven-project&language=java&platformVersion=2.3.4.RELEASE&packaging=jar&jvmVersion=1.8&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-starter-sleuth,cloud-starter-zipkin,cloud-config-client).
+1. Generate the code for the sample app by using Spring Initializr with [this configuration](https://start.spring.io/#!type=maven-project&language=java&packaging=jar&groupId=com.example&artifactId=hellospring&name=hellospring&description=Demo%20project%20for%20Spring%20Boot&packageName=com.example.hellospring&dependencies=web,cloud-eureka,actuator,cloud-config-client).
1. Download the code. 1. Add the following *HelloController.java* source file to the folder *\src\main\java\com\example\hellospring\*:
To build the application, follow these steps:
1. Create the app in your Azure Spring Apps instance: ```azurecli
- az spring app create -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --assign-endpoint
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name demo \
+ --runtime-version Java_17 \
+ --assign-endpoint
``` 1. Deploy the app to Azure Spring Apps: ```azurecli
- az spring app deploy -n demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
+ az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name demo \
+ --artifact-path target\hellospring-0.0.1-SNAPSHOT.jar
``` 1. Modify the code for your staging deployment:
To build the application, follow these steps:
1. Create the green deployment: ```azurecli
- az spring app deployment create -n green --app demo -g <resourceGroup> -s <Azure Spring Apps instance> --jar-path target\hellospring-0.0.1-SNAPSHOT.jar
+ az spring app deployment create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --app demo \
+ --name green \
+ --runtime-version Java_17 \
+ --artifact-path target\hellospring-0.0.1-SNAPSHOT.jar
``` ## View apps and deployments
Use the following steps to view deployed apps.
1. Go to your Azure Spring Apps instance in the Azure portal.
-1. From the left pane, open the **Apps** pane to view apps for your service instance.
+1. From the navigation pane, open the **Apps** pane to view apps for your service instance.
:::image type="content" source="media/how-to-staging-environment/app-dashboard.png" lightbox="media/how-to-staging-environment/app-dashboard.png" alt-text="Screenshot of the Apps pane showing apps for your service instance.":::
If you visit your public-facing app gateway at this point, you should see the ol
If you're not satisfied with your change, you can modify your application code, build a new .jar package, and upload it to your green deployment by using the Azure CLI: ```azurecli
-az spring app deploy -g <resource-group-name> -s <service-instance-name> -n gateway -d green --jar-path gateway.jar
+az spring app deploy \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name gateway \
+ --deployment green \
+ --jar-path gateway.jar
``` ## Delete the staging deployment
To delete your staging deployment from the Azure portal, go to the page for your
Alternatively, delete your staging deployment from the Azure CLI by running the following command: ```azurecli
-az spring app deployment delete -n <staging-deployment-name> -g <resource-group-name> -s <service-instance-name> --app gateway
+az spring app deployment delete \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --name <staging-deployment-name> \
+ --app gateway
``` ## Next steps
spring-apps How To Use Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-application-live-view.md
This article shows you how to use Application Live View for VMware Tanzu® with Azure Spring Apps Enterprise tier.
-[Application Live View for VMware Tanzu](https://docs.vmware.com/en/Application-Live-View-for-VMware-Tanzu/1.2/docs/GUID-https://docsupdatetracker.net/index.html) is a lightweight insights and troubleshooting tool that helps app developers and app operators look inside running apps.
+[Application Live View for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.4/tap/app-live-view-about-app-live-view.html) is a lightweight insights and troubleshooting tool that helps app developers and app operators look inside running apps.
Application Live View only supports Spring Boot applications.
spring-apps How To Use Enterprise Api Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-api-portal.md
This article shows you how to use API portal for VMware Tanzu® with Azure Spring Apps Enterprise tier.
-[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.0/api-portal/GUID-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling single sign-on (SSO) authentication via configuration.
+[API portal](https://docs.vmware.com/en/API-portal-for-VMware-Tanzu/1.1/api-portal/GUID-https://docsupdatetracker.net/index.html) is one of the commercial VMware Tanzu components. API portal supports viewing API definitions from [Spring Cloud Gateway for VMware Tanzu®](./how-to-use-enterprise-spring-cloud-gateway.md) and testing of specific API routes from the browser. It also supports enabling single sign-on (SSO) authentication via configuration.
## Prerequisites
spring-apps How To Use Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-enterprise-spring-cloud-gateway.md
The following table lists the route definitions. All the properties are optional
| uri | The full URI, which will override the name of app that the requests route to. | | ssoEnabled | A value that indicates whether to enable SSO validation. See [Configure single sign-on](./how-to-configure-enterprise-spring-cloud-gateway.md#configure-single-sign-on-sso). | | tokenRelay | Passes the currently authenticated user's identity token to the application. |
-| predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-predicates). |
-| filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-configuring-routes.html#available-filters). |
+| predicates | A list of predicates. See [Available Predicates](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-configuring-routes.html#available-predicates). |
+| filters | A list of filters. See [Available Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-configuring-routes.html#available-filters). |
| order | The route processing order. A lower order is processed with higher precedence, as in [Spring Cloud Gateway](https://docs.spring.io/spring-cloud-gateway/docs/current/reference/html/). | | tags | Classification tags that will be applied to methods in the generated OpenAPI documentation. |
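For reference, here's a minimal sketch of a route definition file that uses several of these properties, applied with the `az spring gateway route-config create` command; the app, route, and path names are placeholders:

```json
[
  {
    "title": "Customers API",
    "predicates": [ "Path=/api/customers/**" ],
    "filters": [ "StripPrefix=1" ],
    "ssoEnabled": false,
    "order": 1,
    "tags": [ "customers" ]
  }
]
```

```azurecli
az spring gateway route-config create \
    --name customers-routes \
    --app-name customers-service \
    --routes-file customers-routes.json
```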
You can use Spring Cloud Gateway OSS filters in Spring Cloud Gateway for Kuberne
### Use commercial filters
-For more examples of commercial filters, see [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-filters.html#filters-added-in-spring-cloud-gateway-for-kubernetes) in the VMware Spring Cloud Gateway documentation. These examples are written using Kubernetes resource definitions.
+For more examples of commercial filters, see [Commercial Route Filters](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-route-filters.html#filters-added-in-spring-cloud-gateway-for-kubernetes) in the VMware Spring Cloud Gateway documentation. These examples are written using Kubernetes resource definitions.
-The following example shows how to use the [AddRequestHeadersIfNotPresent](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.0/scg-k8s/GUID-route-filters.html#add-request-headers-if-not-present) filter by converting the Kubernetes resource definition.
+The following example shows how to use the [AddRequestHeadersIfNotPresent](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-route-filters.html#add-request-headers-if-not-present) filter by converting the Kubernetes resource definition.
Start with the following resource definition in YAML:
spring-apps Monitor Apps By Application Live View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/monitor-apps-by-application-live-view.md
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-[Application Live View for VMware Tanzu](https://docs.vmware.com/en/Application-Live-View-for-VMware-Tanzu/1.2/docs/GUID-https://docsupdatetracker.net/index.html) is a lightweight insights and troubleshooting tool that helps app developers and app operators look inside running apps.
+[Application Live View for VMware Tanzu](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.4/tap/app-live-view-about-app-live-view.html) is a lightweight insights and troubleshooting tool that helps app developers and app operators look inside running apps.
Application Live View provides visual insights into running apps by inspecting Spring Boot Actuator information. It provides a live view of the data from inside the app only. Application Live View doesn't store any of the app data for further analysis or historical views. The easy-to-use interface lets you troubleshoot, learn, and maintain an overview of certain aspects of the apps. It provides a certain level of control to users to let them change some parameters such as log levels and environment properties of running apps.
spring-apps Quickstart Deploy Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps.md
zone_pivot_groups: programming-languages-spring-apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ❌ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ❌ Enterprise
::: zone pivot="programming-language-csharp"
Use the following steps to download the sample app. If you've been using the Azu
Use the following steps to deploy the PlanetWeatherProvider project.
-1. Create an app for the PlanetWeatherProvider project in your Azure Spring Apps instance.
+1. Create an app for the `PlanetWeatherProvider` project in your Azure Spring Apps instance.
```azurecli az spring app create --name planet-weather-provider --runtime-version NetCore_31 ```
- To enable automatic service registration, you have given the app the same name as the value of `spring.application.name` in the project's *appsettings.json* file:
+ To enable automatic service registration, you've given the app the same name as the value of `spring.application.name` in the project's *appsettings.json* file:
```json "spring": {
Use the following steps to deploy the PlanetWeatherProvider project.
Make sure that the command prompt is in the project folder before running the following command. ```azurecli
- az spring app deploy -n planet-weather-provider --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll --artifact-path ./publish-deploy-planet.zip
+ az spring app deploy \
+ --name planet-weather-provider \
+ --runtime-version NetCore_31 \
+ --main-entry Microsoft.Azure.SpringCloud.Sample.PlanetWeatherProvider.dll \
+ --artifact-path ./publish-deploy-planet.zip
``` The `--main-entry` option specifies the relative path from the *.zip* file's root folder to the *.dll* file that contains the application's entry point. After the service uploads the *.zip* file, it extracts all the files and folders, and then tries to execute the entry point in the specified *.dll* file.
Use the following steps to deploy the SolarSystemWeather project.
1. Deploy the project to Azure. ```azurecli
- az spring app deploy -n solar-system-weather --runtime-version NetCore_31 --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll --artifact-path ./publish-deploy-solar.zip
+ az spring app deploy \
+ --name solar-system-weather \
+ --runtime-version NetCore_31 \
+ --main-entry Microsoft.Azure.SpringCloud.Sample.SolarSystemWeather.dll \
+ --artifact-path ./publish-deploy-solar.zip
``` This command may take several minutes to run.
Before testing the application, get a public endpoint for an HTTP GET request to
1. Run the following command to assign the endpoint. ```azurecli
- az spring app update -n solar-system-weather --assign-endpoint true
+ az spring app update --name solar-system-weather --assign-endpoint true
``` 1. Run the following command to get the URL of the endpoint.
Before testing the application, get a public endpoint for an HTTP GET request to
Windows: ```azurecli
- az spring app show -n solar-system-weather -o table
+ az spring app show --name solar-system-weather --output table
``` Linux:
This article explains how to build and deploy Spring applications to Azure Sprin
- Completion of the previous quickstarts in this series: - [Provision an Azure Spring Apps service instance](./quickstart-provision-service-instance.md). - [Set up Azure Spring Apps Config Server](./quickstart-setup-config-server.md).-- [JDK 8 or JDK 11](/azure/developer/java/fundamentals/java-jdk-install)
+- [JDK 17](/azure/developer/java/fundamentals/java-jdk-install)
- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - Optionally, [Azure CLI version 2.0.67 or higher](/cli/azure/install-azure-cli). Install the Azure Spring Apps extension with the following command: `az extension add --name spring` - Optionally, [the Azure Toolkit for IntelliJ](https://plugins.jetbrains.com/plugin/8053-azure-toolkit-for-intellij/).
This article explains how to build and deploy Spring applications to Azure Sprin
Use the following commands to clone the sample repository, navigate to the sample folder, and then build the project.
-```azurecli
+```bash
git clone https://github.com/azure-samples/spring-petclinic-microservices
cd spring-petclinic-microservices
mvn clean package -DskipTests -Denv=cloud
Use the following steps to create and deploys apps on Azure Spring Apps using th
1. Create the two core Spring applications for PetClinic: API gateway and customers-service. ```azurecli
- az spring app create --name api-gateway --instance-count 1 --memory 2Gi --assign-endpoint
- az spring app create --name customers-service --instance-count 1 --memory 2Gi
+ az spring app create \
+ --name api-gateway \
+ --runtime-version Java_17 \
+ --instance-count 1 \
+ --memory 2Gi \
+ --assign-endpoint
+ az spring app create \
+ --name customers-service \
+ --runtime-version Java_17 \
+ --instance-count 1 \
+ --memory 2Gi
``` 1. Deploy the JAR files built in the previous step.
Use the following steps to create and deploys apps on Azure Spring Apps using th
## Verify the services
-Access the app gateway and customers service from browser with the **Public Url** shown above, in the format of `https://<service name>-api-gateway.azuremicroservices.io`.
+Access the app gateway and customers service from a browser by using the **Public Url** shown previously, in the format `https://<service name>-api-gateway.azuremicroservices.io`.
:::image type="content" source="media/quickstart-deploy-apps/access-customers-service.png" alt-text="Screenshot of the PetClinic customers service." lightbox="media/quickstart-deploy-apps/access-customers-service.png":::
Access the app gateway and customers service from browser with the **Public Url*
To get the PetClinic app functioning with all features like Admin Server, Visits, and Veterinarians, deploy the other apps with following commands: ```azurecli
-az spring app create --name admin-server --instance-count 1 --memory 2Gi --assign-endpoint
-az spring app create --name vets-service --instance-count 1 --memory 2Gi
-az spring app create --name visits-service --instance-count 1 --memory 2Gi
-az spring app deploy --name admin-server --jar-path spring-petclinic-admin-server/target/spring-petclinic-admin-server-3.0.1.jar --jvm-options="-Xms2048m -Xmx2048m"
-az spring app deploy --name vets-service --jar-path spring-petclinic-vets-service/target/spring-petclinic-vets-service-3.0.1.jar --jvm-options="-Xms2048m -Xmx2048m"
-az spring app deploy --name visits-service --jar-path spring-petclinic-visits-service/target/spring-petclinic-visits-service-3.0.1.jar --jvm-options="-Xms2048m -Xmx2048m"
+az spring app create \
+ --name admin-server \
+ --runtime-version Java_17 \
+ --instance-count 1 \
+ --memory 2Gi \
+ --assign-endpoint
+az spring app create \
+ --name vets-service \
+ --runtime-version Java_17 \
+ --instance-count 1 \
+ --memory 2Gi
+az spring app create \
+ --name visits-service \
+ --runtime-version Java_17 \
+ --instance-count 1 \
+ --memory 2Gi
+az spring app deploy \
+ --name admin-server \
+ --runtime-version Java_17 \
+ --jar-path spring-petclinic-admin-server/target/spring-petclinic-admin-server-3.0.1.jar \
+ --jvm-options="-Xms1536m -Xmx1536m"
+az spring app deploy \
+ --name vets-service \
+ --runtime-version Java_17 \
+ --jar-path spring-petclinic-vets-service/target/spring-petclinic-vets-service-3.0.1.jar \
+ --jvm-options="-Xms1536m -Xmx1536m"
+az spring app deploy \
+ --name visits-service \
+ --runtime-version Java_17 \
+ --jar-path spring-petclinic-visits-service/target/spring-petclinic-visits-service-3.0.1.jar \
+ --jvm-options="-Xms1536m -Xmx1536m"
``` #### [Maven](#tab/Maven)
az spring app deploy --name visits-service --jar-path spring-petclinic-visits-se
Use the following commands to clone the sample repository, navigate to the sample folder, and then build the project.
-```azurecli
+```bash
git clone https://github.com/azure-samples/spring-petclinic-microservices
cd spring-petclinic-microservices
mvn clean package -DskipTests -Denv=cloud
The following steps show you how to generate configurations and deploy to Azure
1. Generate configurations by running the following command in the root folder of Pet Clinic containing the parent POM. If you've already signed in with the Azure CLI, the command automatically picks up the credentials. Otherwise, it signs you in with prompt instructions. For more information, see our [wiki page](https://github.com/microsoft/azure-maven-plugins/wiki/Authentication).
- ```azurecli
+ ```bash
mvn com.microsoft.azure:azure-spring-apps-maven-plugin:1.10.0:config ```
- You'll be asked to select:
+ You're asked to select:
- **Modules:** Select `api-gateway` and `customers-service`. - **Subscription:** The subscription you used to create an Azure Spring Apps instance.
The following steps show you how to generate configurations and deploy to Azure
- api-gateway - customers-service
-
+ Remove any prefix if needed, and save the file. 1. The POM now contains the plugin dependencies and configurations. Deploy the apps using the following command.
- ```azurecli
+ ```bash
mvn azure-spring-apps:deploy ```
To get the PetClinic app functioning with all sections like Admin Server, Visits
- vets-service - visits-service
-Correct app names in each *pom.xml* for above modules and then run the `deploy` command again.
+Correct the app names in each *pom.xml* file for these modules, and then run the `deploy` command again.
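The app name lives in the `azure-spring-apps-maven-plugin` configuration that the `config` goal generated. As a sketch, the relevant fragment in each module's *pom.xml* looks like the following; other generated configuration elements are omitted here:

```xml
<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-spring-apps-maven-plugin</artifactId>
    <version>1.10.0</version>
    <configuration>
        <appName>vets-service</appName>
    </configuration>
</plugin>
```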
#### [IntelliJ](#tab/IntelliJ)
To deploy to Azure, you must sign in with your Azure account with Azure Toolkit
:::image type="content" source="media/quickstart-deploy-apps/deploy-to-azure-spring-apps-2-pet-clinic.png" alt-text="Screenshot of the spring-petclinic-microservices/gateway page and command line textbox." lightbox="media/quickstart-deploy-apps/deploy-to-azure-spring-apps-2-pet-clinic.png":::
- 1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in runs the command `mvn package` on the `api-gateway` app and deploys the JAR file generated by the `package` command.
+1. Start the deployment by selecting the **Run** button at the bottom of the **Deploy Azure Spring Apps app** dialog. The plug-in runs the command `mvn package` on the `api-gateway` app and deploys the JAR file generated by the `package` command.
### Deploy customers-service and other apps to Azure Spring Apps
-Repeat the steps above to deploy `customers-service` and other Pet Clinic apps to Azure Spring Apps:
+Repeat the previous steps to deploy `customers-service` and other Pet Clinic apps to Azure Spring Apps:
1. Modify the **Name** and **Artifact** to identify the `customers-service` app. 1. In the **App:** textbox, select **Create app...** to create `customers-service` app.
spring-apps Quickstart Sample App Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md
zone_pivot_groups: programming-languages-spring-apps
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard ❌️ Enterprise
::: zone pivot="programming-language-csharp"
-This series of quickstarts uses a sample app composed of two Spring apps to show how to deploy a .NET Core Steeltoe app to the Azure Spring Apps service. You'll use Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, and distributed tracing.
+This series of quickstarts uses a sample app composed of two Spring apps to show how to deploy a .NET Core Steeltoe app to the Azure Spring Apps service. You use Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, and distributed tracing.
## Functional services
The sample app is composed of two Spring apps:
* The `planet-weather-provider` service returns weather text in response to an HTTP request that specifies the planet name. For example, it may return "very warm" for planet Mercury. It gets the weather data from the Config server. The Config server gets the weather data from a YAML file in a Git repository, for example:
- ```yaml
- MercuryWeather: very warm
- VenusWeather: quite unpleasant
- MarsWeather: very cool
- SaturnWeather: a little bit sandy
- ```
+ ```yaml
+ MercuryWeather: very warm
+ VenusWeather: quite unpleasant
+ MarsWeather: very cool
+ SaturnWeather: a little bit sandy
+ ```
* The `solar-system-weather` service returns data for four planets in response to an HTTP request. It gets the data by making four HTTP requests to `planet-weather-provider`. It uses the Eureka server discovery service to call `planet-weather-provider`. It returns JSON, for example:
- ```json
- [{
- "Key": "Mercury",
- "Value": "very warm"
- }, {
- "Key": "Venus",
- "Value": "quite unpleasant"
- }, {
- "Key": "Mars",
- "Value": "very cool"
- }, {
- "Key": "Saturn",
- "Value": "a little bit sandy"
- }]
- ```
+ ```json
+ [{
+ "Key": "Mercury",
+ "Value": "very warm"
+ }, {
+ "Key": "Venus",
+ "Value": "quite unpleasant"
+ }, {
+ "Key": "Mars",
+ "Value": "very cool"
+ }, {
+ "Key": "Saturn",
+ "Value": "a little bit sandy"
+ }]
+ ```
The following diagram illustrates the sample app architecture:
The instructions in the following quickstarts refer to the source code as needed
::: zone pivot="programming-language-java"
-In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) that will show you how to deploy apps to the Azure Spring Apps service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You'll see how to deploy services to Azure with Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
+In this quickstart, we use the well-known sample app [PetClinic](https://github.com/spring-petclinic/spring-petclinic-microservices) to show you how to deploy apps to the Azure Spring Apps service. The **Pet Clinic** sample demonstrates the microservice architecture pattern and highlights the services breakdown. You see how to deploy services to Azure with Azure Spring Apps capabilities such as service discovery, config server, logs, metrics, distributed tracing, and developer-friendly tooling support.
To follow the Azure Spring Apps deployment examples, you only need the location of the source code, which is provided as needed.
The following diagram shows the architecture of the PetClinic application.
## Functional services to be deployed
-PetClinic is decomposed into 4 core Spring apps. All of them are independently deployable applications organized by business domains.
+PetClinic is decomposed into four core Spring apps. All of them are independently deployable applications organized by business domains.
* **Customers service**: Contains general user input logic and validation, including pet and owner information (Name, Address, City, Telephone).
-* **Visits service**: Stores and shows visits information for each pets' comments.
+* **Visits service**: Stores and shows visit information, including comments, for each pet.
* **Vets service**: Stores and shows veterinarians' information, including names and specialties. * **API Gateway**: The API Gateway is a single entry point into the system, used to handle requests and route them to an appropriate service or to invoke multiple services, and aggregate the results. The three core services expose an external API to clients. In real-world systems, the number of functions can grow quickly with system complexity. Hundreds of services might be involved in rendering one complex webpage.
spring-apps Quickstart Set Request Rate Limits Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md
Rate limiting enables you to avoid problems that arise with spikes in traffic. W
## Set request rate limits
-Spring Cloud Gateway includes route filters from the Open Source version and several more route filters. One of these filters is the [RateLimit: Limiting user requests filter](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.1/scg-k8s/GUID-route-filters.html#ratelimit-limiting-user-requests-filter). The RateLimit filter limits the number of requests allowed per route during a time window.
+Spring Cloud Gateway includes route filters from the Open Source version and several more route filters. One of these filters is the [RateLimit: Limiting user requests filter](https://docs.vmware.com/en/VMware-Spring-Cloud-Gateway-for-Kubernetes/1.2/scg-k8s/GUID-route-filters.html#ratelimit-limiting-user-requests-filter). The RateLimit filter limits the number of requests allowed per route during a time window.
When defining a route, you can add the RateLimit filter by including it in the list of filters for the route. The filter accepts four options:
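For example, the following sketch shows a route that allows two requests every 10 seconds, using just the first two options (request count and time window); the path is a placeholder:

```json
[
  {
    "predicates": [ "Path=/products/**" ],
    "filters": [ "RateLimit=2,10s" ]
  }
]
```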
spring-apps Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot.md
When you're debugging application crashes, start by checking the running status
* The surge memory allocation for a specific logic path. * Gradual memory leaks.
- For more information, see [Metrics](./concept-metrics.md).
- > [!NOTE]
- > These metrics are available only for spring-boot applications, and you need to [add spring-boot-starter-actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) to enable these metrics.
+ For more information, see [Metrics](./concept-metrics.md).
+
+ > [!NOTE]
+ > These metrics are available only for Spring Boot applications. To enable these metrics, add the `spring-boot-starter-actuator` dependency. For more information, see the [Add actuator dependency](concept-manage-monitor-app-spring-boot-actuator.md#add-actuator-dependency) section of [Manage and monitor app with Spring Boot Actuator](concept-manage-monitor-app-spring-boot-actuator.md).
* If the application fails to start, verify that the application has valid JVM parameters. If JVM memory is set too high, the following error message might appear in your logs:
static-web-apps Functions Bring Your Own https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/functions-bring-your-own.md
You're responsible for setting up a [deployment workflow](../azure-functions/fun
- **Function access keys:** If your function requires an [access key](../azure-functions/security-concepts.md#function-access-keys), then you must provide the key with calls from the static app to the API.
+> [!NOTE]
+> To prevent accidentally exposing your function app to anonymous traffic, the identity provider created by the linking process is not automatically deleted. You can delete the identity provider named *Azure Static Web Apps (Linked)* from the function app's authentication settings.
+ ## Restrictions - Only one Azure Functions app is available to a single static web app.
storage-mover Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/log-monitoring.md
+
+ Title: Monitor copy logs in Azure Storage Mover
+description: Learn how to monitor the status of Azure Storage Mover migration jobs.
++++ Last updated : 03/20/2023++
+<!--
+!########################################################
+STATUS: DRAFT
+
+CONTENT:
+
+REVIEW Stephen/Fabian: Reviewed
+REVIEW Engineering: not reviewed
+EDIT PASS: started
+
+Initial doc score: 97 (1212 words and 2 issues)
+
+!########################################################
+-->
+
+# How to enable Azure Storage Mover copy and job logs
+
+When you use a migration tool to move your critical data from on-premises sources to Azure destination targets, you want to be able to monitor operations for potential issues. The data relating to the operations performed during your migration can be stored as either log entries or metrics. When configured, Azure Storage Mover can provide **Copy logs** and **Job run logs**. These logs are especially useful because they allow you to trace the migration result of job runs and of individual files.
+
+Both the copy and job run logs can be sent to an Azure Log Analytics workspace. Log Analytics workspaces are storage units where Azure services store the log data they generate. Log Analytics is integrated into the Storage Mover portal experience. This integration allows you to see the relevant logs for your copy jobs within the same surface you use to manage them. More importantly, the integration also allows you to create and run log queries from multiple logs and interactively analyze their results.
+
+> [!IMPORTANT]
+> Before you can access your migration's log data, you need to ensure that you've created an Azure Log Analytics workspace and configured your Storage Mover instance to use it. Any logs generated prior to this configuration will be lost. You may be able to retrieve limited log information directly from the agent.
+
+This article describes the steps involved in creating an analytics workspace and configuring a diagnostic setting within a storage mover resource.
+
+## Configuring Azure Log Analytics and Storage Mover
+
+This section briefly describes how to configure an Azure Monitor Log Analytics Workspace and Storage Mover diagnostic settings. After completing the following steps, you'll be able to query the data provided by your Storage Mover resource.
+
+### Create a Log Analytics workspace
+
+Storage Mover collects copy and job logs, and stores the information in an Azure Log Analytics workspace. After you've created a workspace, you can configure Storage Mover to save its data there. If you don't have an existing workspace, you can create one in the Azure portal.
+
+Enter **Log Analytics** in the search box and select **Log Analytics workspace**. In the content pane, select either **Create** or **Create log analytics workspace** to create a workspace. Provide values for the **Subscription**, **Resource Group**, **Name**, and **Region** fields, and select **Review + Create**.
++
+You can get more detailed information about Log Analytics and its features by visiting the [Log Analytics overview](/azure/azure-monitor/logs/log-analytics-overview) article. If you prefer to view a tutorial, you can visit the [Log Analytics tutorial](/azure/azure-monitor/logs/log-analytics-tutorial) instead.
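If you prefer to script workspace creation, a minimal Azure CLI sketch with placeholder values looks like the following:

```azurecli
az monitor log-analytics workspace create \
    --resource-group <resource-group-name> \
    --workspace-name <workspace-name> \
    --location <region>
```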
+
+### Configure Storage Mover diagnostic settings
+
+After an analytics workspace has been created, you can specify it as the destination to which Storage Mover sends its logs and metrics.
+
+There are two options for configuring Storage Mover to send logs to your analytics workspace. First, you can configure diagnostic settings during the initial deployment of your top-level Storage Mover resource. The following example shows how to specify diagnostic settings in the Azure portal during Storage Mover resource creation.
++
+You may also choose to add a diagnostic setting to a Storage Mover resource after it's been deployed. To add the diagnostic setting, navigate to the Storage Mover resource. In the menu pane, select **Diagnostic settings** and then select **Add diagnostic setting** as shown in the following example.
++
+In the **Diagnostic setting** pane, provide a value for the **Diagnostic setting name**. Within the **Logs** group, select one or more log categories to be collected. You may also select the **Job runs** option within the **Metrics** group to view the results of your individual job runs. Within the **Destination details** group, select **Send to Log Analytics workspace**, the name of your subscription, and the name of the Log Analytics workspace to collect your log data. Finally, select **Save** to add your new diagnostic setting. You can view the image provided as a reference.
++
+After your Storage Mover diagnostic setting has been saved, it will be reflected on the Diagnostic settings screen within the **Diagnostic settings** table.
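You can also create the diagnostic setting from the Azure CLI instead of the portal. The following sketch sends all log categories to the workspace; the resource IDs are placeholders, and you can add a `--metrics` value if you also want the **Job runs** metrics:

```azurecli
az monitor diagnostic-settings create \
    --name storage-mover-logs \
    --resource <storage-mover-resource-id> \
    --workspace <log-analytics-workspace-resource-id> \
    --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```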
+
+## Analyzing logs
+
+All resource logs in Azure Monitor share a common set of fields, which are followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md).
+
+Storage Mover generates two tables, StorageMoverCopyLogsFailed and StorageMoverJobRunLogs. The schema for **StorageMoverCopyLogsFailed** is found in the [Azure Storage Copy log data reference](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed), and the schema for **StorageMoverJobRunLogs** is found in the [Azure Storage Job run log data reference](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs).
+
+Your log data is integrated into Storage Mover's Azure portal user interface (UI) experience. To access your log data, navigate to your top-level storage mover resource and select **Logs** from within the **Monitoring** group in the navigation pane. Close the initial **Welcome to Log Analytics** window displayed in the main content pane as shown in the following example.
++
+After the **Welcome** window is closed within the main content pane, the **New Query** window is displayed. In the schema and filter pane, ensure that the **Tables** object is selected and that the **StorageMoverCopyLogsFailed** and **StorageMoverJobRunLogs** tables are visible. Using Kusto Query Language (KQL) queries, you can begin extracting log data from the tables displayed within the schema and filter pane. Enter your query into the query editing field and select **Run** as shown in the following screen capture. A simple example query that retrieves the most recent failed copy operations is also provided.
++
+```kusto
+ StorageMoverCopyLogsFailed
+ | top 1000 by TimeGenerated desc
+```
+
+### Sample Kusto queries
+
+After you send logs to Log Analytics, you can access those logs by using Azure Monitor log queries. For more information, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
+
+The following sample queries can be entered in the **Log search** bar to help you monitor your migration. These queries work with the [new language](../azure-monitor/logs/log-query-overview.md).
+
+- To list all the files that failed to copy from a specific job run within the last 30 days.
+
+ ```kusto
+ StorageMoverCopyLogsFailed
+ | where TimeGenerated > ago(30d) and JobRunName == "[job run ID]"
+ ```
+
+- To list the 10 most common copy log error codes over the last seven days.
+
+ ```kusto
+ StorageMoverCopyLogsFailed
+ | where TimeGenerated > ago(7d)
+ | summarize count() by StatusCode
+ | top 10 by count_ desc
+ ```
+
+- To list the 10 most recent job failure error codes over the last three days.
+
+ ```kusto
+ StorageMoverJobRunLogs
+ | where TimeGenerated > ago(3d) and StatusCode != "AZSM0000"
+ | summarize count() by StatusCode
+ | top 10 by count_ desc
+ ```
+
+- To create a pie chart of failed copy operations grouped by job run over the last 30 days.
+
+ ```kusto
+ StorageMoverCopyLogsFailed
+ | where TimeGenerated > ago(30d)
+ | summarize count() by JobRunName
+ | sort by count_ desc
+ | render piechart
+ ```
+
+## Next steps
+
+Get started with any of these guides.
+
+- [Log Analytics workspaces](../azure-monitor/logs/log-analytics-workspace-overview.md)
+- [Azure Monitor Logs overview](../azure-monitor/logs/data-platform-logs.md)
+- [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal)
+- [Azure Storage Mover support bundle overview](troubleshooting.md)
+- [Troubleshooting Storage Mover job run error codes](status-code.md)
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Previously updated : 12/02/2022 Last updated : 03/30/2023
# Azure Storage blob inventory
-The Azure Storage blob inventory feature provides an overview of your containers, blobs, snapshots, and blob versions within a storage account. Use the inventory report to understand various attributes of blobs and containers such as your total data size, age, encryption status, immutability policy, and legal hold and so on. The report provides an overview of your data for business and compliance requirements.
+Azure Storage blob inventory provides a list of the containers, blobs, blob versions, and snapshots in your storage account, along with their associated properties. It generates an output report in either comma-separated values (CSV) or Apache Parquet format on a daily or weekly basis. You can use the report to audit retention, legal hold or encryption status of your storage account contents, or you can use it to understand the total data size, age, tier distribution, or other attributes of your data. You can also use blob inventory to simplify your business workflows or speed up data processing jobs, by using blob inventory as a scheduled automation of the [List Containers](/rest/api/storageservices/list-containers2) and [List Blobs](/rest/api/storageservices/list-blobs) APIs. Blob inventory rules allow you to filter the contents of the report by blob type, prefix, or by selecting the blob properties to include in the report.
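As a sketch of what an inventory rule looks like, the following policy definition produces a daily CSV report of block blobs under a prefix; the rule name, destination container, prefix, and schema fields are placeholder choices for illustration:

```json
{
  "enabled": true,
  "rules": [
    {
      "enabled": true,
      "name": "dailyBlockBlobInventory",
      "destination": "inventory-results",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "source-container/logs" ]
        },
        "format": "Csv",
        "schedule": "Daily",
        "objectType": "Blob",
        "schemaFields": [ "Name", "Creation-Time", "Content-Length", "AccessTier" ]
      }
    }
  ]
}
```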
## Inventory features
storage Convert Append And Page Blobs To Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/convert-append-and-page-blobs-to-block-blobs.md
To convert blobs, copy them to a new location by using PowerShell, Azure CLI, or
Copy-AzStorageBlob -SrcContainer $containerName -SrcBlob $srcblobName -Context $ctx -DestContainer $destcontainerName -DestBlob $destblobName -DestContext $ctx -DestBlobType Block -StandardBlobTier $destTier ```
+6. To copy a page blob snapshot to a block blob, use the [Get-AzStorageBlob](/powershell/module/az.storage/get-azstorageblob) and [Copy-AzStorageBlob](/powershell/module/az.storage/copy-azstorageblob) commands with the `-DestBlobType` parameter set to `Block`.
+
+ ```powershell
+ $containerName = '<source container name>'
+ $srcPageBlobName = '<source page blob name>'
+ $srcPageBlobSnapshotTime = '<snapshot time of source page blob>'
+ $destContainerName = '<destination container name>'
+ $destBlobName = '<destination block blob name>'
+ $destTier = '<destination block blob tier>'
+
+ Get-AzStorageBlob -Container $containerName -Blob $srcPageBlobName -SnapshotTime $srcPageBlobSnapshotTime -Context $ctx | Copy-AzStorageBlob -DestContainer $destContainerName -DestBlob $destBlobName -DestBlobType block -StandardBlobTier $destTier -DestContext $ctx
+
+ ```
+ > [!TIP] > The `-StandardBlobTier` parameter is optional. If you omit that parameter, then the destination blob infers its tier from the [default account access tier setting](access-tiers-overview.md#default-account-access-tier-setting). To change the tier after you've created a block blob, see [Change a blob's tier](access-tiers-online-manage.md#change-a-blobs-tier).
To convert blobs, copy them to a new location by using PowerShell, Azure CLI, or
az storage blob copy start --account-name $accountName --destination-blob $destBlobName --destination-container $destcontainerName --destination-blob-type BlockBlob --source-blob $srcblobName --source-container $containerName --tier $destTier ```
+4. To copy a page blob snapshot to a block blob, use the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command. Set the `--destination-blob-type` parameter to `BlockBlob`, and provide the source page blob snapshot URI with `--source-uri`.
+
+ ```azurecli
+ srcPageblobSnapshotUri='<source page blob snapshot uri>'
+ destcontainerName='<destination container name>'
+ destBlobName='<destination block blob name>'
+ destTier='<destination block blob tier>'
+
+ az storage blob copy start --account-name $accountName --destination-blob $destBlobName --destination-container $destcontainerName --destination-blob-type BlockBlob --source-uri $srcPageblobSnapshotUri --tier $destTier
+ ```
+ > [!TIP] > The `--tier` parameter is optional. If you omit that parameter, then the destination blob infers its tier from the [default account access tier setting](access-tiers-overview.md#default-account-access-tier-setting). To change the tier after you've created a block blob, see [Change a blob's tier](access-tiers-online-manage.md#change-a-blobs-tier).
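If you script in .NET rather than PowerShell or the Azure CLI, a comparable conversion can be sketched with the blob client library. This is an illustration under assumptions (placeholder container and blob names; the source must be readable at the copy URI, for example via a SAS token), not a step from this article:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static class ConvertSketch
{
    // Copy an existing page (or append) blob to a new block blob.
    public static async Task ConvertToBlockBlobAsync(BlobServiceClient serviceClient)
    {
        BlobContainerClient container = serviceClient.GetBlobContainerClient("source-container");

        // URI of the source page blob; append a SAS token if required for read access.
        Uri sourceUri = container.GetBlobClient("source-page-blob").Uri;

        // Writing through a BlockBlobClient creates the destination as a block blob.
        BlockBlobClient destination = container.GetBlockBlobClient("converted-block-blob");
        await destination.SyncUploadFromUriAsync(sourceUri, overwrite: true);

        // Optionally set the destination tier after the copy completes.
        await destination.SetAccessTierAsync(AccessTier.Cool);
    }
}
```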
azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<cont
> The optional `--metadata` parameter overwrites any existing metadata. Therefore, if you specify metadata by using this parameter, then none of the original metadata from the source blob will be copied to the destination blob. - ## See also - [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md) - [Set a blob's access tier](access-tiers-online-manage.md) - [Best practices for using blob access tiers](access-tiers-best-practices.md)++
storage Storage Blob Container Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md
# Create and manage container leases with Java
-This article shows how to create and manage container leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
+This article shows how to create and manage container leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break container leases.
-A lease establishes and manages a lock on a container for delete operations. The lock duration can be 15 to 60 seconds, or can be infinite. A lease on a container provides exclusive delete access to the container. A container lease only controls the ability to delete the container using the [Delete Container](/rest/api/storageservices/delete-container) operation. To delete a container with an active lease, a client must include the active lease ID with the delete request. All other container operations will succeed on a leased container without the lease ID. If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers.
+## About container leases
-You can use the Java client library to acquire, renew, release and break leases. Lease operations are handled by the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, which provides a client containing all lease operations for [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) and [BlobClient](/java/api/com.azure.storage.blob.blobclient). To learn more about lease states and when you might perform an operation, see [Lease states and actions](#lease-states-and-actions).
+
+Lease operations are handled by the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with Java](storage-blob-lease-java.md).
## Acquire a lease
-When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the container. To acquire a lease, create an instance of the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, and then use the following method:
+When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, and then use the following method:
- [acquireLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example acquires a 30-second lease for a container:
## Renew a lease
-If your lease expires, you can renew it. To renew an existing lease, use the following method:
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew an existing lease, use the following method:
- [renewLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example renews a lease for a container:
## Release a lease
-You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can immediately acquire a lease for the container as soon as the operation is complete. You can release a lease by using the following method:
+You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
+
+You can release a lease by using the following method:
- [releaseLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example releases the lease on a container:
## Break a lease
-When you break a lease, the lease ends, and other clients can't acquire a lease until the lease period expires. You can break a lease by using the following method:
+You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease by using the following method:
- [breakLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
storage Storage Blob Container Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md
# Create and manage container leases with Python
-This article shows how to create and manage container leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
+This article shows how to create and manage container leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break container leases.
-A lease establishes and manages a lock on a container for delete operations. The lock duration can be 15 to 60 seconds, or can be infinite. A lease on a container provides exclusive delete access to the container. A container lease only controls the ability to delete the container using the [Delete Container](/rest/api/storageservices/delete-container) REST API operation. To delete a container with an active lease, a client must include the active lease ID with the delete request. All other container operations will succeed on a leased container without the lease ID. If you've enabled [container soft delete](soft-delete-container-overview.md), you can restore deleted containers.
+## About container leases
-You can use the Python client library to acquire, renew, release and break leases. Lease operations are handled by the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, which provides a client containing all lease operations for [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) and [BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient). To learn more about lease states and when you might perform an operation, see [Lease states and actions](#lease-states-and-actions).
+
+Lease operations are handled by the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with Python](storage-blob-lease-python.md).
## Acquire a lease
-When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the container. To acquire a lease, create an instance of the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, and then use the following method:
+When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, and then use the following method:
- [BlobLeaseClient.acquire](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-acquire)
The following example acquires a 30-second lease on a container:
## Renew a lease
-If your lease expires, you can renew it. To renew a lease, use the following method:
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use the following method:
- [BlobLeaseClient.renew](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-renew)
The following example renews a lease for a container:
## Release a lease
-You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can obtain a lease. You can release a lease by using the following method:
+You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
+
+You can release a lease by using the following method:
- [BlobLeaseClient.release](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-release)
The following example releases the lease on a container:
## Break a lease
-When you break a lease, the lease ends, but other clients can't acquire a lease until the lease period expires. You can break a lease by using the following method:
+You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease by using the following method:
- [BlobLeaseClient.break_lease](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-break-lease)
storage Storage Blob Container Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md
Title: Create and manage blob or container leases with .NET
+ Title: Create and manage container leases with .NET
-description: Learn how to manage a lock on a blob or container in your Azure Storage account using the .NET client library.
+description: Learn how to manage a lock on a container in your Azure Storage account using the .NET client library.
Previously updated : 03/28/2022 Last updated : 04/10/2023 ms.devlang: csharp
-# Create and manage blob or container leases with .NET
+# Create and manage container leases with .NET
-A lease establishes and manages a lock on a container or the blobs in a container. You can use the .NET client library to acquire, renew, release and break leases. To learn more about leasing blobs or containers, see [Lease Container](/rest/api/storageservices/lease-container) or [Lease Blobs](/rest/api/storageservices/lease-blob).
+This article shows how to create and manage container leases using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break container leases.
+
+## About container leases
++
+Lease operations are handled by the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about blob leases using the client library, see [Create and manage blob leases with .NET](storage-blob-lease.md).
## Acquire a lease
-When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the blob or container. To acquire a lease, create an instance of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, and then use either of these methods:
+When you acquire a container lease, you obtain a lease ID that your code can use to operate on the container. If the container already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, and then use one of the following methods:
- [Acquire](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquire) - [AcquireAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquireasync)
-The following example acquires a 30 second lease for a container.
-
-```csharp
-public static async Task AcquireLease(BlobContainerClient containerClient)
-{
- BlobLeaseClient blobLeaseClient = containerClient.GetBlobLeaseClient();
+The following example acquires a 30-second lease for a container:
- TimeSpan ts = new TimeSpan(0, 0, 0, 30);
- Response<BlobLease> blobLeaseResponse = await blobLeaseClient.AcquireAsync(ts);
-
- Console.WriteLine("Blob Lease Id:" + blobLeaseResponse.Value.LeaseId);
- Console.WriteLine("Remaining Lease Time: " + blobLeaseResponse.Value.LeaseTime);
-}
-```
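In the updated article, the sample is pulled in from an include file. As a minimal sketch of the acquire call with the `Azure.Storage.Blobs` package (the method and parameter names below are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static async Task AcquireContainerLeaseAsync(BlobContainerClient containerClient)
{
    // Create a lease client scoped to the container
    BlobLeaseClient leaseClient = containerClient.GetBlobLeaseClient();

    // Acquire a 30-second lease; keep the lease ID for later operations
    Response<BlobLease> response = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(30));
    Console.WriteLine($"Lease ID: {response.Value.LeaseId}");
}
```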
## Renew a lease
-If your lease expires, you can renew it. To renew a lease, use either of the following methods of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class:
+You can renew a container lease if the lease ID specified on the request matches the lease ID associated with the container. The lease can be renewed even if it has expired, as long as the container hasn't been leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
- [Renew](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renew) - [RenewAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renewasync)
-Specify the lease ID by setting the [IfMatch](/dotnet/api/azure.matchconditions.ifmatch) property of a [RequestConditions](/dotnet/api/azure.requestconditions) instance.
-
-The following example renews a lease for a blob.
+The following example renews a container lease:
-```csharp
-public static async Task RenewLease(BlobClient blobClient, string leaseID)
-{
- BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient();
- RequestConditions requestConditions = new RequestConditions();
- requestConditions.IfMatch = new ETag(leaseID);
- await blobLeaseClient.RenewAsync();
-}
-```
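The published sample again comes from an include file. A sketch of the renew call might look like the following; note that, unlike the removed snippet above, the active lease ID is supplied when the `BlobLeaseClient` is created rather than through a `RequestConditions` instance (names are illustrative):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task RenewContainerLeaseAsync(BlobContainerClient containerClient, string leaseId)
{
    // The lease client carries the active lease ID required to renew
    BlobLeaseClient leaseClient = containerClient.GetBlobLeaseClient(leaseId);

    // Renewing resets the lease duration
    await leaseClient.RenewAsync();
}
```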
## Release a lease
-You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can obtain a lease. You can release a lease by using either of these methods of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class.
+You can release a container lease if the lease ID specified on the request matches the lease ID associated with the container. Releasing a lease allows another client to acquire a lease for the container immediately after the release is complete.
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
- [Release](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.release) - [ReleaseAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.releaseasync)
-The following example releases the lease on a container.
+The following example releases a lease on a container:
-```csharp
-public static async Task ReleaseLease(BlobContainerClient containerClient)
-{
- BlobLeaseClient blobLeaseClient = containerClient.GetBlobLeaseClient();
- await blobLeaseClient.ReleaseAsync();
-}
-```
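A comparable sketch for the release call (illustrative names; the sample in the published article comes from an include file):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task ReleaseContainerLeaseAsync(BlobContainerClient containerClient, string leaseId)
{
    BlobLeaseClient leaseClient = containerClient.GetBlobLeaseClient(leaseId);

    // After release, another client can acquire a lease immediately
    await leaseClient.ReleaseAsync();
}
```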
## Break a lease
-When you break a lease, the lease ends, but other clients can't acquire a lease until the lease period expires. You can break a lease by using either of these methods:
+You can break a container lease if the container has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
- [Break](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.break)-- [BreakAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.breakasync);
+- [BreakAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.breakasync)
+
+The following example breaks a lease on a container:
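As a sketch (illustrative names; the published sample is an include), breaking the lease requires no lease ID:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task BreakContainerLeaseAsync(BlobContainerClient containerClient)
{
    // Any authorized request can break the lease; no matching lease ID is needed
    BlobLeaseClient leaseClient = containerClient.GetBlobLeaseClient();
    await leaseClient.BreakAsync();
}
```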
+++
+## Resources
+
+To learn more about managing container leases using the Azure Blob Storage client library for .NET, see the following resources.
+
+### REST API operations
+
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for managing container leases use the following REST API operation:
+
+- [Lease Container](/rest/api/storageservices/lease-container)
+
+### Code samples
-The following example breaks the lease on a blob.
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/LeaseContainer.cs)
-```csharp
-public static async Task BreakLease(BlobClient blobClient)
-{
- BlobLeaseClient blobLeaseClient = blobClient.GetBlobLeaseClient();
- await blobLeaseClient.BreakAsync();
-}
-```
-## See also
+### See also
-- [Get started with Azure Blob Storage and .NET](storage-blob-dotnet-get-started.md)-- [Managing Concurrency in Blob storage](concurrency-manage.md)
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Storage Blob Lease Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md
# Create and manage blob leases with Java
-This article shows how to create and manage blob leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme).
+This article shows how to create and manage blob leases using the [Azure Storage client library for Java](/java/api/overview/azure/storage-blob-readme). You can use the client library to acquire, renew, release, and break blob leases.
-A lease creates and manages a lock on a blob for write and delete operations. The lock duration can be 15 to 60 seconds, or can be infinite. A lease on a blob provides exclusive write and delete access to the blob. To write to a blob with an active lease, a client must include the active lease ID with the write request.
+## About blob leases
-You can use the Java client library to acquire, renew, release and break leases. Lease operations are handled by the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, which provides a client containing all lease operations for [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) and [BlobClient](/java/api/com.azure.storage.blob.blobclient). To learn more about lease states and when you might perform an operation, see [Lease states and actions](#lease-states-and-actions).
-All container operations are permitted on a container that includes blobs with an active lease, including [Delete Container](/rest/api/storageservices/delete-container). Therefore, a container may be deleted even if blobs within it have active leases. Use the [Lease Container](/rest/api/storageservices/lease-container) operation to control rights to delete a container. To learn more about container leases using the client library, see [Create and manage container leases with Java](storage-blob-container-lease-java.md)
+Lease operations are handled by the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with Java](storage-blob-container-lease-java.md).
## Acquire a lease
-When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the blob. To acquire a lease, create an instance of the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, and then use the following method:
+When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/java/api/com.azure.storage.blob.specialized.blobleaseclient) class, and then use the following method:
- [acquireLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example acquires a 30-second lease for a blob:
## Renew a lease
-If your lease expires, you can renew it. To renew an existing lease, use the following method:
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew an existing lease, use the following method:
- [renewLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example renews a lease for a blob:
## Release a lease
-You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can immediately acquire a lease for the blob as soon as the operation is complete. You can release a lease by using the following method:
+You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
+
+You can release a lease by using the following method:
- [releaseLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
The following example releases the lease on a blob:
## Break a lease
-When you break a lease, the lease ends, and other clients can't acquire a lease until the lease period expires. You can break a lease by using the following method:
+You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease by using the following method:
- [breakLease](/java/api/com.azure.storage.blob.specialized.blobleaseclient)
storage Storage Blob Lease Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md
# Create and manage blob leases with Python
-This article shows how to create and manage blob leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage).
+This article shows how to create and manage blob leases using the [Azure Storage client library for Python](/python/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break blob leases.
-A lease creates and manages a lock on a blob for write and delete operations. The lock duration can be 15 to 60 seconds, or can be infinite. A lease on a blob provides exclusive write and delete access to the blob. To write to a blob with an active lease, a client must include the active lease ID with the write request.
+## About blob leases
-You can use the Python client library to acquire, renew, release and break leases. Lease operations are handled by the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, which provides a client containing all lease operations for [ContainerClient](/python/api/azure-storage-blob/azure.storage.blob.containerclient) and [BlobClient](/python/api/azure-storage-blob/azure.storage.blob.blobclient). To learn more about lease states and when you might perform an operation, see [Lease states and actions](#lease-states-and-actions).
-All container operations are permitted on a container that includes blobs with an active lease, including [Delete Container](/rest/api/storageservices/delete-container). Therefore, a container may be deleted even if blobs within it have active leases. Use the [Lease Container](/rest/api/storageservices/lease-container) operation to control rights to delete a container. To learn more about container leases using the client library, see [Create and manage container leases with Python](storage-blob-container-lease-python.md).
+Lease operations are handled by the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with Python](storage-blob-container-lease-python.md).
## Acquire a lease
-When you acquire a lease, you'll obtain a lease ID that your code can use to operate on the blob. To acquire a lease, create an instance of the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, and then use the following method:
+When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient) class, and then use the following method:
- [BlobLeaseClient.acquire](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-acquire)
The following example acquires a 30-second lease for a blob:
## Renew a lease
-If your lease expires, you can renew it. To renew a lease, use the following method:
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use the following method:
- [BlobLeaseClient.renew](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-renew)
The following example renews a lease for a blob:
## Release a lease
-You can either wait for a lease to expire or explicitly release it. When you release a lease, other clients can obtain a lease. You can release a lease by using the following method:
+You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
+
+You can release a lease by using the following method:
- [BlobLeaseClient.release](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-release)
The following example releases the lease on a blob:
## Break a lease
-When you break a lease, the lease ends, but other clients can't acquire a lease until the lease period expires. You can break a lease by using the following method:
+You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease by using the following method:
- [BlobLeaseClient.break_lease](/python/api/azure-storage-blob/azure.storage.blob.blobleaseclient#azure-storage-blob-blobleaseclient-break-lease)
storage Storage Blob Lease https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md
+
+ Title: Create and manage blob leases with .NET
+
+description: Learn how to manage a lock on a blob in your Azure Storage account using the .NET client library.
++++++ Last updated : 04/10/2023+
+ms.devlang: csharp
+++
+# Create and manage blob leases with .NET
+
+This article shows how to create and manage blob leases using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage). You can use the client library to acquire, renew, release, and break blob leases.
+
+## About blob leases
++
+Lease operations are handled by the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, which provides a client containing all lease operations for blobs and containers. To learn more about container leases using the client library, see [Create and manage container leases with .NET](storage-blob-container-lease.md).
+
+## Acquire a lease
+
+When you acquire a blob lease, you obtain a lease ID that your code can use to operate on the blob. If the blob already has an active lease, you can only request a new lease by using the active lease ID. However, you can specify a new lease duration.
+
+To acquire a lease, create an instance of the [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) class, and then use one of the following methods:
+
+- [Acquire](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquire)
+- [AcquireAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.acquireasync)
+
+The following example acquires a 30-second lease for a blob:
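The sample itself ships as an include file; a minimal sketch of the call with the `Azure.Storage.Blobs` package (illustrative names) is:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using Azure.Storage.Blobs.Specialized;

public static async Task AcquireBlobLeaseAsync(BlobClient blobClient)
{
    // Create a lease client scoped to the blob
    BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient();

    // Acquire a 30-second lease on the blob
    Response<BlobLease> response = await leaseClient.AcquireAsync(TimeSpan.FromSeconds(30));
    Console.WriteLine($"Lease ID: {response.Value.LeaseId}");
}
```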
++
+## Renew a lease
+
+You can renew a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. The lease can be renewed even if it has expired, as long as the blob hasn't been modified or leased again since the expiration of that lease. When you renew a lease, the duration of the lease resets.
+
+To renew a lease, use one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
+
+- [Renew](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renew)
+- [RenewAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.renewasync)
+
+The following example renews a lease for a blob:
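Sketched with illustrative names (the published sample is an include), the renew call passes the active lease ID when the lease client is created:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task RenewBlobLeaseAsync(BlobClient blobClient, string leaseId)
{
    // The lease client carries the active lease ID required to renew
    BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient(leaseId);
    await leaseClient.RenewAsync();
}
```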
++
+## Release a lease
+
+You can release a blob lease if the lease ID specified on the request matches the lease ID associated with the blob. Releasing a lease allows another client to acquire a lease for the blob immediately after the release is complete.
+
+You can release a lease using one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
+
+- [Release](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.release)
+- [ReleaseAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.releaseasync)
+
+The following example releases a lease on a blob:
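A rough sketch of the release call (illustrative names; the published sample is an include):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task ReleaseBlobLeaseAsync(BlobClient blobClient, string leaseId)
{
    BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient(leaseId);

    // After release, another client can acquire a lease on the blob immediately
    await leaseClient.ReleaseAsync();
}
```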
++
+## Break a lease
+
+You can break a blob lease if the blob has an active lease. Any authorized request can break the lease; the request isn't required to specify a matching lease ID. A lease can't be renewed after it's broken, and breaking a lease prevents a new lease from being acquired for a period of time until the original lease expires or is released.
+
+You can break a lease using one of the following methods on a [BlobLeaseClient](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient) instance:
+
+- [Break](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.break)
+- [BreakAsync](/dotnet/api/azure.storage.blobs.specialized.blobleaseclient.breakasync)
+
+The following example breaks a lease on a blob:
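And a sketch of the break call (illustrative names; the published sample is an include), which needs no matching lease ID:

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static async Task BreakBlobLeaseAsync(BlobClient blobClient)
{
    // Breaking the lease only requires an authorized request
    BlobLeaseClient leaseClient = blobClient.GetBlobLeaseClient();
    await leaseClient.BreakAsync();
}
```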
+++
+## Resources
+
+To learn more about managing blob leases using the Azure Blob Storage client library for .NET, see the following resources.
+
+### REST API operations
+
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for managing blob leases use the following REST API operation:
+
+- [Lease Blob](/rest/api/storageservices/lease-blob)
+
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/LeaseBlob.cs)
++
+### See also
+
+- [Managing Concurrency in Blob storage](concurrency-manage.md)
storage Classic Account Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md
Previously updated : 04/06/2023 Last updated : 04/10/2023
To delete disk artifacts from the Azure portal, follow these steps:
For more information about errors that may occur when deleting disk artifacts and how to address them, see [Troubleshoot errors when you delete Azure classic storage accounts, containers, or VHDs](/troubleshoot/azure/virtual-machines/storage-classic-cannot-delete-storage-account-container-vhd).
+For more information about how to locate and delete disk artifacts in classic storage accounts with PowerShell or Azure CLI, see one of the following articles:
+
+- [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)
+- [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli.md#step-5-migrate-a-storage-account)
+ ## See also - [Migrate your classic storage accounts to Azure Resource Manager by August 31, 2024](classic-account-migration-overview.md)
storage Classic Account Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md
Previously updated : 04/05/2023 Last updated : 04/10/2023
If you have classic storage accounts, start planning your migration now. Complet
Storage accounts created using the classic deployment model will follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement.
+## Why is a migration required?
+
+On August 31, 2024, we'll retire classic Azure storage accounts and they'll no longer be accessible. Before that date, you'll need to migrate them to Azure Resource Manager, which provides the same capabilities as well as new features, including:
+
+- A management layer that simplifies deployment by enabling you to create, update, and delete resources.
+- Resource grouping, which allows you to deploy, monitor, manage, and apply access control policies to resources as a group.
+- All new features for Azure Storage are implemented for storage accounts in Azure Resource Manager deployments, so customers who are still using classic resources will no longer have access to new features and updates.
+ ## How does this affect me? -- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.-- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023.-- On September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager.
+On September 1, 2024, customers will no longer be able to connect to classic storage accounts by using Azure Service Manager. Any data still contained in these accounts will no longer be accessible through Azure Service Manager.
> [!WARNING] > If you do not migrate your classic storage accounts to Azure Resource Manager by August 31, 2024, you will permanently lose access to the data in those accounts.
+Depending on when your subscription was created, you may no longer be able to create classic storage accounts:
+
+- Subscriptions created after August 31, 2022 can no longer create classic storage accounts.
+- Subscriptions created before September 1, 2022 will be able to create classic storage accounts until September 1, 2023.
+
+We recommend creating storage accounts only in Azure Resource Manager from this point forward.
+ ## What actions should I take? To migrate your classic storage accounts, you should:
To migrate your classic storage accounts, you should:
1. Migrate any classic storage accounts to Azure Resource Manager. 1. Check your applications and logs to determine whether you are dynamically creating, updating, or deleting classic storage accounts from your code, scripts, or templates. If you are, then you need to update your applications to use Azure Resource Manager accounts instead.
-For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md).
+For step-by-step instructions, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md). For an in-depth overview of the migration process, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
## How to get help
For step-by-step instructions, see [How to migrate your classic storage accounts
1. Under **Problem subtype**, select **Migrate account to new resource group/subscription/region/tenant**. 1. Select **Next**, then follow the instructions to submit your support request.
+## FAQ
+
+### How do I migrate my classic storage accounts to Resource Manager?
+
+For step-by-step instructions for migrating your classic storage accounts, see [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md). For an in-depth overview of the migration process, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
+
+### At what point can classic storage accounts no longer be created?
+
+Subscriptions created after August 31, 2022 are no longer able to create classic storage accounts. Subscriptions created before September 1, 2022 can continue to create classic storage accounts until September 1, 2023, and can manage classic storage resources until the retirement date of August 31, 2024.
+
+### What happens to existing classic storage accounts after August 31, 2024?
+
+After August 31, 2024, you will no longer be able to access data in your classic storage accounts or manage them. It won't be possible to migrate a classic storage account after August 31, 2024.
+
+### Can Microsoft handle this migration for me?
+
+No, Microsoft can't migrate a customer's storage account on their behalf. Customers must use one of the self-serve options described in this article.
+
+### Will there be downtime when migrating my storage account from Classic to Resource Manager?
+
+There is no downtime to migrate a classic storage account to Resource Manager. However, there is downtime for other scenarios linked to classic virtual machine (VM) migration.
+
+### What operations are not available during the migration?
+
+During the migration, management operations aren't available on the storage account. Data operations can continue to be performed during the migration.
+
+If you are creating or managing container objects with the Azure Storage resource provider, note that those operations will be blocked while the migration is underway. For more information, see [Understand storage account migration from the classic deployment model to Azure Resource Manager](classic-account-migration-process.md).
+
+### Are storage account access keys regenerated as part of the migration?
+
+No, account access keys are not regenerated during the migration. Your access keys and connection strings will continue to work unchanged after the migration is complete.
+
+### Are Azure RBAC role assignments maintained through the migration?
+
+Any RBAC role assignments that are scoped to the classic storage account are maintained after the migration.
+
+### What type of storage account is created by the migration process?
+
+Your storage account will be a general-purpose v1 account after the migration process completes. You can then upgrade it to general-purpose v2. For more information about upgrading your account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).
+
+### Will the URL of my storage account remain the same post-migration?
+
+Yes, the migrated storage account will have the same name and address as the classic account.
+
+### Can additional verbose logging be added as part of the migration process?
+
+No. The migration service doesn't provide additional logging.
+ ## See also - [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
storage Classic Account Migration Process https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md
Previously updated : 04/05/2023 Last updated : 04/10/2023
The Validation step is the first step in the migration process. The goal of this
The Validation step analyzes the state of resources in the classic deployment model. It checks for failures and unsupported scenarios due to different configurations of the storage account in the classic deployment model.
-> [!NOTE]
-> The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks.
+The Validation step does not check for virtual machine (VM) disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they support VM disks. For more information, see the following articles:
+
+- [How to migrate your classic storage accounts to Azure Resource Manager](classic-account-migrate.md)
+- [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)
+- [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli.md#step-5-migrate-a-storage-account)
Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step).
storage File Sync Firewall And Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-firewall-and-proxy.md
The following table describes the required domains for communication:
| **Azure Resource Manager** | `https://management.azure.com` | `https://management.usgovcloudapi.net` | Any user call (like PowerShell) goes to/through this URL, including the initial server registration call. | | **Azure Active Directory** | `https://login.windows.net`<br>`https://login.microsoftonline.com` | `https://login.microsoftonline.us` | Azure Resource Manager calls must be made by an authenticated user. To succeed, this URL is used for user authentication. | | **Azure Active Directory** | `https://graph.microsoft.com/` | `https://graph.microsoft.com/` | As part of deploying Azure File Sync, a service principal in the subscription's Azure Active Directory will be created. This URL is used for that. This principal is used for delegating a minimal set of rights to the Azure File Sync service. The user performing the initial setup of Azure File Sync must be an authenticated user with subscription owner privileges. |
-| **Azure Active Directory** | `https://secure.aadcdn.microsoftonline-p.com` | Use the public endpoint URL. | This URL is accessed by the Active Directory authentication library that the Azure File Sync server registration UI uses to log in the administrator. |
+| **Azure Active Directory** | `https://secure.aadcdn.microsoftonline-p.com` | `https://secure.aadcdn.microsoftonline-p.com`<br>(same as public cloud endpoint URL) | This URL is accessed by the Active Directory authentication library that the Azure File Sync server registration UI uses to log in the administrator. |
| **Azure Storage** | &ast;.core.windows.net | &ast;.core.usgovcloudapi.net | When the server downloads a file, then the server performs that data movement more efficiently when talking directly to the Azure file share in the Storage Account. The server has a SAS key that only allows for targeted file share access. | | **Azure File Sync** | &ast;.one.microsoft.com<br>&ast;.afs.azure.net | &ast;.afs.azure.us | After initial server registration, the server receives a regional URL for the Azure File Sync service instance in that region. The server can use the URL to communicate directly and efficiently with the instance handling its sync. | | **Microsoft PKI** | `https://www.microsoft.com/pki/mscorp/cps`<br>`http://crl.microsoft.com/pki/mscorp/crl/`<br>`http://mscrl.microsoft.com/pki/mscorp/crl/`<br>`http://ocsp.msocsp.com`<br>`http://ocsp.digicert.com/`<br>`http://crl3.digicert.com/` | `https://www.microsoft.com/pki/mscorp/cps`<br>`http://crl.microsoft.com/pki/mscorp/crl/`<br>`http://mscrl.microsoft.com/pki/mscorp/crl/`<br>`http://ocsp.msocsp.com`<br>`http://ocsp.digicert.com/`<br>`http://crl3.digicert.com/` | Once the Azure File Sync agent is installed, the PKI URL is used to download intermediate certificates required to communicate with the Azure File Sync service and Azure file share. The OCSP URL is used to check the status of a certificate. |
synapse-analytics Intellij Tool Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/intellij-tool-synapse.md
In this tutorial, you learn how to:
|Project name| Enter a name. This tutorial uses `myApp`.| |Project&nbsp;location| Enter the wanted location to save your project.| |Project SDK| It might be blank on your first use of IDEA. Select **New...** and navigate to your JDK.|
- |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. Synapse only supports **Spark 2.4.0**.|
- |||
+ |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. Here you can choose the Spark version you need.|
![Selecting the Apache Spark SDK](./media/intellij-tool-synapse/create-synapse-application02.png)
update-center Manage Multiple Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md
Title: Manage multiple machines in update management center (preview) description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal. Previously updated : 04/21/2022 Last updated : 04/11/2023
Instead of performing these actions from a selected Azure VM or Arc-enabled serv
- The machine has an unsupported OS - The machine is in an unsupported region and you can't perform an assessment.
- - **Patch orchestration configuration of Azure virtual machines**: all the Azure or Arc-enabled machines inventoried in the subscription are summarized by each update orchestration method. Values are:
+ - **Patch orchestration configuration of Azure virtual machines**: all the Azure machines inventoried in the subscription are summarized by each update orchestration method. Values are:
- - **Azure orchestrated**: this mode enables automatic VM guest patching for the Azure virtual machine and Arc-enabled server. Subsequent patch installation is orchestrated by Azure.
+ - **Azure orchestrated**: this mode enables automatic VM guest patching for the Azure virtual machine. Subsequent patch installation is orchestrated by Azure.
- **Image Default**: for Linux machines, it uses the default patching configuration. - **OS orchestrated**: the OS automatically updates the machine. - **Manual updates**: you control the application of patches to a machine by applying patches manually inside the machine. In this mode, automatic updates are disabled for Windows OS.
update-center Scheduled Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md
Title: Scheduling recurring updates in Update management center (preview) description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines. Previously updated : 12/27/2022 Last updated : 04/11/2023
You can create a new Guest OS update maintenance configuration or modify an exis
1. Go to **Machines** and select machines from the list. 1. In the **Updates (Preview)**, select **Scheduled updates**. 1. In **Create a maintenance configuration**, follow step 3 in this [procedure](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration.
+1. In the **Basics** tab, select **Maintenance scope** as *Guest (Azure VM, Arc-enabled VMs/servers)*.
- :::image type="content" source="./media/scheduled-updates/create-maintenance-configuration.png" alt-text="Create Maintenance configuration.":::
+ :::image type="content" source="./media/scheduled-updates/create-maintenance-configuration.png" alt-text="Create Maintenance configuration.":::
### Add/remove machines from maintenance configuration
You can create a new Guest OS update maintenance configuration or modify an exis
:::image type="content" source="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-inline.png" alt-text="Change update selection criteria of Maintenance configuration." lightbox="./media/scheduled-updates/change-update-selection-criteria-of-maintenance-configuration-expanded.png":::
-## Dynamic scoping using policy
+## Grouping using policy
-The update management center (preview) allows you to target a dynamic group of Azure or non-Azure VMs for update deployment via Azure Policy. Using a dynamic group keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use dynamic scoping by using built-in policies which you can customize as per your use-case.
+The update management center (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping by policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and use the built-in policies, which you can customize for your use case.
> [!NOTE] > This policy also ensures that the patch orchestration property for Azure machines is set to **Azure-orchestrated (Automatic by Platform)** as it is a prerequisite for scheduled patching.
virtual-desktop Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/management.md
Title: Manage session hosts with Microsoft Intune - Azure Virtual Desktop
description: Recommended ways for you to manage your Azure Virtual Desktop session hosts. Previously updated : 08/30/2022 Last updated : 04/11/2023
Microsoft Configuration Manager versions 1906 and later can manage your domain-j
Microsoft Intune can manage your Azure AD-joined and Hybrid Azure AD-joined session hosts. To learn more about using Intune to manage Windows 11 and Windows 10 single session hosts, see [Using Azure Virtual Desktop with Intune](/mem/intune/fundamentals/windows-virtual-desktop).
-For Windows 11 and Windows 10 multi-session hosts, Intune supports both device-based configurations on Windows 11 and Windows 10 and user-scope configurations on Windows 11. User-scope configurations for Windows 10 are currently in preview. To learn more about using Intune to manage multi-session hosts, see [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
+For Windows 11 and Windows 10 multi-session hosts, Intune supports both device-based and user-based configurations on Windows 11 and Windows 10. User-scope configuration on Windows 10 requires the March 2023 Cumulative Update Preview (KB5023773) and OS version 19042.2788, 19044.2788, 19045.2788, or later. To learn more about using Intune to manage multi-session hosts, see [Using Azure Virtual Desktop multi-session with Intune](/mem/intune/fundamentals/windows-virtual-desktop-multi-session).
> [!NOTE] > Managing Azure Virtual Desktop session hosts using Intune is currently supported in the Azure Public and [Azure Government clouds](/enterprise-mobility-security/solutions/ems-intune-govt-service-description).
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 03/31/2023 Last updated : 04/11/2023
Make sure to check back here often to keep up with new updates.
New versions of the Azure Virtual Desktop Agent are installed automatically. When new versions are released, they are rolled out progressively to all session hosts. This process is called *flighting* and it enables Microsoft to monitor the rollout in [validation environments](create-validation-host-pool.md) first. A rollout may take several weeks before the agent is available in all environments.
+## Version 1.0.6425.300
+
+This update was released at the beginning of April 2023 and includes the following changes:
+
+- General improvements and bug fixes.
+ ## Version 1.0.6298.2100 This update was released at the end of March 2023 and includes the following changes:
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 04/04/2023 Last updated : 04/11/2023 # What's new in the Remote Desktop client for Windows
The following table lists the current versions available for the public and Insi
| Release | Latest version | Download |
|--|--|--|
-| Public | 1.2.4066 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Public | 1.2.4157 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
| Insider | 1.2.4155 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) |
-## Updates for version 1.2.4155 (Insider)
+## Updates for version 1.2.4157
-*Date published: March 28, 2023*
+*Date published: April 10, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368)
+Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
In this release, we've made the following changes:
In this release, we've made the following changes:
- Fixed an issue that made the client sometimes drop connections if doing something like using a Smart Card made the connection take a long time to start. - Fixed a bug where users weren't able to update the client if the client was installed with the flags *ALLUSERS=2* and *MSIINSTALLPERUSER=1*. - Fixed an issue that made the client disconnect and display error message 0x3000018 instead of showing a prompt to reconnect if the endpoint doesn't let users save their credentials.
+- Fixed the vulnerability known as [CVE-2023-28267](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-28267).
+- Fixed an issue that generated duplicate Activity IDs for unique connections.
- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. ## Updates for version 1.2.4066 *Date published: March 28, 2023*
-Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370)
+Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10DEa), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10GYu), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW10GYw)
In this release, we've made the following changes:
In this release, we've made the following changes:
*Date published: February 7, 2023*
-Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWWHz3), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWWzLu), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWWPlp)
- In this release, we've made the following changes: - Fixed a bug where refreshes increased memory usage.
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 03/07/2023 Last updated : 04/11/2023
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## March 2023
+
+Here's what changed in March 2023:
+
+### Redesigned connection bar for the Windows Desktop client
+
+The latest version of the Windows Desktop client includes a redesigned connection bar. For more information, see [Updates for version 1.2.4157](whats-new-client-windows.md#updates-for-version-124157).
+
+### Shutdown session host status
+
+The Shutdown session host status is now available in the Azure Virtual Desktop portal and the most recent API version. For more information, see [Session host statuses and health checks](troubleshoot-statuses-checks.md#session-host-statuses).
+
+### Windows 10 and 11 22H2 images now visible in the image drop-down menu
+
+Windows 10 and 11 22H2 Enterprise and Enterprise multi-session images are now visible in the image drop-down menu when you create a new host pool or add a VM to a host pool from the Azure Virtual Desktop portal.
+
+### Uniform Resource Identifier Schemes in public preview
+
+Uniform Resource Identifier (URI) schemes with the Remote Desktop client for Azure Virtual Desktop are now in public preview. This new feature lets you subscribe to a workspace or connect to a particular desktop or RemoteApp using URI schemes. URI schemes also provide fast and efficient end-user connections to Azure Virtual Desktop resources. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-public-preview-of-uniform-resource-identifier/ba-p/3763075).
+
+### Azure Virtual Desktop Insights at Scale now generally available
+
+Azure Virtual Desktop Insights at Scale is now generally available. This feature gives you the ability to review performance and diagnostic information in multiple host pools at the same time in a single view. If you're an existing Azure Virtual Desktop Insights user, you get this feature without having to do any extra configuration or setup. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-the-general-availability-of-azure-virtual-desktop/ba-p/3738624).
+ ## February 2023 Here's what changed in February 2023:
virtual-machine-scale-sets Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/overview.md
Previously updated : 11/22/2022 Last updated : 03/09/2023
Learn more about the differences between Uniform scale sets and Flexible scale s
> [!IMPORTANT] > The orchestration mode is defined when you create the scale set and cannot be changed or updated later. ## Why use Virtual Machine Scale Sets? To provide redundancy and improved performance, applications are typically distributed across multiple instances. Customers may access your application through a load balancer that distributes requests to one of the application instances. If you need to perform maintenance or update an application instance, your customers must be distributed to another available application instance. To keep up with extra customer demand, you may need to increase the number of application instances that run your application.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
The following steps enable the automatic repairs policy when creating a new scale
1. Enable the **Monitor application health** option. 1. Locate the **Automatic repair policy** section. 1. Turn **On** the **Automatic repairs** option.
-1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 30 and 90 minutes.
+1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 10 and 90 minutes.
1. When you're done creating the new scale set, select **Review + create** button. ### REST API
After updating the model of an existing scale set, ensure that the latest model
You can modify the automatic repairs policy of an existing scale set through the Azure portal.
+> [!NOTE]
+> Enable the [Application Health extension](./virtual-machine-scale-sets-health-extension.md) or [Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) on your Virtual Machine Scale Sets before you start the next steps.
+ 1. Go to an existing Virtual Machine Scale Set.
-1. Under **Settings** in the menu on the left, select **Health and repair**.
-1. Enable the **Monitor application health** option.
-1. Locate the **Automatic repair policy** section.
-1. Turn **On** the **Automatic repairs** option.
-1. In **Grace period (min)**, specify the grace period in minutes, allowed values are between 30 and 90 minutes.
-1. When you're done, select **Save**.
+2. Under **Settings** in the menu on the left, select **Health and repair**.
+3. Enable the **Monitor application health** option.
+
+If you're monitoring your scale set by using the Application Health extension:
+
+4. Choose **Application Health extension** from the Application Health monitor dropdown list.
+5. From the **Protocol** dropdown list, choose the network protocol your application uses to report health, based on your application requirements: **HTTP**, **HTTPS**, or **TCP**.
+6. In the **Port number** configuration box, type the network port used to monitor application health.
+7. For **Path**, provide the application endpoint path (for example, "/") used to report application health.
+
+> [!NOTE]
+> The Application Health extension will ping this path inside each virtual machine in the scale set to get application health status for each instance. If you're using [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) and the endpoint responds with a status 200 (OK), then the instance is marked as "Healthy". In all the other cases (including if the endpoint is unreachable), the instance is marked "Unhealthy". For more health state options, explore [Rich Health States](./virtual-machine-scale-sets-health-extension.md#binary-versus-rich-health-states).
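For reference, here's a minimal sketch of configuring the same health monitoring from the Azure CLI instead of the portal, assuming a Linux scale set and hypothetical resource names (`myResourceGroup`, `myScaleSet`); the `protocol`, `port`, and `requestPath` settings correspond to the portal fields above:

```azurecli-interactive
# Deploy the Application Health extension to the scale set model (Linux).
# For a Windows scale set, use the ApplicationHealthWindows type instead.
az vmss extension set \
  --resource-group myResourceGroup \
  --vmss-name myScaleSet \
  --name ApplicationHealthLinux \
  --publisher Microsoft.ManagedServices \
  --version 1.0 \
  --settings '{"protocol": "http", "port": 80, "requestPath": "/"}'
```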
+
+If you're monitoring your scale set using SLB Health probes:
+
+8. Choose **Load balancer probe** from the Application Health monitor dropdown list.
+9. For the Load Balancer health probe, select an existing health probe or create a new health probe for monitoring.
+
+To enable automatic repairs:
+
+10. Locate the **Automatic repair policy** section. Automatic repairs can be used to delete unhealthy instances from the scale set and create new ones to replace them.
+11. Turn **On** the **Automatic repairs** option.
+12. In **Grace period (min)**, specify the grace period in minutes. Allowed values are between 10 and 90 minutes.
+13. When you're done, select **Save**.
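If you prefer to script this change, here's a minimal sketch with the Azure CLI that enables the same policy on an existing scale set (hypothetical resource names):

```azurecli-interactive
# Turn on automatic repairs with a 30-minute grace period
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --enable-automatic-repairs true \
  --automatic-repairs-grace-period 30
```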
### REST API
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
az vmss update --name myScaleSet --resource-group myResourceGroup --set UpgradeP
> [!NOTE] >After configuring automatic OS image upgrades for your scale set, you must also bring the scale set VMs to the latest scale set model if your scale set uses the 'Manual' [upgrade policy](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model).
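For example, with the 'Manual' upgrade policy, you can bring every instance up to the latest scale set model with a single call; a sketch assuming hypothetical resource names:

```azurecli-interactive
# Apply the latest scale set model to all instances
az vmss update-instances \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-ids "*"
```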
+### ARM templates
+The following example describes how to set automatic OS upgrades on a scale set model via Azure Resource Manager templates (ARM templates):
+
+```json
+"properties": {
+    "upgradePolicy": {
+        "mode": "Automatic",
+        "rollingUpgradePolicy": {
+            "maxBatchInstancePercent": 20,
+            "maxUnhealthyInstancePercent": 25,
+            "maxUnhealthyUpgradedInstancePercent": 25,
+            "pauseTimeBetweenBatches": "PT0S"
+        },
+        "automaticOSUpgradePolicy": {
+            "enableAutomaticOSUpgrade": true,
+            "useRollingUpgradePolicy": true,
+            "disableAutomaticRollback": false
+        }
+    }
+},
+"parameters": {
+    "imagePublisher": {
+        "type": "string",
+        "defaultValue": "MicrosoftWindowsServer"
+    },
+    "imageOffer": {
+        "type": "string",
+        "defaultValue": "WindowsServer"
+    },
+    "imageSku": {
+        "type": "string",
+        "defaultValue": "2022-datacenter"
+    },
+    "imageOSVersion": {
+        "type": "string",
+        "defaultValue": "latest"
+    }
+}
+```
+
+### Bicep
+The following example describes how to set automatic OS upgrades on a scale set model via Bicep:
+
+```bicep
+properties: {
+  overprovision: overProvision
+  upgradePolicy: {
+    mode: 'Automatic'
+    automaticOSUpgradePolicy: {
+      enableAutomaticOSUpgrade: true
+    }
+  }
+}
+```
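Either snippet is deployed as part of a complete scale set template. As a sketch, assuming the Bicep example above is saved in a file named *main.bicep*:

```azurecli-interactive
az deployment group create \
  --resource-group myResourceGroup \
  --template-file main.bicep
```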
+ ## Using Application Health Probes During an OS Upgrade, VM instances in a scale set are upgraded one batch at a time. The upgrade should continue only if the customer application is healthy on the upgraded VM instances. We recommend that the application provides health signals to the scale set OS Upgrade engine. By default, during OS Upgrades the platform considers VM power state and extension provisioning state to determine if a VM instance is healthy after an upgrade. During the OS Upgrade of a VM instance, the OS disk on a VM instance is replaced with a new disk based on the latest image version. After the OS Upgrade has completed, the configured extensions are run on these VMs. The application is considered healthy only when all the extensions on the instance are successfully provisioned.
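To confirm that health signals are flowing, you can inspect an instance's reported health from its instance view. Here's a sketch with hypothetical resource names; the `vmHealth` property is only populated when an application health monitor is configured:

```azurecli-interactive
# Returns a value such as HealthState/healthy or HealthState/unhealthy
az vmss get-instance-view \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --instance-id 0 \
  --query vmHealth.status.code
```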
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup `
-VirtualMachineScaleSet $vmScaleSet # Upgrade instances to install the extension
-Update-AzVmssInstances -ResourceGroupName $vmScaleSetResourceGroup `
+Update-AzVmssInstance -ResourceGroupName $vmScaleSetResourceGroup `
-VMScaleSetName $vmScaleSetName ` -InstanceId '*' ```
Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup `
-VirtualMachineScaleSet $vmScaleSet # Upgrade instances to install the extension
-Update-AzVmssInstances -ResourceGroupName $vmScaleSetResourceGroup `
+Update-AzVmssInstance -ResourceGroupName $vmScaleSetResourceGroup `
-VMScaleSetName $vmScaleSetName ` -InstanceId '*' ```
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Disk Types | Managed disks only, all storage types | Managed and unmanaged disks | Managed and unmanaged disks. Ultradisk not supported | | Disk Server Side Encryption with Customer Managed Keys | Yes | Yes | Yes | | Write Accelerator  | Yes | Yes | Yes |
-| Proximity Placement Groups  | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, read [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes |
+| Proximity Placement Groups  | Yes, when using one Availability Zone or none. Cannot be changed after deployment. Read the [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes, when using one Availability Zone or none. Can be changed after deployment by stopping all instances first. Read the [Proximity Placement Groups documentation](../virtual-machine-scale-sets/proximity-placement-groups.md) | Yes |
| Azure Dedicated Hosts | Yes | Yes | Yes | | Managed Identity | [User Assigned Identity](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md#user-assigned-managed-identity) only<sup>1</sup> | System Assigned or User Assigned | N/A (can specify Managed Identity on individual instances) | | Add/remove existing VM to the group | No | No | No |
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
Make sure the server roles running on the machine are supported by Sysprep. For
> > If you plan to run Sysprep before uploading your virtual hard disk (VHD) to Azure for the first time, make sure you have [prepared your VM](./windows/prepare-for-upload-vhd-image.md). >
-> We do not support custom answer file in the sysprep step, hence you should not use the "/unattend:_answerfile_" switch with your sysprep command.
->
+> We do not support custom answer file in the sysprep step, hence you should not use the "/unattend:_answerfile_" switch with your sysprep command.
+>
+> The Azure platform mounts an ISO file to the DVD-ROM when a Windows VM is created from a generalized image. For this reason, the **DVD-ROM must be enabled in the OS in the generalized image**. If it's disabled, the Windows VM will be stuck at the out-of-box experience (OOBE).
+ To generalize your Windows VM, follow these steps:
To generalize your Windows VM, follow these steps:
2. Open a Command Prompt window as an administrator. 3. Delete the panther directory (C:\Windows\Panther).
+4. Verify that the CD/DVD-ROM drive is enabled. If it's disabled, the Windows VM will be stuck at the out-of-box experience (OOBE). Check the `Start` value of the registry key `Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\cdrom` (value 4 = disabled, expected value 1 = automatic) and make sure it's set to 1:
+```
+reg query HKLM\SYSTEM\CurrentControlSet\Services\cdrom /v Start
+```
+> [!NOTE]
+> Verify whether any policies are applied that restrict removable storage access (for example: Computer Configuration\Administrative Templates\System\Removable Storage Access\All Removable Storage classes: Deny all access).
+ 5. Then change the directory to %windir%\system32\sysprep, and then run: ```
virtual-machines Azure Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-dns.md
Previously updated : 10/19/2016 Last updated : 04/11/2023
The following table illustrates scenarios and corresponding name resolution solu
| Reverse DNS for internal IPs |[Name resolution using your own DNS server](#name-resolution-using-your-own-dns-server) |n/a | ## Name resolution that Azure provides+ Along with resolution of public DNS names, Azure provides internal name resolution for virtual machines and role instances that are in the same virtual network. In virtual networks that are based on Azure Resource Manager, the DNS suffix is consistent across the virtual network; the FQDN is not needed. DNS names can be assigned to both network interface cards (NICs) and virtual machines. Although the name resolution that Azure provides does not require any configuration, it is not the appropriate choice for all deployment scenarios, as seen in the preceding table. ### Features and considerations+ **Features:** * No configuration is required to use name resolution that Azure provides.
Along with resolution of public DNS names, Azure provides internal name resoluti
Names must use only 0-9, a-z, and '-', and they cannot start or end with a '-'. See RFC 3696 Section 2. * DNS query traffic is throttled for each virtual machine. Throttling shouldn't impact most applications. If request throttling is observed, ensure that client-side caching is enabled. For more information, see [Getting the most from name resolution that Azure provides](#getting-the-most-from-name-resolution-that-azure-provides).
-### Getting the most from name resolution that Azure provides
+### Getting the most from name resolution that Azure provides
+ **Client-side caching:** Some DNS queries are not sent across the network. Client-side caching helps reduce latency and improve resilience to network inconsistencies by resolving recurring DNS queries from a local cache. DNS records contain a Time-To-Live (TTL), which enables the cache to store the record for as long as possible without impacting record freshness. As a result, client-side caching is suitable for most situations.
Some Linux distributions do not include caching by default. We recommend that yo
Several different DNS caching packages, such as dnsmasq, are available. Here are the steps to install dnsmasq on the most common distributions:
-**Ubuntu (uses resolvconf)**
- * Install the dnsmasq package ("sudo apt-get install dnsmasq").
+# [Ubuntu](#tab/ubuntu)
+
+1. Install the dnsmasq package:
+
+```bash
+sudo apt-get install dnsmasq
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+# [SUSE](#tab/sles)
+
+1. Install the dnsmasq package:
+
+```bash
+sudo zypper install dnsmasq
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+4. Edit the `/etc/sysconfig/network/config` file using a text editor, and change `NETCONFIG_DNS_FORWARDER=""` to `NETCONFIG_DNS_FORWARDER="dnsmasq"`.
+5. Update `/etc/resolv.conf` to set the cache as the local DNS resolver.
+
+```bash
+sudo netconfig update
+```
-**SUSE (uses netconf)**:
-1. Install the dnsmasq package ("sudo zypper install dnsmasq").
-2. Enable the dnsmasq service ("systemctl enable dnsmasq.service").
-3. Start the dnsmasq service ("systemctl start dnsmasq.service").
-4. Edit "/etc/sysconfig/network/config", and change NETCONFIG_DNS_FORWARDER="" to "dnsmasq".
-5. Update resolv.conf ("netconfig update") to set the cache as the local DNS resolver.
+# [CentOS/RHEL](#tab/rhel)
-**CentOS by Rogue Wave Software (formerly OpenLogic; uses NetworkManager)**
-1. Install the dnsmasq package ("sudo yum install dnsmasq").
-2. Enable the dnsmasq service ("systemctl enable dnsmasq.service").
-3. Start the dnsmasq service ("systemctl start dnsmasq.service").
-4. Add "prepend domain-name-servers 127.0.0.1;" to "/etc/dhclient-eth0.conf".
-5. Restart the network service ("service network restart") to set the cache as the local DNS resolver
+1. Install the dnsmasq package:
+
+```bash
+sudo yum install dnsmasq -y
+```
+
+2. Enable the dnsmasq service:
+
+```bash
+sudo systemctl enable dnsmasq.service
+```
+
+3. Start the dnsmasq service:
+
+```bash
+sudo systemctl start dnsmasq.service
+```
+
+4. Add `prepend domain-name-servers 127.0.0.1;` to `/etc/dhcp/dhclient.conf`.
+
+```bash
+echo "prepend domain-name-servers 127.0.0.1;" | sudo tee -a /etc/dhcp/dhclient.conf
+```
+
+5. Restart the network service to set the cache as the local DNS resolver:
+
+```bash
+sudo systemctl restart NetworkManager
+```
> [!NOTE]
-> : The 'dnsmasq' package is only one of the many DNS caches that are available for Linux. Before you use it, check its suitability for your needs and that no other cache is installed.
->
->
+> The `dnsmasq` package is only one of the many DNS caches that are available for Linux. Before you use it, check its suitability for your needs and that no other cache is installed.
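Once a cache such as dnsmasq is running, you can check that repeat queries are answered locally; a quick sketch, assuming `dig` is installed and dnsmasq is listening on 127.0.0.1:

```bash
# The first query goes over the network and populates the cache
dig @127.0.0.1 example.com | grep "Query time"
# The repeat query should be answered from the cache with a near-zero query time
dig @127.0.0.1 example.com | grep "Query time"
```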
+
+**Client-side retries**
DNS is primarily a UDP protocol. Because the UDP protocol doesn't guarantee mess
To check the current settings on a Linux virtual machine, 'cat /etc/resolv.conf', and look at the 'options' line, for example:
+```bash
+sudo cat /etc/resolv.conf
+```
+ ```config-conf options timeout:1 attempts:5 ```
-The resolv.conf file is auto-generated and should not be edited. The specific steps that add the 'options' line vary by distribution:
+The `/etc/resolv.conf` file is auto-generated and should not be edited. The specific steps that add the 'options' line vary by distribution:
**Ubuntu** (uses resolvconf)
-1. Add the options line to '/etc/resolvconf/resolv.conf.d/head'.
-2. Run 'resolvconf -u' to update.
+
+1. Add the options line to `/etc/resolvconf/resolv.conf.d/head` file.
+2. Run `sudo resolvconf -u` to update.
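For example, the two Ubuntu steps above might look like the following, assuming the resolvconf package is installed:

```bash
# Append the options line to the head file so it survives regeneration
echo "options timeout:1 attempts:5" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
# Regenerate /etc/resolv.conf
sudo resolvconf -u
```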
**SUSE** (uses netconf)
-1. Add 'timeout:1 attempts:5' to the NETCONFIG_DNS_RESOLVER_OPTIONS="" parameter in '/etc/sysconfig/network/config'.
-2. Run 'netconfig update' to update.
+
+1. Add `timeout:1 attempts:5` to the `NETCONFIG_DNS_RESOLVER_OPTIONS=""` parameter in `/etc/sysconfig/network/config`.
+2. Run `sudo netconfig update` to update.
**CentOS by Rogue Wave Software (formerly OpenLogic)** (uses NetworkManager)
-1. Add 'RES_OPTIONS="timeout:1 attempts:5"' to '/etc/sysconfig/network'.
-2. Run 'service network restart' to update.
+
+1. Add `RES_OPTIONS="timeout:1 attempts:5"` to `/etc/sysconfig/network`.
+2. Run `sudo systemctl restart NetworkManager` to update.
## Name resolution using your own DNS server+ Your name resolution needs may go beyond the features that Azure provides. For example, you might require DNS resolution between virtual networks. To cover this scenario, you can use your own DNS servers. DNS servers within a virtual network can forward DNS queries to recursive resolvers of Azure to resolve hostnames that are in the same virtual network. For example, a DNS server that runs in Azure can respond to DNS queries for its own DNS zone files and forward all other queries to Azure. This functionality enables virtual machines to see both your entries in your zone files and hostnames that Azure provides (via the forwarder). Access to the recursive resolvers of Azure is provided via the virtual IP 168.63.129.16.
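For example, if your own DNS server runs dnsmasq, a minimal sketch of the forwarder configuration (assuming dnsmasq is already installed) would be:

```bash
# Forward queries that your server can't answer from its own zone to Azure's recursive resolver
echo "server=168.63.129.16" | sudo tee -a /etc/dnsmasq.conf
sudo systemctl restart dnsmasq.service
```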
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/build-image-with-packer.md
Previously updated : 05/07/2019 Last updated : 04/11/2023
Each virtual machine (VM) in Azure is created from an image that defines the Lin
> [!NOTE] > Azure now has a service, Azure Image Builder, for defining and creating your own custom images. Azure Image Builder is built on Packer, so you can even use your existing Packer shell provisioner scripts with it. To get started with Azure Image Builder, see [Create a Linux VM with Azure Image Builder](image-builder.md). - ## Create Azure resource group+ During the build process, Packer creates temporary Azure resources as it builds the source VM. To capture that source VM for use as an image, you must define a resource group. The output from the Packer build process is stored in this resource group. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location:
-```azurecli
+```azurecli-interactive
az group create -n myResourceGroup -l eastus ``` - ## Create Azure credentials+ Packer authenticates with Azure using a service principal. An Azure service principal is a security identity that you can use with apps, services, and automation tools like Packer. You control and define the permissions as to what operations the service principal can perform in Azure. Create a service principal with [az ad sp create-for-rbac](/cli/azure/ad/sp) and output the credentials that Packer needs:
-```azurecli
+```azurecli-interactive
az ad sp create-for-rbac --role Contributor --scopes /subscriptions/<subscription_id> --query "{ client_id: appId, client_secret: password, tenant_id: tenant }" ```
An example of the output from the preceding commands is as follows:
To authenticate to Azure, you also need to obtain your Azure subscription ID with [az account show](/cli/azure/account):
-```azurecli
+```azurecli-interactive
az account show --query "{ subscription_id: id }" ``` You use the output from these two commands in the next step. - ## Define Packer template
-To build images, you create a template as a JSON file. In the template, you define builders and provisioners that carry out the actual build process. Packer has a [provisioner for Azure](https://www.packer.io/docs/builders/azure.html) that allows you to define Azure resources, such as the service principal credentials created in the preceding step.
+
+To build images, you create a template as a JSON file. In the template, you define builders and provisioners that carry out the actual build process. Packer has a [builder for Azure](https://developer.hashicorp.com/packer/plugins/builders/azure) that allows you to define Azure resources, such as the service principal credentials created in the preceding step.
Create a file named *ubuntu.json* and paste the following content. Enter your own values for the following parameters:
Create a file named *ubuntu.json* and paste the following content. Enter your ow
| *managed_image_resource_group_name* | Name of resource group you created in the first step | | *managed_image_name* | Name for the managed disk image that is created | - ```json { "builders": [{
Create a file named *ubuntu.json* and paste the following content. Enter your ow
"os_type": "Linux", "image_publisher": "Canonical", "image_offer": "UbuntuServer",
- "image_sku": "16.04-LTS",
+ "image_sku": "20.04-LTS",
"azure_tags": { "dept": "Engineering",
Create a file named *ubuntu.json* and paste the following content. Enter your ow
}] } ```+
+> [!NOTE]
+> Replace the `image_publisher`, `image_offer`, `image_sku` values and `inline` commands accordingly.
+ You can also create a file named *ubuntu.pkr.hcl* and paste the following content, using your own values from the parameters table above. ```HCL
source "azure-arm" "autogenerated_1" {
client_secret = "0e760437-bf34-4aad-9f8d-870be799c55d" image_offer = "UbuntuServer" image_publisher = "Canonical"
- image_sku = "16.04-LTS"
+ image_sku = "20.04-LTS"
location = "East US" managed_image_name = "myPackerImage" managed_image_resource_group_name = "myResourceGroup"
build {
} ``` - This template builds an Ubuntu 20.04 LTS image, installs NGINX, then deprovisions the VM. > [!NOTE] > If you expand on this template to provision user credentials, adjust the provisioner command that deprovisions the Azure agent to read `-deprovision` rather than `deprovision+user`. > The `+user` flag removes all user accounts from the source VM. - ## Build Packer image+ If you don't already have Packer installed on your local machine, [follow the Packer installation instructions](https://www.packer.io/docs/install). Build the image by specifying your Packer template file as follows:
-./packer build ubuntu.json
+sudo ./packer build ubuntu.json
``` You can also build the image by specifying the *ubuntu.pkr.hcl* file as follows: ```bash
-packer build ubuntu.pkr.hcl
+sudo packer build ubuntu.pkr.hcl
``` An example of the output from the preceding commands is as follows:
ManagedImageLocation: eastus
It takes a few minutes for Packer to build the VM, run the provisioners, and clean up the deployment. - ## Create VM from Azure Image+ You can now create a VM from your Image with [az vm create](/cli/azure/vm). Specify the Image you created with the `--image` parameter. The following example creates a VM named *myVM* from *myPackerImage* and generates SSH keys if they don't already exist:
-```azurecli
+```azurecli-interactive
az vm create \ --resource-group myResourceGroup \ --name myVM \
It takes a few minutes to create the VM. Once the VM has been created, take note
To allow web traffic to reach your VM, open port 80 from the Internet with [az vm open-port](/cli/azure/vm):
-```azurecli
+```azurecli-interactive
az vm open-port \ --resource-group myResourceGroup \ --name myVM \
az vm open-port \
``` ## Test VM and NGINX+ Now you can open a web browser and enter `http://publicIpAddress` in the address bar. Provide your own public IP address from the VM create process. The default NGINX page is displayed as in the following example: ![NGINX default site](./media/build-image-with-packer/nginx.png) - ## Next steps You can also use existing Packer provisioner scripts with [Azure Image Builder](image-builder.md).
virtual-machines Cli Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/cli-manage.md
Previously updated : 05/12/2017 Last updated : 04/11/2023 # Common Azure CLI commands for managing Azure resources
The Azure CLI allows you to create and manage your Azure resources on macOS, Lin
This article requires the Azure CLI version 2.0.4 or later. Run `az --version` to find the version. If you need to upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). You can also use [Cloud Shell](../../cloud-shell/quickstart.md) from your browser. ## Basic Azure Resource Manager commands in Azure CLI+ For more detailed help with specific command line switches and options, you can use the online command help and options by typing `az <command> <subcommand> --help`. ### Create VMs+ | Task | Azure CLI commands | | | | | Create a resource group | `az group create --name myResourceGroup --location eastus` |
-| Create a Linux VM | `az vm create --resource-group myResourceGroup --name myVM --image ubuntults` |
+| Create a Linux VM | `az vm create --resource-group myResourceGroup --name myVM --image LinuxImageName` |
| Create a Windows VM | `az vm create --resource-group myResourceGroup --name myVM --image win2016datacenter` | ### Manage VM state+ | Task | Azure CLI commands | | | | | Start a VM | `az vm start --resource-group myResourceGroup --name myVM` |
For more detailed help with specific command line switches and options, you can
| Delete a VM | `az vm delete --resource-group myResourceGroup --name myVM` | ### Get VM info+ | Task | Azure CLI commands | | | | | List VMs | `az vm list` |
For more detailed help with specific command line switches and options, you can
| Get all available VM sizes | `az vm list-sizes --location eastus` | ## Disks and images+ | Task | Azure CLI commands | | | | | Add a data disk to a VM | `az vm disk attach --resource-group myResourceGroup --vm-name myVM --disk myDataDisk --size-gb 128 --new` |
For more detailed help with specific command line switches and options, you can
| Create image of a VM | `az image create --resource-group myResourceGroup --source myVM --name myImage` | | Create VM from image | `az vm create --resource-group myResourceGroup --name myNewVM --image myImage` | - ## Next steps+ For additional examples of the CLI commands, see the [Create and Manage Linux VMs with the Azure CLI](tutorial-manage-vm.md) tutorial.
virtual-machines Disable Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disable-provisioning.md
Previously updated : 07/06/2020 Last updated : 04/11/2023
Before removing the Linux Agent, you must understand what the VM will not be able
Azure virtual machine (VM) [extensions](../extensions/overview.md) are small applications that provide post-deployment configuration and automation tasks on Azure VMs; extensions are installed and managed by the Azure control plane. It is the job of the [Azure Linux Agent](../extensions/agent-linux.md) to process the platform extension commands and ensure the correct state of the extension inside the VM. The Azure platform hosts many extensions that range from VM configuration, monitoring, and security to utility applications. There is a large choice of first-party and third-party extensions; examples of key scenarios that extensions are used for:+
The Azure platform hosts many extensions that range from VM configuration, monit
There are several ways to disable extension processing, depending on your needs, but before you continue, you **MUST** remove all extensions deployed to the VM. For example, using the Azure CLI, you can [list](/cli/azure/vm/extension#az-vm-extension-list) and [delete](/cli/azure/vm/extension#az-vm-extension-delete) them:
-```azurecli
+```azurecli-interactive
az vm extension delete -g MyResourceGroup --vm-name MyVm -n extension_name ```+ > [!Note]
->
+>
> If you do not do the above, the platform will try to send the extension configuration and timeout after 40min. ### Disable at the control plane+ If you are not sure whether you will need extensions in the future, you can leave the Linux Agent installed on the VM, then disable extension processing capability from the platform. This is option is available in `Microsoft.Compute` api version `2018-06-01` or higher, and does not have a dependency on the Linux Agent version installed.
-```azurecli
+```azurecli-interactive
az vm update -g <resourceGroup> -n <vmName> --set osProfile.allowExtensionOperations=false ```+ You can reenable extension processing from the platform at any time by running the same command with the value set to 'true'. ## Remove the Linux Agent from a running VM
Ensure you have **removed** all existing extensions from the VM before, as per a
If you just remove the Linux Agent, and not the associated configuration artifacts, you can reinstall at a later date. Run one of the following, as root, to remove the Azure Linux Agent:
-#### For Ubuntu >=18.04
+#### For Ubuntu 18.04+
+ ```bash
-apt -y remove walinuxagent
+sudo apt -y remove walinuxagent
```
-#### For Redhat >= 7.7
+#### For Red Hat 7.X, 8.X, and 9.X
+ ```bash
-yum -y remove WALinuxAgent
+sudo yum -y remove WALinuxAgent
```
-#### For SUSE
+#### For SUSE 12.X, 15.X
+ ```bash
-zypper --non-interactive remove python-azure-agent
+sudo zypper --non-interactive remove python-azure-agent
``` ### Step 2: (Optional) Remove the Azure Linux Agent artifacts
-> [!IMPORTANT]
+
+> [!IMPORTANT]
> > You can remove all associated artifacts of the Linux Agent, but this will mean you cannot reinstall it at a later date. Therefore, it is strongly recommended you consider disabling the Linux Agent first, removing the Linux Agent using the above only. If you know you will not ever reinstall the Linux Agent again, then you can run the following:
-#### For Ubuntu >=18.04
+#### For Ubuntu 18.04+
+ ```bash
-apt -y purge walinuxagent
-rm -rf /var/lib/waagent
-rm -f /var/log/waagent.log
+sudo apt -y purge walinuxagent
+sudo cp -rp /var/lib/waagent /var/lib/waagent.bkp
+sudo rm -rf /var/lib/waagent
+sudo rm -f /var/log/waagent.log
```
-#### For Redhat >= 7.7
+#### For Red Hat 7.X, 8.X, 9.X
+ ```bash
-yum -y remove WALinuxAgent
-rm -f /etc/waagent.conf.rpmsave
-rm -rf /var/lib/waagent
-rm -f /var/log/waagent.log
+sudo yum -y remove WALinuxAgent
+sudo rm -f /etc/waagent.conf.rpmsave
+sudo rm -rf /var/lib/waagent
+sudo rm -f /var/log/waagent.log
```
-#### For SUSE
+#### For SUSE 12.X, 15.X
+ ```bash
-zypper --non-interactive remove python-azure-agent
-rm -f /etc/waagent.conf.rpmsave
-rm -rf /var/lib/waagent
-rm -f /var/log/waagent.log
+sudo zypper --non-interactive remove python-azure-agent
+sudo rm -f /etc/waagent.conf.rpmsave
+sudo rm -rf /var/lib/waagent
+sudo rm -f /var/log/waagent.log
``` ## Preparing an image without the Linux Agent+ If you have an image that already contains cloud-init, and you want to remove the Linux agent, but still provision using cloud-init, run the steps in Step 2 (and optionally Step 3) as root to remove the Azure Linux Agent and then the following will remove the cloud-init configuration and cached data, and prepare the VM to create a custom image. ```bash
-cloud-init clean --logs --seed
+sudo cloud-init clean --logs --seed
``` ## Deprovision and create an image+ The Linux Agent can clean up some of the existing image metadata with the "waagent -deprovision+user" step. However, after the agent has been removed, you will need to perform actions such as the following and remove any other sensitive data from the image. -- Remove all existing ssh host keys
+* Remove all existing ssh host keys
```bash
- rm /etc/ssh/ssh_host_*key*
+ sudo rm /etc/ssh/ssh_host_*key*
```-- Delete the admin account+
+* Delete the admin account
```bash
- touch /var/run/utmp
- userdel -f -r <admin_user_account>
+ sudo touch /var/run/utmp
+ sudo userdel -f -r <admin_user_account>
```-- Delete the root password+
+* Delete the root password
```bash
- passwd -d root
+ sudo passwd -d root
``` Once you have completed the above, you can create the custom image using the Azure CLI.
+### Create a regular managed image
-**Create a regular managed image**
-```azurecli
+```azurecli-interactive
az vm deallocate -g <resource_group> -n <vm_name> az vm generalize -g <resource_group> -n <vm_name> az image create -g <resource_group> -n <image_name> --source <vm_name> ```
-**Create an image version in a Azure Compute Gallery**
+### Create an image version in an Azure Compute Gallery
-```azurecli
+```azurecli-interactive
az sig image-version create \ -g $sigResourceGroup --gallery-name $sigName
az sig image-version create \
--gallery-image-version 1.0.0 --managed-image /subscriptions/00000000-0000-0000-0000-00000000xxxx/resourceGroups/imageGroups/providers/images/MyManagedImage ```+ ### Creating a VM from an image that does not contain a Linux Agent+ When you create the VM from the image with no Linux Agent, you need to ensure the VM deployment configuration indicates extensions are not supported on this VM.
-> [!NOTE]
->
+> [!NOTE]
+>
> If you do not do the above, the platform will try to send the extension configuration and timeout after 40min. To deploy the VM with extensions disabled, you can use the Azure CLI with [--enable-agent](/cli/azure/vm#az-vm-create).
-```azurecli
+```azurecli-interactive
az vm create \ --resource-group $resourceGroup \ --name $prodVmName \
virtual-machines How To Resize Encrypted Lvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-resize-encrypted-lvm.md
Previously updated : 09/21/2020 Last updated : 04/11/2023 # How to resize logical volume management devices that use Azure Disk Encryption
You can use this resizing process in the following environments:
- Linux distributions: - Red Hat Enterprise Linux (RHEL) 7 or later
- - Ubuntu 16 or later
+ - Ubuntu 18.04 or later
- SUSE 12 or later-- Azure Disk Encryption versions:
+- Azure Disk Encryption versions:
- Single-pass extension - Dual-pass extension
This article assumes that you have:
- Experience using Linux and LVM. -- Experience using */dev/disk/scsi1/* paths for data disks on Azure. For more information, see [Troubleshoot Linux VM device name problems](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
+- Experience using */dev/disk/azure/scsi1/* paths for data disks on Azure. For more information, see [Troubleshoot Linux VM device name problems](/troubleshoot/azure/virtual-machines/troubleshoot-device-names-problems).
## Scenarios The procedures in this article apply to the following scenarios: - Traditional LVM and LVM-on-crypt configurations-- Traditional LVM encryption -- LVM-on-crypt
+- Traditional LVM encryption
+- LVM-on-crypt
### Traditional LVM and LVM-on-crypt configurations Traditional LVM and LVM-on-crypt configurations extend a logical volume (LV) when the volume group (VG) has available space.
-### Traditional LVM encryption
+### Traditional LVM encryption
In traditional LVM encryption, LVs are encrypted. The whole disk isn't encrypted.
By using traditional LVM encryption, you can:
- Extend the LV when you add a new physical volume (PV). - Extend the LV when you resize an existing PV.
-### LVM-on-crypt
+### LVM-on-crypt
The recommended method for disk encryption is LVM-on-encrypt. This method encrypts the entire disk, not just the LV.
-By using LVM-on-crypt, you can:
+By using LVM-on-crypt, you can:
- Extend the LV when you add a new PV. - Extend the LV when you resize an existing PV.
The traditional way to resize LVs is to extend an LV when the VG has space avail
2. Verify that the VG has enough space to increase the LV: ```bash
- vgs
+ sudo vgs
``` ![Screenshot showing the code that checks for space on the VG. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/002-resize-lvm-scenarioa-check-vgs.png)
The traditional way to resize LVs is to extend an LV when the VG has space avail
You can also use `vgdisplay`: ```bash
- vgdisplay vgname
+ sudo vgdisplay vgname
``` ![Screenshot showing the V G display code that checks for space on the VG. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/002-resize-lvm-scenarioa-check-vgdisplay.png)
The traditional way to resize LVs is to extend an LV when the VG has space avail
3. Identify which LV needs to be resized: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the result of the l s b l k command. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/002-resize-lvm-scenarioa-check-lsblk1.png)
The traditional way to resize LVs is to extend an LV when the VG has space avail
4. Check the LV size: ```bash
- lvdisplay lvname
+ sudo lvdisplay lvname
``` ![Screenshot showing the code that checks the logical volume size. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/002-resize-lvm-scenarioa-check-lvdisplay01.png)
The traditional way to resize LVs is to extend an LV when the VG has space avail
5. Increase the LV size by using `-r` to resize the file system online: ```bash
- lvextend -r -L +2G /dev/vgname/lvname
+ sudo lvextend -r -L +2G /dev/vgname/lvname
``` ![Screenshot showing the code that increases the size of the logical volume. The command and the results are highlighted.](./media/disk-encryption/resize-lvm/003-resize-lvm-scenarioa-resize-lv.png)
The traditional way to resize LVs is to extend an LV when the VG has space avail
You can check the LV information again to confirm the changes at the level of the LV: ```bash
-lvdisplay lvname
+sudo lvdisplay lvname
``` ![Screenshot showing the code that confirms the new sizes. The sizes are highlighted.](./media/disk-encryption/resize-lvm/004-resize-lvm-scenarioa-check-lvdisplay2.png)
When you need to add a new disk to increase the VG size, extend your traditional
2. Verify the current PV configuration: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that checks the current PV configuration. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/006-resize-lvm-scenariob-check-pvs.png)
When you need to add a new disk to increase the VG size, extend your traditional
3. Check the current VG information: ```bash
- vgs
+ sudo vgs
``` ![Screenshot showing the code that checks the current volume group information. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/007-resize-lvm-scenariob-check-vgs.png)
When you need to add a new disk to increase the VG size, extend your traditional
4. Check the current disk list. Identify data disks by checking the devices in */dev/disk/azure/scsi1/*. ```bash
- ls -l /dev/disk/azure/scsi1/
+ sudo ls -l /dev/disk/azure/scsi1/
``` ![Screenshot showing the code that checks the current disk list. The command and the results are highlighted.](./media/disk-encryption/resize-lvm/008-resize-lvm-scenariob-check-scs1.png)
-5. Check the output of `lsblk`:
+5. Check the output of `lsblk`:
```bash
- lsbk
+    sudo lsblk
``` ![Screenshot showing the code that checks the output of l s b l k. The command and the results are highlighted.](./media/disk-encryption/resize-lvm/008-resize-lvm-scenariob-check-lsblk.png)
When you need to add a new disk to increase the VG size, extend your traditional
7. Check the disk list, and notice the new disk. ```bash
- ls -l /dev/disk/azure/scsi1/
+ sudo ls -l /dev/disk/azure/scsi1/
``` ![Screenshot showing the code that checks the disk list. The results are highlighted.](./media/disk-encryption/resize-lvm/009-resize-lvm-scenariob-check-scsi12.png) ```bash
- lsbk
+    sudo lsblk
``` ![Screenshot showing the code that checks the disk list by using l s b l k. The command and the result are highlighted.](./media/disk-encryption/resize-lvm/009-resize-lvm-scenariob-check-lsblk1.png)
When you need to add a new disk to increase the VG size, extend your traditional
8. Create a new PV on top of the new data disk: ```bash
- pvcreate /dev/newdisk
+ sudo pvcreate /dev/newdisk
``` ![Screenshot showing the code that creates a new PV. The result is highlighted.](./media/disk-encryption/resize-lvm/010-resize-lvm-scenariob-pvcreate.png)
When you need to add a new disk to increase the VG size, extend your traditional
9. Verify that the PV was added to the PV list: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that shows the physical volume list. The result is highlighted.](./media/disk-encryption/resize-lvm/011-resize-lvm-scenariob-check-pvs1.png)
When you need to add a new disk to increase the VG size, extend your traditional
10. Extend the VG by adding the new PV to it: ```bash
- vgextend vgname /dev/newdisk
+ sudo vgextend vgname /dev/newdisk
``` ![Screenshot showing the code that extends the volume group. The result is highlighted.](./media/disk-encryption/resize-lvm/012-resize-lvm-scenariob-vgextend.png)
When you need to add a new disk to increase the VG size, extend your traditional
11. Check the new VG size: ```bash
- vgs
+ sudo vgs
``` ![Screenshot showing the code that checks the volume group size. The results are highlighted.](./media/disk-encryption/resize-lvm/013-resize-lvm-scenariob-check-vgs1.png)
When you need to add a new disk to increase the VG size, extend your traditional
12. Use `lsblk` to identify the LV that needs to be resized: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the code that identifies the local volume that needs to be resized. The results are highlighted.](./media/disk-encryption/resize-lvm/013-resize-lvm-scenariob-check-lsblk1.png)
When you need to add a new disk to increase the VG size, extend your traditional
13. Extend the LV size by using `-r` to increase the file system online: ```bash
- lvextend -r -L +2G /dev/vgname/lvname
+ sudo lvextend -r -L +2G /dev/vgname/lvname
``` ![Screenshot showing code that increases the size of the file system online. The results are highlighted.](./media/disk-encryption/resize-lvm/013-resize-lvm-scenariob-lvextend.png)
When you need to add a new disk to increase the VG size, extend your traditional
![Screenshot showing encryption information in the portal. The disk name and the encryption are highlighted.](./media/disk-encryption/resize-lvm/014-resize-lvm-scenariob-check-portal1.png) To update the encryption settings on the disk, add a new LV and enable the extension on the VM.
-
+ 16. Add a new LV, create a file system on it, and add it to `/etc/fstab`. 17. Set the encryption extension again. This time you'll stamp the encryption settings on the new data disk at the platform level. Here's a CLI example:
- ```azurecli
    ```azurecli-interactive
az vm encryption enable -g ${RGNAME} --name ${VMNAME} --disk-encryption-keyvault "<your-unique-keyvault-name>" ```
Follow these steps to finish cleaning up:
1. Unmount the LV: ```bash
- umount /mountpoint
+ sudo umount /mountpoint
``` 1. Close the encrypted layer of the volume: ```bash
- cryptsetup luksClose /dev/vgname/lvname
+ sudo cryptsetup luksClose /dev/vgname/lvname
``` 1. Delete the LV: ```bash
- lvremove /dev/vgname/lvname
+ sudo lvremove /dev/vgname/lvname
``` #### Extend a traditional LVM volume by resizing an existing PV
In some scenarios, your limitations might require you to resize an existing disk
1. Identify your encrypted disks: ```bash
- ls -l /dev/disk/azure/scsi1/
+ sudo ls -l /dev/disk/azure/scsi1/
``` ![Screenshot showing the code that identifies encrypted disks. The results are highlighted.](./media/disk-encryption/resize-lvm/015-resize-lvm-scenarioc-check-scsi1.png) ```bash
- lsblk -fs
+ sudo lsblk -fs
``` ![Screenshot showing alternative code that identifies encrypted disks. The results are highlighted.](./media/disk-encryption/resize-lvm/015-resize-lvm-scenarioc-check-lsblk.png)
In some scenarios, your limitations might require you to resize an existing disk
2. Check the PV information: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that checks information about the physical volume. The results are highlighted.](./media/disk-encryption/resize-lvm/016-resize-lvm-scenarioc-check-pvs.png)
In some scenarios, your limitations might require you to resize an existing disk
3. Check the VG information: ```bash
- vgs
- vgdisplay -v vgname
+ sudo vgs
+ sudo vgdisplay -v vgname
``` ![Screenshot showing the code that checks information about the volume group. The results are highlighted.](./media/disk-encryption/resize-lvm/017-resize-lvm-scenarioc-check-vgs.png)
In some scenarios, your limitations might require you to resize an existing disk
4. Check the disk sizes. You can use `fdisk` or `lsblk` to list the drive sizes. ```bash
- for disk in `ls -l /dev/disk/azure/scsi1/* | awk -F/ '{print $NF}'` ; do echo "fdisk -l /dev/${disk} | grep ^Disk "; done | bash
+ for disk in `sudo ls -l /dev/disk/azure/scsi1/* | awk -F/ '{print $NF}'` ; do echo "sudo fdisk -l /dev/${disk} | grep ^Disk "; done | bash
- lsblk -o "NAME,SIZE"
+ sudo lsblk -o "NAME,SIZE"
``` ![Screenshot showing the code that checks disk sizes. The results are highlighted.](./media/disk-encryption/resize-lvm/018-resize-lvm-scenarioc-check-fdisk.png)
In some scenarios, your limitations might require you to resize an existing disk
Here we identified which PVs are associated with which LVs by using `lsblk -fs`. You can identify the associations by running `lvdisplay`. ```bash
- lvdisplay --maps VG/LV
- lvdisplay --maps datavg/datalv1
+ sudo lvdisplay --maps VG/LV
+ sudo lvdisplay --maps datavg/datalv1
``` ![Screenshot showing an alternative way to identify physical volume associations with local volumes. The results are highlighted.](./media/disk-encryption/resize-lvm/019-resize-lvm-scenarioc-check-lvdisplay.png)
In some scenarios, your limitations might require you to resize an existing disk
7. Start the VM and check the new sizes by using `fdisk`. ```bash
- for disk in `ls -l /dev/disk/azure/scsi1/* | awk -F/ '{print $NF}'` ; do echo "fdisk -l /dev/${disk} | grep ^Disk "; done | bash
+ for disk in `sudo ls -l /dev/disk/azure/scsi1/* | awk -F/ '{print $NF}'` ; do echo "sudo fdisk -l /dev/${disk} | grep ^Disk "; done | bash
- lsblk -o "NAME,SIZE"
+ sudo lsblk -o "NAME,SIZE"
``` ![Screenshot showing the code that checks disk size. The result is highlighted.](./media/disk-encryption/resize-lvm/021-resize-lvm-scenarioc-check-fdisk1.png)
In some scenarios, your limitations might require you to resize an existing disk
8. Check the current PV size: ```bash
- pvdisplay /dev/resizeddisk
+ sudo pvdisplay /dev/resizeddisk
``` ![Screenshot showing the code that checks the size of the P V. The result is highlighted.](./media/disk-encryption/resize-lvm/022-resize-lvm-scenarioc-check-pvdisplay.png)
-
+ Even though the disk was resized, the PV still has the previous size. 9. Resize the PV: ```bash
- pvresize /dev/resizeddisk
+ sudo pvresize /dev/resizeddisk
``` ![Screenshot showing the code that resizes the physical volume. The result is highlighted.](./media/disk-encryption/resize-lvm/023-resize-lvm-scenarioc-check-pvresize.png)
In some scenarios, your limitations might require you to resize an existing disk
10. Check the PV size: ```bash
- pvdisplay /dev/resizeddisk
+ sudo pvdisplay /dev/resizeddisk
``` ![Screenshot showing the code that checks the physical volume's size. The result is highlighted.](./media/disk-encryption/resize-lvm/024-resize-lvm-scenarioc-check-pvdisplay1.png)
In some scenarios, your limitations might require you to resize an existing disk
11. Check the VG information. ```bash
- vgdisplay vgname
+ sudo vgdisplay vgname
``` ![Screenshot showing the code that checks information for the volume group. The result is highlighted.](./media/disk-encryption/resize-lvm/025-resize-lvm-scenarioc-check-vgdisplay1.png)
In some scenarios, your limitations might require you to resize an existing disk
12. Resize the LV: ```bash
- lvresize -r -L +5G vgname/lvname
- lvresize -r -l +100%FREE /dev/datavg/datalv01
+ sudo lvresize -r -L +5G vgname/lvname
+ sudo lvresize -r -l +100%FREE /dev/datavg/datalv01
``` ![Screenshot showing the code that resizes the L V. The results are highlighted.](./media/disk-encryption/resize-lvm/031-resize-lvm-scenarioc-check-lvresize1.png)
You can use this method to add space to an existing LV. Or you can create new VG
1. Verify the current size of your VG: ```bash
- vgdisplay vgname
+ sudo vgdisplay vgname
``` ![Screenshot showing the code that checks the volume group size. Results are highlighted.](./media/disk-encryption/resize-lvm/033-resize-lvm-scenarioe-check-vg01.png)
You can use this method to add space to an existing LV. Or you can create new VG
2. Verify the size of the file system and LV that you want to expand: ```bash
- lvdisplay /dev/vgname/lvname
+ sudo lvdisplay /dev/vgname/lvname
``` ![Screenshot showing the code that checks the size of the local volume. Results are highlighted.](./media/disk-encryption/resize-lvm/034-resize-lvm-scenarioe-check-lv01.png)
You can use this method to add space to an existing LV. Or you can create new VG
Before you add the new disk, check the disks: ```bash
- fdisk -l | egrep ^"Disk /"
+ sudo fdisk -l | egrep ^"Disk /"
``` ![Screenshot showing the code that checks the size of the disks. The result is highlighted.](./media/disk-encryption/resize-lvm/035-resize-lvm-scenarioe-check-newdisk01.png)
You can use this method to add space to an existing LV. Or you can create new VG
Here's another way to check the disks before you add the new disk: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing an alternative code that checks the size of the disks. The results are highlighted.](./media/disk-encryption/resize-lvm/035-resize-lvm-scenarioe-check-newdisk02.png)
You can use this method to add space to an existing LV. Or you can create new VG
4. Check the disks to make sure the new disk has been added: ```bash
- fdisk -l | egrep ^"Disk /"
+ sudo fdisk -l | egrep ^"Disk /"
``` ![Screenshot showing the code that lists the disks. The results are highlighted.](./media/disk-encryption/resize-lvm/036-resize-lvm-scenarioe-check-newdisk02.png) ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the newly added disk in the output.](./media/disk-encryption/resize-lvm/036-resize-lvm-scenarioe-check-newdisk03.png)
You can use this method to add space to an existing LV. Or you can create new VG
5. Create a file system on top of the recently added disk. Match the disk to the linked devices on `/dev/disk/azure/scsi1/`. ```bash
- ls -la /dev/disk/azure/scsi1/
+ sudo ls -la /dev/disk/azure/scsi1/
``` ![Screenshot showing the code that creates a file system. The results are highlighted.](./media/disk-encryption/resize-lvm/037-resize-lvm-scenarioe-check-newdisk03.png) ```bash
- mkfs.ext4 /dev/disk/azure/scsi1/${disk}
+ sudo mkfs.ext4 /dev/disk/azure/scsi1/${disk}
``` ![Screenshot showing additional code that creates a file system and matches the disk to the linked devices. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-mkfs01.png)
You can use this method to add space to an existing LV. Or you can create new VG
```bash newmount=/data4
- mkdir ${newmount}
+ sudo mkdir ${newmount}
``` 7. Add the recently created file system to `/etc/fstab`. ```bash
- blkid /dev/disk/azure/scsi1/lun4| awk -F\" '{print "UUID="$2" '${newmount}' "$4" defaults,nofail 0 0"}' >> /etc/fstab
+ sudo blkid /dev/disk/azure/scsi1/lun4 | awk -F\" '{print "UUID="$2" '${newmount}' "$4" defaults,nofail 0 0"}' | sudo tee -a /etc/fstab
``` 8. Mount the newly created file system: ```bash
- mount -a
+ sudo mount -a
``` 9. Verify that the new file system is mounted:
You can use this method to add space to an existing LV. Or you can create new VG
![Screenshot showing the code that verifies that the file system is mounted. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-df.png) ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing additional code that verifies that the file system is mounted. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lsblk.png)
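As a quick check, assuming the `${newmount}` variable defined earlier is still set:

```bash
# Sketch: confirm the new file system is mounted where expected.
df -h ${newmount}
findmnt ${newmount}
```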
You can use this method to add space to an existing LV. Or you can create new VG
Here's an example:
- ```azurecli
+ ```azurecli-interactive
az vm encryption enable \ --resource-group ${RGNAME} \ --name ${VMNAME} \
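# Illustrative continuation (an assumption, not from this article's truncated
# example): a typical invocation also names the key vault and restricts the
# operation to data volumes, for example:
#   --disk-encryption-keyvault ${KEYVAULTNAME} \
#   --volume-type DATA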
You can use this method to add space to an existing LV. Or you can create new VG
When the encryption finishes, you see a crypt layer on the newly added disk: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the code that checks the crypt layer. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lsblk2.png)
You can use this method to add space to an existing LV. Or you can create new VG
11. Unmount the encrypted layer of the new disk: ```bash
- umount ${newmount}
+ sudo umount ${newmount}
``` 12. Check the current PV information: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that checks information about the physical volume. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-currentpvs.png)
You can use this method to add space to an existing LV. Or you can create new VG
13. Create a PV on top of the encrypted layer of the disk. Take the device name from the previous `lsblk` command. Add a `/dev/` mapper in front of the device name to create the PV: ```bash
- pvcreate /dev/mapper/mapperdevicename
+ sudo pvcreate /dev/mapper/mapperdevicename
``` ![Screenshot showing the code that creates a physical volume on the encrypted layer. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-pvcreate.png)
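If you're unsure which name to substitute for `mapperdevicename`, a hedged way to list only the crypt-type devices:

```bash
# Sketch: print the crypt-layer device names reported by lsblk.
sudo lsblk --list -o NAME,TYPE | awk '$2 == "crypt" {print $1}'
```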
You can use this method to add space to an existing LV. Or you can create new VG
14. Verify that the new PV was added to the LVM configuration: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that verifies that the physical volume was added to the LVM configuration. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-newpv.png)
You can use this method to add space to an existing LV. Or you can create new VG
15. Add the new PV to the VG that you need to increase. ```bash
- vgextend vgname /dev/mapper/nameofhenewpv
+ sudo vgextend vgname /dev/mapper/nameofthenewpv
``` ![Screenshot showing the code that adds a physical volume to a volume group. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-vgextent.png)
You can use this method to add space to an existing LV. Or you can create new VG
16. Verify the new size and free space of the VG: ```bash
- vgdisplay vgname
+ sudo vgdisplay vgname
``` ![Screenshot showing the code that verifies the size and free space of the volume group. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-vgdisplay.png)
You can use this method to add space to an existing LV. Or you can create new VG
17. Increase the size of the LV and the file system. Use the `-r` option on `lvextend`. In this example, we're adding the total available space in the VG to the given LV. ```bash
- lvextend -r -l +100%FREE /dev/vgname/lvname
+ sudo lvextend -r -l +100%FREE /dev/vgname/lvname
``` ![Screenshot showing the code that increases the size of the local volume and the file system. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lvextend.png)
Follow the next steps to verify your changes.
1. Verify the size of the LV: ```bash
- lvdisplay /dev/vgname/lvname
+ sudo lvdisplay /dev/vgname/lvname
``` ![Screenshot showing the code that verifies the new size of the local volume. The results are highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lvdisplay.png)
Follow the next steps to verify your changes.
1. Verify the new size of the file system: ```bash
- df -h mountpoint
+ df -h /mountpoint
``` ![Screenshot showing the code that verifies the new size of the file system. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-df1.png)
Follow the next steps to verify your changes.
1. Verify that the LVM layer is on top of the encrypted layer: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the code that verifies that the LVM layer is on top of the encrypted layer. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lsblk3.png)
Follow the next steps to verify your changes.
You might want to use `lsblk -fs`. In this command, `-f` adds file-system details and `-s` lists each device in inverse dependency order, so each mount point is shown once while the underlying disks can appear multiple times. ```bash
- lsblk -fs
+ sudo lsblk -fs
``` ![Screenshot showing alternative code that verifies that the LVM layer is on top of the encrypted layer. The result is highlighted.](./media/disk-encryption/resize-lvm/038-resize-lvm-scenarioe-lsblk4.png)
Follow the next steps to verify your changes.
1. Identify your encrypted disks: ```bash
- lsblk
+ sudo lsblk
``` ![Screenshot showing the code that identifies the encrypted disks. The results are highlighted.](./media/disk-encryption/resize-lvm/039-resize-lvm-scenariof-lsblk01.png) ```bash
- lsblk -s
+ sudo lsblk -s
``` ![Screenshot showing alternative code that identifies the encrypted disks. The results are highlighted.](./media/disk-encryption/resize-lvm/040-resize-lvm-scenariof-lsblk012.png)
Follow the next steps to verify your changes.
2. Check your PV information: ```bash
- pvs
+ sudo pvs
``` ![Screenshot showing the code that checks information for physical volumes. The results are highlighted.](./media/disk-encryption/resize-lvm/041-resize-lvm-scenariof-pvs.png)
Follow the next steps to verify your changes.
3. Check your VG information: ```bash
- vgs
+ sudo vgs
``` ![Screenshot showing the code that checks information for volume groups. The results are highlighted.](./media/disk-encryption/resize-lvm/042-resize-lvm-scenariof-vgs.png)
Follow the next steps to verify your changes.
4. Check your LV information: ```bash
- lvs
+ sudo lvs
``` ![Screenshot showing the code that checks information for the local volume. The result is highlighted.](./media/disk-encryption/resize-lvm/043-resize-lvm-scenariof-lvs.png)
Follow the next steps to verify your changes.
6. Check the sizes of your disks: ```bash
- fdisk
- fdisk -l | egrep ^"Disk /"
- lsblk
+ sudo fdisk -l | egrep ^"Disk /"
+ sudo lsblk
``` ![Screenshot showing the code that checks the size of disks. The results are highlighted.](./media/disk-encryption/resize-lvm/045-resize-lvm-scenariof-fdisk01.png)
Follow the next steps to verify your changes.
8. Check your disk sizes: ```bash
- fdisk
- fdisk -l | egrep ^"Disk /"
- lsblk
+ sudo fdisk -l | egrep ^"Disk /"
+ sudo lsblk
``` ![Screenshot showing code that checks disk sizes. The results are highlighted.](./media/disk-encryption/resize-lvm/046-resize-lvm-scenariof-fdisk02.png)
Follow the next steps to verify your changes.
9. Check the current PV size. Remember that on LVM-on-crypt, the PV is the `/dev/mapper/` device, not the `/dev/sd*` device. ```bash
- pvdisplay /dev/mapper/devicemappername
+ sudo pvdisplay /dev/mapper/devicemappername
``` ![Screenshot showing the code that checks the size of the current physical volume. The results are highlighted.](./media/disk-encryption/resize-lvm/047-resize-lvm-scenariof-pvs.png)
Follow the next steps to verify your changes.
10. Resize the PV: ```bash
- pvresize /dev/mapper/devicemappername
+ sudo pvresize /dev/mapper/devicemappername
``` ![Screenshot showing the code that resizes the physical volume. The results are highlighted.](./media/disk-encryption/resize-lvm/048-resize-lvm-scenariof-resize-pv.png)
Follow the next steps to verify your changes.
11. Check the new PV size: ```bash
- pvdisplay /dev/mapper/devicemappername
+ sudo pvdisplay /dev/mapper/devicemappername
``` ![Screenshot showing the code that checks the size of the physical volume. The results are highlighted.](./media/disk-encryption/resize-lvm/049-resize-lvm-scenariof-pv.png)
Follow the next steps to verify your changes.
12. Resize the encrypted layer on the PV: ```bash
- cryptsetup resize /dev/mapper/devicemappername
+ sudo cryptsetup resize /dev/mapper/devicemappername
``` Apply the same procedure for all of the disks that you want to resize.
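Because the same procedure applies to every disk you're resizing, a small loop can help; a sketch with placeholder mapper names:

```bash
# Sketch: resize the crypt layer for several mapper devices in one pass.
for dev in mapperdevicename1 mapperdevicename2; do
  sudo cryptsetup resize /dev/mapper/${dev}
done
```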
Follow the next steps to verify your changes.
13. Check your VG information: ```bash
- vgdisplay vgname
+ sudo vgdisplay vgname
``` ![Screenshot showing the code that checks information for the volume group. The results are highlighted.](./media/disk-encryption/resize-lvm/050-resize-lvm-scenariof-vg.png)
Follow the next steps to verify your changes.
14. Check the LV information: ```bash
- lvdisplay vgname/lvname
+ sudo lvdisplay vgname/lvname
``` ![Screenshot showing the code that checks information for the local volume. The results are highlighted.](./media/disk-encryption/resize-lvm/051-resize-lvm-scenariof-lv.png)
Follow the next steps to verify your changes.
16. Resize the LV: ```bash
- lvresize -r -L +2G /dev/vgname/lvname
+ sudo lvresize -r -L +2G /dev/vgname/lvname
``` ![Screenshot showing the code that resizes the local volume. The results are highlighted.](./media/disk-encryption/resize-lvm/053-resize-lvm-scenariof-lvresize.png)
Follow the next steps to verify your changes.
17. Check the LV information: ```bash
- lvdisplay vgname/lvname
+ sudo lvdisplay vgname/lvname
``` ![Screenshot showing the code that gets information about the local volume. The results are highlighted.](./media/disk-encryption/resize-lvm/054-resize-lvm-scenariof-lvsize.png)
Apply the same resizing procedure to any other LV that requires it.
## Next steps
-[Troubleshoot Azure Disk Encryption](disk-encryption-troubleshooting.md)
+[Troubleshoot Azure Disk Encryption](disk-encryption-troubleshooting.md)
virtual-machines How To Verify Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/how-to-verify-encryption-status.md
Previously updated : 03/11/2020 Last updated : 04/11/2023
Another way to validate the encryption status is by looking at the **Disk settin
You can validate the *general* encryption status of an encrypted VM by using the following PowerShell commands:
-```azurepowershell
+```azurepowershell-interactive
$VMNAME="VMNAME" $RGNAME="RGNAME" Get-AzVmDiskEncryptionStatus -ResourceGroupName ${RGNAME} -VMName ${VMNAME}
You can capture the encryption settings from each disk by using the following Po
### Single pass In a single pass, the encryption settings are stamped on each of the disks (OS and data). You can capture the encryption settings for an OS disk in a single pass as follows:
-```powershell
+```azurepowershell-interactive
$RGNAME = "RGNAME" $VMNAME = "VMNAME"
If the disk doesn't have encryption settings stamped, the output will be empty:
Use the following commands to capture encryption settings for data disks:
-```azurepowershell
+```azurepowershell-interactive
$RGNAME = "RGNAME" $VMNAME = "VMNAME"
In a dual pass, the encryption settings are stamped in the VM model and not on e
To verify that the encryption settings were stamped in a dual pass, use the following commands:
-```azurepowershell
+```azurepowershell-interactive
$RGNAME = "RGNAME" $VMNAME = "VMNAME"
Write-Host "====================================================================
Check the encryption settings for disks that aren't attached to a VM. ### Managed disks
-```powershell
+
+```azurepowershell-interactive
$Sourcedisk = Get-AzDisk -ResourceGroupName ${RGNAME} -DiskName ${TARGETDISKNAME} Write-Host "=============================================================================================================================================================" Write-Host "Encryption Settings:"
Write-Host "Secret URL:" $Sourcedisk.EncryptionSettingsCollection.EncryptionSett
Write-Host "Key URL:" $Sourcedisk.EncryptionSettingsCollection.EncryptionSettings.KeyEncryptionKey.KeyUrl Write-Host "=============================================================================================================================================================" ```+ ## Azure CLI You can validate the *general* encryption status of an encrypted VM by using the following Azure CLI commands:
-```azurecli
+```azurecli-interactive
VMNAME="VMNAME" RGNAME="RGNAME" az vm encryption show --name ${VMNAME} --resource-group ${RGNAME} --query "substatus" ```+ ![General encryption status from the Azure CLI ](./media/disk-encryption/verify-encryption-linux/verify-gen-cli.png) ### Single pass+ You can validate the encryption settings for each disk by using the following Azure CLI commands:
-```azurecli
+```azurecli-interactive
az vm encryption show -g ${RGNAME} -n ${VMNAME} --query "disks[*].[name, statuses[*].displayStatus]" -o table ```
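The JMESPath query can be narrowed further. As a hedged sketch, the following returns the status of a single disk, where `myOSDisk` is an illustrative name:

```azurecli-interactive
az vm encryption show -g ${RGNAME} -n ${VMNAME} \
  --query "disks[?name=='myOSDisk'].statuses[].displayStatus" -o tsv
```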
Use the following commands to get detailed status and encryption settings.
OS disk:
-```bash
+```azurecli-interactive
RGNAME="RGNAME" VMNAME="VNAME"
done
Data disks:
-```azurecli
+```azurecli-interactive
RGNAME="RGNAME" VMNAME="VMNAME" az vm encryption show --name ${VMNAME} --resource-group ${RGNAME} --query "substatus"
done
### Dual pass
-```azurecli
+```azurecli-interactive
az vm encryption show --name ${VMNAME} --resource-group ${RGNAME} -o table ```
az vm encryption show --name ${VMNAME} --resource-group ${RGNAME} -o table
You can also check the encryption settings on the VM Model Storage profile of the OS disk:
-```bash
+```azurecli-interactive
disk=`az vm show -g ${RGNAME} -n ${VMNAME} --query storageProfile.osDisk.name -o tsv` for disk in $disk; do \ echo "============================================================================================================================================================="; \
Check the encryption settings for disks that aren't attached to a VM.
### Managed disks
-```bash
+```azurecli-interactive
RGNAME="RGNAME" TARGETDISKNAME="DISKNAME" echo "============================================================================================================================================================="
echo -ne "Disk Encryption Key: "; az disk show -g ${RGNAME} -n ${TARGETDISKNAME}
echo -ne "key Encryption Key: "; az disk show -g ${RGNAME} -n ${TARGETDISKNAME} --query encryptionSettingsCollection.encryptionSettings[].keyEncryptionKey.keyUrl -o tsv; \ echo "=============================================================================================================================================================" ```+ ### Unmanaged disks Unmanaged disks are VHD files that are stored as page blobs in Azure storage accounts.
To get the details for a specific disk, you need to provide:
This command lists all the IDs for all your storage accounts:
-```azurecli
+```azurecli-interactive
az storage account list --query [].[id] -o tsv ```+ The storage account IDs are listed in the following form: /subscriptions/\<subscription id>/resourceGroups/\<resource group name>/providers/Microsoft.Storage/storageAccounts/\<storage account name> Select the appropriate ID and store it in a variable:
-```bash
+
+```azurecli-interactive
id="/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Storage/storageAccounts/<storage account name>" ``` This command gets the connection string for one particular storage account and stores it on a variable:
-```bash
+```azurecli-interactive
ConnectionString=$(az storage account show-connection-string --ids $id --query connectionString -o tsv) ``` The following command lists all the containers under a storage account:
-```azurecli
+
+```azurecli-interactive
az storage container list --connection-string $ConnectionString --query [].[name] -o tsv ```+ The container used for disks is normally named "vhds."
-Store the container name on a variable:
-```bash
+Store the container name in a variable:
+
+```azurecli-interactive
ContainerName="name of the container" ``` Use this command to list all the blobs on a particular container:
-```azurecli
+
+```azurecli-interactive
az storage blob list -c ${ContainerName} --connection-string $ConnectionString --query [].[name] -o tsv ```+ Choose the disk that you want to query and store its name in a variable:
-```bash
+
+```azurecli-interactive
DiskName="diskname.vhd" ```+ Query the disk encryption settings:
-```azurecli
+
+```azurecli-interactive
az storage blob show -c ${ContainerName} --connection-string ${ConnectionString} -n ${DiskName} --query metadata.DiskEncryptionSettings ``` ## Operating system+ Validate whether the data disk partitions are encrypted (and the OS disk isn't). When a partition or disk is encrypted, it's displayed as a **crypt** type. When it's not encrypted, it's displayed as a **part/disk** type. ```bash
-lsblk
+sudo lsblk
``` ![OS crypt layer for a partition](./media/disk-encryption/verify-encryption-linux/verify-os-crypt-layer.png)
You can get more details by using the following **lsblk** variant.
You'll see a **crypt** type layer that is mounted by the extension. The following example shows logical volumes and normal disks having **crypto\_LUKS FSTYPE**. ```bash
-lsblk -o NAME,TYPE,FSTYPE,LABEL,SIZE,RO,MOUNTPOINT
+sudo lsblk -o NAME,TYPE,FSTYPE,LABEL,SIZE,RO,MOUNTPOINT
```+ ![OS crypt layer for logical volumes and normal disks](./media/disk-encryption/verify-encryption-linux/verify-os-crypt-layer-2.png) As an extra step, you can validate if the data disk has any keys loaded: ```bash
-cryptsetup luksDump /dev/VGNAME/LVNAME
+sudo cryptsetup luksDump /dev/VGNAME/LVNAME
``` ```bash
-cryptsetup luksDump /dev/sdd1
+sudo cryptsetup luksDump /dev/sdd1
``` And you can check which **dm** devices are listed as **crypt**: ```bash
-dmsetup ls --target crypt
+sudo dmsetup ls --target crypt
``` ## Next steps
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
description: Learn how to create a Bicep file or ARM JSON template to use with A
Previously updated : 03/15/2023 Last updated : 04/11/2023
If there's an error trying to download the file, or put it in a specified direct
### Windows update customizer
-The `WindowsUpdate` customizer is built on the [community Windows Update Provisioner](https://packer.io/docs/provisioners/community-supported.html) for Packer, which is an open source project maintained by the Packer community. Microsoft tests and validate the provisioner with the Image Builder service, and will support investigating issues with it, and work to resolve issues, however the open source project isn't officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, see the project repository.
+The `WindowsUpdate` customizer is built on the [community Windows Update Provisioner](https://developer.hashicorp.com/packer/docs/provisioners/community-supported) for Packer, an open source project maintained by the Packer community. Microsoft tests and validates the provisioner with the Image Builder service and will help investigate and resolve issues with it; however, the open source project isn't officially supported by Microsoft. For detailed documentation on and help with the Windows Update Provisioner, see the project repository.
# [JSON](#tab/json)
If Azure Image Builder creates a Windows custom image successfully, and you crea
#### Default Sysprep command
-```powershell
+```azurepowershell-interactive
Write-Output '>>> Waiting for GA Service (RdAgent) to start ...' while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 } Write-Output '>>> Waiting for GA Service (WindowsAzureTelemetryService) to start ...'
You can distribute an image to both of the target types in the same configuratio
Because you can have more than one target to distribute to, Image Builder maintains a state for every distribution target that can be accessed by querying the `runOutputName`. The `runOutputName` is an object you can query post distribution for information about that distribution. For example, you can query the location of the VHD, or regions where the image version was replicated to, or SIG Image version created. This is a property of every distribution target. The `runOutputName` must be unique to each distribution target. Here's an example for querying an Azure Compute Gallery distribution:
-```azurecli
+```azurecli-interactive
subscriptionID=<subcriptionID> imageResourceGroup=<resourceGroup of image template> runOutputName=<runOutputName>
az resource show \
Output:
-```json
+```output
{ "id": "/subscriptions/xxxxxx/resourcegroups/rheltest/providers/Microsoft.VirtualMachineImages/imageTemplates/ImageTemplateLinuxRHEL77/runOutputs/rhel77", "identity": null,
vnetConfig: {
To start a build, you need to invoke 'Run' on the Image Template resource, examples of `run` commands:
-```PowerShell
+```azurepowershell-interactive
Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2021-10-01" -Action Run -Force ```
-```azurecli
+```azurecli-interactive
az resource invoke-action \ --resource-group $imageResourceGroup \ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
The build can be canceled anytime. If the distribution phase has started you can
Examples of `cancel` commands:
-```powerShell
+```azurepowershell-interactive
Invoke-AzResourceAction -ResourceName $imageTemplateName -ResourceGroupName $imageResourceGroup -ResourceType Microsoft.VirtualMachineImages/imageTemplates -ApiVersion "2021-10-01" -Action Cancel -Force ```
-```azurecli
+```azurecli-interactive
az resource invoke-action \ --resource-group $imageResourceGroup \ --resource-type Microsoft.VirtualMachineImages/imageTemplates \
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder.md
description: Create Linux VM images with Azure VM Image Builder and Azure Comput
Previously updated : 03/02/2020 Last updated : 04/11/2023
In this article, you learn how to use Azure VM Image Builder and the Azure CLI to create an image version in an [Azure Compute Gallery](../shared-image-galleries.md) (formerly Shared Image Gallery) and then distribute the image globally. You can also create an image version by using [Azure PowerShell](../windows/image-builder-gallery.md). - This article uses a sample JSON template to configure the image. The JSON file is at [helloImageTemplateforSIG.json](https://github.com/danielsollondon/azvmimagebuilder/blob/master/quickquickstarts/1_Creating_a_Custom_Linux_Shared_Image_Gallery_Image/helloImageTemplateforSIG.json). To distribute the image to an Azure Compute Gallery, the template uses [sharedImage](image-builder-json.md#distribute-sharedimage) as the value for the `distribute` section of the template. - ## Register the features+ To use VM Image Builder, you need to register the feature. Check your registration by running the following commands: ```azurecli-interactive
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Network ```
-## Set variables and permissions
+## Set variables and permissions
Because you'll be using some pieces of information repeatedly, create some variables to store that information.
az group create -n $sigResourceGroup -l $location
VM Image Builder uses the provided [user-identity](../../active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md#user-assigned-managed-identity) to inject the image into an Azure Compute Gallery. In this example, you create an Azure role definition with specific actions for distributing the image. The role definition is then assigned to the user identity.
-```bash
+```azurecli-interactive
# Create user-assigned identity for VM Image Builder to access the storage account where the script is stored identityName=aibBuiUserId$(date +'%s') az identity create -g $sigResourceGroup -n $identityName
az role assignment create \
--scope /subscriptions/$subscriptionID/resourceGroups/$sigResourceGroup ``` - ## Create an image definition and gallery To use VM Image Builder with Azure Compute Gallery, you need to have an existing gallery and image definition. VM Image Builder doesn't create the gallery and image definition for you.
az sig image-definition create \
--os-type Linux ``` - ## Download and configure the JSON file Download the JSON template and configure it with your variables:
az resource invoke-action \
It can take a few moments to create the image and replicate it to both regions. Wait until this part is finished before you move on to create a VM. - ## Create the VM Create the VM from the image version that was created by VM Image Builder.
When you're deleting gallery resources, you need to delete all the image versions b
--gallery-name $sigName \ --gallery-image-definition $imageDefName \ --subscription $subscriptionID
- ```
+ ```
1. Delete the image definition.
virtual-machines Maintenance Configurations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-cli.md
az maintenance configuration create \
--maintenance-window-duration "05:00" \ --maintenance-window-recur-every "Month Fourth Monday" \ --maintenance-window-start-date-time "2020-12-30 08:00" \
- --maintenance-window-time-zone "Pacific Standard Time"
+ --maintenance-window-time-zone "Pacific Standard Time"
``` Using `--maintenance-scope host` ensures that the maintenance configuration is used for controlling updates to the host infrastructure. If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
az maintenance configuration create \
--maintenance-window-duration "05:00" \ --maintenance-window-recur-every "Month Fourth Monday" \ --maintenance-window-start-date-time "2020-12-30 08:00" \
- --maintenance-window-time-zone "Pacific Standard Time"
+ --maintenance-window-time-zone "Pacific Standard Time"
``` ### Guest VMs
az maintenance configuration create \
--resource-group myMaintenanceRG \ --resource-name myConfig \ --maintenance-scope InGuestPatch \
- --location eastus
- --maintenance-window-duration "02:00"
- --maintenance-window-recur-every "20days"
- --maintenance-window-start-date-time "2022-12-30 07:00"
- --maintenance-window-time-zone "Pacific Standard Time"
- --install-patches-linux-parameters package-name-masks-to-exclude="ppt" package-name-masks-to-include="apt" classifications-to-include="Other"
- --install-patches-windows-parameters kb-numbers-to-exclude="KB123456" kb-numbers-to-include="KB123456" classifications-to-include="FeaturePack"
- --reboot-setting "IfRequired"
+ --location eastus \
+ --maintenance-window-duration "02:00" \
+ --maintenance-window-recur-every "20days" \
+ --maintenance-window-start-date-time "2022-12-30 07:00" \
+ --maintenance-window-time-zone "Pacific Standard Time" \
+ --install-patches-linux-parameters package-name-masks-to-exclude="ppt" package-name-masks-to-include="apt" classifications-to-include="Other" \
+ --install-patches-windows-parameters kb-numbers-to-exclude="KB123456" kb-numbers-to-include="KB123456" classifications-to-include="FeaturePack" \
+ --reboot-setting "IfRequired" \
--extension-properties InGuestPatchMode="User" ```
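A maintenance configuration takes effect only after it's assigned to a resource. As a hedged sketch (resource names and the subscription ID are placeholders), an assignment for a single VM might look like this:

```azurecli-interactive
az maintenance assignment create \
  --resource-group myMaintenanceRG \
  --resource-name myVM \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --configuration-assignment-name myConfig \
  --maintenance-configuration-id "/subscriptions/<subscription_id>/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
```

The `az maintenance assignment list` command that follows shows how to query existing assignments for dedicated hosts.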
az maintenance assignment list \
--resource-type hosts \ --provider-name Microsoft.Compute \ --resource-parent-name myHostGroup \
- --resource-parent-type hostGroups
+ --resource-parent-type hostGroups \
--query "[].{ResourceGroup:resourceGroup,configName:name}" \ --output table ```
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-vms.md
Previously updated : 8/30/2022 Last updated : 03/09/2023
Using Azure Spot Virtual Machines allows you to take advantage of our unused cap
The amount of available capacity can vary based on size, region, time of day, and more. When deploying Azure Spot Virtual Machines, Azure will allocate the VMs if there's capacity available, but there's no SLA for these VMs. An Azure Spot Virtual Machine offers no high availability guarantees. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict Azure Spot Virtual Machines with 30 seconds' notice. ## Eviction policy
virtual-machines Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/image-builder.md
description: In this article, you learn how to create a Windows VM by using VM I
Previously updated : 04/23/2021 Last updated : 04/11/2023
In this article, you learn how to create a customized Windows image by using Azu
Use the following sample JSON template to configure the image: [helloImageTemplateWin.json](https://raw.githubusercontent.com/danielsollondon/azvmimagebuilder/master/quickquickstarts/0_Creating_a_Custom_Windows_Managed_Image/helloImageTemplateWin.json). - > [!NOTE] > Windows users can run the following Azure CLI examples on [Azure Cloud Shell](https://shell.azure.com) by using Bash. - ## Register the features To use VM Image Builder, you need to register the feature. Check your registration by running the following commands:
az provider register -n Microsoft.Storage
az provider register -n Microsoft.Network ``` - ## Set variables Because you'll be using some pieces of information repeatedly, create some variables to store that information: - ```azurecli-interactive # Resource group name - we're using myImageBuilderRG in this example imageResourceGroup='myWinImgBuilderRG'
Create a variable for your subscription ID:
```azurecli-interactive subscriptionID=$(az account show --query id --output tsv) ```+ ## Create the resource group To store the image configuration template artifact and the image, use the following resource group:
VM Image Builder uses the provided [user-identity](../../active-directory/manage
Create a user-assigned identity so that VM Image Builder can access the storage account where the script is stored.
-```bash
+```azurecli-interactive
identityName=aibBuiUserId$(date +'%s') az identity create -g $imageResourceGroup -n $identityName
When you're done, delete the resources you've created.
``` 1. Delete the image resource group.
-
+ ```azurecli-interactive az group delete -n $imageResourceGroup ``` - ## Next steps To learn more about the components of the JSON file that this article uses, see the [VM Image Builder template reference](../linux/image-builder-json.md).
virtual-machines Template Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/template-description.md
Previously updated : 01/03/2019 Last updated : 04/11/2023 # Virtual machines in an Azure Resource Manager template
-**Applies to:** :heavy_check_mark: Windows VMs
+
+**Applies to:** :heavy_check_mark: Windows VMs
This article describes aspects of an Azure Resource Manager template that apply to virtual machines. This article doesn't describe a complete template for creating a virtual machine; for that you need resource definitions for storage accounts, network interfaces, public IP addresses, and virtual networks. For more information about how these resources can be defined together, see the [Resource Manager template walkthrough](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md). There are many [templates in the gallery](https://azure.microsoft.com/resources/templates/?term=VM) that include the VM resource. Not all elements that can be included in a template are described here.
-
- This example shows a typical resource section of a template for creating a specified number of VMs: ```json
This example shows a typical resource section of a template for creating a speci
] } ]
-```
+```
-> [!NOTE]
+> [!NOTE]
>This example relies on a storage account that was previously created. You could create the storage account by deploying it from the template. The example also relies on a network interface and its dependent resources that would be defined in the template. These resources are not shown in the example. > >
Use these opportunities for getting the latest API versions:
- PowerShell - [Get-AzResourceProvider](/powershell/module/az.resources/get-azresourceprovider) - Azure CLI - [az provider show](/cli/azure/provider) - ## Parameters and variables [Parameters](../../azure-resource-manager/templates/syntax.md) make it easy for you to specify values for the template when you run it. This parameters section is used in the example:
Also, notice in the example that the loop index is used when specifying some of
} ```
-> [!NOTE]
+> [!NOTE]
>This example uses managed disks for the virtual machines. > >
To set this property, the network interface must exist. Therefore, you need a de
## Profiles Several profile elements are used when defining a virtual machine resource. Some are required and some are optional. For example, the hardwareProfile, osProfile, storageProfile, and networkProfile elements are required, but the diagnosticsProfile is optional. These profiles define settings such as:
-
+ - [size](../sizes.md) - [name](/azure/architecture/best-practices/resource-naming) and credentials - disk and [operating system settings](cli-ps-findimage.md)
Several profile elements are used when defining a virtual machine resource. Some
- boot diagnostics ## Disks and images
-
+ In Azure, vhd files can represent [disks or images](../managed-disks-overview.md). When the operating system in a vhd file is specialized to be a specific VM, it's referred to as a disk. When the operating system in a vhd file is generalized to be used to create many VMs, it's referred to as an image.
-
+ ### Create new virtual machines and new disks from a platform image When you create a VM, you must decide what operating system to use. The imageReference element is used to define the operating system of a new VM. The example shows a definition for a Windows Server operating system:
If you want to create a Linux operating system, you might use this definition:
"imageReference": { "publisher": "Canonical", "offer": "UbuntuServer",
- "sku": "14.04.2-LTS",
+ "sku": "20.04.2-LTS",
"version": "latest" }, ```
+> [!NOTE]
+> Modify `publisher`, `offer`, `sku` and `version` values accordingly.
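One hedged way to discover valid values for `publisher`, `offer`, `sku`, and `version` is to query the marketplace images (the output can be large):

```azurecli-interactive
# Sketch: list Canonical images to find current image reference values.
az vm image list --publisher Canonical --all --output table
```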
+ Configuration settings for the operating system disk are assigned with the osDisk element. The example defines a new managed disk with the caching mode set to **ReadWrite** and that the disk is being created from a [platform image](cli-ps-findimage.md): ```json
When you deploy a template, Azure tracks the resources that you deployed as a gr
If you're curious about the status of resources in the deployment, view the resource group in the Azure portal: ![Get deployment information](./media/template-description/virtual-machines-deployment-info.png)
-
+ It's not a problem to use the same template to create resources or to update existing resources. When you use commands to deploy templates, you have the opportunity to say which [mode](../../azure-resource-manager/templates/deploy-powershell.md) you want to use. The mode can be set to either **Complete** or **Incremental**. The default is to do incremental updates. Be careful when using the **Complete** mode because you may accidentally delete resources. When you set the mode to **Complete**, Resource Manager deletes any resources in the resource group that aren't in the template. ## Next Steps
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Previously updated : 03/15/2023- Last updated : 04/12/2023+ # Quickstart: Create a mesh network topology with Azure Virtual Network Manager using the Azure portal
Deploy a network manager instance with the defined scope and access you need.
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Select **+ Create a resource** and search for **Network Manager**. Then select **Create** to begin setting up Azure Virtual Network Manager.
+1. Select **+ Create a resource** and search for **Network Manager**.
+1. Select **Network Manager > Create** to begin setting up Azure Virtual Network Manager.
1. On the **Basics** tab, enter or select the following information:
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics.png" alt-text="Screenshot of Create a network manager Basics page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-manager-basics-thumbnail.png" alt-text="Screenshot of Create a network manager Basics page." lightbox="./media/create-virtual-network-manager-portal/network-manager-basics-thumbnail.png":::
| Setting | Value | | - | -- | | Subscription | Select the subscription you want to deploy Azure Virtual Network Manager to. |
- | Resource group | Select or create a resource group to store Azure Virtual Network Manager. This example uses the **myAVNMResourceGroup** previously created.
- | Name | Enter a name for this Azure Virtual Network Manager instance. This example uses the name **myAVNM**. |
- | Region | Select the region for this deployment. Azure Virtual Network Manager can manage virtual networks in any region. The region selected is for where the Virtual Network Manager instance will be deployed. |
+ | Resource group | Select **Create new** and enter **rg-learn-eastus-001**.
+ | Name | Enter **vnm-learn-eastus-001**. |
+ | Region | Enter **eastus** or a region of your choosing. Azure Virtual Network Manager can manage virtual networks in any region. The region selected is for where the Virtual Network Manager instance will be deployed. |
| Description | *(Optional)* Provide a description about this Virtual Network Manager instance and the task it's managing. |
- | [Scope](concept-network-manager-scope.md#scope) | Define the scope for which Azure Virtual Network Manager can manage. This example uses a subscription-level scope.
- | [Features](concept-network-manager-scope.md#features) | Select the features you want to enable for Azure Virtual Network Manager. Available features are *Connectivity* and *SecurityAdmin*. </br> Connectivity - Enables the ability to create a full mesh or hub and spoke network topology between virtual networks within the scope. </br> SecurityAdmin - Enables the ability to create global network security rules. |
+ | Scope and features | |
+ | [Scope](concept-network-manager-scope.md#scope) | Select **Select scopes** and choose your subscription.</br> Select **Add to selected scope > Select**. </br> *Scope* is used to define the resources which Azure Virtual Network Manager can manage. You can choose subscriptions and management groups.
+ | [Features](concept-network-manager-scope.md#features) | Select **Connectivity** and **Security Admin** from the dropdown list. </br> *Connectivity* - Enables the ability to create a full mesh or hub and spoke network topology between virtual networks within the scope. </br> *Security Admin* - Enables the ability to create global network security rules. |
1. Select **Review + create** and then select **Create** once validation has passed. ## Create virtual networks
-Create five virtual networks using the portal. This example creates virtual networks named VNetA, VNetB, VNetC and VNetD in the West US location. Each virtual network has a tag of networkType used for dynamic membership. If you have existing virtual networks for your mesh configuration, you'll need to add tags listed in the table to your virtual networks and skip to the next section.
+Create three virtual networks using the portal. If you have existing virtual networks for your mesh configuration, skip to the next section.
-1. From the **Home** screen, select **+ Create a resource** and search for **Virtual network**. Then select **Create** to begin configuring the virtual network.
+1. From the **Home** screen, select **+ Create a resource** and search for **Virtual networks**. Then select **Create** to begin configuring the virtual network.
1. On the **Basics** tab, enter or select the following information.
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-mesh-vnet-basic.png" alt-text="Screenshot of create a virtual network basics page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-basic.png" alt-text="Screenshot of create a virtual network basics page.":::
| Setting | Value | | - | -- | | Subscription | Select the subscription you want to deploy this virtual network into. |
- | Resource group | Select or create a new resource group to store the virtual network. This quickstart uses a new resource group named **myAVNMResourceGroup**.
- | Name | Enter a **VNetA** for the virtual network name. |
- | Region | Select **West US**. |
+ | Resource group | Select **rg-learn-eastus-001**.
+ | Name | Enter a **vnet-learn-prod-eastus-001** for the virtual network name. |
+ | Region | Select **(US) East US**. |
-1. Select **Next: IP Addresses >** and configure the following network address spaces:
+1. Select **Next** or the **IP addresses** tab and configure the following network address spaces:
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-mesh-vnet-ip.png" alt-text="Screenshot of create a virtual network ip addresses page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-ip.png" alt-text="Screenshot of create a virtual network IP addresses page." lightbox="./media/create-virtual-network-manager-portal/create-vnet-ip.png":::
| Setting | Value | | -- | -- |
Create five virtual networks using the portal. This example creates virtual netw
| Subnet name | default | | Subnet address space | 10.0.0.0/24 |
-1. Select the **Tags** tab and enter the following values:
-
- :::image type="content" source="./media/create-virtual-network-manager-portal/create-vnet-tag.png" alt-text="Screenshot of create a virtual network tag page.":::
-
- | Setting | Value |
- |- | - |
- | Name | Enter **NetworkType** |
- | Value | Enter **Prod**. |
- 1. Select **Review + create** and then select **Create** once validation has passed to deploy the virtual network. 1. Repeat steps 2-5 to create more virtual networks with the following information:
Create five virtual networks using the portal. This example creates virtual netw
| Setting | Value | | - | -- | | Subscription | Select the same subscription you selected in step 3. |
- | Resource group | Select the **myAVNMResourceGroup**. |
- | Name | Enter **VNetB**, **VNetC**, and **VNetD** for each of the three extra virtual networks. |
- | Region | Region is selected for you when you select the resource group. |
- | VNetB IP addresses | IPv4 address space: 10.1.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.1.0.0/24|
- | VNetC IP addresses | IPv4 address space: 10.2.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.2.0.0/24|
- | VNetD IP addresses | IPv4 address space: 10.3.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.3.0.0/24|
- | VNetB NetworkType tag | Enter **Prod**. |
- | VNetC NetworkType tag | Enter **Prod**. |
- | VNetD NetworkType tag | Enter **Test**. |
+ | Resource group | Select the **rg-learn-eastus-001**. |
+ | Name | Enter **vnet-learn-prod-eastus-002** and **vnet-learn-test-eastus-003** for each additional virtual network. |
+ | Region | Select **(US) East US** |
+ | vnet-learn-prod-eastus-002 IP addresses | IPv4 address space: 10.1.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.1.0.0/24|
+ | vnet-learn-test-eastus-003 IP addresses | IPv4 address space: 10.2.0.0/16 </br> Subnet name: default </br> Subnet address space: 10.2.0.0/24|
## Create a network group Virtual Network Manager applies configurations to groups of VNets by placing them in network groups. Create a network group as follows:
-1. Go to Azure Virtual Network Manager instance you created.
+1. Browse to **rg-learn-eastus-001** resource group, and select the **vnm-learn-eastus-001** virtual network manager instance.
-1. Select **Network Groups** under *Settings*, then select **+ Create**.
+1. Select **Network Groups** under **Settings**, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group button.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-2.png" alt-text="Screenshot of add a network group.":::
-1. On the **Create a network group** page, enter a **Name** for the network group. This example uses the name **myNetworkGroup**. Select **Add** to create the network group.
+1. On the **Create a network group** page, enter **ng-learn-prod-eastus-001** and select **Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-basics.png" alt-text="Screenshot of create a network group page.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/create-network-group.png" alt-text="Screenshot of create a network group page." lightbox="./media/create-virtual-network-manager-portal/create-network-group.png":::
-1. You see the new network group added to the *Network Groups* page.
+1. The new network group is now listed on the **Network Groups** page.
:::image type="content" source="./media/create-virtual-network-manager-portal/network-groups-list.png" alt-text="Screenshot of network group page with list of network groups.":::
-1. Once your network group is created, you add virtual networks as members. Choose one of the options: *[Manually add membership](#manually-add-membership)* or *[Create policy to dynamically add members](#create-azure-policy-for-dynamic-membership)* with Azure Policy.
-
-## Define membership for a mesh configuration
+## Define membership for connectivity configuration
-Azure Virtual Network manager allows you two methods for adding membership to a network group. You can manually add virtual networks or use Azure Policy to dynamically add virtual networks based on conditions. Choose one of the options for your mesh membership configuration:
+Once your network group is created, you add virtual networks as members. Choose one of the following options for your mesh membership configuration: *[Manually add membership](#manually-add-membership)* or *[Create policy to dynamically add members](#create-azure-policy-for-dynamic-membership)* with Azure Policy.
+# [Manual membership](#tab/manualmembership)
### Manually add membership
-In this task, you manually add three virtual networks for your Mesh configuration to your network group using these steps:
+In this task, you manually add two virtual networks for your Mesh configuration to your network group using these steps:
-1. From the list of network groups, select **myNetworkGroup** and select **Add virtual networks** under *Manually add members* on the *myNetworkGroup* page.
+1. From the list of network groups, select **ng-learn-prod-eastus-001** and select **Add virtual networks** under *Manually add members* on the *ng-learn-prod-eastus-001* page.
    :::image type="content" source="./media/create-virtual-network-manager-portal/add-static-member.png" alt-text="Screenshot of manually adding a virtual network.":::
-1. On the **Manually add members** page, select three virtual networks created previously (VNetA, VNetB, and VNetC). Then select **Add** to add the 3 virtual networks to the network group.
+1. On the **Manually add members** page, select **vnet-learn-prod-eastus-001** and **vnet-learn-prod-eastus-002**, and select **Add**.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-virtual-networks.png" alt-text="Screenshot of add virtual networks to network group page."::: 1. On the **Network Group** page under **Settings**, select **Group Members** to view the membership of the group you manually selected.
- :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
+# [Azure Policy](#tab/azurepolicy)
### Create Azure Policy for dynamic membership
-Using [Azure Policy](concept-azure-policy-integration.md), you define a condition to dynamically add three virtual networks tagged as **Prod** to your network group using these steps:
+Using [Azure Policy](concept-azure-policy-integration.md), you define a condition to dynamically add two virtual networks to your network group when the name of the virtual network includes **prod** using these steps:
-1. From the list of network groups, select **myNetworkGroup** and select **Create Azure Policy** under *Create policy to dynamically add members*.
+1. From the list of network groups, select **ng-learn-prod-eastus-001** and select **Create azure policy** under *Create policy to dynamically add members*.
:::image type="content" source="media/create-virtual-network-manager-portal/define-dynamic-membership.png" alt-text="Screenshot of Create Azure Policy button.":::
-1. On the **Create Azure Policy** page, select or enter the following information:
+1. On the **Create azure policy** page, select or enter the following information:
:::image type="content" source="./media/create-virtual-network-manager-portal/network-group-conditional.png" alt-text="Screenshot of create a network group conditional statements tab."::: | Setting | Value | | - | -- |
- | Policy name | Enter **ProdVNets** in the text box. |
+ | Policy name | Enter **azpol-learn-prod-eastus-001** in the text box. |
| Scope | Select **Select Scopes** and choose your current subscription. | | Criteria | |
- | Parameter | Select **Tags** from the drop-down.|
- | Operator | Select **Exists** from the drop-down.|
- | Condition | Enter **Prod** to dynamically add the three previously created virtual networks into this network group. |
+ | Parameter | Select **Name** from the drop-down.|
+ | Operator | Select **Contains** from the drop-down.|
+ | Condition | Enter **-prod** for the condition in the text box. |
+
+1. Select **Preview resources** to view the **Effective virtual networks** page and select **Close**. This page shows the virtual networks that will be added to the network group based on the conditions defined in Azure Policy.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/effective-virtual-networks.png" alt-text="Screenshot of effective virtual networks page.":::
1. Select **Save** to deploy the group membership. It can take up to one minute for the policy to take effect and be added to your network group.
-1. On the *Network Group* page under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy.
+1. On the **Network Group** page under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy. You'll note the **Source** is listed as **azpol-learn-prod-eastus-001 - subscriptions/subscription_id**.
+
+ :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
++
- :::image type="content" source="media/create-virtual-network-manager-portal/group-members-list-thumb.png" alt-text="Screenshot of group membership under Group Membership." lightbox="media/create-virtual-network-manager-portal/group-members-list.png":::
-
## Create a configuration
-Now that the Network Group is created, and has the correct VNets, create a mesh network topology configuration. Replace <subscription_id> with your subscription and follow these steps:
+Now that the Network Group is created and has the correct VNets, create a mesh network topology configuration. Replace **<subscription_id>** with your subscription and follow these steps:
1. Select **Configurations** under **Settings**, then select **+ Create**.
- :::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of configuration creation screen for Network Manager.":::
- 1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration. :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
Now that the Network Group is created, and has the correct VNets, create a mesh
| Setting | Value | | - | -- |
- | Name | Enter a name for this connectivity configuration. |
+ | Name | Enter **cc-learn-prod-eastus-001**. |
| Description | *(Optional)* Provide a description about this connectivity configuration. | 1. On the **Topology** tab, select the **Mesh** topology if not selected, and leave the **Enable mesh connectivity across regions** unchecked. Cross-region connectivity isn't required for this set up since all the virtual networks are in the same region. :::image type="content" source="./media/create-virtual-network-manager-portal/topology-configuration.png" alt-text="Screenshot of topology selection for network group connectivity configuration.":::
-1. Select **+ Add** and then select the network group you created in the last section. Select **Select** to add the network group to the configuration.
+1. Select **+ Add > Add network group** and select **ng-learn-prod-eastus-001** under **Network Groups**. Choose **Select** to add the network group to the configuration.
:::image type="content" source="./media/create-virtual-network-manager-portal/add-network-group-configuration.png" alt-text="Screenshot of add a network group to a connectivity configuration.":::
-1. Select the **Preview Topology** tab to view the topology of the configuration. This tab shows you a visual representation of the network groups you added to the configuration and how connectivity is established between network groups and their members.
+1. Select the **Visualization** tab to view the topology of the configuration. This tab shows you a visual representation of the network group you added to the configuration.
:::image type="content" source="./media/create-virtual-network-manager-portal/preview-topology.png" alt-text="Screenshot of preview topology for network group connectivity configuration.":::
Now that the Network Group is created, and has the correct VNets, create a mesh
:::image type="content" source="./media/create-virtual-network-manager-portal/create-connectivity-configuration.png" alt-text="Screenshot of create a connectivity configuration.":::
-1. Once the deployment completes, select **Refresh**, and you see the new connectivity configuration added to the *Configurations* page.
+1. Once the deployment completes, select **Refresh**, and you see the new connectivity configuration added to the **Configurations** page.
:::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-list.png" alt-text="Screenshot of connectivity configuration list.":::
To have your configurations applied to your environment, you need to commit the
| Setting | Value | | - | -- |
- | Configurations | Select the type of configuration you want to deploy. This example selects **Include connectivity configurations in your goal state** . |
- | Connectivity configurations | Select the **ConnectivityConfigA** configuration created from the previous section. |
- | Regions | Select the region to deploy this configuration to. For this example, choose the **West US** region since all the virtual networks were created in that region. |
+ | Configurations | Select **Include connectivity configurations in your goal state**. |
+ | Connectivity configurations | Select **cc-learn-prod-eastus-001**. |
+ | Target regions | Select **East US** as the deployment region. |
1. Select **Next** and then select **Deploy** to complete the deployment. :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-confirmation.png" alt-text="Screenshot of deployment confirmation message.":::
-1. You should now see the deployment show up in the list for the selected region. The deployment of the configuration can take several minutes to complete.
+1. The deployment appears in the list for the selected region. The configuration deployment can take a few minutes to complete.
:::image type="content" source="./media/create-virtual-network-manager-portal/deployment-in-progress.png" alt-text="Screenshot of configuration deployment in progress status."::: ## Verify configuration deployment
-Use the **Network Manager** section for each virtual machine to verify whether configuration was deployed in these steps:
-1. Select **Refresh** on the **Deployments** page to see the updated status of the configuration that you committed.
+Use the **Network Manager** section for each virtual network to verify whether the configuration was deployed in these steps:
- :::image type="content" source="./media/create-virtual-network-manager-portal/deployment-status.png" alt-text="Screenshot of refresh button for updated deployment status.":::
+1. Go to the **vnet-learn-prod-eastus-001** virtual network and select **Network Manager** under **Settings**. Verify that **cc-learn-prod-eastus-001** is listed on the **Connectivity Configurations** tab.
-1. Go to **VNetA** virtual network and select **Network Manager** under *Settings*. You see the configuration you deployed with Azure Virtual Network Manager associated to the virtual network.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/vnet-configuration-association.png" alt-text="Screenshot of connectivity configuration associated with vnet-learn-prod-eastus-001 virtual network." lightbox="./media/create-virtual-network-manager-portal/vnet-configuration-association.png":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/vnet-configuration-association.png" alt-text="Screenshot of connectivity configuration associated with VNetA virtual network.":::
-
-1. You can also confirm the same for **VNetB**,**VNetC**, and **VNetD**.
+1. Repeat the previous step on **vnet-learn-prod-eastus-002**.
## Clean up resources
-If you no longer need Azure Virtual Network Manager, you need to make sure all of following is true before you can delete the resource:
+If you no longer need Azure Virtual Network Manager, the following steps remove all configurations, network groups, and the Virtual Network Manager instance.
-* There are no configurations deployed to any region.
-* All configurations have been deleted.
-* All network groups have been deleted.
+> [!NOTE]
+> Before you can remove Azure Virtual Network Manager, you must remove all deployments, configurations, and network groups.
1. To remove all configurations from a region, start in the virtual network manager and select **Deploy configurations**. Select the following settings:
If you no longer need Azure Virtual Network Manager, you need to make sure all o
| - | -- | | Configurations | Select **Include connectivity configurations in your goal state**. | | Connectivity configurations | Select the **None - Remove existing connectivity configurations** configuration. |
- | Regions | Select **West US** as the deployed region. |
+ | Target regions | Select **East US** as the deployed region. |
1. Select **Next** and select **Deploy** to complete the deployment removal.
-1. To delete a configuration, select **Configurations** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page. Select **Yes** to confirm the configuration deletion.
+1. To delete a configuration, select **Configurations** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the configuration you want to remove and then select **Delete** at the top of the resource page.
- :::image type="content" source="./media/create-virtual-network-manager-portal/delete-configuration.png" alt-text="Screenshot of delete button for a connectivity configuration.":::
+1. On the **Delete a configuration** page, select the following options:
-1. To delete a network group, select **Network Groups** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the network group you want to remove and then select **Delete** at the top of the resource page.
+ :::image type="content" source="./media/create-virtual-network-manager-portal/configuration-delete-options.png" alt-text="Screenshot of configuration to be deleted option selection.":::
- :::image type="content" source="./media/create-virtual-network-manager-portal/delete-network-group.png" alt-text="Screenshot of delete a network group button.":::
+ | Setting | Value |
+ | - | -- |
+ | Delete option | Select **Force delete the resource and all dependent resources**. |
+ | Confirm deletion | Enter the name of the configuration. In this example, it's **cc-learn-prod-eastus-001**. |
+1. To delete a network group, select **Network Groups** under **Settings** from the left pane of Azure Virtual Network Manager. Select the checkbox next to the network group you want to remove and then select **Delete** at the top of the resource page.
1. On the **Delete a network group** page, select the following options:
- :::image type="content" source="./media/create-virtual-network-manager-portal/ng-delete-options.png" alt-text="Screenshot of Network group to be deleted option selection.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/network-group-delete-options.png" alt-text="Screenshot of Network group to be deleted option selection." lightbox="./media/create-virtual-network-manager-portal/network-group-delete-options.png":::
| Setting | Value | | - | -- | | Delete option | Select **Force delete the resource and all dependent resources**. |
- | Confirm deletion | Enter the name of the network group. In this example, it's **myNetworkGroup**. |
+ | Confirm deletion | Enter the name of the network group. In this example, it's **ng-learn-prod-eastus-001**. |
1. Select **Delete** and select **Yes** to confirm the network group deletion.
If you no longer need Azure Virtual Network Manager, you need to make sure all o
| Setting | Value | | - | -- | | Delete option | Select **Force delete the resource and all dependent resources**. |
- | Confirm deletion | Enter the name of the network manager. In this example, it's **myAVNM**. |
+ | Confirm deletion | Enter the name of the network manager. In this example, it's **vnm-learn-eastus-001**. |
-1. To delete the resource group and virtual networks, locate the resource group and select the **Delete resource group**. Confirm that you want to delete by entering the name of the resource group, then select **Delete**
+1. To delete the resource group and virtual networks, locate **rg-learn-eastus-001** and select **Delete resource group**. Confirm that you want to delete it by entering **rg-learn-eastus-001** in the text box, then select **Delete**.
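If you prefer to script the cleanup, a hedged Azure PowerShell sketch using the names from this article (this assumes the Az.Network module, and that all deployments, configurations, and network groups have already been removed):

```azurepowershell
# Remove the Network Manager instance, then the resource group and its virtual networks
Remove-AzNetworkManager -Name "vnm-learn-eastus-001" -ResourceGroupName "rg-learn-eastus-001"
Remove-AzResourceGroup -Name "rg-learn-eastus-001" -Force
```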
## Next steps
virtual-network Kubernetes Network Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/kubernetes-network-policies.md
Once the cluster is deployed, run the following `kubectl` command to download and
For Linux: ```
- kubectl apply -f https://github.com/Azure/azure-container-networking/blob/master/npm/azure-npm.yaml
+ kubectl apply -f https://raw.githubusercontent.com/Azure/azure-container-networking/master/npm/azure-npm.yaml
``` For Windows: ```
- kubectl apply -f https://github.com/Azure/azure-container-networking/blob/master/npm/examples/windows/azure-npm.yaml
+ kubectl apply -f https://raw.githubusercontent.com/Azure/azure-container-networking/master/npm/examples/windows/azure-npm.yaml
``` The solution is also open source and the code is available on the [Azure Container Networking repository](https://github.com/Azure/azure-container-networking/tree/master/npm).
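After applying the manifest, you can confirm that the Network Policy Manager pods are running; a quick check, assuming the daemonset keeps its default `azure-npm` name and label in the `kube-system` namespace:

```
kubectl get pods -n kube-system -l k8s-app=azure-npm
```

Each node in the cluster should show one pod in the `Running` state.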
Following are some sample dashboards for Network Policy Manager metrics in contai
- Learn about [container networking](container-networking-overview.md). -- [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers.
+- [Deploy the plug-in](deploy-container-networking.md) for Kubernetes clusters or Docker containers.
virtual-wan Howto Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-private-link.md
Previously updated : 09/22/2020 Last updated : 03/30/2023 # Use Private Link in Virtual WAN
-[Azure Private Link](../private-link/private-link-overview.md) is a technology that allows you to connect Azure Platform-as-a-Service offerings using private IP address connectivity by exposing [Private Endpoints](../private-link/private-endpoint-overview.md). With Azure Virtual WAN, you can deploy a Private Endpoint in one of the virtual networks connected to any virtual hub. This provides connectivity to any other virtual network or branch connected to the same Virtual WAN.
+[Azure Private Link](../private-link/private-link-overview.md) is a technology that allows you to connect Azure Platform-as-a-Service offerings using private IP address connectivity by exposing [Private Endpoints](../private-link/private-endpoint-overview.md). With Azure Virtual WAN, you can deploy a Private Endpoint in one of the virtual networks connected to any virtual hub. This private link provides connectivity to any other virtual network or branch connected to the same Virtual WAN.
## Before you begin
-The steps in this article assume that you have already deployed a virtual WAN with one or more hubs, as well as at least two virtual networks connected to Virtual WAN.
+The steps in this article assume that you've already deployed a virtual WAN with one or more hubs and at least two virtual networks connected to Virtual WAN.
To create a new virtual WAN and a new hub, use the steps in the following articles:
To create a new virtual WAN and a new hub, use the steps in the following articl
## <a name="endpoint"></a>Create a private link endpoint
-You can create a private link endpoint for many different services. In this example, we will use Azure SQL Database. You can find more information about how to create a private endpoint for an Azure SQL Database in [Quickstart: Create a Private Endpoint using the Azure portal](../private-link/create-private-endpoint-portal.md). The following image shows the network configuration of the Azure SQL Database:
+You can create a private link endpoint for many different services. In this example, we're using Azure SQL Database. You can find more information about how to create a private endpoint for an Azure SQL Database in [Quickstart: Create a Private Endpoint using the Azure portal](../private-link/create-private-endpoint-portal.md). The following image shows the network configuration of the Azure SQL Database:
:::image type="content" source="./media/howto-private-link/create-private-link.png" alt-text="create private link" lightbox="./media/howto-private-link/create-private-link.png":::
After creating the Azure SQL Database, you can verify the private endpoint IP ad
:::image type="content" source="./media/howto-private-link/endpoints.png" alt-text="private endpoints" lightbox="./media/howto-private-link/endpoints.png":::
-Clicking on the private endpoint we have created, you should see its private IP address, as well as its Fully Qualified Domain Name (FQDN). Note that the private endpoint has an IP address in the range of the VNet where it has been deployed (10.1.3.0/24):
+If you select the private endpoint you've created, you should see its private IP address and its Fully Qualified Domain Name (FQDN). The private endpoint should have an IP address in the range of the VNet where it has been deployed (10.1.3.0/24):
:::image type="content" source="./media/howto-private-link/sql-endpoint.png" alt-text="SQL endpoint" lightbox="./media/howto-private-link/sql-endpoint.png"::: ## <a name="connectivity"></a>Verify connectivity from the same VNet
-In this example, we will verify connectivity to the Azure SQL Database from an Ubuntu virtual machine with MS SQL tools installed. The first step is verifying that DNS resolution works and the Azure SQL Database Fully Qualified Domain Name is resolved to a private IP address, in the same VNet where the Private Endpoint has been deployed (10.1.3.0/24):
+In this example, we verify connectivity to the Azure SQL Database from a Linux virtual machine with the MS SQL tools installed. The first step is verifying that DNS resolution works and the Azure SQL Database Fully Qualified Domain Name is resolved to a private IP address, in the same VNet where the Private Endpoint has been deployed (10.1.3.0/24):
```bash
-$ nslookup wantest.database.windows.net
+nslookup wantest.database.windows.net
+```
+
+```output
Server: 127.0.0.53 Address: 127.0.0.53#53
Name: wantest.privatelink.database.windows.net
Address: 10.1.3.228 ```
-As you can see in the previous output, the FQDN `wantest.database.windows.net` is mapped to `wantest.privatelink.database.windows.net`, that the private DNS zone created along the private endpoint will resolve to the private IP address `10.1.3.228`. Looking into the private DNS zone will confirm that there is an A record for the private endpoint mapped to the private IP address:
+As you can see in the previous output, the FQDN `wantest.database.windows.net` is mapped to `wantest.privatelink.database.windows.net`, which the private DNS zone created alongside the private endpoint resolves to the private IP address `10.1.3.228`. Looking into the private DNS zone confirms that there's an A record for the private endpoint mapped to the private IP address; you can check this in the portal or with the query sketched below:
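If you want to script the check, a hedged Azure PowerShell sketch (assuming the Az.PrivateDns module; the resource group name is a placeholder):

```azurepowershell
# List the A records in the zone created alongside the private endpoint
Get-AzPrivateDnsRecordSet -ResourceGroupName "myResourceGroup" `
  -ZoneName "privatelink.database.windows.net" -RecordType A
```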
:::image type="content" source="./media/howto-private-link/dns-zone.png" alt-text="DNS zone" lightbox="./media/howto-private-link/dns-zone.png"::: After verifying the correct DNS resolution, we can attempt to connect to the database: ```bash
-$ query="SELECT CONVERT(char(15), CONNECTIONPROPERTY('client_net_address'));"
-$ sqlcmd -S wantest.database.windows.net -U $username -P $password -Q "$query"
+query="SELECT CONVERT(char(15), CONNECTIONPROPERTY('client_net_address'));"
+sqlcmd -S wantest.database.windows.net -U $username -P $password -Q "$query"
+```
+```output
10.1.3.75 ```
-As you can see, we are using a special SQL query that gives us the source IP address that the SQL server sees from the client. In this case the server sees the client with its private IP (`10.1.3.75`), which means that the traffic goes from the VNet straight into the private endpoint.
+As you can see, we're using a special SQL query that returns the source IP address that the SQL server sees from the client. In this case, the server sees the client's private IP address (`10.1.3.75`), which means that the traffic goes from the VNet straight into the private endpoint.
-Note that you need to set the variables `username` and `password` to match the credentials defined in the Azure SQL Database to make the examples in this guide work.
+Set the variables `username` and `password` to match the credentials defined in the Azure SQL Database to make the examples in this guide work.
## <a name="vnet"></a>Connect from a different VNet
Once you have connectivity between the VNet or the branch to the VNet where the
* If connecting to the private endpoint from a VNet, you can use the same private zone that was created with the Azure SQL Database. * If connecting to the private endpoint from a branch (Site-to-site VPN, Point-to-site VPN or ExpressRoute), you need to use on-premises DNS resolution.
-In this example we will connect from a different VNet, so first we will attach the private DNS zone to the new VNet so that its workloads can resolve the Azure SQL Database Fully Qualified Domain Name to the private IP address. This is done through linking the private DNS zone to the new VNet:
+In this example, we're connecting from a different VNet. First, attach the private DNS zone to the new VNet so that its workloads can resolve the Azure SQL Database Fully Qualified Domain Name to the private IP address. You do this by linking the private DNS zone to the new VNet, either in the portal or with a script like the sketch below:
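A hedged Azure PowerShell sketch of the link step (the zone name comes from this example; the resource group and VNet names are placeholders):

```azurepowershell
# Link the private DNS zone to the second VNet so its workloads resolve
# the database FQDN to the private endpoint's IP address
$vnet2 = Get-AzVirtualNetwork -Name "VNet2" -ResourceGroupName "myResourceGroup"
New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "myResourceGroup" `
  -ZoneName "privatelink.database.windows.net" `
  -Name "link-vnet2" -VirtualNetworkId $vnet2.Id
```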
:::image type="content" source="./media/howto-private-link/dns-link.png" alt-text="DNS link" lightbox="./media/howto-private-link/dns-link.png"::: Now any virtual machine in the attached VNet should correctly resolve the Azure SQL Database FQDN to the private link's private IP address: ```bash
-$ nslookup wantest.database.windows.net
+nslookup wantest.database.windows.net
+```
+
+```output
Server: 127.0.0.53 Address: 127.0.0.53#53
In order to double-check that this VNet (10.1.1.0/24) has connectivity to the or
:::image type="content" source="./media/howto-private-link/effective-routes.png" alt-text="effective routes" lightbox="./media/howto-private-link/effective-routes.png":::
-As you can see, there is a route pointing to the VNet 10.1.3.0/24 injected by the Virtual Network Gateways in Azure Virtual WAN. Now we can finally test connectivity to the database:
+As you can see, there's a route pointing to the VNet 10.1.3.0/24 injected by the Virtual Network Gateways in Azure Virtual WAN. Now we can finally test connectivity to the database:
```bash
-$ query="SELECT CONVERT(char(15), CONNECTIONPROPERTY('client_net_address'));"
-$ sqlcmd -S wantest.database.windows.net -U $username -P $password -Q "$query"
+query="SELECT CONVERT(char(15), CONNECTIONPROPERTY('client_net_address'));"
+sqlcmd -S wantest.database.windows.net -U $username -P $password -Q "$query"
+```
+```output
10.1.1.75 ```
-With this example, we have seen how creating a private endpoint in one of the VNets attached to a Virtual WAN provides connectivity to the rest of VNets and branches in the Virtual WAN.
+With this example, we've seen how creating a private endpoint in one of the VNets attached to a Virtual WAN provides connectivity to the rest of the VNets and branches in the Virtual WAN.
## Next steps
vpn-gateway Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/design.md
description: Learn about VPN Gateway topologies and designs to connect on-premis
Previously updated : 02/13/2023 Last updated : 04/10/2023
It's important to know that there are different configurations available for VPN gateway connections. You need to determine which configuration best fits your needs. In the sections below, you can view design information and topology diagrams about the following VPN gateway connections. Use the diagrams and descriptions to help select the connection topology to match your requirements. The diagrams show the main baseline topologies, but it's possible to build more complex configurations using the diagrams as guidelines.
-## <a name="s2smulti"></a>Site-to-Site VPN
+## <a name="s2smulti"></a>Site-to-site VPN
-A Site-to-Site (S2S) VPN gateway connection is a connection over IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. S2S connections can be used for cross-premises and hybrid configurations. A S2S connection requires a VPN device located on-premises that has a public IP address assigned to it. For information about selecting a VPN device, see the [VPN Gateway FAQ - VPN devices](vpn-gateway-vpn-faq.md#s2s).
+A Site-to-site (S2S) VPN gateway connection is a connection over IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. S2S connections can be used for cross-premises and hybrid configurations. An S2S connection requires a VPN device located on-premises that has a public IP address assigned to it. For information about selecting a VPN device, see the [VPN Gateway FAQ - VPN devices](vpn-gateway-vpn-faq.md#s2s).
-![Azure VPN Gateway Site-to-Site connection example](./media/design/vpngateway-site-to-site-connection-diagram.png)
VPN Gateway can be configured in active-standby mode using one public IP or in active-active mode using two public IPs. In active-standby mode, one IPsec tunnel is active and the other tunnel is in standby. In this setup, traffic flows through the active tunnel, and if an issue occurs with this tunnel, the traffic switches over to the standby tunnel. Setting up VPN Gateway in active-active mode is *recommended*; in this mode, both IPsec tunnels are simultaneously active, with data flowing through both tunnels at the same time. An additional advantage of active-active mode is higher throughput. You can create more than one VPN connection from your virtual network gateway, typically connecting to multiple on-premises sites. When working with multiple connections, you must use a RouteBased VPN type (known as a dynamic gateway when working with classic VNets). Because each virtual network can only have one VPN gateway, all connections through the gateway share the available bandwidth. This type of connection is sometimes referred to as a "multi-site" connection.
-![Azure VPN Gateway Multi-Site connection example](./media/design/vpngateway-multisite-connection-diagram.png)
### Deployment models and methods for S2S [!INCLUDE [site-to-site table](../../includes/vpn-gateway-table-site-to-site-include.md)]
-## <a name="P2S"></a>Point-to-Site VPN
+## <a name="P2S"></a>Point-to-site VPN
-A Point-to-Site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.
+A point-to-site (P2S) VPN gateway connection lets you create a secure connection to your virtual network from an individual client computer. A P2S connection is established by starting it from the client computer. This solution is useful for telecommuters who want to connect to Azure VNets from a remote location, such as from home or a conference. P2S VPN is also a useful solution to use instead of S2S VPN when you have only a few clients that need to connect to a VNet.
-Unlike S2S connections, P2S connections do not require an on-premises public-facing IP address or a VPN device. P2S connections can be used with S2S connections through the same VPN gateway, as long as all the configuration requirements for both connections are compatible. For more information about Point-to-Site connections, see [About Point-to-Site VPN](point-to-site-about.md).
+Unlike S2S connections, P2S connections don't require an on-premises public-facing IP address or a VPN device. P2S connections can be used with S2S connections through the same VPN gateway, as long as all the configuration requirements for both connections are compatible. For more information about point-to-site connections, see [About point-to-site VPN](point-to-site-about.md).
-![Azure VPN Gateway Point-to-Site connection example](./media/design/point-to-site.png)
### Deployment models and methods for P2S
The VNets you connect can be:
* in the same or different subscriptions * in the same or different deployment models
-![Azure VPN Gateway VNet to VNet connection example](./media/design/vpngateway-vnet-to-vnet-connection-diagram.png)
### Connections between deployment models
Azure currently has two deployment models: classic and Resource Manager. If you
### VNet peering
-You may be able to use VNet peering to create your connection, as long as your virtual network meets certain requirements. VNet peering does not use a virtual network gateway. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
+You may be able to use VNet peering to create your connection, as long as your virtual network meets certain requirements. VNet peering doesn't use a virtual network gateway. For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md).
### Deployment models and methods for VNet-to-VNet [!INCLUDE [vpn-gateway-table-vnet-to-vnet](../../includes/vpn-gateway-table-vnet-to-vnet-include.md)]
-## <a name="coexisting"></a>Site-to-Site and ExpressRoute coexisting connections
+## <a name="coexisting"></a>Site-to-site and ExpressRoute coexisting connections
-[ExpressRoute](../expressroute/expressroute-introduction.md) is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-Site VPN traffic travels encrypted over the public Internet. Being able to configure Site-to-Site VPN and ExpressRoute connections for the same virtual network has several advantages.
+[ExpressRoute](../expressroute/expressroute-introduction.md) is a direct, private connection from your WAN (not over the public Internet) to Microsoft Services, including Azure. Site-to-site VPN traffic travels encrypted over the public Internet. Being able to configure site-to-site VPN and ExpressRoute connections for the same virtual network has several advantages.
-You can configure a Site-to-Site VPN as a secure failover path for ExpressRoute, or use Site-to-Site VPNs to connect to sites that are not part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.
+You can configure a site-to-site VPN as a secure failover path for ExpressRoute, or use site-to-site VPNs to connect to sites that aren't part of your network, but that are connected through ExpressRoute. Notice that this configuration requires two virtual network gateways for the same virtual network, one using the gateway type 'Vpn', and the other using the gateway type 'ExpressRoute'.
-![ExpressRoute and VPN Gateway coexisting connections example](./media/design/expressroute-vpngateway-coexisting-connections-diagram.png)
### Deployment models and methods for S2S and ExpressRoute coexist
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md
Previously updated : 09/21/2022 Last updated : 04/10/2023
Last updated 09/21/2022
This tutorial shows you how to use the Azure portal to create a site-to-site VPN gateway connection between your on-premises network and a virtual network (VNet). You can also create this configuration using [Azure PowerShell](vpn-gateway-create-site-to-site-rm-powershell.md) or [Azure CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md). In this tutorial, you learn how to:
Create a local network gateway using the following values:
Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values: * A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
-* The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, go to **Virtual network gateways**, then select the name of your gateway.
+* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your VPN gateway using the Azure portal, go to **Virtual network gateways**, then select the name of your gateway.
[!INCLUDE [Configure a VPN device](../../includes/vpn-gateway-configure-vpn-device-include.md)]
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
description: Learn about VPN devices and IPsec parameters for Site-to-Site cross
Previously updated : 10/24/2022 Last updated : 04/07/2023
To help configure your VPN device, refer to the links that correspond to the app
| Sentrium (Developer) | VyOS | VyOS 1.2.2 | Not tested | [Configuration guide](https://docs.vyos.io/en/latest/configexamples/azure-vpn-bgp.html)| | ShareTech | Next Generation UTM (NU series) | 9.0.1.3 | Not compatible | [Configuration guide](http://www.sharetech.com.tw/images/file/Solution/NU_UTM/S2S_VPN_with_Azure_Route_Based_en.pdf) | | SonicWall |TZ Series, NSA Series<br>SuperMassive Series<br>E-Class NSA Series |SonicOS 5.8.x<br>SonicOS 5.9.x<br>SonicOS 6.x |Not compatible |[Configuration guide](https://www.sonicwall.com/support/knowledge-base/170505320011694) |
-| Sophos | XG Next Gen Firewall | XG v17 | Not tested | [Configuration guide](https://community.sophos.com/kb/127546)<br><br>[Configuration guide - Multiple SAs](https://community.sophos.com/kb/en-us/133154) |
+| Sophos | XG Next Gen Firewall | XG v17 | Not tested | [Configuration guide](https://community.sophos.com/sophos-xg-firewall/f/recommended-reads/118402/sophos-xg-firewall-v17-x-how-to-establish-a-site-to-site-ipsec-vpn-to-microsoft-azure)<br><br>[Configuration guide - Multiple SAs](https://community.sophos.com/sophos-xg-firewall/f/recommended-reads/118404/sophos-firewall-configure-a-site-to-site-ipsec-vpn-with-multiple-sas-to-a-route-based-azure-vpn-gateway) |
| Synology | MR2200ac <br>RT2600ac <br>RT1900ac | SRM1.1.5/VpnPlusServer-1.2.0 | Not tested | [Configuration guide](https://www.synology.com/en-global/knowledgebase/SRM/tutorial/VPN/How_to_set_up_Site_to_Site_VPN_between_Synology_Router_and_MS_Azure) | | Ubiquiti | EdgeRouter | EdgeOS v1.10 | Not tested | [BGP over IKEv2/IPsec](https://help.ubnt.com/hc/en-us/articles/115012374708)<br><br>[VTI over IKEv2/IPsec](https://help.ubnt.com/hc/en-us/articles/115012305347) | | Ultra | 3E-636L3 | 5.2.0.T3 Build-13 | Not tested | Configuration guide |
vpn-gateway Vpn Gateway Create Site To Site Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md
Title: 'Connect your on-premises network to an Azure VNet: Site-to-Site VPN: PowerShell'
-description: Learn how to create a Site-to-Site VPN Gateway connection between your on-premises network and an Azure VNet using PowerShell.
+ Title: 'Connect your on-premises network to an Azure VNet: site-to-site VPN: PowerShell'
+description: Learn how to create a site-to-site VPN Gateway connection between your on-premises network and an Azure VNet using PowerShell.
Previously updated : 04/28/2021 Last updated : 04/10/2023
-# Create a VNet with a Site-to-Site VPN connection using PowerShell
+# Create a VNet with a site-to-site VPN connection using PowerShell
-This article shows you how to use PowerShell to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:
+This article shows you how to use PowerShell to create a site-to-site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:
> [!div class="op_single_selector"] > * [Azure portal](./tutorial-site-to-site-portal.md)
This article shows you how to use PowerShell to create a Site-to-Site VPN gatewa
> >
-A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
+A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
## <a name="before"></a>Before you begin
Verify that you have met the following criteria before beginning your configurat
* Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * Verify that you have an externally facing public IPv4 address for your VPN device.
-* If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can over lap with the virtual network subnets that you want to connect to.
+* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to.
### Azure PowerShell
Create your virtual network.
$subnet1 = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 $subnet2 = New-AzVirtualNetworkSubnetConfig -Name 'Frontend' -AddressPrefix 10.1.0.0/24 ```
-2. Create the VNet.
+
+1. Create the VNet.
```azurepowershell-interactive New-AzVirtualNetwork -Name VNet1 -ResourceGroupName TestRG1 `
Use the steps in this section if you already have a virtual network, but need to
```azurepowershell-interactive $vnet = Get-AzVirtualNetwork -ResourceGroupName TestRG1 -Name VNet1 ```
-2. Create the gateway subnet.
+
+1. Create the gateway subnet.
```azurepowershell-interactive Add-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix 10.1.255.0/27 -VirtualNetwork $vnet ```
-3. Set the configuration.
+
+1. Set the configuration.
```azurepowershell-interactive Set-AzVirtualNetwork -VirtualNetwork $vnet
Use the steps in this section if you already have a virtual network, but need to
## 2. <a name="localnet"></a>Create the local network gateway
-The local network gateway (LNG) typically refers to your on-premises location. It is not the same as a virtual network gateway. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes, you can easily update the prefixes.
+The local network gateway (LNG) typically refers to your on-premises location. It isn't the same as a virtual network gateway. You give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which you will create a connection. You also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes you specify are the prefixes located on your on-premises network. If your on-premises network changes, you can easily update the prefixes.
Use the following values:
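As an illustration, a hedged Azure PowerShell sketch using the site name (Site1) and resource group (TestRG1) referenced later in this article; the VPN device IP address and the on-premises address prefix are placeholders:

```azurepowershell-interactive
# Create the local network gateway that represents the on-premises site
New-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 `
  -Location 'East US' -GatewayIpAddress '203.0.113.5' `
  -AddressPrefix '10.101.0.0/24'
```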
To modify IP address prefixes for your local network gateway:
Sometimes your local network gateway prefixes change. The steps you take to modify your IP address prefixes depend on whether you have created a VPN gateway connection. See the [Modify IP address prefixes for a local network gateway](#modify) section of this article.
-## <a name="PublicIP"></a>3. Request a Public IP address
-
-A VPN gateway must have a Public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created.
+## <a name="PublicIP"></a>3. Request a public IP address
-VPN Gateway currently only supports *Dynamic* Public IP address allocation. You cannot request a Static Public IP address assignment. However, this does not mean that the IP address will change after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
+A VPN gateway must have a public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
-Request a Public IP address that will be assigned to your virtual network VPN gateway.
+Request a public IP address for your virtual network VPN gateway.
```azurepowershell-interactive $gwpip= New-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1 -Location 'East US' -AllocationMethod Static -Sku Standard
Creating a gateway can often take 45 minutes or more, depending on the selected
## <a name="ConfigureVPNDevice"></a>6. Configure your VPN device
-Site-to-Site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following items:
+Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following items:
-- A shared key. This is the same shared key that you specify when creating your Site-to-Site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.-- The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your virtual network gateway using PowerShell, use the following example. In this example, VNet1GWPIP is the name of the public IP address resource that you created in an earlier step.
+* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
+* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your virtual network gateway using PowerShell, use the following example. In this example, VNet1GWPIP is the name of the public IP address resource that you created in an earlier step.
```azurepowershell-interactive Get-AzPublicIpAddress -Name VNet1GWPIP -ResourceGroupName TestRG1
Site-to-Site connections to an on-premises network require a VPN device. In this
[!INCLUDE [Configure VPN device](../../includes/vpn-gateway-configure-vpn-device-rm-include.md)] - ## <a name="CreateConnection"></a>7. Create the VPN connection
-Next, create the Site-to-Site VPN connection between your virtual network gateway and your VPN device. Be sure to replace the values with your own. The shared key must match the value you used for your VPN device configuration. Notice that the '-ConnectionType' for Site-to-Site is **IPsec**.
+Next, create the site-to-site VPN connection between your virtual network gateway and your VPN device. Be sure to replace the values with your own. The shared key must match the value you used for your VPN device configuration. Notice that the '-ConnectionType' for site-to-site is **IPsec**.
1. Set the variables. ```azurepowershell-interactive $gateway1 = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 $local = Get-AzLocalNetworkGateway -Name Site1 -ResourceGroupName TestRG1 ```
-2. Create the connection.
+1. Create the connection.
+ ```azurepowershell-interactive New-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1 ` -Location 'East US' -VirtualNetworkGateway1 $gateway1 -LocalNetworkGateway2 $local `
There are a few different ways to verify your VPN connection.
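One quick check from PowerShell (a hedged sketch, assuming the connection name used earlier): the `ConnectionStatus` property reports `Connected` once the tunnel is established.

```azurepowershell-interactive
Get-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 -ResourceGroupName TestRG1 |
  Select-Object Name, ConnectionStatus
```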
[!INCLUDE [Connect to a VM](../../includes/vpn-gateway-connect-vm.md)] - ## <a name="modify"></a>To modify IP address prefixes for a local network gateway If the IP address prefixes that you want routed to your on-premises location change, you can modify the local network gateway. When using these examples, modify the values to match your environment.
Remove-AzVirtualNetworkGatewayConnection -Name VNet1toSite1 `
* Once your connection is complete, you can add virtual machines to your virtual networks. For more information, see [Virtual Machines](../index.yml). * For information about BGP, see the [BGP Overview](vpn-gateway-bgp-overview.md) and [How to configure BGP](vpn-gateway-bgp-resource-manager-ps.md).
-* For information about creating a site-to-site VPN connection using Azure Resource Manager template, see [Create a Site-to-Site VPN Connection](https://azure.microsoft.com/resources/templates/site-to-site-vpn-create/).
+* For information about creating a site-to-site VPN connection using Azure Resource Manager template, see [Create a site-to-site VPN Connection](https://azure.microsoft.com/resources/templates/site-to-site-vpn-create/).
* For information about creating a vnet-to-vnet VPN connection using Azure Resource Manager template, see [Deploy HBase geo replication](https://azure.microsoft.com/resources/templates/hdinsight-hbase-replication-geo/).
vpn-gateway Vpn Gateway Howto Multi Site To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md
Title: 'Add multiple VPN Gateway Site-to-Site connections to a VNet: Azure portal'
-description: Learn how to add additional Site-to-Site connections to a VPN gateway.
+ Title: 'Add multiple VPN Gateway site-to-site connections to a VNet: Azure portal'
+description: Learn how to add additional site-to-site connections to a VPN gateway.
Previously updated : 04/29/2021 Last updated : 04/10/2023
> * [PowerShell (classic)](vpn-gateway-multi-site.md) >
-This article helps you add additional Site-to-Site (S2S) connections to a VPN gateway that has an existing connection. This architecture is often referred to as a "multi-site" configuration. You can add a S2S connection to a VNet that already has a S2S connection, Point-to-Site connection, or VNet-to-VNet connection. There are some limitations when adding connections. Check the [Prerequisites](#before) section in this article to verify before you start your configuration.
+This article helps you add additional site-to-site (S2S) connections to a VPN gateway that has an existing connection. This architecture is often referred to as a "multi-site" configuration. You can add an S2S connection to a VNet that already has an S2S connection, point-to-site connection, or VNet-to-VNet connection. There are some limitations when adding connections. Check the [Prerequisites](#before) section in this article to verify before you start your configuration.
-**About ExpressRoute/Site-to-Site coexisting connections**
-* You can use the steps in this article to add a new VPN connection to an already existing ExpressRoute/Site-to-Site coexisting connection.
-* You can't use the steps in this article to configure a new ExpressRoute/Site-to-Site coexisting connection. To create a new coexsiting connection see: [ExpressRoute/S2S coexisting connections](../expressroute/expressroute-howto-coexist-resource-manager.md).
+**About ExpressRoute/site-to-site coexisting connections**
+
+* You can use the steps in this article to add a new VPN connection to an already existing ExpressRoute/site-to-site coexisting connection.
+* You can't use the steps in this article to configure a new ExpressRoute/site-to-site coexisting connection. To create a new coexisting connection, see [ExpressRoute/S2S coexisting connections](../expressroute/expressroute-howto-coexist-resource-manager.md).
## <a name="before"></a>Prerequisites Verify the following items:
-* You are NOT configuring a new coexisting ExpressRoute and VPN Gateway Site-to-Site connection.
+* You're NOT configuring a new coexisting ExpressRoute and VPN Gateway site-to-site connection.
* You have a virtual network that was created using the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) with an existing connection. * The virtual network gateway for your VNet is RouteBased. If you have a PolicyBased VPN gateway, you must delete the virtual network gateway and create a new VPN gateway as RouteBased. * None of the address ranges overlap for any of the VNets that this VNet is connecting to.
Verify the following items:
:::image type="content" source="./media/vpn-gateway-howto-multi-site-to-site-resource-manager-portal/add-connection.png" alt-text="Add connection page"::: 1. On the **Add connection** page, fill out the following fields:
- * **Name:** The name you want to give to the site you are creating the connection to.
+ * **Name:** The name you want to give to the site you're creating the connection to.
* **Connection type:** Select **Site-to-site (IPsec)**. ## <a name="local"></a>Add a local network gateway
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
description: Learn how to configure VPN Gateway server settings for P2S configur
Previously updated : 01/18/2023 Last updated : 04/10/2023
This article helps you configure the necessary VPN Gateway point-to-site (P2S) server settings to let you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. P2S VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site (S2S) VPN when you have only a few clients that need to connect to a virtual network (VNet). P2S connections don't require a VPN device or a public-facing IP address. There are various configuration options available for P2S. For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). This article helps you create a P2S configuration that uses **certificate authentication** and the Azure portal. To create this configuration using Azure PowerShell, see the [Configure P2S - Certificate - PowerShell](vpn-gateway-howto-point-to-site-rm-ps.md) article. For RADIUS authentication, see the [P2S RADIUS](point-to-site-how-to-radius-ps.md) article. For Azure Active Directory authentication, see the [P2S Azure AD](openvpn-azure-ad-tenant.md) article. [!INCLUDE [P2S basic architecture](../../includes/vpn-gateway-p2s-architecture.md)]
vpn-gateway Vpn Gateway Howto Point To Site Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md
Previously updated : 01/18/2023 Last updated : 04/10/2023 # Configure server settings for P2S VPN Gateway connections - certificate authentication - Azure PowerShell
-This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such when you are telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. Point-to-site connections do not require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2.
+This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a site-to-site VPN when you have only a few clients that need to connect to a VNet. Point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol) or IKEv2.
For more information about point-to-site VPN, see [About point-to-site VPN](point-to-site-about.md). To create this configuration using the Azure portal, see [Configure a point-to-site VPN using the Azure portal](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
Verify that you have an Azure subscription. If you don't already have an Azure s
## <a name="declare"></a>Declare variables
-We use variables for this article so that you can easily change the values to apply to your own environment without having to change the examples themselves. Declare the variables that you want to use. You can use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to re-declare the variables.
+We use variables for this article so that you can easily change the values to apply to your own environment without having to change the examples themselves. Declare the variables that you want to use. You can use the following sample, substituting the values for your own when necessary. If you close your PowerShell/Cloud Shell session at any point during the exercise, just copy and paste the values again to redeclare the variables.
```azurepowershell-interactive $VNetName = "VNet1"
$DNS = "10.2.1.4"
1. Create the virtual network.
- In this example, the -DnsServer server parameter is optional. Specifying a value does not create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you are connecting to from your VNet. This example uses a private IP address, but it is likely that this is not the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection or the VPN client.
+ In this example, the -DnsServer parameter is optional. Specifying a value doesn't create a new DNS server. The DNS server IP address that you specify should be a DNS server that can resolve the names for the resources you're connecting to from your VNet. This example uses a private IP address, but it's likely that this isn't the IP address of your DNS server. Be sure to use your own values. The value you specify is used by the resources that you deploy to the VNet, not by the P2S connection or the VPN client.
```azurepowershell-interactive New-AzVirtualNetwork `
$DNS = "10.2.1.4"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet ```
-1. A VPN gateway must have a Public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. VPN Gateway currently only supports *Dynamic* Public IP address allocation. You cannot request a Static Public IP address assignment. However, it doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
+1. A VPN gateway must have a public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. VPN Gateway currently only supports *dynamic* public IP address allocation. You can't request a static public IP address assignment. However, it doesn't mean that the IP address changes after it has been assigned to your VPN gateway. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
Request a dynamically assigned public IP address.
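A hedged sketch of this step (the public IP and IP configuration names are placeholders; `$subnet` comes from the previous step):

```azurepowershell-interactive
# Request a dynamic public IP and build the gateway IP configuration
$pip = New-AzPublicIpAddress -Name "VNet1GWpip" -ResourceGroupName "TestRG1" `
  -Location "East US" -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name "gwipconf" `
  -Subnet $subnet -PublicIpAddress $pip
```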
$DNS = "10.2.1.4"
In this step, you configure and create the virtual network gateway for your VNet. For more complete information about authentication and tunnel type, see [Specify tunnel and authentication type](vpn-gateway-howto-point-to-site-resource-manager-portal.md#type) in the Azure portal version of this article. * The -GatewayType must be **Vpn** and the -VpnType must be **RouteBased**.
-* The -VpnClientProtocol is used to specify the types of tunnels that you would like to enable. The tunnel options are **OpenVPN, SSTP**, and **IKEv2**. You can choose to enable one of them or any supported combination. If you want to enable multiple types, then specify the names separated by a comma. OpenVPN and SSTP cannot be enabled together. The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesnΓÇÖt connect, they fall back to SSTP. You can use the OpenVPN client to connect to OpenVPN tunnel type.
-* The virtual network gateway 'Basic' SKU does not support IKEv2, OpenVPN, or RADIUS authentication. If you are planning on having Mac clients connect to your virtual network, do not use the Basic SKU.
+* The -VpnClientProtocol is used to specify the types of tunnels that you would like to enable. The tunnel options are **OpenVPN, SSTP**, and **IKEv2**. You can choose to enable one of them or any supported combination. If you want to enable multiple types, then specify the names separated by a comma. OpenVPN and SSTP can't be enabled together. The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesn't connect, they fall back to SSTP. You can use the OpenVPN client to connect to the OpenVPN tunnel type.
+* The virtual network gateway 'Basic' SKU doesn't support IKEv2, OpenVPN, or RADIUS authentication. If you're planning on having Mac clients connect to your virtual network, don't use the Basic SKU.
* A VPN gateway can take 45 minutes or more to complete, depending on the [gateway sku](vpn-gateway-about-vpn-gateway-settings.md) you select. 1. Configure and create the virtual network gateway for your VNet. It takes approximately 45 minutes for the gateway to create.
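A hedged sketch of the creation step (building on the `$ipconf` object sketched earlier; the SKU and tunnel protocol are example choices, not requirements):

```azurepowershell-interactive
# Create the route-based VPN gateway; this can take 45 minutes or more
New-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" `
  -Location "East US" -IpConfigurations $ipconf -GatewayType Vpn `
  -VpnType RouteBased -GatewaySku VpnGw1 -VpnClientProtocol IkeV2
```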
In this step, you configure and create the virtual network gateway for your VNet
## <a name="addresspool"></a>Add the VPN client address pool
-After the VPN gateway finishes creating, you can add the VPN client address pool. The VPN client address pool is the range from which the VPN clients receive an IP address when connecting. Use a private IP address range that does not overlap with the on-premises location that you connect from, or with the VNet that you want to connect to.
+After the VPN gateway finishes creating, you can add the VPN client address pool. The VPN client address pool is the range from which the VPN clients receive an IP address when connecting. Use a private IP address range that doesn't overlap with the on-premises location that you connect from, or with the VNet that you want to connect to.
In this example, the VPN client address pool is declared as a [variable](#declare) in an earlier step.
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientAddressPoo
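A complete invocation might look like the following sketch; the gateway name, resource group, and address pool range are assumed example values.

```azurepowershell
# A sketch; replace the names and the pool range with your own values.
$Gateway = Get-AzVirtualNetworkGateway -ResourceGroupName "TestRG1" -Name "VNet1GW"
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientAddressPool "172.16.201.0/24"
```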
Certificates are used by Azure to authenticate VPN clients for point-to-site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered 'trusted'. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User/Personal certificate store. The certificate is used to authenticate the client when it initiates a connection to the VNet.
-If you use self-signed certificates, they must be created using specific parameters. You can create a self-signed certificate using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or, if you don't have Windows 10 or later, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important that you follow the steps in the instructions when generating self-signed root certificates and client certificates. Otherwise, the certificates you generate will not be compatible with P2S connections and you receive a connection error.
+If you use self-signed certificates, they must be created using specific parameters. You can create a self-signed certificate using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or, if you don't have Windows 10 or later, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important that you follow the steps in the instructions when generating self-signed root certificates and client certificates. Otherwise, the certificates you generate won't be compatible with P2S connections, and you'll receive a connection error.
### <a name="cer"></a>Root certificate
If you use self-signed certificates, they must be created using specific paramet
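As one illustration of those required parameters, a self-signed root certificate can be generated with PowerShell along these lines; the subject name is illustrative.

```azurepowershell
# A sketch following the documented parameter requirements; "P2SRootCert" is illustrative.
$rootcert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
  -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
  -HashAlgorithm sha256 -KeyLength 2048 `
  -CertStoreLocation "Cert:\CurrentUser\My" `
  -KeyUsageProperty Sign -KeyUsage CertSign
```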
1. [!INCLUDE [Generate a client certificate](../../includes/vpn-gateway-p2s-clientcert-include.md)]
-1. After you create client certificate, [export](vpn-gateway-certificates-point-to-site.md#clientexport) it. The client certificate will be distributed to the client computers that will connect.
+1. After you create a client certificate, [export](vpn-gateway-certificates-point-to-site.md#clientexport) it. You'll distribute the client certificate to the client computers that will connect.
## <a name="upload"></a>Upload root certificate public key information
Verify that your VPN gateway has finished creating. Once it has completed, you c
$CertBase64 = [system.convert]::ToBase64String($cert.RawData) ```
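For context, the $cert object used above is typically constructed from the exported .cer file; the path in this sketch is hypothetical.

```azurepowershell
# A sketch; the file path is a hypothetical example.
$filePathForCert = "C:\cert\P2SRootCert.cer"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($filePathForCert)
```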
-1. Upload the public key information to Azure. Once the certificate information is uploaded, Azure considers it to be a trusted root certificate. When uploading, make sure you are running PowerShell locally on your computer, or instead, you can use the [Azure portal steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md#uploadfile). You can't upload using Azure Cloud Shell.
+1. Upload the public key information to Azure. Once the certificate information is uploaded, Azure considers it to be a trusted root certificate. When uploading, make sure you're running PowerShell locally on your computer, or instead, you can use the [Azure portal steps](vpn-gateway-howto-point-to-site-resource-manager-portal.md#uploadfile). You can't upload using Azure Cloud Shell.
```azurepowershell Add-AzVpnClientRootCertificate -VpnClientRootCertificateName $P2SRootCertName -VirtualNetworkGatewayname "VNet1GW" -ResourceGroupName "TestRG1" -PublicCertData $CertBase64
These instructions apply to Windows clients.
* Verify that the VPN client configuration package was generated after the DNS server IP addresses were specified for the VNet. If you updated the DNS server IP addresses, generate and install a new VPN client configuration package.
-* Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you are connecting. If the IP address is within the address range of the VNet that you are connecting to, or within the address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your address space overlaps in this way, the network traffic doesn't reach Azure, it stays on the local network.
+* Use 'ipconfig' to check the IPv4 address assigned to the Ethernet adapter on the computer from which you're connecting. If the IP address is within the address range of the VNet that you're connecting to, or within the address range of your VPNClientAddressPool, this is referred to as an overlapping address space. When your address space overlaps in this way, the network traffic doesn't reach Azure; it stays on the local network.
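If you prefer PowerShell over 'ipconfig' for this check, Get-NetIPAddress returns the same information; the interface alias here is an assumption.

```azurepowershell
# A sketch; "Ethernet" is an assumed interface alias.
Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias "Ethernet"
```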
## <a name="addremovecert"></a>To add or remove a root certificate
The common practice is to use the root certificate to manage access at team or o
1. Retrieve the client certificate thumbprint. For more information, see [How to retrieve the Thumbprint of a Certificate](/dotnet/framework/wcf/feature-details/how-to-retrieve-the-thumbprint-of-a-certificate).
-1. Copy the information to a text editor and remove all spaces so that it is a continuous string. This string is declared as a variable in the next step.
+1. Copy the information to a text editor and remove all spaces so that it's a continuous string. This string is declared as a variable in the next step.
1. Declare the variables. Make sure to declare the thumbprint you retrieved in the previous step.
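For example, revoking a client certificate by thumbprint might look like this sketch; the certificate name, gateway, and resource group are assumed values.

```azurepowershell
# A sketch; the names and the thumbprint placeholder are assumed values.
$RevokedClientCert = "<thumbprint-with-no-spaces>"
Add-AzVpnClientRevokedCertificate -VpnClientRevokedCertificateName "NameofCertificate" `
  -VirtualNetworkGatewayName "VNet1GW" -ResourceGroupName "TestRG1" -Thumbprint $RevokedClientCert
```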
vpn-gateway Vpn Gateway Howto Site To Site Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md
Title: 'Connect on-premises networks to a virtual network: Site-to-Site VPN: CLI'
-description: Learn how to create an IPsec site-to-site VPN Gateway connection from your on-premises network to an Azure virtual network over the public internet using the CLI.
+ Title: 'Connect an on-premises network and a virtual network: S2S VPN: CLI'
+description: Learn how to create a site-to-site VPN Gateway IPsec connection between your on-premises network and a VNet using the Azure CLI.
Previously updated : 07/26/2021 Last updated : 04/10/2023
-# Create a virtual network with a Site-to-Site VPN connection using CLI
+# Create a virtual network with a site-to-site VPN connection using CLI
-This article shows you how to use the Azure CLI to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:<br>
+This article shows you how to use the Azure CLI to create a site-to-site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list:<br>
> [!div class="op_single_selector"] > * [Azure portal](./tutorial-site-to-site-portal.md)
This article shows you how to use the Azure CLI to create a Site-to-Site VPN gat
> * [CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md) > * [Azure portal (classic)](vpn-gateway-howto-site-to-site-classic-portal.md)
-![Site-to-Site VPN Gateway cross-premises connection diagram](./media/vpn-gateway-howto-site-to-site-resource-manager-cli/site-to-site-diagram.png)
-A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
+A site-to-site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md).
## Before you begin
Verify that you have met the following criteria before beginning configuration:
* Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md). * Verify that you have an externally facing public IPv4 address for your VPN device.
-* If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can over lap with the virtual network subnets that you want to connect to.
+* If you're unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to.
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] * This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
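To check which version you have installed locally:

```azurecli-interactive
az --version
```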
You can use the following values to create a test environment, or refer to these
``` #Example values
-VnetName = TestVNet1 
+VnetName = VNet1 
ResourceGroup = TestRG1  Location = eastus 
-AddressSpace = 10.11.0.0/16 
-SubnetName = Subnet1 
-Subnet = 10.11.0.0/24 
-GatewaySubnet = 10.11.255.0/27 
-LocalNetworkGatewayName = Site2 
-LNG Public IP = <VPN device IP address>
+AddressSpace = 10.1.0.0/16 
+SubnetName = Frontend
+Subnet = 10.1.0.0/24 
+GatewaySubnet = 10.1.255.0/27 
+LocalNetworkGatewayName = Site1
+LNG Public IP = <On-premises VPN device IP address>
LocalAddrPrefix1 = 10.0.0.0/24 LocalAddrPrefix2 = 20.0.0.0/24    GatewayName = VNet1GW 
ConnectionName = VNet1toSite1
## <a name="Login"></a>1. Connect to your subscription
-If you choose to run CLI locally, connect to your subscription. If you are using Azure Cloud Shell in the browser, you don't need to connect to your subscription. You will connect automatically in Azure Cloud Shell. However, you may want to verify that you are using the correct subscription after you connect.
+If you choose to run CLI locally, connect to your subscription. If you're using Azure Cloud Shell in the browser, you don't need to connect to your subscription. You'll connect automatically in Azure Cloud Shell. However, you may want to verify that you're using the correct subscription after you connect.
[!INCLUDE [CLI login](../../includes/vpn-gateway-cli-login-include.md)]
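One way to verify, and if needed switch, the active subscription; the subscription name is a placeholder.

```azurecli-interactive
az account show --output table
az account set --subscription "<subscription-name-or-id>"
```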
If you choose to run CLI locally, connect to your subscription. If you are using
The following example creates a resource group named 'TestRG1' in the 'eastus' location. If you already have a resource group in the region that you want to create your VNet, you can use that one instead. ```azurecli-interactive
-az group create --name TestRG1 --location eastus
+az group create --name TestRG1 --location eastus
``` ## <a name="VNet"></a>3. Create a virtual network
If you don't already have a virtual network, create one using the [az network vn
> >
-The following example creates a virtual network named 'TestVNet1' and a subnet, 'Subnet1'.
+The following example creates a virtual network named 'VNet1' and a subnet, 'Frontend'.
```azurecli-interactive
-az network vnet create --name TestVNet1 --resource-group TestRG1 --address-prefix 10.11.0.0/16 --location eastus --subnet-name Subnet1 --subnet-prefix 10.11.0.0/24
+az network vnet create --name VNet1 --resource-group TestRG1 --address-prefix 10.1.0.0/16 --location eastus --subnet-name Frontend --subnet-prefix 10.1.0.0/24
``` ## 4. <a name="gwsub"></a>Create the gateway subnet - [!INCLUDE [About gateway subnets](../../includes/vpn-gateway-about-gwsubnet-include.md)] Use the [az network vnet subnet create](/cli/azure/network/vnet/subnet) command to create the gateway subnet. ```azurecli-interactive
-az network vnet subnet create --address-prefix 10.11.255.0/27 --name GatewaySubnet --resource-group TestRG1 --vnet-name TestVNet1
+az network vnet subnet create --address-prefix 10.1.255.0/27 --name GatewaySubnet --resource-group TestRG1 --vnet-name VNet1
``` [!INCLUDE [vpn-gateway-no-nsg](../../includes/vpn-gateway-no-nsg-include.md)]
Use the [az network local-gateway create](/cli/azure/network/local-gateway) comm
az network local-gateway create --gateway-ip-address 23.99.221.164 --name Site1 --resource-group TestRG1 --local-address-prefixes 10.0.0.0/24 20.0.0.0/24 ```
-## <a name="PublicIP"></a>6. Request a Public IP address
+## <a name="PublicIP"></a>6. Request a public IP address
-A VPN gateway must have a Public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is dynamically assigned to the resource when the VPN gateway is created. VPN Gateway currently only supports *Dynamic* Public IP address allocation. You cannot request a Static Public IP address assignment. However, this does not mean that the IP address changes after it has been assigned to your VPN gateway. The only time the Public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
+A VPN gateway must have a public IP address. You first request the IP address resource, and then refer to it when creating your virtual network gateway. The IP address is assigned to the resource when the VPN gateway is created. The only time the public IP address changes is when the gateway is deleted and re-created. It doesn't change across resizing, resetting, or other internal maintenance/upgrades of your VPN gateway.
-Use the [az network public-ip create](/cli/azure/network/public-ip) command to request a Dynamic Public IP address.
+Use the [az network public-ip create](/cli/azure/network/public-ip) command to request a public IP address.
```azurecli-interactive
-az network public-ip create --name VNet1GWIP --resource-group TestRG1 --allocation-method Dynamic
+az network public-ip create --name VNet1GWIP --resource-group TestRG1 --allocation-method Static --sku Standard
``` ## <a name="CreateGateway"></a>7. Create the VPN gateway
Create the virtual network VPN gateway. Creating a gateway can often take 45 min
Use the following values:
-* The *--gateway-type* for a Site-to-Site configuration is *Vpn*. The gateway type is always specific to the configuration that you are implementing. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).
-* The *--vpn-type* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). The setting is specific to requirements of the device that you are connecting to. For more information about VPN gateway types, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md#vpntype).
+* The *--gateway-type* for a site-to-site configuration is *Vpn*. The gateway type is always specific to the configuration that you're implementing. For more information, see [Gateway types](vpn-gateway-about-vpn-gateway-settings.md#gwtype).
+* The *--vpn-type* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). The setting is specific to requirements of the device that you're connecting to. For more information about VPN gateway types, see [About VPN Gateway configuration settings](vpn-gateway-about-vpn-gateway-settings.md#vpntype).
* Select the Gateway SKU that you want to use. There are configuration limitations for certain SKUs. For more information, see [Gateway SKUs](vpn-gateway-about-vpn-gateway-settings.md#gwsku). Create the VPN gateway using the [az network vnet-gateway create](/cli/azure/network/vnet-gateway) command. If you run this command using the '--no-wait' parameter, you don't see any feedback or output. This parameter allows the gateway to create in the background. It takes 45 minutes or more to create a gateway. ```azurecli-interactive
-az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWIP --resource-group TestRG1 --vnet TestVNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait 
+az network vnet-gateway create --name VNet1GW --public-ip-address VNet1GWIP --resource-group TestRG1 --vnet VNet1 --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 --no-wait 
``` ## <a name="VPNDevice"></a>8. Configure your VPN device
-Site-to-Site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following:
+Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following:
-- A shared key. This is the same shared key that you specify when creating your Site-to-Site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.-- The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your virtual network gateway, use the [az network public-ip list](/cli/azure/network/public-ip) command. For easy reading, the output is formatted to display the list of public IPs in table format.
+* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
+* The public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the public IP address of your virtual network gateway, use the [az network public-ip list](/cli/azure/network/public-ip) command. For easy reading, the output is formatted to display the list of public IPs in table format.
```azurecli-interactive az network public-ip list --resource-group TestRG1 --output table ``` - [!INCLUDE [Configure VPN device](../../includes/vpn-gateway-configure-vpn-device-rm-include.md)] - ## <a name="CreateConnection"></a>9. Create the VPN connection
-Create the Site-to-Site VPN connection between your virtual network gateway and your on-premises VPN device. Pay particular attention to the shared key value, which must match the configured shared key value for your VPN device.
+Create the site-to-site VPN connection between your virtual network gateway and your on-premises VPN device. Pay particular attention to the shared key value, which must match the configured shared key value for your VPN device.
Create the connection using the [az network vpn-connection create](/cli/azure/network/vpn-connection) command.
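A minimal sketch, using this article's example values; replace the shared key with your own.

```azurecli-interactive
az network vpn-connection create --name VNet1toSite1 --resource-group TestRG1 \
  --vnet-gateway1 VNet1GW --local-gateway2 Site1 --location eastus \
  --shared-key abc123
```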
This section contains common commands that are helpful when working with site-to
* For information about Forced Tunneling, see [About Forced Tunneling](vpn-gateway-forced-tunneling-rm.md). * For information about Highly Available Active-Active connections, see [Highly Available cross-premises and VNet-to-VNet connectivity](vpn-gateway-highlyavailable.md). * For a list of networking Azure CLI commands, see [Azure CLI](/cli/azure/network).
-* For information about creating a site-to-site VPN connection using Azure Resource Manager template, see [Create a Site-to-Site VPN Connection](https://azure.microsoft.com/resources/templates/site-to-site-vpn-create/).
+* For information about creating a site-to-site VPN connection using an Azure Resource Manager template, see [Create a site-to-site VPN connection](https://azure.microsoft.com/resources/templates/site-to-site-vpn-create/).
* For information about creating a VNet-to-VNet VPN connection using an Azure Resource Manager template, see [Deploy HBase geo replication](https://azure.microsoft.com/resources/templates/hdinsight-hbase-replication-geo/).