Updates from: 01/18/2023 02:13:12
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Msal Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-applications.md
Previously updated : 12/19/2021 Last updated : 01/16/2023
# Public client and confidential client applications
-The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients. The two client types are distinguished by their ability to authenticate securely with the authorization server and maintain the confidentiality of their client credentials.
+The Microsoft Authentication Library (MSAL) defines two types of clients: public clients and confidential clients. The two client types are distinguished by their ability to authenticate securely with the authorization server and maintain the confidentiality of their client credentials.
-- **Confidential client applications** are apps that run on servers (web apps, web API apps, or even service/daemon apps). They're considered difficult to access, and for that reason can keep an application secret. Confidential clients can hold configuration-time secrets. Each instance of the client has a distinct configuration (including client ID and client secret). These values are difficult for end users to extract. A web app is the most common confidential client. The client ID is exposed through the web browser, but the secret is passed only in the back channel and never directly exposed.
+- **Confidential client applications** are apps that run on servers, such as web apps, web API apps, or service/daemon apps. They're considered difficult to access, and for that reason can keep an application secret. Confidential clients can hold configuration-time secrets. Each instance of the client has a distinct configuration (including client ID and client secret). These values are difficult for end users to extract. A web app is the most common confidential client. The client ID is exposed through the web browser, but the secret is passed only in the back channel and never directly exposed.
Confidential client apps: ![Web app](media/msal-client-applications/web-app.png) ![Web API](media/msal-client-applications/web-api.png) ![Daemon/service](media/msal-client-applications/daemon-service.png)
-- **Public client applications** are apps that run on devices or desktop computers or in a web browser. They're not trusted to safely keep application secrets, so they only access web APIs on behalf of the user. (They support only public client flows.) Public clients can't hold configuration-time secrets, so they don't have client secrets.
+- **Public client applications** are apps that run on devices, on desktop computers, or in a web browser. They're not trusted to safely keep application secrets, so they only access web APIs on behalf of the user. They also support only public client flows. Public clients can't hold configuration-time secrets, so they don't have client secrets.
Public client apps:
In MSAL.js, there's no separation of public and confidential client apps. MSAL.j
## Comparing the client types
-Here are some similarities and differences between public and confidential client apps:
+The following are some similarities and differences between public and confidential client apps:
-- Both kinds of app maintain a user token cache and can acquire a token silently (when the token is already in the token cache). Confidential client apps also have an app token cache for tokens that are for the app itself.
+- Both types of app maintain a user token cache and can acquire a token silently (when the token is already in the token cache). Confidential client apps also have an app token cache for tokens that are for the app itself.
- Both types of app manage user accounts and can get an account from the user token cache, get an account from its identifier, or remove an account.
-- Public client apps have four ways to acquire a token (four authentication flows). Confidential client apps have three ways to acquire a token (and one way to compute the URL of the identity provider authorize endpoint). For more information, see [Acquiring tokens](msal-acquire-cache-tokens.md).
+- Public client apps have four ways to acquire a token, through four separate authentication flows. Confidential client apps have three ways to acquire a token and one way to compute the URL of the identity provider authorize endpoint. For more information, see [Acquiring tokens](msal-acquire-cache-tokens.md).
-In MSAL, the client ID (also called the _application ID_ or _app ID_) is passed once at the construction of the application. It doesn't need to be passed again when the app acquires a token. This is true for both public and confidential client apps. Constructors of confidential client apps are also passed client credentials: the secret they share with the identity provider.
+In MSAL, the client ID, also called the _application ID_ or _app ID_, is passed once at the construction of the application. It doesn't need to be passed again when the app acquires a token. This is true for both public and confidential client apps. Constructors of confidential client apps are also passed client credentials: the secret they share with the identity provider.
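To make the distinction concrete, here's a minimal sketch using MSAL Node; the client ID, authority, and secret values are placeholders, and the variable names are illustrative:

```javascript
const msal = require("@azure/msal-node");

// Public client: constructed with only a client ID (no secret).
const publicApp = new msal.PublicClientApplication({
  auth: {
    clientId: "Enter_the_Application_Id_Here",
    authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
  },
});

// Confidential client: the constructor is also passed a client credential,
// the secret the app shares with the identity provider.
const confidentialApp = new msal.ConfidentialClientApplication({
  auth: {
    clientId: "Enter_the_Application_Id_Here",
    authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
    clientSecret: "Enter_the_Client_Secret_Here",
  },
});
```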
## Next steps
active-directory Msal Js Initializing Client Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-initializing-client-applications.md
Previously updated : 10/21/2021 Last updated : 01/16/2023 -+ # Customer intent: As an application developer, I want to learn about initializing a client application in MSAL.js to enable support for authentication and authorization in a JavaScript single-page application (SPA).
This article describes initializing the Microsoft Authentication Library for JavaScript (MSAL.js) with an instance of a user-agent application.
-The user-agent application is a form of public client application in which the client code is executed in a user-agent such as a web browser. Such clients don't store secrets because the browser context is openly accessible.
+The user-agent application is a form of public client application in which the client code is executed in a user-agent such as a web browser. Clients such as these don't store secrets because the browser context is openly accessible.
To learn more about the client application types and application configuration options, see [Public and confidential client apps in MSAL](msal-client-applications.md).
After registering your app, you'll need some or all of the following values that
## Initialize MSAL.js 2.x apps
-Initialize the MSAL.js authentication context by instantiating a [PublicClientApplication][msal-js-publicclientapplication] with a [Configuration][msal-js-configuration] object. The minimum required configuration property is the `clientID` of your application, shown as the **Application (client) ID** on the **Overview** page of the app registration in the Azure portal.
+Initialize the MSAL.js authentication context by instantiating a [PublicClientApplication][msal-js-publicclientapplication] with a [Configuration][msal-js-configuration] object. The minimum required configuration property is the `clientID` of the application, shown as **Application (client) ID** on the **Overview** page of the app registration in the Azure portal.
Here's an example configuration object and instantiation of a `PublicClientApplication`: ```javascript const msalConfig = { auth: {
- clientId: "11111111-1111-1111-111111111111",
- authority: "https://login.microsoftonline.com/common",
+ clientId: "Enter_the_Application_Id_Here",
+ authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
knownAuthorities: [],
- redirectUri: "https://localhost:3001",
- postLogoutRedirectUri: "https://localhost:3001/logout",
+ redirectUri: "https://localhost:{port}/redirect",
+ postLogoutRedirectUri: "https://localhost:{port}/redirect",
navigateToLoginRequestUrl: true, }, cache: {
msalInstance
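Pieced together, a complete minimal configuration and instantiation might look like the following sketch; the port and cache settings are assumptions, not requirements:

```javascript
import { PublicClientApplication } from "@azure/msal-browser";

const msalConfig = {
  auth: {
    clientId: "Enter_the_Application_Id_Here",
    authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
    redirectUri: "https://localhost:3000/redirect", // must match a registered redirect URI
  },
  cache: {
    cacheLocation: "sessionStorage", // "localStorage" enables SSO across tabs
  },
};

const msalInstance = new PublicClientApplication(msalConfig);
```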
### `handleRedirectPromise`
-Invoke [handleRedirectPromise][msal-js-handleredirectpromise] when your application uses the redirect flows. When using the redirect flows, `handleRedirectPromise` should be run on every page load.
+Invoke [handleRedirectPromise][msal-js-handleredirectpromise] when the application uses redirect flows. When using redirect flows, `handleRedirectPromise` should be run on every page load.
-There are three possible outcomes from the promise:
+Three outcomes are possible from the promise:
- `.then` is invoked and `tokenResponse` is truthy: The application is returning from a redirect operation that was successful.
- `.then` is invoked and `tokenResponse` is falsy (`null`): The application isn't returning from a redirect operation.
There are three possible outcomes from the promise:
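Put together, a minimal sketch of handling all three outcomes on page load (assuming `msalInstance` is a `PublicClientApplication`):

```javascript
msalInstance
  .handleRedirectPromise()
  .then((tokenResponse) => {
    if (tokenResponse !== null) {
      // Returning from a successful redirect operation.
      console.log(`Signed in as ${tokenResponse.account.username}`);
    }
    // tokenResponse is null: normal page load, not a redirect return.
  })
  .catch((error) => {
    // The redirect operation failed.
    console.error(error);
  });
```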
## Initialize MSAL.js 1.x apps
-Initialize the MSAL 1.x authentication context by instantiating a [UserAgentApplication][msal-js-useragentapplication] with a configuration object. The minimum required configuration property is the `clientID` of your application, shown as the **Application (client) ID** on the **Overview** page of the app registration in the Azure portal.
+Initialize the MSAL 1.x authentication context by instantiating a [UserAgentApplication][msal-js-useragentapplication] with a configuration object. The minimum required configuration property is the `clientID` of your application, shown as **Application (client) ID** on the **Overview** page of the app registration in the Azure portal.
For authentication methods with redirect flows ([loginRedirect][msal-js-loginredirect] and [acquireTokenRedirect][msal-js-acquiretokenredirect]) in MSAL.js 1.2.x or earlier, you must explicitly register a callback for success or error through the `handleRedirectCallback()` method. Explicitly registering the callback is required in MSAL.js 1.2.x and earlier because redirect flows don't return promises like the methods with a pop-up experience do. Registering the callback is _optional_ in MSAL.js version 1.3.x and later.
For authentication methods with redirect flows ([loginRedirect][msal-js-loginred
// Configuration object constructed const msalConfig = { auth: {
- clientId: "11111111-1111-1111-111111111111",
+ clientId: "Enter_the_Application_Id_Here",
}, };
msalInstance.handleRedirectCallback(authCallback);
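For MSAL.js 1.2.x and earlier, the registered callback receives either an error or a response. A minimal sketch (the logging is illustrative):

```javascript
// Callback invoked after loginRedirect or acquireTokenRedirect returns.
function authCallback(error, response) {
  if (error) {
    console.error(error);
  } else {
    console.log(`Acquired a token of type: ${response.tokenType}`);
  }
}

const msalInstance = new Msal.UserAgentApplication(msalConfig);
msalInstance.handleRedirectCallback(authCallback);
```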
Both MSAL.js 1.x and 2.x are designed to have a single instance and configuration of the `UserAgentApplication` or `PublicClientApplication`, respectively, to represent a single authentication context.
-Multiple instances of `UserAgentApplication` or `PublicClientApplication` aren't recommended as they cause conflicting cache entries and behavior in the browser.
+Multiple instances of `UserAgentApplication` or `PublicClientApplication` aren't recommended as they can cause conflicting cache entries and behavior in the browser.
## Next steps
active-directory Msal Js Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-js-sso.md
Previously updated : 10/25/2021 Last updated : 01/16/2023 -+ #Customer intent: As an application developer, I want to learn about enabling single sign on experiences with MSAL.js library so I can decide if this platform meets my application development needs and requirements. # Single sign-on with MSAL.js
-Single sign-on (SSO) provides a more seamless experience by reducing the number of times your users are asked for their credentials. Users enter their credentials once, and the established session can be reused by other applications on the device without further prompting.
+Single sign-on (SSO) provides a more seamless experience by reducing the number of times a user is asked for credentials. Users enter their credentials once, and the established session can be reused by other applications on the same device without further prompting.
-Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a user authenticates for the first time. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain. These two mechanisms (i.e. Azure AD session cookie and MSAL cache) are independent of each other, but works together to provide SSO behavior.
+Azure Active Directory (Azure AD) enables SSO by setting a session cookie when a user authenticates for the first time. MSAL.js also caches the ID tokens and access tokens of the user in the browser storage per application domain. The two mechanisms, Azure AD session cookie and Microsoft Authentication Library (MSAL) cache, are independent of each other but work together to provide SSO behavior.
## SSO between browser tabs for the same app
-When a user has your application open in several tabs and signs in on one of them, they can be signed into the same app open on the other tabs without being prompted. To do so, you'll need to set the *cacheLocation* in MSAL.js configuration object to `localStorage` as shown below.
+When a user has an application open in several tabs and signs in on one of them, they can be signed into the same app open on other tabs without being prompted. To do so, you'll need to set the *cacheLocation* in the MSAL.js configuration object to `localStorage`, as shown in the following example:
```javascript const config = {
In this case, application instances in different browser tabs make use of the sa
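A minimal sketch of such a configuration; the client ID is a placeholder:

```javascript
const config = {
  auth: {
    clientId: "Enter_the_Application_Id_Here",
  },
  cache: {
    cacheLocation: "localStorage", // share cached tokens across browser tabs
  },
};
```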
## SSO between different apps
-When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. In particular, MSAL.js offers the `ssoSilent` method to sign-in the user and obtain tokens without an interaction. However, if the user has multiple user accounts in a session with Azure AD, then the user is prompted to pick an account to sign in with. As such, there are two ways to achieve SSO using `ssoSilent` method.
+When a user authenticates, a session cookie is set on the Azure AD domain in the browser. MSAL.js relies on this session cookie to provide SSO for the user between different applications. In particular, MSAL.js offers the `ssoSilent` method to sign in the user and obtain tokens without interaction. However, if the user has multiple user accounts in a session with Azure AD, they're prompted to pick an account to sign in with. As such, there are two ways to achieve SSO using the `ssoSilent` method.
### With user hint
try {
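A sketch of an `ssoSilent` request with a user hint; the scopes and account name are placeholders, and the call assumes an async context:

```javascript
const silentRequest = {
  scopes: ["User.Read"],          // placeholder scopes
  loginHint: "user@contoso.com",  // an `account` or `sid` hint also works
};

// Inside an async function:
const response = await msalInstance.ssoSilent(silentRequest);
```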
### Without user hint
-You can attempt to use the `ssoSilent` method without passing any `account`, `sid` or `login_hint` as shown in the code below:
+You can attempt to use the `ssoSilent` method without passing any `account`, `sid` or `login_hint` as shown in the following code:
```javascript const request = {
try {
} ```
-However, there's a likelihood of silent sign-in errors if the application has multiple users in a single browser session or if the user has multiple accounts for that single browser session. You may see the following error in the case of multiple accounts:
+However, there's a likelihood of silent sign-in errors if the application has multiple users in a single browser session or if the user has multiple accounts for that single browser session. The following error may be displayed if multiple accounts are available:
```txt
InteractionRequiredAuthError: interaction_required: AADSTS16000: Either multiple user identities are available for the current request or selected account is not supported for the scenario.
```
-The error indicates that the server couldn't determine which account to sign into, and will require either one of the parameters above (`account`, `login_hint`, `sid`) or an interactive sign-in to choose the account.
+The error indicates that the server couldn't determine which account to sign into, and will require either one of the parameters in the previous example (`account`, `login_hint`, `sid`) or an interactive sign-in to choose the account.
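One way to handle this case is to catch the error and fall back to an interactive request, sketched here with the same placeholder request:

```javascript
import { InteractionRequiredAuthError } from "@azure/msal-browser";

try {
  const response = await msalInstance.ssoSilent(request);
} catch (error) {
  if (error instanceof InteractionRequiredAuthError) {
    // The server couldn't pick an account; let the user choose interactively.
    const response = await msalInstance.loginPopup(request);
  } else {
    throw error;
  }
}
```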
## Considerations when using `ssoSilent`
The error indicates that the server couldn't determine which account to sign int
For better performance and to help avoid issues, set the `redirectUri` to a blank page or other page that doesn't use MSAL.
-- If your application users only popup and silent methods, set the `redirectUri` on the `PublicClientApplication` configuration object.
-- If your application also uses redirect methods, set the `redirectUri` on a per-request basis.
+- If the application uses only popup and silent methods, set the `redirectUri` on the `PublicClientApplication` configuration object.
+- If the application also uses redirect methods, set the `redirectUri` on a per-request basis, as sketched below.
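For example, a per-request `redirectUri` pointing at a page that doesn't use MSAL might be sketched like this; the path is an assumption:

```javascript
const silentRequest = {
  scopes: ["User.Read"],
  loginHint: "user@contoso.com",
  redirectUri: "https://localhost:3000/blank.html", // page that doesn't use MSAL
};

const response = await msalInstance.ssoSilent(silentRequest);
```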
### Third-party cookies
To resolve the error, the user must create an interactive authentication request
## Negating SSO with prompt=login
-If you like Azure AD to prompt the user for entering their credentials despite there being an active session with the authorization server, you can use the **login** prompt parameter in requests with MSAL.js. See [MSAL.js prompt behavior](msal-js-prompt-behavior.md) for more.
+If you prefer Azure AD to prompt the user to enter their credentials despite an active session with the authorization server, you can use the **login** prompt parameter in requests with MSAL.js, as sketched below. For more information, see [MSAL.js prompt behavior](msal-js-prompt-behavior.md).
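A sketch of passing the prompt parameter in a login request; the scopes are placeholders:

```javascript
const loginRequest = {
  scopes: ["User.Read"],
  prompt: "login", // force credential entry even if a session exists
};

msalInstance.loginRedirect(loginRequest);
```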
## Sharing authentication state between ADAL.js and MSAL.js
active-directory Msal Net Acquire Token Silently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-acquire-token-silently.md
Previously updated : 07/16/2019 Last updated : 01/16/2023 -+ #Customer intent: As an application developer, I want to learn how to use the AcquireTokenSilent method so I can acquire tokens from the cache. # Get a token from the token cache using MSAL.NET
-When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should try to fetch it from the cache first.
+When you acquire an access token using the Microsoft Authentication Library for .NET (MSAL.NET), the token is cached. When the application needs a token, it should attempt to fetch it from the cache first.
You can monitor the source of the tokens by inspecting the `AuthenticationResult.AuthenticationResultMetadata.TokenSource` property
You can monitor the source of the tokens by inspecting the `AuthenticationResult
ASP.NET Core and ASP.NET Classic websites should integrate with [Microsoft.Identity.Web](microsoft-identity-web.md), a wrapper for MSAL.NET. Memory token caching or distributed token caching can be configured as described in [token cache serialization](msal-net-token-cache-serialization.md?tabs=aspnetcore).
-Web APIs on ASP.NET Core should use Microsoft.Identity.Web. Web APIs on ASP.NET classic, use MSAL directly, by calling `AcquireTokenOnBehalfOf` and should configure memory or distributed caching. For more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet). There is no need to call `AcquireTokenSilent` API. There is no API to clear the cache. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache, Redis etc.
+Web APIs on ASP.NET Core should use Microsoft.Identity.Web. Web APIs on ASP.NET classic use MSAL directly by calling `AcquireTokenOnBehalfOf` and should configure memory or distributed caching. For more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet). There's no need to call the `AcquireTokenSilent` API, and there's no API to clear the cache. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
## Web service / Daemon apps
-Applications which request tokens for an app identity, with no user involved, by calling `AcquiretTokenForClient` can either rely on MSAL's internal caching, define their own memory token caching or distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
+Applications that request tokens for an app identity, with no user involved, by calling `AcquireTokenForClient` can rely on MSAL's internal caching, define their own memory token caching, or use distributed token caching. For instructions and more information, see [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md?tabs=aspnet).
-Since no user is involved, there is no need to call `AcquireTokenSilent` API. `AcquireTokenForClient` will look in the cache on its own. There is no API to clear the cache. Cache size is proportional with the number of tenants and resources you need tokens for. Cache size can be managed by setting eviction policies on the underlying cache store, such as MemoryCache, Redis etc.
+Since no user is involved, there's no need to call `AcquireTokenSilent`; `AcquireTokenForClient` looks in the cache on its own. There's no API to clear the cache. Cache size is proportional to the number of tenants and resources you need tokens for, and can be managed by setting eviction policies on the underlying cache store, such as MemoryCache or Redis.
## Desktop, command-line, and mobile applications

Desktop, command-line, and mobile applications should first call the AcquireTokenSilent method to verify if an acceptable token is in the cache. In many cases, it's possible to acquire another token with more scopes based on a token in the cache. It's also possible to refresh a token when it's getting close to expiration (as the token cache also contains a refresh token).
-For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, as well as the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token is not applicable.
+For authentication flows that require a user interaction, MSAL caches the access, refresh, and ID tokens, and the `IAccount` object, which represents information about a single account. Learn more about [IAccount](/dotnet/api/microsoft.identity.client.iaccount?view=azure-dotnet&preserve-view=true). For application flows, such as [client credentials](msal-authentication-flows.md#client-credentials), only access tokens are cached, because the `IAccount` object and ID token require a user, and the refresh token isn't applicable.
The recommended pattern is to call the `AcquireTokenSilent` method first. If `AcquireTokenSilent` fails, then acquire a token using other methods.
-In the following example, the application first attempts to acquire a token from the token cache. If a `MsalUiRequiredException` exception is thrown, the application acquires a token interactively.
+In the following example, the application first attempts to acquire a token from the token cache. If a `MsalUiRequiredException` exception is thrown, the application acquires a token interactively.
```csharp var accounts = await app.GetAccountsAsync();
if (result != null)
### Clearing the cache
-In public client applications, clearing the cache is achieved by removing the accounts from the cache. This does not remove the session cookie which is in the browser, though.
+In public client applications, clearing the cache is achieved by removing the accounts from the cache. This doesn't remove the browser's session cookie, though.
```csharp var accounts = (await app.GetAccountsAsync()).ToList();
active-directory Entitlement Management Access Package Approval Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md
For example, if you listed Alice and Bob as the first stage approver(s), list Ca
1. In the **Forward to alternate approver(s) after how many days** box, put in the number of days the approvers have to approve or deny a request. If no approvers have approved or denied the request before the request duration ends, the request expires (times out), and the user will have to submit another request for the access package.
- Requests can only be forwarded to alternate approvers a day after the request duration reaches half-life, and the decision of the main approver(s) has to time out after at least four days. If the request time-out is less or equal than three, there isn't enough time to forward the request to alternate approver(s). In this example, the duration of the request is 14 days. So, the request duration reaches half-life at day 7. So the request can't be forwarded earlier than day 8. Also, requests can't be forwarded on the last day of the request duration. So in the example, the latest the request can be forwarded is day 13.
+ Requests can only be forwarded to alternate approvers a day after the request has been initiated. To use alternate approval, the request time-out needs to be at least 4 days.
## Enable requests
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
If the original version of group writeback is already enabled and in use in your
To configure directory settings to disable automatic writeback of newly created Microsoft 365 groups, use one of these methods:

- Azure portal: Update the `NewUnifiedGroupWritebackDefault` setting to `false`.
-- PowerShell: Use the [New-AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md) cmdlet. For example:
+- PowerShell: Use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true). For example:
```PowerShell
- $TemplateId = (Get-AzureADDirectorySettingTemplate | where {$_.DisplayName -eq "Group.Unified" }).Id
- $Template = Get-AzureADDirectorySettingTemplate | where -Property Id -Value $TemplateId -EQ
- $Setting = $Template.CreateDirectorySetting()
- $Setting["NewUnifiedGroupWritebackDefault"] = "False"
- New-AzureADDirectorySetting -DirectorySetting $Setting
+ # Import Module
+ Import-Module Microsoft.Graph.Identity.DirectoryManagement
+
+ #Connect to MgGraph with necessary scope and select the Beta API Version
+ Connect-MgGraph -Scopes Directory.ReadWrite.All
+ Select-MgProfile -Name beta
+
+ # Import "Group.Unified" template values to a hashtable
+ $Template = Get-MgDirectorySettingTemplate | Where-Object {$_.DisplayName -eq "Group.Unified"}
+ $TemplateValues = @{}
+ $Template.Values | ForEach-Object {
+ $TemplateValues.Add($_.Name, $_.DefaultValue)
+ }
+
+ # Update the value for new unified group writeback default
+ $TemplateValues["NewUnifiedGroupWritebackDefault"] = "false"
+ # Create a directory setting using the Template values hashtable including the updated value
+ $params = @{}
+ $params.Add("TemplateId", $Template.Id)
+ $params.Add("Values", @())
+ $TemplateValues.Keys | ForEach-Object {
+ $params.Values += @(@{Name = $_; Value = $TemplateValues[$_]})
+ }
+ New-MgDirectorySetting -BodyParameter $params
```
+> [!NOTE]
+> We recommend using Microsoft Graph PowerShell SDK with [Windows PowerShell 7](/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.3&preserve-view=true).
+ - Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type.

### Disable writeback for all existing Microsoft 365 groups
To disable writeback of all Microsoft 365 groups that were created before these
#Import-module
Import-module Microsoft.Graph
- #Connect to MgGraph and select the Beta API Version
+ #Connect to MgGraph with necessary scope and select the Beta API Version
Connect-MgGraph -Scopes Group.ReadWrite.All
Select-MgProfile -Name beta
To disable writeback of all Microsoft 365 groups that were created before these
{ Update-MgGroup -GroupId $group.id -WritebackConfiguration @{isEnabled=$false} }
-> We recomend using Microsoft Graph PowerShell SDK with [Windows PowerShell 7](/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.3&preserve-view=true)
-
+ ```
+
- Microsoft Graph Explorer: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta&preserve-view=true).

## Delete groups when they're disabled for writeback or soft deleted
active-directory Descartes Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/descartes-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Descartes
+description: Learn how to configure single sign-on between Azure Active Directory and Descartes.
++++++++ Last updated : 01/16/2023++++
+# Azure Active Directory SSO integration with Descartes
+
+In this article, you'll learn how to integrate Descartes with Azure Active Directory (Azure AD). The Descartes application provides logistics information services to delivery-sensitive companies around the world. As an integrated suite, it provides modules for various logistics business roles. When you integrate Descartes with Azure AD, you can:
+
+* Control in Azure AD who has access to Descartes.
+* Enable your users to be automatically signed-in to Descartes with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Descartes in a test environment. Descartes supports both **SP** and **IDP** initiated single sign-on and also supports **Just In Time** user provisioning.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Descartes, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Descartes single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Descartes application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Descartes from the Azure AD gallery
+
+Add Descartes from the Azure AD application gallery to configure single sign-on with Descartes. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Descartes** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, the user doesn't have to perform any steps, as the app is already pre-integrated with Azure.
+
+1. If you want to configure **SP** initiated SSO, then perform the following step:
+
+ In the **Relay State** textbox, type the URL:
+ `https://auth.gln.com/Welcome`
+
+1. The Descartes application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Descartes application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | telephone | user.telephonenumber |
+ | facsimiletelephonenumber | user.facsimiletelephonenumber |
+ | ou | user.department |
+ | assignedRoles | user.assignedroles |
+ | Group | user.groups |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate")
+
+1. Compose a list of the Azure AD groups you want the Descartes application to use for the role-based configuration. A list of user roles for the Descartes application modules can be found at https://www.gln.com/docs/Descartes_Application_User_Roles.pdf. To find the Azure AD group GUIDs, download the groups from the **Groups** page in the Azure portal.
+
+ ![Screenshot shows the AAD Portal Groups.](media/descartes-tutorial/copy-groups.png "Groups")
+
+You can load this CSV file in Excel. Select the groups that you want to map to the Descartes application roles by listing the ID in the first column and associating it with the Descartes application user role.
+
+## Configure Descartes SSO
+
+To configure single sign-on on the **Descartes** side, you need to email the following values to the [Descartes support team](mailto:servicedesk@descartes.com). Use **Azure AD SSO Setup request** as the subject of the email.
+
+1. The preferred identity domain suffix (often the same as the E-mail domain suffix).
+1. The App Federation Metadata URL.
+1. A list with the Azure AD Group GUIDs for users entitled to use the Descartes application.
+
+Descartes uses the information in the email to set up the SAML SSO connection properly on the application side.
+
+An example of such a request is shown below:
+
+![Screenshot shows the example of the request.](media/descartes-tutorial/example.png "Request")
+
+### Create Descartes test user
+
+In this section, a user called B.Simon is created in Descartes. Descartes supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Descartes, a new one is commonly created after authentication.
+
+The Descartes application uses domain-qualified usernames for your Azure AD integrated users. Domain-qualified usernames consist of the SAML claim subject and always end with the domain suffix. Descartes recommends selecting your company's email domain suffix, which all users in the domain have in common, as the identity domain suffix (for example, B.Simon@contoso.com).
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect you to the Descartes sign-on URL, where you can initiate the login flow. Alternatively, you can use a deep link URL into a specific module of the Descartes application; you'll be redirected to a page where you provide your domain-qualified username, which leads you to your Azure AD login dialog.
+
+* Go to the Descartes application direct access URL provided and initiate the login flow by specifying your domain-qualified username (B.Simon@contoso.com) in the application login window. This redirects the user automatically to Azure AD.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Descartes application menu for which you set up the SSO.
+
+* You can also use Microsoft My Apps to test the application in any mode. When you click the Descartes tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Descartes application for which you set up SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Descartes you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Digital Pigeon Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/digital-pigeon-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/10/2023
To integrate Azure Active Directory with Digital Pigeon, you need:
* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Digital Pigeon single sign-on (SSO) enabled subscription.
+* Digital Pigeon single sign-on (SSO) enabled subscription (that is, a Business or Enterprise plan).
+* Digital Pigeon account owner access to the above subscription.
## Add application and assign a test user
Add Digital Pigeon from the Azure AD application gallery to configure single sig
Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. > [!NOTE]
- > Please click [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to know how to configure Role in Azure AD. Role value is one of 'Digital Pigeon User', 'Digital Pigeon Power User', or 'Digital Pigeon Admin'. If role claim not supplied, default role is configurable in Digital Pigeon app by a Digital Pigeon Owner.
+ > Please click [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui) to learn how to configure App Roles in Azure AD. The Role value must be one of 'Digital Pigeon User', 'Digital Pigeon Power User', or 'Digital Pigeon Admin'. If a role claim is not supplied, the default role is configurable in the Digital Pigeon app (`Account Settings > SSO > SAML Provisioning Settings`) by a Digital Pigeon Owner, as seen below:
+ ![Screenshot shows how to configure SAML Provisioning Default Role.](media/digital-pigeon-tutorial/saml-default-role.png "SAML Default Role")
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, perform the following steps:
+1. In another browser tab, log in to Digital Pigeon as an account administrator.
- a. In the **Identifier** textbox, type a URL using the following pattern:
- `https://digitalpigeon.com/saml2/service-provider-metadata/<CustomerID>`
+1. Navigate to **Account Settings > SSO** and copy the **SP Entity ID** and **SP ACS URL** values.
- b. In the **Reply URL** textbox, type a URL using the following pattern:
- `https://digitalpigeon.com/login/saml2/sso/<CustomerID>`
+ ![Screenshot shows Digital Pigeon SAML Service Provider Settings.](media/digital-pigeon-tutorial/saml-service-provider-settings.png "SAML Service Provider Settings")
+
+1. Now in Azure AD, in the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, paste the value from _Digital Pigeon > Account Settings > SSO > **SP Entity ID**_.
+ It should match the following pattern: `https://digitalpigeon.com/saml2/service-provider-metadata/<CustomerID>`
+
+ b. In the **Reply URL** textbox, paste the value from _Digital Pigeon > Account Settings > SSO > **SP ACS URL**_.
+ It should match the following pattern: `https://digitalpigeon.com/login/saml2/sso/<CustomerID>`
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign on URL** textbox, type the URL: `https://digitalpigeon.com/login`
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Digital Pigeon Client support team](mailto:help@digitalpigeon.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
- 1. Digital Pigeon application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes. ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Attributes")
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
-1. On the **Set up Digital Pigeon** section, copy the appropriate URL(s) based on your requirement.
+1. In Digital Pigeon, paste the content of downloaded **Federation Metadata XML** file into the **IDP Metadata XML** text field.
+
+ ![Screenshot shows IDP Metadata XML.](media/digital-pigeon-tutorial/idp-metadata-xml.png "IDP Metadata XML")
+
+1. In Azure AD, on the **Set up Digital Pigeon** section, copy the Azure AD Identifier URL.
![Screenshot shows to copy configuration appropriate U R L.](common/copy-configuration-urls.png "Metadata")
-## Configure Digital Pigeon SSO
+1. In Digital Pigeon, paste this URL into the **IDP Entity ID** text field.
+
+ ![Screenshot shows IDP Entity ID.](media/digital-pigeon-tutorial/idp-entity-id.png "IDP Entity ID")
-To configure single sign-on on **Digital Pigeon** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Digital Pigeon support team](mailto:help@digitalpigeon.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Click the **Save** button to activate Digital Pigeon SSO.
### Create Digital Pigeon test user
You can also use Microsoft My Apps to test the application in any mode. When you
## Additional resources
+* Should you run into any issues or require additional support, please contact the [Digital Pigeon support team](mailto:help@digitalpigeon.com)
+* For an alternative step-by-step guide, please refer to the Digital Pigeon KB article: [Azure AD SSO Configuration](https://digitalpigeon.zendesk.com/hc/en-us/articles/5403612403855-Azure-AD-SSO-Configuration)
* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) * [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
active-directory Kintone Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/kintone-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/13/2023 # Tutorial: Azure AD SSO integration with Kintone
Follow these steps to enable Azure AD SSO in the Azure portal.
4. On the **Basic SAML Configuration** section, perform the following steps:
- a. In the **Identifier (Entity ID)** text box, type a URL using one of the following patterns:
-
- | **Identifier** |
- ||
- | `https://<companyname>.cybozu.com` |
- | `https://<companyname>.kintone.com` |
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `https://<companyname>.kintone.com`
b. In the **Sign on URL** text box, type a URL using the following pattern: `https://<companyname>.kintone.com`
active-directory Mysdworxcom Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mysdworxcom-tutorial.md
Previously updated : 01/04/2023 Last updated : 01/16/2023 # Azure Active Directory SSO integration with my.sdworx.com
-In this article, you'll learn how to integrate my.sdworx.com with Azure Active Directory (Azure AD). my.sdworx.com is a SD Worx portal. When you integrate my.sdworx.com with Azure AD, you can:
+In this article, you'll learn how to integrate my.sdworx.com with Azure Active Directory (Azure AD). my.sdworx.com is an SD Worx portal. When you integrate my.sdworx.com with Azure AD, you can:
* Control in Azure AD who has access to my.sdworx.com. * Enable your users to be automatically signed-in to my.sdworx.com with their Azure AD accounts.
Complete the following steps to enable Azure AD single sign-on in the Azure port
![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
+1. On the **Basic SAML Configuration** section, the **Relay State** should be set to `https://auth.sdworx.com/idhub/tb/wf_startapp?AppCode=MWAM`.
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, select the copy button to copy the **App Federation Metadata Url** and save it on your computer.
Complete the following steps to enable Azure AD single sign-on in the Azure port
## Configure my.sdworx.com SSO
-To configure single sign-on on **my.sdworx.com** side, you need to send the **App Federation Metadata Url** to [my.sdworx.com support team](mailto:support@sdworx.com). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **my.sdworx.com** side, you need to send the **App Federation Metadata Url** to [my.sdworx.com support team](mailto:prod_cloud&busoper_middleware&hostsol@sdworx.com). They set this setting to have the SAML SSO connection set properly on both sides.
### Create my.sdworx.com test user
-In this section, you create a user called Britta Simon at my.sdworx.com. Work with [my.sdworx.com support team](mailto:support@sdworx.com) to add the users in the my.sdworx.com platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon at my.sdworx.com. Work with [my.sdworx.com support team](mailto:prod_cloud&busoper_middleware&hostsol@sdworx.com) to add the users in the my.sdworx.com platform. Users must be created and activated before you use single sign-on.
## Test SSO
active-directory How To Use Quickstart Idtoken https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-use-quickstart-idtoken.md
The claims mapping in the following example requires that you configure the toke
{ "clientId": "8d5b446e-22b2-4e01-bb2e-9070f6b20c90", "configuration": "https://didplayground.b2clogin.com/didplayground.onmicrosoft.com/B2C_1_sisu/v2.0/.well-known/openid-configuration",
- "redirectUri": "vcclient://openid",
+ "redirectUri": "vcclient://openid/",
"scope": "openid profile email", "mapping": [ {
The clientId attribute is the application ID of a registered application in the
If you want only accounts in your tenant to be able to sign in, keep the **Accounts in this directory only** checkbox selected.
-1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)**, and then enter **vcclient://openid**.
+1. In **Redirect URI (optional)**, select **Public client/native (mobile & desktop)**, and then enter **vcclient://openid/**.
If you want to be able to test what claims are in the Azure Active Directory ID token, do the following:
The easiest way to find this information for a custom credential is to go to you
## Next steps
-See the [Rules and display definitions reference](rules-and-display-definitions-model.md).
+See the [Rules and display definitions reference](rules-and-display-definitions-model.md).
aks Api Server Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-vnet-integration.md
Title: API Server VNet Integration in Azure Kubernetes Service (AKS) description: Learn how to create an Azure Kubernetes Service (AKS) cluster with API Server VNet Integration--+++++ Last updated 09/09/2022
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview) description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.--+++++ Last updated 12/12/2022
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Title: Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS) (Preview) description: Learn how to create an Azure Kubernetes Service (AKS) cluster with Azure CNI Powered by Cilium.-++++ Last updated 10/24/2022
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
Use the DiskEncryptionSet and resource groups you created on the prior steps, an
```azurecli-interactive # Retrieve the DiskEncryptionSet value and set a variable
-$desIdentity=az disk-encryption-set show -n myDiskEncryptionSetName -g myResourceGroup --query "[identity.principalId]" -o tsv
+desIdentity=$(az disk-encryption-set show -n myDiskEncryptionSetName -g myResourceGroup --query "[identity.principalId]" -o tsv)
# Update security policy settings
az keyvault set-policy -n myKeyVaultName -g myResourceGroup --object-id $desIdentity --key-permissions wrapkey unwrapkey get
Create a **new resource group** and AKS cluster, then use your key to encrypt th
```azurecli-interactive # Retrieve the DiskEncryptionSet value and set a variable
-$diskEncryptionSetId=az disk-encryption-set show -n mydiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv
+diskEncryptionSetId=$(az disk-encryption-set show -n mydiskEncryptionSetName -g myResourceGroup --query "[id]" -o tsv)
# Create a resource group for the AKS cluster
az group create -n myResourceGroup -l myAzureRegionName
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
following command:
pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s ```
-5. Create a *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
+5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
```yaml apiVersion: v1
aks Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md
Last updated 03/09/2021
# Cluster operator and developer best practices to build and manage applications on Azure Kubernetes Service (AKS)
-Building and running applications successfully in Azure Kubernetes Service (AKS) require understanding and implementation of some key considerations, including:
+Building and running applications successfully in Azure Kubernetes Service (AKS) requires understanding and implementation of some key concepts, including:
* Multi-tenancy and scheduler features. * Cluster and pod security. * Business continuity and disaster recovery.
-The AKS product group, engineering teams, and field teams (including global black belts [GBBs]) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers understand the considerations above and implement the appropriate features.
+The AKS product group, engineering teams, and field teams (including global black belts [GBBs]) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers better understand the concepts above and implement the appropriate features.
## Cluster operator best practices
-As a cluster operator, work together with application owners and developers to understand their needs. You can then use the following best practices to configure your AKS clusters as needed.
+If you're a cluster operator, work with application owners and developers to understand their needs. You can then use the following best practices to configure your AKS clusters to fit those needs.
-**Multi-tenancy**
+### Multi-tenancy
* [Best practices for cluster isolation](operator-best-practices-cluster-isolation.md) * Includes multi-tenancy core components and logical isolation with namespaces.
As a cluster operator, work together with application owners and developers to u
* [Best practices for authentication and authorization](operator-best-practices-identity.md) * Includes integration with Azure Active Directory, using Kubernetes role-based access control (Kubernetes RBAC), using Azure RBAC, and pod identities.
-**Security**
+### Security
* [Best practices for cluster security and upgrades](operator-best-practices-cluster-security.md) * Includes securing access to the API server, limiting container access, and managing upgrades and node reboots.
As a cluster operator, work together with application owners and developers to u
* [Best practices for pod security](developer-best-practices-pod-security.md) * Includes securing access to resources, limiting credential exposure, and using pod identities and digital key vaults.
-**Network and storage**
+### Network and storage
* [Best practices for network connectivity](operator-best-practices-network.md) * Includes different network models, using ingress and web application firewalls (WAF), and securing node SSH access. * [Best practices for storage and backups](operator-best-practices-storage.md) * Includes choosing the appropriate storage type and node size, dynamically provisioning volumes, and data backups.
-**Running enterprise-ready workloads**
+### Running enterprise-ready workloads
* [Best practices for business continuity and disaster recovery](operator-best-practices-multi-region.md) * Includes using region pairs, multiple clusters with Azure Traffic Manager, and geo-replication of container images. ## Developer best practices
-As a developer or application owner, you can simplify your development experience and define require application performance needs.
+If you're a developer or application owner, you can simplify your development experience and define required application performance features.
* [Best practices for application developers to manage resources](developer-best-practices-resource-management.md) * Includes defining pod resource requests and limits, configuring development tools, and checking for application issues. * [Best practices for pod security](developer-best-practices-pod-security.md) * Includes securing access to resources, limiting credential exposure, and using pod identities and digital key vaults.
-## Kubernetes / AKS concepts
+## Kubernetes and AKS concepts
-To help understand some of the features and components of these best practices, you can also see the following conceptual articles for clusters in Azure Kubernetes Service (AKS):
+The following conceptual articles cover some of the fundamental features and components for clusters in AKS:
* [Kubernetes core concepts](concepts-clusters-workloads.md)
* [Access and identity](concepts-identity.md)
aks Cluster Container Registry Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-container-registry-integration.md
You can set up the AKS to ACR integration using the Azure CLI or Azure PowerShell.
> [!IMPORTANT]
> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there might be up to a one-hour delay before the RBAC group takes effect. We recommend you use the [Bring your own kubelet identity][byo-kubelet-identity] as a workaround. You can pre-create a user-assigned identity, add it to the Azure AD group, then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is added to the Azure AD group before a token is generated by kubelet, which avoids the latency issue.
-> [!IMPORTANT]
-> There is a latency issue with Azure Active Directory groups when attaching ACR. If the AcrPull role is granted to an Azure AD group and the kubelet identity is added to the group to complete the RBAC configuration, there might be up to a one-hour delay before the RBAC group update takes effect. We recommended you use the [Bring your own kubelet identity][byo-kubelet-identity] in the meantime. You can pre-create a user-assigned identity, add it to the Azure AD group, and then use the identity as the kubelet identity to create an AKS cluster. This ensures the identity is first added to the Azure AD group and then a token is generated by kubelet, which works around the latency issue.
> [!NOTE]
> This article covers automatic authentication between AKS and ACR. If you need to pull an image from a private external registry, use an [image pull secret][image-pull-secret].
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
Title: Configure Azure CNI networking in Azure Kubernetes Service (AKS)
description: Learn how to configure Azure CNI (advanced) networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
Last updated 05/16/2022
aks Configure Kube Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kube-proxy.md
Title: Configure kube-proxy (iptables/IPVS) (preview)
description: Learn how to configure kube-proxy to utilize different load balancing configurations with Azure Kubernetes Service (AKS).
Last updated 10/25/2022
#Customer intent: As a cluster operator, I want to utilize a different kube-proxy configuration.
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Title: Configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure dual-stack kubenet networking in Azure Kubernetes Service (AKS)
Last updated 12/15/2021
aks Configure Kubenet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md
Title: Configure kubenet networking in Azure Kubernetes Service (AKS)
description: Learn how to configure kubenet (basic) network in Azure Kubernetes Service (AKS) to deploy an AKS cluster into an existing virtual network and subnet.
Last updated 10/26/2022
# Use kubenet networking with your own IP address ranges in Azure Kubernetes Service (AKS)
aks Coredns Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md
Title: Customize CoreDNS for Azure Kubernetes Service (AKS)
description: Learn how to customize CoreDNS to add subdomains or extend custom DNS endpoints using Azure Kubernetes Service (AKS)
Last updated 03/15/2019
#Customer intent: As a cluster operator or developer, I want to learn how to customize the CoreDNS configuration to add sub domains or extend to custom DNS endpoints within my network
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Title: Customize cluster egress with outbound types in Azure Kubernetes Service (AKS)
-description: Learn how to configure outbound types in Azure Kubernetes Service (AKS)
+description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS)
Last updated 06/29/2020
#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
Title: Customize user-defined routes (UDR) in Azure Kubernetes Service (AKS)
-description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS)
+ Title: Customize cluster egress with a user-defined routing table
+description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS) with a routing table.
Last updated 06/29/2020
#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
aks Http Application Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-application-routing.md
Title: HTTP application routing add-on on Azure Kubernetes Service (AKS)
description: Use the HTTP application routing add-on to access applications deployed on Azure Kubernetes Service (AKS).
Last updated 04/23/2021
# HTTP application routing
aks Http Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/http-proxy.md
Title: Configuring Azure Kubernetes Service (AKS) nodes with an HTTP proxy
description: Use the HTTP proxy configuration feature for Azure Kubernetes Service (AKS) nodes.
Previously updated : 01/09/2023
Last updated : 05/23/2022
# HTTP proxy support in Azure Kubernetes Service
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
Title: Create an ingress controller in Azure Kubernetes Service (AKS)
description: Learn how to create and configure an ingress controller in an Azure Kubernetes Service (AKS) cluster.
Last updated 05/17/2022
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Title: Use TLS with an ingress controller on Azure Kubernetes Service (AKS)
description: Learn how to install and configure an ingress controller that uses TLS in an Azure Kubernetes Service (AKS) cluster.
Last updated 05/18/2022
#Customer intent: As a cluster operator or developer, I want to use TLS with an ingress controller to handle the flow of incoming traffic and secure my apps using my own certificates or automatically generated certificates.
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
Title: Create an internal load balancer
description: Learn how to create and use an internal load balancer to expose your services with Azure Kubernetes Service (AKS).
Previously updated : 12/12/2022
Last updated : 03/04/2019
#Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an internal Azure load balancer for enhanced security and without an external endpoint.
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
Title: Restrict egress traffic in Azure Kubernetes Service (AKS)
description: Learn what ports and addresses are required to control egress traffic in Azure Kubernetes Service (AKS)
Last updated 07/26/2022
#Customer intent: As a cluster operator, I want to restrict egress traffic for nodes to only access defined ports and addresses and improve cluster security.
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
Title: Use a public load balancer
description: Learn how to use a public load balancer with a Standard SKU to expose your services with Azure Kubernetes Service (AKS).
Previously updated : 12/19/2022
Last updated : 11/14/2020
#Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Title: Managed NAT Gateway
description: Learn how to create an AKS cluster with managed NAT integration
Last updated 10/26/2021
# Managed NAT Gateway
aks Node Pool Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-pool-snapshot.md
Title: Snapshot Azure Kubernetes Service (AKS) node pools
description: Learn how to snapshot AKS cluster node pools and create clusters and node pools from a snapshot.
Last updated 09/11/2020
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on.
> [!IMPORTANT]
> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
This article will discuss how to download the OSM client library to be used to o
> [!IMPORTANT]
> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an AKS cluster.
> [!IMPORTANT]
> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure
> [!IMPORTANT]
> Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM:
-> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.2* of OSM.
+> - If your cluster is running Kubernetes version 1.24.0 or greater, the OSM add-on installs version *1.2.3* of OSM.
> - If your cluster is running a version of Kubernetes between 1.23.5 and 1.24.0, the OSM add-on installs version *1.1.3* of OSM.
> - If your cluster is running a version of Kubernetes below 1.23.5, the OSM add-on installs version *1.0.0* of OSM.
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
Title: Use static IP with load balancer
description: Learn how to create and use a static IP address with the Azure Kubernetes Service (AKS) load balancer.
Last updated 11/14/2020
aks Use Byo Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-byo-cni.md
Title: Bring your own Container Network Interface (CNI) plugin
description: Learn how to utilize Azure Kubernetes Service with your own Container Network Interface (CNI) plugin
Last updated 8/12/2022
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Service (AKS)
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS)
Previously updated : 01/09/2023
Last updated : 01/17/2023
# Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
After changing the key ID (including key name and key version), you can use [az
> [!WARNING]
> Remember to update all secrets after key rotation. Otherwise, the secrets will be inaccessible if the old keys don't exist or aren't working.
+>
+> Once you rotate the key, the old key (key1) is still cached and shouldn't be deleted. If you want to delete the old key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without impacting the existing cluster.
```azurecli-interactive
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $NEW_KEY_ID
```
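To retire the old key (key1) immediately, per the note above, you would run the same update twice in succession with two newly created keys. A sketch only, where `$KEY2_ID` and `$KEY3_ID` are hypothetical placeholders for the new key IDs:

```azurecli-interactive
# First rotation: key2 replaces key1 and both are cached
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $KEY2_ID

# Second rotation: key3 replaces key2; now key2 and key3 are cached and key1 can be deleted
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $KEY3_ID
```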
aks Use Node Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md
az provider register --namespace Microsoft.ContainerService
Triggering host port auto assignment is done by deploying a workload without any host ports and applying the `kubernetes.azure.com/assign-hostports-for-containerports` annotation with the list of ports that need host port assignments. The value of the annotation should be specified as a comma-separated list of entries like `port/protocol`, where the port is an individual port number that is defined in the Pod spec and the protocol is `tcp` or `udp`.
-Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned.
+Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports are also added to environment variables inside the pod so that the application can determine which ports were assigned. The environment variable name follows the format `<deployment name>_PORT_<port number>_<protocol>_HOSTPORT`; for example, `mydeployment_PORT_8080_TCP_HOSTPORT: 41932`.
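If you'd rather patch a running workload than edit its manifest, the same annotation can be applied to a deployment's pod template. A minimal sketch, assuming a hypothetical existing deployment named `mydeployment` that defines container ports 8080 and 8443:

```bash
# Add the host-port auto-assignment annotation to the pod template of an
# existing deployment; ports are listed as comma-separated <port>/<protocol> entries.
kubectl patch deployment mydeployment --type merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"kubernetes.azure.com/assign-hostports-for-containerports":"8080/tcp,8443/tcp"}}}}}'
```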
Here is an example `echoserver` deployment, showing the mapping of host ports for ports 8080 and 8443:
aks Web App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/web-app-routing.md
Title: Web Application Routing add-on on Azure Kubernetes Service (AKS) (Preview)
description: Use the Web Application Routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS).
Last updated 05/13/2021
# Web Application Routing (Preview)
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
-# Use an Azure AD workload identity (preview) on Azure Kubernetes Service (AKS)
+# Use Azure AD workload identity (preview) with Azure Kubernetes Service (AKS)
Today with Azure Kubernetes Service (AKS), you can assign [managed identities at the pod-level][use-azure-ad-pod-identity], which has been a preview feature. This pod-managed identity allows the hosted workload or application access to resources through Azure Active Directory (Azure AD). For example, a workload stores files in Azure Storage, and when it needs to access those files, the pod authenticates itself against the resource as an Azure managed identity. This authentication method has been replaced with [Azure Active Directory (Azure AD) workload identities][azure-ad-workload-identity] (preview), which integrate with the Kubernetes native capabilities to federate with any external identity providers. This approach is simpler to use and deploy, and overcomes several limitations in Azure AD pod-managed identity:
app-service App Service Plan Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-plan-manage.md
You can create an empty App Service plan, or you can create a plan as part of app deployment.
You can move an app to another App Service plan, as long as the source plan and the target plan are in the _same resource group and geographical region_.

> [!NOTE]
-> Azure deploys each new App Service plan into a deployment unit, internally called a webspace. Each region can have many webspaces, but your app can only move between plans that are created in the same webspace. An App Service Environment is an isolated webspace, so apps can be moved between plans in the same App Service Environment, but not between plans in different App Service Environments.
+> Azure deploys each new App Service plan into a deployment unit, internally called a webspace. Each region can have many webspaces, but your app can only move between plans that are created in the same webspace. An App Service Environment can have multiple webspaces, but your app can only move between plans that are created in the same webspace.
>
> You can't specify the webspace you want when creating a plan, but it's possible to ensure that a plan is created in the same webspace as an existing plan. In brief, all plans created with the same resource group, region combination and operating system are deployed into the same webspace. For example, if you created a plan in resource group A and region B, then any plan you subsequently create in resource group A and region B is deployed into the same webspace. Note that plans can't move webspaces after they're created, so you can't move a plan into "the same webspace" as another plan by moving it to another resource group.
>
automation Automation Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-alert-metric.md
Alerts allow you to define a condition to monitor for and an action to take when that condition is met.
2. The **Configure signal logic** page is where you define the logic that triggers the alert. Under the historical graph you are presented with two dimensions, **Runbook Name** and **Status**. Dimensions are different properties for a metric that can be used to filter results. For **Runbook Name**, select the runbook you want to alert on or leave blank to alert on all runbooks. For **Status**, select a status from the drop-down you want to monitor for. The runbook name and status values that appear in the dropdown are only for jobs that have run in the past week.
- If you want to alert on a status or runbook that isn't shown in the dropdown, click the **Add custom value** option next to the dimension. This action opens a dialog that allows you to specify a custom value, which hasn't emitted for that dimension recently. If you enter a value that doesn't exist for a property your alert won't be triggered.
+ If you want to alert on a status or runbook that isn't shown in the dropdown, click the **Add custom value** option next to the dimension. This action opens a dialog that allows you to specify a custom value, which hasn't been emitted for that dimension recently. If you enter a value that doesn't exist for a property, your alert won't be triggered. For a list of job statuses, see [Job statuses](automation-runbook-execution.md#job-statuses).
> [!NOTE]
> If you don't specify a name for the **Runbook Name** dimension, you'll receive an alert if any runbook, including hidden system runbooks, meets the status criteria.
azure-monitor Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices.md
- Title: Azure Monitor best practices
-description: Guidance and recommendations for deploying Azure Monitor.
- Previously updated : 10/18/2021
-# Azure Monitor best practices
-This scenario provides recommended guidance for configuring features of Azure Monitor to monitor the performance and availability of your cloud and hybrid applications and resources.
-
-Azure Monitor is available the moment you create an Azure subscription. The Activity log immediately starts collecting events about activity in the subscription, and platform metrics are collected for any Azure resources you created. Features such as metrics explorer are available to analyze data. Other features require configuration. This scenario identifies the configuration steps required to take advantage of all Azure Monitor features. It also makes recommendations for which features you should leverage and how to determine configuration options based on your particular requirements.
-
-Enabling Azure Monitor to monitor all of your Azure resources is a combination of configuring Azure Monitor components and configuring Azure resources to generate monitoring data for Azure Monitor to collect. The goal of a complete implementation is to collect all useful data from all of your cloud resources and applications and enable the entire set of Azure Monitor features based on that data.
--
-> [!IMPORTANT]
-> If you're new to Azure Monitor or are focused on simply monitoring a single Azure resource, then you should start with the tutorial [Monitor Azure resources with Azure Monitor](essentials/monitor-azure-resource.md). The tutorial provides general concepts for Azure Monitor and guidance for monitoring a single Azure resource. This scenario provides recommendations for preparing your environment to leverage all features of Azure Monitor to monitoring your entire set of applications and resources together at scale.
-
-## Scope of the scenario
-The goal of this scenario is to walk you through the basic steps of a complete Azure Monitor implementation to ensure that you're taking full advantage of its features and maximizing the observability of your cloud and hybrid applications and resources. It focuses on configuration requirements and deployment options as opposed to actual configuration details. Links are provided to other content that provide the details for actually performing required configuration.
-
-## Scenario articles
-This article introduces the scenario. If you want to jump right into a specific area, see one of the other articles that are part of this scenario described in the following table.
-
-| Article | Description |
-|:|:|
-| [Planning](best-practices-plan.md) | Planning that you should consider before starting your implementation. Includes design decisions and information about your organization and requirements that you should gather. |
-| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from your Azure and hybrid applications and resources. |
-| [Analysis and visualizations](best-practices-analysis.md) | Standard features and additional visualizations that you can create to analyze collected monitoring data. |
-| [Alerts and automated responses](best-practices-alerts.md) | Configure notifications and processes that are automatically triggered when an alert is created. |
-| [Best practices and cost management](best-practices-cost.md) | Reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
--
-## Next steps
--- [Planning your monitoring strategy and configuration](best-practices-plan.md)
azure-monitor Change Analysis Custom Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-custom-filters.md
- Title: Navigate to a change using custom filters in Change Analysis
-description: Learn how to navigate to a change in your service using custom filters in Azure Monitor's Change Analysis.
---
-ms.contributor: cawa
- Previously updated : 05/09/2022--
-ms.reviwer: cawa
--
-# Navigate to a change using custom filters in Change Analysis
-
-Browsing through a long list of changes in the entire subscription is time consuming. With Change Analysis custom filters and search capability, you can efficiently navigate to changes relevant to issues for troubleshooting.
-
-## Custom filters and search bar
--
-### Filters
-
-| Filter | Description |
-| | -- |
-| Subscription | This filter is in-sync with the Azure portal subscription selector. It supports multiple-subscription selection. |
-| Time range | Specifies how far back the UI display changes, up to 14 days. By default, it's set to the past 24 hours. |
-| Resource group | Select the resource group to scope the changes. By default, all resource groups are selected. |
-| Change level | Controls which levels of changes to display. Levels include: important, normal, and noisy. <ul><li>Important: related to availability and security</li><li>Noisy: Read-only properties that are unlikely to cause any issues</li></ul> By default, important and normal levels are checked. |
-| Resource | Select **Add filter** to use this filter. </br> Filter the changes to specific resources. Helpful if you already know which resources to look at for changes. [If the filter is only returning 1,000 resources, see the corresponding solution in troubleshooting guide](./change-analysis-troubleshoot.md#cant-filter-to-your-resource-to-view-changes). |
-| Resource type | Select **Add filter** to use this filter. </br> Filter the changes to specific resource types. |
-
-### Search bar
-
-The search bar filters the changes according to the input keywords. Search bar results apply only to the changes loaded by the page already and don't pull in results from the server side.
--
-## Next steps
-[Troubleshoot Change Analysis](./change-analysis-troubleshoot.md).
azure-monitor Change Analysis Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-query.md
- Title: Pin and share a Change Analysis query to the Azure dashboard
-description: Learn how to pin an Azure Monitor Change Analysis query to the Azure dashboard and share with your team.
---
-ms.contributor: cawa
Previously updated : 05/12/2022 --
-ms.reviwer: cawa
--
-# Pin and share a Change Analysis query to the Azure dashboard
-
-Let's say you want to curate a change view on specific resources, like all Virtual Machine changes in your subscription, and include it in a report sent periodically. You can pin the view to an Azure dashboard for monitoring or sharing scenarios. If you'd like to share a specific change with your team members, you can use the share feature in the Change Details page.
-
-## Pin to the Azure dashboard
-
-Once you have applied filters to the Change Analysis homepage:
-
-1. Select **Pin current filters** from the top menu.
-1. Enter a name for the pin.
-1. Click **OK** to proceed.
-
- :::image type="content" source="./media/change-analysis/click-pin-menu.png" alt-text="Screenshot of selecting Pin current filters button in Change Analysis":::
-
-A side pane will open to configure the dashboard where you'll place your pin. You can select one of two dashboard types:
-
-| Dashboard type | Description |
-| -- | -- |
-| Private | Only you can access a private dashboard. Choose this option if you're creating the pin for your own easy access to the changes. |
-| Shared | A shared dashboard supports role-based access control for view/read access. Shared dashboards are created as a resource in your subscription with a region and resource group to host it. Choose this option if you're creating the pin to share with your team. |
-
-### Select an existing dashboard
-
-If you already have a dashboard to place the pin:
-
-1. Select the **Existing** tab.
-1. Select either **Private** or **Shared**.
-1. Select the dashboard you'd like to use.
-1. If you've selected **Shared**, select the subscription in which you'd like to place the dashboard.
-1. Select **Pin**.
-
- :::image type="content" source="./media/change-analysis/existing-dashboard-small.png" alt-text="Screenshot of selecting an existing dashboard to pin your changes to. ":::
-
-### Create a new dashboard
-
-You can create a new dashboard for this pin.
-
-1. Select the **Create new** tab.
-1. Select either **Private** or **Shared**.
-1. Enter the name of the new dashboard.
-1. If you're creating a shared dashboard, enter the resource group and region information.
-1. Click **Create and pin**.
-
- :::image type="content" source="./media/change-analysis/create-pin-dashboard-small.png" alt-text="Screenshot of creating a new dashboard to pin your changes to.":::
-
-Once the dashboard and pin are created, navigate to the Azure dashboard to view them.
-
-1. From the Azure portal home menu, select **Dashboard**. Use the **Manage Sharing** button in the top menu to handle access or "unshare". Click on the pin to navigate to the curated view of changes.
-
- :::image type="content" source="./media/change-analysis/azure-dashboard.png" alt-text="Screenshot of selecting the Dashboard in the Azure portal home menu.":::
-
- :::image type="content" source="./media/change-analysis/view-share-dashboard.png" alt-text="Screenshot of the pin in the dashboard.":::
-
-## Share a single change with your team
-
-In the Change Analysis homepage, select a line of change to view details on the change.
-
-1. On the Changed properties page, select **Share** from the top menu.
-1. On the Share Change Details pane, copy the deep link of the page and share with your team in messages, emails, reports, or whichever communication channel your team prefers.
-
- :::image type="content" source="./media/change-analysis/share-single-change.png" alt-text="Screenshot of selecting the share button on the dashboard and copying link.":::
---
-## Next steps
-
-Learn more about [Change Analysis](change-analysis.md).
azure-monitor Change Analysis Track Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-track-outages.md
+
+ Title: Track a web app outage using Change Analysis
+description: Describes how to identify the root cause of a web app outage using Azure Monitor Change Analysis.
+++
+ms.contributor: cawa
+ Last updated : 01/11/2023
+# Track a web app outage using Change Analysis
+
+When issues happen, one of the first things to check is what changed in your application, configuration, and resources so you can triage and root-cause the issue. Change Analysis provides a centralized view of the changes in your subscriptions for up to the past 14 days, providing a history of changes for troubleshooting issues.
+
+To track an outage, we will:
+
+> [!div class="checklist"]
+> - Clone, create, and deploy a [sample web application](https://github.com/Azure-Samples/changeanalysis-webapp-storage-sample) with a storage account.
+> - Enable Change Analysis to track changes for Azure resources and for Azure Web App configurations
+> - Troubleshoot a Web App issue using Change Analysis
+
+## Prerequisites
+
+- Install [.NET 7.0 or above](https://dotnet.microsoft.com/download).
+- Install [the Azure CLI](/cli/azure/install-azure-cli).
+
+## Set up the test application
+
+### Clone
+
+1. In your preferred terminal, sign in to your Azure subscription.
+
+```bash
+az login
+az account set --subscription {azure-subscription-id}
+```
+
+1. Clone the [sample web application with storage to test Change Analysis](https://github.com/Azure-Samples/changeanalysis-webapp-storage-sample).
+
+```bash
+git clone https://github.com/Azure-Samples/changeanalysis-webapp-storage-sample.git
+```
+
+1. Change the working directory to the project folder.
+
+```bash
+cd changeanalysis-webapp-storage-sample
+```
+
+### Run the PowerShell script
+
+1. Open `Publish-WebApp.ps1`.
+
+1. Edit the `SUBSCRIPTION_ID` and `LOCATION` environment variables.
+
+ | Environment variable | Description |
+ | -- | -- |
+ | `SUBSCRIPTION_ID` | Your Azure subscription ID. |
+ | `LOCATION` | The location of the resource group where you'd like to deploy the sample application. |
+
+1. Run the script from the `./changeanalysis-webapp-storage-sample` directory.
+
+```powershell
+./Publish-WebApp.ps1
+```
+
+## Enable Change Analysis
+
+In the Azure portal, [navigate to the Change Analysis standalone UI](./change-analysis-visualizations.md). Page loading may take a few minutes as the `Microsoft.ChangeAnalysis` resource provider is registered.
++
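+If you don't want to wait for the portal to register the resource provider, you can register it yourself with the Azure CLI first. A minimal sketch, assuming you have permission to register providers on the subscription:
+
+```azurecli
+az provider register --namespace Microsoft.ChangeAnalysis
+```
+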
+Once the Change Analysis page loads, you can see resource changes in your subscriptions. To view detailed web app in-guest change data:
+
+- Select **Enable now** from the banner, or
+- Select **Configure** from the top menu.
+
+In the web app in-guest enablement pane, select the web app you'd like to enable:
++
+Now Change Analysis is fully enabled to track both resources and web app in-guest changes.
+
+## Simulate a web app outage
+
+In a typical team environment, multiple developers can work on the same application without notifying the other developers. Simulate this scenario and make a change to the web app setting:
+
+```azurecli
+az webapp config appsettings set -g {resourcegroup_name} -n {webapp_name} --settings AzureStorageConnection=WRONG_CONNECTION_STRING
+```
+
+Visit the web app URL to view the following error:
++
+## Troubleshoot the outage using Change Analysis
+
+In the Azure portal, navigate to the Change Analysis overview page. Since you've triggered a web app outage, you'll see an entry of change for `AzureStorageConnection`:
++
+Since the connection string is a secret value, we hide it on the overview page for security purposes. With sufficient permission to read the web app, you can select the change to view details around the old and new values:
++
+The change details pane also shows important information, including who made the change.
+
+Now that you've discovered the web app in-guest change and understand next steps, you can proceed with troubleshooting the issue.
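+
+Once the bad value is identified, you could, for example, restore the setting with the same command that broke it. A sketch, where `CORRECT_CONNECTION_STRING` is a placeholder for your real storage connection string:
+
+```azurecli
+az webapp config appsettings set -g {resourcegroup_name} -n {webapp_name} --settings AzureStorageConnection=CORRECT_CONNECTION_STRING
+```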
+
+## Virtual network changes
+
+Knowing what changed in your application's networking resources is critical due to their effect on connectivity, availability, and performance. Change Analysis supports all network resource changes and captures those changes immediately. Networking changes include:
+
+- Firewalls created or edited
+- Critical network changes (for example, blocking port 22 for TCP connections)
+- Load balancer changes
+- Virtual network changes
+
+The sample application includes a virtual network to make sure the application remains secure. Via the Azure portal, you can view and assess the network changes captured by Change Analysis.
+++
+## Next steps
+
+Learn more about [Change Analysis](./change-analysis.md).
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
Title: Visualizations for Change Analysis in Azure Monitor
-description: Learn how to use visualizations in Azure Monitor's Change Analysis.
+ Title: Scenarios for using Change Analysis in Azure Monitor
+description: Learn the various scenarios in which you can use Azure Monitor's Change Analysis.
ms.contributor: cawa
Previously updated : 07/28/2022
Last updated : 01/12/2023
-# Visualizations for Change Analysis in Azure Monitor
+# Scenarios for using Change Analysis in Azure Monitor
-Change Analysis provides data for various management and troubleshooting scenarios to help you understand what changes to your application might have caused the issues. You can view the Change Analysis data through several channels:
+Change Analysis provides data for various management and troubleshooting scenarios to help you understand what changes to your application might have caused the issues.
-## Change Analysis overview portal
+## View Change Analysis data
The Change Analysis overview portal is available under Azure Monitor, where you can view all changes and application dependency/resource insights. You can access Change Analysis through a couple of entry points:
You can also drill to Change Analysis logs via a chart you've created or pinned
:::image type="content" source="./media/change-analysis/view-change-analysis-2.png" alt-text="Drill into logs and select to view Change Analysis.":::
+## Browse using custom filters and search bar
+
+Browsing through a long list of changes in the entire subscription is time consuming. With Change Analysis custom filters and search capability, you can efficiently navigate to changes relevant to issues for troubleshooting.
++
+### Filters
+
+| Filter | Description |
+| | -- |
+| Subscription | This filter is in-sync with the Azure portal subscription selector. It supports multiple-subscription selection. |
+| Time range | Specifies how far back the UI displays changes, up to 14 days. By default, it's set to the past 24 hours. |
+| Resource group | Select the resource group to scope the changes. By default, all resource groups are selected. |
+| Change level | Controls which levels of changes to display. Levels include: important, normal, and noisy. </br> **Important:** related to availability and security </br> **Noisy:** Read-only properties that are unlikely to cause any issues </br> By default, important and normal levels are checked. |
+| Resource | Select **Add filter** to use this filter. </br> Filter the changes to specific resources. Helpful if you already know which resources to look at for changes. [If the filter is only returning 1,000 resources, see the corresponding solution in troubleshooting guide](./change-analysis-troubleshoot.md#cant-filter-to-your-resource-to-view-changes). |
+| Resource type | Select **Add filter** to use this filter. </br> Filter the changes to specific resource types. |
+
+### Search bar
+
+The search bar filters the changes according to the input keywords. Search bar results apply only to the changes loaded by the page already and don't pull in results from the server side.
+
+## Pin and share a Change Analysis query to the Azure dashboard
+
+Let's say you want to curate a change view on specific resources, like all Virtual Machine changes in your subscription, and include it in a report sent periodically. You can pin the view to an Azure dashboard for monitoring or sharing scenarios. If you'd like to share a specific change with your team members, you can use the share feature in the Change Details page.
+
+## Pin to the Azure dashboard
+
+Once you have applied filters to the Change Analysis homepage:
+
+1. Select **Pin current filters** from the top menu.
+1. Enter a name for the pin.
+1. Click **OK** to proceed.
+
+ :::image type="content" source="./media/change-analysis/click-pin-menu.png" alt-text="Screenshot of selecting Pin current filters button in Change Analysis.":::
+
+A side pane will open to configure the dashboard where you'll place your pin. You can select one of two dashboard types:
+
+| Dashboard type | Description |
+| -- | -- |
+| Private | Only you can access a private dashboard. Choose this option if you're creating the pin for your own easy access to the changes. |
+| Shared | A shared dashboard supports role-based access control for view/read access. Shared dashboards are created as a resource in your subscription with a region and resource group to host it. Choose this option if you're creating the pin to share with your team. |
+
+### Select an existing dashboard
+
+If you already have a dashboard to place the pin:
+
+1. Select the **Existing** tab.
+1. Select either **Private** or **Shared**.
+1. Select the dashboard you'd like to use.
+1. If you've selected **Shared**, select the subscription in which you'd like to place the dashboard.
+1. Select **Pin**.
+
+ :::image type="content" source="./media/change-analysis/existing-dashboard-small.png" alt-text="Screenshot of selecting an existing dashboard to pin your changes to. ":::
+
+### Create a new dashboard
+
+You can create a new dashboard for this pin.
+
+1. Select the **Create new** tab.
+1. Select either **Private** or **Shared**.
+1. Enter the name of the new dashboard.
+1. If you're creating a shared dashboard, enter the resource group and region information.
+1. Click **Create and pin**.
+
+ :::image type="content" source="./media/change-analysis/create-pin-dashboard-small.png" alt-text="Screenshot of creating a new dashboard to pin your changes to.":::
+
+Once the dashboard and pin are created, navigate to the Azure dashboard to view them.
+
+1. From the Azure portal home menu, select **Dashboard**. Use the **Manage Sharing** button in the top menu to handle access or "unshare". Click on the pin to navigate to the curated view of changes.
+
+ :::image type="content" source="./media/change-analysis/azure-dashboard.png" alt-text="Screenshot of selecting the Dashboard in the Azure portal home menu.":::
+
+ :::image type="content" source="./media/change-analysis/view-share-dashboard.png" alt-text="Screenshot of the pin in the dashboard.":::
+
+## Share a single change with your team
+
+In the Change Analysis homepage, select a line of change to view details on the change.
+
+1. On the Changed properties page, select **Share** from the top menu.
+1. On the Share Change Details pane, copy the deep link of the page and share with your team in messages, emails, reports, or whichever communication channel your team prefers.
+
+ :::image type="content" source="./media/change-analysis/share-single-change.png" alt-text="Screenshot of selecting the share button on the dashboard and copying link.":::
## Next steps

- Learn how to [troubleshoot problems in Change Analysis](change-analysis-troubleshoot.md)
azure-monitor Container Insights Enable Provisioned Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-provisioned-clusters.md
+
+ Title: Monitor AKS hybrid clusters
Last updated : 01/10/2023
+description: Collect metrics and logs of AKS hybrid clusters using Azure Monitor.
+++
+# Azure Monitor container insights for Azure Kubernetes Service (AKS) hybrid clusters (preview)
+
+>[!NOTE]
+>Support for monitoring AKS hybrid clusters is currently in preview. We recommend only using preview features in safe testing environments.
+
+[Azure Monitor container insights](./container-insights-overview.md) provides a rich monitoring experience for [AKS hybrid clusters (preview)](/azure/aks/hybrid/aks-hybrid-options-overview). This article describes how to set up Container insights to monitor an AKS hybrid cluster.
+
+## Supported configurations
+
+- Azure Monitor container insights supports monitoring only Linux containers.
+
+## Prerequisites
+
+- Prerequisites listed under the [generic cluster extensions documentation](../../azure-arc/kubernetes/extensions.md#prerequisites).
+- Log Analytics workspace. Azure Monitor Container Insights supports a Log Analytics workspace in the regions listed under Azure [products by region page](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace using [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md), or [Azure portal](../logs/quick-create-workspace.md).
+- [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role assignment on the Azure subscription containing the Azure Arc-enabled Kubernetes resource. If the Log Analytics workspace is in a different subscription, then [Log Analytics Contributor](../logs/manage-access.md#azure-rbac) role assignment is needed on the Log Analytics workspace.
+- To view the monitoring data, you need to have [Log Analytics Reader](../logs/manage-access.md#azure-rbac) role assignment on the Log Analytics workspace.
+- The following endpoints need to be enabled for outbound access in addition to the ones mentioned under [connecting a Kubernetes cluster to Azure Arc](../../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
+- Azure CLI version 2.43.0 or higher
+- Azure k8s-extension version 1.3.7 or higher
+- Azure Resource-graph version 2.1.0
+
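+As a quick, non-authoritative sketch, you can check these versions and add or upgrade the required CLI extensions before onboarding (`k8s-extension` and `resource-graph` are the published extension names):
+
+```azurecli
+az version
+az extension add --name k8s-extension --upgrade
+az extension add --name resource-graph --upgrade
+```
+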
+## Onboarding
+
+## [CLI](#tab/create-cli)
+
+```azurecli
+az login
+
+az account set --subscription <cluster-subscription-name>
+
+az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
+```
+## [Azure portal](#tab/create-portal)
+
+### Onboarding from the AKS hybrid resource pane
+
+1. In the Azure portal, select the AKS hybrid cluster that you wish to monitor.
+
+2. From the resource pane on the left, select the 'Insights' item under the 'Monitoring' section.
+
+3. On the onboarding page, select the 'Configure Azure Monitor' button
+
+4. You can now choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.
+
+5. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
+
+### Onboarding from Azure Monitor pane
+
+1. In the Azure portal, navigate to the 'Monitor' pane, and select the 'Containers' option under the 'Insights' menu.
+
+2. Select the 'Unmonitored clusters' tab to view the AKS hybrid clusters that you can enable monitoring for.
+
+3. Click on the 'Enable' link next to the cluster that you want to enable monitoring for.
+
+4. Choose the Log Analytics workspace.
+
+5. Select the 'Configure' button to continue.
++
+## [Resource Manager](#tab/create-arm)
+
+1. Download the Azure Resource Manager Template and Parameter files
+
+```bash
+curl -L https://aka.ms/existingClusterOnboarding.json -o existingClusterOnboarding.json
+```
+
+```bash
+curl -L https://aka.ms/existingClusterParam.json -o existingClusterParam.json
+```
+
+2. Edit the values in the parameter file.
+
+ - For clusterResourceId and clusterRegion, use the values on the Overview page for the LCM cluster
+ - For workspaceResourceId, use the resource ID of your Log Analytics workspace
+ - For workspaceRegion, use the Location of your Log Analytics workspace
+ - For workspaceDomain, use the workspace domain value: "opinsights.azure.com" for public cloud and "opinsights.azure.cn" for Azure China cloud
+ - For resourceTagValues, leave empty if you don't need specific resource tags (see the sketch below)
+
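+For illustration only, the edited parameter file might look like the following sketch; every value is a placeholder, and the exact resource ID formats shown are assumptions:
+
+```bash
+# Hypothetical parameter values for illustration only; replace each placeholder.
+cat > existingClusterParam.json <<'EOF'
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "clusterResourceId": { "value": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.hybridcontainerservice/provisionedclusters/<cluster-name>" },
+    "clusterRegion": { "value": "<cluster-region>" },
+    "workspaceResourceId": { "value": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.operationalinsights/workspaces/<workspace-name>" },
+    "workspaceRegion": { "value": "<workspace-region>" },
+    "workspaceDomain": { "value": "opinsights.azure.com" },
+    "resourceTagValues": { "value": {} }
+  }
+}
+EOF
+```
+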
+3. Deploy the ARM template
+
+```azurecli
+az login
+
+az account set --subscription <cluster-subscription-name>
+
+az deployment group create --resource-group <resource-group> --template-file ./existingClusterOnboarding.json --parameters existingClusterParam.json
+```
++
+## Validation
+
+### Extension details
+
+To show the extension details:
+
+```azurecli
+az k8s-extension list --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice"
+```
++
+## Delete extension
+
+The command for deleting the extension:
+
+```azurecli
+az k8s-extension delete --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --name azuremonitor-containers --yes
+```
+
+## Known Issues/Limitations
+
+- Windows containers aren't currently supported.
azure-monitor Data Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-platform.md
Read more about distributed tracing at [What is distributed tracing?](app/distri
Once [Change Analysis is enabled](./change/change-analysis-enable.md), the `Microsoft.ChangeAnalysis` resource provider is registered with an Azure Resource Manager subscription to make the resource properties and configuration change data available. Change Analysis provides data for various management and troubleshooting scenarios to help users understand what changes might have caused the issues:

- Troubleshoot your application via the [Diagnose & solve problems tool](./change/change-analysis-enable.md).
-- Perform general management and monitoring via the [Change Analysis overview portal](./change/change-analysis-visualizations.md#change-analysis-overview-portal) and [the activity log](./change/change-analysis-visualizations.md#activity-log-change-history).
+- Perform general management and monitoring via the [Change Analysis overview portal](./change/change-analysis-visualizations.md#view-change-analysis-data) and [the activity log](./change/change-analysis-visualizations.md#activity-log-change-history).
- [Learn more about how to view data results for other scenarios](./change/change-analysis-visualizations.md).

Read more about Change Analysis, including data sources in [Use Change Analysis in Azure Monitor](./change/change-analysis.md).
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
Title: Create diagnostic settings at scale using Azure Policy
description: Use Azure Policy to create diagnostic settings in Azure Monitor to be created at scale as each Azure resource is created.
Last updated 05/09/2022
For example, the following image shows the built-in diagnostic setting policy de
![Partial screenshot from the Azure Policy Definitions page showing two built-in diagnostic setting policy definitions for Data Lake Analytics.](media/diagnostic-settings-policy/built-in-diagnostic-settings.png)
+For a complete list of built-in policies for Azure Monitor, see [Azure Policy built-in definitions for Azure Monitor](../policy-reference.md).
+ ## Custom policy definitions For resource types that don't have a built-in policy, you need to create a custom policy definition. You could do this manually in the Azure portal by copying an existing built-in policy and then modifying it for your resource type. It's more efficient, though, to create the policy programmatically by using a script in the PowerShell Gallery.
Diagnostic settings do not support resourceIDs with non-ASCII characters (for ex
## Next steps - [Read more about Azure platform Logs](./platform-logs-overview.md)-- [Read more about diagnostic settings](./diagnostic-settings.md)
+- [Read more about diagnostic settings](./diagnostic-settings.md)
azure-monitor Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md
+
+ Title: Getting started with Azure Monitor
+description: Guidance and recommendations for deploying Azure Monitor.
+++ Last updated : 10/18/2021+++
+# Getting started with Azure Monitor
+This article helps guide you through getting started with Azure Monitor, including recommendations for preparing your environment and configuring Azure Monitor. It presents an overview of the basic steps you need for a complete Azure Monitor implementation. It will help you understand how you can take advantage of Azure Monitor's features to maximize the observability of your cloud and hybrid applications and resources.
+
+This article focuses on configuration requirements and deployment options, as opposed to actual configuration details. Links are provided for detailed information for the required configurations.
+
+Azure Monitor is available the moment you create an Azure subscription. The Activity log immediately starts collecting events about activity in the subscription, and platform metrics are collected for any Azure resources you created. Features such as metrics explorer are available to analyze data. Other features require configuration. This article identifies the configuration steps required to take advantage of all Azure Monitor features. It also makes recommendations for which features you should use and how to determine configuration options based on your particular requirements.
+
+The goal of a complete implementation is to collect all useful data from all of your cloud resources and applications and enable the entire set of Azure Monitor features based on that data.
+To enable Azure Monitor to monitor all of your Azure resources, you need to both:
+- Configure Azure Monitor components
+- Configure Azure resources to generate monitoring data for Azure Monitor to collect.
+
+> [!IMPORTANT]
+> If you're new to Azure Monitor or want to monitor a single Azure resource, start with the [Monitor Azure resources with Azure Monitor tutorial](essentials/monitor-azure-resource.md). The tutorial provides general concepts for Azure Monitor and guidance for monitoring a single Azure resource. This article provides recommendations for preparing your environment to leverage all features of Azure Monitor to monitor your entire set of applications and resources together at scale.
+
+## Getting started workflow
+These articles provide detailed information about each of the main steps you'll need to do when getting started with Azure Monitor.
+
+| Article | Description |
+|:|:|
+| [Planning](best-practices-plan.md) | Things that you should consider before starting your implementation. Includes design decisions and information about your organization and requirements that you should gather. |
+| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from your Azure and hybrid applications and resources. |
+| [Analysis and visualizations](best-practices-analysis.md) | Standard features and additional visualizations that you can create to analyze collected monitoring data. |
+| [Alerts and automated responses](best-practices-alerts.md) | Configure notifications and processes that are automatically triggered when an alert is created. |
+| [Best practices and cost management](best-practices-cost.md) | Reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
++
+## Next steps
+
+- [Planning your monitoring strategy and configuration](best-practices-plan.md)
azure-netapp-files Azacsnap Cmd Ref Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-delete.md
Previously updated : 04/21/2021
Last updated : 01/18/2023
The `-c delete` command has the following options:
- `--delete sync` when used with options `--dbsid <SID>` and `--hanabackupid <HANA backup id>` gets the storage snapshot name from the backup catalog for the `<HANA backup id>`, and then deletes the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.

-- `--delete sync` when used with `--snapshot <snapshot name>` will check for any entries in the backup catalog for the `<snapshot name>`, gets the SAP HANA backup ID and deletes both the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
+- `--delete sync` when used with options `--dbsid <SID>` and `--snapshot <snapshot name>` will check for any entries in the backup catalog for the `<snapshot name>`, gets the SAP HANA backup ID and deletes both the entry in the backup catalog _and_ the snapshot from any of the volumes containing the named snapshot.
- `[--force]` (optional) *Use with caution*. This operation will force deletion without prompting for confirmation.
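
Putting these options together, a hypothetical invocation that removes both the backup catalog entry and the matching storage snapshots might look like the following sketch (the SID and backup ID values are placeholders):

```bash
# Delete the catalog entry for HANA backup ID 1234567891011 on SID H80,
# plus the named snapshot on every volume that contains it.
azacsnap -c delete --delete sync --dbsid H80 --hanabackupid 1234567891011
```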
azure-netapp-files Azure Government https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md
Previously updated : 01/12/2023
Last updated : 01/17/2023
All [Azure NetApp Files features](whats-new.md) available on Azure public cloud
|: |: |: | | Azure NetApp Files cross-region replication | Generally available (GA) | [Limited](cross-region-replication-introduction.md#supported-region-pairs) | | Azure NetApp Files backup | Public preview | No |
+| Cross-zone replication | Public preview | No |
| Standard network features | Generally available (GA) | No | ## Portal access
You can follow [Azure NetApp Files](./index.yml) documentation for details about
## Azure CLI access
-You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to log in as you normally would with the `az login` command. After running the log-in command, a browser will launch where you enter the appropriate Azure Government credentials.
+You can connect to Azure Government by setting the cloud name to `AzureUSGovernment` and then proceeding to sign in as you normally would with the `az login` command. After running the sign-in command, a browser will launch where you enter the appropriate Azure Government credentials.
```azurecli
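# Set the cloud, then sign in with your Azure Government credentials
az cloud set --name AzureUSGovernment
az login
```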
azure-netapp-files Cross Zone Replication Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-introduction.md
# Understand cross-zone replication of Azure NetApp Files (preview)
-In many cases resiliency across availability zones is achieved by HA architectures using application-based replication and HA, as explained in Using availability zones for high availability. However, often simpler, more cost-effective approaches are considered by using storage-based data replication instead.
+In many cases, resiliency across availability zones is achieved by HA architectures using application-based replication and HA, as explained in [Use availability zones for high availability](use-availability-zones.md). However, simpler, more cost-effective approaches are often considered by using storage-based data replication instead.
Similar to the Azure NetApp Files [cross-region replication feature](cross-region-replication-introduction.md), the cross-zone replication (CZR) capability provides data protection between volumes in different availability zones. You can asynchronously replicate data from an Azure NetApp Files volume (source) in one availability zone to another Azure NetApp Files volume (destination) in another availability zone. This capability enables you to fail over your critical application if a zone-wide outage or disaster happens.
azure-video-indexer Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-support.md
Some notes for the above values are:
| **Language** | **Code** | **Supported source language** | **Language identification** | **Customization** (language model) | |:--:|:--:|:--:|:-:|:--:|
-| Afrikaans | `af-ZA` | | ✔ | |
+| Afrikaans | `af-ZA` | | | |
| Arabic (Israel) | `ar-IL` | ✔ | | ✔ | | Arabic (Iraq) | `ar-IQ` | ✔ | ✔ | | | Arabic (Jordan) | `ar-JO` | ✔ | ✔ | ✔ |
Some notes for the above values are:
| Arabic Modern Standard (Bahrain) | `ar-BH` | ✔ | ✔ | ✔ | | Arabic Syrian Arab Republic | `ar-SY` | ✔ | ✔| ✔ | | Armenian | `hy-AM` | ✔ | | |
-| Bangla | `bn-BD` | | ✔ | |
-| Bosnian | `bs-Latn` | | ✔ | |
+| Bangla | `bn-BD` | | | |
+| Bosnian | `bs-Latn` | | | |
| Bulgarian | `bg-BG` | ✔ | ✔ | | | Catalan | `ca-ES` | ✔ | ✔ | | | Chinese (Cantonese Traditional) | `zh-HK` | ✔ | ✔ | ✔ | | Chinese (Simplified) | `zh-Hans` | ✔ | ✔ | ✔ | | Chinese (Simplified) | `zh-CK` | ✔ | ✔ | ✔ |
-| Chinese (Traditional) | `zh-Hant` | | ✔ | |
+| Chinese (Traditional) | `zh-Hant` | | | |
| Croatian | `hr-HR` | ✔ | ✔ | | | Czech | `cs-CZ` | ✔ |✔ | ✔ | | Danish | `da-DK` | ✔ |✔ | ✔ |
Some notes for the above values are:
| English United Kingdom | `en-GB` | ✔ |✔ | ✔ | | English United States | `en-US` | ✔ |✔ | ✔ | | Estonian | `et-EE` | ✔ |✔ | |
-| Fijian | `en-FJ` | | ✔ | |
-| Filipino | `fil-PH` | | ✔ | |
+| Fijian | `en-FJ` | | | |
+| Filipino | `fil-PH` | | | |
| Finnish | `fi-FI` | ✔ |✔ | ✔ | | French | `fr-FR` | ✔ |✔ | ✔ | | French (Canada) | `fr-CA` | ✔ |✔ | ✔ | | German | `de-DE` | ✔ |✔ | ✔ | | Greek | `el-GR` | ✔ |✔ | | | Gujarati | `gu-IN` | ✔ |✔ | |
-| Haitian | `fr-HT` | | ✔ | |
+| Haitian | `fr-HT` | | | |
| Hebrew | `he-IL` | ✔ |✔ | ✔ | | Hindi | `hi-IN` | ✔ |✔ | ✔ | | Hungarian | `hu-HU` | | ✔ | |
-| Icelandic | `is-IS` | ✔ | | |
-| Indonesian | `id-ID` | | ✔ | |
+| Icelandic | `is-IS` | ✔ | | |
+| Indonesian | `id-ID` | | | |
| Irish | `ga-IE` | ✔ | ✔ | | | Italian | `it-IT` | ✔ | ✔ | ✔ | | Japanese | `ja-JP` | ✔ | ✔ | ✔ | | Kannada | `kn-IN` | ✔ | ✔ | |
-| Kiswahili | `sw-KE` | | ✔ | |
+| Kiswahili | `sw-KE` | | | |
| Korean | `ko-KR` | ✔ | ✔| ✔ | | Latvian | `lv-LV` | ✔ | ✔ | |
-| Lithuanian | `lt-LT` | | ✔ | |
-| Malagasy | `mg-MG` | | ✔ | |
+| Lithuanian | `lt-LT` | | | |
+| Malagasy | `mg-MG` | | | |
| Malay | `ms-MY` | ✔ | | | | Malayalam | `ml-IN` |✔ |✔ | |
-| Maltese | `mt-MT` | | ✔ | |
+| Maltese | `mt-MT` | | | |
| Norwegian | `nb-NO` | ✔ |✔ | ✔ | | Persian | `fa-IR` | ✔ | | ✔ | | Polish | `pl-PL` | ✔ |✔ | ✔ |
Some notes for the above values are:
| Portuguese (Portugal) | `pt-PT` | ✔ | ✔ | ✔ | | Romanian | `ro-RO` | ✔ | ✔ | | | Russian | `ru-RU` | ✔ | ✔ | ✔ |
-| Samoan | `en-WS` | | ✔ | |
-| Serbian (Cyrillic) | `sr-Cyrl-RS` | |✔ | |
-| Serbian (Latin) | `sr-Latn-RS` | | ✔ | |
+| Samoan | `en-WS` | | | |
+| Serbian (Cyrillic) | `sr-Cyrl-RS` | | | |
+| Serbian (Latin) | `sr-Latn-RS` | | | |
| Slovak | `sk-SK` | ✔ | ✔ | | | Slovenian | `sl-SI` | ✔ | ✔ | | | Spanish | `es-ES` | ✔ | ✔ | ✔ | | Spanish (Mexico) | `es-MX` | ✔ | ✔ | ✔ | | Swedish | `sv-SE` | ✔ |✔ | ✔ |
-| Tamil | `ta-IN` | ✔ | ✔ | |
-| Telugu | `te-IN` | ✔ | ✔ | |
+| Tamil | `ta-IN` | ✔ | ✔ | |
+| Telugu | `te-IN` | ✔ | ✔ | |
| Thai | `th-TH` | ✔ |✔ | ✔ |
-| Tongan | `to-TO` | | ✔ | |
-| Turkish | `tr-TR` | ✔ | ✔| ✔ |
+| Tongan | `to-TO` | | | |
+| Turkish | `tr-TR` | ✔ | ✔| ✔ |
| Ukrainian | `uk-UA` | ✔ | ✔ | |
-| Urdu | `ur-PK` | | | |
+| Urdu | `ur-PK` | | | |
| Vietnamese | `vi-VN` | ✔ |✔| | **Default languages supported by Language identification (LID)**: German (de-DE), English United States (en-US), Spanish (es-ES), French (fr-FR), Italian (it-IT), Japanese (ja-JP), Portuguese (pt-BR), Russian (ru-RU), Chinese (Simplified) (zh-Hans).
backup Backup Azure Backup Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-sql.md
Title: Back up SQL Server to Azure as a DPM workload description: An introduction to backing up SQL Server databases by using the Azure Backup service- Previously updated : 01/30/2019+ Last updated : 01/17/2023++++ + # Back up SQL Server to Azure as a DPM workload
-This article leads you through the configuration steps to back up SQL Server databases by using Azure Backup.
+This article describes how to back up and restore SQL Server databases by using Azure Backup.
+
+Azure Backup helps you back up SQL Server databases to Azure. You need an Azure account; if you don't have one, you can create a free account in just a few minutes. For more information, see [Create your Azure free account](https://azure.microsoft.com/pricing/free-trial/).
-To back up SQL Server databases to Azure, you need an Azure account. If you don't have one, you can create a free account in just a few minutes. For more information, see [Create your Azure free account](https://azure.microsoft.com/pricing/free-trial/).
+## SQL Server database backup workflow
To back up a SQL Server database to Azure and to recover it from Azure:
To back up a SQL Server database to Azure and to recover it from Azure:
1. Create on-demand backup copies in Azure. 1. Recover the database from Azure.
->[!NOTE]
->DPM 2019 UR2 supports SQL Server Failover Cluster Instances (FCI) using Cluster Shared Volumes (CSV).<br><br>
->Protection of [SQL Server failover cluster instance with Storage Spaces Direct on Azure](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure) and [SQL Server failover cluster instance with Azure shared disks](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure) is supported with this feature. The DPM server must be deployed in the Azure Virtual Machine to protect SQL FCI instance deployed on Azure VMs.
+## Supported scenarios
+
+- DPM 2019 UR2 supports SQL Server Failover Cluster Instances (FCI) using Cluster Shared Volumes (CSV).
+- Protection of [SQL Server failover cluster instance with Storage Spaces Direct on Azure](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-storage-spaces-direct-manually-configure) and [SQL Server failover cluster instance with Azure shared disks](/azure/azure-sql/virtual-machines/windows/failover-cluster-instance-azure-shared-disks-manually-configure) is supported with this feature. The DPM server must be deployed in the Azure Virtual Machine to protect SQL FCI instance deployed on Azure VMs.
## Prerequisites and limitations
-* If you have a database with files on a remote file share, protection will fail with Error ID 104. DPM doesn't support protection for SQL Server data on a remote file share.
+* If you have a database with files on a remote file share, protection will fail with Error ID 104. DPM doesn't support protection for SQL Server data on a remote file share.
* DPM can't protect databases that are stored on remote SMB shares. * Ensure that the [availability group replicas are configured as read-only](/sql/database-engine/availability-groups/windows/configure-read-only-access-on-an-availability-replica-sql-server). * You must explicitly add the system account **NTAuthority\System** to the Sysadmin group on SQL Server.
To back up a SQL Server database to Azure and to recover it from Azure:
* If the backup fails on the selected node, then the backup operation fails. * Recovery to the original location isn't supported. * SQL Server 2014 or above backup issues:
- * SQL server 2014 added a new feature to create a [database for on-premises SQL Server in Windows Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). DPM can't be used to protect this configuration.
+ * SQL Server 2014 added a new feature to create a [database for on-premises SQL Server in Microsoft Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). DPM can't be used to protect this configuration.
* There are some known issues with "Prefer secondary" backup preference for the SQL Always On option. DPM always takes a backup from secondary. If no secondary can be found, then the backup fails. ## Before you start
To protect SQL Server databases in Azure, first create a backup policy:
1. On the Data Protection Manager (DPM) server, select the **Protection** workspace. 1. Select **New** to create a protection group.
- ![Create a protection group](./media/backup-azure-backup-sql/protection-group.png)
+ ![Screenshot shows how to start creating a protection group.](./media/backup-azure-backup-sql/protection-group.png)
1. On the start page, review the guidance about creating a protection group. Then select **Next**. 1. Select **Servers**.
- ![Select the Servers protection group type](./media/backup-azure-backup-sql/pg-servers.png)
+ ![Screenshot shows how to select the Servers protection group type.](./media/backup-azure-backup-sql/pg-servers.png)
1. Expand the SQL Server virtual machine where the databases that you want to back up are located. You see the data sources that can be backed up from that server. Expand **All SQL Shares** and then select the databases that you want to back up. In this example, we select ReportServer$MSDPM2012 and ReportServer$MSDPM2012TempDB. Then select **Next**.
- ![Select a SQL Server database](./media/backup-azure-backup-sql/pg-databases.png)
+ ![Screenshot shows how to select a SQL Server database.](./media/backup-azure-backup-sql/pg-databases.png)
1. Name the protection group and then select **I want online protection**.
- ![Choose a data-protection method - short-term disk protection or online Azure protection](./media/backup-azure-backup-sql/pg-name.png)
+ ![Screenshot shows how to choose a data-protection method - short-term disk protection or online Azure protection.](./media/backup-azure-backup-sql/pg-name.png)
1. On the **Specify Short-Term Goals** page, include the necessary inputs to create backup points to the disk. In this example, **Retention range** is set to *5 days*. The backup **Synchronization frequency** is set to once every *15 minutes*. **Express Full Backup** is set to *8:00 PM*.
- ![Set up short-term goals for backup protection](./media/backup-azure-backup-sql/pg-shortterm.png)
+ ![Screenshot shows how to set up short-term goals for backup protection.](./media/backup-azure-backup-sql/pg-shortterm.png)
> [!NOTE] > In this example, a backup point is created at 8:00 PM every day. The data that has been modified since the previous day's 8:00 PM backup point is transferred. This process is called **Express Full Backup**. Although the transaction logs are synchronized every 15 minutes, if we need to recover the database at 9:00 PM, then the point is created by replaying the logs from the last express full backup point, which is 8:00 PM in this example.
- >
- >
1. Select **Next**. DPM shows the overall storage space available. It also shows the potential disk space utilization.
- ![Set up disk allocation](./media/backup-azure-backup-sql/pg-storage.png)
+ ![Screenshot shows how to set up disk allocation.](./media/backup-azure-backup-sql/pg-storage.png)
By default, DPM creates one volume per data source (SQL Server database). The volume is used for the initial backup copy. In this configuration, Logical Disk Manager (LDM) limits DPM protection to 300 data sources (SQL Server databases). To work around this limitation, select **Co-locate data in DPM Storage Pool**. If you use this option, DPM uses a single volume for multiple data sources. This setup allows DPM to protect up to 2,000 SQL Server databases.
To protect SQL Server databases in Azure, first create a backup policy:
1. If you're an administrator, you can choose to transfer this initial backup **Automatically over the network** and choose the time of transfer. Or choose to **Manually** transfer the backup. Then select **Next**.
- ![Choose a replica-creation method](./media/backup-azure-backup-sql/pg-manual.png)
+ ![Screenshot shows how to choose a replica-creation method.](./media/backup-azure-backup-sql/pg-manual.png)
The initial backup copy requires the transfer of the entire data source (SQL Server database). The backup data moves from the production server (SQL Server computer) to the DPM server. If this backup is large, then transferring the data over the network could cause bandwidth congestion. For this reason, administrators can choose to use removable media to transfer the initial backup **Manually**. Or they can transfer the data **Automatically over the network** at a specified time.
To protect SQL Server databases in Azure, first create a backup policy:
1. Choose when to run a consistency check. Then select **Next**.
- ![Choose when to run a consistency check](./media/backup-azure-backup-sql/pg-consistent.png)
+ ![Screenshot shows how to choose the schedule to run a consistency check.](./media/backup-azure-backup-sql/pg-consistent.png)
DPM can run a consistency check on the integrity of the backup point. It calculates the checksum of the backup file on the production server (the SQL Server computer in this example) and the backed-up data for that file in DPM. If the check finds a conflict, then the backed-up file in DPM is assumed to be corrupt. DPM fixes the backed-up data by sending the blocks that correspond to the checksum mismatch. Because the consistency check is a performance-intensive operation, administrators can choose to schedule the consistency check or run it automatically. 1. Select the data sources to protect in Azure. Then select **Next**.
- ![Select data sources to protect in Azure](./media/backup-azure-backup-sql/pg-sqldatabases.png)
+ ![Screenshot shows how to select data sources to protect in Azure.](./media/backup-azure-backup-sql/pg-sqldatabases.png)
1. If you're an administrator, you can choose backup schedules and retention policies that suit your organization's policies.
- ![Choose schedules and retention policies](./media/backup-azure-backup-sql/pg-schedule.png)
+ ![Screenshot shows how to choose schedules and retention policies.](./media/backup-azure-backup-sql/pg-schedule.png)
In this example, backups are taken daily at 12:00 PM and 8:00 PM.
To protect SQL Server databases in Azure, first create a backup policy:
1. Choose the retention policy schedule. For more information about how the retention policy works, see [Use Azure Backup to replace your tape infrastructure](backup-azure-backup-cloud-as-tape.md).
- ![Choose a retention policy](./media/backup-azure-backup-sql/pg-retentionschedule.png)
+ ![Screenshot shows how to choose a retention policy.](./media/backup-azure-backup-sql/pg-retentionschedule.png)
In this example:
To protect SQL Server databases in Azure, first create a backup policy:
1. On the **Summary** page, review the policy details. Then select **Create group**. You can select **Close** and watch the job progress in the **Monitoring** workspace.
- ![The progress of the protection group creation](./media/backup-azure-backup-sql/pg-summary.png)
+ ![Screenshot shows the progress of the protection group creation.](./media/backup-azure-backup-sql/pg-summary.png)
## Create on-demand backup copies of a SQL Server database
A recovery point is created when the first backup occurs. Rather than waiting fo
1. In the protection group, make sure the database status is **OK**.
- ![A protection group, showing the database status](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
+ ![Screenshot shows the database status in a protection group.](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
1. Right-click the database and then select **Create recovery point**.
- ![Choose to create an online recovery point](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
+ ![Screenshot shows how to create an online recovery point.](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
1. In the drop-down menu, select **Online protection**. Then select **OK** to start the creation of a recovery point in Azure.
- ![Start creating a recovery point in Azure](./media/backup-azure-backup-sql/sqlbackup-azure.png)
+ ![Screenshot shows how to start creating a recovery point in Azure.](./media/backup-azure-backup-sql/sqlbackup-azure.png)
1. You can view the job progress in the **Monitoring** workspace.
- ![View job progress in the Monitoring console](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
+ ![Screenshot shows how to view job progress in the Monitoring console.](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
## Recover a SQL Server database from Azure
To recover a protected entity, such as a SQL Server database, from Azure:
1. Open the DPM server management console. Go to the **Recovery** workspace to see the servers that DPM backs up. Select the database (in this example, ReportServer$MSDPM2012). Select a **Recovery time** that ends with **Online**.
- ![Select a recovery point](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
+ ![Screenshot shows how to select a recovery point.](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
1. Right-click the database name and select **Recover**.
- ![Recover a database from Azure](./media/backup-azure-backup-sql/sqlbackup-recover.png)
+ ![Screenshot shows how to recover a database from Azure.](./media/backup-azure-backup-sql/sqlbackup-recover.png)
1. DPM shows the details of the recovery point. Select **Next**. To overwrite the database, select the recovery type **Recover to original instance of SQL Server**. Then select **Next**.
- ![Recover a database to its original location](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
+ ![Screenshot shows how to recover a database to its original location.](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
In this example, DPM allows the database to be recovered to another SQL Server instance or to a standalone network folder. 1. On the **Specify Recovery Options** page, you can select the recovery options. For example, you can choose **Network bandwidth usage throttling** to throttle the bandwidth that recovery uses. Then select **Next**.
To recover a protected entity, such as a SQL Server database, from Azure:
The recovery status shows the database being recovered. You can select **Close** to close the wizard and view the progress in the **Monitoring** workspace.
- ![Start the recovery process](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
+ ![Screenshot shows how to start the recovery process.](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
When the recovery is complete, the restored database is consistent with the application.
backup Backup Azure Sql Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-mabs.md
Title: Back up SQL Server by using Azure Backup Server description: In this article, learn the configuration to back up SQL Server databases by using Microsoft Azure Backup Server (MABS).- Previously updated : 07/28/2021+ Last updated : 01/16/2023++++ + # Back up SQL Server to Azure by using Azure Backup Server
-Microsoft Azure Backup Server (MABS) provides backup and recovery for SQL Server databases. In addition to backing up SQL Server databases, you can run a system backup or full bare-metal backup of the SQL Server computer. Here's what MABS can protect:
+This article describes how to back up and restore SQL Server databases by using Microsoft Azure Backup Server (MABS).
+
+Microsoft Azure Backup Server (MABS) provides backup and recovery for SQL Server databases. In addition to backing up SQL Server databases, you can run a system backup or full bare-metal backup of the SQL Server computer. You can use MABS to protect:
- A standalone SQL Server instance - A SQL Server Failover Cluster Instance (FCI)
->[!Note]
->MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV).
->
->Protection of SQL Server FCI with Storage Spaces Direct on Azure, and SQL Server FCI with Azure shared disks is supported with this feature. The DPM server must be deployed in the Azure Virtual Machine to protect the SQL FCI instance, deployed on the Azure VMs.
->
->A SQL Server Always On availability group with theses preferences:
->- Prefer Secondary
->- Secondary only
->- Primary
->- Any Replica
+## Supported scenarios
+
+- MABS v3 UR2 supports SQL Server Failover Cluster Instance (FCI) using Cluster Shared Volume (CSV).
+- Protection of SQL Server FCI with Storage Spaces Direct on Azure, and SQL Server FCI with Azure shared disks is supported with this feature. The DPM server must be deployed in the Azure Virtual Machine to protect the SQL FCI instance, deployed on the Azure VMs.
+- A SQL Server Always On availability group with these preferences:
+ - Prefer Secondary
+ - Secondary only
+ - Primary
+ - Any Replica
+
+## SQL Server database protection workflow
To back up a SQL Server database and recover it from Azure:
To protect SQL Server databases in Azure, first create a backup policy:
1. In Azure Backup Server, select the **Protection** workspace. 1. Select **New** to create a protection group.
- ![Create a protection group in Azure Backup Server](./media/backup-azure-backup-sql/protection-group.png)
+ ![Screenshot shows how to start creating a protection group in Azure Backup Server.](./media/backup-azure-backup-sql/protection-group.png)
1. On the start page, review the guidance about creating a protection group. Then select **Next**. 1. For the protection group type, select **Servers**.
- ![Select the Servers protection group type](./media/backup-azure-backup-sql/pg-servers.png)
+ ![Screenshot shows how to select the Servers protection group type.](./media/backup-azure-backup-sql/pg-servers.png)
1. Expand the SQL Server instance where the databases that you want to back up are located. You see the data sources that can be backed up from that server. Expand **All SQL Shares** and then select the databases that you want to back up. In this example, we select ReportServer$MSDPM2012 and ReportServer$MSDPM2012TempDB. Select **Next**.
- ![Select a SQL Server database](./media/backup-azure-backup-sql/pg-databases.png)
+ ![Screenshot shows how to select a SQL Server database.](./media/backup-azure-backup-sql/pg-databases.png)
1. Name the protection group and then select **I want online protection**.
- ![Choose a data-protection method - short-term disk protection or online Azure protection](./media/backup-azure-backup-sql/pg-name.png)
+ ![Screenshot shows how to choose a data-protection method - short-term disk protection or online Azure protection.](./media/backup-azure-backup-sql/pg-name.png)
1. On the **Specify Short-Term Goals** page, include the necessary inputs to create backup points to the disk. In this example, **Retention range** is set to *5 days*. The backup **Synchronization frequency** is set to once every *15 minutes*. **Express Full Backup** is set to *8:00 PM*.
- ![Set up short-term goals for backup protection](./media/backup-azure-backup-sql/pg-shortterm.png)
+ ![Screenshot shows how to set up short-term goals for backup protection.](./media/backup-azure-backup-sql/pg-shortterm.png)
> [!NOTE] > In this example, a backup point is created at 8:00 PM every day. The data that has been modified since the previous day's 8:00 PM backup point is transferred. This process is called **Express Full Backup**. Although the transaction logs are synchronized every 15 minutes, if we need to recover the database at 9:00 PM, then the point is created by replaying the logs from the last express full backup point, which is 8:00 PM in this example.
- >
- >
1. Select **Next**. MABS shows the overall storage space available. It also shows the potential disk space utilization.
- ![Set up disk allocation in MABS](./media/backup-azure-backup-sql/pg-storage.png)
+ ![Screenshot shows how to set up disk allocation in MABS.](./media/backup-azure-backup-sql/pg-storage.png)
By default, MABS creates one volume per data source (SQL Server database). The volume is used for the initial backup copy. In this configuration, Logical Disk Manager (LDM) limits MABS protection to 300 data sources (SQL Server databases). To work around this limitation, select **Co-locate data in DPM Storage Pool**. If you use this option, MABS uses a single volume for multiple data sources. This setup allows MABS to protect up to 2,000 SQL Server databases. If you select **Automatically grow the volumes**, then MABS can account for the increased backup volume as the production data grows. If you don't select **Automatically grow the volumes**, then MABS limits the backup storage to the data sources in the protection group. 1. If you're an administrator, you can choose to transfer this initial backup **Automatically over the network** and choose the time of transfer. Or choose to **Manually** transfer the backup. Then select **Next**.
- ![Choose a replica-creation method in MABS](./media/backup-azure-backup-sql/pg-manual.png)
+ ![Screenshot shows how to choose a replica-creation method in MABS.](./media/backup-azure-backup-sql/pg-manual.png)
The initial backup copy requires the transfer of the entire data source (SQL Server database). The backup data moves from the production server (SQL Server computer) to MABS. If this backup is large, then transferring the data over the network could cause bandwidth congestion. For this reason, administrators can choose to use removable media to transfer the initial backup **Manually**. Or they can transfer the data **Automatically over the network** at a specified time. After the initial backup finishes, backups continue incrementally on the initial backup copy. Incremental backups tend to be small and are easily transferred across the network. 1. Choose when to run a consistency check. Then select **Next**.
- ![Choose when to run a consistency check](./media/backup-azure-backup-sql/pg-consistent.png)
+ ![Screenshot shows how to choose a schedule to run a consistency check.](./media/backup-azure-backup-sql/pg-consistent.png)
MABS can run a consistency check on the integrity of the backup point. It calculates the checksum of the backup file on the production server (the SQL Server computer in this example) and the backed-up data for that file in MABS. If the check finds a conflict, then the backed-up file in MABS is assumed to be corrupt. MABS fixes the backed-up data by sending the blocks that correspond to the checksum mismatch. Because the consistency check is a performance-intensive operation, administrators can choose to schedule the consistency check or run it automatically. 1. Select the data sources to protect in Azure. Then select **Next**.
- ![Select data sources to protect in Azure](./media/backup-azure-backup-sql/pg-sqldatabases.png)
+ ![Screenshot shows how to select data sources to protect in Azure.](./media/backup-azure-backup-sql/pg-sqldatabases.png)
1. If you're an administrator, you can choose backup schedules and retention policies that suit your organization's policies.
- ![Choose schedules and retention policies](./media/backup-azure-backup-sql/pg-schedule.png)
+ ![Screenshot shows how to choose schedules and retention policies.](./media/backup-azure-backup-sql/pg-schedule.png)
In this example, backups are taken daily at 12:00 PM and 8:00 PM.
To protect SQL Server databases in Azure, first create a backup policy:
1. Choose the retention policy schedule. For more information about how the retention policy works, see [Use Azure Backup to replace your tape infrastructure](backup-azure-backup-cloud-as-tape.md).
- ![Choose a retention policy in MABS](./media/backup-azure-backup-sql/pg-retentionschedule.png)
+ ![Screenshot shows how to choose a retention policy in MABS.](./media/backup-azure-backup-sql/pg-retentionschedule.png)
In this example:
To protect SQL Server databases in Azure, first create a backup policy:
After you choose a transfer mechanism, select **Next**. 1. On the **Summary** page, review the policy details. Then select **Create group**. You can select **Close** and watch the job progress in the **Monitoring** workspace.
- ![The progress of the protection group creation](./media/backup-azure-backup-sql/pg-summary.png)
+ ![Screenshot shows the progress of the protection group creation.](./media/backup-azure-backup-sql/pg-summary.png)
## Create on-demand backup copies of a SQL Server database
A recovery point is created when the first backup occurs. Rather than waiting fo
1. In the protection group, make sure the database status is **OK**.
- ![A protection group, showing the database status](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
+ ![Screenshot shows the database status in a protection group.](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
1. Right-click the database and then select **Create recovery point**.
- ![Choose to create an online recovery point](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
+ ![Screenshot shows how to create an online recovery point.](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
1. In the drop-down menu, select **Online protection**. Then select **OK** to start the creation of a recovery point in Azure.
- ![Start creating a recovery point in Azure](./media/backup-azure-backup-sql/sqlbackup-azure.png)
+ ![Screenshot shows how to start creating a recovery point in Azure.](./media/backup-azure-backup-sql/sqlbackup-azure.png)
1. You can view the job progress in the **Monitoring** workspace.
- ![View job progress in the Monitoring console](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
+ ![Screenshot shows how to view job progress in the Monitoring console.](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
## Recover a SQL Server database from Azure
To recover a protected entity, such as a SQL Server database, from Azure:
1. Open the DPM server management console. Go to the **Recovery** workspace to see the servers that DPM backs up. Select the database (in this example, ReportServer$MSDPM2012). Select a **Recovery time** that ends with **Online**.
- ![Select a recovery point](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
+ ![Screenshot shows how to select a recovery point.](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
1. Right-click the database name and select **Recover**.
- ![Recover a database from Azure](./media/backup-azure-backup-sql/sqlbackup-recover.png)
+ ![Screenshot shows how to recover a database from Azure.](./media/backup-azure-backup-sql/sqlbackup-recover.png)
1. DPM shows the details of the recovery point. Select **Next**. To overwrite the database, select the recovery type **Recover to original instance of SQL Server**. Then select **Next**.
- ![Recover a database to its original location](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
+ ![Screenshot shows how to recover a database to its original location.](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
In this example, DPM allows the recovery of the database to another SQL Server instance or to a standalone network folder. 1. On the **Specify Recovery Options** page, you can select the recovery options. For example, you can choose **Network bandwidth usage throttling** to throttle the bandwidth that recovery uses. Then select **Next**.
To recover a protected entity, such as a SQL Server database, from Azure:
The recovery status shows the database being recovered. You can select **Close** to close the wizard and view the progress in the **Monitoring** workspace.
- ![Start the recovery process](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
+ ![Screenshot shows how to start the recovery process.](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
When the recovery is complete, the restored database is consistent with the application.
-### Next steps
+## Next steps
For more information, see [Azure Backup FAQ](backup-azure-backup-faq.yml).
cognitive-services Multi Turn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/multi-turn.md
QnA Maker supports version control by including multi-turn conversation steps in
## Next steps
-* Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
+* Learn more about contextual conversations from this [dialog sample](https://github.com/microsoft/BotBuilder-Samples/tree/main/archive/samples/csharp_dotnetcore/11.qnamaker) or learn more about [conceptual bot design for multi-turn conversations](/azure/bot-service/bot-builder-conversations).
cognitive-services Custom Speech Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md
Out of the box, speech to text utilizes a Universal Language Model as a base mod
A custom model can be used to augment the base model to improve recognition of vocabulary specific to the application by providing text data to train the model. It can also be used to improve recognition for the specific audio conditions of the application by providing audio data with reference transcriptions.
+> [!NOTE]
+> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ ## How does it work? With Custom Speech, you can upload your own data, test and train a custom model, compare accuracy between models, and deploy a model to a custom endpoint.
cognitive-services How To Custom Speech Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-deploy-model.md
zone_pivot_groups: speech-studio-cli-rest
In this article, you'll learn how to deploy an endpoint for a Custom Speech model. With the exception of [batch transcription](batch-transcription.md), you must deploy a custom endpoint to use a Custom Speech model.
+> [!NOTE]
+> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
+ You can deploy an endpoint for a base or custom model, and then [update](#change-model-and-redeploy-endpoint) the endpoint later to use a better trained model. > [!NOTE]
cognitive-services How To Custom Speech Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-train-model.md
zone_pivot_groups: speech-studio-cli-rest
In this article, you'll learn how to train a custom model to improve recognition accuracy from the Microsoft base model. The speech recognition accuracy and quality of a Custom Speech model will remain consistent, even when a new base model is released. > [!NOTE]
-> You pay to use Custom Speech models, but you are not charged for training a model.
+> You pay to use Custom Speech models, but you are not charged for training a model. Usage includes hosting of your deployed custom endpoint in addition to using the endpoint for speech-to-text. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/).
Training a model is typically an iterative process. You will first select a base model that is the starting point for a new model. You train a model with [datasets](./how-to-custom-speech-test-and-train.md) that can include text and audio, and then you test. If the recognition quality or accuracy doesn't meet your requirements, you can create a new model with additional or modified training data, and then test again.
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
-*Azure Communication Services provides developers tools to integrate Microsoft Teams Data Loss Prevention that is compatible with Microsoft Teams. For more information, go to [how to implement Data Loss Prevention (DLP] (../../../../how-to/chat-sdk/data-loss-prevention.md)
+*Azure Communication Services provides developer tools to integrate with Microsoft Teams Data Loss Prevention. For more information, see [how to implement Data Loss Prevention (DLP)](../../../how-tos/chat-sdk/data-loss-prevention.md).
## Server capabilities
communication-services Teams Interop Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pricing/teams-interop-pricing.md
Teams users participating in Teams meetings and calls generate usage on Azure Co
| Action | Tool | Price| |--|| --|
-| Send message | [Graph API](/graph/teams-licenses) | $0 or $0.00075|
+| Send message | [Graph API](/graph/teams-licenses) | $0 |
| Receive message | [Graph API](/graph/teams-licenses) | $0 or $0.00075| | Teams users participate in Teams meeting with audio, video, screen sharing, and TURN services | Azure Communication Services | $0.004 per minute |
container-registry Container Registry Tasks Reference Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-reference-yaml.md
steps:
[property]: [value] ```
+Run the [az acr run][az-acr-run] command to get the Docker version:
+
+```azurecli
+az acr run -r $ACR_NAME --cmd "docker version"
+```
+
+Add the environment variable `DOCKER_BUILDKIT=1` in the YAML file to enable BuildKit, for example to use `secret` with BuildKit, as shown in the sketch below.
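+
+The sketch assumes a task that builds from a Dockerfile at the root of the context; the image name and tag are illustrative:
+
+```yaml
+version: v1.1.0
+steps:
+  # DOCKER_BUILDKIT=1 enables BuildKit for this build step
+  - build: -t {{.Run.Registry}}/hello-world:{{.Run.ID}} .
+    env:
+      - DOCKER_BUILDKIT=1
+```
+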
+ The `build` step type supports the parameters in the following table. The `build` step type also supports all build options of the [docker build](https://docs.docker.com/engine/reference/commandline/build/) command, such as `--build-arg` to set build-time variables. + | Parameter | Description | Optional | | | -- | :-: | | `-t` &#124; `--image` | Defines the fully qualified `image:tag` of the built image.<br /><br />As images may be used for inner task validations, such as functional tests, not all images require `push` to a registry. However, to instance an image within a Task execution, the image does need a name to reference.<br /><br />Unlike `az acr build`, running ACR Tasks doesn't provide default push behavior. With ACR Tasks, the default scenario assumes the ability to build, validate, then push an image. See [push](#push) for how to optionally push built images. | Yes |
container-registry Container Registry Tasks Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md
NAME PLATFORM STATUS SOURCE REPOSITORY TRIGGERS
timertask linux Enabled BASE_IMAGE, TIMER ``` +
+Here's also a simple example of a task that runs with a source code context. The following task builds the `timertask:{{.Run.ID}}` image from the sample repository every day at 21:00 UTC.
+
+Complete the [Prerequisites](/azure/container-registry/container-registry-tutorial-quick-task#prerequisites) to set up the source code context, and then create a scheduled task with the context.
+
+```azurecli
+az acr task create \
+ --name timertask \
+ --registry $ACR_NAME \
+ --context https://github.com/$GIT_USER/acr-build-helloworld-node.git#master \
+ --file Dockerfile \
+ --image timertask:{{.Run.ID}} \
+ --git-access-token $GIT_PAT \
+ --schedule "0 21 * * *"
+```
+
+Run the [az acr task show][az-acr-task-show] command to see that the timer trigger is configured. By default, the base image update trigger is also enabled.
+
+```azurecli
+az acr task show --name timertask --registry $ACR_NAME --output table
+```
+
+Run the [az acr task run][az-acr-task-run] command to trigger the task manually:
+
+```azurecli
+az acr task run --name timertask --registry $ACR_NAME
+```
+ ## Trigger the task Trigger the task manually with [az acr task run][az-acr-task-run] to ensure that it is set up properly:
cosmos-db How To Setup Customer Managed Keys Mhsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-mhsm.md
+
+ Title: Configure customer-managed keys for your Azure Cosmos DB account with Azure Managed HSM Key Vault
+description: Learn how to configure customer-managed keys for your Azure Cosmos DB account with Azure Managed HSM Key Vault
+++ Last updated : 12/25/2022++
+ms.devlang: azurecli
++
+# Configure customer-managed keys for your Azure Cosmos DB account with Azure Managed HSM Key Vault
++
+For background on customer-managed keys, see [Configure customer-managed keys with Azure Key Vault](./how-to-setup-customer-managed-keys.md).
+
+> [!NOTE]
+> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation.
+
+## <a id="register-resource-provider"></a> Register the Azure Cosmos DB resource provider for your Azure subscription
+
+1. Sign in to the [Azure portal](https://portal.azure.com/), go to your Azure subscription, and select **Resource providers** under the **Settings** tab:
+
+ :::image type="content" source="./media/how-to-setup-cmk-mhsm/navigation-resource-providers.png" alt-text="Screenshot of the Resource providers option in the resource navigation menu.":::
+
+1. Search for the **Microsoft.DocumentDB** resource provider. Verify if the resource provider is already marked as registered. If not, choose the resource provider and select **Register**:
+
+ :::image type="content" source="media/how-to-setup-cmk-mhsm/resource-provider-registration.png" lightbox="media/how-to-setup-cmk-mhsm/resource-provider-registration.png" alt-text="Screenshot of the Register option for the Microsoft.DocumentDB resource provider.":::
+
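+Alternatively, a minimal sketch of registering the provider and checking its state from the Azure CLI:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.DocumentDB
+az provider show --namespace Microsoft.DocumentDB --query registrationState -o tsv
+```
+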
+## Configure your Azure Managed HSM Key Vault
+
+Using customer-managed keys with Azure Cosmos DB requires you to set two properties on the Azure Managed HSM instance that you plan to use to host your encryption keys: **Soft Delete** and **Purge Protection**.
+
+Because soft delete is turned on by default, only purge protection must be enabled. When creating your managed HSM, use the following CLI command:
++
+```azurecli-interactive
+$objectId = az ad signed-in-user show --query id -o tsv
+az keyvault create --hsm-name $hsmName --resource-group $rgName --location $location --enable-purge-protection true --administrators $objectId --retention-days 7
+
+```
+
+If you're using an existing Azure Managed HSM Key Vault instance, you can verify that these properties are enabled by looking at the **Properties** section with the following command:
+
+```azurecli-interactive
+az keyvault show --hsm-name $hsmName --resource-group $rgName
+
+```
+
+If purge protection isn't enabled, the following command can be used:
+
+```azurecli-interactive
+az keyvault update-hsm --enable-purge-protection true --hsm-name $hsmName --resource-group $rgName
+
+```
+
+For more information about the CLI commands available for managed HSM, see the [Azure Key Vault overview](../key-vault/general/overview.md).
+++
+## Create the encryption key and assign the corresponding roles
+
+Once the Managed HSM [has been activated](../key-vault/managed-hsm/quick-create-cli.md#activate-your-managed-hsm), create the key that the CMK account will use. To create the key, assign the "Managed HSM Crypto User" role to the administrator. To read more about how role-based access control (RBAC) works with Managed HSM, see [Managed HSM local RBAC built-in roles - Azure Key Vault | Microsoft Learn](../key-vault/managed-hsm/built-in-roles.md) and [Azure Managed HSM access control | Microsoft Learn](../key-vault/managed-hsm/access-control.md).
+
+```azurecli-interactive
+$objectId = az ad signed-in-user show --query id -o tsv
+$keyName = "Name of your key"
+az keyvault role assignment create --hsm-name $hsmName --role "Managed HSM Crypto User" --assignee $objectId --scope /keys
+az keyvault key create --hsm-name $hsmName --name $keyName --ops wrapKey unwrapKey --kty RSA-HSM --size 3072
+
+```
+Now that the key has been created, assign the corresponding role to either the Azure Cosmos DB principal ID or the Azure managed identity that provisions the account. The "Managed HSM Crypto Service Encryption User" role is used because it has the only three permissions needed to work with a CMK account: get, wrap, and unwrap. These permissions are also scoped to be useful only on the keys stored in the Azure Managed HSM.
+
+Without Azure managed identity:
+
+```azurecli-interactive
+$cosmosPrincipal = az ad sp show --id a232010e-820c-4083-83bb-3ace5fc29d0b --query id -o tsv
+az keyvault role assignment create --hsm-name $hsmName --role "Managed HSM Crypto Service Encryption User" --assignee $cosmosPrincipal --scope /keys
+$keyURI = "https://{0}.managedhsm.azure.net/keys/{1}" -f $hsmName, $keyName
+az cosmosdb create -n $cosmosName -g $rgName --key-uri $keyURI
+
+```
+With Azure managed identity:
+
+```azurecli-interactive
+$identityResourceID = az identity show -g $rgName -n $identityName --query id -o tsv
+$identityPrincipal = az identity show -g $rgName -n $identityName --query principalId -o tsv
+$defaultIdentity = "UserAssignedIdentity={0}" -f $identityResourceID
+az keyvault role assignment create --hsm-name $hsmName --role "Managed HSM Crypto Service Encryption User" --assignee $identityPrincipal --scope /keys
+$keyURI = "https://{0}.managedhsm.azure.net/keys/{1}" -f $hsmName, $keyName
+az cosmosdb create -n $cosmosName -g $rgName --key-uri $keyURI --assign-identity $identityResourceID --default-identity $defaultIdentity
+
+```
+These commands provision an Azure Cosmos DB CMK account with a key stored in an Azure Managed HSM.
+
+## Switch to a system-assigned managed identity
+
+Azure Cosmos DB supports using a system-assigned managed identity for a CMK account. For more information, see [Configure customer-managed keys for your Azure Cosmos DB account](./how-to-setup-customer-managed-keys.md).
+
+Execute the following commands to switch from default identity to system assigned managed identity:
+
+```azurecli-interactive
+az cosmosdb identity assign -n $cosmosName -g $rgName
+$principalMSIId = az cosmosdb identity show -n $cosmosName -g $rgName --query principalId -o tsv
+az keyvault role assignment create --hsm-name $hsmName --role "Managed HSM Crypto Service Encryption User" --assignee $principalMSIId --scope /keys
+az cosmosdb update --resource-group $rgName --name $cosmosName --default-identity "SystemAssignedIdentity"
+
+```
+Optionally, you can remove the original role assignment for the Azure Cosmos DB principal ID or Azure managed identity:
+
+```azurecli-interactive
+az keyvault role assignment delete --hsm-name $hsmName --role "Managed HSM Crypto Service Encryption User" --assignee $cosmosPrincipal --scope /keys
+
+```
+
+## Next steps
+
+- Learn more about [data encryption in Azure Cosmos DB](./database-encryption-at-rest.md).
+- Get an overview of [secure access to data in Azure Cosmos DB](secure-access-to-data.md).
cosmos-db Troubleshoot Changefeed Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/troubleshoot-changefeed-functions.md
Title: Troubleshoot issues when using Azure Functions trigger for Azure Cosmos DB
-description: Common issues, workarounds, and diagnostic steps, when using the Azure Functions trigger for Azure Cosmos DB
+ Title: Troubleshoot issues with the Azure Functions trigger for Azure Cosmos DB
+description: This article discusses common issues, workarounds, and diagnostic steps when you're using the Azure Functions trigger for Azure Cosmos DB
-# Diagnose and troubleshoot issues when using Azure Functions trigger for Azure Cosmos DB
+# Diagnose and troubleshoot issues with the Azure Functions trigger for Azure Cosmos DB
[!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-This article covers common issues, workarounds, and diagnostic steps, when you use the [Azure Functions trigger for Azure Cosmos DB](change-feed-functions.md).
+This article covers common issues, workarounds, and diagnostic steps when you're using the [Azure Functions trigger for Azure Cosmos DB](change-feed-functions.md).
## Dependencies
-The Azure Functions trigger and bindings for Azure Cosmos DB depend on the extension package [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) over the base Azure Functions runtime. Always keep these packages updated, as they might include fixes and new features that might address any potential issues you may encounter.
-
-This article will always refer to Azure Functions V2 whenever the runtime is mentioned, unless explicitly specified.
+The Azure Functions trigger and bindings for Azure Cosmos DB depend on the extension package [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) over the base Azure Functions runtime. Always keep these packages updated, because they include fixes and new features that can help you address any potential issues you might encounter.
## Consume the Azure Cosmos DB SDK independently
-The key functionality of the extension package is to provide support for the Azure Functions trigger and bindings for Azure Cosmos DB. It also includes the [Azure Cosmos DB .NET SDK](sdk-dotnet-core-v2.md), which is helpful if you want to interact with Azure Cosmos DB programmatically without using the trigger and bindings.
+The key functionality of the extension package is to provide support for the Azure Functions trigger and bindings for Azure Cosmos DB. The package also includes the [Azure Cosmos DB .NET SDK](sdk-dotnet-core-v2.md), which is helpful if you want to interact with Azure Cosmos DB programmatically without using the trigger and bindings.
-If you want to use the Azure Cosmos DB SDK, make sure that you don't add to your project another NuGet package reference. Instead, **let the SDK reference resolve through the Azure Functions' Extension package**. Consume the Azure Cosmos DB SDK separately from the trigger and bindings
+If you want to use the Azure Cosmos DB SDK, make sure that you don't add another NuGet package reference to your project. Instead, let the SDK reference resolve through the Azure Functions extension package. Consume the Azure Cosmos DB SDK separately from the trigger and bindings.
-Additionally, if you're manually creating your own instance of the [Azure Cosmos DB SDK client](./sdk-dotnet-core-v2.md), you should follow the pattern of having only one instance of the client [using a Singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This process avoids the potential socket issues in your operations.
+Additionally, if you're manually creating your own instance of the [Azure Cosmos DB SDK client](./sdk-dotnet-core-v2.md), you should follow the pattern of having only one instance of the client and [use a singleton pattern approach](../../azure-functions/manage-connections.md?tabs=csharp#azure-cosmos-db-clients). This approach avoids potential socket issues in your operations.
## Common scenarios and workarounds
-### Azure Function fails with error message collection doesn't exist
+### Your Azure function fails with an error message that the collection "doesn't exist"
+
+The Azure function fails with the following error message: "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') doesn't exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'."
+
+This error means that one or both of the Azure Cosmos DB containers that are required for the trigger to work either:
+* Don't exist
+* Aren't reachable by the Azure function
+
+The error text itself tells you which Azure Cosmos DB database and container the trigger is looking for, based on your configuration.
+
+To resolve this issue:
+
+1. Verify the `ConnectionStringSetting` attribute and that it references a setting that exists in your Azure function app.
+
+ The value on this attribute shouldn't be the connection string itself, but the name of the configuration setting.
+
+1. Verify that the `databaseName` and `collectionName` values exist in your Azure Cosmos DB account.
+
+ If you're using automatic value replacement (using `%settingName%` patterns), make sure that the name of the setting exists in your Azure function app.
+
+1. If you don't specify a `LeaseCollectionName/leaseCollectionName` value, the default is `leases`. Verify that such a container exists.
+
+ Optionally, you can set the `CreateLeaseCollectionIfNotExists` attribute in your trigger to `true` to automatically create it.
+
+1. Verify your [Azure Cosmos DB account's firewall configuration](../how-to-configure-firewall.md) to ensure that it's not blocking the Azure function.
+
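+For reference, here's a minimal sketch of where these settings live on a v3-style C# trigger; the database, container, and `CosmosDBConnection` setting names are placeholders:
+
+```csharp
+using System.Collections.Generic;
+using Microsoft.Azure.Documents;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Extensions.Logging;
+
+public static class ChangeFeedListener
+{
+    [FunctionName("ChangeFeedListener")]
+    public static void Run(
+        [CosmosDBTrigger(
+            databaseName: "database-name",
+            collectionName: "collection-name",
+            // Name of an app setting that contains the connection string, not the string itself
+            ConnectionStringSetting = "CosmosDBConnection",
+            LeaseCollectionName = "leases",
+            CreateLeaseCollectionIfNotExists = true)]
+        IReadOnlyList<Document> changes,
+        ILogger log)
+    {
+        log.LogInformation($"Documents modified: {changes.Count}");
+    }
+}
+```
+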
+### Your Azure function fails to start, with error message "Shared throughput collection should have a partition key"
+
+Previous versions of the Azure Cosmos DB extension didn't support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database).
+
+To resolve this issue:
-Azure Function fails with error message "Either the source collection 'collection-name' (in database 'database-name') or the lease collection 'collection2-name' (in database 'database2-name') doesn't exist. Both collections must exist before the listener starts. To automatically create the lease collection, set 'CreateLeaseCollectionIfNotExists' to 'true'"
+* Update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
-This means that either one or both of the Azure Cosmos DB containers required for the trigger to work don't exist or aren't reachable to the Azure Function. **The error itself will tell you which Azure Cosmos DB database and container is the trigger looking for** based on your configuration.
+### Your Azure function fails to start, with error message "PartitionKey must be supplied for this operation"
-1. Verify the `ConnectionStringSetting` attribute and that it **references a setting that exists in your Azure Function App**. The value on this attribute shouldn't be the Connection String itself, but the name of the Configuration Setting.
-2. Verify that the `databaseName` and `collectionName` exist in your Azure Cosmos DB account. If you're using automatic value replacement (using `%settingName%` patterns), make sure the name of the setting exists in your Azure Function App.
-3. If you don't specify a `LeaseCollectionName/leaseCollectionName`, the default is "leases". Verify that such container exists. Optionally you can set the `CreateLeaseCollectionIfNotExists` attribute in your Trigger to `true` to automatically create it.
-4. Verify your [Azure Cosmos DB account's Firewall configuration](../how-to-configure-firewall.md) to see that it's not blocking the Azure Function.
+This error means that you're currently using a partitioned lease collection with an old [extension dependency](#dependencies).
-### Azure Function fails to start with "Shared throughput collection should have a partition key"
+To resolve this issue:
-The previous versions of the Azure Cosmos DB Extension didn't support using a leases container that was created within a [shared throughput database](../set-throughput.md#set-throughput-on-a-database). To resolve this issue, update the [Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) extension to get the latest version.
+* Upgrade to the latest available version.
-### Azure Function fails to start with "PartitionKey must be supplied for this operation."
+### Your Azure function fails to start, with error message "Forbidden (403); Substatus: 5300... The given request [POST ...] can't be authorized by AAD token in data plane"
-This error means that you're currently using a partitioned lease collection with an old [extension dependency](#dependencies). Upgrade to the latest available version. If you're currently running on Azure Functions V1, you'll need to upgrade to Azure Functions V2.
+This error means that your function is attempting to [perform a non-data operation by using Azure Active Directory (Azure AD) identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You can't use `CreateLeaseContainerIfNotExists = true` when you're using Azure AD identities.
-### Azure Function fails to start with "Forbidden (403); Substatus: 5300... The given request [POST ...] can't be authorized by AAD token in data plane"
+### Your Azure function fails to start, with error message "The lease collection, if partitioned, must have partition key equal to id"
-This error means your Function is attempting to [perform a non-data operation using Azure AD identities](troubleshoot-forbidden.md#non-data-operations-are-not-allowed). You can't use `CreateLeaseContainerIfNotExists = true` when using Azure AD identities.
+This error means that your current leases container is partitioned, but the partition key path isn't `/id`.
-### Azure Function fails to start with "The lease collection, if partitioned, must have partition key equal to id."
+To resolve this issue:
-This error means that your current leases container is partitioned, but the partition key path isn't `/id`. To resolve this issue, you need to recreate the leases container with `/id` as the partition key.
+* Re-create the leases container with `/id` as the partition key.
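+For example, a minimal sketch with the Azure CLI, assuming placeholder account and database names:
+
+```azurecli-interactive
+# Create a leases container whose partition key path is /id.
+az cosmosdb sql container create \
+  --account-name mycosmosaccount \
+  --resource-group myresourcegroup \
+  --database-name mydatabase \
+  --name leases \
+  --partition-key-path "/id"
+```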
-### You see a "Value can't be null. Parameter name: o" in your Azure Functions logs when you try to Run the Trigger
+### When you try to run the trigger, you get the error message "Value can't be null. Parameter name: o" in your Azure function logs
-This issue appears if you're using the Azure portal and you try to select the **Run** button on the screen when inspecting an Azure Function that uses the trigger. The trigger doesn't require for you to select Run to start, it will automatically start when the Azure Function is deployed. If you want to check the Azure Function's log stream on the Azure portal, just go to your monitored container and insert some new items, you'll automatically see the Trigger executing.
+This issue might arise if you're using the Azure portal and you select the **Run** button when you're inspecting an Azure function that uses the trigger. The trigger doesn't require you to select **Run** to start it. It automatically starts when you deploy the function.
-### My changes take too long to be received
+To resolve this issue:
-This scenario can have multiple causes and all of them should be checked:
+* To check the function's log stream on the Azure portal, go to your monitored container and insert some new items. The trigger will run automatically.
-1. Is your Azure Function deployed in the same region as your Azure Cosmos DB account? For optimal network latency, both the Azure Function and your Azure Cosmos DB account should be colocated in the same Azure region.
-2. Are the changes happening in your Azure Cosmos DB container continuous or sporadic?
-If it's the latter, there could be some delay between the changes being stored and the Azure Function picking them up. This is because internally, when the trigger checks for changes in your Azure Cosmos DB container and finds none pending to be read, it will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
-3. Your Azure Cosmos DB container might be [rate-limited](../request-units.md).
-4. You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
-5. The speed at which your Trigger receives new changes is dictated by the speed at which you're processing them. Verify the Function's [Execution Time / Duration](../../azure-functions/analyze-telemetry-data.md), if your Function is slow that will increase the time it takes for your Trigger to get new changes. If you see a recent increase in Duration, there could be a recent code change that might affect it. If the speed at which you're receiving operations on your Azure Cosmos DB container is faster than the speed of your Trigger, you'll keep lagging behind. You might want to investigate in the Function's code, what is the most time consuming operation and how to optimize it.
+### Your changes are taking too long to be received
-### Some changes are repeated in my Trigger
+This scenario can have multiple causes. Consider trying any or all of the following solutions:
-The concept of a "change" is an operation on a document. The most common scenarios where events for the same document are received are:
-* The account is using Eventual consistency. While consuming the change feed in an Eventual consistency level, there could be duplicate events in-between subsequent change feed read operations (the last event of one read operation appears as the first of the next).
-* The document is being updated. The Change Feed can contain multiple operations for the same documents, if that document is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If they don't match, these are different changes over the same document.
-* If you're identifying documents just by `id`, remember that the unique identifier for a document is the `id` and its partition key (there can be two documents with the same `id` but different partition key).
+* Are your Azure function and your Azure Cosmos DB account deployed in separate regions? For optimal network latency, your Azure function and your Azure Cosmos DB account should be colocated in the same Azure region.
-### Some changes are missing in my Trigger
+* Are the changes that are happening in your Azure Cosmos DB container continuous or sporadic?
-If you find that some of the changes that happened in your Azure Cosmos DB container aren't being picked up by the Azure Function or some changes are missing in the destination when you're copying them, follow the below steps.
+ If they're sporadic, there could be some delay between the changes being stored and the Azure function picking them up. This is because when the trigger checks internally for changes in your Azure Cosmos DB container and finds no changes waiting to be read, the trigger sleeps for a configurable amount of time (5 seconds, by default) before it checks for new changes. It does this to avoid high request unit (RU) consumption. You can configure the sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger. The value is expected to be in milliseconds.
-When your Azure Function receives the changes, it often processes them, and could optionally, send the result to another destination. When you're investigating missing changes, make sure you **measure which changes are being received at the ingestion point** (when the Azure Function starts), not on the destination.
+* Your Azure Cosmos DB container might be [rate-limited](../request-units.md).
-If some changes are missing on the destination, this could mean that is some error happening during the Azure Function execution after the changes were received.
+* You can use the `PreferredLocations` attribute in your trigger to specify a comma-separated list of Azure regions to define a custom preferred connection order.
-In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes, to detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry). Alternatively, you can configure Azure Functions [retry policies](../../azure-functions/functions-bindings-error-pages.md#retries).
+* The speed at which your trigger receives new changes is dictated by the speed at which you're processing them. Verify the function's [execution time, or duration](../../azure-functions/analyze-telemetry-data.md). If your function is slow, that will increase the time it takes for the trigger to get new changes. If you see a recent increase in duration, a recent code change might be affecting it. If the speed at which you're receiving operations on your Azure Cosmos DB container is faster than the speed of your trigger, it will keep lagging behind. You might want to investigate the function's code to determine the most time-consuming operation and how to optimize it.
-> [!NOTE]
-> The Azure Functions trigger for Azure Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during your code execution. This means that the reason that the changes did not arrive at the destination might be because you are failing to process them.
+### Some changes are repeated in my trigger
-If the destination is another Azure Cosmos DB container and you're performing Upsert operations to copy the items, **verify that the Partition Key Definition on both the monitored and destination container are the same**. Upsert operations could be saving multiple source items as one in the destination because of this configuration difference.
+The concept of a *change* is an operation on a document. The most common scenarios where events for the same document are received are:
-If you find that some changes weren't received at all by your trigger, the most common scenario is that there's **another Azure Function running**. It could be another Azure Function deployed in Azure or an Azure Function running locally on a developer's machine that has **exactly the same configuration** (same monitored and lease containers), and this Azure Function is stealing a subset of the changes you would expect your Azure Function to process.
+* The account is using the *eventual consistency* model. While it's consuming the change feed at an eventual consistency level, there could be duplicate events in-between subsequent change feed read operations. That is, the *last* event of one read operation might appear as the *first* event of the next.
-Additionally, the scenario can be validated, if you know how many Azure Function App instances you have running. If you inspect your leases container and count the number of lease items within, the distinct values of the `Owner` property in them should be equal to the number of instances of your Function App. If there are more owners than the known Azure Function App instances, it means that these extra owners are the ones "stealing" the changes.
+* The document is being updated. The change feed can contain multiple operations for the same documents. If the document is receiving updates, it can pick up multiple events (one for each update). One easy way to distinguish among different operations for the same document is to track the `_lsn` [property for each change](../change-feed.md#change-feed-and-_etag-_lsn-or-_ts). If the properties don't match, the changes are different.
-One easy way to work around this situation, is to apply a `LeaseCollectionPrefix/leaseCollectionPrefix` to your Function with a new/different value or, alternatively, test with a new leases container.
+* If you're identifying documents only by `id`, remember that the unique identifier for a document is the `id` and its partition key. (Two documents can have the same `id` but a different partition key.)
+
+### Some changes are missing in your trigger
+
+You might find that some of the changes that occurred in your Azure Cosmos DB container aren't being picked up by the Azure function. Or some changes are missing at the destination when you're copying them. If so, try the solutions in this section.
+
+* When your Azure function receives the changes, it often processes them and could, optionally, send the result to another destination. When you're investigating missing changes, make sure that you measure which changes are being received at the ingestion point (that is, when the Azure function starts), not at the destination.
+
+* If some changes are missing on the destination, this could mean that some error is happening during the Azure function execution after the changes were received.
+
+ In this scenario, the best course of action is to add `try/catch` blocks in your code and inside the loops that might be processing the changes. Adding it will help you detect any failure for a particular subset of items and handle them accordingly (send them to another storage for further analysis or retry). Alternatively, you can configure Azure Functions [retry policies](../../azure-functions/functions-bindings-error-pages.md#retries).
+
+ > [!NOTE]
+ > The Azure Functions trigger for Azure Cosmos DB, by default, won't retry a batch of changes if there was an unhandled exception during the code execution. This means that the reason that the changes didn't arrive at the destination might be because you've failed to process them.
+
+* If the destination is another Azure Cosmos DB container and you're performing upsert operations to copy the items, verify that the partition key definition on both the monitored and destination container are the same. Upsert operations could be saving multiple source items as one at the destination because of this configuration difference.
+
+* If you find that the trigger didn't receive some changes, the most common scenario is that another Azure function is running. The other function might be deployed in Azure or a function might be running locally on a developer's machine with exactly the same configuration (that is, the same monitored and lease containers). If so, this function might be stealing a subset of the changes that you would expect your Azure function to process.
+
+Additionally, you can validate this scenario if you know how many Azure function app instances you have running. If you inspect your leases container and count the number of lease items within it, the distinct values of the `Owner` property in them should be equal to the number of instances of your function app. If there are more owners than known Azure function app instances, it means that these extra owners are the ones "stealing" the changes.
+
+One easy way to work around this situation is to apply a `LeaseCollectionPrefix/leaseCollectionPrefix` to your function with a new or different value or, alternatively, to test with a new leases container.
+
+### You need to restart and reprocess all the items in your container from the beginning
-### Need to restart and reprocess all the items in my container from the beginning
To reprocess all the items in a container from the beginning:
+
1. Stop your Azure function if it's currently running.
-1. Delete the documents in the lease collection (or delete and re-create the lease collection so it's empty)
-1. Set the [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) CosmosDBTrigger attribute in your function to true.
+
+1. Delete the documents in the lease collection (or delete and re-create the lease collection so that it's empty).
+
+1. Set the [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) CosmosDBTrigger attribute in your function to `true`.
+
1. Restart the Azure function. It will now read and process all changes from the beginning.
-Setting [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) to true will tell the Azure function to start reading changes from the beginning of the history of the collection instead of the current time. This only works when there are no already created leases (that is, documents in the leases collection). Setting this property to true when there are leases already created has no effect; in this scenario, when a function is stopped and restarted, it will begin reading from the last checkpoint, as defined in the leases collection. To reprocess from the beginning, follow the above steps 1-4.
+Setting [StartFromBeginning](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) to `true` tells the Azure function to start reading changes from the beginning of the history of the collection instead of the current time.
+
+This solution works only when there are no already-created leases (that is, documents in the leases collection).
+
+Setting this property to `true` has no effect when there are leases already created. In this scenario, when a function is stopped and restarted, it begins reading from the last checkpoint, as defined in the leases collection.
-### Binding can only be done with IReadOnlyList\<Document> or JArray
+### Error: Binding can be done only with IReadOnlyList\<Document> or JArray
-This error happens if your Azure Functions project (or any referenced project) contains a manual NuGet reference to the Azure Cosmos DB SDK with a different version than the one provided by the [Azure Functions Azure Cosmos DB Extension](./troubleshoot-changefeed-functions.md#dependencies).
+This error happens if your Azure Functions project (or any referenced project) contains a manual NuGet reference to the Azure Cosmos DB SDK with a version that's different from the one provided by the [Azure Cosmos DB extension for Azure Functions](./troubleshoot-changefeed-functions.md#dependencies).
-To work around this situation, remove the manual NuGet reference that was added and let the Azure Cosmos DB SDK reference resolve through the Azure Functions Azure Cosmos DB Extension package.
+To work around this situation, remove the manual NuGet reference that was added, and let the Azure Cosmos DB SDK reference resolve through the Azure Cosmos DB extension for Azure Functions package.
-### Changing Azure Function's polling interval for the detecting changes
+### Change the Azure function's polling interval for detecting changes
-As explained earlier for [My changes take too long to be received](#my-changes-take-too-long-to-be-received), Azure function will sleep for a configurable amount of time (5 seconds, by default) before checking for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) of your trigger (the value is expected to be in milliseconds).
+As explained earlier in the [Your changes are taking too long to be received](#your-changes-are-taking-too-long-to-be-received) section, your Azure function will sleep for a configurable amount of time (5 seconds, by default) before it checks for new changes (to avoid high RU consumption). You can configure this sleep time through the `FeedPollDelay/feedPollDelay` setting in the [trigger configuration](../../azure-functions/functions-bindings-cosmosdb-v2-trigger.md#configuration) (the value is expected to be in milliseconds).
## Next steps
-* [Enable monitoring for your Azure Functions](../../azure-functions/functions-monitoring.md)
-* [Azure Cosmos DB .NET SDK Troubleshooting](./troubleshoot-dotnet-sdk.md)
+* [Enable monitoring for your Azure function](../../azure-functions/functions-monitoring.md)
+* [Troubleshoot the Azure Cosmos DB .NET SDK](./troubleshoot-dotnet-sdk.md)
cosmos-db Self Serve Minimum Tls Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/self-serve-minimum-tls-enforcement.md
+
+ Title: Self-serve minimum TLS version enforcement in Azure Cosmos DB
+
+description: Learn how to self-serve minimum TLS version enforcement for your Azure Cosmos DB account to improve your security posture.
+ Last updated : 01/18/2023
+# Self-serve minimum TLS version enforcement in Azure Cosmos DB
++
+This article discusses how to enforce a minimum version of the TLS protocol for your Cosmos DB account, using a self-service API.
+
+## How minimum TLS version enforcement works in Azure Cosmos DB
+
+Because of the multi-tenant nature of Cosmos DB, the service is required to meet the access and security needs of every user. To achieve this, **Cosmos DB enforces minimum TLS protocols at the application layer**, and not at lower layers in the network stack where TLS operates. This enforcement occurs on any authenticated request to a specific database account, according to the settings set on that account by the customer.
+
+The **minimum service-wide accepted version is TLS 1.0**. This can be changed on a per-account basis, as discussed in the following section.
+
+## How to set the minimum TLS version for your Cosmos DB database account
+
+Starting with the 2022-11-15 API version of the Azure Cosmos DB Resource Provider API, a new property is exposed for every Cosmos DB database account, called `minimalTlsVersion`. It accepts one of the following values:
+- `Tls` for setting the minimum version to TLS 1.0.
+- `Tls11` for setting the minimum version to TLS 1.1.
+- `Tls12` for setting the minimum version to TLS 1.2.
+
+The **default value for new and existing accounts is `Tls`**.
+
+> [!IMPORTANT]
+> Starting on April 1, 2023, the **default value for new accounts will be switched to `Tls12`**.
+
+### Set via Azure CLI
+
+To set using Azure CLI, use the command below:
+
+```azurecli-interactive
+subId=$(az account show --query id -o tsv)
+rg="myresourcegroup"
+dbName="mycosmosdbaccount"
+minimalTlsVersion="Tls12"
+az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.DocumentDB/databaseAccounts/$dbName?api-version=2022-11-15" --method PATCH --body "{ 'properties': { 'minimalTlsVersion': '$minimalTlsVersion' } }" --headers "Content-Type=application/json"
+```
+
+### Set via Azure PowerShell
+
+To set using Azure PowerShell, use the command below:
+
+```azurepowershell-interactive
+$minimalTlsVersion = 'Tls12'
+$patchParameters = @{
+ ResourceGroupName = 'myresourcegroup'
+ Name = 'mycosmosdbaccount'
+ ResourceProviderName = 'Microsoft.DocumentDB'
+ ResourceType = 'databaseaccounts'
+ ApiVersion = '2022-11-15'
+ Payload = "{ 'properties': {
+ 'minimalTlsVersion': '$minimalTlsVersion'
+ } }"
+ Method = 'PATCH'
+}
+Invoke-AzRestMethod @patchParameters
+```
+
+### Set via ARM template
+
+To set this property using an ARM template, update your existing template or export a new template for your current deployment, then add `"minimalTlsVersion"` to the properties for the `databaseAccounts` resources, with the desired minimum TLS version value. Below is a basic example of an Azure Resource Manager template with this property setting, using a parameter.
+
+```json
+{
+  "type": "Microsoft.DocumentDB/databaseAccounts",
+  "name": "mycosmosdbaccount",
+  "apiVersion": "2022-11-15",
+  "location": "[parameters('location')]",
+  "kind": "GlobalDocumentDB",
+  "properties": {
+    "consistencyPolicy": {
+      "defaultConsistencyLevel": "[parameters('defaultConsistencyLevel')]",
+      "maxStalenessPrefix": 1,
+      "maxIntervalInSeconds": 5
+    },
+    "locations": [
+      {
+        "locationName": "[parameters('location')]",
+        "failoverPriority": 0
+      }
+    ],
+    "databaseAccountOfferType": "Standard",
+    "minimalTlsVersion": "[parameters('minimalTlsVersion')]"
+  }
+}
+```
+
+> [!IMPORTANT]
+> Make sure you include the other properties for your account and child resources when redeploying with this property. Do not deploy this template as is or it will reset all of your account properties.
+
+### For new accounts
+
+You can create accounts with the `minimalTlsVersion` property set by using the ARM template above, or by changing the PATCH method to a PUT on either Azure CLI or Azure PowerShell. Make sure to include the other properties for your account.
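+For example, the following minimal sketch creates an account with the property set by sending a PUT request through `az rest`. The payload assumes a single-region account; in a real request, include the rest of your account's properties:
+
+```azurecli-interactive
+subId=$(az account show --query id -o tsv)
+rg="myresourcegroup"
+dbName="mycosmosdbaccount"
+az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.DocumentDB/databaseAccounts/$dbName?api-version=2022-11-15" \
+  --method PUT \
+  --headers "Content-Type=application/json" \
+  --body "{ 'location': 'eastus', 'properties': { 'databaseAccountOfferType': 'Standard', 'locations': [ { 'locationName': 'eastus', 'failoverPriority': 0 } ], 'minimalTlsVersion': 'Tls12' } }"
+```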
+
+> [!IMPORTANT]
+> If the account exists and the `minimalTlsVersion` property is omitted in a PUT request, then the property will reset to its default value, starting with the 2022-11-15 API version.
+
+## How to verify minimum TLS version enforcement
+
+Because Cosmos DB enforces the minimum TLS version at the application layer, conventional TLS scanners that check whether handshakes are accepted by the service for a specific TLS version are unreliable for testing enforcement in Cosmos DB. To verify enforcement, use the [official open-source cosmos-tls-scanner tool](https://github.com/Azure/cosmos-tls-scanner/).
+
+You can also get the current value of the `minimalTlsVersion` property by using Azure CLI or Azure PowerShell.
+
+### Get current value via Azure CLI
+
+To get the current value of the property using Azure CLI, run the command below:
+
+```azurecli-interactive
+subId=$(az account show --query id -o tsv)
+rg="myresourcegroup"
+dbName="mycosmosdbaccount"
+az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.DocumentDB/databaseAccounts/$dbName?api-version=2022-11-15" --method GET
+```
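+Because `az rest` returns the full account resource, you can optionally extract just the property with a JMESPath query (reusing the variables from the command above):
+
+```azurecli-interactive
+az rest --uri "/subscriptions/$subId/resourceGroups/$rg/providers/Microsoft.DocumentDB/databaseAccounts/$dbName?api-version=2022-11-15" --method GET --query properties.minimalTlsVersion --output tsv
+```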
+
+### Get current value via Azure PowerShell
+
+To get the current value of the property using Azure PowerShell, run the command below:
+
+```azurepowershell-interactive
+$getParameters = @{
+ ResourceGroupName = 'myresourcegroup'
+ Name = 'mycosmosdbaccount'
+ ResourceProviderName = 'Microsoft.DocumentDB'
+ ResourceType = 'databaseaccounts'
+ ApiVersion = '2022-11-15'
+ Method = 'GET'
+}
+Invoke-AzRestMethod @getParameters
+```
+
+## Next steps
+
+For more information about security in Azure Cosmos DB, see [Overview of database security in Azure Cosmos DB](./database-security.md).
cosmos-db Troubleshoot Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/troubleshoot-cmk.md
+
+ Title: Cross Tenant CMK Troubleshooting Guide
+description: Cross Tenant CMK Troubleshooting Guide
+ Last updated : 12/25/2022
+ms.devlang: azurecli
++
+# Cross Tenant CMK Troubleshooting Guide
++
+This article helps you troubleshoot cross-tenant customer-managed key (CMK) errors.
+
+**Public documentation links**
+
+- [Cosmos DB customer-managed key documentation](./how-to-setup-customer-managed-keys.md)
+- [Cosmos DB managed identity documentation](./how-to-setup-managed-identity.md)
+
+**Cosmos DB account is in revoke state**
+
+- Was the Key Vault deleted?
+  - If yes, recover the key vault from the recycle bin.
+- Is the Key Vault key disabled?
+  - If yes, re-enable the key.
+- Check that Key Vault -> Networking -> Firewalls and virtual networks is set to either "Allow public access from all networks" or "Allow public access from specific virtual networks and IP addresses". If the latter is selected, check that the firewall allow-lists are configured correctly and that "Allow trusted Microsoft services to bypass this firewall" is selected.
+- Check whether the Key Vault access policy is missing any of the Wrap/Unwrap/Get permissions, following the [Cosmos DB customer-managed key documentation](./how-to-setup-customer-managed-keys.md#add-an-access-policy).
+  - If yes, regrant the access (see the sketch after this list).
+- Check whether the multi-tenant app used in the default identity has been mistakenly deleted.
+  - If yes, follow the [restore application documentation](..//active-directory/manage-apps/restore-application.md) to restore the application.
+- Check whether the UserAssigned identity used in the default identity has been mistakenly deleted.
+  - If yes, note that a UserAssigned identity isn't recoverable once deleted. The customer needs to create a new UserAssigned identity for the db account, follow the same configuration steps used during provisioning (such as setting the FederatedCredential with the multi-tenant app), and finally update the db account's default identity to the new UserAssigned identity.
+  - Example: ``az cosmosdb update --resource-group <rg> --name <dbname> --default-identity "UserAssignedIdentity=<New_UA_Resource_ID>&FederatedClientId=00000000-0000-0000-0000-000000000000"``
+  - The customer also needs to remove the old UserAssigned identity, which was deleted in Azure, from the Cosmos DB account. Sample command: ``az cosmosdb identity remove --resource-group <rg> --name <dbname> --identities <OLD_UA_Resource_ID>``
+  - Wait up to one hour for the account to recover from the revoke state.
+  - Try to access the Cosmos DB data plane by making an SDK/REST API request or by using the Azure portal's Data Explorer to view a document.
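+For the regrant step, a minimal sketch with the Azure CLI; the vault name and the identity's principal object ID are placeholders:
+
+```azurecli-interactive
+# Regrant the key permissions that the account's default identity needs on the vault.
+az keyvault set-policy \
+  --name mykeyvault \
+  --object-id 00000000-0000-0000-0000-000000000000 \
+  --key-permissions get wrapKey unwrapKey
+```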
+
+## 1. Basic Control Plane Create/Update Error Cases
+
+___________________________________
+**1.1**
+___________________________________
+**Scenario**
+
+Customer creates a CMK db account via Azure CLI/ARM Template with Key Vault's Firewall configuration "Allow trusted Microsoft services to bypass this firewall" unchecked.
+
+**Error Message**
+
+``Database account creation failed. Operation Id: 00000000-0000-0000-0000-000000000000, Error: {\"error\":{\"code\":\"Forbidden\",\"message\":\"Client address is not authorized and caller was ignored **because bypass is set to None** \\r\\nClient address: xx.xx.xx.xx\\r\\nCaller: name=Unknown/unknown;appid=00000000-0000-0000-0000-000000000000;oid=00000000-0000-0000-0000-000000000000\\r\\nVault: mykeyvault;location=eastus\",\"innererror\":{\"code\":\" **ForbiddenByFirewall** \"}}}\r\nActivityId: 00000000-0000-0000-0000-000000000000, ``
+
+**Status Code**
+
+Forbidden (403)
+
+**Root Cause**
+
+Key Vault isn't correctly configured to allow VNet/trusted service bypass. The provisioning process detects that it can't access the key vault and therefore throws the error.
+
+**Mitigation**
+
+Go to Azure portal -> Key Vault -> Networking -> Firewalls and virtual networks. Ensure that "Allow public access from specific virtual networks and IP addresses" is selected and that "Allow trusted Microsoft services to bypass this firewall" is checked, and then select Save.
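+Alternatively, a minimal sketch of the same change through the Azure CLI, assuming a placeholder vault name:
+
+```azurecli-interactive
+# Keep the firewall on, but let trusted Microsoft services bypass it.
+az keyvault update \
+  --name mykeyvault \
+  --default-action Deny \
+  --bypass AzureServices
+```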
+++
+___________________________________
+
+**1.2**
+___________________________________
+**Scenario**
+
+1. A customer attempts to create a CMK account with a Key Vault key URI that doesn't exist in the tenant.
+2. A customer tries to create a cross-tenant CMK account with the db account and key vault in different tenants, but forgets to include "&FederatedClientId=\<00000000-0000-0000-0000-000000000000\>" in the default identity.
+
+For example: *``az cosmosdb create -n mydb -g myresourcegroup --key-uri "https://myvault.vault.azure.net/keys/mykey" --assign-identity "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity" --default-identity "UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity"``*
+
+The "&FederatedClientId=\<00000000-0000-0000-0000-000000000000\>" is missing in the default identity.
+
+3. A customer tries creating a cross-tenant CMK account with the db account and key vault in different tenants, with "&FederatedClientId=\<00000000-0000-0000-0000-000000000000\>" in the default identity. However, the multi-tenant app doesn't exist or has been deleted.
++
+**Error Message**
+
+``Database account creation failed. Operation Id: 00000000-0000-0000-0000-000000000000, Error: Error contacting the Azure Key Vault resource. Please try again.``
+
+**Status Code**
+
+ServiceUnavailable(503)
++
+**Root Cause**
+
+In Scenario 1: Expected.
+
+In Scenario 2: omitting "&FederatedClientId=<00000000-0000-0000-0000-000000000000>" makes the system assume that the key vault is in the same tenant as the db account. The customer might not have a key vault with the same name in that tenant, which results in this error.
+
+In Scenario 3: Expected, because the multi-tenant app doesn't exist or has been deleted.
++
+**Mitigation**
+
+For Scenario 1: The customer needs to follow the "Configure your Azure Key Vault instance" section in the [setup customer managed keys documentation](./how-to-setup-customer-managed-keys.md) to retrieve the Key Vault key URI.
+
+For Scenario 2: The customer needs to add the missing "&FederatedClientId=<00000000-0000-0000-0000-000000000000>" to the default identity (see the sketch below).
+
+For Scenario 3: The customer needs to use the correct FederatedClientId, or restore the multi-tenant app by using the [restore application documentation](..//active-directory/manage-apps/restore-application.md).
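+For Scenario 2, a minimal sketch of the corrected create command; the resource IDs are placeholders, and the key difference is the `&FederatedClientId=` suffix on the default identity:
+
+```azurecli-interactive
+az cosmosdb create -n mydb -g myresourcegroup \
+  --key-uri "https://myvault.vault.azure.net/keys/mykey" \
+  --assign-identity "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity" \
+  --default-identity "UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity&FederatedClientId=00000000-0000-0000-0000-000000000000"
+```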
++
+___________________________________
+**1.3**
+___________________________________
+
+**Scenario**
+
+A customer tries creating or updating a CMK account with an invalid Key Vault key URI.
+
+**Error Message**
+
+``Provided KeyVaultKeyUri http://mykeyvault.vault1.azure2.net3/keys4/mykey is Invalid.
+ActivityId: 00000000-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/2.14.0 ``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+The input Key Vault key URI is invalid.
+
+**Mitigation**
+
+The customer should follow the [setup customer managed keys documentation](./how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault) to retrieve the correct Key Vault key URI from the portal, or use the CLI sketch below.
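+A minimal sketch; the vault and key names are placeholders:
+
+```azurecli-interactive
+# The "kid" property of the key is the Key Vault key URI.
+az keyvault key show \
+  --vault-name myvault \
+  --name mykey \
+  --query key.kid --output tsv
+```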
+
+___________________________________
+**1.4**
+___________________________________
+
+**Scenario**
+
+1. A customer is trying to create a CMK account with "keyVaultKeyUri" while using an API version earlier than 2019-12-12.
+2. A customer updates the "keyVaultKeyUri" of a CMK account while using an API version earlier than 2019-12-12.
+3. A customer tries updating "keyVaultKeyUri" on an existing CMK account from a non-null value to null (that is, converting a CMK account into a non-CMK account).
++
+**Error Message**
+
+``Updating KeyVaultKeyUri is not supported
+ActivityId: 00000000-0000-0000-0000-000000000000``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+1. The customer is using an API version earlier than 2019-12-12 when updating the KeyVaultKeyUri.
+2. Converting a CMK account into a non-CMK account is currently not supported.
++
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**1.5**
+___________________________________
+
+**Scenario**
+
+A customer is trying to update a Cosmos DB CMK account that's in the revoke state. Note that the customer's update is neither updating the default identity nor assigning/unassigning a managed service identity.
+
+**Error Message**
+
+``No Update is allowed on Database Account with Customer Managed Key in Revoked Status
+ActivityId: 00000000-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/2.14.0``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+While the account is in the revoke state, only updating the default identity or assigning/unassigning a managed service identity is allowed. Other updates are forbidden until the db account recovers from the revoke state.
+
+**Mitigation**
+
+The customer should follow the "Key Vault Revoke State Troubleshooting guide" to regrant the key vault access.
+
+___________________________________
+**1.6**
+___________________________________
+
+**Scenario**
+
+A customer is trying to create a CMK db account with SystemAssigned as the default identity.
+
+Sample Command:
+*``az cosmosdb create -n mydb -g myresourcegroup --key-uri "https://myvault.vault.azure.net/keys/mykey" --assign-identity "[system]" --default-identity "SystemAssignedIdentity&FederatedClientId=00000000-0000-0000-0000-000000000000" --backup-policy-type Continuous``*
++
+**Error Message**
+
+``Database account creation failed. Operation Id: 00000000-0000-0000-0000-000000000000, Error: Updating default identity not allowed. Cannot set SystemAssignedIdentity as the default identity during provision.
+ActivityId: 00000000-0000-0000-0000-000000000000``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+SystemAssigned identity as the default identity is currently not supported during db account creation. It's supported only in the scenario where the customer updates the default identity to SystemAssignedIdentity on an existing account, and the key vault and the db account are in the same tenant.
+
+**Mitigation**
+
+If a customer wants to create a CMK account **with Continuous backup / Synapse Link / Full fidelity change feed / Materialized view enabled**, then UserAssigned identity is the only supported default identity right now. Note that SystemAssignedIdentity as the default identity is supported only in the scenario where the customer updates the default identity to SystemAssignedIdentity on an existing account, and the key vault and the db account must be in the same tenant.
+
+For a sample command that creates the db account by using a UserAssigned identity, refer to "Provision a Cross Tenant CMK account via UserAssigned Identity" or to the sketch below.
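+A minimal sketch, with placeholder resource IDs (the same command as in scenario 3.2, plus the continuous backup flag):
+
+```azurecli-interactive
+az cosmosdb create -n mydb -g myresourcegroup \
+  --key-uri "https://myvault.vault.azure.net/keys/mykey" \
+  --assign-identity "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity" \
+  --default-identity "UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity&FederatedClientId=00000000-0000-0000-0000-000000000000" \
+  --backup-policy-type Continuous
+```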
++
+___________________________________
+**1.7**
+___________________________________
+
+**Scenario**
+
+A customer is trying to update the KeyVaultKeyUri of an existing cross-tenant CMK db account to a new key vault that's in a different tenant from the old key vault.
+
+**Error Message**
+
+``The tenant for the new Key Vault 00000000-0000-0000-0000-000000000000 does not match the one in the old Key Vault 00000000-0000-0000-0000-000000000001. New Key Vaults must be on the same tenant as the old ones.``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Once the default identity is set to cross-tenant with "FederatedClientId", updating the Key Vault key URI is allowed only to a new key vault that's in the same tenant as the old one. Updating the key vault key URI to a different tenant is disallowed for security reasons.
+
+**Mitigation**
+
+N/A, not supported
++
+___________________________________
+**1.8**
+___________________________________
+
+**Scenario**
+
+A customer tries changing the default identity from "UserAssignedIdentity=<UA_Resource_ID>&FederatedClientId=00000000-0000-0000-0000-000000000000" to "SystemAssignedIdentity&FederatedClientId=00000000-0000-0000-0000-000000000000" on an existing cross-tenant CMK account.
+
+**Error Message**
+
+``Cross-tenant CMK is not supported with System Assigned identities as Default identities. Please use a User Assigned Identity instead.``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+SystemAssigned identity isn't supported in the cross-tenant CMK scenario right now.
+
+**Mitigation**
+
+N/A, not supported
+
+___________________________________
+**1.9**
+___________________________________
+
+**Scenario**
+
+- A customer is trying to provision a cross-tenant CMK account with FirstPartyIdentity as the default identity.
+
+- A customer tries changing the default identity from "UserAssignedIdentity=<UA_Resource_ID>&FederatedClientId=00000000-0000-0000-0000-000000000000" to "FirstPartyIdentity" on an existing cross-tenant CMK account.
++
+**Error Message**
+
+``Cross-tenant CMK is not supported with First Party identities as Default identities. Please use a User Assigned identity instead``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+First Party identity isn't supported in the cross-tenant CMK scenario right now.
+
+**Mitigation**
+
+N/A, not supported
+
+___________________________________
+
+## 2. Data Plane Error Cases
+___________________________________
+**2.1**
+___________________________________
+
+**Scenario**
+
+A customer is trying to query SQL documents/table entities/graph vertices through the Cosmos DB data plane (via the SDK, DocumentDBStudio, or the portal's Data Explorer) while the db account is in the revoke state.
+
+**Error Message**
+
+``{"Errors":["Request is blocked due to Customer Managed Key not being accessible."]} ActivityId: 00000000-0000-0000-0000-000000000000, Request URI: /apps/00000000-0000-0000-0000-000000000000/services/00000000-0000-0000-0000-000000000000/partitions/00000000-0000-0000-0000-000000000000/replicas/1234567p/, RequestStats: Microsoft.Azure.Cosmos.Tracing.TraceData.ClientSideRequestStatisticsTraceDatum, SDK``
+
+**Status Code**
+
+Forbidden (403)
+
+**Root Cause**
+
+There are multiple reasons why the db account might be in the revoke state. Refer to the six checks in the "Key Vault Revoke State Troubleshooting guide".
+
+**Mitigation**
+
+The customer should follow the "Key Vault Revoke State Troubleshooting guide" to recover from the revoke state.
+
+___________________________________
+**2.2**
+___________________________________
+
+**Scenario**
+
+A customer is trying to query a Cassandra row through the Cosmos DB data plane (via the SDK, DocumentDBStudio, or the portal's Data Explorer) while the db account is in the revoke state.
+
+**Error Message**
+
+``{"readyState":4,"responseText":"","status":401,"statusText":"error"}``
+
+**Status Code**
+
+Unauthorized (401)
+
+**Root Cause**
+
+There are multiple reasons why the db account might be in the revoke state. Refer to the six checks in the "Key Vault Revoke State Troubleshooting guide".
+
+**Mitigation**
+
+The customer should follow the "Key Vault Revoke State Troubleshooting guide" to recover from the revoke state.
+
+___________________________________
+**2.3**
+___________________________________
+
+**Scenario**
+
+A customer is trying to query the Mongo API through the Cosmos DB data plane (via the SDK, DocumentDBStudio, or the portal's Data Explorer) while the db account is in the revoke state.
+
+**Error Message**
+
+``Error querying documents: An exception occurred while opening a connection to the server., Payload: {<redacted>}``
+
+**Status Code**
+
+Internal Server Error (500)
+
+**Root Cause**
+
+There are multiple reasons why the db account might be in the revoke state. Refer to the six checks in the "Key Vault Revoke State Troubleshooting guide".
+
+**Mitigation**
+
+The customer should follow the "Key Vault Revoke State Troubleshooting guide" to recover from the revoke state.
+
+___________________________________
+**2.4**
+___________________________________
+
+**Scenario**
+
+A customer is trying to create or modify collections/documents (the naming depends on the API) through the Cosmos DB data plane (via the SDK, DocumentDBStudio, or the portal's Data Explorer) while the db account is in the revoke state.
+
+**Error Message**
+
+``Request timed out.``
+
+**Status Code**
+
+Request Timeout (408)
+
+**Root Cause**
+
+There are multiple reasons why the db account might be in the revoke state. Refer to the six checks in the "Key Vault Revoke State Troubleshooting guide".
+
+**Mitigation**
+
+The customer should follow the "Key Vault Revoke State Troubleshooting guide" to recover from the revoke state.
+
+___________________________________
+
+## 3. Cosmos DB Cross Tenant CMK with Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized View
+
+___________________________________
+**3.1**
+___________________________________
+
+**Scenario**
+
+1. Customers try creating a db account with both Continuous backup mode and multiple write locations enabled.
+2. Customers try enabling Continuous backup mode on an existing db account that has multiple write locations enabled.
+3. Customers try enabling multiple write locations on an existing db account that has Continuous backup mode enabled.
++
+**Error Message**
+
+``Continuous backup mode and multiple write locations cannot be enabled together for a global database account
+ActivityId: 00000000-0000-0000-0000-000000000000``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup mode and multiple write locations can't be enabled together.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.2**
+___________________________________
+
+**Scenario**
+
+1. Customer creates a CMK db account with Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized view with First Party Identity as the default identity.
+2. Customer enables Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized view on an existing account with First Party as the default identity.
+
+Sample Command:
+*``az cosmosdb create -n mydb -g myresourcegroup --key-uri "https://myvault.vault.azure.net/keys/mykey" --assign-identity "[system]" --default-identity "FirstPartyIdentity" --backup-policy-type Continuous``*
++
+**Error Message**
+
+``Setting Non-FPI default identity is required for dedicated storage account features. Please set a valid System or User Assigned Identity to default and retry the request.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+The Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized view features require a dedicated storage account, which doesn't support FirstPartyIdentity as the default identity.
+
+**Mitigation**
+
+If a customer wants to create a CMK account with **Continuous backup / Synapse Link / Full fidelity change feed / Materialized view enabled**, then UserAssigned identity is the only supported default identity right now. Note that SystemAssignedIdentity as the default identity is supported only in the scenario where the customer updates the default identity to SystemAssignedIdentity on an existing account, and the key vault and the db account must be in the same tenant.
+
+Sample command for creating a db account by using a UserAssigned identity:
+``az cosmosdb create -n mydb -g myresourcegroup --key-uri "https://myvault.vault.azure.net/keys/mykey" --assign-identity "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity" --default-identity "UserAssignedIdentity=/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myresourcegroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuserassignedidentity&FederatedClientId=00000000-0000-0000-0000-000000000000"``
++
+___________________________________
+**3.3**
+___________________________________
+
+**Scenario**
+
+A customer tries enabling CMK on an existing non-CMK account that already has Continuous backup / Synapse Link / Full fidelity change feed / Materialized view enabled.
+
+**Error Message**
+
+``Customer Managed Key enablement on an existing Analytical Store/Continuous Backup/Materialized View/Full Fidelity Change Feed enabled Account is not supported ActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Enabling CMK on an existing non-CMK account that already has Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized view enabled is currently under development and not yet supported.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.4**
+___________________________________
+
+**Scenario**
+
+A customer tries to turn off Azure Synapse Link (also called analytical storage) on an existing db account that has it enabled.
+
+**Error Message**
+
+``EnableAnalyticalStorage cannot be disabled once it is enabled on an account.\r\nActivityId: 00000000-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/2.14.0``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Today, once Azure Synapse Link is turned on, it can't be turned off.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.5**
+___________________________________
+
+**Scenario**
+
+A customer tries to turn off Continuous backup mode (also called PITR) on an existing db account that has it enabled.
+
+**Error Message**
+
+``Continuous backup mode cannot be disabled once it is enabled on the account.\\r\\nActivityId: 00000000-0000-0000-0000-000000000000, Microsoft.Azure.Documents.Common/2.14.0\``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Today, once Continuous backup mode is turned on, it can't be turned off.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.6**
+___________________________________
+
+**Scenario**
+
+A customer tries to turn off Materialized View on an existing db account that has it enabled.
+
+**Error Message**
+
+``EnableMaterializedViews cannot be disabled once it is enabled on an account.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Today, once Materialized View is turned on, it can't be turned off.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.7**
+___________________________________
+
+**Scenario**
+
+Customers try to enable continuous backup mode together with other properties in the same update.
+For example:
+*``az cosmosdb update -n mydb -g myresourcegroup --backup-policy-type Continuous --enable-analytical-storage``*
++
+**Error Message**
+
+``Cannot update continuous backup mode and other properties at the same time.
+ActivityId: 00000000-0000-0000-0000-000000000000``
++
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Enabling continuous backup mode together with other properties on an existing account isn't supported.
++
+**Mitigation**
+
+Enable continuous backup mode without any other properties on the existing account, and apply other property changes in a separate update (see the sketch below).
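+A minimal sketch of the two-step approach; the account name, resource group, and the follow-up property are placeholders:
+
+```azurecli-interactive
+# First, enable continuous backup mode on its own.
+az cosmosdb update -n mydb -g myresourcegroup --backup-policy-type Continuous
+
+# Then, apply any other property change in a separate update.
+az cosmosdb update -n mydb -g myresourcegroup --enable-analytical-storage true
+```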
+
+___________________________________
+**3.8**
+___________________________________
+
+**Scenario**
+
+Customer tries creating a CMK account with both continuous backup (also called PITR) and Azure Synapse Link (also called analytical storage) enabled.
+
+**Error Message**
+
+``Continuous backup mode cannot be enabled together with Storage Analytics feature.``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup (also called PITR) and Azure Synapse Link (also called analytical storage) can't both be enabled during creation. However, a customer can enable Azure Synapse Link on an existing db account that has Continuous backup enabled.
+
+**Mitigation**
+
+Not supported at creation time. Based on the root cause above, the features can be enabled sequentially instead (see the sketch below).
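+A minimal sketch of the sequential approach (the CMK identity flags discussed in scenario 3.2 are omitted for brevity):
+
+```azurecli-interactive
+# Create the account with continuous backup only...
+az cosmosdb create -n mydb -g myresourcegroup \
+  --key-uri "https://myvault.vault.azure.net/keys/mykey" \
+  --backup-policy-type Continuous
+
+# ...and then enable Azure Synapse Link in a separate update.
+az cosmosdb update -n mydb -g myresourcegroup --enable-analytical-storage true
+```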
+
+___________________________________
+**3.9**
+___________________________________
+
+**Scenario**
+
+Customer tries creating CMK account with Full Fidelity Change Feed enabled.
+
+**Error Message**
+
+``Customer Managed Key and Full Fidelity Change Feed cannot be enabled together for a global database account\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Customer Managed Key and Full Fidelity Change Feed cannot be enabled together for a global database account today.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.10**
+___________________________________
+
+**Scenario**
+
+Customer tries enabling Materialized View on an existing db account with continuous backup mode already enabled.
+
+**Error Message**
+
+``Cannot enable Materialized View when continuous backup mode is already enabled.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Enabling Materialized View when continuous backup mode is already enabled isn't supported.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.11**
+___________________________________
+
+**Scenario**
+
+Customer tries enabling continuous backup mode on an existing account with Materialized View enabled.
+
+**Error Message**
+
+``Cannot enable continuous backup mode when Materialized View is already enabled.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup mode can't be enabled when Materialized View is already enabled.
+
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+**3.12**
+___________________________________
+
+**Scenario**
+
+Customer tries enabling full fidelity change feed on an existing account with Materialized View enabled.
+
+**Error Message**
+
+``Cannot enable full fidelity change feed when materialized view is already enabled.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Full fidelity change feed can't be enabled when materialized view is already enabled.
++
+**Mitigation**
+
+N/A, not supported right now.
+
+___________________________________
+
+## 4. Cosmos DB Multi-API compatibility with Continuous backup / Azure Synapse Link / Full fidelity change feed / Materialized view Error Cases
+
+___________________________________
+**4.1**
+___________________________________
+
+**Scenario**
+
+1. Provision a Mongo/Gremlin/Table CMK db account with Materialized Views enabled.
+2. Enable MaterializedViews on a Mongo/Gremlin/Table CMK db account.
++
+**Error Message**
+
+``MaterializedViews is not supported on this account type.\r\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Only the SQL and Cassandra API modes are compatible with Materialized View. The Mongo, Gremlin, and Table APIs aren't supported right now.
+
+**Mitigation**
+
+1. Provision a CMK db account with Materialized Views enabled by using only the SQL or Cassandra API.
+2. Enable Materialized Views on a CMK account that uses only the SQL or Cassandra API.
++
+___________________________________
+**4.2**
+___________________________________
+
+**Scenario**
+
+1. Customer creates a CMK db account using Cassandra API with Continuous backup mode enabled.
+2. Customer enables Continuous backup mode on a CMK db account using Cassandra API.
++
+**Error Message**
+
+``Continuous backup mode cannot be enabled together with Cassandra database account\r\nActivityId: e2b1b7c8-211a-4fa5-bd9c-253e6c65d6f0, Microsoft.Azure.Documents.Common/2.14.0``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup mode can't be enabled together with a Cassandra database account.
+
+**Mitigation**
+
+N/A, Not supported as of today.
+
+___________________________________
+**4.3**
+___________________________________
+
+**Scenario**
+
+1. Customer tries creating a db account with both Gremlin V1 and Continuous backup mode enabled.
+1. Customer tries enabling Continuous backup mode on an existing db account with the Gremlin API.
++
+**Error Message**
+
+``Continuous backup mode cannot be enabled together with Gremlin V1 enabled database account\\r\\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup mode can't be enabled together with a Gremlin V1 account right now.
+
+**Mitigation**
+
+N/A, this is expected behavior; the combination isn't currently supported.
+
+___________________________________
+**4.4**
+___________________________________
+
+**Scenario**
+
+1. Customer tries creating a db account with both the Table API and Continuous backup mode enabled.
+1. Customer tries enabling Continuous backup mode on an existing db account with the Table API.
+
+**Error Message**
+
+``Continuous backup mode cannot be enabled together with table enabled database account\\r\\nActivityId: 00000000-0000-0000-0000-000000000000``
+
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Continuous backup mode can't be enabled together with a Table-enabled database account.
+
+**Mitigation**
+
+N/A, this is expected behavior; the combination isn't currently supported.
+
+___________________________________
+
+## 5. Azure Synapse Link Error Cases
+
+___________________________________
+**5.1**
+___________________________________
+
+**Scenario**
+
+Customer tries to use Azure Synapse Link to query data from a Cosmos DB CMK account with Azure Synapse Link enabled; however, at the same time, key vault access is lost.
+
+For example, the customer tries using Azure Synapse Studio's Spark Notebook to query Cosmos DB data via Azure Synapse Link, and at the same time has removed the current default identity's "GET/WRAP/Unwrap" permission from the Key Vault access policy for a while.
+
+**Error Message**
+
+```
+Py4JJavaError Traceback (most recent call last)
+<ipython-input-30-668efb4> in <module>
+-> 1 df = spark.read.format("cosmos.olap").option("spark.synapse.linkedService", "CosmosDb1").option("spark.cosmos.container", "cc").load()
+ 2
+ 3 display(df.limit(10))
+
+/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
+ 162 return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
+ 163 else:
+--> 164 return self._df(self._jreader.load())
+ 165
+ 166 def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
+
+~/cluster-env/env/lib/python3.8/site-packages/py4j/java_gateway.py in __call__(self, *args)
+ 1319
+ 1320 answer = self.gateway_client.send_command(command)
+-> 1321 return_value = get_return_value(
+ 1322 answer, self.gateway_client, self.target_id, self.name)
+ 1323
+
+/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py in deco(*a, **kw)
+ 109 def deco(*a, **kw):
+ 110 try:
+--> 111 return f(*a, **kw)
+ 112 except py4j.protocol.Py4JJavaError as e:
+ 113 converted = convert_exception(e.java_exception)
+
+~/cluster-env/env/lib/python3.8/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
+ 324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
+ 325 if answer[1] == REFERENCE_TYPE:
+--> 326 raise Py4JJavaError(
+ 327 "An error occurred while calling {0}{1}{2}.\n".
+ 328 format(target_id, ".", name), value)
+
+Py4JJavaError: An error occurred while calling o1292.load.
+: org.apache.hadoop.fs.azure.AzureException: java.util.NoSuchElementException: An error occurred while enumerating the result, check the original exception for details.
+ at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.retrieveMetadata(AzureNativeFileSystemStore.java:2223)
+...
+**Caused by: com.microsoft.azure.storage.StorageException: The key vault key is not found to unwrap the encryption key.**
+ at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:87)
+ at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:315)
+ at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:185)
+ at com.microsoft.azure.storage.core.LazySegmentedIterator.hasNext(LazySegmentedIterator.java:109)
+
+```
+**Status Code**
+
+BadRequest(400)
+
+**Root Cause**
+
+Because the customer removed the current default identity's "GET/WRAP/Unwrap" permission from the Key Vault access policy for a while, both the Cosmos DB account and the dedicated storage account are no longer able to access the key vault and will go into the revoked state. Azure Synapse Link queries data from the dedicated storage account, which is in the revoked state:
+"Caused by: com.microsoft.azure.storage.StorageException: The key vault key is not found to unwrap the encryption key."
+
+**Mitigation**
+
+Customer should follow the "Key Vault Revoke State Troubleshooting" guide to regrant key vault access.
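+
+Once access is regranted and the accounts leave the revoked state, re-running the same Synapse Spark read from the traceback above should succeed. As a sketch, in a Synapse notebook (`spark` and `display` are provided by the notebook runtime; "CosmosDb1" and "cc" are the linked service and container names from this scenario):
+
+```python
+# Re-run the cosmos.olap read from the failing notebook cell above to
+# confirm that key vault access has been restored.
+df = (
+    spark.read.format("cosmos.olap")
+    .option("spark.synapse.linkedService", "CosmosDb1")
+    .option("spark.cosmos.container", "cc")
+    .load()
+)
+display(df.limit(10))
+```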
+
+___________________________________
+
data-factory Sap Change Data Capture Prerequisites Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-prerequisites-configuration.md
SAP Landscape Transformation Replication Server (SLT) is a database trigger-enab
1. In **Scenario for RFC Communication**, select **Operational Data Provisioning (ODP)**.
- 1. In **Queue Alias**, enter the queue alias to use to select the context of your data extractions via ODP in Data Factory. Use the format `SLT-<your queue alias>`.
+ 1. In **Queue Alias**, enter the queue alias to use to select the context of your data extractions via ODP in Data Factory. Use the format `SLT~<your queue alias>`.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-slt-configurations.png" alt-text="Screenshot of the SAP SLT configuration dialog.":::
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
na- Previously updated : 11/14/2022 Last updated : 01/17/2023 + + # What is Azure DDoS Protection? Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns facing customers that are moving their applications to the cloud. A DDoS attack attempts to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet.
Distributed denial of service (DDoS) attacks are some of the largest availabilit
Azure DDoS Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. :::image type="content" source="./media/ddos-best-practices/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application.":::
+## Region Availability
+
+DDoS IP Protection is currently available in the following regions.
+
+| Americas | Europe | Middle East | Africa | Asia Pacific |
+||-||--||
+| West Central US | France Central | UAE Central | South Africa North | Australia Central |
+| North Central US | Germany West Central | Qatar Central | | Korea Central |
+| West US | Switzerland North | | | Japan East |
+| West US 3 | France South | | | West India |
+| | Norway East | | | Jio India Central |
+| | Sweden Central | | | Australia Central 2 |
+| | Germany North | | | |
+++ ## Key benefits ### Always-on traffic monitoring
ddos-protection Ddos Protection Sku Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-sku-comparison.md
Previously updated : 01/09/2023 Last updated : 01/17/2023
Azure DDoS Network Protection, combined with application design best practices,
> [!NOTE] > DDoS IP Protection is currently only available in Azure Preview PowerShell.
-DDoS IP Protection is currently available in the following regions.
-
-| Americas | Europe | Middle East | Africa | Asia Pacific |
-|---|---|---|---|---|
-| West Central US | France Central | UAE Central | South Africa North | Australia Central |
-| North Central US | Germany West Central | Qatar Central | | Korea Central |
-| West US | Switzerland North | | | Japan East |
-| West US 3 | France South | | | West India |
-| | Norway East | | | Jio India Central |
-| | Sweden Central | | | Australia Central 2 |
-| | Germany North | | | |
--
-
## SKUs Azure DDoS Protection supports two SKU Types, DDoS IP Protection and DDoS Network Protection. The SKU is configured in the Azure portal during the workflow when you configure Azure DDoS Protection.
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
API calls performed by Defender for Cloud count against the [Azure DevOps Global
- An Azure account with Defender for Cloud onboarded. If you don't already have an Azure account [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- You must [configure the Microsoft Security DevOps Azure DevOps extension](azure-devops-extension.md).
+ ## Availability
For information on how to correct this issue, check out the [DevOps trouble shoo
## Next steps Learn more about [Defender for DevOps](defender-for-devops-introduction.md).
-Learn how to [configure the MSDO Azure DevOps extension](azure-devops-extension.md).
- Learn how to [configure pull request annotations](enable-pull-request-annotations.md) in Defender for Cloud.
defender-for-cloud Tutorial Security Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-incident.md
description: In this tutorial, you'll learn how to triage security alerts and de
ms.assetid: 181e3695-cbb8-4b4e-96e9-c4396754862f Previously updated : 01/08/2023 Last updated : 01/17/2023 # Tutorial: Triage, investigate, and respond to security alerts
After you've investigated a security alert and understood its scope, you can res
> [!TIP] > We review your feedback to improve our algorithms and provide better security alerts.
-## CLean up resources
+## Clean up resources
Other quickstarts and tutorials in this collection build upon this quickstart. If you plan to continue to work with subsequent quickstarts and tutorials, keep automatic provisioning and Defender for Cloud's enhanced security features enabled.
defender-for-iot How To Set Up Snmp Mib Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-snmp-mib-monitoring.md
Title: Set up SNMP MIB monitoring
-description: You can perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server.
+description: Perform sensor health monitoring by using SNMP. The sensor responds to SNMP queries sent from an authorized monitoring server.
Last updated 05/31/2022 # Set up SNMP MIB monitoring
-Monitoring sensor health is possible through the Simple Network Management Protocol (SNMP). The sensor responds to SNMP requests sent by an authorized monitoring server. The SNMP monitor polls sensor OIDs periodically (up to 50 times a second).
+Monitor sensor health through the Simple Network Management Protocol (SNMP). The sensor responds to SNMP requests sent by an authorized monitoring server, and the SNMP monitor polls sensor OIDs periodically (up to 50 times a second).
Supported SNMP versions are SNMP version 2 and version 3. The SNMP protocol utilizes UDP as its transport protocol with port 161.
+## Prerequisites
+
+- To set up SNMP monitoring, you must be able to access the OT network sensor as an **Admin** user.
+
+ For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+- To download the SNMP MIB file, make sure you can access the Azure portal as a [Security admin](/azure/role-based-access-control/built-in-roles#security-admin), [Contributor](/azure/role-based-access-control/built-in-roles#contributor), or [Owner](/azure/role-based-access-control/built-in-roles#owner) user.
+
+ If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
+
+### Prerequisites for AES and 3-DES Encryption Support for SNMP Version 3
+
+- The network management station (NMS) must support Simple Network Management Protocol (SNMP) Version 3 to be able to use this feature.
+
+- It's important to understand the SNMP architecture and the terminology of the architecture to understand the security model used and how the security model interacts with the other subsystems in the architecture.
+
+- Before you begin configuring SNMP monitoring, you need to open the port UDP 161 in the firewall.
+
+## Set up SNMP monitoring
+
+Set up SNMP monitoring through the OT sensor console.
+
+You can also download the log that contains all the SNMP queries that the sensor receives, including the connection data and raw data, from the same **SNMP MIB monitoring configuration** pane.
+
+To set up SNMP monitoring:
+
+1. Sign in to your OT sensor as an **Admin** user.
+1. Select **System Settings** on the left and then, under **Sensor Management**, select **SNMP MIB Monitoring**.
+1. Select **+ Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
+
+ For example:
+
+ :::image type="content" source="media/configure-active-monitoring/set-up-snmp-mib-monitoring.png" alt-text="Screenshot of the SNMP MIB monitoring configuration page." lightbox="media/configure-active-monitoring/set-up-snmp-mib-monitoring.png":::
+
+1. In the **Authentication** section, select the SNMP version:
+ - If you select **V2**, type a string in **SNMP v2 Community String**.
+
+ You can enter up to 32 characters, and include any combination of alphanumeric characters with no spaces.
+
+ - If you select **V3**, specify the following parameters:
+
+ | Parameter | Description |
+ |--|--|
+ | **Username** | Enter a unique username. <br><br> The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters with no spaces. <br><br> The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
+ | **Password** | Enter a case-sensitive authentication password. <br><br> The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters. <br><br> The password for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
+ | **Auth Type** | Select **MD5** or **SHA-1**. |
+ | **Encryption** | Select one of the following: <br>- **DES** (56-bit key size): RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3). <br>- **AES** (AES 128 bits supported): RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model. |
+ | **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters. |
+
+1. When you're done adding servers, select **Save**.
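+
+The v3 parameters above map directly onto an SNMP client configuration on the monitoring server. As a minimal sketch (assuming the Python pysnmp package; all values are placeholders), the credentials could be declared like this:
+
+```python
+# Map the SNMP v3 settings above onto pysnmp credentials.
+# All values are placeholders for your own configuration.
+from pysnmp.hlapi import (
+    UsmUserData, usmHMACMD5AuthProtocol, usmDESPrivProtocol,
+)
+
+v3_credentials = UsmUserData(
+    "<username>",                         # Username from the table above
+    authKey="<password>",                 # 8-12 character auth password
+    privKey="<secret-key>",               # exactly eight characters
+    authProtocol=usmHMACMD5AuthProtocol,  # or usmHMACSHAAuthProtocol for SHA-1
+    privProtocol=usmDESPrivProtocol,      # or usmAesCfb128Protocol for AES
+)
+```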
+ ## Download the SNMP MIB file
-Download the SNMP MIB file from Defender for IoT in the Azure portal. Select **Sites and sensors > More actions > Download SNMP MIB file**.
+To download the SNMP MIB file from Defender for IoT in the Azure portal:
+1. Sign in to the Azure portal.
+1. Select **Sites and sensors > More actions > Download SNMP MIB file**.
## Sensor OIDs
+Use the following table for reference regarding sensor object identifier values (OIDs):
+
| Management console and sensor | OID | Format | Description |
|--|--|--|--|
| Appliance name | 1.3.6.1.2.1.1.5.0 | STRING | Appliance name for the on-premises management console |
Download the SNMP MIB file from Defender for IoT in the Azure portal. Select **S
| Serial number | 1.3.6.1.4.1.53313.1 | STRING | String that the license uses |
| Software version | 1.3.6.1.4.1.53313.2 | STRING | Xsense full-version string and management full-version string |
| CPU usage | 1.3.6.1.4.1.53313.3.1 | GAUGE32 | Indication for zero to 100 |
-| CPU temperature | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. "No sensors found" will be returned from any machine that has no actual physical temperature sensor (for example VMs)|
+| CPU temperature | 1.3.6.1.4.1.53313.3.2 | STRING | Celsius indication for zero to 100 based on Linux input. <br><br> Any machine that has no actual physical temperature sensor (for example VMs) will return "No sensors found" |
| Memory usage | 1.3.6.1.4.1.53313.3.3 | GAUGE32 | Indication for zero to 100 |
| Disk Usage | 1.3.6.1.4.1.53313.3.4 | GAUGE32 | Indication for zero to 100 |
| Service Status | 1.3.6.1.4.1.53313.5 | STRING | Online or offline if one of the four crucial components is down |
Download the SNMP MIB file from Defender for IoT in the Azure portal. Select **S
| License status | 1.3.6.1.4.1.53313.7 | STRING | Activation period of this appliance: Active / Expiration Date / Expired |

Note that:
-- Non-existing keys respond with null, HTTP 200.
-- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.
-- You can download the log that contains all the SNMP queries that the sensor receives, including the connection data and raw data.
-## Prerequisites for AES and 3-DES Encryption Support for SNMP Version 3
-- The network management station (NMS) must support Simple Network Management Protocol (SNMP) Version 3 to be able to use this feature.
-- It's important to understand the SNMP architecture and the terminology of the architecture to understand the security model used and how the security model interacts with the other subsystems in the architecture.
-- Before you begin configuring SNMP monitoring, you need to open the port UDP 161 in the firewall.
-
-
-## Set up SNMP monitoring
-
-1. On the side menu, select **System Settings**.
-1. Expand **Sensor Management**, and select **SNMP MIB Monitoring** :
-1. Select **Add host** and enter the IP address of the server that performs the system health monitoring. You can add multiple servers.
-1. In **Authentication** section, select the SNMP version.
- - If you select V2, type the string in **SNMP v2 Community String**. You can enter up to 32 characters, and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces aren't allowed.
- - If you select V3, specify the following:
-
- | Parameter | Description |
- |--|--|
- | **Username** | The SNMP username can contain up to 32 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). Spaces aren't allowed. <br /> <br />The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Password** | Enter a case-sensitive authentication password. The authentication password can contain 8 to 12 characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). <br /> <br/>The username for the SNMP v3 authentication must be configured on the system and on the SNMP server. |
- | **Auth Type** | Select MD5 or SHA-1. |
- | **Encryption** | Select DES (56 bit key size)<sup>[1](#1)</sup> or AES (AES 128 bits supported)<sup>[2](#2)</sup>. |
- | **Secret Key** | The key must contain exactly eight characters and include any combination of alphanumeric characters (uppercase letters, lowercase letters, and numbers). |
-
- <a name="1"></a><sup>1</sup> RFC3414 User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)
-
- <a name="2"></a><sup>2</sup> RFC3826 The Advanced Encryption Standard (AES) Cipher Algorithm in the SNMP User-based Security Model
-
-1. Select **Save**.
+- Non-existing keys respond with null, HTTP 200.
+- Hardware-related MIBs (CPU usage, CPU temperature, memory usage, disk usage) should be tested on all architectures and physical sensors. CPU temperature on virtual machines is expected to be not applicable.
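+
+For example, once SNMP v2 monitoring is configured, the monitoring server can poll the OIDs from the table above. The following Python sketch assumes the pysnmp package; the community string and sensor IP address are placeholders:
+
+```python
+# Poll the sensor's CPU and memory usage OIDs over SNMP v2c using pysnmp.
+# "<community-string>" and "192.0.2.10" are placeholders.
+from pysnmp.hlapi import (
+    CommunityData, ContextData, ObjectIdentity, ObjectType,
+    SnmpEngine, UdpTransportTarget, getCmd,
+)
+
+error_indication, error_status, error_index, var_binds = next(
+    getCmd(
+        SnmpEngine(),
+        CommunityData("<community-string>", mpModel=1),  # mpModel=1 -> v2c
+        UdpTransportTarget(("192.0.2.10", 161)),         # sensor IP, UDP 161
+        ContextData(),
+        ObjectType(ObjectIdentity("1.3.6.1.4.1.53313.3.1")),  # CPU usage
+        ObjectType(ObjectIdentity("1.3.6.1.4.1.53313.3.3")),  # Memory usage
+    )
+)
+
+if error_indication:
+    print(error_indication)
+else:
+    for name, value in var_binds:
+        print(f"{name} = {value}")
+```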
## Next steps
-For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
+For more information, see [Export troubleshooting logs](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md).
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/overview.md
The following diagram shows where Azure Digital Twins might lie in the context o
## Resources
-Here are some resources that may be useful while working with Azure Digital Twins. You can view more resources under the **Resources** header in the table of contents for this documentation set.
+This section highlights some resources that may be useful while working with Azure Digital Twins. You can view additional resources in the **Resources** section of this documentation set (accessible through the navigation links to the left).
### Service limits
energy-data-services Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
- Deprecated functionality - Plans for changes
+<hr width = 100%>
+
+## January 2023
+
+### Managed Identity Support
+
+You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Microsoft Energy Data Services. For example, you can write a script in an Azure Function to ingest data into Microsoft Energy Data Services, and connect to Microsoft Energy Data Services from other Azure services using a system-assigned or user-assigned managed identity. [Learn more.](../energy-data-services/how-to-use-managed-identity.md)
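+
+For example, a Python Azure Function running with a managed identity could acquire a token and call a Microsoft Energy Data Services API. The following sketch is illustrative only: it assumes the azure-identity and requests packages, and the token scope, instance URL, and data partition ID are placeholders that depend on your deployment:
+
+```python
+# Illustrative only: acquire a token with the managed identity and call a
+# Microsoft Energy Data Services endpoint. The scope, URL, and data
+# partition ID are placeholders for your own deployment's values.
+import requests
+from azure.identity import DefaultAzureCredential
+
+credential = DefaultAzureCredential()  # picks up the managed identity when deployed
+token = credential.get_token("<app-client-id>/.default")
+
+response = requests.get(
+    "https://<instance>.energy.azure.com/<api-path>",
+    headers={
+        "Authorization": f"Bearer {token.token}",
+        "data-partition-id": "<data-partition-id>",
+    },
+)
+print(response.status_code)
+```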
++ <hr width=100%> ## December 2022
Microsoft Energy Data Services is updated on an ongoing basis. To stay up to dat
Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Microsoft Energy Data Services provides you an interface to review and approve or reject data access requests. Microsoft Energy Data Services now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). - <hr width=100%>
-## October 20, 2022
+## October 2022
### Support for Private Links
Microsoft Energy Data Services Preview supports customer managed encryption keys
<hr width=100%>
-## Microsoft Energy Data Services Preview Release
-
+## September 2022
-### Key Announcement
+### Key Announcement: Preview Release
-Microsoft Energy Data Services is now available in public preview.
+Microsoft Energy Data Services is now available in public preview. Information on the latest releases, bug fixes, and deprecated functionality for Microsoft Energy Data Services Preview will be updated monthly, so check back on this page regularly.
Microsoft Energy Data Services is developed in alignment with the emerging requirements of the OSDU™ Technical Standard, Version 1.0, and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes).
external-attack-surface-management Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/index.md
For security purposes, Microsoft collects users' IP addresses when they log in.
In the case of a region down scenario, customers should see no downtime as Defender EASM uses technologies that replicate data to a backup region. Defender EASM processes customer data. By default, customer data is replicated to the paired region.
-The Microsoft compliance framework requires that all customer data be deleted within 180 days in accordance with [Azure subscription states](https://learn.microsoft.com/azure/cost-management-billing/manage/subscription-states) handling.  This also includes storage of customer data in offline locations, such as database backups. 
+The Microsoft compliance framework requires that all customer data be deleted within 180 days of that organization no longer being a customer of Microsoft. This also includes storage of customer data in offline locations, such as database backups. Once a resource is deleted, it cannot be restored by our teams. The customer data will be retained in our data stores for 75 days; however, the actual resource cannot be restored. After the 75-day period, customer data will be permanently deleted.
+ ## Next Steps
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
This section is comprised of high-level information that is key to understanding
|--|--|--| | Asset Name | The name of an asset. | All | | UUID | This 128-bit label represents the universally unique identifier (UUID) for the | All |
+| Added to inventory | The date that an asset was added to inventory, whether automatically to the "Approved Inventory" state or in another state (e.g. "Candidate"). | All |
| Status | The status of the asset within the RiskIQ system. Options include Approved Inventory, Candidate, Dependencies, or Requires Investigation. | All |
-| First seen | This column displays the date that the asset was first observed by crawling | All |
-| Last seen | This column displays the date that the asset was last observed by crawling infrastructure. | All |
-| Discovered on | The date that the asset was found in a discovery run scanning for assets related to an organizationΓÇÖs known infrastructure. | All |
-| Last updated | This column displays the date that the asset was last updated in the system after new data was found in a scan. | All |
+| First seen (Global Security Graph) | The date that Microsoft first scanned the asset and added it to our comprehensive Global Security Graph. | All |
+| Last seen (Global Security Graph) | The date that Microsoft most recently scanned the asset. | All |
+| Discovered on | Indicates the creation date of the Discovery Group that detected the asset. | All |
+| Last updated | The date that the asset was last updated, whether by new data discovered in a scan or manual user actions (e.g. a state change). | All |
| Country | The country of origin detected for this asset. | All | | State/Province | The state or province of origin detected for this asset. | All | | City | The city of origin detected for this asset. | All |
-| WhoIs name | | Host |
-| WhoIs email | The primary contact email in a WhoIs record. | Host |
-| WhoIS organization | The listed organization in a WhoIs record. | Host |
-| WhoIs registrar | The listed registrar in a WhoIs record. | Host |
-| WhoIs name servers | The listed name servers in a WhoIs record. | Host |
+| WhoIs name | The name associated with a Whois record. | Host |
+| WhoIs email | The primary contact email in a Whois record. | Host |
+| WhoIS organization | The listed organization in a Whois record. | Host |
+| WhoIs registrar | The listed registrar in a Whois record. | Host |
+| WhoIs name servers | The listed name servers in a Whois record. | Host |
| Certificate issued | The date when a certificate was issued. | SSL certificate | | Certificate expires | The date when a certificate will expire. | SSL certificate | | Serial number | The serial number associated with an SSL certificate. | SSL certificate |
In the example below, we see that the seed domain is tied to this asset through
### Discovery information
-This section provides information about the process used to detect the asset. It includes information about the discovery seed that connects to the asset, as well as the approval process. Options include ΓÇ£Approved InventoryΓÇ¥ which indicates the relationship between the seed and discovered asset was strong enough to warrant an automatic approval by the Defender EASM system. Otherwise, the process will be listed as ΓÇ£CandidateΓÇ¥, indicating that the asset required manual approval to be incorporated into your inventory. This section also provides the date that the asset was added to your inventory, as well as the date that it was last scanned in a discovery run.
+This section provides information about the process used to detect the asset. It includes information about the discovery seed that connects to the asset, as well as the approval process. Options include "Approved Inventory", which indicates the relationship between the seed and discovered asset was strong enough to warrant an automatic approval by the Defender EASM system. Otherwise, the process will be listed as "Candidate", indicating that the asset required manual approval to be incorporated into your inventory. This section also provides the "Last discovery run" date that indicates when the Discovery Group that initially detected the asset was last utilized for a discovery scan.
### IP reputation
It's important to note that many organizations opt to obfuscate their registry
## Next steps - [Understanding dashboards](understanding-dashboards.md)-- [Using and managing discovery](using-and-managing-discovery.md)
+- [Using and managing discovery](using-and-managing-discovery.md)
firewall-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/overview.md
Previously updated : 09/01/2022 Last updated : 01/17/2023
frontdoor Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/private-link.md
[Azure Private Link](../private-link/private-link-overview.md) enables you to access Azure PaaS services and services hosted in Azure over a private endpoint in your virtual network. Traffic between your virtual network and the service goes over the Microsoft backbone network, eliminating exposure to the public Internet.
-Azure Front Door Premium can connect to your origin using Private Link. Your origin can be hosted in a virtual network or hosted as a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be access publicly.
+Azure Front Door Premium can connect to your origin using Private Link. Your origin can be hosted in a virtual network or hosted as a PaaS service such as Azure App Service or Azure Storage. Private Link removes the need for your origin to be accessed publicly.
:::image type="content" source="./media/private-link/front-door-private-endpoint-architecture.png" alt-text="Diagram of Azure Front Door with Private Link enabled.":::
Azure Front Door Premium can connect to your origin using Private Link. Your ori
When you enable Private Link to your origin in Azure Front Door Premium, Front Door creates a private endpoint on your behalf from an Azure Front Door managed regional private network. You'll receive an Azure Front Door private endpoint request at the origin pending your approval.
-You must approve the private endpoint connection before traffic can pass to the origin privately. You can approve private endpoint connections by using the Azure portal, Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
- > [!IMPORTANT]
-> You must approve the private endpoint connection before traffic will flow to your origin.
+> You must approve the private endpoint connection before traffic can pass to the origin privately. You can approve private endpoint connections by using the Azure portal, Azure CLI, or Azure PowerShell. For more information, see [Manage a Private Endpoint connection](../private-link/manage-private-endpoint.md).
-After you enable an origin for Private Link and approve the private endpoint connection, it can take a few minutes for the connection to get established. During this time, requests to the origin will receive an Azure Front Door error message. The error message will go away once the connection is established.
+After you enable an origin for Private Link and approve the private endpoint connection, it can take a few minutes for the connection to be established. During this time, requests to the origin will receive an Azure Front Door error message. The error message will go away once the connection is established.
Once your request is approved, a private IP address gets assigned from the Azure Front Door managed virtual network. Traffic between your Azure Front Door and your origin will communicate using the established private link over the Microsoft backbone network. Incoming traffic to your origin is now secured when arriving at your Azure Front Door.
Once your request is approved, a private IP address gets assigned from the Azure
### Private endpoint creation
-Within a single Azure Front Door profile, if two or more Private Link enabled origins are created with the same set of Private Link, resource ID and group ID, then for all such origins only one private endpoint gets created. Connections to the backend can be enabled using this private endpoint. This setup means you only have to approve the private endpoint once because only one private endpoint gets created. If you create more Private Link enabled origins using the same set of Private Link location, resource ID, group ID, you won't need to approve anymore private endpoints.
+Within a single Azure Front Door profile, if two or more Private Link enabled origins are created with the same set of Private Link, resource ID and group ID, then for all such origins only one private endpoint gets created. Connections to the backend can be enabled using this private endpoint. This setup means you only have to approve the private endpoint once because only one private endpoint gets created. If you create more Private Link enabled origins using the same set of Private Link location, resource ID and group ID, you won't need to approve anymore private endpoints.
#### Single private endpoint
A new private endpoint gets created in the following scenario:
### Private endpoint removal
-When an Azure Front Door profile get deleted, private endpoints associated with the profile will also get deleted.
+When an Azure Front Door profile gets deleted, private endpoints associated with the profile will also get deleted.
#### Single private endpoint
-If AFD-Profile-1 gets deleted, then PE1 private endpoint across all the origin will also get deleted.
+If AFD-Profile-1 gets deleted, then the PE1 private endpoint across all the origins will also be deleted.
:::image type="content" source="./media/private-link/delete-endpoint.png" alt-text="Diagram showing if AFD-Profile-1 gets deleted then PE1 across all origins will get deleted."::: #### Multiple private endpoints
-* If AFD-Profile-1 gets deleted, all private endpoints from PE1 through PE4 will get deleted.
+* If AFD-Profile-1 gets deleted, all private endpoints from PE1 through to PE4 will be deleted.
:::image type="content" source="./media/private-link/delete-multiple-endpoints.png" alt-text="Diagram showing if AFD-Profile-1 gets deleted, all private endpoints from PE1 through PE4 gets deleted.":::
governance Logic App Calling Arg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/tutorials/logic-app-calling-arg.md
Keep the query above handy as we will need it later when we configure our Logic
2. Click the **Add** button on the upper left of your screen and continue with creating your Logic App.
+3. When creating the Logic App, ensure you choose **Consumption** under **Plan Type**.
+ ## Set up a Managed Identity ### Create a New System-Assigned Managed Identity
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Title: Tutorial - Azure IoT in-store analytics | Microsoft Docs
-description: This tutorial shows how to deploy and use and create an in-store analytics retail application in IoT Central.
+ Title: Tutorial - Create and deploy an Azure IoT in-store analytics application template | Microsoft Docs
+description: This tutorial shows how to create and deploy an in-store analytics retail application in IoT Central.
Last updated 06/14/2022
-# Tutorial: Deploy and walk through the in-store analytics application template
+# Tutorial: Create and deploy an in-store analytics application template
-For many retailers, environmental conditions within their stores are a key differentiator from their competitors. Retailers want to maintain pleasant conditions within their stores for the benefit of their customers.
+For many retailers, environmental conditions are a key way to differentiate their stores from their competitors' stores. The most successful retailers make every effort to maintain pleasant conditions within their stores for the comfort of their customers.
-You can use the IoT Central _in-store analytics checkout_ application template to build an end-to-end solution. The application template lets you digitally connect to and monitor a retail store environment using different kinds of sensor devices. These sensor devices generate telemetry that you can convert into business insights to help the retailer reduce operating costs and create a great experience for their customers.
+To build an end-to-end solution, you can use the IoT Central _in-store analytics checkout_ application template. This template lets you digitally connect to and monitor a store's environment through various sensor devices. These devices generate telemetry that retailers can convert into business insights to help reduce operating costs and create a great experience for their customers.
-Use the application template to:
-
-1. Connect different kinds of IoT sensors to an IoT Central application instance.
-2. Monitor and manage the health of the sensor network and any gateway devices in the environment.
-3. Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
-4. Transform the environmental conditions within your store into insights that the retail store team can use to improve the customer experience.
-5. Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-
-The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard.
+The application template comes with a set of device templates and uses a set of simulated devices to populate the dashboard:
:::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Diagram of the in-store analytics application architecture." border="false":::
-### Condition monitoring sensors (1)
+As shown in the preceding application architecture diagram, you can use the application template to:
+
+* **1**. Connect various IoT sensors to an IoT Central application instance.
-An IoT solution starts with a set of sensors capturing meaningful signals from within a retail store environment. It's reflected by different kinds of sensors on the far left of the architecture diagram above.
+ An IoT solution starts with a set of sensors that capture meaningful signals from within a retail store environment. The sensors are represented by the various icons at the far left of the architecture diagram.
-### Gateway devices (2)
+* **2**. Monitor and manage the health of the sensor network and any gateway devices in the environment.
-Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device performs data aggregation at the edge before sending summary insights to an IoT Central application. The gateway devices are also responsible for relaying command and control operations to the sensor devices when applicable.
+ Many IoT sensors can feed raw signals directly to the cloud or to a gateway device located near them. The gateway device aggregates data at the edge before it sends summary insights to an IoT Central application. The gateway device is also responsible for relaying command and control operations to the sensor devices when applicable.
-### IoT Central application
+* **3**. Create custom rules around the environmental conditions within a store to trigger alerts for store managers.
-The Azure IoT Central application ingests data from different kinds of IoT sensors and gateway devices within the retail store environment and generates a set of meaningful insights.
+ The Azure IoT Central application ingests data from the various IoT sensors and gateway devices within the retail store environment and then generates a set of meaningful insights.
-Azure IoT Central also provides a tailored experience to the store operator enabling them to remotely monitor and manage the infrastructure devices.
+ Azure IoT Central also provides a tailored experience to store operators that enables them to remotely monitor and manage the infrastructure devices.
-### Data transform (3)
+* **4**. Transform the environmental conditions within the stores into insights that the store team can use to improve the customer experience.
-The Azure IoT Central application within a solution can be configured to export raw or aggregated insights to a set of Azure PaaS (Platform-as-a Service) services that can perform data manipulation and enrich these insights before landing them in a business application.
+ You can configure an Azure IoT Central application within a solution to export raw or aggregated insights to a set of Azure platform as a service (PaaS) services. PaaS services can perform data manipulation and enrich these insights before landing them in a business application.
-### Business application (4)
+* **5**. Export the aggregated insights into existing or new business applications to provide useful and timely information to retail staff.
-The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful actions in real time. To learn how to build a real-time Power BI dashboard for your retail team, follow the [tutorial](./tutorial-in-store-analytics-create-app.md).
+ The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful action in real time. To learn how to build a real-time Power BI dashboard for your retail team, see the [tutorial](./tutorial-in-store-analytics-customize-dashboard.md).
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
-> - Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application
+> - Use the Azure IoT Central *In-store analytics - checkout* template to create a retail store application
> - Customize the application settings > - Create and customize IoT device templates > - Connect devices to your application
In this tutorial, you learn how to:
## Prerequisites
-An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+An active Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Create in-store analytics application
+## Create an in-store analytics application
-Create the application using following steps:
+Create the application by doing the following:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
+1. Sign in to the [Azure IoT Central](https://aka.ms/iotcentral) build site with a Microsoft personal, work, or school account.
-1. Select **Create app** under **In-store analytics - checkout**.
+1. On the left pane, select **Build**, and then select the **Retail** tab.
+
+1. Under **In-store analytics - checkout**, select **Create app**.
To learn more, see [Create an IoT Central application](../core/howto-create-iot-central-application.md). ## Walk through the application
-The following sections walk you through the key features of the application:
+The following sections describe the key features of the application.
-### Customize application settings
+### Customize the application settings
-You can change several settings to customize the user experience in your application. In this section, you'll select a predefined application theme. Optionally, you'll learn how to create a custom theme, and update the application image. A custom theme enables you to set the application browser colors, browser icon, and the application logo that appears in the masthead.
+You can change several settings to customize the user experience in your application. In this section, you select a predefined application theme. Optionally, you'll learn how to create a custom theme and update the application image. A custom theme enables you to set the application browser colors, the browser icon, and the application logo that appears in the masthead.
To select a predefined application theme:
To select a predefined application theme:
3. Select **Save**.
-Rather than use a predefined theme, you can create a custom theme. If you want to use a set of sample images to customize the application and complete the tutorial, download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/retail).
+Alternatively, you can create a custom theme. If you want to use a set of sample images to customize the application and complete the tutorial, download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/retail).
To create a custom theme:
-1. Expand the left pane, if not already expanded.
-
-1. Select **Customization > Appearance**.
+1. On the left pane, select **Customization** > **Appearance**.
-1. Use the **Change** button to choose an image to upload as the masthead logo. Optionally, specify a value for **Logo alt text**.
+1. Select **Change**, and then select an image to upload as the masthead logo. Optionally, enter a value for **Logo alt text**.
-1. Use the **Change** button to choose a **Browser icon** image that will appear on browser tabs.
+1. Select **Change**, and then select a **Browser icon** image that will appear on browser tabs.
-1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes. For the **Header**, add *#008575*. For the **Accent**, add *#A1F3EA*.
+1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes:
+ a. For **Header**, enter **#008575**.
+ b. For **Accent**, enter **#A1F3EA**.
-1. Select **Save**. After you save, the application updates the browser colors, the logo in the masthead, and the browser icon.
+1. Select **Save**. After you save your changes, the application updates the browser colors, the logo in the masthead, and the browser icon.
To update the application image:
-1. Select **Application > Management.**
+1. Select **Application** > **Management**.
-1. Select **Change** to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
+1. Select **Change**, and then select an image to upload as the application image.
1. Select **Save**.
-1. Optionally, navigate to the **My Apps** view on the [Azure IoT Central application manager](https://aka.ms/iotcentral) website. The application tile displays the updated application image.
+ The image appears on the application tile on the **My Apps** page of the [Azure IoT Central application manager](https://aka.ms/iotcentral) site.
-### Create device templates
-You can create device templates that enable you and the application operators to configure and manage devices. You can create a template by building a custom one, by importing an existing template file, or by importing a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application. Optionally, use a device template to generate simulated devices for testing.
+### Create the device templates
-The **In-store analytics - checkout** application template has device templates for several devices. There are device templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the **In-store analytics - checkout** application template. In this section, you add a device template for RuuviTag sensors to your application.
+By creating device templates, you and the application operators can configure and manage devices. You can build a custom template, import an existing template file, or import a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application.
-To add a RuuviTag device template to your application:
+Optionally, you can use a device template to generate simulated devices for testing.
-1. Select **Device Templates** in the left pane.
+The *In-store analytics - checkout* application template has device templates for several devices, including templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the *In-store analytics - checkout* application template.
-1. Select **+ New** to create a new device template.
+In this section, you add a device template for RuuviTag sensors to your application. To do so:
-1. Find and select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
+1. On the left pane, select **Device Templates**.
+
+1. Select **New** to create a new device template.
+
+1. Search for and then select the **RuuviTag Multisensor** device template in the Azure IoT device catalog.
1. Select **Next: Review**.
-1. Select **Create**. The application adds the RuuviTag device template.
+1. Select **Create**.
+
+ The application adds the RuuviTag device template.
+
+1. On the left pane, select **Device templates**.
+
+ The page displays all the device templates in the application template and the RuuviTag device template you just added.
-1. Select **Device templates** on the left pane. The page displays all device templates included in the application template, and the RuuviTag device template you just added.
+### Customize the device templates
-### Customize device templates
+You can customize the device templates in your application in three ways:
-You can customize the device templates in your application in three ways. First, you customize the native built-in interfaces in your devices by changing the device capabilities. For example, with a temperature sensor, you can change details such as the display name of the temperature interface, the data type, the units of measurement, and minimum and maximum operating ranges.
+* Customize the native built-in interfaces in your devices by changing the device capabilities.
-Second, customize your device templates by adding cloud properties. Cloud properties aren't part of the built-in device capabilities. Cloud properties are custom data that your Azure IoT Central application creates, stores, and associates with your devices. An example of a cloud property could be a calculated value, or metadata such as a location that you want to associate with a set of devices.
+ For example, with a temperature sensor, you can change details such as the display name of the temperature interface, the data type, the units of measurement, and the minimum and maximum operating ranges.
-Third, customize device templates by building custom views. Views provide a way for operators to visualize telemetry and metadata for your devices, such as device metrics and health.
+* Customize your device templates by adding cloud properties.
-Here, you use the first two methods to customize the device template for your RuuviTag sensors.
+ Cloud properties aren't part of the built-in device capabilities. Cloud properties are custom data that your Azure IoT Central application creates, stores, and associates with your devices. Examples of cloud properties could be:
+ * A calculated value
+ * Metadata, such as a location that you want to associate with a set of devices
-To customize the built-in interfaces of the RuuviTag device template:
+* Customize device templates by building custom views.
-1. Select **Device Templates** in the left pane.
+ Views provide a way for operators to visualize telemetry and metadata for your devices, such as device metrics and health.
-1. Select the template for RuuviTag sensors.
+In this section, you use the first two methods to customize the device template for your RuuviTag sensors.
+
+**Customize the built-in interfaces of the RuuviTag device template**
+
+1. On the left pane, select **Device Templates**.
+
+1. Select **RuuviTag**.
1. Hide the left pane. The summary view of the template displays the device capabilities.
- :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Screenshot showing the in-store analytics application RuuviTag device template." lightbox="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png":::
+ :::image type="content" source="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png" alt-text="Screenshot that shows the in-store analytics application RuuviTag device template." lightbox="media/tutorial-in-store-analytics-create-app/ruuvitag-device-summary-view.png":::
-1. Select **RuvviTag** model in the RuuviTag device template menu.
+1. Select the **RuuviTag** model in the RuuviTag device template menu.
-1. Scroll in the list of capabilities and find the `RelativeHumidity` telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
+1. In the list of capabilities, scroll to the **RelativeHumidity** telemetry type. It's the row item with the editable **Display name** value of *RelativeHumidity*.
-In the following steps, you customize the `RelativeHumidity` telemetry type for the RuuviTag sensors. Optionally, customize some of the other telemetry types.
+In the following steps, you customize the **RelativeHumidity** telemetry type for the RuuviTag sensors. Optionally, you can customize some of the other telemetry types.
-For the `RelativeHumidity` telemetry type, make the following changes:
+For the **RelativeHumidity** telemetry type, make the following changes:
1. Select the **Expand** control to expand the schema details for the row.
-1. Update the **Display Name** value from *RelativeHumidity* to a custom value such as *Humidity*.
+1. Update the **Display Name** value from **RelativeHumidity** to a custom value such as **Humidity**.
-1. Change the **Semantic Type** option from *Relative humidity* to *Humidity*. Optionally, set schema values for the humidity telemetry type in the expanded schema view. Schema settings allow you to create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a given interface.
+1. Change the **Semantic Type** option from **Relative humidity** to **Humidity**.
+
+ Optionally, set schema values for the humidity telemetry type in the expanded schema view. By setting schema values, you can create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a specified interface.
1. Select **Save** to save your changes.
-To add a cloud property to a device template in your application:
+**Add a cloud property to a device template in your application**
Specify the following values to create a custom property to store the location of each device:
-1. Enter the value *Location* for the **Display Name**. This value is automatically copied to the **Name** field, which is a friendly name for the property. You can use the copied value or change it.
+1. For **Display Name**, enter the **Location** value.
+
+ This value, which is a friendly name for the property, is automatically copied to the **Name** field. You can use the copied value or change it.
-1. Select **Capability Type** as **Cloud Property**.
+1. For **Capability Type**, select **Cloud Property**.
-1. Select *String* in the **Schema** dropdown. A string type enables you to associate a location name string with any device based on the template. For instance, you could associate an area in a store with each device.
+1. In the **Schema** dropdown list, select **String**.
-1. Set **Minimum Length** to *2*.
+ By specifying a string type, you can associate a location name string with any device that's based on the template. For instance, you could associate an area in a store with each device.
+
+1. Set **Minimum Length** to **2**.
1. Set **Trim Whitespace** to **On**.
Specify the following values to create a custom property to store the location of each device:
1. Select **Publish**.
- Publishing a device template makes it visible to application operators. After you've published a template, use it to generate simulated devices for testing, or to connect real devices to your application. If you already have devices connected to your application, publishing a customized template pushes the changes to the devices.
+ Publishing a device template makes it visible to application operators. After you've published a template, use it to generate simulated devices for testing or to connect real devices to your application. If you already have devices connected to your application, publishing a customized template pushes the changes to the devices.
### Add devices
-After you have created and customized device templates, it's time to add devices.
+After you've created and customized the device templates, it's time to add devices.
For this tutorial, you use the following set of real and simulated devices to build the application:

- A real Rigado C500 gateway.
- Two real RuuviTag sensors.
-- A simulated **Occupancy** sensor. The simulated sensor is included in the application template, so you don't need to create it.
+- A simulated *Occupancy* sensor. This simulated sensor is included in the application template, so you don't need to create it.
> [!NOTE] > If you don't have real devices, you can still complete this tutorial by creating simulated RuuviTag sensors. The following directions include steps to create a simulated RuuviTag. You don't need to create a simulated gateway.
-Complete the steps in the following two articles to connect a real Rigado gateway and RuuviTag sensors. After you're done, return to this tutorial. Because you already created device templates in this tutorial, you don't need to create them again in the following set of directions.
+Complete the steps in the following two articles to connect a real Rigado gateway and RuuviTag sensors. After you're done, return to this tutorial. Because you've already created device templates in this tutorial, you don't need to create them again in the following set of directions.
- To connect a Rigado gateway, see [Connect a Rigado Cascade 500 to your Azure IoT Central application](../core/howto-connect-rigado-cascade-500.md).
- To connect RuuviTag sensors, see [Connect a RuuviTag sensor to your Azure IoT Central application](../core/howto-connect-ruuvi.md). You can also use these directions to create two simulated sensors, if needed.

### Add rules and actions
-As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met. A rule is associated with a device template and one or more devices, and contains conditions that must be met based on device telemetry or events. A rule also has one or more associated actions. The actions may include sending email notifications, or triggering a webhook action to send data to other services. The **In-store analytics - checkout** application template includes some predefined rules for the devices in the application.
+As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met.
+
+A rule is associated with a device template and one or more devices, and it contains conditions that must be met based on device telemetry or events. A rule also has one or more associated actions. The actions might include sending email notifications, or triggering a webhook action to send data to other services. The *In-store analytics - checkout* application template includes some predefined rules for the devices in the application.
-In this section, you create a new rule that checks the maximum relative humidity level based on the RuuviTag sensor telemetry. You add an action to the rule so that if the humidity exceeds the maximum, the application sends an email.
+In this section, you create a new rule that checks the maximum relative humidity level based on the RuuviTag sensor telemetry. You add an action to the rule so that if the humidity exceeds the maximum, the application sends an email notification.
To create a rule:
-1. Expand the left pane.
+1. On the left pane, select **Rules**.
-1. Select **Rules**.
+1. Select **New**.
-1. Select **+ New**.
+1. Enter **Humidity level** as the name of the rule.
-1. Enter *Humidity level* as the name of the rule.
+1. For **Device template**, select the RuuviTag device template.
-1. Choose the RuuviTag device template in **Device template**. The rule you define will apply to all sensors based on that template. Optionally, you could create a filter that would apply the rule only to a defined subset of the sensors.
+ The rule that you define applies to all sensors that are based on that template. Optionally, you could create a filter that would apply the rule to only a defined subset of the sensors.
-1. Choose `RelativeHumidity` as the **Telemetry**. It's the device capability that you customized in a previous step.
+1. For **Telemetry**, select **RelativeHumidity**. It's the device capability that you customized in an earlier step.
-1. Choose `Is greater than` as the **Operator**.
+1. For **Operator**, select **Is greater than**.
-1. Enter a typical upper range indoor humidity level for your environment as the **Value**. For example, enter *65*. You've set a condition for your rule that occurs when relative humidity in any RuuviTag real or simulated sensor exceeds this value. You may need to adjust the value up or down depending on the normal humidity range in your environment.
+1. For **Value**, enter a typical upper range indoor humidity level for your environment (for example, **65**).
+
+ You've set a condition for your rule that occurs when relative humidity in any RuuviTag real or simulated sensor exceeds this value. You might need to adjust the value up or down depending on the normal humidity range in your environment.
To add an action to the rule:
-1. Select **+ Email**.
+1. Select **Email**.
+
+1. For **Display name**, enter a friendly name for the action, such as **High humidity notification**.
-1. Enter *High humidity notification* as the friendly **Display name** for the action.
+1. For **To**, enter the email address that's associated with your account.
-1. Enter the email address associated with your account in **To**. If you use a different email, the address you use must be for a user who has been added to the application. The user also needs to sign in and out at least once.
+ If you use a different email address, the one you use must be for a user who has been added to the application. The user also needs to sign in and out at least once.
-1. Optionally, enter a note to include in text of the email.
+1. Optionally, enter a note to include in the text of the email.
1. Select **Done** to complete the action.

1. Select **Save** to save and activate the new rule.
- Within a few minutes, the specified email account should begin to receive emails. The application sends email each time a sensor indicates that the humidity level exceeded the value in your condition.
+ Within a few minutes, the specified email account should begin to receive messages. The application sends email each time a sensor indicates that the humidity level exceeded the value in your condition.
## Clean up resources
To add an action to the rule:
In this tutorial, you learned how to:
-* Use the Azure IoT Central **In-store analytics - checkout** template to create a retail store application
-* Customize the application settings
-* Create and customize IoT device templates
-* Connect devices to your application
-* Add rules and actions to monitor conditions
+* Use the Azure IoT Central *In-store analytics - checkout* template to create a retail store application.
+* Customize the application settings.
+* Create and customize IoT device templates.
+* Connect devices to your application.
+* Add rules and actions to monitor conditions.
-Now that you've created an Azure IoT Central condition monitoring application, here's the suggested next step:
+Now that you've created an Azure IoT Central condition-monitoring application, here's the suggested next step:
> [!div class="nextstepaction"] > [Customize the dashboard](./tutorial-in-store-analytics-customize-dashboard.md)
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
Title: 'Tutorial - Customize the dashboard in Azure IoT Central'
-description: 'This tutorial shows how to customize the dashboard in an IoT Central application, and manage devices.'
+ Title: Tutorial - Customize the dashboard in Azure IoT Central
+description: This tutorial shows how to customize the dashboard in an IoT Central application, and manage devices.
Last updated 06/14/2022
In this tutorial, you learn how to customize the dashboard in your Azure IoT Central in-store analytics application. Application operators can use the customized dashboard to run the application and manage the attached devices.
-In this tutorial, you learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
> * Customize image tiles on the dashboard
> * Arrange tiles to modify the layout
> * Add telemetry tiles to display conditions
In this tutorial, you learn how to:
## Prerequisites
-The builder should complete the tutorial to create the Azure IoT Central in-store analytics application and add devices:
-
-* [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) (Required)
+Before you begin, complete the following tutorial:
+* [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md)
## Change the dashboard name
-To customize the dashboard, you have to edit the default dashboard in your application. Also, you can create additional new dashboards. The first step to customize the dashboard in your application is to change the name.
+After you've created your condition-monitoring application, you can edit its default dashboard. You can also create additional dashboards.
-1. Navigate to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website.
+The first step in customizing the application dashboard is to change the name:
-1. Open the condition monitoring application that you created in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
+1. Go to the [Azure IoT Central application manager](https://aka.ms/iotcentral) website.
-1. Select **Dashboard settings** and enter **Name** for your dashboard and select **Save**.
+1. Open the condition-monitoring application that you created.
+1. Select **Dashboard settings**, enter a name for your dashboard, and then select **Save**.
+ ## Customize image tiles on the dashboard
-An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you drag, drop, and resize tiles to customize a dashboard layout. There are several types of tiles for displaying content. Image tiles contain images, and you can add a URL that enables users to click the image. Label tiles display plain text. Markdown tiles contain formatted content and let you set an image, a URL, a title, and markdown code that renders as HTML. Telemetry, property, or command tiles display device-specific data.
+An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you can drag, drop, and resize tiles to customize the dashboard layout.
+
+There are several types of tiles for displaying content:
+* **Image** tiles contain images, and you can add a URL that opens when a user selects the image.
+* **Label** tiles display plain text.
+* **Markdown** tiles contain formatted content and let you set an image, a URL, a title, and Markdown code that renders as HTML.
+* **Telemetry, property, or command** tiles display device-specific data.
-In this section, you learn how to customize image tiles on the dashboard.
+In this section, you customize image tiles on the dashboard.
To customize the image tile that displays a brand image on the dashboard:

1. Select **Edit** on the dashboard toolbar.
-1. Select **Edit** on the image tile that displays the Northwind brand image.
+1. Select **Edit** on the image tile that displays the Northwind Traders brand image.
-1. Change the **Title**. The title appears when a user hovers over the image.
+1. Change the **Title**. The title appears when you hover over the image.
-1. Select **Image**. A dialog opens and enables you to upload a custom image.
+1. Select **Image**. A window opens where you can upload a custom image or, optionally, specify a URL for the image.
-1. Optionally, specify a URL for the image.
-
-1. Select **Update**
+1. Select **Update**.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png" alt-text="Screenshot showing the in-store analytics application dashboard brand image tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png":::
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png" alt-text="Screenshot that shows the in-store analytics application dashboard brand image tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/brand-image-save.png":::
-1. Optionally, select **Configure** on the tile titled **Documentation**, and specify a URL for support content.
+1. Optionally, on the **Documentation** tile, select **Configure**, and then specify a URL that links to support content.
To customize the image tile that displays a map of the sensor zones in the store:
-1. Select **Configure** on the image tile that displays the default store zone map.
+1. On the image tile that displays the default store zone map, select **Configure**.
-1. Select **Image**, and use the dialog to upload a custom image of a store zone map.
+1. Select **Image**, and then upload a custom image of a store zone map.
1. Select **Update**.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png" alt-text="Screenshot showing the in-store analytics application dashboard store map tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png":::
- The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli. In this tutorial, you'll associate sensors with these zones to provide telemetry.
+The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli.
+
+In this tutorial, you'll associate sensors with these zones to provide telemetry.
## Arrange tiles to modify the layout
-A key step in customizing a dashboard is to rearrange the tiles to create a useful view. Application operators use the dashboard to visualize device telemetry, manage devices, and monitor conditions in a store. Azure IoT Central simplifies the application builder task of creating a dashboard. The dashboard edit mode enables you to quickly add, move, resize, and delete tiles. The **In-store analytics - checkout** application template also simplifies the task of creating a dashboard. It provides a working dashboard layout, with sensors connected, and tiles that display checkout line counts and environmental conditions.
+A key step in customizing a dashboard is to rearrange the tiles to create a useful view. Application operators use the dashboard to visualize device telemetry, manage devices, and monitor conditions in a store.
+
+Azure IoT Central simplifies the application builder task of creating a dashboard. By using the dashboard edit mode, you can quickly add, move, resize, and delete tiles.
+
+The *In-store analytics - checkout* application template also simplifies the task of creating a dashboard. The template provides a working dashboard layout, with sensors connected, and tiles that display checkout line counts and environmental conditions.
-In this section, you rearrange the dashboard in the **In-store analytics - checkout** application template to create a custom layout.
+In this section, you rearrange the dashboard tiles in the *In-store analytics - checkout* application template to create a custom layout.
To remove tiles that you don't plan to use in your application:

1. Select **Edit** on the dashboard toolbar.
-1. Select **ellipsis** and **Delete** to remove the following tiles: **Back to all zones**, **Visit store dashboard**, **Warm-up checkout zone**, **Cool-down checkout zone**, **Occupancy sensor settings**, **Thermostat settings**, **Wait time**, and **Environment conditions** and all three tiles associated with **Checkout 3**. The Contoso store dashboard doesn't use these tiles.
+1. For each of the following tiles, which the Contoso store dashboard doesn't use, select the ellipsis (**...**), and then select **Delete**:
+ * **Back to all zones**
+ * **Visit store dashboard**
+ * **Warm-up checkout zone**
+ * **Cool-down checkout zone**
+ * **Occupancy sensor settings**
+ * **Thermostat settings**
+ * **Wait time**
+ * **Environment conditions**
+ * **Checkout 3**: All three tiles associated with it
-1. Select **Save**. Removing unused tiles frees up space in the edit page, and simplifies the dashboard view for operators.
+1. Select **Save**. Removing unused tiles frees space on the edit page, and it simplifies the dashboard view for operators.
-After you remove unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles you add in a later step.
+After you've removed the unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles that you'll add later.
To rearrange the remaining tiles:

1. Select **Edit**.
-1. Select the **Occupancy firmware** tile and drag it to the right of the **Occupancy** battery tile.
+1. Drag the **Occupancy firmware** tile to the right of the **Occupancy** battery tile.
-1. Select the **Thermostat firmware** tile and drag it to the right of the **Thermostat** battery tile.
+1. Drag the **Thermostat firmware** tile to the right of the **Thermostat** battery tile.
1. Select **Save**.

1. View your layout changes.

## Add telemetry tiles to display conditions
-After you customize the dashboard layout, you're ready to add tiles to show telemetry. To create a telemetry tile, select a device template and device instance, then select device-specific telemetry to display in the tile. The **In-store analytics - checkout** application template includes several telemetry tiles in the dashboard. The four tiles in the two checkout zones display telemetry from the simulated occupancy sensor. The **People traffic** tile shows counts in the two checkout zones.
+After you customize the dashboard layout, you're ready to add tiles to display telemetry. To create a telemetry tile, select a device template and device instance, then select device-specific telemetry to display in the tile. The *In-store analytics - checkout* application template includes several telemetry tiles on the dashboard. The four tiles in the two checkout zones display telemetry from the simulated occupancy sensor. The **People traffic** tile shows counts in the two checkout zones.
-In this section, you add two more telemetry tiles to show environmental telemetry from the RuuviTag sensors you added in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
+In this section, you add two more telemetry tiles to display environmental telemetry from the RuuviTag sensors you added in the [Create an in-store analytics application in Azure IoT Central](./tutorial-in-store-analytics-create-app.md) tutorial.
To add tiles to display environmental data from the RuuviTag sensors:

1. Select **Edit**.
-1. Select `RuuviTag` in the **Device template** list.
+1. In the **Device template** list, select **RuuviTag**.
-1. Select a **Device instance** of one of the two RuuviTag sensors. In the example Contoso store, select `Zone 1 Ruuvi` to create a telemetry tile for Zone 1.
+1. Select a **Device instance** of one of the two RuuviTag sensors. In the Contoso store example, select **Zone 1 Ruuvi** to create a telemetry tile for Zone 1.
-1. Select `Relative humidity` and `Temperature` in the **Telemetry** list. These are the telemetry items that display for each zone on the tile.
+1. In the **Telemetry** list, select **Relative humidity** and **Temperature**, the telemetry items that are displayed for each zone on the tile.
-1. Select **Add tile**. A new tile appears to display combined humidity and temperature telemetry for the selected sensor.
+1. Select **Add tile**. This new tile displays combined humidity and temperature telemetry for the selected sensor.
-1. Select **Configure** on the new tile for the RuuviTag sensor.
+1. On the new tile for the RuuviTag sensor, select **Configure**.
-1. Change the **Title** to *Zone 1 environment*.
+1. Change the **Title** to **Zone 1 environment**.
1. Select **Update**.
-1. Repeat the previous steps to create a tile for the second sensor instance. Set the **Title** to *Zone 2 environment* and then select **Update configuration.**
+1. Repeat steps 1 through 8 to create a tile for the second sensor instance. For **Title**, enter **Zone 2 environment**, and then select **Update configuration**.
-1. Drag the tile titled **Zone 2 environment** below the **Thermostat firmware** tile.
+1. Drag the tile titled **Zone 2 environment** to below the **Thermostat firmware** tile.
-1. Drag the tile titled **Zone 1 environment** below the **People traffic** tile.
+1. Drag the tile titled **Zone 1 environment** to below the **People traffic** tile.
1. Select **Save**. The dashboard displays zone telemetry in the two new tiles.

To edit the **People traffic** tile to show telemetry for only two checkout zones:

1. Select **Edit**.
-1. Select **Edit** on the **People traffic** tile.
+1. On the **People traffic** tile, select **Edit**.
1. Remove the **count3** telemetry.
To edit the **People traffic** tile to show telemetry for only two checkout zones
1. Select **Save**. The updated dashboard displays counts for only your two checkout zones, which are based on the simulated occupancy sensor.

## Add command tiles to run commands
To add a command tile to reboot the gateway:
1. Select **Edit**.
-1. Select `C500` in the **Device template** list. It is the template for the Rigado C500 gateway.
+1. In the **Device template** list, select **C500**. It's the template for the Rigado C500 gateway.
1. Select the gateway instance in **Device instance**.
To add a command tile to reboot the gateway:
1. View your completed Contoso dashboard.
- :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png" alt-text="Screenshot showing the completed in-store analytics application dashboard." lightbox="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png":::
+ :::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png" alt-text="Screenshot that shows the completed in-store analytics application dashboard." lightbox="media/tutorial-in-store-analytics-customize-dashboard/completed-dashboard.png":::
1. Optionally, select the **Reboot** tile to run the reboot command on your gateway.
To add a command tile to reboot the gateway:
In this tutorial, you learned how to:
-* Change the dashboard name
-* Customize image tiles on the dashboard
-* Arrange tiles to modify the layout
-* Add telemetry tiles to display conditions
-* Add property tiles to display device details
-* Add command tiles to run commands
+* Change the dashboard name.
+* Customize image tiles on the dashboard.
+* Arrange tiles to modify the layout.
+* Add telemetry tiles to display conditions.
+* Add property tiles to display device details.
+* Add command tiles to run commands.
-Now that you've customized the dashboard in your Azure IoT Central in-store analytics application, here is the suggested next step:
+Now that you've customized the dashboard in your Azure IoT Central in-store analytics application, here's the suggested next step:
> [!div class="nextstepaction"] > [Export data and visualize insights](./tutorial-in-store-analytics-export-data-visualize-insights.md)
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
Title: Tutorial - Azure IoT Smart inventory management | Microsoft Docs
-description: This tutorial shows you how to deploy and use smart inventory management application template for IoT Central
+ Title: Tutorial - Azure IoT smart inventory management | Microsoft Docs
+description: This tutorial shows you how to deploy and use a smart inventory-management application template for IoT Central.
Last updated 06/13/2022
-# Tutorial: Deploy and walk through the smart inventory management application template
+# Tutorial: Deploy a smart inventory-management application template
-Inventory is the stock of goods a retailer holds. Inventory management is critical to ensure the right product is in the right place at the right time. A retailer must balance the costs of storing too much inventory against the costs of not having sufficient items in stock to meet demand.
+Inventory is the stock of goods that a retail business holds. As a retailer, you must balance the costs of storing too much inventory against the costs of having insufficient inventory to meet customer demand. It's critical to deploy smart inventory-management practices to ensure that the right products are in stock and in the right place at the right time.
-IoT data generated from radio-frequency identification (RFID) tags, beacons, and cameras provide opportunities to improve inventory management processes. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Create a smart inventory-management application
+> * Walk through the application
The benefits of smart inventory management include:
-- Reducing the risk of items being out of stock and ensuring the desired customer service level.
-- In-depth analysis and insights into inventory accuracy in near real time.
-- Tools to help decide on the right amount of inventory to hold to meet customer orders.
+- You reduce the risk of items being out of stock and ensure that you're reaching the desired customer service level.
+- You get in-depth analysis and insights into inventory accuracy in near real time.
+- You apply the right tools to help decide on the right amount of inventory to hold to meet customer orders.
-This application template focuses on device connectivity, and the configuration and management of RFID and Bluetooth low energy (BLE) reader devices.
+IoT data that you generate from radio-frequency identification (RFID) tags, beacons, and cameras gives you opportunities to improve inventory-management processes. You can combine telemetry that you've gathered from IoT sensors and devices with other data sources, such as weather and traffic information in cloud-based business intelligence systems.
+The application template that you'll create focuses on device connectivity, and it helps you configure and manage the RFID and Bluetooth low energy (BLE) reader devices.
-### RFID tags (1)
+## Smart inventory-management architecture
-RFID tags transmit data about an item through radio waves. RFID tags typically don't have a battery unless specified. Tags receive energy from the radio waves generated by the reader and transmit a signal back toward the RFID reader.
-### BLE tags (1)
+The preceding architecture diagram illustrates the smart inventory-management application workflow:
-Energy beacon broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers or installed services on smartphones and then transmitting that to the cloud.
+* (**1**) RFID tags
-### RFID and BLE readers (1)
+ RFID tags transmit data about an item through radio waves. RFID tags ordinarily don't have a battery, unless specified. Tags receive energy from radio waves that are generated by the reader and then transmit a signal back to the RFID reader.
-RFID reader converts the radio waves to a more usable form of data. Information collected from the tags is then stored in local edge server or sent to cloud using JSON-RPC 2.0 over MQTT.
-BLE reader also known as Access Points (AP) are similar to RFID reader. It's used to detect nearby Bluetooth signals and relay its message to local Azure IoT Edge or cloud using JSON-RPC 2.0 over MQTT.
-Many readers are capable of reading RFID and beacon signals, and providing additional sensor capability related to temperature, humidity, accelerometer, and gyroscope.
+* (**1**) BLE tags
-### Azure IoT Edge gateway (2)
+ An energy beacon broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers or installed services on smartphones and then transmitted to the cloud.
-Azure IoT Edge server provides a place to preprocess that data locally before sending it on to the cloud. We can also deploy cloud workloads artificial intelligence, Azure and third-party services, business logic using standard containers.
+* (**1**) RFID and BLE readers
-### Device management with IoT Central
+ An RFID reader converts the radio waves to a more usable form of data. Information that's collected from the tags is then stored on a local edge server or sent to the cloud via JSON-RPC 2.0 over Message Queuing Telemetry Transport (MQTT). (A sketch of such a message follows this list.)
-Azure IoT Central is a solution development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solution to achieve a digital feedback loop in inventory management.
+ BLE readers, also known as Access Points (AP), are similar to RFID readers. They're used to detect nearby Bluetooth signals and relay them to a local Azure IoT Edge instance or the cloud via JSON-RPC 2.0 over MQTT.
-### Business insights and actions using data egress (3)
+ Many readers can read RFID and beacon signals, and they can provide additional sensor capabilities, such as temperature, humidity, accelerometer, and gyroscope readings.
-IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. It can be achieved using webhook, service bus, event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
+* (**2**) Azure IoT Edge gateway
-In this tutorial, you learn how to,
+ The Azure IoT Edge server provides a place to preprocess the data locally before sending it on to the cloud. You can also deploy cloud workloads (artificial intelligence, Azure and third-party services, and business logic) by using standard containers.
-> [!div class="checklist"]
-> * create smart inventory management application
-> * walk through the application
+* Device management with IoT Central
+
+ Azure IoT Central is a solution-development platform that simplifies IoT device connectivity, configuration, and management. The platform significantly reduces the burden and costs of IoT device management, operations, and related developments. Customers and partners can build an end-to-end enterprise solution to achieve a digital feedback loop in inventory management.
+
+* (**3**) Business insights and actions using data egress
+
+ The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights that are based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application.
+
+ You can use a webhook, service bus, event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
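To make the reader-to-cloud hop concrete, here's a minimal sketch of the kind of JSON-RPC 2.0 message a reader might publish over MQTT, as referenced in the RFID and BLE readers item above. The broker host, topic, method name, and payload fields are illustrative assumptions, not a documented reader protocol; only the JSON-RPC 2.0 envelope (`jsonrpc`, `method`, `params`, `id`) is standard.

```bash
# Hypothetical example: publish a tag-read event as a JSON-RPC 2.0 request over MQTT.
# The broker host, topic, and method/params names are assumptions for illustration.
mosquitto_pub -h <edge-or-cloud-broker> -t readers/rfid01/events -m '{
  "jsonrpc": "2.0",
  "method": "tag_read",
  "params": { "tagId": "E200-1234-5678", "readerId": "rfid01", "rssi": -52 },
  "id": 1
}'
```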
## Prerequisites
-An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+An active Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Create smart inventory management application
+## Create a smart inventory-management application
-Create the application using the following steps:
+Create the application by doing the following:
-1. Navigate to the [Azure IoT Central Build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account. Select **Build** from the left-hand navigation bar and then select the **Retail** tab.
+1. Sign in to the [Azure IoT Central Build](https://aka.ms/iotcentral) site with a Microsoft personal, work, or school account. On the left pane, select **Build**, and then select the **Retail** tab.
1. Select **Create app** under **smart inventory management**.
To learn more, see [Create an IoT Central application](../core/howto-create-iot-
## Walk through the application
-The following sections walk you through the key features of the application:
+The following sections describe the key features of the application.
### Dashboard
-After you deploy the application, your default dashboard is a smart inventory management operator focused portal. Northwind Trader is a fictitious smart inventory provider managing warehouse with Bluetooth low energy (BLE) and retail store with Radio-frequency identification (RFID). In this dashboard, you'll see two different gateways providing telemetry about inventory along with associated commands, jobs, and actions that you can perform.
+After you deploy the application, your default dashboard is a smart, operator-focused, inventory-management portal. Northwind Trader is a fictitious smart inventory provider that manages its warehouse with Bluetooth low energy (BLE) and its retail store with RFID.
+
+On this dashboard are two different gateways, each providing telemetry about inventory, along with associated commands, jobs, and actions that you can perform.
-This dashboard is pre-configured to showcase the critical smart inventory management device operations activity.
-The dashboard is logically divided between two different gateway device management operations:
+This dashboard is preconfigured to display critical smart inventory-management device operations activity. It's logically divided between two separate gateway device-management operations:
* The warehouse is deployed with a fixed BLE gateway and BLE tags on pallets to track and trace inventory at a larger facility.
-* Retail store is implemented with a fixed RFID gateway and RFID tags at individual an item level to track and trace the stock in a store outlet.
-* View the gateway [location](../core/howto-use-location-data.md), status and related details.
-* You can easily track the total number of gateways, active, and unknown tags.
-* You can perform device management operations such as update firmware, disable sensor, enable sensor, update sensor threshold, update telemetry intervals and update device service contracts.
+
+* The retail store is implemented with a fixed RFID gateway and RFID tags at the item level to track and trace the inventory in a store outlet.
+* View the [gateway location](../core/howto-use-location-data.md), status, and related details.
+* You can easily track the total number of gateways, active tags, and unknown tags.
+* You can perform device management operations, such as:
+ * Update firmware
+ * Enable or disable sensors
+ * Update sensor threshold
+ * Update telemetry intervals
+ * Update device service contracts
+ * Gateway devices can perform on-demand inventory management with a complete or incremental scan.
-### Device Template
+### Device templates
-Select the Device templates tab, and you'll see the gateway capability model. A capability model is structured around two different interfaces **Gateway Telemetry and Property** and **Gateway Commands**
+Select the **Device templates** tab to display the gateway capability model. A capability model is structured around two separate interfaces:
-**Gateway Telemetry and Property** - This interface represents all the telemetry related to sensors, location, device info, and device twin property capability such as gateway thresholds and update intervals.
+* **Gateway Telemetry and Property**: This interface displays the telemetry that's related to sensors, location, device info, and device twin property capability, such as gateway thresholds and update intervals.
-**Gateway Commands** - This interface organizes all the gateway command capabilities
+* **Gateway Commands**: This interface organizes all the gateway command capabilities.
### Rules
-Select the rules tab to see two different rules that exist in this application template. These rules are configured to email notifications to the operators for further investigations.
+Select the **Rules** tab to display two rules that exist in this application template. The rules are configured to email notifications to the operators for further investigation.
-**Gateway offline**: This rule will trigger if the gateway doesn't report to the cloud for a prolonged period. Gateway could be unresponsive because of low battery mode, loss of connectivity, device health.
+* **Gateway offline**: This rule is triggered if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of a low battery, loss of connectivity, or poor device health.
-**Unknown tags**: It's critical to track every RFID and BLE tags associated with an asset. If the gateway is detecting too many unknown tags, it's an indication of synchronization challenges with tag sourcing applications.
+* **Unknown tags**: It's critical to track all RFID and BLE tags that are associated with an asset. If the gateway detects too many unknown tags, it's an indication of synchronization challenges with tag-sourcing applications.
## Clean up resources
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
Authenticating access by using Azure AD and controlling permissions by using Azu
## Authentication and authorization
-When an Azure AD security principal requests access to an Azure IoT Hub Device Provisioning Service (DPS) API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://iothubs.azure.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
+When an Azure AD security principal requests access to an Azure IoT Hub Device Provisioning Service (DPS) API, the principal's identity is first *authenticated*. For authentication, the request needs to contain an OAuth 2.0 access token at runtime. The resource name for requesting the token is `https://azure-devices-provisioning.net`. If the application runs in an Azure resource like an Azure VM, Azure Functions app, or Azure App Service app, it can be represented as a [managed identity](../active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md).
After the Azure AD principal is authenticated, the next step is *authorization*. In this step, Azure IoT Hub Device Provisioning Service (DPS) uses the Azure AD role assignment service to determine what permissions the principal has. If the principal's permissions match the requested resource or API, Azure IoT Hub Device Provisioning Service (DPS) authorizes the request. So this step requires one or more Azure roles to be assigned to the security principal. Azure IoT Hub Device Provisioning Service (DPS) provides some built-in roles that have common groups of permissions.
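As a quick illustration, you can request such a data-plane token yourself with the Azure CLI and inspect it. This is a minimal sketch, assuming you're signed in with a principal that has an appropriate Azure role assigned; note that the resource name matches the one described above.

```azurecli
# Request an OAuth 2.0 access token for the DPS data plane.
az account get-access-token --resource https://azure-devices-provisioning.net --query accessToken --output tsv
```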
iot-hub-device-update Deploy Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/deploy-update.md
An Azure CLI environment:
# [Azure CLI](#tab/cli)
+
+
+Use the [`az iot du device group list`](/cli/azure/iot/du/device/group#az-iot-du-device-group-list) command to verify the best available update for your group. The command takes the following arguments:
+
+* `--account`: The Device Update account name.
+* `--instance`: The Device Update instance name.
+* `--group-id`: The device group ID that you're targeting with this deployment. This ID is the value of the **ADUGroup** tag, or `$default` for devices with no tag.
+* `--best-updates`: This flag indicates that the command should fetch the best available updates for the device group, including a count of how many devices need each update.
+* `--resource-group -g`: The Device Update account resource group name.
+* `--update-compliance`: This flag indicates that the command should fetch device group update compliance information, such as how many devices are on their latest update, how many need new updates, and how many are in progress receiving a new update.
+
+```azurecli
+az iot du device group list \
+ --account <Device Update account name> \
+ --instance <Device Update instance name> \
+ --group-id <device group id> \
+ --best-updates {false, true}
+```
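Based on the argument list above, a compliance-focused variant of the same command might look like the following sketch:

```azurecli
az iot du device group list \
  --account <Device Update account name> \
  --instance <Device Update instance name> \
  --resource-group <Device Update account resource group> \
  --group-id <device group id> \
  --update-compliance {false, true}
```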
+ Use [az iot du device deployment create](/cli/azure/iot/du/device/deployment#az-iot-du-device-deployment-create) to create a deployment for a device group. The `device deployment create` command takes the following arguments:
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
Title: Set up a lab to teach data science with Python and Jupyter Notebooks | Microsoft Docs description: Learn how to set up a lab to teach data science using Python and Jupyter Notebooks. - Last updated 01/04/2022
lab-services Class Type Matlab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-matlab.md
Title: Set up a lab to teach MATLAB with Azure Lab Services | Microsoft Docs description: Learn how to set up a lab to teach MATLAB with Azure Lab Services.- Last updated 04/06/2022
lab-services Class Type Rstudio Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-linux.md
Title: Set up a lab with R and RStudio on Linux using Azure Lab Services description: Learn how to set up labs to teach R using RStudio on Linux- Last updated 08/25/2021
lab-services Class Type Rstudio Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-rstudio-windows.md
Title: Set up a lab with R and RStudio on Windows using Azure Lab Services description: Learn how to set up labs to teach R using RStudio on Windows- Last updated 08/26/2021
lab-services Class Type Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-sql-server.md
Title: Set up a lab to manage and develop with Azure SQL Database | Azure Lab Services description: Learn how to set up a lab to manage and develop with Azure SQL Database.- Last updated 06/26/2020
lab-services Classroom Labs Fundamentals 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals-1.md
Title: Architecture fundamentals with lab accounts in Azure Lab Services | Microsoft Docs description: This article will cover the fundamental resources used by Lab Services and basic architecture of a lab that uses lab accounts. - Last updated 05/30/2022
lab-services Classroom Labs Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-fundamentals.md
Title: Architecture Fundamentals in Azure Lab Services | Microsoft Docs description: This article will cover the fundamental resources used by Lab Services and basic architecture of a lab. - Last updated 05/30/2022
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
Title: Use external file storage in Azure Lab Services | Microsoft Docs description: Learn how to set up a lab that uses external file storage in Lab Services. - Last updated 03/30/2021
lab-services How To Create A Lab With Shared Resource 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource-1.md
Title: How to Create a lab with a shared resource when using lab accounts | Azure Lab Services description: Learn how to create a lab that requires a resource shared among the students when using lab accounts. - Last updated 03/03/2022
lab-services How To Create A Lab With Shared Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-a-lab-with-shared-resource.md
Title: How to Create a Lab with a Shared Resource | Azure Lab Services description: Learn how to create a lab that requires a resource shared among the students. - Last updated 07/04/2022
lab-services How To Enable Nested Virtualization Template Vm Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-nested-virtualization-template-vm-ui.md
Title: Enable nested virtualization on a template VM in Azure Lab Services (UI) | Microsoft Docs description: Learn how to create a template VM with multiple VMs inside. In other words, enable nested virtualization on a template VM in Azure Lab Services. - Last updated 01/27/2022
lab-services How To Prepare Windows Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md
Title: Guide to setting up a Windows template machine | Microsoft Docs description: Generic steps to prepare a Windows template machine in Lab Services. These steps include setting Windows Update schedule, installing OneDrive, and installing Office.- Last updated 06/26/2020
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Cross-region load balancer routes the traffic to the appropriate regional load b
* Private or internal load balancer can't be added to the backend pool of a cross-region load balancer
-* Cross-region IPv6 frontend IP configurations aren't supported.
+* NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6).
* UDP traffic isn't supported on Cross-region Load Balancer. * A health probe can't be configured currently. A default health probe automatically collects availability information about the regional load balancer every 20 seconds.
-* Currently, regional load load balancers with floating IP enabled aren't supported by the cross-region load balancer.
+* Currently, regional load balancers with floating IP enabled aren't supported by the cross-region load balancer.
## Pricing and SLA
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
Azure Migrate provides a simplified migration, modernization, and optimization s
- **Unified migration platform**: A single portal to start, run, and track your migration to Azure. - **Range of tools**: A range of tools for assessment and migration. Azure Migrate tools include *Azure Migrate: Discovery and assessment* and *Migration and modernization*. Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings. - **Assessment, migration, and modernization**: In the Azure Migrate hub, you can assess, migrate, and modernize:
- - **Servers, databases and web apps**: Assess on-premises servers including web apps and SQL Server instances and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview).
+ - **Servers, databases and web apps**: Assess on-premises servers including web apps and SQL Server instances and migrate them to Azure.
- **Databases**: Assess on-premises SQL Server instances and databases to migrate them to an SQL Server on an Azure VM or an Azure SQL Managed Instance or to an Azure SQL Database. - **Web applications**: Assess on-premises web applications and migrate them to Azure App Service and Azure Kubernetes Service. - **Virtual desktops**: Assess your on-premises virtual desktop infrastructure (VDI) and migrate it to Azure Virtual Desktop.
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
Conditionally supported Windows operating system | The operating system has pass
Unsupported Windows operating system | Azure supports only [selected Windows OS versions](/troubleshoot/azure/virtual-machines/server-software-support). Consider upgrading the server before you migrate to Azure. Conditionally endorsed Linux OS | Azure endorses only [selected Linux OS versions](../virtual-machines/linux/endorsed-distros.md). Consider upgrading the server before you migrate to Azure. [Learn more](#linux-vms-are-conditionally-ready-in-an-azure-vm-assessment). Unendorsed Linux OS | The server might start in Azure, but Azure provides no operating system support. Consider upgrading to an [endorsed Linux version](../virtual-machines/linux/endorsed-distros.md) before you migrate to Azure.
-Unknown operating system | The operating system of the VM was specified as **Other** in vCenter Server. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Ensure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
+Unknown operating system | The operating system of the VM was specified as **Other** in vCenter Server or could not be identified as a known OS in Azure Migrate. This behavior blocks Azure Migrate from verifying the Azure readiness of the VM. Ensure that the operating system is [supported](./migrate-support-matrix-vmware-migration.md#azure-vm-requirements) by Azure before you migrate the server.
Unsupported bit version | VMs with a 32-bit operating system might boot in Azure, but we recommend that you upgrade to 64-bit before you migrate to Azure. Requires a Microsoft Visual Studio subscription | The server is running a Windows client operating system, which is supported only through a Visual Studio subscription. VM not found for the required storage performance | The storage performance (input/output operations per second (IOPS) and throughput) required for the server exceeds Azure VM support. Reduce storage requirements for the server before migration.
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
Using your favorite IDE, create a new Java project, and add a *pom.xml* file in
</dependency> <dependency> <groupId>com.azure</groupId>
- <artifactId>azure-identity-providers-jdbc-mysql</artifactId>
- <version>1.0.0-beta.1</version>
+ <artifactId>azure-identity-extensions</artifactId>
+ <version>1.0.0</version>
</dependency> </dependencies> </project>
Run the following script in the project root directory to create a *src/main/res
mkdir -p src/main/resources && touch src/main/resources/database.properties cat << EOF > src/main/resources/database.properties
-url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin
user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME} EOF ``` > [!NOTE]
-> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin" in the url.
+> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin" in the url.
```bash mkdir -p src/main/resources && touch src/main/resources/database.properties cat << EOF > src/main/resources/database.properties
-url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+url=jdbc:mysql://${AZ_DATABASE_NAME}.mysql.database.azure.com:3306/demo?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin
user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME} EOF ```
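Before running the application, you can optionally sanity-check the Azure AD setup from the shell. This sketch assumes the `mysql` client is installed and that your signed-in Azure CLI identity corresponds to the Azure AD user; it fetches an access token for Azure Database for MySQL and passes it as the password:

```bash
# Fetch an Azure AD access token for Azure Database for MySQL and connect with it.
mysql -h ${AZ_DATABASE_NAME}.mysql.database.azure.com \
  --user ${AZ_MYSQL_AD_NON_ADMIN_USERNAME} \
  --enable-cleartext-plugin \
  --password=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
```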
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
Using your favorite IDE, create a new Java project using Java 8 or above. Create
</dependency> <dependency> <groupId>com.azure</groupId>
- <artifactId>azure-identity-providers-jdbc-mysql</artifactId>
- <version>1.0.0-beta.1</version>
+ <artifactId>azure-identity-extensions</artifactId>
+ <version>1.0.0</version>
</dependency> </dependencies> </project>
Run the following script in the project root directory to create a *src/main/res
mkdir -p src/main/resources && touch src/main/resources/database.properties cat << EOF > src/main/resources/database.properties
-url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin
user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} EOF ``` > [!NOTE]
-> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin" in the url.
+> If you are using MysqlConnectionPoolDataSource class as the datasource in your application, please remove "defaultAuthenticationPlugin=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin" in the url.
```bash mkdir -p src/main/resources && touch src/main/resources/database.properties cat << EOF > src/main/resources/database.properties
-url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.providers.mysql.AzureIdentityMysqlAuthenticationPlugin
+url=jdbc:mysql://${AZ_DATABASE_SERVER_NAME}.mysql.database.azure.com:3306/${AZ_DATABASE_NAME}?sslMode=REQUIRED&serverTimezone=UTC&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin
user=${AZ_MYSQL_AD_NON_ADMIN_USERNAME}@${AZ_DATABASE_SERVER_NAME} EOF ```
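To illustrate the note above: with `MysqlConnectionPoolDataSource`, supply the plugin through `authenticationPlugins` only and omit the `defaultAuthenticationPlugin` parameter. A minimal sketch, assuming placeholder server, database, and user values:

```java
import java.sql.Connection;
import com.mysql.cj.jdbc.MysqlConnectionPoolDataSource;

public class PoolDataSourceDemo {
    public static void main(String[] args) throws Exception {
        MysqlConnectionPoolDataSource dataSource = new MysqlConnectionPoolDataSource();
        // Note: no defaultAuthenticationPlugin parameter in this URL.
        dataSource.setUrl("jdbc:mysql://<server>.mysql.database.azure.com:3306/<database>"
                + "?sslMode=REQUIRED&serverTimezone=UTC"
                + "&authenticationPlugins=com.azure.identity.extensions.jdbc.mysql.AzureMysqlAuthenticationPlugin");
        dataSource.setUser("<aad-user>@<server>"); // single server user name format

        try (Connection connection = dataSource.getPooledConnection().getConnection()) {
            System.out.println("Connected: " + connection.isValid(5));
        }
    }
}
```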
partner-solutions New Relic Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-create.md
+
+ Title: Create an instance of Azure Native New Relic Service Preview
+description: Learn how to create a resource by using Azure Native New Relic Service.
++ Last updated : 01/16/2023++
+# Quickstart: Get started with Azure Native New Relic Service Preview
+
+In this quickstart, you create an instance of Azure Native New Relic Service Preview. You can either [create a New Relic account](new-relic-create.md) or [link to an existing New Relic account](new-relic-link-to-existing.md).
+
+When you use the integrated New Relic experience in the Azure portal by using Azure Native New Relic Service, the service creates and maps the following entities for monitoring and billing purposes.
++
+- **New Relic resource in Azure**: By using the New Relic resource, you can manage the New Relic account on Azure. The resource is created in the Azure subscription and resource group that you select during the creation process or linking process.
+- **New Relic organization**: The New Relic organization on New Relic software as a service (SaaS) is used for user management and billing.
+- **New Relic account**: The New Relic account on New Relic SaaS is used to store and process telemetry data.
+- **Azure Marketplace SaaS resource**: When you set up a new account and organization on New Relic by using Azure Native New Relic Service, the SaaS resource is created automatically, based on the plan that you select from the Azure New Relic offer in Azure Marketplace. This resource is used for billing.
+
+## Prerequisites
+
+Before you link the subscription to New Relic, complete the pre-deployment configuration. For more information, see [Configure pre-deployment for Azure Native New Relic Service](new-relic-how-to-configure-prereqs.md).
+
+## Find an offer
+
+Use the Azure portal to find the Azure Native New Relic Service application:
+
+1. Go to the [Azure portal](https://portal.azure.com/) and sign in.
+
+1. If you visited Azure Marketplace in a recent session, select the icon from the available options. Otherwise, search for **marketplace** and then select the **Marketplace** result under **Services**.
+
+ :::image type="content" source="media/new-relic-create/new-relic-search.png" alt-text="Screenshot that shows entering the word Marketplace in a search box.":::
+
+1. In Azure Marketplace, search for **new relic** and select the **Azure Native New Relic Service** result. The page for the service opens.
+
+ :::image type="content" source="media/new-relic-create/new-relic-marketplace.png" alt-text="Screenshot that shows Azure Native New Relic Service in Azure Marketplace.":::
+
+1. Select **Subscribe**.
+
+## Create a New Relic resource on Azure
+
+1. When you're creating a New Relic resource, you have two options. One creates a New Relic account, and the other links an Azure subscription to an existing New Relic account. For this example, select **Create** under the **Create a New Relic resource** option.
+
+ :::image type="content" source="media/new-relic-create/new-relic-create.png" alt-text="Screenshot that shows New Relic resources.":::
+
+1. A form to create a New Relic resource appears on the **Basics** tab.
+
+ :::image type="content" source="media/new-relic-create/new-relic-basics.png" alt-text="Screenshot that shows the tab for basic information about a New Relic resource.":::
+
+1. Provide the following values:
+
+ | Property | Description |
+ |--|--|
+ | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. You must have owner access.|
+ | **Resource group** |Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.|
+ | **Resource name** |Specify a name for the New Relic resource. This name will be the friendly name of the New Relic account.|
+ | **Region** |Select the region where the New Relic resource on Azure and the New Relic account will be created.|
+
+1. When you're choosing the organization under which to create the New Relic account, you have two options: create a new organization, or select an existing organization to link the newly created account to.
+
+ > [!IMPORTANT]
+ > Currently, you can't use the **Associate with existing** functionality. The ability to create a new New Relic resource and associate it with an existing organization is disabled.
+
+ If you opt to create a new organization, you can choose a plan from the list of available plans by selecting **Change Plan** on the working pane.
+
+ :::image type="content" source="media/new-relic-create/new-relic-change-plan.png" alt-text="Screenshot of the panel for changing a plan.":::
+
+
+## Configure metrics and logs
+
+Your next step is to configure metrics and logs on the **Logs** tab. When you're creating the New Relic resource, you can set up automatic log forwarding for two types of logs:
+
+1. To send subscription-level logs to New Relic, select **Subscription activity logs**. If you leave this option cleared, no subscription-level logs will be sent to New Relic.
+
+ These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). These logs also include updates on service-health events.
+
+ Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription.
+
+1. To send Azure resource logs to New Relic, select **Azure resource logs** for all supported resource types. The types of Azure resource logs are listed in [Azure Monitor Resource Log categories](/azure/azure-monitor/essentials/resource-logs-categories).
+
+ These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
+
+ :::image type="content" source="media/new-relic-create/new-relic-metrics.png" alt-text="Screenshot of the tab for logs in a New Relic resource, with resource logs selected.":::
+
+1. When the checkbox for Azure resource logs is selected, logs are forwarded for all resources by default. To filter the set of Azure resources that send logs to New Relic, use inclusion and exclusion rules and set Azure resource tags:
+
+ - All Azure resources with tags defined in include rules send logs to New Relic.
+ - All Azure resources with tags defined in exclude rules don't send logs to New Relic.
+ - If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
+
+ Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+
+ > [!NOTE]
+ > You can collect metrics for virtual machines and app services by installing the New Relic agent after you create the New Relic resource.
+
+1. After you finish configuring metrics and logs, select **Next**.
+
+## Set up resource tags
+
+On the **Tags** tab, you can choose to set up tags for the New Relic resource.
++
+You can also skip this step and go directly to the **Review and Create** tab.
+
+## Review and create the resource
+
+1. On the **Review and Create** tab, review the resource setup information.
+
+ :::image type="content" source="media/new-relic-create/new-relic-review.png" alt-text="Screenshot of the tab for reviewing and creating a New Relic resource.":::
+
+1. Ensure that you've passed validation, and then select **Create** to begin the resource deployment.
+
+## Next steps
+
+- [Manage the New Relic resource](new-relic-how-to-manage.md)
partner-solutions New Relic How To Configure Prereqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-configure-prereqs.md
+
+ Title: Configure pre-deployment on Azure Native New Relic Service Preview
+description: Learn how to configure New Relic in Azure Marketplace before deployment.
++ Last updated : 01/16/2023++
+# Configure pre-deployment for Azure Native New Relic Service Preview
+
+This article describes the prerequisites that you must complete in your Azure subscription before you create your first New Relic resource on Azure.
+
+## Access control
+
+To set up New Relic on Azure, you must have owner access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access) before you start the setup.
+
+## Resource provider registration
+
+To set up New Relic on Azure, you need to register the `NewRelic.Observability` resource provider in the specific Azure subscription:
+
+- To register the resource provider in the Azure portal, follow the steps in [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
+
+- To register the resource provider in the Azure CLI, use this command:
+
+ ```azurecli
+ az provider register --namespace NewRelic.Observability --subscription <subscription-id>
+ ```
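+
If you prefer to register the provider programmatically, here's a rough sketch using the Azure Resource Manager SDK for Java (`azure-resourcemanager`); the `DefaultAzureCredential` sign-in and the placeholder subscription ID are assumptions, and the Azure CLI command above remains the documented path.

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;

public class RegisterNewRelicProvider {
    public static void main(String[] args) {
        // Authenticate with the default credential chain (environment, CLI, managed identity, ...).
        AzureResourceManager azure = AzureResourceManager
                .authenticate(new DefaultAzureCredentialBuilder().build(),
                        new AzureProfile(AzureEnvironment.AZURE))
                .withSubscription("<subscription-id>"); // placeholder

        // Register the NewRelic.Observability resource provider in the subscription.
        azure.providers().register("NewRelic.Observability");
    }
}
```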
+
+## Next steps
+
+- [Quickstart: Get started with New Relic](new-relic-create.md)
+- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)
+
partner-solutions New Relic How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-how-to-manage.md
+
+ Title: Manage Azure Native New Relic Service Preview
+description: Learn how to manage your Azure Native New Relic Service settings.
++ Last updated : 01/16/2023+++
+# Manage Azure Native New Relic Service Preview
+
+This article describes how to manage the settings for Azure Native New Relic Service Preview.
+
+## Resource overview
+
+To see the details of your New Relic resource, select **Overview** on the left pane.
++
+The details include:
+
+- Resource group
+- Region
+- Subscription
+- Tags
+- New Relic account
+- New Relic organization
+- Status
+- Pricing plan
+- Billing term
+
+At the bottom:
+
+- The **Get started** tab provides deep links to New Relic dashboards, logs, and alerts.
+- The **Monitoring** tab provides a summary of the resources that send logs and metrics to New Relic.
+
+If you select **Monitored resources**, the pane that opens includes a table with information about the Azure resources that are sending logs and metrics to New Relic.
++
+The columns in the table denote valuable information for your resource:
+
+|Property | Description |
+|||
+| **Resource type** | Azure resource type |
+| **Total resources** | Count of all resources for the resource type |
+| **Logs to New Relic** | Count of logs for the resource type |
+| **Metrics to New Relic** | Count of resources that are sending metrics to New Relic through the integration |
+
+## Reconfigure rules for logs or metrics
+
+To change the configuration rules for logs or metrics, select **Metrics and Logs** in the Resource menu.
++
+For more information, see [Configure metrics and logs](new-relic-create.md#configure-metrics-and-logs).
+
+## View monitored resources
+
+To see the list of resources that are sending metrics and logs to New Relic, select **Monitored resources** on the left pane.
++
+You can filter the list of resources by resource type, resource group name, region, and whether the resource is sending metrics and logs.
+
+The column **Logs to New Relic** indicates whether the resource is sending logs to New Relic. If the resource isn't sending logs, the reasons could be:
+
+- **Resource does not support sending logs**: Only resource types with monitoring log categories can be configured to send logs. See [Supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+- **Limit of five diagnostic settings reached**: Each Azure resource can have a maximum of five diagnostic settings. For more information, see [Diagnostic settings](/cli/azure/monitor/diagnostic-settings).
+- **Error**: The resource is configured to send logs to New Relic but is blocked by an error.
+- **Logs not configured**: Only Azure resources that have the appropriate resource tags are configured to send logs to New Relic.
+- **Agent not configured**: Virtual machines or app services without the New Relic agent installed don't send logs to New Relic.
+
+The column **Metrics to New Relic** indicates whether New Relic is receiving metrics that correspond to this resource.
+
+## Monitor virtual machines by using the New Relic agent
+
+You can install the New Relic agent on virtual machines as an extension. Select **Virtual Machines** on the left pane. The **Virtual machine agent** pane shows a list of all virtual machines in the subscription.
++
+For each virtual machine, the following info appears:
+
+ | Property | Description |
+ |--|--|
+ | **Virtual machine name** | Name of the virtual machine. |
+ | **Resource status** | Indicates whether the virtual machine is stopped or running. The New Relic agent can be installed only on virtual machines that are running. If the virtual machine is stopped, the option to install the New Relic agent is disabled. |
+ | **Agent status** | Indicates whether the New Relic agent is running on the virtual machine. |
+ | **Agent version** | Version number of the New Relic agent. |
+
+> [!NOTE]
+> If a virtual machine shows that an agent is installed, but the option **Uninstall extension** is disabled, the agent was configured through a different New Relic resource in the same Azure subscription. To make any changes, go to the other New Relic resource in the Azure subscription.
+
+## Monitor app services by using the New Relic agent
+
+You can install the New Relic agent on app services as an extension. Select **App Services** on the left pane. The working pane shows a list of all app services in the subscription.
++
+For each app service, the following information appears:
+
+ |Property | Description |
+ |--|-|
+ | **Resource name** | App service name.|
+ | **Resource status** | Indicates whether the app service is running or stopped. The New Relic agent can be installed only on app services that are running.|
+ | **App Service plan** | The plan that's configured for the app service.|
+ | **Agent status** | Status of the agent. |
+
+To install the New Relic agent, select the app service and then select **Install Extension**. The application settings for the selected app service are updated, and the app service is restarted to complete the configuration of the New Relic agent.
+
+> [!NOTE]
+> App Service extensions are currently supported only for app services that are running on Windows operating systems. The list doesn't show app services that use Linux operating systems.
+
+> [!NOTE]
+> This page currently shows only the web app type of app services. Managing agents for function apps is not supported at this time.
+
+## Delete a New Relic resource
+
+1. Select **Overview** on the left pane. Then, select **Delete**.
+
+ :::image type="content" source="media/new-relic-how-to-manage/new-relic-delete.png" alt-text="Screenshot of the delete button on a resource overview.":::
+
+1. Confirm that you want to delete the New Relic resource. Select **Delete**.
+
+If only one New Relic resource is mapped to a New Relic account, logs and metrics are no longer sent to New Relic.
+
+For a New Relic organization where billing is managed through Azure Marketplace, deleting the last associated New Relic resource also removes the corresponding Azure Marketplace billing relationship.
+
+If you map more than one New Relic resource to the New Relic account by using the link option, deleting a New Relic resource stops log forwarding only for the Azure resources associated with that New Relic resource. Because other Azure Native New Relic Service resources are linked with this New Relic account, billing continues through Azure Marketplace.
+
+## Next steps
+
+- [Troubleshoot Azure Native New Relic Service](new-relic-troubleshoot.md)
+- [Quickstart: Get started with New Relic](new-relic-create.md)
partner-solutions New Relic Link To Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-link-to-existing.md
+
+ Title: Link Azure Native New Relic Service Preview to an existing account
+description: Learn how to link to an existing New Relic account.
+++ Last updated : 01/16/2023+++
+# Quickstart: Link to an existing New Relic account
+
+In this quickstart, you link an Azure subscription to an existing New Relic account. You can then monitor the linked Azure subscription and the resources in that subscription by using the New Relic account.
+
+> [!NOTE]
+> You can link New Relic accounts that you previously created by using Azure Native New Relic Service Preview.
+
+When you use Azure Native New Relic Service Preview in the Azure portal for linking, and both the organization and the account at New Relic were created through the Azure Native New Relic Service, your billing and monitoring for the following entities are tracked in the portal.
+++
+- **New Relic resource in Azure**: By using the New Relic resource, you can manage the New Relic account on Azure. The resource is created in the Azure subscription and resource group that you select during the linking process.
+- **New Relic account**: When you choose to link an existing account on New Relic software as a service (SaaS), a New Relic resource is created on Azure.
+- **New Relic organization**: The New Relic organization on New Relic SaaS is used for user management and billing.
+- **Azure Marketplace SaaS resource**: The SaaS resource is used for billing. The SaaS resource typically resides in a different Azure subscription from where the New Relic account was created.
+
+> [!NOTE]
+> The Azure Marketplace SaaS resource is set up only if you created the New Relic organization by using Azure Native New Relic Service. If you created your New Relic organization directly from the New Relic portal, the Azure Marketplace SaaS resource doesn't exist, and New Relic will manage your billing.
+
+## Find an offer
+
+1. Use the Azure portal to find Azure Native New Relic Service. Go to the [Azure portal](https://portal.azure.com/) and sign in.
+
+1. If you visited Azure Marketplace in a recent session, select the icon from the available options. Otherwise, search for **marketplace** and then select the **Marketplace** result under **Services**.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-search.png" alt-text="Screenshot that shows the word Marketplace typed in a search box.":::
+
+1. In Azure Marketplace, search for **new relic**.
+
+1. When you find Azure Native New Relic Service on the working pane, select **Subscribe**.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-service-monitoring.png" alt-text="Screenshot that shows Azure Native New Relic Service and Cloud Monitoring in Azure Marketplace.":::
+
+## Link to an existing New Relic account
+
+1. When you're creating a New Relic resource, you have two options. One creates a New Relic account, and the other links an Azure subscription to an existing New Relic account. When you complete this process, you create a New Relic resource on Azure that links to an existing New Relic account.
+
+ For this example, use the **Link an existing New Relic resource** option and select **Create**.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-create.png" alt-text="Screenshot that shows two options for creating a New Relic resource on Azure.":::
+
+1. A form to create the New Relic resource appears on the **Basics** tab. Select an existing account in **New Relic account**.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-account.png" alt-text="Screenshot that shows the tab for basic information about linking an existing New Relic account.":::
+
+1. Provide the following values:
+
+ |Property | Description |
+ |||
+ | **Subscription** | Select the Azure subscription that you want to use for creating the New Relic resource. This subscription will be linked to the New Relic account for monitoring purposes.|
+ | **Resource group** | Specify whether you want to create a new resource group or use an existing one. A [resource group](/azure/azure-resource-manager/management/overview) is a container that holds related resources for an Azure solution.|
+ | **Resource name** | Specify a name for the New Relic resource.|
+ | **Region** | Select the Azure region where the New Relic resource should be created.|
+ | **New Relic account** | The Azure portal displays a list of existing accounts that can be linked. Select the desired account from the available options.|
+
+1. After you select a New Relic account, the New Relic billing details appear for your reference. The user who is performing the linking action should have global administrator permissions on the New Relic account that's being linked.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-form.png" alt-text="Screenshot that shows the Basics tab and New Relic account details in a red box.":::
+
+1. Select **Next**.
+
+## Configure metrics and logs
+
+Your next step is to configure metrics and logs on the **Metrics + Logs** tab. When you're linking an existing New Relic account, you can set up automatic log forwarding for two types of logs:
+
+- **Send subscription activity logs**: These logs provide insight into the operations on your resources at the [control plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). The logs also include updates on service-health events.
+
+ Use the activity log to determine what, who, and when for any write operations (`PUT`, `POST`, `DELETE`). There's a single activity log for each Azure subscription.
+
+- **Azure resource logs**: These logs provide insight into operations that were taken on an Azure resource at the [data plane](/azure/azure-resource-manager/management/control-plane-and-data-plane). For example, getting a secret from a key vault is a data plane operation. Making a request to a database is also a data plane operation. The content of resource logs varies by the Azure service and resource type.
++
+1. To send Azure resource logs to New Relic, select **Send Azure resource logs for all defined resources**. The types of Azure resource logs are listed in [Azure Monitor resource log categories](/azure/azure-monitor/essentials/resource-logs-categories).
+
+1. When the checkbox for Azure resource logs is selected, logs are forwarded for all resources by default. To filter the set of Azure resources that are sending logs to New Relic, use inclusion and exclusion rules and set the Azure resource tags:
+
+ - All Azure resources with tags defined in include rules send logs to New Relic.
+ - All Azure resources with tags defined in exclude rules don't send logs to New Relic.
+ - If there's a conflict between inclusion and exclusion rules, the exclusion rule applies.
+
+ Azure charges for logs sent to New Relic. For more information, see the [pricing of platform logs](https://azure.microsoft.com/pricing/details/monitor/) sent to Azure Marketplace partners.
+
+ > [!NOTE]
+ > You can collect metrics for virtual machines and app services by installing the New Relic agent after you create the New Relic resource and link an existing New Relic account to it.
+1. After you finish configuring metrics and logs, select **Next**.
+
+## Add tags
+
+1. On the **Tags** tab, you can add tags for your New Relic resource. Provide name/value pairs for the tags to apply to the New Relic resource.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-tags.png" alt-text="Screenshot that shows the tab for adding tags, with names and values to complete.":::
+
+1. When you finish adding tags, select **Next**.
+
+## Review and create
+
+1. On the **Review + Create** tab, review your selections and the terms of use.
+
+ :::image type="content" source="media/new-relic-link-to-existing/new-relic-link-create.png" alt-text="Screenshot that shows the tab for reviewing and creating a New Relic resource, with a summary of completed information.":::
+
+1. After validation finishes, select **Create**. Azure deploys the New Relic resource. When that process finishes, select **Go to resource** to see the New Relic resource.
+
+## Next steps
+
+- [Manage the New Relic resource](new-relic-how-to-manage.md)
+- [Quickstart: Get started with New Relic](new-relic-create.md)
+
partner-solutions New Relic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-overview.md
+
+ Title: Azure Native New Relic Service Preview overview
+description: Learn about using New Relic in Azure Marketplace.
++ Last updated : 01/16/2023++
+# What is Azure Native New Relic Service Preview?
+
+New Relic is a full-stack observability platform that enables a single source of truth for application performance, infrastructure monitoring, log management, error tracking, real-user monitoring, and more. Combined with the Azure platform, Azure Native New Relic Service Preview helps you monitor, troubleshoot, and optimize Azure services and applications.
+
+Azure Native New Relic Service in Azure Marketplace enables you to create and manage New Relic accounts by using the Azure portal with a fully integrated experience. Integration with Azure enables you to use New Relic as a monitoring solution for your Azure workloads through a streamlined workflow, starting from procurement and moving all the way to configuration and management.
+
+You can create and manage the New Relic resources by using the Azure portal through a resource provider named `NewRelic.Observability`. New Relic owns and runs the software as a service (SaaS) application, including the New Relic organizations and accounts that are created through this experience.
+
+> [!NOTE]
+> For New Relic accounts that you create by using Azure Native New Relic Service, customer data is stored and processed in the region where the service was deployed.
+>
+> For accounts that you create directly by using the New Relic portal and use for linking, New Relic determines where the customer data is stored and processed. Depending on the configuration at the time of setup, this might be on or outside Azure.
+
+## Capabilities
+
+Azure Native New Relic Service provides the following capabilities:
+
+- Easily onboard and use New Relic as a natively integrated service on Azure.
+- Get a single bill for all the resources that you consume on Azure, including New Relic.
+- Automatically monitor subscription activity and resource logs for New Relic.
+- Automatically monitor metrics by using New Relic.
+- Use a single experience to install and uninstall the New Relic agent on virtual machines and app services.
+
+## New Relic links
+
+For more help with using Azure Native New Relic Service, see the [New Relic documentation](https://docs.newrelic.com/).
+
+## Next steps
+
+- [Quickstart: Get started with New Relic](new-relic-create.md)
+- [Quickstart: Link to an existing New Relic account](new-relic-link-to-existing.md)
partner-solutions New Relic Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md
+
+ Title: Troubleshoot Azure Native New Relic Service Preview
+description: Learn about troubleshooting Azure Native New Relic Service.
++ Last updated : 01/16/2023+++
+# Troubleshoot Azure Native New Relic Service
+
+This article describes how to fix common problems when you're working with Azure Native New Relic Service Preview resources.
+
+Try the troubleshooting information in this article first. If that doesn't work, contact New Relic support:
+
+1. In the Azure portal, go to the resource.
+1. On the left pane, under **Support + troubleshooting**, select **New Support Request**.
+1. Select the link to go to the [New Relic support website](https://support.newrelic.com/) and raise a request.
++
+## Fix common errors
+
+### Purchase fails
+
+A purchase can fail because a valid credit card isn't connected to the Azure subscription, or because a payment method isn't associated with the subscription. To solve this problem, use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](/azure/cost-management-billing/manage/change-credit-card).
+
+A purchase can also fail because an Enterprise Agreement (EA) subscription doesn't allow Azure Marketplace purchases. Try to use a different subscription. Or, check if your EA subscription is enabled for Azure Marketplace purchases. For more information, see [Enabling Azure Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases).
+
+### You can't create a New Relic resource
+
+To set up Azure Native New Relic Service, you must have owner access on the Azure subscription. Ensure that you have the appropriate access before you start the setup.
+
+To find the New Relic offering on Azure and set up the service, you must first register the `NewRelic.Observability` resource provider in your Azure subscription. To register the resource provider by using the Azure portal, follow the guidance in [Azure resource providers and types](/azure/azure-resource-manager/management/resource-providers-and-types).
+To register the resource provider from a command line, enter `az provider register --namespace NewRelic.Observability --subscription <subscription-id>`.
+
+### Logs aren't being sent to New Relic
+
+Only resource types in [supported categories](/azure/azure-monitor/essentials/resource-logs-categories) send logs to New Relic through the integration. To check whether the resource is set up to send logs to New Relic, go to the [Azure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings) for that resource. Then, verify that there's a New Relic diagnostic setting.
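+
As a rough companion to this check (an illustration, not part of the original article), you could enumerate a resource's diagnostic settings with the Azure Resource Manager SDK for Java and look for one that targets New Relic; the subscription ID and resource ID below are placeholders:

```java
import com.azure.core.management.AzureEnvironment;
import com.azure.core.management.profile.AzureProfile;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.resourcemanager.AzureResourceManager;
import com.azure.resourcemanager.monitor.models.DiagnosticSetting;

public class ListDiagnosticSettings {
    public static void main(String[] args) {
        AzureResourceManager azure = AzureResourceManager
                .authenticate(new DefaultAzureCredentialBuilder().build(),
                        new AzureProfile(AzureEnvironment.AZURE))
                .withSubscription("<subscription-id>"); // placeholder

        String resourceId = "<full-resource-id-of-the-monitored-resource>"; // placeholder

        // Print each diagnostic setting; look for one that forwards to New Relic.
        for (DiagnosticSetting setting : azure.diagnosticSettings().listByResource(resourceId)) {
            System.out.println(setting.name());
        }
    }
}
```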
+
+### You can't install or uninstall an extension on a virtual machine
+
+To make **Install Extension** active, select only virtual machines that don't already have the New Relic agent installed, and deselect any that do. The **Agent Status** column shows **Running** or **Shutdown** for virtual machines that already have the agent installed.
+
+To make **Uninstall Extension** active, select only virtual machines that currently have the New Relic agent installed, and deselect any that don't. The **Agent Status** column shows **Not Installed** for virtual machines that don't have the agent installed.
+
+### Resource monitoring stopped working
+
+Resource monitoring in New Relic is enabled through the *ingest API key*, which you set up at the time of resource creation. Revoking the ingest API key from the New Relic portal disrupts monitoring of logs and metrics for all resources, including virtual machines and app services. You shouldn't revoke the ingest API key. If the API key is already revoked, contact New Relic support.
+
+If your Azure subscription is suspended or deleted because of payment-related issues, resource monitoring in New Relic automatically stops. Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](/azure/cost-management-billing/manage/change-credit-card).
+
+New Relic manages the APIs for creating and managing resources, and for the storage and processing of customer telemetry data. The New Relic APIs might be on or outside Azure. If your Azure subscription and resource are working correctly but the New Relic portal shows problems with monitoring data, contact New Relic support.
+
+## Next steps
+
+- [Manage Azure Native New Relic Service](new-relic-how-to-manage.md)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
# Azure Native ISV Services overview
-Azure Native ISV Services enables customers to easily provision, manage, and tightly integrate most used ISV software and services on Azure. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner service, see [Extend Azure with Azure Native ISV Services](partners.md).
+An Azure Native ISV Service enables users to easily provision, manage, and tightly integrate ISV software and services on Azure. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
## Features of Azure Native ISV Services
-A comprehensive list of features of Azure Native ISV Services is listed below.
+The features of any Azure Native ISV Service are listed below.
### Unified operations
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
Title: Partner services description: Learn about services offered by partners on Azure. - - Previously updated : 01/11/2023- Last updated : 01/16/2023
Azure Native ISV Services are available through the Marketplace.
## Observability |Partner |Description |
-|||
+||-|
|[Datadog](datadog/overview.md) | Monitoring and analytics platform for large scale applications. | |[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. | |[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. | |[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. |
+|[New Relic Preview](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. |
+ ## Data and storage |Partner |Description | |||
-| [Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. |
+| [Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event-streaming platform powered by Apache Kafka. |
## Networking and security
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
Using your favorite IDE, create a new Java project using Java 8 or above, and ad
</dependency> <dependency> <groupId>com.azure</groupId>
- <artifactId>azure-identity-providers-jdbc-postgresql</artifactId>
+ <artifactId>azure-identity-extensions</artifactId>
<version>1.0.0</version> </dependency> </dependencies>
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
Using your favorite IDE, create a new Java project using Java 8 or above, and ad
</dependency> <dependency> <groupId>com.azure</groupId>
- <artifactId>azure-identity-providers-jdbc-postgresql</artifactId>
+ <artifactId>azure-identity-extensions</artifactId>
<version>1.0.0</version> </dependency> </dependencies>
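For either PostgreSQL offering, here's a minimal sketch (assuming placeholder server, database, and user values) of a passwordless JDBC connection with the extension's plugin; `authenticationPluginClassName` is the PostgreSQL JDBC property that the plugin hooks into:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PostgresDemo {
    public static void main(String[] args) throws Exception {
        // The plugin from azure-identity-extensions fetches an Azure AD token,
        // so no password property is set.
        String url = "jdbc:postgresql://<server>.postgres.database.azure.com:5432/<database>"
                + "?sslmode=require"
                + "&authenticationPluginClassName=com.azure.identity.extensions.jdbc.postgresql.AzurePostgresqlAuthenticationPlugin";

        Properties properties = new Properties();
        properties.setProperty("user", "<aad-user>"); // placeholder

        try (Connection connection = DriverManager.getConnection(url, properties)) {
            System.out.println("Connected: " + connection.isValid(5));
        }
    }
}
```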
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Collect all the values in the following table for the mobile network site resour
|The Azure subscription to use to create the mobile network site resource. You must use the same subscription for all resources in your private mobile network deployment. |**Project details: Subscription**| |The Azure resource group in which to create the mobile network site resource. We recommend that you use the same resource group that already contains your private mobile network. |**Project details: Resource group**| |The name for the site. |**Instance details: Name**|
- |The region in which you're creating the mobile network site resource. We recommend that you use the East US region. |**Instance details: Region**|
+ |The region in which you deployed the private mobile network. |**Instance details: Region**|
+ |The [region code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. </br></br>You only need to collect this value if you're going to create your site using an ARM template. |Not applicable.|
 |The mobile network resource representing the private mobile network to which you're adding the site. |**Instance details: Mobile network**| |The billing plan for the site that you are creating. The available plans have the following allowances:</br></br> G1 - 1 Gbps per site and 100 devices per network. </br> G2 - 2 Gbps per site and 200 devices per network. </br> G3 - 3 Gbps per site and 300 devices per network. </br> G4 - 4 Gbps per site and 400 devices per network. </br> G5 - 5 Gbps per site and 500 devices per network.|**Instance details: Site plan**|
private-5g-core Collect Required Information For Private Mobile Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-private-mobile-network.md
Collect all of the following values for the mobile network resource that will re
|The Azure subscription to use to deploy the mobile network resource. You must use the same subscription for all resources in your private mobile network deployment. You identified this subscription in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). |**Project details: Subscription** |The Azure resource group to use to deploy the mobile network resource. You should use a new resource group for this resource. It's useful to include the purpose of this resource group in its name for future identification (for example, *contoso-pmn-rg*). |**Project details: Resource group**| |The name for the private mobile network. |**Instance details: Mobile network name**|
- |The region in which you're deploying the private mobile network. We recommend you use the East US region. |**Instance details: Region**|
+ |The region in which you're deploying the private mobile network. This can be the East US or the West Europe region. |**Instance details: Region**|
|The mobile country code for the private mobile network. |**Network configuration: Mobile country code (MCC)**| |The mobile network code for the private mobile network. |**Network configuration: Mobile network code (MNC)**|
private-5g-core Configure Service Sim Policy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/configure-service-sim-policy-arm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope. - Identify the name of the Mobile Network resource corresponding to your private mobile network and the resource group containing it.
+- Identify the Azure region in which you deployed your private mobile network.
- Identify the name of the data network to which you want to assign the new policy. - The ARM template is populated with values to configure a default service and SIM policy that allows all traffic in both directions.
Two Azure resources are defined in the template.
- **Subscription:** select the Azure subscription you used to create your private mobile network. - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
- - **Region:** select **East US**.
- - **Location:** enter *eastus*.
+ - **Region:** select the region in which you deployed the private mobile network.
+ - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network. - **Existing Slice Name:** enter **slice-1**. - **Existing Data Network Name:** enter the name of the data network. This value must match the name you used when creating the data network.
private-5g-core Create Overview Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-overview-dashboard.md
If your environment meets the prerequisites and you're familiar with using ARM t
- The name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running. - The name of the resource group containing the **Kubernetes - Azure Arc** resource.
+ - The Azure region in which you deployed your private mobile network.
## Review the template
The template defines one [**Microsoft.Portal/dashboards**](/azure/templates/micr
- **Subscription:** set this to the Azure subscription you used to create your private mobile network. - **Resource group:** set this to the resource group in which you want to create the dashboard. You can use an existing resource group or create a new one.
- - **Region:** select **East US**.
+ - **Region:** select the region in which you deployed the private mobile network.
- **Connected Cluster Name:** enter the name of the **Kubernetes - Azure Arc** resource that represents the Kubernetes cluster on which your packet core instance is running. - **Connected Cluster Resource Group:** enter the name of the resource group containing the **Kubernetes - Azure Arc** resource. - **Dashboard Display Name:** enter the name you want to use for the dashboard.
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
|--|--| | **Subscription** | Select the Azure subscription you used to create your private mobile network. | | **Resource group** | Select the resource group containing the mobile network resource representing your private mobile network. |
- | **Region** | Select **East US**. |
- | **Location** | Enter *eastus*. |
+ | **Region** | Select the region in which you deployed the private mobile network. |
+ | **Location** | Enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*. |
| **Existing Mobile Network Name** | Enter the name of the mobile network resource representing your private mobile network. | | **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. | | **Site Name** | Enter a name for your site.|
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
||| |**Subscription** | Select the Azure subscription you want to use to create your private mobile network. | |**Resource group** | Create a new resource group. |
- |**Region** | Select **East US**. |
+ |**Region** | Select the region in which you're deploying the private mobile network. |
|**Location** | Leave this field unchanged. | |**Mobile Network Name** | Enter a name for the private mobile network. | |**Mobile Country Code** | Enter the mobile country code for the private mobile network. |
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
To create a new SIM group:
1. Do the following on the **Basics** configuration tab. - Enter a name for the new SIM group into the **SIM group name** field.
- - Set **Region** to **East US**.
+ - In the **Region** field, select the region in which you deployed the private mobile network.
- Select your private mobile network from the **Mobile network** drop-down menu. :::image type="content" source="media/manage-sim-groups/create-sim-group-basics-tab.png" alt-text="Screenshot of the Azure portal showing the Basics configuration tab.":::
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
- Ensure you can sign in to the Azure portal using an account with access to the active subscription you identified in [Complete the prerequisite tasks for deploying a private mobile network](complete-private-mobile-network-prerequisites.md). This account must have the built-in Contributor role at the subscription scope. - Identify the name of the Mobile Network resource corresponding to your private mobile network and the resource group containing it.
+- Identify the Azure region in which you deployed your private mobile network.
- Choose a name for the new SIM group to which your SIMs will be added. - Identify the SIM policy you want to assign to the SIMs you're provisioning. You must have already created this SIM policy using the instructions in [Configure a SIM policy - Azure portal](configure-sim-policy-azure-portal.md).
The following Azure resources are defined in the template.
- **Subscription:** select the Azure subscription you used to create your private mobile network. - **Resource group:** select the resource group containing the Mobile Network resource representing your private mobile network.
- - **Region:** select **East US**.
- - **Location:** enter *eastus*.
+ - **Region:** select the region in which you deployed the private mobile network.
+ - **Location:** enter the [code name](region-code-names.md) of the region in which you deployed the private mobile network. For the East US region, this is *eastus*; for West Europe, this is *westeurope*.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network. - **Existing Sim Policy Name:** enter the name of the SIM policy you want to assign to the SIMs. - **Sim Group Name:** enter the name for the new SIM group.
The following Azure resources are defined in the template.
:::image type="content" source="media/provision-sims-arm-template/sims-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the SIMs ARM template.":::
-1. Select **Review + create**.
-1. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+2. Select **Review + create**.
+3. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-1. Once your configuration has been validated, you can select **Create** to provision your SIMs. The Azure portal will display a confirmation screen when the SIMs have been provisioned.
+4. Once your configuration has been validated, you can select **Create** to provision your SIMs. The Azure portal will display a confirmation screen when the SIMs have been provisioned.
## Review deployed resources
private-5g-core Region Code Names https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/region-code-names.md
+
+ Title: Region code names for Azure Private 5G Core
+description: Learn about the region code names used for the location parameter in Azure Private 5G Core ARM templates
+++++ Last updated : 11/17/2022++
+# Region code names
+
+When the **location** parameter is used in a command or request, you need to provide the region code name as the **location** value. To get the code name of the region that your private mobile network is in, run the following command in the Azure CLI.
+
+```cloudshell-bash
+az account list-locations -o table
+```
+
+The output of this command is a table of the names and locations for all the Azure regions that your subscription supports. Navigate to the Azure region that has the *DisplayName* you are looking for and use its *Name* value for the **location** parameter.
+
+For example, if you're deploying in the East US region, use *eastus* for the **location** parameter.
+
+```output
+DisplayName    Name        RegionalDisplayName
+-------------  ----------  -------------------
+East US        eastus      (US) East US
+West Europe    westeurope  (Europe) West Europe
+```
private-5g-core Upgrade Packet Core Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md
If you determined in [Plan for your upgrade](#plan-for-your-upgrade) that you ne
- **Subscription:** select the Azure subscription you used to create your private mobile network. - **Resource group:** select the resource group containing the mobile network resource representing your private mobile network.
- - **Region:** select **East US**.
+ - **Region:** select the region in which you deployed the private mobile network.
- **Existing packet core:** select the name of the packet core instance you want to upgrade. - **New version:** enter the version to which you want to upgrade the packet core instance.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Web Apps (Microsoft.Web/sites) / sites | privatelink.azurewebsites.net </br> scm.privatelink.azurewebsites.net | azurewebsites.net </br> scm.azurewebsites.net | | Azure Machine Learning (Microsoft.MachineLearningServices/workspaces) / amlworkspace | privatelink.api.azureml.ms<br/>privatelink.notebooks.azure.net | api.azureml.ms<br/>notebooks.azure.net<br/>instances.azureml.ms<br/>aznbcontent.net | | SignalR (Microsoft.SignalRService/SignalR) / signalR | privatelink.service.signalr.net | service.signalr.net |
-| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net | monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net |
+| Azure Monitor (Microsoft.Insights/privateLinkScopes) / azuremonitor | privatelink.monitor.azure.com<br/> privatelink.oms.opinsights.azure.com <br/> privatelink.ods.opinsights.azure.com <br/> privatelink.agentsvc.azure-automation.net <br/> privatelink.blob.core.windows.net <br/> privatelink.applicationinsights.azure.com| monitor.azure.com<br/> oms.opinsights.azure.com<br/> ods.opinsights.azure.com<br/> agentsvc.azure-automation.net <br/> blob.core.windows.net <br/> applicationinsights.azure.com |
| Cognitive Services (Microsoft.CognitiveServices/accounts) / account | privatelink.cognitiveservices.azure.com | cognitiveservices.azure.com | | Azure File Sync (Microsoft.StorageSync/storageSyncServices) / afs | {region}.privatelink.afs.azure.net | {region}.afs.azure.net | | Azure Data Factory (Microsoft.DataFactory/factories) / dataFactory | privatelink.datafactory.azure.net | datafactory.azure.net |
For Azure services, use the recommended zone names as described in the following
| Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com | | Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com | | Azure Data Health Data Services (Microsoft.HealthcareApis/workspaces) / healthcareworkspace | workspace.privatelink.azurehealthcareapis.com </br> fhir.privatelink.azurehealthcareapis.com </br> dicom.privatelink.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
+| Azure Databricks (Microsoft.Databricks/workspaces) / databricks_ui_api, browser_authentication | privatelink.azuredatabricks.net | azuredatabricks.net |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
A private-link resource is the destination target of a specified private endpoin
| Azure App Service | Microsoft.Web/sites | sites | | Azure Static Web Apps | Microsoft.Web/staticSites | staticSites | | Azure Media Services | Microsoft.Media/mediaservices | keydelivery, liveevent, streamingendpoint |
+| Azure Databricks | Microsoft.Databricks/workspaces | databricks_ui_api, browser_authentication |
> [!NOTE] > You can create private endpoints only on a General Purpose v2 (GPv2) storage account.
private-link Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-overview.md
Title: What is Azure Private Link?
description: Overview of Azure Private Link features, architecture, and implementation. Learn how Azure Private Endpoints and Azure Private Link service works and how to use them.
-# Customer intent: As someone with a basic network background, but is new to Azure, I want to understand the capabilities of Azure Private Link so that I can securely connect to my Azure PaaS services within the virtual network.
- Previously updated : 03/15/2021 Last updated : 01/17/2023 -+ # What is Azure Private Link? + Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a [private endpoint](private-endpoint-overview.md) in your virtual network. Traffic between your virtual network and the service travels the Microsoft backbone network. Exposing your service to the public internet is no longer necessary. You can create your own [private link service](private-link-service-overview.md) in your virtual network and deliver it to your customers. Setup and consumption using Azure Private Link is consistent across Azure PaaS, customer-owned, and shared partner services.
Traffic between your virtual network and the service travels the Microsoft backb
:::image type="content" source="./media/private-link-overview/private-link-center.png" alt-text="Screenshot of Azure Private Link center in Azure portal." ::: ## Key benefits+ Azure Private Link provides the following benefits: + - **Privately access services on the Azure platform**: Connect your virtual network using private endpoints to all services that can be used as application components in Azure. Service providers can render their services in their own virtual network and consumers can access those services in their local virtual network. The Private Link platform will handle the connectivity between the consumer and services over the Azure backbone network. - **On-premises and peered networks**: Access services running in Azure from on-premises over ExpressRoute private peering, VPN tunnels, and peered virtual networks using private endpoints. There's no need to configure ExpressRoute Microsoft peering or traverse the internet to reach the service. Private Link provides a secure way to migrate workloads to Azure.
For the most up-to-date notifications, check the [Azure Private Link updates pag
Azure Private Link has integration with Azure Monitor. This combination allows: - Archival of logs to a storage account.+
+ - Streaming of events to your Event Hubs.
+ - Azure Monitor logging. You can access the following information on Azure Monitor: + - **Private endpoint**: + - Data processed by the Private Endpoint (IN/OUT) - **Private Link service**:+ - Data processed by the Private Link service (IN/OUT)+ - NAT port availability ## Pricing
For SLA, see [SLA for Azure Private Link](https://azure.microsoft.com/support/le
## Next steps - [Quickstart: Create a Private Endpoint using Azure portal](create-private-endpoint-portal.md)+ - [Quickstart: Create a Private Link service by using the Azure portal](create-private-link-service-portal.md)+ - [Learn module: Introduction to Azure Private Link](/training/modules/introduction-azure-private-link/)
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
The Microsoft Purview governance portal uses a set of predefined roles to contro
|I need to create workflows for my Microsoft Purview account in the governance portal| Workflow administrator |
|I need to share data from sources registered in Microsoft Purview | Data share contributor|
|I need to view insights for collections I'm a part of | Insights reader **or** data curator |
+|I need to create or manage our [self-hosted integration runtime (SHIR)](manage-integration-runtimes.md) | Data source administrator |
:::image type="content" source="media/catalog-permissions/catalog-permission-role.svg" alt-text="Chart showing Microsoft Purview governance portal roles" lightbox="media/catalog-permissions/catalog-permission-role.svg"::: >[!NOTE]
purview Concept Best Practices Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-security.md
Examples of control plane operations and data plane operations:
|Deploy a Microsoft Purview account | Control plane | Azure subscription owner or contributor | Azure RBAC roles |
|Set up a Private Endpoint for Microsoft Purview | Control plane | Contributor | Azure RBAC roles |
|Delete a Microsoft Purview account | Control plane | Contributor | Azure RBAC roles |
+|Add or manage a [self-hosted integration runtime (SHIR)](manage-integration-runtimes.md) | Control plane | Data source administrator |Microsoft Purview roles |
|View Microsoft Purview metrics to get current capacity units | Control plane | Reader | Azure RBAC roles |
|Create a collection | Data plane | Collection Admin | Microsoft Purview roles |
|Register a data source | Data plane | Collection Admin | Microsoft Purview roles |
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
Installation of the self-hosted integration runtime on a domain controller isn't
> For your source, **[refer to each source article for prerequisite details.](azure-purview-connector-overview.md)** > Any requirements will be listed in the **Prerequisites** section.
+- To add and manage a SHIR in Microsoft Purview, you'll need [data source administrator permissions](catalog-permissions.md).
+ - Self-hosted integration runtime requires a 64-bit operating system with .NET Framework 4.7.2 or above. See [.NET Framework System Requirements](/dotnet/framework/get-started/system-requirements) for details. - Ensure Visual C++ Redistributable for Visual Studio 2015 or higher is installed on the self-hosted integration runtime machine. If you don't have this update installed, [you can download it here](/cpp/windows/latest-supported-vc-redist#visual-studio-2015-2017-2019-and-2022).
To create and set up a self-hosted integration runtime, use the following proced
### Create a self-hosted integration runtime
+>[!NOTE]
+> To add or manage a SHIR in Microsoft Purview, you'll need [data source administrator permissions](catalog-permissions.md).
+ 1. On the home page of the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/), select **Data Map** from the left navigation pane. 2. Under **Sources and scanning** on the left pane, select **Integration runtimes**, and then select **+ New**.
You can register multiple nodes for a self-hosted integration runtime using the
## Manage a self-hosted integration runtime
-You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the Microsoft Purview governance portal, hover on the IR then click the **Edit** button.
+You can edit a self-hosted integration runtime by navigating to **Integration runtimes** in the Microsoft Purview governance portal, hovering over the IR, and then selecting the **Edit** button.
In the **Settings** tab, you can update the description, copy the key, or regenerate new keys. In the **Nodes** tab, you can manage the registered nodes. And in the **Version** tab, you can see the IR version status. :::image type="content" source="media/manage-integration-runtimes/edit-integration-runtime-settings.png" alt-text="edit IR details.":::
-You can delete a self-hosted integration runtime by navigating to **Integration runtimes**, hover on the IR then click the **Delete** button.
+You can delete a self-hosted integration runtime by navigating to **Integration runtimes**, hovering over the IR, and then selecting the **Delete** button.
### Notification area icons and notifications
Make sure the account has the permission of Log-on as a service. Otherwise self-
You can associate a self-hosted integration runtime with multiple on-premises machines or virtual machines in Azure. These machines are called nodes. You can have up to four nodes associated with a self-hosted integration runtime. The benefits of having multiple nodes are: - Higher availability of the self-hosted integration runtime so that it's no longer the single point of failure for scan. This availability helps ensure continuity when you use up to four nodes.-- Run more concurrent scans. Each self-hosted integration runtime can empower a number of scans at the same time, auto determined based on the machine's CPU/memory. You can install additional nodes if you have more concurrency need. Each scan will be executed on one of the nodes. Having more nodes doesn't improve the performance of a single scan execution.
+- Run more concurrent scans. Each self-hosted integration runtime can run multiple scans at the same time, determined automatically based on the machine's CPU and memory. You can install more nodes if you need more concurrency. Each scan is executed on one of the nodes. Having more nodes doesn't improve the performance of a single scan execution.
You can associate multiple nodes by installing the self-hosted integration runtime software from [Download Center](https://www.microsoft.com/download/details.aspx?id=39717). Then, register it by using the same authentication key.
search Search Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-python.md
ms.devlang: python Previously updated : 08/31/2022 Last updated : 01/17/2023
This step shows you how to query an index using the **search** method of the [se
1. The following step executes an empty search (`search=*`), returning an unranked list (search score = 1.0) of arbitrary documents. Because there are no criteria, all documents are included in results. This query prints just two of the fields in each document. It also adds `include_total_count=True` to get a count of all documents (4) in the results. ```python
- results = search_client.search(search_text="*", include_total_count=True)
+ results = search_client.search(search_text="*", include_total_count=True)
print ('Total Documents Matching Query:', results.get_count()) for result in results:
This step shows you how to query an index using the **search** method of the [se
1. The next query adds whole terms to the search expression ("wifi"). This query specifies that results contain only those fields in the `select` statement. Limiting the fields that come back minimizes the amount of data sent back over the wire and reduces search latency. ```python
- results = search_client.search(search_text="wifi", include_total_count=True, select='HotelId,HotelName,Tags')
+ results = search_client.search(search_text="wifi", include_total_count=True, select='HotelId,HotelName,Tags')
print ('Total Documents Matching Query:', results.get_count()) for result in results:
This step shows you how to query an index using the **search** method of the [se
1. Next, apply a filter expression, returning only those hotels with a rating greater than 4, sorted in descending order. ```python
- results = search_client.search(search_text="hotels", select='HotelId,HotelName,Rating', filter='Rating gt 4', order_by='Rating desc')
+ results = search_client.search(search_text="hotels", select='HotelId,HotelName,Rating', filter='Rating gt 4', order_by='Rating desc')
for result in results: print("{}: {} - {} rating".format(result["HotelId"], result["HotelName"], result["Rating"])) ```
-1. Add `search_fields` to scope query matching to a single field.
+1. Add `search_fields` (an array) to scope query matching to a single field.
```python
- results = search_client.search(search_text="sublime", search_fields='HotelName', select='HotelId,HotelName')
+ results = search_client.search(search_text="sublime", search_fields=['HotelName'], select='HotelId,HotelName')
for result in results: print("{}: {}".format(result["HotelId"], result["HotelName"]))
This step shows you how to query an index using the **search** method of the [se
1. Facets are labels that can be used to compose facet navigation structure. This query returns facets and counts for Category. ```python
- results = search_client.search(search_text="*", facets=["Category"])
+ results = search_client.search(search_text="*", facets=["Category"])
facets = results.get_facets()
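    # Illustrative continuation (not from the quickstart): get_facets() returns a
    # dict mapping each facet field to a list of {'value': ..., 'count': ...} entries.
    for facet in facets["Category"]:
        print("    {} ({})".format(facet["value"], facet["count"]))
    ```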
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 11/02/2022 Last updated : 01/16/2023 # Return a semantic answer in Azure Cognitive Search
For best results, return semantic answers on a document corpus having the follow
+ The "semanticConfiguration" must include fields that offer sufficient text in which an answer is likely to be found. Fields more likely to contain answers should be listed first in "prioritizedContentFields". Only verbatim text from a document can appear as an answer.
-+ Query strings must not be null (search=`*`) and the string should have the characteristics of a question, as opposed to a keyword search (a sequential list of arbitrary terms or phrases). If the query string doesn't appear to be answer, answer processing is skipped, even if the request specifies "answers" as a query parameter.
++ Query strings must not be null (search=`*`) and the string should have the characteristics of a question, such as "what is" or "how to", as opposed to a keyword search consisting of terms or phrases in arbitrary order. If the query string doesn't appear to be a question, answer processing is skipped, even if the request specifies "answers" as a query parameter. + Semantic extraction and summarization have limits on how many tokens per document can be analyzed in a timely fashion. In practical terms, if you have large documents that run into hundreds of pages, try to break up the content into smaller documents first.
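As a hedged illustration of these requirements (not from the article), the following Python sketch sends a question-like query with extractive answers through the REST API; the endpoint, key, index, and semantic configuration names are placeholders.

```python
import requests

# Placeholders: search service endpoint, query key, index name, and semantic config.
url = "https://<service>.search.windows.net/indexes/<index>/docs/search"
payload = {
    "search": "how do I get wifi in my room",  # question-like, not a bare keyword list
    "queryType": "semantic",
    "semanticConfiguration": "<semantic-config>",
    "queryLanguage": "en-us",
    "answers": "extractive|count-3",  # request up to three extractive answers
}
response = requests.post(
    url,
    params={"api-version": "2021-04-30-Preview"},
    headers={"api-key": "<query-key>", "Content-Type": "application/json"},
    json=payload,
)
for answer in response.json().get("@search.answers", []):
    print(answer["text"], answer["score"])
```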
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
description: This article provides a set of operational best practices for prote
documentationcenter: na -+ ms.assetid:
na Previously updated : 05/06/2019 Last updated : 01/16/2023
Here are some best practices for using management groups:
Good candidates include: - Regulatory requirements that have a clear business impact (for example, restrictions related to data sovereignty)-- Requirements with near-zero potential negative affect on operations, like policy with audit effect or Azure RBAC permission assignments that have been carefully reviewed
+- Requirements with near-zero potential negative effect on operations, like policy with audit effect or Azure RBAC permission assignments that have been carefully reviewed
**Best practice**: Carefully plan and test all enterprise-wide changes on the root management group before applying them (policy, Azure RBAC model, and so on). **Detail**: Changes in the root management group can affect every resource on Azure. While they provide a powerful way to ensure consistency across the enterprise, errors or incorrect usage can negatively affect production operations. Test all changes to the root management group in a test lab or production pilot.
Ensuring that an application is resilient enough to handle a denial of service t
For Azure Cloud Services, configure each of your roles to use [multiple instances](../../cloud-services/cloud-services-choose-me.md).
-For [Azure Virtual Machines](../../virtual-machines/windows/overview.md), ensure that your VM architecture includes more than one VM and that each VM is included in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md). We recommend using virtual machine scale sets for autoscaling capabilities.
+For [Azure Virtual Machines](../../virtual-machines/windows/overview.md), ensure that your VM architecture includes more than one VM and that each VM is included in an [availability set](../../virtual-machines/windows/tutorial-availability-sets.md). We recommend using Virtual Machine Scale Sets for autoscaling capabilities.
**Best practice**: Layering security defenses in an application reduces the chance of a successful attack. Implement secure designs for your applications by using the built-in capabilities of the Azure platform. **Detail**: The risk of attack increases with the size (surface area) of the application. You can reduce the surface area by using an approval list to close down the exposed IP address space and listening ports that are not needed on the load balancers ([Azure Load Balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and [Azure Application Gateway](../../application-gateway/application-gateway-create-probe-portal.md)).
See [Azure security best practices and patterns](best-practices-and-patterns.md)
The following resources are available to provide more general information about Azure security and related Microsoft * [Azure Security Team Blog](/archive/blogs/azuresecurity/) - for up to date information on the latest in Azure Security
-* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where Microsoft security vulnerabilities, including issues with Azure, can be reported or via email to secure@microsoft.com
+* [Microsoft Security Response Center](https://technet.microsoft.com/library/dn440717.aspx) - where you can report Microsoft security vulnerabilities, including issues with Azure, or email secure@microsoft.com
security Services Technologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/services-technologies.md
description: The article provides a curated list of Azure Security services and
documentationcenter: na -+ ms.assetid: a5a7f60a-97e2-49b4-a8c5-7c010ff27ef8--++ na Previously updated : 1/29/2019 Last updated : 01/16/2023
Over time, this list will change and grow, just as Azure does. Make sure to chec
## General Azure security

|Service|Description|
|--|--|
-|[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)| A cloud workload protection solution that provides security management and advanced threat protection across hybrid cloud workloads.|
+|[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md)| A cloud workload protection solution that provides security management and advanced threat protection across hybrid cloud workloads.|
|[Microsoft Sentinel](../../sentinel/overview.md)| A scalable, cloud-native solution that delivers intelligent security analytics and threat intelligence across the enterprise.| |[Azure Key Vault](../../key-vault/general/overview.md)| A secure secrets store for the passwords, connection strings, and other information you need to keep your apps working. | |[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md)|A monitoring service that collects telemetry and other data, and provides a query language and analytics engine to deliver operational insights for your apps and resources. Can be used alone or with other services such as Defender for Cloud. |
Over time, this list will change and grow, just as Azure does. Make sure to chec
|Service|Description|
|--|--|
| [Azure&nbsp;Storage&nbsp;Service&nbsp;Encryption](../../storage/common/storage-service-encryption.md)|A security feature that automatically encrypts your data in Azure storage. |
-|[StorSimple Encrypted Hybrid Storage](../../storsimple/storsimple-ova-overview.md)| An integrated storage solution that manages storage tasks between on-premises devices and Azure cloud storage.|
-|[Azure Client-Side Encryption](../../storage/common/storage-client-side-encryption.md)| A client-side encryption solution that encrypts data inside client applications before uploading to Azure Storage; also decrypts the data while downloading. |
-| [Azure Storage Shared Access Signatures](../../storage/common/storage-sas-overview.md)|A shared access signature provides delegated access to resources in your storage account. |
-|[Azure Storage Account Keys](../../storage/common/storage-account-create.md)| An access control method for Azure storage that is used for authentication when the storage account is accessed. |
-|[Azure File shares with SMB 3.0 Encryption](../../storage/files/storage-files-introduction.md)|A network security technology that enables automatic network encryption for the Server Message Block (SMB) file sharing protocol. |
-|[Azure Storage Analytics](/rest/api/storageservices/Storage-Analytics)| A logging and metrics-generating technology for data in your storage account. |
+|[Azure StorSimple Virtual Array](../../storsimple/storsimple-ova-overview.md)| An integrated storage solution that manages storage tasks between an on-premises virtual array running in a hypervisor and Microsoft Azure cloud storage.|
+|[Client-Side encryption for blobs](../../storage/blobs/client-side-encryption.md)| A client-side encryption solution that supports encrypting data within client applications before uploading to Azure Storage, and decrypting data while downloading to the client. |
+| [Azure Storage shared access signatures](../../storage/common/storage-sas-overview.md)|A shared access signature (SAS) provides delegated access to resources in your storage account (see the sketch after this table). |
+|[Azure Storage Account Keys](../../storage/common/storage-account-create.md)| An access control method for Azure storage that is used to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). |
+|[Azure File shares](../../storage/files/storage-files-introduction.md)| A storage security technology that offers fully managed file shares in the cloud that are accessible via the industry standard Server Message Block (SMB) protocol, Network File System (NFS) protocol, and Azure Files REST API. |
+|[Azure Storage Analytics](../../storage/common/storage-analytics.md)| A logging and metrics-generating technology for data in your storage account. |
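For the shared access signature row above, here's a minimal illustrative sketch (not from the article) that generates a short-lived, read-only blob SAS with the `azure-storage-blob` package; account, container, blob, and key values are placeholders.

```python
from datetime import datetime, timedelta, timezone
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

# Delegate read-only access to a single blob for one hour; values are placeholders.
sas_token = generate_blob_sas(
    account_name="<account>",
    container_name="<container>",
    blob_name="report.csv",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)
print(f"https://<account>.blob.core.windows.net/<container>/report.csv?{sas_token}")
```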
Over time, this list will change and grow, just as Azure does. Make sure to chec
|Service|Description|
|--|--|
| [Azure&nbsp;SQL&nbsp;Firewall](/azure/azure-sql/database/firewall-configure)|A network access control feature that protects against network-based attacks to the database. |
-|[Azure&nbsp;SQL&nbsp;Cell&nbsp;Level Encryption](/archive/blogs/sqlsecurity/recommendations-for-using-cell-level-encryption-in-azure-sql-database)| A database security technology that provides encryption at a granular level. |
| [Azure&nbsp;SQL&nbsp;Connection Encryption](/azure/azure-sql/database/logins-create-manage)|To provide security, SQL Database controls access with firewall rules limiting connectivity by IP address, authentication mechanisms requiring users to prove their identity, and authorization mechanisms limiting users to specific actions and data. |
-| [Azure SQL Always Encryption](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database or SQL Server databases. |
-| [Azure&nbsp;SQL&nbsp;Transparent Data Encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)| A database security feature that encrypts the storage of an entire database. |
-| [Azure SQL Database Auditing](/azure/azure-sql/database/auditing-overview)|A database auditing feature that tracks database events and writes them to an audit log in your Azure storage account. |
-| [Virtual network rules](/azure/azure-sql/database/vnet-service-endpoint-rule-overview)|A firewall security feature that controls whether the server for your databases and elastic pools in Azure SQL Database or for your dedicated SQL pool (formerly SQL DW) databases in Azure Synapse Analytics accepts communications that are sent from particular subnets in virtual networks. |
-
+| [Azure SQL Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine)|Protects sensitive data, such as credit card numbers or national identification numbers (for example, U.S. social security numbers), stored in Azure SQL Database, Azure SQL Managed Instance, and SQL Server databases. |
+| [Azure&nbsp;SQL&nbsp;transparent data encryption](/sql/relational-databases/security/encryption/transparent-data-encryption-azure-sql)| A database security feature that helps protect Azure SQL Database, Azure SQL Managed Instance, and Azure Synapse Analytics against the threat of malicious offline activity by encrypting data at rest. |
+| [Azure SQL Database Auditing](/azure/azure-sql/database/auditing-overview)|An auditing feature for Azure SQL Database and Azure Synapse Analytics that tracks database events and writes them to an audit log in your Azure storage account, Log Analytics workspace, or Event Hubs. |
+| [Virtual network rules](/azure/azure-sql/database/vnet-service-endpoint-rule-overview)|A firewall security feature that controls whether the server for your databases and elastic pools in Azure SQL Database or for your dedicated SQL pool (formerly SQL DW) databases in Azure Synapse Analytics accepts communications that are sent from particular subnets in virtual networks. |
## Identity and access management

|Service|Description|
|--|--|
| [Azure&nbsp;role-based&nbsp;access control](../../role-based-access-control/role-assignments-portal.md)|An access control feature designed to allow users to access only the resources they are required to access based on their roles within the organization. |
-| [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md)|A cloud-based authentication repository that supports a multi-tenant, cloud-based directory and multiple identity management services within Azure. |
-| [Azure Active Directory B2C](../../active-directory-b2c/overview.md)|An identity management service that enables control over how customers sign-up, sign-in, and manage their profiles when using Azure-based applications. |
-| [Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md)| A cloud-based and managed version of Active Directory Domain Services. |
+| [Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md)|A cloud-based identity and access management service that supports a multi-tenant, cloud-based directory and multiple identity management services within Azure. |
+| [Azure Active Directory B2C](../../active-directory-b2c/overview.md)| A customer identity access management (CIAM) solution that enables control over how customers sign-up, sign-in, and manage their profiles when using Azure-based applications. |
+| [Azure Active Directory Domain Services](../../active-directory-domain-services/overview.md)| A cloud-based and managed version of Active Directory Domain Services that provides managed domain services such as domain join, group policy, Lightweight Directory Access Protocol (LDAP), and Kerberos/NTLM authentication. |
| [Azure AD Multi-Factor Authentication](../../active-directory/authentication/concept-mfa-howitworks.md)| A security provision that employs several different forms of authentication and verification before allowing access to secured information. | ## Backup and disaster recovery
Over time, this list will change and grow, just as Azure does. Make sure to chec
## Networking

|Service|Description|
|--|--|
-| [Network&nbsp;Security&nbsp;Groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md)| A network-based access control feature using a 5-tuple to make allow or deny decisions. |
+| [Network&nbsp;Security&nbsp;Groups](../../virtual-network/network-security-groups-overview.md)| A network-based access control feature to filter network traffic between Azure resources in an Azure virtual network (see the sketch after this table). |
| [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md)| A network device used as a VPN endpoint to allow cross-premises access to Azure Virtual Networks. |
-| [Azure Application Gateway](../../application-gateway/overview.md)|An advanced web application load balancer that can route based on URL and perform SSL-offloading. |
+| [Azure Application Gateway](../../application-gateway/overview.md)|An advanced web traffic load balancer that enables you to manage traffic to your web applications. |
|[Web application firewall](../../web-application-firewall/overview.md) (WAF)|A feature that provides centralized protection of your web applications from common exploits and vulnerabilities.|
| [Azure Load Balancer](../../load-balancer/load-balancer-overview.md)|A TCP/UDP application network load balancer. |
-| [Azure ExpressRoute](../../expressroute/expressroute-introduction.md)| A dedicated WAN link between on-premises networks and Azure Virtual Networks. |
-| [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)| A global DNS load balancer.|
-| [Azure Application Proxy](../../active-directory/app-proxy/application-proxy.md)| An authenticating front-end used to secure remote access for web applications hosted on-premises. |
-|[Azure Firewall](../../firewall/overview.md)|A managed, cloud-based network security service that protects your Azure Virtual Network resources.|
+| [Azure ExpressRoute](../../expressroute/expressroute-introduction.md)| A feature that lets you extend your on-premises networks into the Microsoft cloud over a private connection with the help of a connectivity provider. |
+| [Azure Traffic Manager](../../traffic-manager/traffic-manager-overview.md)| A DNS-based traffic load balancer.|
+| [Azure Active Directory Application Proxy](../../active-directory/app-proxy/application-proxy.md)| An authenticating front-end used to secure remote access to on-premises web applications. |
+|[Azure Firewall](../../firewall/overview.md)|A cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure.|
|[Azure DDoS protection](../../ddos-protection/ddos-protection-overview.md)|Combined with application design best practices, provides defense against DDoS attacks.|
-|[Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)|Extends your virtual network private address space and the identity of your VNet to the Azure services, over a direct connection.|
-|[Azure Private Link](../../private-link/private-link-overview.md)|Provides private connectivity from a virtual network to Azure platform as a service (PaaS), customer-owned, or Microsoft partner services.|
-|[Azure Bastion](../../bastion/bastion-overview.md)|A service you deploy that lets you connect to a virtual machine using your browser and the Azure portal.|
+|[Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md)| Provides secure and direct connectivity to Azure services via an optimized route over the Azure backbone network. |
+|[Azure Private Link](../../private-link/private-link-overview.md)|Enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a private endpoint in your virtual network.|
+|[Azure Bastion](../../bastion/bastion-overview.md)|A service you deploy that lets you connect to a virtual machine using your browser and the Azure portal, or via the native SSH or RDP client already installed on your local computer.|
|[Azure Front Door](../../frontdoor/front-door-application-security.md)|Provides web application protection capability to safeguard your web applications from network attacks and common web vulnerabilities exploits like SQL Injection or Cross Site Scripting (XSS).|
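For the network security group row above, here's a hedged Python sketch (not from the article) of adding an inbound rule with the `azure-mgmt-network` package; the subscription, resource group, and NSG names are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Allow inbound HTTPS to whatever the NSG is attached to; names are placeholders.
rule = client.security_rules.begin_create_or_update(
    resource_group_name="<resource-group>",
    network_security_group_name="<nsg-name>",
    security_rule_name="allow-https-inbound",
    security_rule_parameters={
        "protocol": "Tcp",
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "443",
    },
).result()
print(rule.provisioning_state)
```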
+## Next steps
+Learn more about Azure's [end-to-end security](end-to-end.md) and how Azure services can help you meet the security needs of your business and protect your users, devices, resources, data, and applications in the cloud.
sentinel Forward Syslog Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/forward-syslog-monitor-agent.md
#Customer intent: As a security-engineer, I want to get syslog data into Microsoft Sentinel so that I can use the data with other data to do attack detection, threat visibility, proactive hunting, and threat response. As an IT administrator, I want to get syslog data into my Log Analytics workspace to monitor my linux-based devices.
-# Tutorial: Forward syslog data to a Log Analytics workspace by using the Azure Monitor agent
+# Forward syslog data to a Log Analytics workspace by using the Azure Monitor agent
-In this tutorial, you'll configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent like a firewall network device.
+In this article, we'll describe how to configure a Linux virtual machine (VM) to forward syslog data to your workspace by using the Azure Monitor agent. These steps allow you to collect and monitor data from Linux-based devices where you can't install an agent directly, such as a firewall network device.
Configure your Linux-based device to send data to a Linux VM. The Azure Monitor agent on the VM forwards the syslog data to the Log Analytics workspace. Then use Microsoft Sentinel or Azure Monitor to monitor the device from the data stored in the Log Analytics workspace.
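As a minimal illustration of the device side (not from the article), this Python standard-library sketch emits syslog messages over UDP/514 to the forwarder VM; the VM address is a placeholder.

```python
import logging
import logging.handlers

# Send syslog to the Linux forwarder VM (placeholder address); the Azure Monitor
# agent on that VM relays the data to the Log Analytics workspace.
handler = logging.handlers.SysLogHandler(
    address=("<forwarder-vm-ip>", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
logger = logging.getLogger("network-device")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("test message from the network device")
```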
-In this tutorial, you learn how to:
+In this article, you learn how to:
> [!div class="checklist"] > * Create a data collection rule
In this tutorial, you learn how to:
## Prerequisites
-To complete the steps in this tutorial, you must have the following resources and roles.
+To complete the steps in this article, you must have the following resources and roles.
- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure account with the following roles to deploy the agent and create the data collection rules:
sentinel Sentinel Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solutions.md
Title: About Microsoft Sentinel content and solutions | Microsoft Docs
description: This article describes Microsoft Sentinel content and solutions, which customers can use to find data analysis tools packaged together with data connectors. Previously updated : 05/06/2022 Last updated : 01/05/2023
site-recovery Avs Tutorial Dr Drill Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-dr-drill-azure.md
This is the fourth tutorial in a series that shows you how to set up disaster re
In this tutorial, learn how to: > [!div class="checklist"]- > * Set up an isolated network for the test failover > * Prepare to connect to the Azure VM after failover > * Run a test failover for a single machine.
In this tutorial, learn how to:
> [!NOTE] > Tutorials show you the simplest deployment path for a scenario. They use default options where possible, and don't show all possible settings and paths. If you want to learn about the disaster recovery drill steps in more detail, [review this article](site-recovery-test-failover-to-azure.md).
-## Before you start
+## Prerequisites
-Complete the previous tutorials:
+**Before you begin, complete the previous tutorials:**
1. Make sure you've [set up Azure](avs-tutorial-prepare-azure.md) for disaster recovery to Azure. 2. Follow [these steps](avs-tutorial-prepare-avs.md) to prepare your Azure VMware Solution deployment for disaster recovery to Azure. 3. [Set up](avs-tutorial-replication.md) disaster recovery for Azure VMware Solution VMs.
-## Verify VM properties
+### Verify VM properties
Before you run a test failover, verify the VM properties, and make sure that the [VMware vSphere VM](vmware-physical-azure-support-matrix.md#replicated-machines) complies with Azure requirements.
-1. In **Protected Items**, click **Replicated Items** > and the VM.
+1. In **Protected Items**, select **Replicated Items** > and the VM.
2. In the **Replicated item** pane, there's a summary of VM information, health status, and the
- latest available recovery points. Click **Properties** to view more details.
+ latest available recovery points. Select **Properties** to view more details.
3. In **Compute and Network**, you can modify the Azure name, resource group, target size, availability set, and managed disk settings. 4. You can view and modify network settings, including the network/subnet in which the Azure VM will be located after failover, and the IP address that will be assigned to it. 5. In **Disks**, you can see information about the operating system and data disks on the VM. + ## Create a network for test failover We recommend that for test failover, you choose a network that's isolated from the production recovery site network specified in the **Compute and Network** settings for each VM. By default, when you create an Azure virtual network, it is isolated from other networks. The test network should mimic your production network:
When you run a test failover, the following happens:
Run the test failover as follows:
-1. In **Settings** > **Replicated Items**, click the VM > **+Test Failover**.
+1. In **Settings** > **Replicated Items**, select the VM > **+Test Failover**.
2. Select the **Latest processed** recovery point for this tutorial. This fails over the VM to the latest available point in time. The time stamp is shown. With this option, no time is spent processing data, so it provides a low RTO (recovery time objective). 3. In **Test Failover**, select the target Azure network to which Azure VMs will be connected after failover occurs.
-4. Click **OK** to begin the failover. You can track progress by clicking on the VM to open its
- properties. Or you can click the **Test Failover** job in vault name > **Settings** > **Jobs** >
+4. Select **OK** to begin the failover. You can track progress by selecting the VM to open its
+ properties. Or you can select the **Test Failover** job in vault name > **Settings** > **Jobs** >
**Site Recovery jobs**. 5. After the failover finishes, the replica Azure VM appears in the Azure portal > **Virtual Machines**. Check that the VM is the appropriate size, that it's connected to the right network, and that it's running. 6. You should now be able to connect to the replicated VM in Azure.
-7. To delete Azure VMs created during the test failover, click **Cleanup test failover** on the
+7. To delete Azure VMs created during the test failover, select **Cleanup test failover** on the
VM. In **Notes**, record and save any observations associated with the test failover. In some scenarios, failover requires additional processing that takes around eight to ten minutes
If you want to connect to Azure VMs using RDP/SSH after failover, [prepare to co
## Next steps
-> [!div class="nextstepaction"]
-> [Run a failover](avs-tutorial-failover.md)
+[Learn more](avs-tutorial-failover.md) about running a failover.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 12/07/2022 Last updated : 01/17/2023 -+ # Support matrix for Azure VM disaster recovery between Azure regions
Azure Site Recovery allows you to perform global disaster recovery. You can repl
America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, West US 3, Central US, North Central US
Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North (UAE is treated as part of the Europe geo cluster)
Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South, Qatar Central
-JIO | JIO India West<br/><br/>Replication cannot be done between JIO and non-JIO regions for Virtual Machines present in JIO subscriptions. This is because JIO subscriptions can have resources only in JIO regions.
+JIO | JIO India West<br/><br/>Replication can't be done between JIO and non-JIO regions for Virtual Machines present in JIO subscriptions. This is because JIO subscriptions can have resources only in JIO regions.
Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2
Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas, US DOD East, US DOD Central
Germany | Germany Central, Germany Northeast
China | China East, China North, China North2, China East2
Brazil | Brazil South
-Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, please get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions.
+Restricted Regions reserved for in-country disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get yourself allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions.
>[!NOTE] >
Premium storage | Supported | Use Premium Block Blob storage accounts to get Hig
Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected.
Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s).
Azure Storage firewalls for virtual networks | Supported | If you are using a firewall-enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of the source virtual network.<br></br>Note: Do not restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'.
-Soft delete | Not supported | Soft delete is not supported because once it is enabled on cache storage account, it increases cost. Azure Site Recovery performs very frequent creates/deletes of log files while replicating causing costs to increase.
+Soft delete | Not supported | Soft delete is not supported because once it is enabled on cache storage account, it increases cost. Azure Site Recovery performs frequent creates/deletes of log files while replicating causing costs to increase.
Encryption at rest (CMK) | Supported | Storage account encryption can be configured with customer managed keys (CMK)
-The table below lists the limits in terms of number of disks that can replicate to a single storage account.
+The following table lists the limits in terms of number of disks that can replicate to a single storage account.
**Storage account type** | **Churn = 4 MBps per disk** | **Churn = 8 MBps per disk**
--- | --- | ---
V2 storage account | 750 disks | 375 disks
As average churn on the disks increases, the number of disks that a storage account can support decreases. The above table may be used as a guide for making decisions on the number of storage accounts that need to be provisioned.
-Please note that the above limits are specific to Azure-to-Azure and Zone-to-Zone DR scenarios.
+Note that the above limits are specific to Azure-to-Azure and Zone-to-Zone DR scenarios.
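To make the sizing guidance concrete, here's a small illustrative calculation (not from the article) based on the V2 limits in the table above.

```python
import math

# Disks-per-account limits for a V2 storage account, from the table above.
LIMITS = {4: 750, 8: 375}  # average churn (MBps per disk) -> max disks per account

def cache_accounts_needed(disk_count: int, churn_mbps: int) -> int:
    """Estimate how many cache storage accounts to provision."""
    return math.ceil(disk_count / LIMITS[churn_mbps])

# Example: 1,000 replicated disks averaging 8 MBps churn each -> 3 accounts.
print(cache_accounts_needed(1000, 8))
```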
## Replicated machine operating systems
Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft
**Operating system** | **Details** |
-Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b)
-CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6.
+Red Hat Enterprise Linux | 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6,[7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/), [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609/), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher), [8.6](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b), 8.7 (Azure Site Recovery for 8.7 is not available in China regions).
+CentOS | 6.5, 6.6, 6.7, 6.8, 6.9, 6.10 </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, [7.8](https://support.microsoft.com/help/4564347/), [7.9 pre-GA version](https://support.microsoft.com/help/4578241/), 7.9 GA version is supported from 9.37 hot fix patch** </br> 8.0, 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4 (4.18.0-305.30.1.el8_4.x86_64 or later), 8.5 (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 (Azure Site Recovery for 8.7 is not available in China regions).
Ubuntu 14.04 LTS Server | Includes support for all 14.04.*x* versions; [Supported kernel versions](#supported-ubuntu-kernel-versions-for-azure-virtual-machines);
Ubuntu 16.04 LTS Server | Includes support for all 16.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal.
Ubuntu 18.04 LTS Server | Includes support for all 18.04.*x* versions; [Supported kernel version](#supported-ubuntu-kernel-versions-for-azure-virtual-machines)<br/><br/> Ubuntu servers using password-based authentication and sign-in, and the cloud-init package to configure cloud VMs, might have password-based sign-in disabled on failover (depending on the cloud-init configuration). Password-based sign-in can be re-enabled on the virtual machine by resetting the password from the Support > Troubleshooting > Settings menu of the failed-over VM in the Azure portal.
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
-14.04 LTS | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 14.04 LTS kernels supported in this release. |
-14.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 14.04 LTS kernels supported in this release. |
+14.04 LTS | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 14.04 LTS kernels supported in this release. |
14.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new 14.04 LTS kernels supported in this release. | |||
-16.04 LTS | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 16.04 LTS kernels supported in this release. |
-16.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 16.04 LTS kernels supported in this release. |
-16.04 LTS | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. |
-16.04 LTS | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new 16.04 LTS kernels supported in this release. |
+16.04 LTS | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 16.04 LTS kernels supported in this release. |
16.04 LTS | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1112-azure, 4.15.0-1113-azure | |||
-18.04 LTS |[9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic |
-18.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic |
-18.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic|
-18.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
+18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic |
+18.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic |
+18.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-191-generic </br> 4.15.0-192-generic </br>5.4.0-1089-azure </br>5.4.0-1090-azure </br>5.4.0-124-generic|
+18.04 LTS |[9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.15.0-1139-azure </br> 4.15.0-1142-azure </br> 4.15.0-1145-azure </br> 4.15.0-1146-azure </br> 4.15.0-180-generic </br> 4.15.0-184-generic </br> 4.15.0-187-generic </br> 4.15.0-188-generic </br> 4.15.0-189-generic </br> 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic |
18.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.15.0-1134-azure </br> 4.15.0-1136-azure </br> 4.15.0-1137-azure </br> 4.15.0-1138-azure </br> 4.15.0-173-generic </br> 4.15.0-175-generic </br> 4.15.0-176-generic </br> 4.15.0-177-generic </br> 5.4.0-105-generic </br> 5.4.0-1073-azure </br> 5.4.0-1074-azure </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-107-generic </br> 5.4.0-109-generic </br> 5.4.0-110-generic | |||
-20.04 LTS | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure |
-20.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic |
-20.04 LTS |[9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic |
-20.04 LTS |[9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. |
+20.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure |
+20.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic |
+20.04 LTS |[9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | 5.4.0-1080-azure </br> 5.4.0-1083-azure </br> 5.4.0-1085-azure </br> 5.4.0-1086-azure </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-113-generic </br> 5.4.0-117-generic </br> 5.4.0-120-generic </br> 5.4.0-121-generic </br> 5.4.0-122-generic </br> 5.4.0-124-generic </br> 5.4.0-125-generic |
+20.04 LTS |[9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new 20.04 LTS kernels supported in this release. |
20.04 LTS |[9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 5.4.0-1074-azure </br> 5.4.0-107-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.4.0-109-generic </br> 5.4.0-110-generic </br> 5.11.0-1007-azure </br> 5.11.0-1012-azure </br> 5.11.0-1013-azure </br> 5.11.0-1015-azure </br> 5.11.0-1017-azure </br> 5.11.0-1019-azure </br> 5.11.0-1020-azure </br> 5.11.0-1021-azure </br> 5.11.0-1022-azure </br> 5.11.0-1023-azure </br> 5.11.0-1025-azure </br> 5.11.0-1027-azure </br> 5.11.0-1028-azure </br> 5.11.0-22-generic </br> 5.11.0-25-generic </br> 5.11.0-27-generic </br> 5.11.0-34-generic </br> 5.11.0-36-generic </br> 5.11.0-37-generic </br> 5.11.0-38-generic </br> 5.11.0-40-generic </br> 5.11.0-41-generic </br> 5.11.0-43-generic </br> 5.11.0-44-generic </br> 5.11.0-46-generic </br> 5.4.0-1077-azure </br> 5.4.0-1078-azure </br> 5.8.0-1033-azure </br> 5.8.0-1036-azure </br> 5.8.0-1039-azure </br> 5.8.0-1040-azure </br> 5.8.0-1041-azure </br> 5.8.0-1042-azure </br> 5.8.0-1043-azure </br> 5.8.0-23-generic </br> 5.8.0-25-generic </br> 5.8.0-28-generic </br> 5.8.0-29-generic </br> 5.8.0-31-generic </br> 5.8.0-33-generic </br> 5.8.0-34-generic </br> 5.8.0-36-generic </br> 5.8.0-38-generic </br> 5.8.0-40-generic </br> 5.8.0-41-generic </br> 5.8.0-43-generic </br> 5.8.0-44-generic </br> 5.8.0-45-generic </br> 5.8.0-48-generic </br> 5.8.0-49-generic </br> 5.8.0-50-generic </br> 5.8.0-53-generic </br> 5.8.0-55-generic </br> 5.8.0-59-generic </br> 5.8.0-63-generic |
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
-Debian 7 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 7 kernels supported in this release. |
-Debian 7 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 7 kernels supported in this release. |
+Debian 7 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 7 kernels supported in this release. |
Debian 7 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 7 kernels supported in this release. |
-Debian 8 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 8 kernels supported in this release. |
-Debian 8 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 8 kernels supported in this release. |
+Debian 8 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | No new Debian 8 kernels supported in this release. |
Debian 8 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | No new Debian 8 kernels supported in this release. |
-Debian 9.1 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release. |
-Debian 9.1 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 9.1 kernels supported in this release. |
-Debian 9.1 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. |
-Debian 9.1 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64
+Debian 9.1 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release. |
+Debian 9.1 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 9.1 kernels supported in this release. |
+Debian 9.1 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 9.1 kernels supported in this release. |
+Debian 9.1 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.9.0-19-amd64
Debian 9.1 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.9.0-18-amd64 </br> 4.19.0-0.bpo.19-amd64 </br> 4.19.0-0.bpo.17-cloud-amd64 to 4.19.0-0.bpo.19-cloud-amd64 |
-Debian 10 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 10 kernels supported in this release. |
-Debian 10 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 |
-Debian 10 | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release.
-Debian 10 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64
+Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 10 kernels supported in this release. |
+Debian 10 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 |
+Debian 10 | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | No new Debian 10 kernels supported in this release.
+Debian 10 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 4.19.0-21-amd64 </br> 4.19.0-21-cloud-amd64 </br> 5.10.0-0.bpo.15-amd64 </br> 5.10.0-0.bpo.15-cloud-amd64
Debian 10 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.19.0-20-amd64 </br> 4.19.0-20-cloud-amd64 </br> 5.8.0-0.bpo.2-amd64, 5.8.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.2-amd64, 5.9.0-0.bpo.2-cloud-amd64, 5.9.0-0.bpo.5-amd64, 5.9.0-0.bpo.5-cloud-amd64, 5.10.0-0.bpo.7-amd64, 5.10.0-0.bpo.7-cloud-amd64, 5.10.0-0.bpo.9-amd64, 5.10.0-0.bpo.9-cloud-amd64, 5.10.0-0.bpo.11-amd64, 5.10.0-0.bpo.11-cloud-amd64, 5.10.0-0.bpo.12-amd64, 5.10.0-0.bpo.12-cloud-amd64 |
-Debian 11 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.10.0-10-amd64 </br> 5.10.0-10-cloud-amd64 </br> 5.10.0-12-amd64 <br> 5.10.0-12-cloud-amd64 <br> 5.10.0-13-amd64 <br> 5.10.0-13-cloud-amd64 <br> 5.10.0-14-amd64 <br> 5.10.0-14-cloud-amd64 <br> 5.10.0-15-amd64 <br> 5.10.0-15-cloud-amd64 <br> 5.10.0-16-amd64 <br> 5.10.0-16-cloud-amd64 <br> 5.10.0-17-amd64 <br> 5.10.0-17-cloud-amd64 <br> 5.10.0-18-amd64 <br> 5.10.0-18-cloud-amd64 <br> 5.10.0-19-amd64 <br> 5.10.0-19-cloud-amd64 |
+Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.10.0-10-amd64 </br> 5.10.0-10-cloud-amd64 </br> 5.10.0-12-amd64 <br> 5.10.0-12-cloud-amd64 <br> 5.10.0-13-amd64 <br> 5.10.0-13-cloud-amd64 <br> 5.10.0-14-amd64 <br> 5.10.0-14-cloud-amd64 <br> 5.10.0-15-amd64 <br> 5.10.0-15-cloud-amd64 <br> 5.10.0-16-amd64 <br> 5.10.0-16-cloud-amd64 <br> 5.10.0-17-amd64 <br> 5.10.0-17-cloud-amd64 <br> 5.10.0-18-amd64 <br> 5.10.0-18-cloud-amd64 <br> 5.10.0-19-amd64 <br> 5.10.0-19-cloud-amd64 |
> [!Note]
> To support the latest Linux kernels within 15 days of release, Azure Site Recovery rolls out a hot fix patch on top of the latest mobility agent version. This fix is rolled out between two major version releases. To update to the latest version of the mobility agent (including the hot fix patch), follow the steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in the Azure to Azure DR scenario.
Debian 11 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-fo
**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SUSE Linux Enterprise Server 12 kernels supported in this release. |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SUSE Linux Enterprise Server 12 kernels supported in this release. |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> No new SLES 12 Azure kernels supported in this release. |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>4.12.14-16.100-azure:5 </br> 4.12.14-16.103-azure:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 4.12.14-16.94-azure:5 </br> 4.12.14-16.97-azure:5 </br> 4.12.14-122.110-default:5 </br> 4.12.14-122.113-default:5 </br> 4.12.14-122.116-default:5 </br> 4.12.14-122.121-default:5 |
SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.48](https://suppo
**Release** | **Mobility service version** | **Kernel version** |
--- | --- | --- |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.83-azure:3 |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure |
-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/en-us/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 |
-SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/en-us/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>5.3.18-150300.38.69-azure:3 </br>
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.83-azure:3 |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure |
+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.50](https://support.microsoft.com/topic/update-rollup-63-for-azure-site-recovery-kb5017421-992e63af-aa94-4ea6-8d1b-2dd89a9cc70b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.75-azure:3 |
+SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.49](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br>5.3.18-150300.38.69-azure:3 </br>
SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.48](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 5.3.18-150300.38.37-azure:3 </br> 5.3.18-150300.38.40-azure:3 </br> 5.3.18-150300.38.47-azure:3 </br> 5.3.18-150300.38.50-azure:3 </br> 5.3.18-150300.38.53-azure:3 </br> 5.3.18-150300.38.56-azure:3 </br> 5.3.18-150300.38.59-azure:3 </br> 5.3.18-150300.38.62-azure:3 </br> 5.3.18-36-azure:3 </br> 5.3.18-38.11-azure:3 </br> 5.3.18-38.14-azure:3 </br> 5.3.18-38.17-azure:3 </br> 5.3.18-38.22-azure:3 </br> 5.3.18-38.25-azure:3 </br> 5.3.18-38.28-azure:3 </br> 5.3.18-38.31-azure:3 </br> 5.3.18-38.34-azure:3 </br> 5.3.18-38.3-azure:3 </br> 5.3.18-38.8-azure:3 |
Availability sets | Supported | If you enable replication for an Azure VM with t
Availability zones | Supported |
Dedicated Hosts | Not supported |
Hybrid Use Benefit (HUB) | Supported | If the source VM has a HUB license enabled, a test failover or failed over VM also uses the HUB license.
-Virtual machine scale set Flex | Availability scenario - supported. Scalability scenario - not supported. |
+Virtual Machine Scale Set Flex | Availability scenario - supported. Scalability scenario - not supported. |
Azure gallery images - Microsoft published | Supported | Supported if the VM runs on a supported operating system.
Azure Gallery images - Third party published | Supported | Supported if the VM runs on a supported operating system.
Custom images - Third party published | Supported | Supported if the VM runs on a supported operating system.
Tags | Supported | User-generated tags applied on source virtual machines are c
**Action** | **Details**
--- | ---
-Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, changes aren't captured by Site Recovery, and failback will be to the original VM size.<br/><br/> If resizing to >=4 TB, please note Azure guidance on disk caching [here](../virtual-machines/premium-storage-performance.md).
+Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, changes aren't captured by Site Recovery, and failback will be to the original VM size.<br/><br/> If resizing to >=4 TB, note Azure guidance on disk caching [here](../virtual-machines/premium-storage-performance.md).
Add a disk to a replicated VM | Supported
Offline changes to protected disks | Disconnecting disks and making offline modifications to them require triggering a full resync.
Disk caching | Disk Caching is not supported for disks 4 TiB and larger. If multiple disks are attached to your VM, each disk that is smaller than 4 TiB will support caching. Changing the cache setting of an Azure disk detaches and re-attaches the target disk. If it is the operating system disk, the VM is restarted. Stop all applications/services that might be affected by this disruption before changing the disk cache setting. Not following those recommendations could lead to data corruption.
Traffic Manager | Supported | You can preconfigure Traffic Manager so that t
Azure DNS | Supported |
Custom DNS | Supported |
Unauthenticated proxy | Supported | [Learn more](./azure-to-azure-about-networking.md)
-Authenticated Proxy | Not supported | If the VM is using an authenticated proxy for outbound connectivity, it cannot be replicated using Azure Site Recovery.
+Authenticated Proxy | Not supported | If the VM is using an authenticated proxy for outbound connectivity, it can't be replicated using Azure Site Recovery.
VPN site-to-site connection to on-premises<br/><br/>(with or without ExpressRoute)| Supported | Ensure that the UDRs and NSGs are configured in such a way that the Site Recovery traffic is not routed to on-premises. [Learn more](./azure-to-azure-about-networking.md)
VNET to VNET connection | Supported | [Learn more](./azure-to-azure-about-networking.md)
Virtual Network Service Endpoints | Supported | If you are restricting the virtual network access to storage accounts, ensure that the trusted Microsoft services are allowed access to the storage account.
site-recovery Migrate Tutorial Windows Server 2008 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/migrate-tutorial-windows-server-2008.md
This tutorial shows you how to migrate on-premises servers running Windows Serve
In this tutorial, you learn how to:
> [!div class="checklist"]
-> * Prepare your on-premises environment for migration.
-> * Set up the target environment.
-> * Set up a replication policy.
-> * Enable replication.
+> * Migrate on-premises Windows Server 2008 machines to Azure.
> * Run a test migration to make sure everything's working as expected.
> * Fail over to Azure and complete the migration.
We recommend that you migrate machines to Azure using the [Azure Migrate](../mig
|Operating System | Environment |
-|||
+|--|--|
|Windows Server 2008 SP2 - 32 bit and 64 bit (IA-32 and x86-64)</br>- Standard</br>- Enterprise</br>- Datacenter | VMware VMs, Hyper-V VMs, and Physical Servers |
|Windows Server 2008 R2 SP1 - 64 bit</br>- Standard</br>- Enterprise</br>- Datacenter | VMware VMs, Hyper-V VMs, and Physical Servers|
We recommend that you migrate machines to Azure using the [Azure Migrate](../mig
### Prerequisites
-Before you start, it's helpful to review the Azure Site Recovery architecture for [VMware and physical server migration](vmware-azure-architecture.md) or [Hyper-V virtual machine migration](hyper-v-azure-architecture.md)
+Before you start, it's helpful to review the Azure Site Recovery architecture for [VMware and physical server migration](vmware-azure-architecture.md) or [Hyper-V virtual machine migration](hyper-v-azure-architecture.md).
To migrate Hyper-V virtual machines running Windows Server 2008 or Windows Server 2008 R2, follow the steps in the [migrate on-premises machines to Azure](migrate-tutorial-on-premises-azure.md) tutorial.
The rest of this tutorial shows you how you can migrate on-premises VMware virtu
> [!TIP]
> A test failover is highly recommended before migrating servers. Ensure that you've performed at least one successful test failover on each server that you are migrating. As part of the test failover, connect to the test failed over machine and ensure things work as expected.
>
- >The test failover operation is non-disruptive and helps you test migrations by creating virtual machines in an isolated network of your choice. Unlike the failover operation, during the test failover operation, data replication continues to progres. You can perform as many test failovers as you like before you are ready to migrate.
+ >The test failover operation is non-disruptive and helps you test migrations by creating virtual machines in an isolated network of your choice. Unlike the failover operation, during the test failover operation, data replication continues to progress. You can perform as many test failovers as you like before you are ready to migrate.
>
The new vault is added to the **Dashboard** under **All resources**, and on the
### Prepare your on-premises environment for migration
-- To migrate Windows Server 2008 virtual machines running on VMware, [setup the on-premises Configuration Server on VMware](vmware-azure-tutorial.md#set-up-the-source-environment).
-- If the Configuration Server cannot be setup as a VMware virtual machine, [setup the Configuration Server on an on-premises physical server or virtual machine](physical-azure-disaster-recovery.md#set-up-the-source-environment).
+- To migrate Windows Server 2008 virtual machines running on VMware, [set up the on-premises Configuration Server on VMware](vmware-azure-tutorial.md#set-up-the-source-environment).
+- If the Configuration Server cannot be set up as a VMware virtual machine, [set up the Configuration Server on an on-premises physical server or virtual machine](physical-azure-disaster-recovery.md#set-up-the-source-environment).
### Set up the target environment
static-web-apps Application Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md
# Configure application settings for Azure Static Web Apps
-When you configure application settings and environment variables, you modify the configuration input to your app without the need to change application code, such as with database connection strings. You can also store secrets used in [authentication configuration](key-vault-secrets.md).
+Application settings hold configuration values that may change, such as database connection strings. Adding application settings allows you to modify the configuration input to your app, without having to change application code.
-Application settings are encrypted at rest, copied to [staging](review-publish-pull-requests.md) and production environments, used by backend APIs, and may only be alphanumeric characters, plus `.` and `_`.
+Application settings:
+
+- Are available as environment variables to the backend API of a static web app
+- Can be used to store secrets used in [authentication configuration](key-vault-secrets.md)
+- Are encrypted at rest
+- Are copied to [staging](review-publish-pull-requests.md) and production environments
+- May only be alphanumeric characters, `.`, and `_`
> [!IMPORTANT]
> The application settings described in this article only apply to the backend API of an Azure Static Web App.
const connectionString = process.env.DATABASE_CONNECTION_STRING;
The `local.settings.json` file isn't tracked by the GitHub repository because sensitive information, like database connection strings, is often included in the file. Since the local settings remain on your machine, you need to manually configure your settings in Azure.
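One way to do that is with the Azure CLI; a minimal sketch, where the app name, resource group, and connection string value are placeholders:

```azurecli
# Sketch: upload an application setting to a static web app.
# The app name, resource group, and setting value are placeholders.
az staticwebapp appsettings set \
    --name <static-web-app-name> \
    --resource-group <resource-group> \
    --setting-names DATABASE_CONNECTION_STRING="<connection-string>"
```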
-Generally, your settings are infrequently set, so aren't required with every build.
+Generally, configuring your settings is done infrequently, and isn't required with every build.
## Configure application settings
storage Archive Rehydrate To Online Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-to-online-tier.md
Title: Rehydrate an archived blob to an online tier
-description: Before you can read a blob that is in the Archive tier, you must rehydrate it to either the Hot or Cool tier. You can rehydrate a blob either by copying it from the Archive tier to an online tier, or by changing its tier from Archive to Hot or Cool.
+description: Before you can read a blob that is in the archive tier, you must rehydrate it to either the hot or cool tier. You can rehydrate a blob either by copying it from the archive tier to an online tier, or by changing its tier from archive to hot or cool.
Previously updated : 09/29/2022 Last updated : 01/17/2023 ms.devlang: powershell, azurecli
# Rehydrate an archived blob to an online tier
-To read a blob that is in the Archive tier, you must first rehydrate the blob to an online tier (Hot or Cool) tier. You can rehydrate a blob in one of two ways:
+To read a blob that is in the archive tier, you must first rehydrate the blob to an online tier (hot or cool). You can rehydrate a blob in one of two ways:
-- By copying it to a new blob in the Hot or Cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
-- By changing its tier from Archive to Hot or Cool with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
+- By copying it to a new blob in the hot or cool tier with the [Copy Blob](/rest/api/storageservices/copy-blob) operation.
+- By changing its tier from archive to hot or cool with the [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation.
When you rehydrate a blob, you can set the priority for the operation to either standard or high. A standard-priority rehydration operation may take up to 15 hours to complete. A high-priority operation is prioritized over standard-priority requests and may complete in less than one hour for objects under 10 GB in size. You can change the rehydration priority from *Standard* to *High* while the operation is pending.

You can configure Azure Event Grid to fire an event when rehydration is complete and run application code in response. To learn how to handle an event that runs an Azure Function when the blob rehydration operation is complete, see [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md).
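While the operation is pending, you can inspect the blob's properties to see its archive status and rehydrate priority; here's a minimal Azure CLI sketch, with placeholder account, container, and blob names:

```azurecli
# Sketch: show a rehydrating blob's properties; the archive status and
# rehydrate priority fields indicate the pending operation's state.
az storage blob show \
    --account-name <storage-account> \
    --container-name <container> \
    --name <archived-blob> \
    --auth-mode login
```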
-For more information about rehydrating a blob, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+For more information about rehydrating a blob, see [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
## Rehydrate a blob with a copy operation
-To rehydrate a blob from the Archive tier by copying it to an online tier, use PowerShell, Azure CLI, or one of the Azure Storage client libraries. Keep in mind that when you copy an archived blob to an online tier, the source and destination blobs must have different names.
+To rehydrate a blob from the archive tier by copying it to an online tier, use the Azure portal, PowerShell, Azure CLI, or one of the Azure Storage client libraries. Keep in mind that when you copy an archived blob to an online tier, the source and destination blobs must have different names.
Copying an archived blob to an online destination tier is supported within the same storage account. Beginning with service version 2021-02-12, you can copy an archived blob to a different storage account, as long as the destination account is in the same region as the source account.
-After the copy operation is complete, the destination blob appears in the Archive tier. The destination blob is then rehydrated to the online tier that you specified in the copy operation. When the destination blob is fully rehydrated, it becomes available in the new online tier.
+After the copy operation is complete, the destination blob appears in the archive tier. The destination blob is then rehydrated to the online tier that you specified in the copy operation. When the destination blob is fully rehydrated, it becomes available in the new online tier.
### Rehydrate a blob to the same storage account
-The following examples show how to copy an archived blob to a blob in the Hot tier in the same storage account.
+The following examples show how to copy an archived blob to a blob in the hot tier in the same storage account.
#### [Portal](#tab/azure-portal)
-N/A
+1. Navigate to the source storage account in the Azure portal.
+
+2. In the navigation pane for the storage account, select **Storage browser**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Storage explorer button in the navigation pane.](./media/archive-rehydrate-to-online-tier/open-storage-browser.png)
+
+3. In storage browser, navigate to the location of the archived blob, select the checkbox that appears beside the blob, and then select the **Copy** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the checkbox next to an archived blob and then the location of the copy button.](./media/archive-rehydrate-to-online-tier/copy-button.png)
+
+4. Navigate to the container where you would like to place the rehydrated blob, and then select the **Paste** button.
+
+ The **Paste archive blob** dialog box appears.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the paste archive blob dialog box.](./media/archive-rehydrate-to-online-tier/paste-dialog-box.png)
+
+ > [!NOTE]
> If you select the **Paste** button while in the same location as the source blob, then the default name that appears in the **Destination blob name** field contains a numeric suffix. This ensures that the source and destination blobs have different names. You can change this name if you want, as long as the name is different from the name of the source blob.
+
+5. In the **Paste archive blob** dialog box, choose an access tier and a rehydration priority. Then, select **Paste** to rehydrate the blob.
+
+ > [!IMPORTANT]
+ > Don't delete the source blob while it is rehydrating.
+
#### [PowerShell](#tab/azure-powershell)
$ctx = (Get-AzStorageAccount `
    -ResourceGroupName $rgName `
    -Name $accountName).Context
-# Copy the source blob to a new destination blob in Hot tier with Standard priority.
+# Copy the source blob to a new destination blob in hot tier with Standard priority.
Start-AzStorageBlobCopy -SrcContainer $srcContainerName `
    -SrcBlob $srcBlobName `
    -DestContainer $destContainerName `
    -DestBlob $destBlobName `
    -StandardBlobTier Hot `
    -RehydratePriority Standard `
    -Context $ctx
N/A
### Rehydrate a blob to a different storage account in the same region
-The following examples show how to copy an archived blob to a blob in the Hot tier in a different storage account.
+The following examples show how to copy an archived blob to a blob in the hot tier in a different storage account.
+
+> [!NOTE]
+> The destination and source account must be in the same region.
#### [Portal](#tab/azure-portal)
-N/A
+1. Navigate to the source storage account in the Azure portal.
+
+2. In the navigation pane for the storage account, select **Storage browser**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Storage explorer button in the navigation pane.](./media/archive-rehydrate-to-online-tier/open-storage-browser.png)
+
+3. In storage browser, navigate to the location of the archived blob, select the checkbox that appears beside the blob, and then select the **Copy** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of selecting the checkbox next to an archived blob and then the location of the copy button.](./media/archive-rehydrate-to-online-tier/copy-button.png)
+
+4. Navigate to the destination storage account, and in the navigation pane, select **Storage browser**.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of Storage explorer button in the navigation pane of the destination storage account.](./media/archive-rehydrate-to-online-tier/open-storage-browser-2.png)
+
+1. Navigate to the container where you would like to place the rehydrated blob, and then select the **Paste** button.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the location of the paste button.](./media/archive-rehydrate-to-online-tier/paste-button.png)
+
+ The **Paste archive blob** dialog box appears.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of the paste archive blob dialog box.](./media/archive-rehydrate-to-online-tier/paste-dialog-box.png)
+
+5. In the **Paste archive blob** dialog box, choose an access tier and a rehydration priority. Then, select **Paste** to rehydrate the blob.
+
+ > [!IMPORTANT]
+ > Don't delete the source blob while it is rehydrating.
#### [PowerShell](#tab/azure-powershell)

To copy an archived blob to a blob in an online tier in a different storage account with PowerShell, make sure you've installed the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage/) module, version 4.4.0 or higher. Next, call the [Start-AzStorageBlobCopy](/powershell/module/az.storage/start-azstorageblobcopy) command and specify the target online tier and the rehydration priority. You must specify a shared access signature (SAS) with read permissions for the archived source blob.
-The following example shows how to copy an archived blob to the Hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
+The following example shows how to copy an archived blob to the hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
Start-AzStorageBlobCopy -AbsoluteUri $srcBlobUri `
To copy an archived blob to a blob in an online tier in a different storage account with the Azure CLI, make sure you have installed version 2.35.0 or higher. Next, call the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and specify the target online tier and the rehydration priority. You must specify a shared access signature (SAS) with read permissions for the archived source blob.
-The following example shows how to copy an archived blob to the Hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
+The following example shows how to copy an archived blob to the hot tier in a different storage account. Remember to replace placeholders in angle brackets with your own values:
```azurecli
# Specify the expiry interval
srcBlobUri=$(az storage blob generate-sas \
    --as-user \
    --auth-mode login | tr -d '"')
-# Copy to the destination blob in the Hot tier
+# Copy to the destination blob in the hot tier
az storage blob copy start \
    --source-uri $srcBlobUri \
    --account-name <dest-account> \
    --destination-container <dest-container> \
    --destination-blob <dest-blob> \
    --tier Hot \
    --rehydrate-priority Standard \
    --auth-mode login
To learn more about obtaining read access to secondary regions, see [Read access
## Rehydrate a blob by changing its tier
-To rehydrate a blob by changing its tier from Archive to Hot or Cool, use the Azure portal, PowerShell, or Azure CLI.
+To rehydrate a blob by changing its tier from archive to hot or cool, use the Azure portal, PowerShell, or Azure CLI.
### [Portal](#tab/azure-portal)
-To change a blob's tier from Archive to Hot or Cool in the Azure portal, follow these steps:
+To change a blob's tier from archive to hot or cool in the Azure portal, follow these steps:
1. Locate the blob to rehydrate in the Azure portal.
1. Select the **More** button on the right side of the page.
To change a blob's tier from Archive to Hot or Cool in the Azure portal, follow
1. Select the target access tier from the **Access tier** dropdown.
1. From the **Rehydrate priority** dropdown, select the desired rehydration priority. Keep in mind that setting the rehydration priority to *High* typically results in a faster rehydration, but also incurs a greater cost.
- :::image type="content" source="media/archive-rehydrate-to-online-tier/rehydrate-change-tier-portal.png" alt-text="Screenshot showing how to rehydrate a blob from the Archive tier in the Azure portal ":::
+ :::image type="content" source="media/archive-rehydrate-to-online-tier/rehydrate-change-tier-portal.png" alt-text="Screenshot showing how to rehydrate a blob from the archive tier in the Azure portal. ":::
1. Select the **Save** button.

### [PowerShell](#tab/azure-powershell)
-To change a blob's tier from Archive to Hot or Cool with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier from archive to hot or cool with PowerShell, use the blob's **BlobClient** property to return a .NET reference to the blob, then call the **SetAccessTier** method on that reference. Remember to replace placeholders in angle brackets with your own values:
```azurepowershell
# Initialize these variables with your values.
$ctx = (Get-AzStorageAccount `
    -ResourceGroupName $rgName `
    -Name $accountName).Context
-# Change the blob's access tier to Hot with Standard priority.
+# Change the blob's access tier to hot with Standard priority.
$blob = Get-AzStorageBlob -Container $containerName -Blob $blobName -Context $ctx
$blob.BlobClient.SetAccessTier("Hot", $null, "Standard")
```

### [Azure CLI](#tab/azure-cli)
-To change a blob's tier from Archive to Hot or Cool with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
+To change a blob's tier from archive to hot or cool with Azure CLI, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command. Remember to replace placeholders in angle brackets with your own values:
```azurecli
az storage blob set-tier \
az storage blob set-tier \
### [AzCopy](#tab/azcopy)
-To change a blob's tier from Archive to Hot or Cool with AzCopy, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `-block-blob-tier` parameter to the desired tier, and the `--rehydrate-priority` to `standard` or `high`. By default, this parameter is set to `standard`. To learn more about the trade offs of each option, see [Rehydration priority](archive-rehydrate-overview.md#rehydration-priority).
+To change a blob's tier from archive to hot or cool with AzCopy, use the [azcopy set-properties](..\common\storage-ref-azcopy-set-properties.md) command and set the `--block-blob-tier` parameter to the desired tier, and the `--rehydrate-priority` to `standard` or `high`. By default, this parameter is set to `standard`. To learn more about the trade-offs of each option, see [Rehydration priority](archive-rehydrate-overview.md#rehydration-priority).
> [!IMPORTANT]
> The ability to change a blob's tier by using AzCopy is currently in PREVIEW.
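For example, a sketch of the command described above, with placeholder account, container, blob, and SAS token:

```azcopy
azcopy set-properties "https://<storage-account>.blob.core.windows.net/<container>/<blob>?<sas-token>" --block-blob-tier=hot --rehydrate-priority=high
```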
Keep in mind that rehydration of an archived blob may take up to 15 hours, and r
To check the status and priority of a pending rehydration operation in the Azure portal, display the **Change tier** dialog for the blob. When the rehydration is complete, you can see in the Azure portal that the fully rehydrated blob now appears in the targeted online tier.

### [PowerShell](#tab/azure-powershell)
N/A
While a standard-priority rehydration operation is pending, you can change the rehydration priority setting for a blob from *Standard* to *High* to rehydrate that blob more quickly.
-The rehydration priority setting can't be lowered from *High* to *Standard* for a pending operation. Also keep in mind that changing the rehydration priority may have a billing impact. For more information, see [Blob rehydration from the Archive tier](archive-rehydrate-overview.md).
+The rehydration priority setting can't be lowered from *High* to *Standard* for a pending operation. Also keep in mind that changing the rehydration priority may have a billing impact. For more information, see [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
### Change the rehydration priority for a pending Set Blob Tier operation
To change the rehydration priority for a pending operation with the Azure portal
1. Navigate to the blob for which you want to change the rehydration priority, and select the blob.
1. Select the **Change tier** button.
-1. In the **Change tier** dialog, set the access tier to the target online access tier for the rehydrating blob (Hot or Cool). The **Archive status** field shows the target online tier.
+1. In the **Change tier** dialog, set the access tier to the target online access tier for the rehydrating blob (hot or cool). The **Archive status** field shows the target online tier.
1. In the **Rehydrate priority** dropdown, set the priority to *High*.
1. Select **Save**.
- :::image type="content" source="media/archive-rehydrate-to-online-tier/update-rehydration-priority-portal.png" alt-text="Screenshot showing how to update the rehydration priority for a rehydrating blob in Azure portal":::
+ :::image type="content" source="media/archive-rehydrate-to-online-tier/update-rehydration-priority-portal.png" alt-text="Screenshot showing how to update the rehydration priority for a rehydrating blob in Azure portal.":::
#### [PowerShell](#tab/azure-powershell)
if ($rehydratingBlob.BlobProperties.RehydratePriority -eq "Standard")
if ($rehydratingBlob.BlobProperties.ArchiveStatus -eq "rehydrate-pending-to-hot")
{
    $rehydratingBlob.BlobClient.SetAccessTier("Hot", $null, "High")
- "Changing rehydration priority to High for blob moving to Hot tier."
+ "Changing rehydration priority to High for blob moving to hot tier."
}
if ($rehydratingBlob.BlobProperties.ArchiveStatus -eq "rehydrate-pending-to-cool")
{
    $rehydratingBlob.BlobClient.SetAccessTier("Cool", $null, "High")
- "Changing rehydration priority to High for blob moving to Cool tier."
+ "Changing rehydration priority to High for blob moving to cool tier."
}
}
```
if ($rehydratingBlob.BlobProperties.RehydratePriority -eq "Standard")
To change the rehydration priority for a pending operation with Azure CLI, first make sure that you've installed the Azure CLI, version 2.29.2 or later. For more information about installing the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
-Next, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command with the `--rehydrate-priority` parameter set to *High*. The target tier (Hot or Cool) must be the same tier that you originally specified for the rehydration operation. Remember to replace placeholders in angle brackets with your own values:
+Next, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command with the `--rehydrate-priority` parameter set to *High*. The target tier (hot or cool) must be the same tier that you originally specified for the rehydration operation. Remember to replace placeholders in angle brackets with your own values:
```azurecli
-# Update the rehydration priority for a blob moving to the Hot tier.
+# Update the rehydration priority for a blob moving to the hot tier.
az storage blob set-tier \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob> \
    --tier Hot \
    --rehydrate-priority High \
    --auth-mode login
N/A
### Change the rehydration priority for a pending Copy Blob operation
-When you rehydrate a blob by copying the archived blob to an online tier, Azure Storage immediately creates the destination blob in the Archive tier. The destination blob is then rehydrated to the target tier with the priority specified on the copy operation. For more information on rehydrating an archived blob with a copy operation, see [Copy an archived blob to an online tier](archive-rehydrate-overview.md#copy-an-archived-blob-to-an-online-tier).
+When you rehydrate a blob by copying the archived blob to an online tier, Azure Storage immediately creates the destination blob in the archive tier. The destination blob is then rehydrated to the target tier with the priority specified on the copy operation. For more information on rehydrating an archived blob with a copy operation, see [Copy an archived blob to an online tier](archive-rehydrate-overview.md#copy-an-archived-blob-to-an-online-tier).
-To perform the copy operation from the Archive tier to an online tier with Standard priority, use PowerShell, Azure CLI, or one of the Azure Storage client libraries. For more information, see [Rehydrate a blob with a copy operation](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-with-a-copy-operation). Next, to change the rehydration priority from *Standard* to *High* for the pending rehydration, call **Set Blob Tier** on the destination blob and specify the target tier.
+To perform the copy operation from the archive tier to an online tier with Standard priority, use PowerShell, Azure CLI, or one of the Azure Storage client libraries. For more information, see [Rehydrate a blob with a copy operation](archive-rehydrate-to-online-tier.md#rehydrate-a-blob-with-a-copy-operation). Next, to change the rehydration priority from *Standard* to *High* for the pending rehydration, call **Set Blob Tier** on the destination blob and specify the target tier.
#### [Portal](#tab/azure-portal)
-After you've initiated the copy operation, you'll see in the Azure portal, that both the source and destination blob are in the Archive tier. The destination blob is rehydrating with Standard priority.
+After you've initiated the copy operation, you'll see in the Azure portal that both the source and destination blobs are in the archive tier. The destination blob is rehydrating with Standard priority.
To change the rehydration priority for the destination blob, follow these steps:

1. Select the destination blob.
1. Select the **Change tier** button.
-1. In the **Change tier** dialog, set the access tier to the target online access tier for the rehydrating blob (Hot or Cool). The **Archive status** field shows the target online tier.
+1. In the **Change tier** dialog, set the access tier to the target online access tier for the rehydrating blob (hot or cool). The **Archive status** field shows the target online tier.
1. In the **Rehydrate priority** dropdown, set the priority to *High*.
1. Select **Save**.

The destination blob's properties page now shows that it's rehydrating with High priority.

#### [PowerShell](#tab/azure-powershell)
-After you've initiated the copy operation, check the properties of the destination blob. You'll see that the destination blob is in the Archive tier and is rehydrating with Standard priority.
+After you've initiated the copy operation, check the properties of the destination blob. You'll see that the destination blob is in the archive tier and is rehydrating with Standard priority.
```azurepowershell
# Initialize these variables with your values.
$destinationBlob.BlobProperties.ArchiveStatus
$destinationBlob.BlobProperties.RehydratePriority
```
-Next, call the **SetAccessTier** method via PowerShell to change the rehydration priority for the destination blob to *High*, as described in [Change the rehydration priority for a pending Set Blob Tier operation](#change-the-rehydration-priority-for-a-pending-set-blob-tier-operation). The target tier (Hot or Cool) must be the same tier that you originally specified for the rehydration operation. Check the properties again to verify that the blob is now rehydrating with High priority.
+Next, call the **SetAccessTier** method via PowerShell to change the rehydration priority for the destination blob to *High*, as described in [Change the rehydration priority for a pending Set Blob Tier operation](#change-the-rehydration-priority-for-a-pending-set-blob-tier-operation). The target tier (hot or cool) must be the same tier that you originally specified for the rehydration operation. Check the properties again to verify that the blob is now rehydrating with High priority.
#### [Azure CLI](#tab/azure-cli)
-After you've initiated the copy operation, check the properties of the destination blob. You'll see that the destination blob is in the Archive tier and is rehydrating with Standard priority.
+After you've initiated the copy operation, check the properties of the destination blob. You'll see that the destination blob is in the archive tier and is rehydrating with Standard priority.
```azurecli
az storage blob show \
az storage blob show \
    --account-name <storage-account> \
    --container-name <container> \
    --name <blob> \
    --auth-mode login
```
-Next, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command with the `--rehydrate-priority` parameter set to *High*, as described in [Change the rehydration priority for a pending Set Blob Tier operation](#change-the-rehydration-priority-for-a-pending-set-blob-tier-operation). The target tier (Hot or Cool) must be the same tier that you originally specified for the rehydration operation. Check the properties again to verify that the blob is now rehydrating with High priority.
+Next, call the [az storage blob set-tier](/cli/azure/storage/blob#az-storage-blob-set-tier) command with the `--rehydrate-priority` parameter set to *High*, as described in [Change the rehydration priority for a pending Set Blob Tier operation](#change-the-rehydration-priority-for-a-pending-set-blob-tier-operation). The target tier (hot or cool) must be the same tier that you originally specified for the rehydration operation. Check the properties again to verify that the blob is now rehydrating with High priority.
#### [AzCopy](#tab/azcopy)
N/A
## See also
-- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md).
-- [Overview of blob rehydration from the Archive tier](archive-rehydrate-overview.md)
+- [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md)
+- [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md)
- [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md)
- [Reacting to Blob storage events](storage-blob-event-overview.md)
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
There are several different ways to install and run Azurite on your local system
### [Visual Studio](#tab/visual-studio)
-Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). If you are running an earlier version of Visual Studio, you'll need to install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite GitHub repository.
+Azurite is automatically available with [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). If you're running an earlier version of Visual Studio, you'll need to install Azurite by using either Node Package Manager, DockerHub, or by cloning the Azurite GitHub repository.
### [Visual Studio Code](#tab/visual-studio-code)
With a few configurations, Azure Functions or ASP.NET projects start Azurite aut
#### Running Azurite from the command line
-You can find the Azurite executable file in the extensions folder of your Visual Studio installation. The specific location can vary based on which version of Visual Studio you have installed. For example, if you've installed Visual Studio 2022 professional edition on a Windows computer or Virtual Machine (VM), you would find the Azurite executable file at this location:
+You can find the Azurite executable file in the extensions folder of your Visual Studio installation. The specific location can vary based on which version of Visual Studio is installed. For example, if you've installed Visual Studio 2022 professional edition on a Windows computer or Virtual Machine (VM), you would find the Azurite executable file at this location:
`C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\Extensions\Microsoft\Azure Storage Emulator`.
Open the required location and start `azurite.exe`. After you run the executable, Azurite starts.
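For example, assuming the Visual Studio 2022 Professional path shown above, you could start the emulator from a command prompt like this:

```cmd
"C:\Program Files\Microsoft Visual Studio\2022\Professional\Common7\IDE\Extensions\Microsoft\Azure Storage Emulator\azurite.exe"
```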
#### Running Azurite from an Azure Functions project
-In Visual Studio 2022, create an **Azure Functions** project. As you create the project, choose the **Storage Emulator**.
+In Visual Studio 2022, create an **Azure Functions** project. While setting the project options, mark the box labeled **Use Azurite for runtime storage account**.
-> [!div class="mx-imgBorder"]
-> ![Storage emulator option in Azure Functions project](media/storage-use-azurite/visual-studio-azure-function-project-settings.png)
-After you create the project, Azurite starts automatically.
+After you create the project, Azurite starts automatically. The output looks similar to the following screenshot:
-> [!div class="mx-imgBorder"]
-> ![Azurite command-line output in Azure Functions project](media/storage-use-azurite/output-window-azure-functions-project.png)
#### Running Azurite from an ASP.NET project

In Visual Studio 2022, create an **ASP.NET Core Web App** project. Then, open the **Connected Services** dialog box, select **Add a service dependency**, and then select **Storage Azurite emulator**.
-> [!div class="mx-imgBorder"]
-> ![Connected services dialog box in ASP.NET Core Web App project](media/storage-use-azurite/connected-service-storage-emulator.png)
In the **Configure Storage Azurite emulator** dialog box, set the **Connection string name** field to `StorageConnectionString`, and then select **Finish**.
-> [!div class="mx-imgBorder"]
-> ![Configure Storage Azurite emulator dialog box](media/storage-use-azurite/connection-string-for-azurite-emulator-configuration.png)
-When the configuration completes, select **Close**. The Azurite emulator starts automatically.
+When the configuration completes, select **Close** and the Azurite emulator starts automatically. The output looks similar to the following screenshot:
-> [!div class="mx-imgBorder"]
-> ![Azurite command-line output in ASP.NET project](media/storage-use-azurite/output-window-asp-net-project.png)
### [Visual Studio Code](#tab/visual-studio-code)
azurite --skipApiVersionCheck
### Disable Production Style Url
-**Optional**. When using the fully-qualified domain name instead of the IP in request Uri host, by default Azurite will parse the storage account name from request Uri host. You can force the parsing of the storage account name from request Uri path by using `--disableProductStyleUrl`:
+**Optional**. When using the fully qualified domain name instead of the IP in request Uri host, by default Azurite will parse the storage account name from request Uri host. You can force the parsing of the storage account name from request Uri path by using `--disableProductStyleUrl`:
```cmd
azurite --disableProductStyleUrl
```
Start Azurite and use a customized connection string to access your account. The connection string is shown in the following example:
```
DefaultEndpointsProtocol=http;AccountName=account1;AccountKey=key1;BlobEndpoint=http://account1.blob.localhost:10000;QueueEndpoint=http://account1.queue.localhost:10001;TableEndpoint=http://account1.table.localhost:10002;
```
-Do not access default account in this way with Azure Storage Explorer. There is a bug that Storage Explorer is always adding account name in URL path, causing failures.
+Don't access the default account this way with Azure Storage Explorer. There's a bug where Storage Explorer always adds the account name to the URL path, causing failures.
-By default, when using Azurite with a production-style URL, the account name should be the host name in fully-qualified domain name such as "http://devstoreaccount1.blob.localhost:10000/container". To use production-style URL with account name in the URL path such as "http://foo.bar.com:10000/devstoreaccount1/container", make sure to use the `--disableProductStyleUrl` parameter when you start Azurite.
+By default, when using Azurite with a production-style URL, the account name should be the host name in the fully qualified domain name, such as `http://devstoreaccount1.blob.localhost:10000/container`. To use a production-style URL with the account name in the URL path, such as `http://foo.bar.com:10000/devstoreaccount1/container`, make sure to use the `--disableProductStyleUrl` parameter when you start Azurite.
If you use `host.docker.internal` as the request Uri host (for example, `http://host.docker.internal:10000/devstoreaccount1/container`), Azurite will always get the account name from the request Uri path. This is true regardless of whether you use the `--disableProductStyleUrl` parameter when you start Azurite.
stream-analytics Stream Analytics Define Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-outputs.md
Previously updated : 07/13/2022 Last updated : 01/18/2023

# Outputs from Azure Stream Analytics
Some output types support [partitioning](#partitioning), and [output batch sizes](#output-batch-size) vary to optimize throughput.
|[Azure Event Hubs](event-hubs-output.md)|Yes, need to set the partition key column in output configuration.|Access key, </br> Managed Identity|
|[Power BI](power-bi-output.md)|No|Azure Active Directory user, </br> Managed Identity|
|[Azure Table storage](table-storage-output.md)|Yes|Account key|
-|[Azure Service Bus queues](service-bus-queues-output.md)|Yes|Access key|
-|[Azure Service Bus topics](service-bus-topics-output.md)|Yes|Access key|
-|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key|
+|[Azure Service Bus queues](service-bus-queues-output.md)|Yes|Access key, </br> Managed Identity|
+|[Azure Service Bus topics](service-bus-topics-output.md)|Yes|Access key, </br> Managed Identity|
+|[Azure Cosmos DB](azure-cosmos-db-output.md)|Yes|Access key, </br> Managed Identity|
|[Azure Functions](azure-functions-output.md)|Yes|Access key|

> [!IMPORTANT]
By design, the Avro and Parquet formats don't support variable schemas in a single file.
The following behaviors may occur when directing a stream with variable schemas to an output using these formats:

-- If the schema change can be detected, the current output file will be closed, and a new one initialized on the new schema. Splitting files as such will severely slow down the output when schema changes happen frequently. With back pressure this will in turn severly impact the overall performance of the job
+- If the schema change can be detected, the current output file will be closed, and a new one initialized on the new schema. Splitting files as such will severely slow down the output when schema changes happen frequently. With back pressure this will in turn severely impact the overall performance of the job
- If the schema change cannot be detected, the row will most likely be rejected, and the job will become stuck because the row can't be output. Nested columns and multi-type arrays are situations that won't be discovered and will be rejected.

It's highly recommended to consider outputs using the Avro or Parquet format to be strongly typed, or schema-on-write, and to write queries targeting them as such (explicit conversions and projections for a uniform schema), as sketched below.
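A minimal sketch of such a schema-on-write query follows; the input name `Input`, output name `ParquetOutput`, and fields are illustrative assumptions. Every projected column is explicitly cast so the output schema stays uniform:

```sql
SELECT
    CAST(deviceId AS nvarchar(max)) AS deviceId,
    CAST(temperature AS float) AS temperature,
    CAST(EventEnqueuedUtcTime AS datetime) AS eventTime
INTO ParquetOutput
FROM Input
```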
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
On this panel, you can reference the Spark job definition to run.
| Property | Description |
| -- | -- |
|Main definition file| The main file used for the job. Select a PY/JAR/ZIP file from your storage. You can select **Upload file** to upload the file to a storage account. <br> Sample: `abfss://…/path/to/wordcount.jar`|
+ | References from subfolders | Subfolders under the root folder of the main definition file are scanned, and the files found are added as reference files. The folders named "jars", "pyFiles", "files", or "archives" are scanned, and the folder names are case sensitive. |
|Main class name| The fully qualified identifier of the main class that is in the main definition file. <br> Sample: `WordCount`|
|Command-line arguments| You can add command-line arguments by clicking the **New** button. Note that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
|Apache Spark pool| You can select an Apache Spark pool from the list.|
On this panel, you can reference the Spark job definition to run.
|Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
|Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
|Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
+ |Spark configuration| Specify values for Spark configuration properties listed in the topic: Spark Configuration - Application properties. Users can use the default configuration or a customized configuration. |
![Spark job definition pipeline settings](media/quickstart-transform-data-using-spark-job-definition/spark-job-definition-pipline-settings.png)
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 12/01/2022 Last updated : 01/17/2023
virtual-machines Windows In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows-in-place-upgrade.md
+
+ Title: Windows in-place upgrade
+description: This article describes how to do an in-place upgrade for VMs running Windows Server in Azure.
+ Last updated : 01/17/2023
+# In-place upgrade for VMs running Windows Server in Azure
+
+An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article describes how to move your Azure VMs to a later version of Windows Server by using an in-place upgrade.
+
+Before you begin an in-place upgrade:
+
+- Review the upgrade requirements for the target operating system:
+
+ - Upgrade options for Windows Server 2019
+
+ - Upgrade options for Windows Server 2022
+
+- Verify the operating system disk has enough [free space to perform the in-place upgrade](/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements). If additional space is needed, [follow these steps](/azure/virtual-machines/windows/expand-os-disk) to expand the operating system disk attached to the VM. (A quick free-space check is sketched after this list.)
+
+- Disable antivirus and anti-spyware software and firewalls. These types of software can conflict with the upgrade process. Re-enable antivirus and anti-spyware software and firewalls after the upgrade is completed.
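+
+As a quick sanity check from inside the VM, a minimal PowerShell sketch (assuming the operating system volume is C:) shows the free space in gigabytes:
+
+```powershell
+# Report free space on the OS volume; the in-place upgrade requires
+# sufficient free disk space on this drive (see the linked requirements).
+Get-PSDrive -Name C |
+    Select-Object Name, @{Name = 'FreeGB'; Expression = { [math]::Round($_.Free / 1GB, 1) }}
+```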
+
+## Windows versions not yet supported for in-place system upgrades
+For the following versions, consider using the work-around in the next section:
+
+- Windows Server 2012 Datacenter
+- Windows Server 2012 Standard
+- Windows Server 2008 R2 Datacenter
+- Windows Server 2008 R2 Standard
+### Work-around
+To work around this issue, create an Azure VM that's running a supported version, and then either migrate the workload (Method 1, preferred) or download and upgrade the VHD of the VM (Method 2).
+To prevent data loss, back up the VM by using [Azure Backup](../backup/backup-overview.md). Or use a third-party backup solution from [Azure Marketplace Backup & Recovery](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=Backup+&exp=ubp8).
+#### Method 1: Deploy a newer system and migrate the workload
+
+Create an Azure VM that runs a supported version of the operating system, and then migrate the workload. To do so, you'll use Windows Server migration tools. For instructions to migrate Windows Server roles and features, see [Install, use, and remove Windows Server migration tools](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012).
++
+#### Method 2: Download and upgrade the VHD
+1. Do an in-place upgrade in a local Hyper-V VM
+ 1. [Download the VHD](./windows/download-vhd.md) of the VM.
+ 2. Attach the VHD to a local Hyper-V VM.
+ 3. Start the VM.
+ 4. Run the in-place upgrade.
+2. Upload the VHD to Azure. For more information, see [Upload a generalized VHD and use it to create new VMs in Azure](./windows/upload-generalized-managed.md).
++
+## Upgrade VM to volume license (KMS server activation)
+
+The upgrade media provided by Azure requires the VM to be configured for Windows Server volume licensing. This is the default behavior for any Windows Server VM that was installed from a generalized image in Azure. If the VM was imported into Azure, it may need to be converted to volume licensing to use the upgrade media provided by Azure. To confirm the VM is configured for volume license activation, follow these steps to [configure the appropriate KMS client setup key](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-1-configure-the-appropriate-kms-client-setup-key). If the activation configuration was changed, follow these steps to [verify connectivity to Azure KMS service](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-2-verify-the-connectivity-between-the-vm-and-azure-kms-service).
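+
+A condensed sketch of those two checks from an elevated command prompt follows; `<KMS client setup key>` is a placeholder for the key that matches your Windows Server version:
+
+```cmd
+rem Install the KMS client setup key for this Windows Server version
+slmgr /ipk <KMS client setup key>
+rem Point activation at the Azure KMS endpoint and attempt activation
+slmgr /skms kms.core.windows.net:1688
+slmgr /ato
+```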
+
+
+
+## Upgrade to Managed Disks
+
+The in-place upgrade process requires the use of Managed Disks on the VM to be upgraded. Most VMs in Azure use Managed Disks, and retirement of unmanaged disk support was announced in November 2022. If the VM currently uses unmanaged disks, follow these steps to [migrate to Managed Disks](./windows/migrate-to-managed-disks.md).
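+
+For a VM that still uses unmanaged disks, the conversion itself can be sketched in a few lines of PowerShell (resource group and VM names are placeholders):
+
+```azurepowershell-interactive
+# Deallocate the VM, convert all of its unmanaged disks to Managed Disks,
+# and then start it again.
+Stop-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" -Force
+ConvertTo-AzVMManagedDisk -ResourceGroupName "myResourceGroup" -VMName "myVM"
+Start-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+```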
+
+
+
+## Create snapshot of the operating system disk
+
+We recommend that you create a snapshot of your operating system disk and any data disks before starting the in-place upgrade process. This will enable you to revert to the previous state of the VM if anything fails during the in-place upgrade process. To create a snapshot on each disk, follow these steps to [create a snapshot of a disk](/azure/virtual-machines/snapshot-copy-managed-disk).
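+
+A minimal sketch of snapshotting the operating system disk with PowerShell (names are placeholders; repeat the pattern for each data disk):
+
+```azurepowershell-interactive
+# Create a snapshot of the VM's OS disk so the upgrade can be rolled back
+$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+$snapshotConfig = New-AzSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id `
+    -Location $vm.Location -CreateOption Copy
+New-AzSnapshot -ResourceGroupName "myResourceGroup" -SnapshotName "myVM-os-snapshot" `
+    -Snapshot $snapshotConfig
+```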
+
+
+## Create upgrade media disk
+
+To perform an in-place upgrade, the upgrade media must be attached to the VM as a Managed Disk. To create the upgrade media, use the following PowerShell script with the variables configured for the desired upgrade media. The resulting upgrade media disk can be used to upgrade multiple VMs, but only one VM at a time. To upgrade multiple VMs simultaneously, create a separate upgrade disk for each concurrent upgrade.
+
+| Parameter | Definition |
+|||
+| resourceGroup | Name of the resource group where the upgrade media Managed Disk will be created. The named resource group will be created if it doesn't exist. |
+| location | Azure region where the upgrade media Managed Disk will be created. This must be the same region as the VM to be upgraded. |
+| zone | Azure zone in the selected region where the upgrade media Managed Disk will be created. This must be the same zone as the VM to be upgraded. For regional VMs (non-zonal) the zone parameter should be "". |
+| diskName | Name of the Managed Disk that will contain the upgrade media |
+| sku | Windows Server upgrade media version. This must be either: `server2022Upgrade` or `server2019Upgrade` |
+
+### Script contents
+
+```azurepowershell-interactive
+#
+# Customer specific parameters
+#
+$resourceGroup = "WindowsServerUpgrades"
+$location = "WestUS2"
+$diskName = "WindowsServer2022UpgradeDisk"
+$zone = ""
+
+#
+# Selection of upgrade target version
+#
+$sku = "server2022Upgrade"
+
+#
+# Common parameters
+#
+$publisher = "MicrosoftWindowsServer"
+$offer = "WindowsServerUpgrade"
+$managedDiskSKU = "Standard_LRS"
+
+#
+# Get the latest version of the image
+#
+$versions = Get-AzVMImage -PublisherName $publisher -Location $location -Offer $offer -Skus $sku | sort-object -Descending {[version] $_.Version }
+$latestString = $versions[0].Version
+
+#
+# Get Image
+#
+
+$image = Get-AzVMImage -Location $location `
+ -PublisherName $publisher `
+ -Offer $offer `
+ -Skus $sku `
+ -Version $latestString
+
+#
+# Create Resource Group if it doesn't exist
+#
+
+if (-not (Get-AzResourceGroup -Name $resourceGroup -ErrorAction SilentlyContinue)) {
+ New-AzResourceGroup -Name $resourceGroup -Location $location
+}
+
+#
+# Create Managed Disk from LUN 0
+#
+
+if ($zone){
+ $diskConfig = New-AzDiskConfig -SkuName $managedDiskSKU `
+ -CreateOption FromImage `
+ -Zone $zone `
+ -Location $location
+} else {
+ $diskConfig = New-AzDiskConfig -SkuName $managedDiskSKU `
+ -CreateOption FromImage `
+ -Location $location
+}
+
+Set-AzDiskImageReference -Disk $diskConfig -Id $image.Id -Lun 0
+
+New-AzDisk -ResourceGroupName $resourceGroup `
+ -DiskName $diskName `
+ -Disk $diskConfig
+
+```
+
+## Attach upgrade media to the VM
+
+Attach the upgrade media for the target Windows Server version to the VM that will be upgraded. This can be done while the VM is in the running or stopped state.
+
+### Portal instructions
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Virtual machines**.
+
+1. Select a virtual machine to perform the in-place upgrade from the list.
+
+1. On the **Virtual machine** page, select **Disks**.
+
+1. On the **Disks** page, select **Attach existing disks**.
+
+1. In the drop-down for **Disk name**, select the name of the upgrade disk created in the previous step.
+
+1. Select **Save** to attach the upgrade disk to the VM.
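+
+The attachment can also be scripted. This PowerShell sketch assumes the `$resourceGroup` and `$diskName` values from the media-creation script above, a placeholder VM name, and that LUN 1 is unused:
+
+```azurepowershell-interactive
+# Attach the upgrade media disk to the VM on an unused LUN
+$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name "myVM"
+$disk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName
+$vm = Add-AzVMDataDisk -VM $vm -Name $diskName -CreateOption Attach `
+    -ManagedDiskId $disk.Id -Lun 1
+Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
+```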
+
+
+
+## Perform in-place upgrade
+
+To initiate the in-place upgrade, the VM must be in the `Running` state. Once the VM is running, use the following steps to perform the upgrade.
+
+1. Connect to the VM using [RDP](./windows/connect-rdp.md#connect-to-the-virtual-machine) or [RDP-Bastion](../bastion/bastion-connect-vm-rdp-windows.md#rdp).
+
+1. Determine the drive letter for the upgrade disk (typically E: or F: if there are no other data disks).
+
+1. Start Windows PowerShell.
+
+1. Change directory to the only directory on the upgrade disk.
+
+1. Execute the following command to start the upgrade:
+
+ ```powershell
+ .\setup.exe /auto upgrade /dynamicupdate disable
+ ```
+
+1. Select the correct "Upgrade to" image based on the current version and configuration of the VM using the following table:
+
+| Upgrade from | Upgrade to |
+|||
+| Windows Server 2012 R2 (Core) | Windows Server 2019 |
+| Windows Server 2012 R2 | Windows Server 2019 (Desktop Experience) |
+| Windows Server 2016 (Core) | Windows Server 2019 -or- Windows Server 2022 |
+| Windows Server 2016 (Desktop Experience) | Windows Server 2019 (Desktop Experience) -or- Windows Server 2022 (Desktop Experience) |
+| Windows Server 2019 (Core) | Windows Server 2022 |
+| Windows Server 2019 (Desktop Experience) | Windows Server 2022 (Desktop Experience) |
++
+
+
+During the upgrade process, the VM automatically disconnects from the RDP session. After the VM is disconnected, the progress of the upgrade can be monitored through the [screenshot functionality available in the Azure portal](/troubleshoot/azure/virtual-machines/boot-diagnostics#enable-boot-diagnostics-on-existing-virtual-machine).
+
+## Post upgrade steps
+
+Once the upgrade process has completed successfully, take the following steps to clean up any artifacts that were created during the upgrade process:
+
+- Delete the snapshots of the OS disk and data disk(s) if they were created.
+
+- Delete the upgrade media Managed Disk.
+
+- Enable any antivirus, anti-spyware or firewall software that may have been disabled at the start of the upgrade process.
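+
+A minimal cleanup sketch for the first two items, assuming the snapshot name from the snapshot sketch earlier and the disk name from the media-creation script:
+
+```azurepowershell-interactive
+# Remove the rollback snapshot and the upgrade media disk once the
+# upgraded VM has been verified.
+Remove-AzSnapshot -ResourceGroupName "myResourceGroup" -SnapshotName "myVM-os-snapshot" -Force
+Remove-AzDisk -ResourceGroupName "WindowsServerUpgrades" -DiskName "WindowsServer2022UpgradeDisk" -Force
+```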
+
+## Recovery from failures
+
+If the in-place upgrade process fails to complete successfully, you can return to the previous version of the VM, provided that snapshots of the operating system disk and data disk(s) were created. To revert the VM to its previous state using the snapshots, complete the following steps:
+
+1. Create a new Managed Disk from the OS disk snapshot and each data disk snapshot by following the steps in [Create a disk from a snapshot](/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot), making sure to create the disks in the same Availability Zone as the VM if the VM is in a zone.
+
+1. Stop the VM.
+
+1. [Swap the OS disk](/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot) of the VM (see the sketch after these steps).
+
+1. [Detach any data disks](/azure/virtual-machines/windows/detach-disk) from the VM.
+
+1. [Attach data disks](/azure/virtual-machines/windows/attach-managed-disk-portal) created from the snapshots in step 1.
+
+1. Restart the VM.
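+
+As a sketch of the disk-swap step (step 3) in PowerShell, assuming placeholder names and that the VM is stopped:
+
+```azurepowershell-interactive
+# Swap the VM's OS disk for the disk created from the pre-upgrade snapshot
+$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
+$restoredDisk = Get-AzDisk -ResourceGroupName "myResourceGroup" -DiskName "myVM-os-restored"
+Set-AzVMOSDisk -VM $vm -ManagedDiskId $restoredDisk.Id -Name $restoredDisk.Name
+Update-AzVM -ResourceGroupName "myResourceGroup" -VM $vm
+```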
++
+## Next steps
+
+For more information, see [Perform an in-place upgrade of Windows Server](/windows-server/get-started/perform-in-place-upgrade).
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/demo.md
documentationcenter:
+editor: swread
+ Last updated 02/22/2019
tags:
virtual-machines Deploy Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/deploy-ibm-db2-purescale-azure.md
Title: Deploy IBM DB2 pureScale on Azure
description: Learn how to deploy an example architecture used recently to migrate an enterprise from its IBM DB2 environment running on z/OS to IBM DB2 pureScale on Azure.
+editor: swread
+ Last updated 11/09/2018

# Deploy IBM DB2 pureScale on Azure
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/get-started.md
documentationcenter:
+editor: swread
+ Last updated 02/22/2019
tags:
virtual-machines Ibm Db2 Purescale Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/ibm-db2-purescale-azure.md
Title: IBM DB2 pureScale on Azure
description: In this article, we show an architecture for running an IBM DB2 pureScale environment on Azure.
+editor: swread
+ Last updated 11/09/2018

# IBM DB2 pureScale on Azure
virtual-machines Install Ibm Z Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/ibm/install-ibm-z-environment.md
documentationcenter:
+editor: swread
+ Last updated 04/02/2019
tags:
virtual-machines Demo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/demo.md
Title: Set up Micro Focus CICS BankDemo for Micro Focus Enterprise Developer 4.0 on Azure Virtual Machines
description: Run the Micro Focus BankDemo application on Azure Virtual Machines (VMs) to learn to use Micro Focus Enterprise Server and Enterprise Developer.
+editor: swread
+ Last updated 03/30/2020
virtual-machines Deploy Micro Focus Enterprise Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/deploy-micro-focus-enterprise-server.md
Title: Deploy Micro Focus Enterprise Server 5.0 to AKS | Microsoft Docs
description: Rehost your IBM z/OS mainframe workloads using the Micro Focus development and test environment on Azure virtual machines (VMs).
documentationcenter:
+editor: swread
+ Last updated 06/29/2020
tags:
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/get-started.md
Title: Micro Focus dev/test environments on Azure | Microsoft Docs
description: Rehost your IBM z/OS mainframe workloads using Micro Focus solutions on Azure virtual machines (VMs).
+editor: swread
+ Last updated 03/30/2020
virtual-machines Run Enterprise Server Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/run-enterprise-server-container.md
Title: Run Micro Focus Enterprise Server 5.0 in a Docker container on Azure | Microsoft Docs
description: In this article, learn how to run Micro Focus Enterprise Server 5.0 in a Docker container on Microsoft Azure.
documentationcenter:
+editor: swread
+ Last updated 06/29/2020
tags:
virtual-machines Set Up Micro Focus Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/mainframe-rehosting/microfocus/set-up-micro-focus-azure.md
Title: Install Micro Focus Enterprise Server 5.0 and Enterprise Developer 5.0 on Azure
description: In this article, learn how to install Micro Focus Enterprise Server 5.0 and Enterprise Developer 5.0 on Microsoft Azure.
documentationcenter:
+editor: swread
+ Last updated 06/29/2020
tags:
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
To learn more about IP address pricing in Azure, review the [IP address pricing]
* Forward DNS for IPv6 is supported for Azure public DNS. Reverse DNS isn't supported.
-* Routing Preference and cross-region load balancer aren't supported.
+* Routing Preference Internet isn't supported.
For more information on IPv6 in Azure, see [here](ipv6-overview.md).