Updates from: 11/03/2022 02:08:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo) will see an entry for the Temporary Access Pass.
![Screenshot of how users can manage a Temporary Access Pass in My Security Info.](./media/how-to-authentication-temporary-access-pass/tap-my-security-info.png)

### Windows device setup
-Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello For Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the devices joined state:
-- During Azure AD Join setup, users can authenticate with a TAP (no password required) and setup Windows Hello for Business.
-- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
-- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello for Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the device's joined state.
+
+For Azure AD Joined devices:
+- During the Azure AD Join setup process, users can authenticate with a TAP (no password required) to join the device and register Windows Hello for Business.
+- On already joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+- If the [Web sign-in](https://learn.microsoft.com/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign in to the device. This is intended only for completing initial device setup, or for recovery when the user doesn't know or have a password.
+
+For Hybrid Azure AD Joined devices:
+- Users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
![Screenshot of how to enter Temporary Access Pass when setting up Windows 10.](./media/how-to-authentication-temporary-access-pass/windows-10-tap.png)
If MFA is required for the resource tenant, the guest user needs to perform MFA in their home tenant.
### Expiration

An expired or deleted Temporary Access Pass can't be used for interactive or non-interactive authentication.
-Users need to reauthenticate with different authentication methods after the Temporary Access Pass is expired or deleted.
+Users need to reauthenticate with a different authentication method after the Temporary Access Pass expires or is deleted.
+
+The lifetime of tokens (session token, refresh token, access token, and so on) obtained via a Temporary Access Pass sign-in is limited to the Temporary Access Pass lifetime. As a result, when a Temporary Access Pass expires, the tokens associated with it expire as well.
## Delete an expired Temporary Access Pass
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-applications-are-added.md
Previously updated : 12/01/2020 Last updated : 10/26/2022

# How and why applications are added to Azure AD
-There are two representations of applications in Azure AD:
+There are two representations of applications in Azure Active Directory (Azure AD):
-* [Application objects](app-objects-and-service-principals.md#application-object) - Although there are [exceptions](#notes-and-exceptions), application objects can be considered the definition of an application.
-* [Service principals](app-objects-and-service-principals.md#service-principal-object) - Can be considered an instance of an application.
-Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
+- [Application objects](app-objects-and-service-principals.md#application-object) - Although there are [exceptions](#notes-and-exceptions), application objects can be considered the definition of an application.
+- [Service principals](app-objects-and-service-principals.md#service-principal-object) - Can be considered an instance of an application.
+ Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
## What are application objects and where do they come from?
-You can manage [application objects](app-objects-and-service-principals.md#application-object) in the Azure portal through the [App Registrations](https://aka.ms/appregistrations) experience. Application objects describe the application to Azure AD and can be considered the definition of the application, allowing the service to know how to issue tokens to the application based on its settings. The application object will only exist in its home directory, even if it's a multi-tenant application supporting service principals in other directories. The application object may include any of the following (as well as additional information not mentioned here):
+You can manage [application objects](app-objects-and-service-principals.md#application-object) in the Azure portal through the [App registrations](https://aka.ms/appregistrations) experience. Application objects describe the application to Azure AD and can be considered the definition of the application, allowing the service to know how to issue tokens to the application based on its settings. The application object will only exist in its home directory, even if it's a multi-tenant application supporting service principals in other directories. The application object may include (but isn't limited to) any of the following:
-* Name, logo, and publisher
-* Redirect URIs
-* Secrets (symmetric and/or asymmetric keys used to authenticate the application)
-* API dependencies (OAuth)
-* Published APIs/resources/scopes (OAuth)
-* App roles (RBAC)
-* SSO metadata and configuration
-* User provisioning metadata and configuration
-* Proxy metadata and configuration
+- Name, logo, and publisher
+- Redirect URIs
+- Secrets (symmetric and/or asymmetric keys used to authenticate the application)
+- API dependencies (OAuth)
+- Published APIs/resources/scopes (OAuth)
+- App roles
+- Single sign-on (SSO) metadata and configuration
+- User provisioning metadata and configuration
+- Proxy metadata and configuration
Application objects can be created through multiple pathways, including:
-* Application registrations in the Azure portal
-* Creating a new application using Visual Studio and configuring it to use Azure AD authentication
-* When an admin adds an application from the app gallery (which will also create a service principal)
-* Using the Microsoft Graph API or PowerShell to create a new application
-* Many others including various developer experiences in Azure and in API explorer experiences across developer centers
+- Application registrations in the Azure portal
+- Creating a new application using Visual Studio and configuring it to use Azure AD authentication
+- When an admin adds an application from the app gallery (which will also create a service principal)
+- Using the Microsoft Graph API or PowerShell to create a new application (see the sketch after this list)
+- Many others, including various developer experiences in Azure and in API explorer experiences across developer centers
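As a rough sketch of the Microsoft Graph pathway, a request like the following creates an application object. The display name, audience, and redirect URI are placeholder values, and the caller is assumed to hold a permission that allows creating applications (for example, `Application.ReadWrite.All`).

```HTTP
// A minimal sketch of creating an application object with Microsoft Graph.
// The displayName, signInAudience, and redirectUris values are placeholders.
POST https://graph.microsoft.com/v1.0/applications
Content-Type: application/json

{
  "displayName": "Example app",
  "signInAudience": "AzureADMyOrg",
  "web": {
    "redirectUris": [ "https://localhost/example" ]
  }
}
```

The response returns the new application object, including its `appId`, which service principals can later reference.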
## What are service principals and where do they come from?
-You can manage [service principals](app-objects-and-service-principals.md#service-principal-object) in the Azure portal through the [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) experience. Service principals are what govern an application connecting to Azure AD and can be considered the instance of the application in your directory. For any given application, it can have at most one application object (which is registered in a "home" directory) and one or more service principal objects representing instances of the application in every directory in which it acts.
+You can manage [service principals](app-objects-and-service-principals.md#service-principal-object) in the Azure portal through the [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) experience. Service principals are what govern an application connecting to Azure AD and can be considered the instance of the application in your directory. For any given application, it can have at most one application object (which is registered in a "home" directory), and one or more service principal objects representing instances of the application in every directory in which it acts.
The service principal can include:
-* A reference back to an application object through the application ID property
-* Records of local user and group application-role assignments
-* Records of local user and admin permissions granted to the application
- * For example: permission for the application to access a particular user's email
-* Records of local policies including Conditional Access policy
-* Records of alternate local settings for an application
- * Claims transformation rules
- * Attribute mappings (User provisioning)
- * Directory-specific app roles (if the application supports custom roles)
- * Directory-specific name or logo
+- A reference back to an application object through the application ID property
+- Records of local user and group application-role assignments
+- Records of local user and admin permissions granted to the application
+ - For example: permission for the application to access a particular user's email
+- Records of local policies including Conditional Access policy
+- Records of alternate local settings for an application
+ - Claims transformation rules
+ - Attribute mappings (User provisioning)
+ - Directory-specific app roles (if the application supports custom roles)
+ - Directory-specific name or logo
Like application objects, service principals can also be created through multiple pathways, including:
-* When users sign in to a third-party application integrated with Azure AD
- * During sign-in, users are asked to give permission to the application to access their profile and other permissions. The first person to give consent causes a service principal that represents the application to be added to the directory.
-* When users sign in to Microsoft online services like [Microsoft 365](https://products.office.com/)
- * When you subscribe to Microsoft 365 or begin a trial, one or more service principals are created in the directory representing the various services that are used to deliver all of the functionality associated with Microsoft 365.
- * Some Microsoft 365 services like SharePoint create service principals on an ongoing basis to allow secure communication between components including workflows.
-* When an admin adds an application from the app gallery (this will also create an underlying app object)
-* Add an application to use the [Azure AD Application Proxy](../app-proxy/application-proxy.md)
-* Connect an application for single sign on using SAML or password single sign-on (SSO)
-* Programmatically via the Microsoft Graph API or PowerShell
+- When users sign in to a third-party application integrated with Azure AD
+ - During sign-in, users are asked to give permission to the application to access their profile and other permissions. The first person to give consent causes a service principal that represents the application to be added to the directory.
+- When users sign in to Microsoft online services like [Microsoft 365](https://products.office.com/)
+ - When you subscribe to Microsoft 365 or begin a trial, one or more service principals are created in the directory representing the various services that are used to deliver all of the functionality associated with Microsoft 365.
+ - Some Microsoft 365 services like SharePoint create service principals on an ongoing basis to allow secure communication between components including workflows.
+- When an admin adds an application from the app gallery (this will also create an underlying app object)
+- Add an application to use the [Azure AD Application Proxy](../app-proxy/application-proxy.md)
+- Connect an application for SSO using SAML or password SSO
+- Programmatically via the Microsoft Graph API or PowerShell
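As a hedged sketch of the programmatic pathway, creating a service principal for an existing application object can look like the following Microsoft Graph request; the `appId` value shown is a placeholder.

```HTTP
// A minimal sketch: instantiate a service principal for an existing
// application object. The appId value below is a placeholder.
POST https://graph.microsoft.com/v1.0/servicePrincipals
Content-Type: application/json

{
  "appId": "00000000-0000-0000-0000-000000000000"
}
```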
## How are application objects and service principals related to each other?
An application has one application object in its home directory that is referenced by one or more service principals in each of the directories where it operates (including the application's home directory).
In the preceding diagram, Microsoft maintains two directories internally (shown on the left) that it uses to publish applications:
-* One for Microsoft Apps (Microsoft services directory)
-* One for pre-integrated third-party applications (App gallery directory)
+- One for Microsoft Apps (Microsoft services directory)
+- One for pre-integrated third-party applications (App gallery directory)
-Application publishers/vendors who integrate with Azure AD are required to have a publishing directory (shown on the right as "Some SaaS Directory").
+Application publishers/vendors who integrate with Azure AD are required to have a publishing directory (shown on the right as "Some software as a service (SaaS) Directory").
Applications that you add yourself (represented as **App (yours)** in the diagram) include:
-* Apps you developed (integrated with Azure AD)
-* Apps you connected for single-sign-on
-* Apps you published using the Azure AD application proxy
+- Apps you developed (integrated with Azure AD)
+- Apps you connected for SSO
+- Apps you published using the Azure AD application proxy
### Notes and exceptions
-* Not all service principals point back to an application object. When Azure AD was originally built the services provided to applications were more limited and the service principal was sufficient for establishing an application identity. The original service principal was closer in shape to the Windows Server Active Directory service account. For this reason, it's still possible to create service principals through different pathways, such as using Azure AD PowerShell, without first creating an application object. The Microsoft Graph API requires an application object before creating a service principal.
-* Not all of the information described above is currently exposed programmatically. The following are only available in the UI:
- * Claims transformation rules
- * Attribute mappings (User provisioning)
-* For more detailed information on the service principal and application objects, see the Microsoft Graph API reference documentation:
- * [Application](/graph/api/resources/application)
- * [Service Principal](/graph/api/resources/serviceprincipal)
+- Not all service principals point back to an application object. When Azure AD was originally built, the services provided to applications were more limited, and the service principal was sufficient for establishing an application identity. The original service principal was closer in shape to the Windows Server Active Directory service account. For this reason, it's still possible to create service principals through different pathways, such as using Azure AD PowerShell, without first creating an application object. The Microsoft Graph API requires an application object before creating a service principal.
+- Not all of the information described above is currently exposed programmatically. The following are only available in the UI:
+ - Claims transformation rules
+ - Attribute mappings (User provisioning)
+- For more detailed information on the service principal and application objects, see the Microsoft Graph API reference documentation:
+ - [Application](/graph/api/resources/application)
+ - [Service Principal](/graph/api/resources/serviceprincipal)
## Why do applications integrate with Azure AD?
-Applications are added to Azure AD to leverage one or more of the services it provides including:
+Applications are added to Azure AD to use one or more of the services it provides including:
-* Application authentication and authorization
-* User authentication and authorization
-* SSO using federation or password
-* User provisioning and synchronization
-* Role-based access control - Use the directory to define application roles to perform role-based authorization checks in an application
-* OAuth authorization services - Used by Microsoft 365 and other Microsoft applications to authorize access to APIs/resources
-* Application publishing and proxy - Publish an application from a private network to the internet
-* Directory schema extension attributes - [Extend the schema of service principal and user objects](active-directory-schema-extensions.md) to store additional data in Azure AD
+- Application authentication and authorization
+- User authentication and authorization
+- SSO using federation or password
+- User provisioning and synchronization
+- Role-based access control (RBAC) - Use the directory to define application roles to perform role-based authorization checks in an application
+- OAuth authorization services - Used by Microsoft 365 and other Microsoft applications to authorize access to APIs/resources
+- Application publishing and proxy - Publish an application from a private network to the internet
+- Directory schema extension attributes - [Extend the schema of service principal and user objects](active-directory-schema-extensions.md) to store additional data in Azure AD
## Who has permission to add applications to my Azure AD instance?
-While there are some tasks that only global administrators can do (such as adding applications from the app gallery and configuring an application to use the Application Proxy) by default all users in your directory have rights to register application objects that they are developing and discretion over which applications they share/give access to their organizational data through consent. If a person is the first user in your directory to sign in to an application and grant consent, that will create a service principal in your tenant; otherwise, the consent grant information will be stored on the existing service principal.
+While there are some tasks that only global administrators can do (such as adding applications from the app gallery and configuring an application to use the Application Proxy), by default all users in your directory have the right to register application objects that they're developing, and discretion over which applications they share and give access to their organizational data through consent. If a person is the first user in your directory to sign in to an application and grant consent, a service principal is created in your tenant. Otherwise, the consent grant information is stored on the existing service principal.
-Allowing users to register and consent to applications might initially sound concerning, but keep the following in mind:
+Allowing users to register and consent to applications might initially sound concerning, but keep the following reasons in mind:
-
-* Applications have been able to leverage Windows Server Active Directory for user authentication for many years without requiring the application to be registered or recorded in the directory. Now the organization will have improved visibility to exactly how many applications are using the directory and for what purpose.
-* Delegating these responsibilities to users negates the need for an admin-driven application registration and publishing process. With Active Directory Federation Services (ADFS) it was likely that an admin had to add an application as a relying party on behalf of their developers. Now developers can self-service.
-* Users signing in to applications using their organization accounts for business purposes is a good thing. If they subsequently leave the organization they will automatically lose access to their account in the application they were using.
-* Having a record of what data was shared with which application is a good thing. Data is more transportable than ever and it's useful to have a clear record of who shared what data with which applications.
-* API owners who use Azure AD for OAuth decide exactly what permissions users are able to grant to applications and which permissions require an admin to agree to. Only admins can consent to larger scopes and more significant permissions, while user consent is scoped to the users' own data and capabilities.
-* When a user adds or allows an application to access their data, the event can be audited so you can view the Audit Reports within the Azure portal to determine how an application was added to the directory.
+- Applications have been able to use Windows Server Active Directory for user authentication for many years without requiring the application to be registered or recorded in the directory. Now the organization will have improved visibility to exactly how many applications are using the directory and for what purpose.
+- Delegating these responsibilities to users negates the need for an admin-driven application registration and publishing process. With Active Directory Federation Services (ADFS) it was likely that an admin had to add an application as a relying party on behalf of their developers. Now developers can self-service.
+- Users signing in to applications using their organization accounts for business purposes is a good thing. If they subsequently leave the organization, they'll automatically lose access to their account in the application they were using.
+- Having a record of what data was shared with which application is a good thing. Data is more transportable than ever and it's useful to have a clear record of who shared what data with which applications.
+- API owners who use Azure AD for OAuth decide exactly what permissions users are able to grant to applications and which permissions require an admin to agree to. Only admins can consent to larger scopes and more significant permissions, while user consent is scoped to the users' own data and capabilities.
+- When a user adds or allows an application to access their data, the event can be audited so you can view the Audit Reports within the Azure portal to determine how an application was added to the directory.
If you still want to prevent users in your directory from registering applications and from signing in to applications without administrator approval, there are two settings that you can change to turn off those capabilities:
-* To change the user consent settings in your organization, see [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
+- To change the user consent settings in your organization, see [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
-* To prevent users from registering their own applications:
- 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) section under Azure Active Directory
+- To prevent users from registering their own applications:
+ 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) section under **App registrations**
2. Change **Users can register applications** to **No**.
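For admins who prefer scripting, the same tenant-wide control is also exposed through the Microsoft Graph authorization policy; a minimal sketch, assuming the caller holds the `Policy.ReadWrite.Authorization` permission:

```HTTP
// A sketch of turning off user app registration tenant-wide via Microsoft Graph.
// Assumes the caller has been granted Policy.ReadWrite.Authorization.
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
Content-Type: application/json

{
  "defaultUserRolePermissions": {
    "allowedToCreateApps": false
  }
}
```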
-> [!NOTE]
-> Microsoft itself uses the default configuration allowing users to register applications and only allows user consent for a very limited set of permissions.
- <!--Image references-->
-[apps_service_principals_directory]:../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
+
+[apps_service_principals_directory]: ../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Previously updated : 07/20/2022 Last updated : 11/02/2022
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
Previously updated : 09/27/2021 Last updated : 11/02/2022
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
Previously updated : 08/26/2022 Last updated : 11/02/2022
active-directory Howto Modify Supported Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md
Previously updated : 11/15/2020 Last updated : 11/02/2022 # Customer intent: As an application developer, I need to know how to modify which account types can sign in to or access my application or API.
To specify a different setting for the account types supported by an existing app registration:
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which the app is registered.
1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, then select your application.
-1. Now, specify who can use the application, sometimes referred to as the *sign-in audience*.
-
- | Supported account types | Description |
- |-|-|
- | **Accounts in this organizational directory only** | Select this option if you're building an application for use only by users (or guests) in *your* tenant.<br><br>Often called a *line-of-business* (LOB) application, this is a **single-tenant** application in the Microsoft identity platform. |
- | **Accounts in any organizational directory** | Select this option if you'd like users in *any* Azure AD tenant to be able to use your application. This option is appropriate if, for example, you're building a software-as-a-service (SaaS) application that you intend to provide to multiple organizations.<br><br>This is known as a **multi-tenant** application in the Microsoft identity platform. |
-1. Select **Save**.
+1. Under **Manage**, select **App registrations**, select your application, and then select **Manifest** to use the manifest editor.
+1. Download the manifest JSON file locally.
+1. Now, specify who can use the application, sometimes referred to as the *sign-in audience*. Find the *signInAudience* property in the manifest JSON file and set it to one of the following property values:
+
+ | Property value | Supported account types | Description |
+ |-|-|-|
+ | **AzureADMyOrg** | Accounts in this organizational directory only (Microsoft only - Single tenant) |All user and guest accounts in your directory can use your application or API. Use this option if your target audience is internal to your organization. |
+ | **AzureADMultipleOrgs** | Accounts in any organizational directory (Any Azure AD directory - Multitenant) | All users with a work or school account from Microsoft can use your application or API. This includes schools and businesses that use Office 365. Use this option if your target audience is business or educational customers and to enable multitenancy. |
+ | **AzureADandPersonalMicrosoftAccount** | Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (for example, Skype and Xbox) | All users with a work or school account, or a personal Microsoft account, can use your application or API. This includes schools and businesses that use Office 365, as well as personal accounts that are used to sign in to services like Xbox and Skype. Use this option to target the widest set of Microsoft identities and to enable multitenancy.|
+ | **PersonalMicrosoftAccount** | Personal Microsoft accounts only | Only personal accounts that are used to sign in to services like Xbox and Skype can use your application or API. Use this option if your target audience is consumers with personal Microsoft accounts.|
+1. Save your changes to the JSON file locally, then select **Upload** in the manifest editor to upload the updated manifest JSON file.
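The manifest editor isn't the only route; as a hedged sketch, the same property can be patched directly on the application object through Microsoft Graph. The object ID below is a placeholder.

```HTTP
// A sketch: update signInAudience on the application object directly.
// {object-id} is the application object's id (not its appId) and is a placeholder.
PATCH https://graph.microsoft.com/v1.0/applications/{object-id}
Content-Type: application/json

{
  "signInAudience": "AzureADMultipleOrgs"
}
```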
### Why changing to multi-tenant can fail
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
Previously updated : 03/30/2021 Last updated : 10/28/2022 # Customer intent: As a developer, I want to know whether there's a Microsoft Authentication Library (MSAL) available for the language/framework I'm using to build my application, and whether the library is GA or in preview.
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Previously updated : 09/27/2021 Last updated : 11/1/2022 # Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.

# Security tokens
-A centralized identity provider is especially useful for apps that have users located around the globe who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as [access tokens](developer-glossary.md#access-token), [refresh tokens](developer-glossary.md#refresh-token), and [ID tokens](developer-glossary.md#id-token). Security tokens allow a [client application](developer-glossary.md#client-application) to access protected resources on a [resource server](developer-glossary.md#resource-server).
+A centralized identity provider is especially useful for apps that have worldwide users who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as [access tokens](developer-glossary.md#access-token), [refresh tokens](developer-glossary.md#refresh-token), and [ID tokens](developer-glossary.md#id-token). Security tokens allow a [client application](developer-glossary.md#client-application) to access protected resources on a [resource server](developer-glossary.md#resource-server).
-**Access token**: An access token is a security token that's issued by an [authorization server](developer-glossary.md#authorization-server) as part of an [OAuth 2.0](active-directory-v2-protocols.md) flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Access tokens are validated by resources to grant access to a client app. To learn more about how the Microsoft identity platform issues access tokens, see [Access tokens](access-tokens.md).
+**Access token**: An access token is a security token issued by an [authorization server](developer-glossary.md#authorization-server) as part of an [OAuth 2.0](active-directory-v2-protocols.md) flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Access tokens are validated by resources to grant access to a client app. To learn more about how the Microsoft identity platform issues access tokens, see [Access tokens](access-tokens.md).
**Refresh token**: Because access tokens are valid for only a short period of time, authorization servers will sometimes issue a refresh token at the same time the access token is issued. The client application can then exchange this refresh token for a new access token when needed. To learn more about how the Microsoft identity platform uses refresh tokens to revoke permissions, see [Refresh tokens](refresh-tokens.md).

**ID token**: ID tokens are sent to the client application as part of an [OpenID Connect](v2-protocols-oidc.md) flow. They can be sent alongside or instead of an access token. ID tokens are used by the client to authenticate the user. To learn more about how the Microsoft identity platform issues ID tokens, see [ID tokens](id-tokens.md).
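To illustrate the refresh token exchange described above, here's a minimal sketch of a request to the v2.0 token endpoint. The tenant, client ID, scope, and token values are placeholders, and a confidential client would also send a client secret or assertion.

```HTTP
// A sketch of exchanging a refresh token for a new access token.
// All parameter values are placeholders.
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&grant_type=refresh_token
&refresh_token={refresh-token}
&scope=https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
```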
-> [!NOTE]
-> This article discusses security tokens used by the OAuth2 and OpenID Connect protocols. Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [Azure Active Directory SAML token reference](reference-saml-tokens.md).
+Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [Azure Active Directory SAML token reference](reference-saml-tokens.md).
## Validate security tokens

It's up to the app for which the token was generated, the web app that signed in the user, or the web API being called to validate the token. The token is signed by the authorization server with a private key. The authorization server publishes the corresponding public key. To validate a token, the app verifies the signature by using the authorization server public key to validate that the signature was created using the private key.
-Tokens are valid for only a limited amount of time. Usually, the authorization server provides a pair of tokens, such as:
+Tokens are valid for only a limited amount of time, so the authorization server frequently provides a pair of tokens:
* An access token, which accesses the application or protected resource.
* A refresh token, which is used to refresh the access token when the access token is close to expiring.
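As a sketch of where an app finds the public key material for signature validation, it can read the tenant's OpenID Connect metadata document and then fetch the keys that the document's `jwks_uri` value points to; `{tenant}` is a placeholder.

```HTTP
// Read the discovery document; its jwks_uri property locates the signing keys.
GET https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration

// Then fetch the public keys from the advertised jwks_uri, for example:
GET https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys
```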
A claim consists of key-value pairs that provide information such as the:
* Security Token Server that generated the token. * Date when the token was generated.
-* Subject (such as the user--except for daemons).
+* Subject (like the user, but not daemons).
* Audience, which is the app for which the token was generated.
-* App (the client) that asked for the token. In the case of web apps, this app might be the same as the audience.
+* App (the client) that asked for the token. For web apps, this app might be the same as the audience.
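For illustration only, a decoded token payload might carry that information in claims such as `iss` (the token service), `iat` (when the token was issued), `sub` (the subject), `aud` (the audience), and `azp` (the client that requested the token); every value below is a placeholder.

```json
{
  "iss": "https://login.microsoftonline.com/{tenant}/v2.0",
  "iat": 1667412000,
  "sub": "AAAAAAAAAAAAAAAAAAAAAIkzqFVrSaSaFHy782bbtaQ",
  "aud": "6731de76-14a6-49ae-97bc-6eba6914391e",
  "azp": "6731de76-14a6-49ae-97bc-6eba6914391e"
}
```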
To learn more about how the Microsoft identity platform implements tokens and claim information, see [Access tokens](access-tokens.md) and [ID tokens](id-tokens.md).
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
Previously updated : 10/13/2021 Last updated : 11/02/2022
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
- Title: Microsoft identity platform scopes, permissions, & consent
-description: Learn about authorization in the Microsoft identity platform endpoint, including scopes, permissions, and consent.
-Previously updated : 04/21/2022
-# Permissions and consent in the Microsoft identity platform
-
-Applications that integrate with the Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed. The implementation of the authorization model has been updated on the Microsoft identity platform. It changes how an app must interact with the Microsoft identity platform. This article covers the basic concepts of this authorization model, including scopes, permissions, and consent.
-
-## Scopes and permissions
-
-The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
-
-Here are some examples of Microsoft web-hosted resources:
-
-* Microsoft Graph: `https://graph.microsoft.com`
-* Microsoft 365 Mail API: `https://outlook.office.com`
-* Azure Key Vault: `https://vault.azure.net`
-
-The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
-
-* Read a user's calendar
-* Write to a user's calendar
-* Send mail as a user
-
-Because of these types of permission definitions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf.
-
-When a resource's functionality is chunked into small permission sets, third-party apps can be built to request only the permissions that they need to perform their function. Users and administrators can know what data the app can access. And they can be more confident that the app isn't behaving with malicious intent. Developers should always abide by the principle of least privilege, asking for only the permissions they need for their applications to function.
-
-In OAuth 2.0, these types of permission sets are called *scopes*. They're also often referred to as *permissions*. In the Microsoft identity platform, a permission is represented as a string value. An app requests the permissions it needs by specifying the permission in the `scope` query parameter. The Microsoft identity platform supports several well-defined [OpenID Connect scopes](#openid-connect-scopes) as well as resource-based permissions (each permission is indicated by appending the permission value to the resource's identifier or application ID URI). For example, the permission string `https://graph.microsoft.com/Calendars.Read` is used to request permission to read users' calendars in Microsoft Graph.
-
-An app most commonly requests these permissions by specifying the scopes in requests to the Microsoft identity platform authorize endpoint. However, some high-privilege permissions can be granted only through administrator consent. They can be requested or granted by using the [administrator consent endpoint](#admin-restricted-permissions). Keep reading to learn more.
-
-In requests to the authorization, token or consent endpoints for the Microsoft Identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
-
-## Permission types
-
-The Microsoft identity platform supports two types of permissions: *delegated permissions* and *application permissions*.
-
-* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests. The app is delegated with the permission to act as a signed-in user when it makes calls to the target resource.
-
- Some delegated permissions can be consented to by nonadministrators. But some high-privileged permissions require [administrator consent](#admin-restricted-permissions). To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure Active Directory (Azure AD)](../roles/permissions-reference.md).
-
-* **Application permissions** are used by apps that run without a signed-in user present, for example, apps that run as background services or daemons. Only [an administrator can consent to](#requesting-consent-for-an-entire-tenant) application permissions.
-
-_Effective permissions_ are the permissions that your app has when it makes requests to the target resource. It's important to understand the difference between the delegated permissions and application permissions that your app is granted, and the effective permissions your app is granted when it makes calls to the target resource.
-- For delegated permissions, the _effective permissions_ of your app are the least-privileged intersection of the delegated permissions the app has been granted (by consent) and the privileges of the currently signed-in user. Your app can never have more privileges than the signed-in user.
- Within organizations, the privileges of the signed-in user can be determined by policy or by membership in one or more administrator roles. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
-
- For example, assume your app has been granted the _User.ReadWrite.All_ delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app can update the profile of every user in the organization. However, if the signed-in user doesn't have an administrator role, your app can update only the profile of the signed-in user. It can't update the profiles of other users in the organization because the user that it has permission to act on behalf of doesn't have those privileges.
-- For application permissions, the _effective permissions_ of your app are the full level of privileges implied by the permission. For example, an app that has the _User.ReadWrite.All_ application permission can update the profile of every user in the organization.
-## OpenID Connect scopes
-
-The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes aren't supported.
-
-If you request the OpenID Connect scopes and a token, you'll get a token to call the [UserInfo endpoint](userinfo.md).
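For example, calling the UserInfo endpoint with such a token is a single GET; this is a sketch, and `{access-token}` is a placeholder.

```HTTP
// A sketch: call the UserInfo endpoint with an access token obtained by
// requesting the OpenID Connect scopes. {access-token} is a placeholder.
GET https://graph.microsoft.com/oidc/userinfo
Authorization: Bearer {access-token}
```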
-
-### openid
-
-If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission.
-
-By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication.
-
-### email
-
-The `email` scope can be used with the `openid` scope and any other scopes. It gives the app access to the user's primary email address in the form of the `email` claim.
-
-The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If your app uses the `email` scope, the app needs to be able to handle a case in which no `email` claim exists in the token.
-
-### profile
-
-The `profile` scope can be used with the `openid` scope and any other scope. It gives the app access to a large amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID.
-
-For a complete list of the `profile` claims available in the `id_tokens` parameter for a specific user, see the [`id_tokens` reference](id-tokens.md).
-
-### offline_access
-
-The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the **Maintain access to data you have given it access to** permission.
-
-When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire.
-
-> [!NOTE]
-> This permission currently appears on all consent pages, even for flows that don't provide a refresh token (such as the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This setup addresses scenarios where a client can begin within the implicit flow and then move to the code flow where a refresh token is expected.
-
-On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope to receive refresh tokens. If you don't, when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint.
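As a sketch of that redemption step with `offline_access` included (all values are placeholders, and a confidential client would also send `client_secret`), the `/token` response can then include a refresh token alongside the access token:

```HTTP
// A sketch: redeem an authorization code at the v2.0 token endpoint.
// Because offline_access is in scope, the response can include a refresh token.
// All parameter values are placeholders.
POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&grant_type=authorization_code
&code={authorization-code}
&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read
```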
-
-The access token is valid for a short time. It usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions.
-
-For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md).
-
-## Consent types
-
-Applications in the Microsoft identity platform rely on consent to gain access to necessary resources or APIs. There are several kinds of consent that your app may need to know about to be successful. If you're defining permissions, you'll also need to understand how your users will gain access to your app or API.
-
-### Static user consent
-
-In the static user consent scenario, the app must specify all the permissions it needs in its configuration in the Azure portal. If the user (or administrator, as appropriate) hasn't granted consent for this app, the Microsoft identity platform prompts the user to provide consent.
-
-Static permissions also enable administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization.
-
-While the static permissions of the app defined in the Azure portal keep the code nice and simple, they present some possible issues for developers:
-- The app needs to request all the permissions it would ever need upon the user's first sign-in. This can lead to a long list of permissions that discourages end users from approving the app's access on initial sign-in.
-- The app needs to know all of the resources it would ever access ahead of time. It is difficult to create apps that could access an arbitrary number of resources.
-### Incremental and dynamic user consent
-
-With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the app registration information in the Azure portal and request permissions incrementally instead. You can ask for a bare minimum set of permissions upfront and request more over time as the customer uses additional app features. To do so, you can specify the scopes your app needs at any time by including the new scopes in the `scope` parameter when [requesting an access token](#requesting-individual-user-consent) - without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. Incremental, or dynamic consent, only applies to delegated permissions and not to application permissions.
-
-Allowing an app to request permissions dynamically through the `scope` parameter gives developers full control over the user's experience. You can also front-load your consent experience and ask for all permissions in one initial authorization request. If your app requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the app over time.
-
-> [!IMPORTANT]
-> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent. The admin consent experience in the **App registrations** and **Enterprise applications** blades in the portal doesn't know about those dynamic permissions at consent time. We recommend that a developer list all the admin privileged permissions that are needed by the app in the portal. This enables tenant admins to consent on behalf of all their users in the portal, once. Users won't need to go through the consent experience for those permissions on sign in. The alternative is to use dynamic consent for those permissions. To grant admin consent, an individual admin signs in to the app, triggers a consent prompt for the appropriate permissions, and selects **consent for my entire org** in the consent dialogue.
-
-### Admin consent
-
-[Admin consent](#using-the-admin-consent-endpoint) is required when your app needs access to certain high-privilege permissions. Admin consent ensures that administrators have some additional controls before authorizing apps or users to access highly privileged data from the organization.
-
-[Admin consent done on behalf of an organization](#requesting-consent-for-an-entire-tenant) is highly recommended if your app has an enterprise audience. Admin consent done on behalf of an organization requires the static permissions to be registered for the app in the portal. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. The admin can consent to those permissions on behalf of all users in the org, once. The users will not need to go through the consent experience for those permissions when signing in to the app. This is easier for users and reduces the cycles required by the organization admin to set up the application.
-
-## Requesting individual user consent
-
-In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an app can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the app sends a request like the following example. (Line breaks are added for legibility.)
-
-```HTTP
-GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
-client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&response_type=code
-&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
-&response_mode=query
-&scope=
-https%3A%2F%2Fgraph.microsoft.com%2Fcalendars.read%20
-https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
-&state=12345
-```
-
-The `scope` parameter is a space-separated list of delegated permissions that the app is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the application ID URI). In the request example, the app needs permission to read the user's calendar and send mail as the user.
-
-After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions.
-
-At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
--
-When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application.
-
-## Requesting consent for an entire tenant
-
-When an organization purchases a license or subscription for an application, the organization often wants to proactively set up the application for use by all members of the organization. As part of this process, an administrator can grant consent for the application to act on behalf of any user in the tenant. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application.
-
-Admin consent done on behalf of an organization requires the static permissions registered for the app. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization.
-
-To request consent for delegated permissions for all users in a tenant, your app can use the [admin consent endpoint](#using-the-admin-consent-endpoint).
-
-Additionally, applications must use the admin consent endpoint to request application permissions.
-
-## Admin-restricted permissions
-
-Some high-privilege permissions in Microsoft resources can be set to *admin-restricted*. Here are some examples of these kinds of permissions:
-
-* Read all users' full profiles by using `User.Read.All`
-* Write data to an organization's directory by using `Directory.ReadWrite.All`
-* Read all groups in an organization's directory by using `Group.Read.All`
-
-> [!NOTE]
->In requests to the authorization, token or consent endpoints for the Microsoft Identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
-
-Although a consumer user might grant an application access to this kind of data, organizational users can't grant access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions.
-
-If your app requires scopes for admin-restricted permissions, an organization's administrator must consent to those scopes on behalf of the organization's users. To avoid displaying prompts to users that request consent for permissions they can't grant, your app can use the admin consent endpoint. The admin consent endpoint is covered in the next section.
-
-If the application requests high-privilege delegated permissions and an administrator grants these permissions through the admin consent endpoint, consent is granted for all users in the tenant.
-
-If the application requests application permissions and an administrator grants these permissions through the admin consent endpoint, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions are used only by daemon services and other noninteractive applications that run in the background.
-
-## Using the admin consent endpoint
-
-After you use the admin consent endpoint to grant admin consent, you're finished. Users don't need to take any further action. After admin consent is granted, users can get an access token through a typical auth flow. The resulting access token has the consented permissions.
-
-When a Global Administrator uses your application and is directed to the authorize endpoint, the Microsoft identity platform detects the user's role. It asks if the Global Administrator wants to consent on behalf of the entire tenant for the permissions you requested. You could instead use a dedicated admin consent endpoint to proactively request an administrator to grant permission on behalf of the entire tenant. This endpoint is also necessary for requesting application permissions. Application permissions can't be requested by using the authorize endpoint.
-
-If you follow these steps, your app can request permissions for all users in a tenant, including admin-restricted scopes. This operation is high privilege. Use the operation only if necessary for your scenario.
-
-To see a code sample that implements the steps, see the [admin-restricted scopes sample](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2) in GitHub.
-
-### Request the permissions in the app registration portal
-
-In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option.
-
-In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the app will request dynamically or incrementally.
-
-> [!NOTE]
->Application permissions can be requested only through the use of [`.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
-
-To configure the list of statically requested permissions for an application:
-
-1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations</a> quickstart experience.
-1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already.
-1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**.
-1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your app requires.
-1. Select **Add Permissions**.
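-
-If you manage app registrations with scripts instead of the portal, the same statically requested permissions can be set on the application object. The following is a minimal sketch using the Microsoft Graph PowerShell SDK; the permission GUIDs and the app object ID are placeholders, not real values:
-
-```powershell
-Connect-MgGraph -Scopes "Application.ReadWrite.All"
-
-# Microsoft Graph's well-known resource appId. The access GUIDs below are
-# placeholders; look up the real IDs for the permissions your app needs.
-$required = @(
-    @{
-        ResourceAppId  = "00000003-0000-0000-c000-000000000000"  # Microsoft Graph
-        ResourceAccess = @(
-            @{ Id = "<delegated-permission-guid>"; Type = "Scope" }  # delegated permission
-            @{ Id = "<app-role-guid>"; Type = "Role" }               # application permission
-        )
-    }
-)
-
-Update-MgApplication -ApplicationId "<app-object-id>" -RequiredResourceAccess $required
-```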
-
-### Recommended: Sign the user in to your app
-
-Typically, when you build an application that uses the admin consent endpoint, the app needs a page or view in which the admin can approve the app's permissions. This page can be:
-
-* Part of the app's sign-up flow.
-* Part of the app's settings.
-* A dedicated "connect" flow.
-
-In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work or school Microsoft account.
-
-When you sign the user in to your app, you can identify the organization to which the admin belongs before you ask them to approve the necessary permissions. Although this step isn't strictly necessary, it can help you create a more intuitive experience for your organizational users.
-
-To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
-
-### Request the permissions from a directory admin
-
-When you're ready to request permissions from your organization's admin, you can redirect the user to the Microsoft identity platform admin consent endpoint.
-
-```HTTP
-// Line breaks are for legibility only.
-GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?
-client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&state=12345
-&redirect_uri=http://localhost/myapp/permissions
-&scope=https://graph.microsoft.com/calendars.read https://graph.microsoft.com/mail.send
-```
--
-| Parameter | Condition | Description |
-|:--|:--|:--|
-| `tenant` | Required | The directory tenant that you want to request permission from. It can be provided in a GUID or friendly name format, or generically referenced with `organizations`. Don't use `common`, because personal accounts can't provide admin consent except in the context of a tenant. To ensure the best compatibility with personal accounts that manage tenants, use the tenant ID when possible. |
-| `client_id` | Required | The application (client) ID that the [Azure portal ΓÇô App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `redirect_uri` | Required |The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the app registration portal. |
-| `state` | Recommended | A value included in the request that will also be returned in the token response. It can be a string of any content you want. Use the state to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `.default` to request the statically configured list of permissions. |
--
-At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
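-
-As a minimal sketch, you could assemble and launch this request from PowerShell; the tenant, client ID, and redirect URI below reuse the placeholder values from the example above:
-
-```powershell
-# Build the admin consent URL and open it in the default browser.
-$tenant      = "contoso.onmicrosoft.com"  # placeholder tenant
-$clientId    = "6731de76-14a6-49ae-97bc-6eba6914391e"
-$redirectUri = [uri]::EscapeDataString("http://localhost/myapp/permissions")
-$scope       = [uri]::EscapeDataString("https://graph.microsoft.com/calendars.read https://graph.microsoft.com/mail.send")
-
-Start-Process "https://login.microsoftonline.com/$tenant/v2.0/adminconsent?client_id=$clientId&state=12345&redirect_uri=$redirectUri&scope=$scope"
-```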
-
-#### Successful response
-
-If the admin approves the permissions for your app, the successful response looks like this:
-
-```HTTP
-GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True
-```
-
-| Parameter | Description |
-| :--- | :--- |
-| `tenant` | The directory tenant that granted your application the permissions it requested, in GUID format. |
-| `state` | A value included in the request that also will be returned in the token response. It can be a string of any content you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `admin_consent` | Will be set to `True`. |
-
-#### Error response
-
-If the admin doesn't approve the permissions for your app, the failed response looks like this:
-
-```HTTP
-GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request
-```
-
-| Parameter | Description |
-| :--- | :--- |
-| `error` | An error code string that can be used to classify types of errors that occur. It can also be used to react to errors. |
-| `error_description` | A specific error message that can help a developer identify the root cause of an error. |
-
-After you've received a successful response from the admin consent endpoint, your app has gained the permissions it requested. Next, you can request a token for the resource you want.
-
-## Using permissions
-
-After the user consents to permissions for your app, your app can acquire access tokens that represent the app's permission to access a resource in some capacity. An access token can be used only for a single resource. But encoded inside the access token is every permission that your app has been granted for that resource. To acquire an access token, your app can make a request to the Microsoft identity platform token endpoint, like this:
-
-```HTTP
-// Line breaks are for legibility only.
-POST /common/oauth2/v2.0/token HTTP/1.1
-Host: login.microsoftonline.com
-Content-Type: application/x-www-form-urlencoded
-
-grant_type=authorization_code
-&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&scope=https://outlook.office.com/Mail.Read https://outlook.office.com/mail.send
-&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
-&redirect_uri=https://localhost/myapp
-&client_secret=zc53fwe80980293klaj9823   // NOTE: Only required for web apps
-```
-
-You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your app has the proper permission to do a specific task.
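-
-For example, here's a short sketch of attaching the token as a bearer header in PowerShell; `$accessToken` holds the `access_token` value from the response above, and the Outlook endpoint shown is illustrative:
-
-```powershell
-# Sketch: present the access token to the resource as a bearer token.
-$headers = @{ Authorization = "Bearer $accessToken" }
-Invoke-RestMethod -Uri "https://outlook.office.com/api/v2.0/me/messages" -Headers $headers
-```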
-
-For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).
-
-## The .default scope
-
-The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list).
-
-The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
-
-Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
-
-The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
-
-Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
-
-### .default when the user has already given consent
-
-The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
-
-If consent does exist, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
-
-For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
-
-#### Example 1: The user, or tenant admin, has granted permissions
-
-In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client.
-
-If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`.
-
-#### Example 2: The user hasn't granted permissions between the client and the resource
-
-In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
-
-When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph.
-
-#### Example 3: The user has consented, and the client requests more scopes
-
-In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope.
-
-The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they will be shown the consent prompt. (If not, they will be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`.
-
-### Using the .default scope with the client
-
-In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario.
-
-```http
-// Line breaks are for legibility only.
-
-GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
- ?response_type=token //Code or a hybrid flow is also possible here
- &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
- &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
- &redirect_uri=https%3A%2F%2Flocalhost
- &state=1234
-```
-
-This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
-
-This behavior accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform.
-
-### Client credentials grant flow and .default
-
-Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
-
-To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
-
-Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call, and wishes to obtain an access token for. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token.
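-
-As a minimal sketch (tenant, client ID, and secret are placeholders), such a request might look like this in PowerShell:
-
-```powershell
-# Sketch: request an app-only token; app roles require the /.default scope.
-$body = @{
-    grant_type    = "client_credentials"
-    client_id     = "<client-id>"      # placeholder
-    client_secret = "<client-secret>"  # placeholder
-    scope         = "https://graph.microsoft.com/.default"
-}
-$token = (Invoke-RestMethod -Method Post `
-    -Uri "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token" `
-    -Body $body).access_token
-```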
-
-To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
-
-### Trailing slash and .default
-
-Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
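-
-For instance, continuing the hedged client credentials sketch from the previous section, a token request for Azure Resource Manager would keep the doubled slash intact:
-
-```powershell
-# Sketch: "https://management.azure.com/" already ends in a slash, so
-# appending "/.default" produces a double slash that must be preserved.
-$body = @{
-    grant_type    = "client_credentials"
-    client_id     = "<client-id>"      # placeholder
-    client_secret = "<client-secret>"  # placeholder
-    scope         = "https://management.azure.com//.default"
-}
-Invoke-RestMethod -Method Post -Uri "https://login.microsoftonline.com/<tenant>/oauth2/v2.0/token" -Body $body
-```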
-
-## Troubleshooting permissions and consent
-
-For troubleshooting steps, see [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md).
-
-## Next steps
-
-* [ID tokens in the Microsoft identity platform](id-tokens.md)
-* [Access tokens in the Microsoft identity platform](access-tokens.md)
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
Previously updated : 12/06/2021 Last updated : 11/02/2022
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
+
+ Title: Convert local guests into Azure AD B2B guest accounts
+description: Learn how to convert local guests into Azure AD B2B guest accounts
+ Last updated : 11/03/2022
+# Convert local guests into Azure Active Directory B2B guest accounts
+
+Azure Active Directory B2B (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended: the bring-your-own-identity (BYOI) capabilities provided by Azure AD B2B offer better security, lower cost, and reduced complexity when compared to local account creation. Learn more about [securing external access to resources](https://learn.microsoft.com/azure/active-directory/fundamentals/secure-external-access-resources).
+
+If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamless as possible.
+
+## Identify external-facing applications
+
+Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application.
+The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about [provisioning B2B guests to on-premises applications](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises).
+
+All external-facing applications should have single sign-on (SSO) and provisioning integrated with Azure AD for the best end-user experience.
+
+## Identify local guest accounts
+
+Admins will need to identify which accounts should be migrated to Azure AD B2B. External identities in Active Directory should be easily identifiable, which can be done with an attribute-value pair. For example, setting ExtensionAttribute15 = `External` for all external users. If these users are being provisioned via Azure AD Connect or Cloud Sync, admins can optionally configure these synced external users to have the `UserType` attribute set to `Guest`. If these users are being provisioned as cloud-only accounts, admins can directly modify the users' attributes. What matters most is being able to identify the users you want to convert to B2B.
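+
+As a sketch, assuming your tenant tags external users with ExtensionAttribute15 as in the example above, you could list them with the Microsoft Graph PowerShell SDK (filtering on onPremisesExtensionAttributes is an advanced query, so it needs the eventual consistency level and a count variable):
+
+```powershell
+# Sketch: list users tagged as external via extensionAttribute15.
+Connect-MgGraph -Scopes "User.Read.All"
+Get-MgUser -Filter "onPremisesExtensionAttributes/extensionAttribute15 eq 'External'" `
+    -ConsistencyLevel eventual -CountVariable count -All |
+    Select-Object DisplayName, UserPrincipalName, Mail
+```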
+
+## Map local guest accounts to external identities
+
+Once you've identified which external user accounts you want to convert to Azure AD B2B, you need to identify the BYOI identities or external emails for each user. For example, admins will need to identify that the local account (v-Jeff@Contoso.com) is a user whose home identity/email address is Jeff@Fabrikam.com. How to identify the home identities is up to the organization, but some examples include:
+
+- Asking the external user's sponsor to provide the information.
+
+- Asking the external user to provide the information.
+
+- Referring to an internal database if this information is already known and stored by the organization.
+
+Once the mapping of each external local account to the BYOI identity is done, admins will need to add the external identity/email to the `user.mail` attribute on each local account.
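+
+A minimal sketch of that step, assuming a hypothetical `guest-mapping.csv` file with `localUpn` and `externalMail` columns:
+
+```powershell
+# Sketch: stamp the mapped external email onto each local account's mail attribute.
+Import-Csv .\guest-mapping.csv | ForEach-Object {
+    Update-MgUser -UserId $_.localUpn -Mail $_.externalMail
+}
+```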
+
+## End user communications
+
+External users should be notified that the migration will be taking place and when it will happen. Ensure you communicate the expectation that external users will stop using their existing password and, post-migration, will authenticate with their own home/corporate credentials going forward. Communications can include email campaigns, posters, and announcements.
+
+## Migrate local guest accounts to Azure AD B2B
+
+Once the local accounts have their `user.mail` attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local account](https://learn.microsoft.com/azure/active-directory/external-identities/invite-internal-users). This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the `user.mail` attribute. You've successfully migrated to Azure AD B2B.
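+
+Following the linked guidance, a hedged PowerShell sketch of inviting one local account (using the AzureAD module and the example user from earlier) might look like this:
+
+```powershell
+# Sketch: convert a local account to B2B by creating an invitation
+# bound to the existing user object; no email is sent here.
+$user = Get-AzureADUser -ObjectId "v-Jeff@Contoso.com"
+New-AzureADMSInvitation -InvitedUserEmailAddress $user.Mail `
+    -SendInvitationMessage $false `
+    -InviteRedirectUrl "https://myapps.microsoft.com" `
+    -InvitedUser $user
+```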
+
+## Post-migration considerations
+
+If local accounts for external users were being synced from on-premises, admins should take steps to reduce their on-premises footprint and use cloud-native B2B guest accounts moving forward. Some possible actions can include:
+
+- Transition existing local accounts for external users to Azure AD B2B and stop creating local accounts. Post-migration, admins should invite external users natively in Azure AD.
+
+- Randomize the passwords of existing local accounts for external users to ensure they can't authenticate locally to on-premises resources. This increases security by ensuring that authentication and user lifecycle are tied to the external user's home identity. See the sketch after this list.
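+
+A sketch of the password randomization step, assuming the on-premises ActiveDirectory module and an illustrative `$samAccountName` variable:
+
+```powershell
+# Sketch: reset a migrated local account to a random throwaway password.
+Add-Type -AssemblyName System.Web
+$random = [System.Web.Security.Membership]::GeneratePassword(32, 8)
+Set-ADAccountPassword -Identity $samAccountName -Reset `
+    -NewPassword (ConvertTo-SecureString $random -AsPlainText -Force)
+```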
+
+## Next steps
+
+See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
+
+1. [Determine your desired security posture for external access](1-secure-access-posture.md)
+1. [Discover your current state](2-secure-access-current-state.md)
+1. [Create a governance plan](3-secure-access-plan.md)
+1. [Use groups for security](4-secure-access-groups.md)
+1. [Transition to Azure AD B2B](5-secure-access-b2b.md)
+1. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
+1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
+1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
+1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+1. [Secure local guest accounts](10-secure-local-guest.md) (You're here)
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Previously updated : 09/13/2022 Last updated : 11/03/2022
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+10. [Secure local guest accounts](10-secure-local-guest.md)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
To learn how to verify or turn on this feature, see [Sync userPrincipalName upda
We recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There is no associated device attached to the AZUREADSSO computer account object, so you must perform the rollover manually.
-See FAQ [How do I roll over the Kerberos decryption key of the AZUREADSSO computer account?](how-to-connect-sso.md).
+See FAQ [How do I roll over the Kerberos decryption key of the AZUREADSSO computer account?](how-to-connect-sso-faq.yml#how-can-i-roll-over-the-kerberos-decryption-key-of-the--azureadsso--computer-account-).
### Monitoring and logging
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
- Previously updated : 06/10/2022+ Last updated : 11/02/2022
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Title: Disable how a how a user signs in
+ Title: Disable user sign-in for application
description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory
Last updated 09/06/2022
-#customer intent: As an admin, I want to disable the way a user signs in for an application so that no user can sign in to it in Azure Active Directory.
+#customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory.
# Disable user sign-in for an application

There may be situations while configuring or managing an application where you don't want tokens to be issued for an application. Or, you may want to preemptively block an application that you do not want your employees to try to access. To accomplish this, you can disable user sign-in for the application, which will prevent all tokens from being issued for that application.
-In this article, you will learn how to disable how a user signs in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you are looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
+In this article, you will learn how to prevent users from signing in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you are looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
## Prerequisites
-To disable how a user signs in, you need:
+To disable user sign-in, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Previously updated : 07/28/2022 Last updated : 11/02/2022
To recover your enterprise application with its previous configurations, first d
1. To view the recently deleted enterprise application, run the following command: ```powershell
- Get-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
- ```
+ Get-AzureADMSDeletedDirectoryObject -Id <id>
+ ```
+
+Replace `<id>` with the object ID of the service principal that you want to restore.
+
:::zone-end :::zone pivot="ms-powershell"
To recover your enterprise application with its previous configurations, first d
1. To view the recently deleted enterprise applications, run the following command: ```powershell
- Get-MgDirectoryDeletedItem -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Get-MgDirectoryDeletedItem -DirectoryObjectId <id>
```
+Replace `<id>` with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-graph"
To get the list of deleted enterprise applications in your tenant, run the follo
```http
GET https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal
```
-Record the ID of the enterprise application you want to restore.
+From the list of deleted service principals generated, record the ID of the enterprise application you want to restore.
+
+Alternatively, if you want to get the specific enterprise application that was deleted, fetch the deleted service principal and filter the results by the client's application ID (appId) property using the following syntax:
+
+`https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?$filter=appId eq '{appId}'`. Once you've retrieved the object ID of the deleted service principal, proceed to restore it.
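+
+If you prefer PowerShell, here's a hedged sketch of the same lookup with the Microsoft Graph PowerShell SDK (`<appId>` is a placeholder):
+
+```powershell
+# Sketch: find a deleted service principal by its application (client) ID.
+Invoke-MgGraphRequest -Method GET -OutputType PSObject `
+    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?`$filter=appId eq '<appId>'" |
+    Select-Object -ExpandProperty value |
+    Select-Object id, displayName
+```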
:::zone-end
Record the ID of the enterprise application you want to restore.
```powershell
- Restore-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Restore-AzureADMSDeletedDirectoryObject -Id <id>
```+
+Replace `<id>` with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-powershell"
Record the ID of the enterprise application you want to restore.
1. To restore the enterprise application, run the following command: ```powershell
- Restore-MgDirectoryObject -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Restore-MgDirectoryObject -DirectoryObjectId <id>
```+
+Replace `<id>` with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-graph"
Record the ID of the enterprise application you want to restore.
```http
POST https://graph.microsoft.com/v1.0/directory/deletedItems/{id}/restore
```
+
+Replace `{id}` with the object ID of the service principal that you want to restore.
+ :::zone-end ## Permanently delete an enterprise application
Record the ID of the enterprise application you want to restore.
To permanently delete a soft deleted enterprise application, run the following command: ```powershell
-Remove-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
+Remove-AzureADMSDeletedDirectoryObject -Id <id>
``` :::zone-end
Remove-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b
1. To permanently delete the soft deleted enterprise application, run the following command: ```powershell
- Remove-MgDirectoryDeletedItem -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Remove-MgDirectoryDeletedItem -DirectoryObjectId <id>
``` :::zone-end
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Title: What are flagged sign-ins in Azure Active Directory? description: Provides a general overview of flagged sign-ins in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
As an IT admin, when a user failed to sign-in, you want to resolve the issue as
This article gives you an overview of a feature that significantly improves the time it takes to resolve user sign-in problems by making the related problems easy to find. ---
-## What it is
+## What are flagged sign-ins?
Azure AD sign-in events are critical to understanding what went right or wrong with user sign-ins and the authentication configuration in a tenant. However, Azure AD processes over 8 billion authentications a day, which can result in so many sign-in events that admins may find it difficult to find the ones which matter. In other words, the sheer number of sign-in events can make the signal of users who need assistance get lost in the volume of a large number of events.
-Flagged Sign-ins is a feature intended to increase the signal to noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with and, for admins and help desk workers, make finding the right events faster and more efficient. Flagged Sign-in events contain the same information as other sign-in events contain with one addition: they also indicate that a user flagged the event for review by admins.
+Flagged Sign-ins is a feature intended to increase the signal-to-noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with. Admins and help desk workers also benefit from finding the right events more efficiently. Flagged Sign-in events contain the same information as other sign-in events, with one addition: they also indicate that a user flagged the event for review by admins.
-Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as "Flagged for Review" in the Azure AD Reporting blade for Sign-ins.
+Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as "Flagged for Review" in the Azure AD sign-ins log.
In summary, you can use flagged sign-ins to:
Flagged sign-ins gives you the ability to enable flagging when signing in using
### User: How to flag an error 1. The user receives an error during sign-in.
-2. The user clicks **View details** in the error page.
-3. In **Troubleshooting details**, click **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled.
+2. The user selects **View details** in the error page.
+3. In **Troubleshooting details**, select **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled.
4. Close the browser window.
-5. Open a new browser window (in the same browser application) and attempt the same sign in that failed.
+5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed.
6. Reproduce the sign-in error that was seen before.
-After enabling flagging, the same browser application and client must be used or the events will not be flagged.
+With flagging enabled, the same browser application and client must be used or the events won't be flagged.
### Admin: Find flagged events in reports
-1. In the Azure AD portal, select **Sign-in logs** in the left-hand pane.
-2. Click **Add Filters**.
-3. In the filter menu titled **Pick a field**, select **Flagged for review**, and click **Apply**.
-4. All events that were flagged by users are shown.
-5. If needed, apply additional filters to refine the event view.
-6. Select the event to review what happened.
+1. In the Azure AD portal, go to **Sign-in logs** > **Add Filters**.
+1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
+1. All events that were flagged by users are shown.
+1. If needed, apply more filters to refine the event view.
+1. Select the event to review what happened.
### Admin or Developer: Find flagged events using MS Graph
You can find flagged sign-ins with a filtered query using the sign-ins reporting
Show all Flagged Sign-ins: `https://graph.microsoft.com/beta/auditLogs/signIns?$filter=flaggedForReview eq true`
-Flagged Sign-ins query for specific user by UPN (e.g.: user@contoso.com):
+Flagged Sign-ins query for specific user by UPN (for example: user@contoso.com):
`https://graph.microsoft.com/beta/auditLogs/signIns?$filter=flaggedForReview eq true and userPrincipalName eq 'user@contoso.com'`

Flagged Sign-ins query for specific user and date greater than:
Any user signing into Azure AD via web page can use flag sign-ins for review. Me
Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#who-can-access-it)
-To flag sign-in failures, you don't need additional permissions.
+To flag sign-in failures, you don't need extra permissions.
## What you should know
While the names are similar, **flagged sign-ins** and **risky sign-ins** are dif
## Next steps - [Sign-in logs in Azure Active Directory](concept-sign-ins.md)-- [Sign in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
+- [Sign-in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
Title: What is Azure Active Directory monitoring? | Microsoft Docs description: Provides a general overview of Azure Active Directory monitoring. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant.
Currently, you can route the logs to:
## Licensing and prerequisites for Azure AD reporting and monitoring
-You'll need an Azure AD premium license to access the Azure AD sign in logs.
+You'll need an Azure AD premium license to access the Azure AD sign-in logs.
For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
Title: What are Azure Active Directory reports? | Microsoft Docs description: Provides a general overview of Azure Active Directory reports. -+ - Previously updated : 08/22/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment.
The [audit logs report](concept-audit-logs.md) provides you with records of syst
#### What Azure AD license do you need to access the audit logs report?
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
+The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
### Sign-ins report
To access the sign-ins activity report, your tenant must have an Azure AD Premiu
## Programmatic access
-In addition to the user interface, Azure AD also provides you with [programmatic access](concept-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from a variety of programming languages and tools.
+In addition to the user interface, Azure AD also provides you with [programmatic access](concept-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
## Next steps
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
Title: What are Service Health notifications in Azure Active Directory? | Microsoft Docs description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
Most of the built-in admin roles will have access to see these notifications. Fo
## What you should know
-Service Health events allow the addition of alerts and notifications to be applied to subscription events. Currently, this isn't yet supported with tenant events, but will be coming soon.
+Service Health events allow you to add alerts and notifications for subscription events. This feature isn't yet supported with tenant events, but will be coming soon.
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
Title: What is the sign-in diagnostic for Azure Active Directory? description: Provides a general overview of the sign-in diagnostic in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
This article gives you an overview of what the diagnostic is and how you can use
In Azure AD, sign-in attempts are controlled by: -- **Who** - The user performing a sign in attempt.
+- **Who** - The user performing a sign-in attempt.
- **How** - How a sign-in attempt was performed. For example, you can configure conditional access policies that enable administrators to configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign into the same account from an untrusted network.
To start and complete the diagnostic process, you need to:
The diagnostic allows two methods to find events to investigate: - Sign-in failures users have [flagged for assistance](overview-flagged-sign-ins.md). -- Search for specific events by the user and additional criteria.
+- Search for specific events by the user and other criteria.
-Flagged sign-ins are automatically presented in a list of up to 100. You can run a diagnostics on an event immediately by clicking it.
+Flagged sign-ins are automatically presented in a list of up to 100. You can run diagnostics on an event immediately by selecting it.
You can search a specific event by selecting the search tab even when flagged sign-ins are present. When searching for specific events, you can filter based on the following options:
You can change the content displayed in the columns based on your preference. Ex
### Take action
-For the selected sign-in event, you get a diagnostic results. Read through the results to identify action that you can take to fix the problem. These results add recommended steps and shed light on relevant information such as the related policies, sign-in details, and supportive documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
+For the selected sign-in event, you get a diagnostic result. Read through the results to identify action that you can take to fix the problem. These results add recommended steps and shed light on relevant information such as the related policies, sign-in details, and supportive documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
![Screenshot showing the diagnostic results.](./media/overview-sign-in-diagnostics/diagnostic-results.png)
For the selected sign-in event, you get a diagnostic results. Read through the r
## How to access it
-To use the diagnostic, you must be signed into the tenant as a global admin or a global reader. If you do not have this level of access, use [Privileged Identity Management, PIM](../privileged-identity-management/pim-resource-roles-activate-your-roles.md), to elevate your access to global admin/reader within the tenant. This will allow you to have temporary access to the diagnostic.
+To use the diagnostic, you must be signed into the tenant as a Global Administrator or a Global Reader.
With the correct access level, you can find the diagnostic in various places:
With the correct access level, you can find the diagnostic in various places:
1. Open **Azure Active Directory AAD or Azure AD Conditional Access**.
-2. From the main menu, click **Diagnose & Solve Problems**.
-
-3. Under the **Troubleshooters**, there is a sign-in diagnostic tile.
-
-4. Click **Troubleshoot** button.
-
-
+1. From the main menu, select **Diagnose & Solve Problems**.
+1. From the **Troubleshooters** section, select the **Troubleshoot** button from the sign-in diagnostic tile.
**Option B**: Sign-in Events
With the correct access level, you can find the diagnostic in various places:
2. On the main menu, in the **Monitoring** section, select **Sign-ins**.
-3. From the list of sign-ins, select a sign in with a **Failure** status. You can filter your list by Status to make it easier to find failed sign-ins.
+3. From the list of sign-ins, select a sign-in with a **Failure** status. You can filter your list by Status to make it easier to find failed sign-ins.
-4. The **Activity Details: Sign-ins** tab will open for the selected sign-in. Click on dotted icon to view more menu icons. Select the **Troubleshooting and support** tab.
+4. The **Activity Details: Sign-ins** tab will open for the selected sign-in. Select the dotted icon to view more menu icons. Select the **Troubleshooting and support** tab.
-5. Click the link to **Launch the Sign-in Diagnostic**.
+5. Select the link to **Launch the Sign-in Diagnostic**.
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD description: Describes how to plan and execute implementation of reporting and monitoring. -+ Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to monitor logs and report on access to increase security
Your Azure Active Directory (Azure AD) reporting and monitoring solution depends
### Benefits of Azure AD reporting and monitoring
-Azure AD reporting provides a comprehensive view and logs of Azure AD activity in your environment, including sign in events, audit events, and changes to your directory.
+Azure AD reporting provides a comprehensive view and logs of Azure AD activity in your environment, including sign-in events, audit events, and changes to your directory.
The provided data enables you to:
With Azure AD monitoring, you can route logs to:
### Licensing and prerequisites for Azure AD reporting and monitoring
-You'll need an Azure AD premium license to access the Azure AD sign in logs.
+You'll need an Azure AD premium license to access the Azure AD sign-in logs.
For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
In this project, you'll define the audiences that will consume and monitor repor
### Engage the right stakeholders
-When technology projects fail, they typically do so due to mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md). Also ensure that stakeholder roles in the project are well understood by documenting the stakeholders and their project input and accountabilities.
+When technology projects fail, they typically do so due to mismatched expectations on effect, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md). Also ensure that stakeholder roles in the project are well understood by documenting the stakeholders and their project input and accountabilities.
### Plan communications
Reporting and monitoring are used to meet your business requirements, gain insig
|Area |Description | |-|-|
-|Retention| **Log retention of more than 30 days**. Due to legal or business requirements it is required to store audit logs and sign in logs of Azure AD longer than 30 days. |
+|Retention| **Log retention of more than 30 days**. Due to legal or business requirements it's required to store audit logs and sign-in logs of Azure AD longer than 30 days. |
|Analytics| **The logs need to be searchable**. The stored logs need to be searchable with analytic tools. |
| Operational Insights| **Insights for various teams**. The need to give access for different users to gain operational insights such as application usage, sign-in errors, self-service usage, trends, etc. |
| Security Insights| **Insights for various teams**. The need to give access for different users to gain operational insights such as application usage, sign-in errors, self-service usage, trends, etc. |
-| Integration in SIEM systems | **SIEM integration**. The need to integrate and stream Azure AD sign in logs and audit logs to existing SIEM systems. |
+| Integration in SIEM systems | **SIEM integration**. The need to integrate and stream Azure AD sign-in logs and audit logs to existing SIEM systems. |
### Choose a monitoring solution architecture
Learn how to [route data to your storage account](./quickstart-azure-monitor-rou
#### Send logs to Azure Monitor logs
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) consolidate monitoring data from different sources. It also provides a query language and analytics engine that gives you insights into the operation of your applications and use of resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor, and alert on collected data. Use this method when you don't have an existing SIEM solution that you want to send your data to directly but do want queries and analysis. Once your data is in Azure Monitor logs, you can then send it to event hub and from there to a SIEM if you want to.
+[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) consolidate monitoring data from different sources. It also provides a query language and analytics engine that gives you insights into the operation of your applications and use of resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor, and alert on collected data. Use this method when you don't have an existing SIEM solution that you want to send your data to directly but do want queries and analysis. Once your data is in Azure Monitor logs, you can then send it to event hub, and from there to a SIEM if you want to.
Learn how to [send data to Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
-You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign in and audit events.
+You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-in and audit events.
Learn how to [install and use log analytics views for Azure AD activity logs](./howto-install-use-log-analytics-views.md).
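As an illustrative sketch, once the logs are flowing into a Log Analytics workspace you can also query them from PowerShell with the Az.OperationalInsights module (the workspace GUID is a placeholder):

```powershell
# Sketch: count failed sign-ins per user over the last day.
$kql = @"
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize failures = count() by UserPrincipalName
| top 10 by failures
"@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $kql
```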
Depending on the decisions you have made earlier using the design guidance above
Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md)
-Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+Consider implementing [Azure role-based access control](../../role-based-access-control/overview.md)
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
Previously updated : 08/26/2022-- Last updated : 11/01/2022++
This section provides you with the steps to get information about your sign-in u
5. In the **Request query address bar**, type `https://graph.microsoft.com/beta/auditLogs/signIns?$top=100&$filter=userDisplayName eq 'Isabella Simonsen'`
-6. Click **Run query**.
+6. Select **Run query**.
Review the outcome of your query.
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
Previously updated : 08/26/2021-- Last updated : 11/01/2022++
The goal of this step is to create a record of a failed sign-in in the Azure AD
This section provides you with the steps to analyze a failed sign-in: -- **Filter sign-ins**: Remove all records that are not relevant to your analysis. For example, set a filter to display only the records of a specific user.-- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also lookup the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
+- **Filter sign-ins**: Remove all records that aren't relevant to your analysis. For example, set a filter to display only the records of a specific user.
+- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
**To review the failed sign-in:**
This section provides you with the steps to analyze a failed sign-in:
2. To list only records for Isabella Simonsen:
- a. In the toolbar, click **Add filters**.
+ a. In the toolbar, select **Add filters**.
![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
- b. In the **Pick a field** list, select **User**, and then click **Apply**.
+ b. In the **Pick a field** list, select **User**, and then select **Apply**.
- c. In the **Username** textbox, type **Isabella Simonsen**, and then click **Apply**.
+ c. In the **Username** textbox, type **Isabella Simonsen**, and then select **Apply**.
- d. In the toolbar, click **Refresh**.
+ d. In the toolbar, select **Refresh**.
-3. To analyze the issue, click **Troubleshooting and support**.
+3. To analyze the issue, select **Troubleshooting and support**.
![Add filter](./media/quickstart-analyze-sign-in/troubleshooting-and-support.png)
This section provides you with the steps to analyze a failed sign-in:
![Sign-in error code](./media/quickstart-analyze-sign-in/sign-in-error-code.png)
-5. Paste the error code into the textbox of the [sign-in error lookup tool](https://login.microsoftonline.com/error), and then click **Submit**.
+5. Paste the error code into the textbox of the [sign-in error lookup tool](https://login.microsoftonline.com/error), and then select **Submit**.
Review the outcome of the tool and determine whether it provides you with additional information. ![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
-## Additional tests
+## More tests
Now, that you know how to find an entry in the sign-in log by name, you should also try to find the record using the following filters:
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
Title: Tutorial - Archive directory logs to a storage account | Microsoft Docs description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to a storage account -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
To use this feature, you need:
![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-5. Once in the **Diagnostic setting** pane if you are creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
+5. Once in the **Diagnostic setting** pane, if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
6. Under **Destination Details** Select the **Archive to a storage account** check box.
-7. Select the Azure subscription in the **Subscription** drop down menu and storage account in the **Storage account** drop down menu that you want to route the logs to.
+7. Select the Azure subscription in the **Subscription** menu and storage account in the **Storage account** menu that you want to route the logs to.
8. Select all the relevant categories in under **Category details**:
active-directory Quickstart Filter Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-filter-audit-log.md
- Previously updated : 08/26/2022
+ Last updated : 11/01/2022
This section provides you with the steps to filter your audit log.
2. To list only records for Isabella Simonsen:
- a. In the toolbar, click **Add filters**.
+ a. In the toolbar, select **Add filters**.
![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
- b. In the **Pick a field** list, select **Target**, and then click **Apply**
+ b. In the **Pick a field** list, select **Target**, and then select **Apply**.
- c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then click **Apply**.
+ c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then select **Apply**.
-3. Click the filtered item.
+3. Select the filtered item.
![Filtered items](./media/quickstart-filter-audit-log/audit-log-list.png)
active-directory Opentext Fax Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-fax-tutorial.md
- Previously updated : 10/10/2022
+ Last updated : 10/28/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure OpenText XM Fax and XM SendSecure SSO
-To configure single sign-on on **OpenText XM Fax and XM SendSecure** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [OpenText XM Fax and XM SendSecure support team](mailto:support@opentext.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Log in to your XM Cloud account using a web browser.
+
+1. From the main menu of your Web Portal, select **enterprise_account -> Enterprise Settings**.
+
+1. Go to the **Single Sign-On** section and select **SAML 2.0**.
+
+1. Provide the following required information:
+
+ a. In the **Sign In URL** textbox, paste the **Login URL** value that you copied from the Azure portal.
+
+ b. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste its content into the **X.509 Signing Certificate** textbox.
+
+ c. Select **Save**.
+
+> [!NOTE]
+> Keep the fail-safe URL (`https://login.[domain]/[account]/no-sso`) provided at the bottom of the SSO configuration section. It allows you to log in with your XM Cloud account credentials if you lock yourself out after SSO activation.
### Create OpenText XM Fax and XM SendSecure test user
advisor Advisor How To Performance Resize High Usage Vm Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-performance-resize-high-usage-vm-recommendations.md
+ Title: Improve the performance of highly used VMs using Azure Advisor
+description: Use Azure Advisor to improve the performance of your Azure virtual machines with consistent high utilization.
+ Last updated : 10/27/2022
+# Improve the performance of highly used VMs using Azure Advisor
+
+Azure Advisor helps you improve the speed and responsiveness of your business-critical applications. You can get performance recommendations from the **Performance** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Performance** tab.
+
+## Optimize virtual machine (VM) performance by right-sizing highly utilized instances
+
+You can improve the quality of your workload and prevent many performance-related issues (such as throttling and high latency) by regularly assessing your [performance efficiency](/azure/architecture/framework/scalability/overview). Performance efficiency is defined by the [Azure Well-Architected Framework](/azure/architecture/framework/) as the ability of your workload to adapt to changes in load. Performance efficiency is one of the five pillars of architectural excellence on Azure.
+
+Unless high utilization is by design, we recommend keeping your application's usage well below your virtual machine's size limits, so it can operate smoothly and easily accommodate changes in load.
+
+Advisor aggregates various metrics over a minimum of 7 days, identifies virtual machines with consistent high utilization across those metrics, and finds better sizes (SKUs) for improved performance. Finally, Advisor examines capacity signals in Azure to frequently refresh the recommended SKUs, ensuring that they are available for deployment in the region.
+
+### Resize SKU recommendations
+
+Advisor recommends resizing virtual machines when utilization is consistently high (above predefined thresholds) relative to the running virtual machine's size limits.
+
+- The recommendation algorithm evaluates **CPU**, **Memory**, **VM Cached IOPS Consumed Percentage**, and **VM Uncached Bandwidth Consumed Percentage** usage metrics
+- The observation period is the past 7 days from the day of the recommendation
+- Metrics are sampled every 30 seconds, aggregated to 1 minute, and then further aggregated to 30 minutes (taking the average of the 1-minute averages)
+- A SKU upgrade for virtual machines is decided given the following criteria:
+  - For each metric, we create a new feature from the P50 (median) of its 30-minute averages aggregated over the observation period. A virtual machine is identified as a candidate for a resize if:
+    * _Both_ the `CPU` and `Memory` features are >= *90%* of the current SKU's limits
+    * Otherwise, _either_
+      * The `VM Cached IOPS` feature is >= *95%* of the current SKU's limits, and the current SKU's max local disk IOPS is >= its network disk IOPS, _or_
+      * The `VM Uncached Bandwidth` feature is >= *95%* of the current SKU's limits, and the current SKU's max network disk throttle limits are >= its local disk throttle units
+- We ensure the following:
+ - The current workload utilization will be better on the new SKU, given that it has higher limits and better performance guarantees
+ - The new SKU has the same Accelerated Networking and Premium Storage capabilities
+ - The new SKU is supported and ready for deployment in the same region as the running virtual machine
++
+In some cases, recommendations can't be adopted or might not be applicable. Common scenarios include the following (there may be other cases):
+- The virtual machine is short-lived
+- The current virtual machine has already been provisioned to accommodate upcoming traffic
+- Specific testing being done using the current SKU, even if not utilized efficiently
+- There's a need to keep the virtual machine as-is
+
+In such cases, simply use the Dismiss/Postpone options associated with the recommendation.
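If you'd rather review these recommendations from the command line, a sketch with the Azure CLI follows; the recommendation resource ID is a placeholder taken from the list output, and the 30-day postpone window is only an example:

```azurecli
# List Advisor performance recommendations, including VM right-size suggestions.
az advisor recommendation list --category Performance --output table

# Postpone (disable) a recommendation you can't adopt right now.
az advisor recommendation disable --ids <recommendation-resource-id> --days 30
```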
+
+We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
+
+## Next steps
+
+To learn more about Advisor recommendations and best practices, see:
+* [Get started with Advisor](advisor-get-started.md)
+* [Introduction to Advisor](advisor-overview.md)
+* [Advisor score](azure-advisor-score.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
+* [Advisor security recommendations](advisor-security-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
+* [The Microsoft Azure Well-Architected Framework](/azure/architecture/framework/)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Ultra disk is available in the same region as your database workload. Ultra disk
Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal).
+### Upgrade the size of your virtual machines close to resource exhaustion
+
+We analyzed data for the past 7 days and identified virtual machines (VMs) with high utilization across different metrics (such as CPU, memory, and VM IO). Those VMs may experience performance issues because they're nearing or at their SKU's limits. Consider upgrading their SKU to improve performance.
+
+Learn more about [Virtual machine - Improve the performance of highly used VMs using Azure Advisor](https://aka.ms/aa_resizehighusagevmrec_learnmore)
+
## Kubernetes

### Unsupported Kubernetes version is detected
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
If you navigated away from the **Deployment is in progress** page, the following
1. In the left pane, select **Outputs**.
1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
- * **appDeploymentTemplateYamlEncoded**
* **cmdToConnectToCluster**
+
- These values will be used later in this article. Note that several other useful commands are listed in the outputs.
+These values will be used later in this article. Note that several other useful commands are listed in the outputs.
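If you prefer the CLI to the portal, a sketch like the following lists the same deployment outputs; the deployment name and resource group are placeholders for your values:

```azurecli
# Show the outputs (including cmdToConnectToCluster) of the offer deployment.
az deployment group show \
  --resource-group <resource-group> \
  --name <deployment-name> \
  --query properties.outputs
```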
## Create an Azure SQL Database
The directories *java*, *resources*, and *webapp* contain the source code of the
In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
-In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
- In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
-### Acquire necessary variables from AKS deployment
-
-After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to a generated ACR instance. Before we get started with the application, we need to extract the namespace configured for AKS.
-
-1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
-
- ```bash
- echo <appDeploymentTemplateYamlEncoded> | base64 -d
- ```
-
-1. Save aside the `metadata.namespace` from this yaml output for later use in this article.
- ### Build the project
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
-
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code. This allows the sample source code to be easier to use in a wider variety of contexts.
+Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code. This allows the sample source code to be easier to use in a wider variety of contexts. These variables are also used to populate the `JavaEECafeDB` properties in *server.xml* and in the yaml files located in *src/main/aks*.
```bash
cd <path-to-your-repo>/java-app

export REGISTRY_NAME=<Azure_Container_Registry_Name>
export USER_NAME=<Azure_Container_Registry_Username>
export PASSWORD=<Azure_Container_Registry_Password>
export DB_SERVER_NAME=<Server name>.database.windows.net
-export DB_PORT_NUMBER=1433
export DB_NAME=<Database name>
export DB_USER=<Server admin login>@<Server name>
export DB_PASSWORD=<Server admin password>
-export NAMESPACE=<metadata.namespace>

mvn clean install
```
-### Test your project locally
+### (Optional) Test your project locally
-Use the `liberty:devc` command to run and test the project locally before deploying to Azure. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
-In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`.
+Use your local IDE, or the `liberty:run` command, to run and test the project locally before deploying to Azure.
-1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
+1. Start your local Docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system. `liberty:run` will also use the environment variables defined in the previous step.
-1. Start the application in `liberty:devc` mode
+1. Start the application in `liberty:run` mode
```bash
cd <path-to-your-repo>/java-app
-
- # If you're running with Open Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
-
- # If you're running with WebSphere Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ mvn liberty:run
```

1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
-1. Press `Ctrl+C` to stop `liberty:devc` mode.
+1. Press `Ctrl+C` to stop `liberty:run` mode.
### Build image for AKS deployment
-After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
```bash
-cd <path-to-your-repo>/java-app
-
-# Fetch maven artifactId as image name, maven build version as image version
-export IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
-export IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
cd <path-to-your-repo>/java-app/target

# If you are running with Open Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
+docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
# If you are running with WebSphere Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
+docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp .
```

### Upload image to ACR
docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
Now, we upload the built image to the ACR created in the offer.

```bash
-docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker tag javaee-cafe:v1 ${LOGIN_SERVER}/javaee-cafe:v1
docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
-docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker push ${LOGIN_SERVER}/javaee-cafe:v1
```

### Deploy and test the application
The following steps deploy and test the application.
Wait until all pods are restarted successfully using the following command.

```bash
- kubectl get pods -n $NAMESPACE --watch
+ kubectl get pods --watch
```

You should see output similar to the following to indicate that all the pods are running.
The following steps deploy and test the application.
1. Get the endpoint of the deployed service:

   ```bash
- kubectl get service -n $NAMESPACE
+ kubectl get service
   ```

1. Go to `http://EXTERNAL-IP` to test the application.
+
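If you want to script this check instead of using a browser, a sketch like the following works; `<service-name>` is a placeholder for the name shown by `kubectl get service`:

```bash
# Look up the external IP of the service and probe the application.
EXTERNAL_IP=$(kubectl get service <service-name> \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -I "http://${EXTERNAL_IP}/"
```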
## Clean up resources
az group delete --name <db-resource-group> --yes --no-wait
* [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
* [Open Liberty](https://openliberty.io/)
* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
-* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
+* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks.
- Previously updated : 10/20/2022
+ Last updated : 11/1/2022
# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
-This article shows you how to create a connection to an AKS node.
+This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster.
## Before you begin
When done, `exit` the SSH session, stop any port forwarding, and then `exit` the
kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
```
+## Update SSH key on an existing AKS cluster (preview)
+
+### Prerequisites
+* Before you start, ensure the Azure CLI is installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The aks-preview extension version 0.5.111 or later. To learn how to install an Azure extension, see [How to install extensions][how-to-install-azure-extensions].
+
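For reference, installing or updating the extension typically looks like this:

```azurecli
# Install the aks-preview extension, or update it if it's already installed.
az extension add --name aks-preview
az extension update --name aks-preview
```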
+> [!NOTE]
+> Updating the SSH key is supported on AKS clusters that use Azure virtual machine scale sets.
+
+Use the [az aks update][az-aks-update] command to update the SSH key on the cluster. This operation updates the key on all node pools. You can specify either a key value or a key file using the `--ssh-key-value` argument.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
+```
+
+Examples:
+
+In the following example, you specify the new SSH key value for the `--ssh-key-value` argument.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
+```
+
+In the following example, you specify an SSH key file.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value .ssh/id_rsa.pub
+```
+
+> [!IMPORTANT]
+> During this operation, all virtual machine scale set instances are upgraded and re-imaged to use the new SSH key.
+
## Next steps

If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
[aks-windows-rdp]: rdp.md
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
-[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
+[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[how-to-install-azure-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
+
+
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS)
- Previously updated : 8/27/2022
+ Last updated : 11/01/2022
metadata:
## Clean up
-To remove an Azure AD pod-managed identity from your cluster, remove the sample application and the pod-managed identity from the cluster. Then remove the identity.
+To remove an Azure AD pod-managed identity from your cluster, remove the sample application and the pod-managed identity from the cluster. Then remove the identity and the role assignment of the cluster identity.
```bash
kubectl delete pod demo --namespace $POD_IDENTITY_NAMESPACE
az aks pod-identity delete --name ${POD_IDENTITY_NAME} --namespace ${POD_IDENTIT
az identity delete -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME}
```
+```azurecli
+az role assignment delete --role "Managed Identity Operator" --assignee "$IDENTITY_CLIENT_ID" --scope "$IDENTITY_RESOURCE_ID"
+```
+
## Next steps

For more information on managed identities, see [Managed identities for Azure resources][az-managed-identities].
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS)
- Previously updated : 10/03/2022
+ Last updated : 11/01/2022

# Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
-This article shows you how to enable encryption at rest for your Kubernetes data in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to:
+This article shows you how to enable encryption at rest for your Kubernetes secrets in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to:
* Use a key in Key Vault for etcd encryption.
* Bring your own keys.
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

> [!WARNING]
-> KMS only supports Konnectivity and Vnet Integration.
+> KMS only supports Konnectivity and [API Server Vnet Integration][api-server-vnet-integration].
> You can use `kubectl get po -n kube-system` to verify that a konnectivity-agent-xxx pod is running; if one is, the AKS cluster is using Konnectivity. When using VNet integration, run `az aks show -g <resource-group> -n <cluster-name>` to verify that the setting `enableVnetIntegration` is set to **true**.

## Limitations
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
[Enable-KMS-with-private-key-vault]: use-kms-etcd-encryption.md#enable-kms-with-private-key-vault
[changing-associated-key-vault-mode]: use-kms-etcd-encryption.md#update-key-vault-mode
[install-azure-cli]: /cli/azure/install-azure-cli
+[api-server-vnet-integration]: api-server-vnet-integration.md
app-service Configure Vnet Integration Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-routing.md
az resource update --resource-group <group-name> --name <app-name> --resource-ty
We recommend that you use the site property to enable routing image pull traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_PULL_IMAGE_OVER_VNET` app setting with the value `true` can still be used, and you can enable routing through the virtual network with either setting.
-### Content storage
+### Content share
-Routing content storage over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allow traffic to port 443 and 445.
+Routing content share over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445.
```azurecli-interactive
-az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetContentStorageEnabled [true|false]
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetContentShareEnabled [true|false]
```
-We recommend that you use the site property to enable content storage traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting.
+We recommend that you use the site property to enable content share traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting.
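For example, a sketch of enabling the legacy app setting with the Azure CLI (the group and app names are placeholders):

```azurecli
# Route content share traffic through the virtual network using the legacy app setting.
az webapp config appsettings set \
  --resource-group <group-name> \
  --name <app-name> \
  --settings WEBSITE_CONTENTOVERVNET=1
```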
## Next steps
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Learn [how to configure application routing](./configure-vnet-integration-routin
When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration.
-##### Content storage
+##### Content share
-Bringing your own storage for content in often used in Functions where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
+Bringing your own storage for content is often used in Functions, where [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
-To route content storage traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content storage routing](./configure-vnet-integration-routing.md#content-storage).
+To route content share traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content share routing](./configure-vnet-integration-routing.md#content-share).
In addition to configuring the routing, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic to ports 443 and 445.
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
- Previously updated : 05/09/2022
+ Last updated : 11/02/2022
Today, you can deploy your critical workloads securely behind Application Gateway, gaining the flexibility of Layer 7 load balancing features. Access to the backend workloads is possible in two ways:

- Public IP address - your workloads are accessible over the Internet.
-- Private IP address - your workloads are accessible via a private IP address, but within the same VNet as the Application Gateway.
+- Private IP address - your workloads are accessible privately via your virtual network / connected networks.
Private Link for Application Gateway allows you to connect workloads over a private connection spanning across VNets and subscriptions. When configured, a private endpoint will be placed into a defined virtual network's subnet, providing a private IP address for clients looking to communicate to the gateway. For a list of other PaaS services that support Private Link functionality, see [What is Azure Private Link?](../private-link/private-link-overview.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
-| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
+| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |

The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
The following Azure built-in roles are required for different aspects of managin
* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed.
* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.
-* To select a resource group from the drop-down list when using the **Generate script** method, you must have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access).
+* To select a resource group from the drop-down list when using the **Generate script** method, you must additionally have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access), in addition to the onboarding permissions listed above.
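As an illustration, granting the onboarding role at resource group scope might look like the following sketch; the assignee and names are placeholders:

```azurecli
# Grant the Azure Connected Machine Onboarding role on a resource group.
az role assignment create \
  --assignee <user-or-service-principal-id> \
  --role "Azure Connected Machine Onboarding" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```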
## Azure subscription and service limits
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
public class Function {
```
-> [!NOTE]
-> The value of AppSetting FUNCTIONS_EXTENSION_VERSION should be ~2 or ~3 for an optimized cold start experience.
-
## Next steps

For more information about Azure Functions Java development, see the following resources:
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Title: How to create a dataset using a GeoJson package
-description: Learn how to create a dataset using a GeoJson package embedding the module's JavaScript libraries.
+description: Learn how to create a dataset using a GeoJson package.
- Previously updated : 10/31/2021
+ Last updated : 11/01/2021
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&
1. Copy the value of the `Resource-Location` key in the response header, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the GeoJSON package resource.

### Create a dataset
-<!--
+ A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new create dataset API. The create dataset API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
> [!IMPORTANT]
> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted Drawing package.
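As a hedged example, a call to the API could look like the following; the URL pattern follows the other requests in this article, `{udid}` and `{subscription-key}` are placeholders, and the exact parameter names are assumptions beyond the documented `udid`:

```bash
# Create a dataset from an uploaded GeoJSON package using its udid.
curl -X POST \
  "https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={subscription-key}"
```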
See [Next steps](#next-steps) for links to articles to help you complete your in
## Add data to an existing dataset
-<!--
Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
-Data can be added to an existing dataset by providing the `datasetId` parameter to the create dataset API along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset drives the way feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON, and will continue to do so with all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs will be internally generated, and will continue to be internally generated with all GeoJSON packages appended to that dataset in the future.
https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio
| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a Drawing package][conversion]. |
| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. |
-<!--For more information, see [][].-->
-
## Geojson zip package requirements

The GeoJSON zip package consists of one or more [RFC 7946][RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension.
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2
[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[dataset-concept]: creator-indoor-maps.md#datasets
-<!--[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create-->
+[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create
[Visual Studio]: https://visualstudio.microsoft.com/downloads/
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
In addition to the generally available data collection listed above, Azure Monit
| Azure Monitor feature | Current support | Other extensions installed | More information |
| : | : | : | : |
| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (Public preview)](data-collection-text-log.md) |
-| Windows client installer | Public preview | None | [Set up Azure Monitor Agent on Windows client devices](azure-monitor-agent-windows-client.md) |
| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) |

In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
View [supported operating systems for Azure Arc Connected Machine agent](../../a
## Next steps

- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
1. Select **Done**.
1. You can edit the rule **Description** and **Severity**. These details are used in all alert actions. Additionally, you can choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.
1. Use the [**Suppress Alerts**](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts, but actions won't be triggered, to prevent noise. The **Mute actions** value must be greater than the alert frequency to be effective.
+1. To make alerts stateful, select **Automatically resolve alerts (preview)**.
![Suppress Alerts for Log Alerts](media/alerts-log/AlertsPreviewSuppress.png)

1. Specify if the alert rule should trigger one or more [**Action Groups**](./action-groups.md#webhook) when the alert condition is met.

> [!NOTE]
> Refer to the [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) for limits on the actions that can be performed.
- > [!NOTE]
- > Log alert rules are currently [stateless and do not resolve](./alerts-unified-log.md#state-and-resolving-alerts).
1. (Optional) Customize actions in log alert rules:
   - **Custom Email Subject**: Overrides the *e-mail subject* of email actions. You can't modify the body of the mail and this field **isn't for email addresses**.
   - **Include custom Json payload**: Overrides the webhook JSON used by Action Groups assuming the action group contains a webhook action. Learn more about [webhook action for Log Alerts](./alerts-log-webhook.md).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
In order to enable telemetry collection with Application Insights, only the Appl
Upgrading from version 2.8.9 happens automatically, without any extra actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
-To check which version of the extension you're running, go to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
+To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
:::image type="content" source="./media/azure-web-apps/extension-version.png" alt-text="Screenshot of the URL path to check the version of the extension you're running." border="false":::
If the upgrade is done from a version prior to 2.5.1, check that the Application
Below is our step-by-step troubleshooting guide for extension/agent-based monitoring for ASP.NET based applications running on Azure App Services.

1. Check that the `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
-2. Browse to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
+2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
:::image type="content" source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot of the link above results page." border="false":::
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
# Migrate to workspace-based Application Insights resources
-This article walks you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
-
-Workspace-based resources enable common Azure role-based access control across your resources and eliminate the need for cross-app/workspace queries.
-
-Workspace-based resources are currently available in all commercial regions and Azure US Government.
+This article walks through migrating a classic Application Insights resource to a workspace-based resource.
+
+Workspace-based resources:
+
+> [!div class="checklist"]
+> - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md)
+> - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md)
+> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location
+> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources
+> - Eliminate the need for cross-app/workspace queries
+> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml)
+> - Do not require changing instrumentation keys after migration from a Classic resource
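Migration can also be done programmatically. A sketch with the `application-insights` CLI extension (resource names are placeholders):

```azurecli
# Link a classic Application Insights resource to a Log Analytics workspace,
# converting it to a workspace-based resource.
az monitor app-insights component update \
  --app <application-insights-name> \
  --resource-group <resource-group> \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
```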
## New capabilities
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resources?
description: Direct telemetry to different resources for development, test, and production stamps.
- Previously updated : 05/11/2020
+ Last updated : 11/01/2022

# How many Application Insights resources should I deploy
-When you are developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version. To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys). To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to set the ikey in code instead of in the configuration file.
+When you're developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version.
+
+To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys).
+
+To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the ikey dynamically in code](#dynamic-ikey) instead of in the configuration file.
(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/azure-web-apps-net-core.md).)
When you are developing the next version of a web application, you don't want to
When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
-Each Application Insights resource comes with metrics that are available out-of-box. If completely separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
+Each Application Insights resource comes with metrics that are available out-of-box. If separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
### When to use a single Application Insights resource

-- For application components that are deployed together. Usually developed by a single team, managed by the same set of DevOps/ITOps users.
+- For application components that are deployed together. These applications are usually developed by a single team and managed by the same set of DevOps/ITOps users.
- If it makes sense to aggregate Key Performance Indicators (KPIs) such as response durations, failure rates in dashboard etc., across all of them by default (you can choose to segment by role name in the Metrics Explorer experience).
-- If there is no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
+- If there's no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
- If you don't need metrics alert criteria that are different between the components.
-- If you do not need to manage continuous exports differently between the components.
-- If you do not need to manage billing/quotas differently between the components.
-- If it is okay to have an API key have the same access to data from all components. And 10 API keys are sufficient for the needs across all of them.
-- If it is okay to have the same smart detection and work item integration settings across all roles.
+- If you don't need to manage continuous exports differently between the components.
+- If you don't need to manage billing/quotas differently between the components.
+- If it's okay to have an API key have the same access to data from all components. And 10 API keys are sufficient for the needs across all of them.
+- If it's okay to have the same smart detection and work item integration settings across all roles.
> [!NOTE]
> If you want to consolidate multiple Application Insights Resources, you may point your existing application components to a new, consolidated Application Insights Resource. The telemetry stored in your old resource will not be transferred to the new resource, so only delete the old resource when you have enough telemetry in the new resource for business continuity.
Each Application Insights resource comes with metrics that are available out-of-
### Other things to keep in mind

- You may need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, *NONE* of the portal experiences will work.
-- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these. For all other types of apps, you will likely need to set this explicitly.
-- Live Metrics experience does not support splitting by role name.
+- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these. For all other types of apps, you'll likely need to set this explicitly.
+- Live Metrics experience doesn't support splitting by role name.
## <a name="dynamic-ikey"></a> Dynamic instrumentation key
There are several different methods of setting the Application Version property.
This generates a file called *yourProjectName*.BuildInfo.config. The Publish process renames it to BuildInfo.config.
- The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it is populated with the correct version number.
+ The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it's populated with the correct version number.
To allow MSBuild to generate version numbers, set the version like `1.0.*` in AssemblyReference.cs
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Container insights uses a containerized version of the Log Analytics agent for L
## How to upgrade the Container insights agent
-Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift version 3.x. For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md) and Azure Red Hat OpenShift version 4.x, the agent is not managed, and you need to manually upgrade the agent.
+Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc enabled Kubernetes.
-If the agent upgrade fails for a cluster hosted on AKS or Azure Red Hat OpenShift version 3.x, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
### Upgrade agent on AKS cluster
Perform the following steps to upgrade the agent on a Kubernetes cluster running
* Self-managed Kubernetes clusters hosted on Azure using AKS Engine.
* Self-managed Kubernetes clusters hosted on Azure Stack or on-premises using AKS Engine.
-* Red Hat OpenShift version 4.x.
If the Log Analytics workspace is in commercial Azure, run the following command:
If the Log Analytics workspace is in Azure US Government, run the following comm
$ helm upgrade --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers
```
-### Upgrade agent on Azure Red Hat OpenShift v4
-
-Perform the following steps to upgrade the agent on a Kubernetes cluster running on Azure Red Hat OpenShift version 4.x.
-
->[!NOTE]
->Azure Red Hat OpenShift version 4.x only supports running in the Azure commercial cloud.
->
-
-```console
-curl -o upgrade-monitoring.sh -L https://aka.ms/upgrade-monitoring-bash-script
-export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-bash upgrade-monitoring.sh --resource-id $ azureAroV4ClusterResourceId
-```
-
## How to disable environment variable collection on a container

Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from agent version ciprod11292018 and higher.
azure-monitor Container Insights Prometheus Monitoring Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md
- Title: Send Prometheus metrics to Azure Monitor Logs with Container insights
-description: Configure the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster and send to Log Analytics workspace in Azure Monitor.
- Previously updated : 09/15/2022
-# Send Prometheus metrics to Azure Monitor Logs with Container insights
-This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon, which supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
--
-## Prometheus scraping settings
-
-Active scraping of metrics from Prometheus is performed from one of two perspectives:
-
-- **Cluster-wide**: Defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.
-- **Node-wide**: Defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.
-
-| Endpoint | Scope | Example |
-|-|-||
-| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
-| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics`ΓÇï |
-| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
-
-When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
-
-|Scope | Key | Data type | Value | Description |
-||--|--|-|-|
-| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
-| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example, `kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics", "http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics"]`|
-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
-| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
-| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
-| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
-| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval defaults to one minute (60 seconds). You can modify the interval in either *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* by using time units such as s, m, and h. |
-| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify which metrics to collect (`fieldpass`) or exclude (`fielddrop`) from the endpoint. You must set the allowlist first. |
-
-## Configure ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file for your cluster. ConfigMap is a global list, and only one ConfigMap can be applied to the agent. Another ConfigMap can't overrule these collection settings.
---
-1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
-1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
--
- ### [Cluster-wide](#tab/cluster-wide)
-
- To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
- ```
-
- ### [Specific URL](#tab/url)
-
- To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
- ```
-
- ### [DaemonSet](#tab/deamonset)
-
- To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.node]
- interval = "1m" ## Valid time units are s, m, h.
- urls = ["http://$NODE_IP:9103/metrics"]
- fieldpass = ["metric_to_pass1", "metric_to_pass2"]
- fielddrop = ["metric_to_drop"]
- ```
-
- `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
-
- ### [Pod annotation](#tab/pod)
-
- To configure scraping of Prometheus metrics by specifying a pod annotation:
-
- 1. In the ConfigMap, specify the following configuration:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h
- monitor_kubernetes_pods = true
- ```
-
- 1. Specify the following configuration for pod annotations:
-
- ```
- - prometheus.io/scrape: "true" # Enable scraping for this pod
- - prometheus.io/scheme: "http" # If the metrics endpoint is secured, set this to `https`; otherwise it defaults to `http`
- - prometheus.io/path: "/mymetrics" # If the metrics path isn't /metrics, define it with this annotation
- - prometheus.io/port: "8000" # If the port isn't 9102, use this annotation
- ```
-
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated to production workloads, set `monitor_kubernetes_pods` to `true` in the ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from, as shown in the sketch that follows. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
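-
- For example, a minimal sketch that combines the pod-annotation switch with a namespace allowlist (the namespace names are hypothetical):
-
- ```
- prometheus-data-collection-settings: |-
- [prometheus_data_collection_settings.cluster]
- interval = "1m"
- monitor_kubernetes_pods = true
- monitor_kubernetes_pods_namespaces = ["prod-ns1", "prod-ns2"]
- ```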
-
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-
-The configuration change can take a few minutes to take effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message similar to the following appears and includes the result `configmap "container-azm-ms-agentconfig" created`.
--
-## Verify configuration
-
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
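-
-The agent pod name varies per cluster. As a minimal sketch (assuming the `ama-logs` pod naming shown in the example command), list the agent pods first and then review the logs of one of them:
-
-```console
-# List the Azure Monitor Agent pods, then review the logs of one
-kubectl get pods -n kube-system | grep ama-logs
-kubectl logs <ama-logs-pod-name> -n kube-system
-```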
--
-If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
-
-```
-***************Start Config Processing********************
-config::unsupported/missing config schema version - 'v21' , using defaults
-```
-
-Errors related to applying configuration changes are also available for review. The following options are available for additional troubleshooting of configuration changes and scraping of Prometheus metrics:
-
-- From agent pod logs, using the same `kubectl logs` command.
-
-- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:
-
- ```
- 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
- ```
-
-- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table has data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred, as well as the first occurrence, last occurrence, and count in the last hour.
-- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify whether log collection of openshift-azure-logging is enabled.
-
-Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMap by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-
-For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMap by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
-
-## Query Prometheus metrics data
-
-To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
-
-## View Prometheus metrics in Grafana
-
-Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
---
-## Next steps
-
-- [Learn more about scraping Prometheus metrics](container-insights-prometheus.md).
-- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
azure-monitor Grafana Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/grafana-plugin.md
Use the following steps to set up a Grafana server and build dashboards for metr
## Set up Grafana
-### Set up Azure Managed Grafana (Preview)
+### Set up Azure Managed Grafana
Azure Managed Grafana is optimized for the Azure environment and works seamlessly with Azure Monitor, enabling you to: - Manage user authentication and access control using Azure Active Directory identities
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 10/27/2022 Last updated : 11/02/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ * <a name="encrypted-smb-dc"></a> **Encrypted SMB connections to Domain Controller**
+
+ **Encrypted SMB connections to Domain Controller** specifies whether encryption should be used for communication between an SMB server and domain controller. When enabled, only SMB3 will be used for encrypted domain controller connections.
+
+ This feature is currently in preview. Before you use Encrypted SMB connections to domain controller for the first time, you must register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEncryptedSMBConnectionsToDC
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEncryptedSMBConnectionsToDC
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
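+
+ For example, an equivalent Azure CLI sketch using the same provider namespace and feature name:
+
+ ```azurecli-interactive
+ az feature register --namespace Microsoft.NetApp --name ANFEncryptedSMBConnectionsToDC
+ az feature show --namespace Microsoft.NetApp --name ANFEncryptedSMBConnectionsToDC
+ ```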
+ * <a name="backup-policy-users"></a> **Backup policy users** This option grants additional security privileges to AD DS domain users or groups that require elevated backup privileges to support backup, restore, and migration workflows in Azure NetApp Files. The specified AD DS user accounts or groups will have elevated NTFS permissions at the file or folder level.
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Previously updated : 07/22/2022 Last updated : 11/02/2022
Once you have [created an Active Directory connection](create-active-directory-c
| Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. | | LDAP over TLS | If enabled, LDAP over TLS will be configured to support secure LDAP communication to active directory. | Yes | None | If LDAP over TLS is enabled and if the server root CA certificate is already present in the database, then LDAP traffic is secured using the CA certificate. If a new certificate is passed in, that certificate will be installed. | | Server root CA Certificate | When LDAP over SSL/TLS is enabled, the LDAP client is required to have base64-encoded Active Directory Certificate Service's self-signed root CA certificate. | Yes | None* | LDAP traffic secured with new certificate only if LDAP over TLS is enabled |
+| Encrypted SMB connections to Domain Controller | This specifies whether encryption should be used for communication between SMB server and domain controller. See [Create Active Directory connections](create-active-directory-connections.md#encrypted-smb-dc) for more details on using this feature. | Yes | SMB, Kerberos, and LDAP enabled volume creation cannot be used if the domain controller does not support SMB3 | Only SMB3 will be used for encrypted domain controller connections. |
| Backup policy users | You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. | | Administrators | Specify users or groups that will be given administrator privileges on the volume | Yes | None | User account will receive administrator privileges | | Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/25/2022 Last updated : 11/02/2022 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## November 2022
+
+* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview)
+
+ With the Encrypted SMB connections to Active Directory Domain Controller capability, you can now specify whether encryption should be used for communication between SMB server and domain controller in Active Directory connections. When enabled, only SMB3 will be used for encrypted domain controller connections.
+ ## October 2022 * [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 09/30/2022 Last updated : 11/01/2022 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"artifacts-parameters": { "level": "warning" },
+ "decompiler-cleanup": {
+ "level": "warning"
+ },
"max-outputs": { "level": "warning" },
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Previously updated : 10/26/2022 Last updated : 11/01/2022 - # Use deployment scripts in Bicep
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
storageAccountName: 'myStorageAccount' storageAccountKey: 'myKey' }
- azPowerShellVersion: '6.4' // or azCliVersion: '2.28.0'
+ azPowerShellVersion: '8.3' // or azCliVersion: '2.40.0'
arguments: '-name \\"John Dole\\"' environmentVariables: [ {
Property value details:
The following Bicep file has one resource defined with the `Microsoft.Resources/deploymentScripts` type. The highlighted part is the inline script. The script takes a parameter, and outputs the parameter value. `DeploymentScriptOutputs` is used for storing outputs. The output line shows how to access the stored values. `Write-Output` is used for debugging purposes. To learn how to access the output file, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). For the property descriptions, see [Sample Bicep files](#sample-bicep-files).
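
The full example file is elided in this digest. As a minimal sketch of the shape it describes, with illustrative resource and parameter names, and reusing the inline script shown in the output samples later in this article:

```bicep
param name string = 'John Dole'

resource runPowerShellInlineWithOutput 'Microsoft.Resources/deploymentScripts@2020-10-01' = {
  name: 'runPowerShellInlineWithOutput'
  location: resourceGroup().location
  kind: 'AzurePowerShell'
  properties: {
    azPowerShellVersion: '8.3'
    arguments: '-name \\"${name}\\"'
    // The inline script stores its result in DeploymentScriptOutputs
    scriptContent: '''
      param([string] $name)
      $output = "Hello {0}" -f $name
      Write-Output $output
      $DeploymentScriptOutputs = @{}
      $DeploymentScriptOutputs['text'] = $output
    '''
    retentionInterval: 'P1D'
  }
}

// Access the stored value through the resource's symbolic name
output text string = runPowerShellInlineWithOutput.properties.outputs.text
```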
You can use the [loadTextContent](bicep-functions-files.md#loadtextcontent) func
The following example loads a script from a file and uses it for a deployment script. ## Use external scripts
The supporting files are copied to `azscripts/azscriptinput` at the runtime. Use
The following Bicep file shows how to pass values between two `deploymentScripts` resources. In the first resource, you define a variable called `$DeploymentScriptOutputs` and use it to store the output values. Use the resource's symbolic name to access the output values.
Different from the PowerShell deployment script, CLI/bash support doesn't expose
Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid. [jq](https://stedolan.github.io/jq/) is used in the previous sample. It comes with the container images. See [Configure development environment](#configure-development-environment).
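
For example, a minimal bash sketch that writes an output object of the documented shape to that location (using `jq`, which comes with the container images):

```bash
#!/bin/bash
# Build { "MyResult": [ "foo", "bar" ] } and write it to the path
# the deployment script service reads outputs from.
jq -n '{MyResult: ["foo", "bar"]}' > "$AZ_SCRIPTS_OUTPUT_PATH"
```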
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState : Succeeded Identity : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami ScriptKind : AzurePowerShell
-AzPowerShellVersion : 3.0
-StartTime : 6/18/2020 7:46:45 PM
-EndTime : 6/18/2020 7:49:45 PM
-ExpirationDate : 6/19/2020 7:49:45 PM
+AzPowerShellVersion : 8.3
+StartTime : 6/18/2022 7:46:45 PM
+EndTime : 6/18/2022 7:49:45 PM
+ExpirationDate : 6/19/2022 7:49:45 PM
CleanupPreference : OnSuccess StorageAccountId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
The list command output is similar to:
[ { "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "3.0",
+ "azPowerShellVersion": "8.3",
"cleanupPreference": "OnSuccess", "containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20200625T025902Z",
+ "forceUpdateTag": "20220625T025902Z",
"id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput", "identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF",
The list command output is similar to:
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2020-06-25T03:00:16.796923+00:00",
+ "endTime": "2022-06-25T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2020-06-26T03:00:16.796923+00:00",
- "startTime": "2020-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2022-06-26T03:00:16.796923+00:00",
+ "startTime": "2022-06-25T02:59:07.595140+00:00",
"storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts" }, "storageAccountSettings": null, "supportingScriptUris": null, "systemData": {
- "createdAt": "2020-06-25T02:59:04.750195+00:00",
+ "createdAt": "2022-06-25T02:59:04.750195+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" },
The output is similar to:
"systemData": { "createdBy": "someone@contoso.com", "createdByType": "User",
- "createdAt": "2020-06-25T02:59:04.7501955Z",
+ "createdAt": "2022-06-25T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
}, "properties": { "provisioningState": "Succeeded",
- "forceUpdateTag": "20200625T025902Z",
- "azPowerShellVersion": "3.0",
+ "forceUpdateTag": "20220625T025902Z",
+ "azPowerShellVersion": "8.3",
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "arguments": "-name \\\"John Dole\\\"", "retentionInterval": "P1D",
The output is similar to:
"status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts", "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2020-06-25T02:59:07.5951401Z",
- "endTime": "2020-06-25T03:00:16.7969234Z",
- "expirationTime": "2020-06-26T03:00:16.7969234Z"
+ "startTime": "2022-06-25T02:59:07.5951401Z",
+ "endTime": "2022-06-25T03:00:16.7969234Z",
+ "expirationTime": "2022-06-26T03:00:16.7969234Z"
}, "outputs": { "text": "Hello John Dole"
azure-resource-manager Linter Rule Decompiler Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-decompiler-cleanup.md
+
+ Title: Linter rule - decompiler cleanup
+description: Linter rule - decompiler cleanup
+ Last updated : 11/01/2022++
+# Linter rule - decompiler cleanup
+
+The [Bicep CLI decompile](./bicep-cli.md#decompile) command converts ARM template JSON to a Bicep file. If a variable name, parameter name, or resource symbolic name is ambiguous, the Bicep CLI adds a suffix to the name, for example *accountName_var* or *virtualNetwork_resource*. This rule finds these names in Bicep files.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`decompiler-cleanup`
+
+## Solution
+
+To improve readability, replace these generated names with more meaningful names, as in the sketch below.
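+
+For example, a hypothetical before-and-after rename for a decompiled variable:
+
+```bicep
+// Decompiler output: the generated _var suffix triggers the decompiler-cleanup rule
+var storageAccountName_var = 'mystorageacct'
+
+// After cleanup: a meaningful name without the generated suffix
+var storageAccountName = 'mystorageacct'
+```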
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter description: Learn how to use Bicep linter. Previously updated : 9/30/2022 Last updated : 11/01/2022 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [adminusername-should-not-be-literal](./linter-rule-admin-username-should-not-be-literal.md) - [artifacts-parameters](./linter-rule-artifacts-parameters.md)
+- [decompiler-cleanup](./linter-rule-decompiler-cleanup.md)
- [max-outputs](./linter-rule-max-outputs.md) - [max-params](./linter-rule-max-parameters.md) - [max-resources](./linter-rule-max-resources.md)
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | - | -- | - | -- | > | diagnosticsettings | No | No | No | > | diagnosticsettingscategories | No | No | No |
-> | privatelinkforazuread | Yes | Yes | No |
-> | tenants | Yes | Yes | No |
+> | privatelinkforazuread | **Yes** | **Yes** | No |
+> | tenants | **Yes** | **Yes** | No |
## Microsoft.Addons
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | actionrules | Yes | Yes | No |
+> | actionrules | **Yes** | **Yes** | No |
> | alerts | No | No | No | > | alertslist | No | No | No | > | alertsmetadata | No | No | No | > | alertssummary | No | No | No | > | alertssummarylist | No | No | No |
-> | smartdetectoralertrules | Yes | Yes | No |
+> | smartdetectoralertrules | **Yes** | **Yes** | No |
> | smartgroups | No | No | No | ## Microsoft.AnalysisServices
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | servers | Yes | Yes | No |
+> | servers | **Yes** | **Yes** | No |
## Microsoft.ApiManagement
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | reportfeedback | No | No | No |
-> | service | Yes | Yes | Yes (using template) <br/><br/> [Move API Management across regions](../../api-management/api-management-howto-migrate.md). |
+> | service | **Yes** | **Yes** | **Yes** (using template) <br/><br/> [Move API Management across regions](../../api-management/api-management-howto-migrate.md). |
## Microsoft.App > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | managedenvironments | Yes | Yes | No |
+> | managedenvironments | **Yes** | **Yes** | No |
## Microsoft.AppConfiguration > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | configurationstores | Yes | Yes | No |
+> | configurationstores | **Yes** | **Yes** | No |
> | configurationstores / eventgridfilters | No | No | No | ## Microsoft.AppPlatform
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | spring | Yes | Yes | No |
+> | spring | **Yes** | **Yes** | No |
## Microsoft.AppService
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | apiapps | No | No | Yes (using template)<br/><br/> [Move an App Service app to another region](../../app-service/manage-move-across-regions.md) |
+> | apiapps | No | No | **Yes** (using template)<br/><br/> [Move an App Service app to another region](../../app-service/manage-move-across-regions.md) |
> | appidentities | No | No | No | > | gateways | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | attestationproviders | Yes | Yes | No |
+> | attestationproviders | **Yes** | **Yes** | No |
## Microsoft.Authorization
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | automationaccounts | Yes | Yes | Yes (using template) <br/><br/> [Using geo-replication](../../automation/automation-managing-data.md#geo-replication-in-azure-automation) |
-> | automationaccounts / configurations | Yes | Yes | No |
-> | automationaccounts / runbooks | Yes | Yes | No |
+> | automationaccounts | **Yes** | **Yes** | **Yes** (using template) <br/><br/> [Using geo-replication](../../automation/automation-managing-data.md#geo-replication-in-azure-automation) |
+> | automationaccounts / configurations | **Yes** | **Yes** | No |
+> | automationaccounts / runbooks | **Yes** | **Yes** | No |
## Microsoft.AVS > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | privateclouds | Yes | Yes | No |
+> | privateclouds | **Yes** | **Yes** | No |
## Microsoft.AzureActiveDirectory > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | b2cdirectories | Yes | Yes | No |
+> | b2cdirectories | **Yes** | **Yes** | No |
> | b2ctenants | No | No | No | ## Microsoft.AzureData
Jump to a resource provider namespace:
> | sqlinstances | No | No | No | > | sqlmanagedinstances | No | No | No | > | sqlserverinstances | No | No | No |
-> | sqlserverregistrations | Yes | Yes | No |
+> | sqlserverregistrations | **Yes** | **Yes** | No |
## Microsoft.AzureStack
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | cloudmanifestfiles | No | No | No |
-> | registrations | Yes | Yes | No |
+> | registrations | **Yes** | **Yes** | No |
## Microsoft.AzureStackHCI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | batchaccounts | Yes | Yes | Batch accounts can't be moved directly from one region to another, but you can use a template to export a template, modify it, and deploy the template to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/account-move.md) |
+> | batchaccounts | **Yes** | **Yes** | Batch accounts can't be moved directly from one region to another, but you can use a template to export a template, modify it, and deploy the template to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/account-move.md) |
## Microsoft.Billing
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | botservices | Yes | Yes | No |
+> | botservices | **Yes** | **Yes** | No |
## Microsoft.Cache
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | redis | Yes | Yes | No |
+> | redis | **Yes** | **Yes** | No |
> | redisenterprise | No | No | No | ## Microsoft.Capacity
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | cdnwebapplicationfirewallmanagedrulesets | No | No | No |
-> | cdnwebapplicationfirewallpolicies | Yes | Yes | No |
+> | cdnwebapplicationfirewallpolicies | **Yes** | **Yes** | No |
> | edgenodes | No | No | No |
-> | profiles | Yes | Yes | No |
-> | profiles / endpoints | Yes | Yes | No |
+> | profiles | **Yes** | **Yes** | No |
+> | profiles / endpoints | **Yes** | **Yes** | No |
## Microsoft.CertificateRegistration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | certificateorders | Yes | Yes | No |
+> | certificateorders | **Yes** | **Yes** | No |
## Microsoft.ClassicCompute
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | capabilities | No | No | No |
-> | domainnames | Yes | No | No |
+> | domainnames | **Yes** | No | No |
> | quotas | No | No | No | > | resourcetypes | No | No | No | > | validatesubscriptionmoveavailability | No | No | No |
-> | virtualmachines | Yes | Yes | No |
+> | virtualmachines | **Yes** | **Yes** | No |
## Microsoft.ClassicInfrastructureMigrate
Jump to a resource provider namespace:
> | osplatformimages | No | No | No | > | publicimages | No | No | No | > | quotas | No | No | No |
-> | storageaccounts | Yes | No | Yes |
+> | storageaccounts | **Yes** | No | **Yes** |
> | vmimages | No | No | No | ## Microsoft.ClassicSubscription
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
-> | Cognitive Search | Yes | Yes | Supported with manual steps.<br/><br/> Learn about [moving your Azure Cognitive Search service to another region](../../search/search-howto-move-across-regions.md) |
+> | accounts | **Yes** | **Yes** | No |
+> | Cognitive Search | **Yes** | **Yes** | Supported with manual steps.<br/><br/> Learn about [moving your Azure Cognitive Search service to another region](../../search/search-howto-move-across-regions.md) |
## Microsoft.Commerce
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | communicationservices | Yes | Yes <br/><br/> Note that resources with attached phone numbers cannot be moved to subscriptions in different data locations, nor subscriptions that do not support having phone numbers. | No |
+> | communicationservices | **Yes** | **Yes** <br/><br/> Note that resources with attached phone numbers cannot be moved to subscriptions in different data locations, nor subscriptions that do not support having phone numbers. | No |
## Microsoft.Compute
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | availabilitysets | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move availability sets. |
+> | availabilitysets | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move availability sets. |
> | diskaccesses | No | No | No | > | diskencryptionsets | No | No | No |
-> | disks | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs and related disks. |
+> | disks | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs and related disks. |
> | galleries | No | No | No | > | galleries / images | No | No | No | > | galleries / images / versions | No | No | No | > | hostgroups | No | No | No | > | hostgroups / hosts | No | No | No |
-> | images | Yes | Yes | No |
-> | proximityplacementgroups | Yes | Yes | No |
+> | images | **Yes** | **Yes** | No |
+> | proximityplacementgroups | **Yes** | **Yes** | No |
> | restorepointcollections | No | No | No | > | restorepointcollections / restorepoints | No | No | No | > | sharedvmextensions | No | No | No | > | sharedvmimages | No | No | No | > | sharedvmimages / versions | No | No | No |
-> | snapshots | Yes - Full <br> No - Incremental | Yes - Full <br> No - Incremental | No - Full <br> No - Incremental |
+> | snapshots | **Yes** - Full <br> No - Incremental | **Yes** - Full <br> No - Incremental | No - Full <br> No - Incremental |
> | sshpublickeys | No | No | No |
-> | virtualmachines | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. |
-> | virtualmachines / extensions | Yes | Yes | No |
-> | virtualmachinescalesets | Yes | Yes | No |
+> | virtualmachines | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. |
+> | virtualmachines / extensions | **Yes** | **Yes** | No |
+> | virtualmachinescalesets | **Yes** | **Yes** | No |
## Microsoft.Confluent
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | registries | Yes | Yes | No |
-> | registries / agentpools | Yes | Yes | No |
-> | registries / buildtasks | Yes | Yes | No |
-> | registries / replications | Yes | Yes | No |
-> | registries / tasks | Yes | Yes | No |
-> | registries / webhooks | Yes | Yes | No |
+> | registries | **Yes** | **Yes** | No |
+> | registries / agentpools | **Yes** | **Yes** | No |
+> | registries / buildtasks | **Yes** | **Yes** | No |
+> | registries / replications | **Yes** | **Yes** | No |
+> | registries / tasks | **Yes** | **Yes** | No |
+> | registries / webhooks | **Yes** | **Yes** | No |
## Microsoft.ContainerService
Jump to a resource provider namespace:
> | billingaccounts | No | No | No | > | budgets | No | No | No | > | cloudconnectors | No | No | No |
-> | connectors | Yes | Yes | No |
+> | connectors | **Yes** | **Yes** | No |
> | departments | No | No | No | > | dimensions | No | No | No | > | enrollmentaccounts | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | associations | No | No | No |
-> | resourceproviders | Yes | Yes | No |
+> | resourceproviders | **Yes** | **Yes** | No |
## Microsoft.DataBox
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | catalogs | Yes | Yes | No |
+> | catalogs | **Yes** | **Yes** | No |
> | datacatalogs | No | No | No | ## Microsoft.DataConnect
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | datafactories | Yes | Yes | No |
-> | factories | Yes | Yes | No |
+> | datafactories | **Yes** | **Yes** | No |
+> | factories | **Yes** | **Yes** | No |
## Microsoft.DataLake
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DataLakeStore > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DataMigration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |
-> | backupvaults | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No |
+> | backupvaults | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No |
## Microsoft.DataShare > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DBforMariaDB > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recover-from-an-azure-regional-data-center-outage).
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recover-from-an-azure-regional-data-center-outage).
## Microsoft.DBforMySQL > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | flexibleServers | Yes | Yes | No |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../mysql/howto-move-regions-portal.md).
+> | flexibleServers | **Yes** | **Yes** | No |
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../mysql/howto-move-regions-portal.md).
## Microsoft.DBforPostgreSQL > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | flexibleServers | Yes | Yes | No |
+> | flexibleServers | **Yes** | **Yes** | No |
> | servergroups | No | No | No |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).
-> | serversv2 | Yes | Yes | No |
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).
+> | serversv2 | **Yes** | **Yes** | No |
## Microsoft.DeploymentManager > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | artifactsources | Yes | Yes | No |
-> | rollouts | Yes | Yes | No |
-> | servicetopologies | Yes | Yes | No |
-> | servicetopologies / services | Yes | Yes | No |
-> | servicetopologies / services / serviceunits | Yes | Yes | No |
-> | steps | Yes | Yes | No |
+> | artifactsources | **Yes** | **Yes** | No |
+> | rollouts | **Yes** | **Yes** | No |
+> | servicetopologies | **Yes** | **Yes** | No |
+> | servicetopologies / services | **Yes** | **Yes** | No |
+> | servicetopologies / services / serviceunits | **Yes** | **Yes** | No |
+> | steps | **Yes** | **Yes** | No |
## Microsoft.DesktopVirtualization > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | applicationgroups | Yes | Yes | No |
-> | hostpools | Yes | Yes | No |
-> | workspaces | Yes | Yes | No |
+> | applicationgroups | **Yes** | **Yes** | No |
+> | hostpools | **Yes** | **Yes** | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.Devices
Jump to a resource provider namespace:
> | - | -- | - | -- | > | elasticpools | No | No | No. Resource isn't exposed. | > | elasticpools / iothubtenants | No | No | No. Resource isn't exposed. |
-> | iothubs | Yes | Yes | Yes. [Learn more](../../iot-hub/iot-hub-how-to-clone.md) |
-> | provisioningservices | Yes | Yes | No |
+> | iothubs | **Yes** | **Yes** | **Yes**. [Learn more](../../iot-hub/iot-hub-how-to-clone.md) |
+> | provisioningservices | **Yes** | **Yes** | No |
## Microsoft.DevOps > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | pipelines | Yes | Yes | No |
+> | pipelines | **Yes** | **Yes** | No |
> | controllers | **pending** | **pending** | No | ## Microsoft.DevSpaces
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | controllers | Yes | Yes | No |
+> | controllers | **Yes** | **Yes** | No |
> | AKS cluster | **pending** | **pending** | No<br/><br/> [Learn more](/previous-versions/azure/dev-spaces/) about moving to another region. |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | labcenters | No | No | No |
-> | labs | Yes | No | No |
-> | labs / environments | Yes | Yes | No |
-> | labs / servicerunners | Yes | Yes | No |
-> | labs / virtualmachines | Yes | No | No |
-> | schedules | Yes | Yes | No |
+> | labs | **Yes** | No | No |
+> | labs / environments | **Yes** | **Yes** | No |
+> | labs / servicerunners | **Yes** | **Yes** | No |
+> | labs / virtualmachines | **Yes** | No | No |
+> | schedules | **Yes** | **Yes** | No |
## Microsoft.DigitalTwins > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | digitaltwinsinstances | No | No | Yes, by recreating resources in new region. [Learn more](../../digital-twins/how-to-move-regions.md) |
+> | digitaltwinsinstances | No | No | **Yes**, by recreating resources in new region. [Learn more](../../digital-twins/how-to-move-regions.md) |
## Microsoft.DocumentDB
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | databaseaccountnames | No | No | No |
-> | databaseaccounts | Yes | Yes | No |
+> | databaseaccounts | **Yes** | **Yes** | No |
## Microsoft.DomainRegistration > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | domains | Yes | Yes | No |
+> | domains | **Yes** | **Yes** | No |
> | generatessorequest | No | No | No | > | topleveldomains | No | No | No | > | validatedomainregistrationinformation | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | services | Yes | Yes | No |
+> | services | **Yes** | **Yes** | No |
## Microsoft.EventGrid > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | domains | Yes | Yes | No |
+> | domains | **Yes** | **Yes** | No |
> | eventsubscriptions | No - can't be moved independently but automatically moved with subscribed resource. | No - can't be moved independently but automatically moved with subscribed resource. | No | > | extensiontopics | No | No | No |
-> | partnernamespaces | Yes | Yes | No |
+> | partnernamespaces | **Yes** | **Yes** | No |
> | partnerregistrations | No | No | No |
-> | partnertopics | Yes | Yes | No |
-> | systemtopics | Yes | Yes | No |
-> | topics | Yes | Yes | No |
+> | partnertopics | **Yes** | **Yes** | No |
+> | systemtopics | **Yes** | **Yes** | No |
+> | topics | **Yes** | **Yes** | No |
> | topictypes | No | No | No | ## Microsoft.EventHub
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
-> | namespaces | Yes | Yes | Yes (with template)<br/><br/> [Move an Event Hub namespace to another region](../../event-hubs/move-across-regions.md) |
+> | clusters | **Yes** | **Yes** | No |
+> | namespaces | **Yes** | **Yes** | **Yes** (with template)<br/><br/> [Move an Event Hub namespace to another region](../../event-hubs/move-across-regions.md) |
> | sku | No | No | No | ## Microsoft.Experimentation
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
## Microsoft.Features
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
## Microsoft.HealthcareApis > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | services | Yes | Yes | No |
+> | services | **Yes** | **Yes** | No |
## Microsoft.HybridCompute > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | machines | Yes | Yes | No |
-> | machines / extensions | Yes | Yes | No |
+> | machines | **Yes** | **Yes** | No |
+> | machines / extensions | **Yes** | **Yes** | No |
## Microsoft.HybridData > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | datamanagers | Yes | Yes | No |
+> | datamanagers | **Yes** | **Yes** | No |
## Microsoft.HybridNetwork
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | jobs | Yes | Yes | No |
+> | jobs | **Yes** | **Yes** | No |
## Microsoft.Insights
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No. [Learn more](../../azure-monitor/faq.yml#how-do-i-move-an-application-insights-resource-to-a-new-region-). |
-> | actiongroups | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No. [Learn more](../../azure-monitor/faq.yml#how-do-i-move-an-application-insights-resource-to-a-new-region-). |
+> | actiongroups | **Yes** | **Yes** | No |
> | activitylogalerts | No | No | No |
-> | alertrules | Yes | Yes | No |
-> | autoscalesettings | Yes | Yes | No |
+> | alertrules | **Yes** | **Yes** | No |
+> | autoscalesettings | **Yes** | **Yes** | No |
> | baseline | No | No | No |
-> | components | Yes | Yes | No |
+> | components | **Yes** | **Yes** | No |
> | datacollectionrules | No | No | No | > | diagnosticsettings | No | No | No | > | diagnosticsettingscategories | No | No | No |
Jump to a resource provider namespace:
> | notificationgroups | No | No | No | > | privatelinkscopes | No | No | No | > | rollbacktolegacypricingmodel | No | No | No |
-> | scheduledqueryrules | Yes | Yes | No |
+> | scheduledqueryrules | **Yes** | **Yes** | No |
> | topology | No | No | No | > | transactions | No | No | No | > | vminsightsonboardingstatuses | No | No | No |
-> | webtests | Yes | Yes | No |
+> | webtests | **Yes** | **Yes** | No |
> | webtests / gettestresultfile | No | No | No |
-> | workbooks | Yes | Yes | No |
-> | workbooktemplates | Yes | Yes | No |
+> | workbooks | **Yes** | **Yes** | No |
+> | workbooktemplates | **Yes** | **Yes** | No |
## Microsoft.IoTCentral
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | apptemplates | No | No | No |
-> | iotapps | Yes | Yes | No |
+> | iotapps | **Yes** | **Yes** | No |
## Microsoft.IoTHub > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | iothub | Yes | Yes | Yes (clone hub) <br/><br/> [Clone an IoT hub to another region](../../iot-hub/iot-hub-how-to-clone.md) |
+> | iothub | **Yes** | **Yes** | **Yes** (clone hub) <br/><br/> [Clone an IoT hub to another region](../../iot-hub/iot-hub-how-to-clone.md) |
## Microsoft.IoTSpaces > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | graph | Yes | Yes | No |
+> | graph | **Yes** | **Yes** | No |
## Microsoft.KeyVault
Jump to a resource provider namespace:
> | deletedvaults | No | No | No | > | hsmpools | No | No | No | > | managedhsms | No | No | No |
-> | vaults | Yes | Yes | No |
+> | vaults | **Yes** | **Yes** | No |
## Microsoft.Kubernetes
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
## Microsoft.LabServices
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | hostingenvironments | No | No | No |
-> | integrationaccounts | Yes | Yes | No |
-> | integrationserviceenvironments | Yes | No | No |
-> | integrationserviceenvironments / managedapis | Yes | No | No |
+> | integrationaccounts | **Yes** | **Yes** | No |
+> | integrationserviceenvironments | **Yes** | No | No |
+> | integrationserviceenvironments / managedapis | **Yes** | No | No |
> | isolatedenvironments | No | No | No |
-> | workflows | Yes | Yes | No |
+> | workflows | **Yes** | **Yes** | No |
## Microsoft.MachineLearning
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | commitmentplans | No | No | No |
-> | webservices | Yes | No | No |
-> | workspaces | Yes | Yes | No |
+> | webservices | **Yes** | No | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.MachineLearningCompute
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | configurationassignments | No | No | Yes. [Learn more](../../virtual-machines/move-region-maintenance-configuration.md) |
-> | maintenanceconfigurations | Yes | Yes | Yes. [Learn more](../../virtual-machines/move-region-maintenance-configuration-resources.md) |
+> | configurationassignments | No | No | **Yes**. [Learn more](../../virtual-machines/move-region-maintenance-configuration.md) |
+> | maintenanceconfigurations | **Yes** | **Yes** | **Yes**. [Learn more](../../virtual-machines/move-region-maintenance-configuration-resources.md) |
> | updates | No | No | No | ## Microsoft.ManagedIdentity
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No, Azure Maps is a geospatial service. |
-> | accounts / privateatlases | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No, Azure Maps is a geospatial service. |
+> | accounts / privateatlases | **Yes** | **Yes** | No |
## Microsoft.Marketplace
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | mediaservices | Yes | Yes | No |
-> | mediaservices / liveevents | Yes | Yes | No |
-> | mediaservices / streamingendpoints | Yes | Yes | No |
+> | mediaservices | **Yes** | **Yes** | No |
+> | mediaservices / liveevents | **Yes** | **Yes** | No |
+> | mediaservices / streamingendpoints | **Yes** | **Yes** | No |
## Microsoft.Microservices4Spring
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - | > | objectunderstandingaccounts | No | No | No |
-> | remoterenderingaccounts | Yes | Yes | No |
-> | spatialanchorsaccounts | Yes | Yes | No |
+> | remoterenderingaccounts | **Yes** | **Yes** | No |
+> | spatialanchorsaccounts | **Yes** | **Yes** | No |
## Microsoft.NetApp
Jump to a resource provider namespace:
> | - | -- | - | -- | > | applicationgateways | No | No | No | > | applicationgatewaywebapplicationfirewallpolicies | No | No | No |
-> | applicationsecuritygroups | Yes | Yes | No |
+> | applicationsecuritygroups | **Yes** | **Yes** | No |
> | azurefirewalls | No | No | No | > | bastionhosts | No | No | No | > | bgpservicecommunities | No | No | No |
-> | connections | Yes | Yes | No |
-> | ddoscustompolicies | Yes | Yes | No |
+> | connections | **Yes** | **Yes** | No |
+> | ddoscustompolicies | **Yes** | **Yes** | No |
> | ddosprotectionplans | No | No | No |
-> | dnszones | Yes | Yes | No |
+> | dnszones | **Yes** | **Yes** | No |
> | expressroutecircuits | No | No | No | > | expressroutegateways | No | No | No | > | expressrouteserviceproviders | No | No | No | > | firewallpolicies | No | No | No | > | frontdoors | No | No | No |
-> | ipallocations | Yes | Yes | No |
+> | ipallocations | **Yes** | **Yes** | No |
> | ipgroups | No | No | No |
-> | loadbalancers | Yes - Basic SKU<br> Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move internal and external load balancers. |
-> | localnetworkgateways | Yes | Yes | No |
+> | loadbalancers | **Yes** - Basic SKU<br> **Yes** - Standard SKU | **Yes** - Basic SKU<br>No - Standard SKU | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move internal and external load balancers. |
+> | localnetworkgateways | **Yes** | **Yes** | No |
> | natgateways | No | No | No |
> | networkexperimentprofiles | No | No | No |
-> | networkintentpolicies | Yes | Yes | No |
-> | networkinterfaces | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. |
+> | networkintentpolicies | **Yes** | **Yes** | No |
+> | networkinterfaces | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. |
> | networkprofiles | No | No | No |
-> | networksecuritygroups | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
+> | networksecuritygroups | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
> | networkwatchers | No | No | No |
-> | networkwatchers / connectionmonitors | Yes | No | No |
-> | networkwatchers / flowlogs | Yes | No | No |
-> | networkwatchers / pingmeshes | Yes | No | No |
+> | networkwatchers / connectionmonitors | **Yes** | No | No |
+> | networkwatchers / flowlogs | **Yes** | No | No |
+> | networkwatchers / pingmeshes | **Yes** | No | No |
> | p2svpngateways | No | No | No |
-> | privatednszones | Yes | Yes | No |
-> | privatednszones / virtualnetworklinks | Yes | Yes | No |
+> | privatednszones | **Yes** | **Yes** | No |
+> | privatednszones / virtualnetworklinks | **Yes** | **Yes** | No |
> | privatednszonesinternal | No | No | No |
> | privateendpointredirectmaps | No | No | No |
-> | privateendpoints | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No |
+> | privateendpoints | **Yes** - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | **Yes** - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No |
> | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
-> | publicipprefixes | Yes | Yes | No |
+> | publicipaddresses | **Yes** | **Yes** - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | **Yes**<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
+> | publicipprefixes | **Yes** | **Yes** | No |
> | routefilters | No | No | No |
-> | routetables | Yes | Yes | No |
-> | securitypartnerproviders | Yes | Yes | No |
-> | serviceendpointpolicies | Yes | Yes | No |
+> | routetables | **Yes** | **Yes** | No |
+> | securitypartnerproviders | **Yes** | **Yes** | No |
+> | serviceendpointpolicies | **Yes** | **Yes** | No |
> | trafficmanagergeographichierarchies | No | No | No |
-> | trafficmanagerprofiles | Yes | Yes | No |
+> | trafficmanagerprofiles | **Yes** | **Yes** | No |
> | trafficmanagerprofiles / heatmaps | No | No | No |
> | trafficmanagerusermetricskeys | No | No | No |
> | virtualhubs | No | No | No |
-> | virtualnetworkgateways | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No |
-> | virtualnetworks | Yes | Yes | No |
+> | virtualnetworkgateways | **Yes** | **Yes** - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No |
+> | virtualnetworks | **Yes** | **Yes** | No |
> | virtualnetworktaps | No | No | No |
-> | virtualrouters | Yes | Yes | No |
+> | virtualrouters | **Yes** | **Yes** | No |
> | virtualwans | No | No | No |
> | vpngateways (Virtual WAN) | No | No | No |
> | vpnserverconfigurations | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
-> | namespaces / notificationhubs | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
+> | namespaces / notificationhubs | **Yes** | **Yes** | No |
## Microsoft.ObjectStore
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | osnamespaces | Yes | Yes | No |
+> | osnamespaces | **Yes** | **Yes** | No |
## Microsoft.OffAzure
Jump to a resource provider namespace:
> | deletedworkspaces | No | No | No |
> | linktargets | No | No | No |
> | storageinsightconfigs | No | No | No |
-> | workspaces | Yes | Yes | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.OperationsManagement
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | managementassociations | No | No | No |
-> | managementconfigurations | Yes | Yes | No |
-> | solutions | Yes | Yes | No |
-> | views | Yes | Yes | No |
+> | managementconfigurations | **Yes** | **Yes** | No |
+> | solutions | **Yes** | **Yes** | No |
+> | views | **Yes** | **Yes** | No |
## Microsoft.Peering
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | consoles | No | No | No |
-> | dashboards | Yes | Yes | No |
+> | dashboards | **Yes** | **Yes** | No |
> | usersettings | No | No | No |
## Microsoft.PowerBI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | workspacecollections | Yes | Yes | No |
+> | workspacecollections | **Yes** | **Yes** | No |
## Microsoft.PowerBIDedicated
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | capacities | Yes | Yes | No |
+> | capacities | **Yes** | **Yes** | No |
## Microsoft.ProjectBabylon
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | - |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.ProviderHub
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | replicationeligibilityresults | No | No | No |
-> | vaults | Yes | Yes | No.<br/><br/> Moving Recovery Services vaults for Azure Backup across Azure regions isn't supported.<br/><br/> In Recovery Services vaults for Azure Site Recovery, you can [disable and recreate the vault](../../site-recovery/move-vaults-across-regions.md) in the target region. |
+> | vaults | **Yes** | **Yes** | No.<br/><br/> Moving Recovery Services vaults for Azure Backup across Azure regions isn't supported.<br/><br/> In Recovery Services vaults for Azure Site Recovery, you can [disable and recreate the vault](../../site-recovery/move-vaults-across-regions.md) in the target region. |
## Microsoft.RedHatOpenShift
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
## Microsoft.ResourceGraph
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | queries | Yes | Yes | No |
+> | queries | **Yes** | **Yes** | No |
> | resourcechangedetails | No | No | No |
> | resourcechanges | No | No | No |
> | resources | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | deployments | No | No | No |
-> | deploymentscripts | No | No | Yes<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | deploymentscripts | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | deploymentscripts / logs | No | No | No |
> | links | No | No | No |
> | providers | No | No | No |
Jump to a resource provider namespace:
> | resources | No | No | No |
> | subscriptions | No | No | No |
> | tags | No | No | No |
-> | templatespecs | No | No | Yes<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | templatespecs | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | templatespecs / versions | No | No | No |
> | tenants | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | applications | Yes | No | No |
-> | resources | Yes | Yes | No |
+> | applications | **Yes** | No | No |
+> | resources | **Yes** | **Yes** | No |
> | saasresources | No | No | No |
## Microsoft.Search
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | resourcehealthmetadata | No | No | No |
-> | searchservices | Yes | Yes | No |
+> | searchservices | **Yes** | **Yes** | No |
## Microsoft.Security
Jump to a resource provider namespace:
> | assessmentmetadata | No | No | No |
> | assessments | No | No | No |
> | autodismissalertsrules | No | No | No |
-> | automations | Yes | Yes | No |
+> | automations | **Yes** | **Yes** | No |
> | autoprovisioningsettings | No | No | No |
> | complianceresults | No | No | No |
> | compliances | No | No | No |
Jump to a resource provider namespace:
> | discoveredsecuritysolutions | No | No | No |
> | externalsecuritysolutions | No | No | No |
> | informationprotectionpolicies | No | No | No |
-> | iotsecuritysolutions | Yes | Yes | No |
+> | iotsecuritysolutions | **Yes** | **Yes** | No |
> | iotsecuritysolutions / analyticsmodels | No | No | No |
> | iotsecuritysolutions / analyticsmodels / aggregatedalerts | No | No | No |
> | iotsecuritysolutions / analyticsmodels / aggregatedrecommendations | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
> | premiummessagingregions | No | No | No |
> | sku | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | applications | No | No | No |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
> | containergroups | No | No | No |
> | containergroupsets | No | No | No |
> | edgeclusters | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | applications | Yes | Yes | No |
+> | applications | **Yes** | **Yes** | No |
> | containergroups | No | No | No |
-> | gateways | Yes | Yes | No |
-> | networks | Yes | Yes | No |
-> | secrets | Yes | Yes | No |
-> | volumes | Yes | Yes | No |
+> | gateways | **Yes** | **Yes** | No |
+> | networks | **Yes** | **Yes** | No |
+> | secrets | **Yes** | **Yes** | No |
+> | volumes | **Yes** | **Yes** | No |
## Microsoft.Services
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | signalr | Yes | Yes | No |
+> | signalr | **Yes** | **Yes** | No |
## Microsoft.SoftwarePlan
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | instancepools | No | No | No |
-> | locations | Yes | Yes | No |
-> | managedinstances | No | No | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving managed instances across regions. |
-> | managedinstances / databases | No | No | Yes |
-> | servers | Yes | Yes |Yes |
-> | servers / databases | Yes | Yes | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving databases across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL databases. |
-> | servers / databases / backuplongtermretentionpolicies | Yes | Yes | No |
-> | servers / elasticpools | Yes | Yes | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving elastic pools across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL elastic pools. |
-> | servers / jobaccounts | Yes | Yes | No |
-> | servers / jobagents | Yes | Yes | No |
+> | locations | **Yes** | **Yes** | No |
+> | managedinstances | No | No | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving managed instances across regions. |
+> | managedinstances / databases | No | No | **Yes** |
+> | servers | **Yes** | **Yes** | **Yes** |
+> | servers / databases | **Yes** | **Yes** | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving databases across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL databases. |
+> | servers / databases / backuplongtermretentionpolicies | **Yes** | **Yes** | No |
+> | servers / elasticpools | **Yes** | **Yes** | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving elastic pools across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL elastic pools. |
+> | servers / jobaccounts | **Yes** | **Yes** | No |
+> | servers / jobagents | **Yes** | **Yes** | No |
> | virtualclusters | No | No | No |
## Microsoft.SqlVirtualMachine
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | sqlvirtualmachinegroups | Yes | Yes | No |
-> | sqlvirtualmachines | Yes | Yes | No |
+> | sqlvirtualmachinegroups | **Yes** | **Yes** | No |
+> | sqlvirtualmachines | **Yes** | **Yes** | No |
## Microsoft.Storage
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | storageaccounts | Yes | Yes | Yes<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
+> | storageaccounts | **Yes** | **Yes** | **Yes**<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
## Microsoft.StorageCache
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | storagesyncservices | Yes | Yes | No |
+> | storagesyncservices | **Yes** | **Yes** | No |
## Microsoft.StorageSyncDev
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | clusters | No | No | No |
-> | streamingjobs | Yes | Yes | No |
+> | streamingjobs | **Yes** | **Yes** | No |
## Microsoft.StreamAnalyticsExplorer
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | environments | Yes | Yes | No |
-> | environments / eventsources | Yes | Yes | No |
-> | environments / referencedatasets | Yes | Yes | No |
+> | environments | **Yes** | **Yes** | No |
+> | environments / eventsources | **Yes** | **Yes** | No |
+> | environments / referencedatasets | **Yes** | **Yes** | No |
## Microsoft.Token
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | stores | Yes | Yes | No |
+> | stores | **Yes** | **Yes** | No |
## Microsoft.VirtualMachineImages
Jump to a resource provider namespace:
> | - | -- | - | -- |
> | availablestacks | No | No | No |
> | billingmeters | No | No | No |
-> | certificates | No | Yes | No |
+> | certificates | No | **Yes** | No |
> | certificates (managed) | No | No | No |
-> | connectiongateways | Yes | Yes | No |
-> | connections | Yes | Yes | No |
-> | customapis | Yes | Yes | No |
+> | connectiongateways | **Yes** | **Yes** | No |
+> | connections | **Yes** | **Yes** | No |
+> | customapis | **Yes** | **Yes** | No |
> | deletedsites | No | No | No |
> | deploymentlocations | No | No | No |
> | georegions | No | No | No |
> | hostingenvironments | No | No | No |
-> | kubeenvironments | Yes | Yes | No |
+> | kubeenvironments | **Yes** | **Yes** | No |
> | publishingusers | No | No | No |
> | recommendations | No | No | No |
> | resourcehealthmetadata | No | No | No |
> | runtimes | No | No | No |
-> | serverfarms | Yes | Yes | No |
+> | serverfarms | **Yes** | **Yes** | No |
> | serverfarms / eventgridfilters | No | No | No |
-> | sites | Yes | Yes | No |
-> | sites / premieraddons | Yes | Yes | No |
-> | sites / slots | Yes | Yes | No |
+> | sites | **Yes** | **Yes** | No |
+> | sites / premieraddons | **Yes** | **Yes** | No |
+> | sites / slots | **Yes** | **Yes** | No |
> | sourcecontrols | No | No | No |
> | staticsites | No | No | No |
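For types marked **Yes** in the resource group or subscription columns, the move itself can be scripted. A minimal sketch with the Azure CLI (the resource ID, group, and subscription values are placeholders):

```azurecli-interactive
# Move one or more resources to a different resource group.
az resource move \
  --destination-group targetResourceGroup \
  --ids "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceRG/providers/Microsoft.Network/routeTables/myRouteTable"

# For a cross-subscription move, also specify the target subscription ID.
az resource move \
  --destination-group targetResourceGroup \
  --destination-subscription-id 11111111-1111-1111-1111-111111111111 \
  --ids "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sourceRG/providers/Microsoft.Network/routeTables/myRouteTable"
```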
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 10/26/2022 Last updated : 11/01/2022
# Use deployment scripts in ARM templates
Learn how to use deployment scripts in Azure Resource Manager templates (ARM templates). With a new resource type called `Microsoft.Resources/deploymentScripts`, users can execute scripts in template deployments and review execution results. These scripts can be used for performing custom steps such as:
The following JSON is an example. For more information, see the latest [template
"storageAccountName": "myStorageAccount", "storageAccountKey": "myKey" },
- "azPowerShellVersion": "6.4", // or "azCliVersion": "2.28.0",
+ "azPowerShellVersion": "8.3", // or "azCliVersion": "2.40.0",
"arguments": "-name \\\"John Dole\\\"", "environmentVariables": [ {
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState : Succeeded
Identity : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami
ScriptKind : AzurePowerShell
-AzPowerShellVersion : 3.0
-StartTime : 6/18/2020 7:46:45 PM
-EndTime : 6/18/2020 7:49:45 PM
-ExpirationDate : 6/19/2020 7:49:45 PM
+AzPowerShellVersion : 8.3
+StartTime : 6/18/2022 7:46:45 PM
+EndTime : 6/18/2022 7:49:45 PM
+ExpirationDate : 6/19/2022 7:49:45 PM
CleanupPreference : OnSuccess
StorageAccountId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts
ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
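The command that produces the list below isn't shown in this digest; with the `az deployment-scripts` CLI group it would likely take this shape (the resource group name is a placeholder taken from the sample output):

```azurecli-interactive
# List all deployment scripts in a resource group.
az deployment-scripts list --resource-group myds0624rg

# Optionally, fetch the execution log of a single script by name.
az deployment-scripts show-log --resource-group myds0624rg --name runPowerShellInlineWithOutput
```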
The list command output is similar to:
[ { "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "3.0",
+ "azPowerShellVersion": "8.3",
"cleanupPreference": "OnSuccess", "containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20200625T025902Z",
+ "forceUpdateTag": "20220625T025902Z",
"id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput", "identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF",
The list command output is similar to:
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2020-06-25T03:00:16.796923+00:00",
+ "endTime": "2022-06-25T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2020-06-26T03:00:16.796923+00:00",
- "startTime": "2020-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2022-06-26T03:00:16.796923+00:00",
+ "startTime": "2022-06-25T02:59:07.595140+00:00",
"storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts" }, "storageAccountSettings": null, "supportingScriptUris": null, "systemData": {
- "createdAt": "2020-06-25T02:59:04.750195+00:00",
+ "createdAt": "2022-06-25T02:59:04.750195+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" },
The output is similar to:
"systemData": { "createdBy": "someone@contoso.com", "createdByType": "User",
- "createdAt": "2020-06-25T02:59:04.7501955Z",
+ "createdAt": "2022-06-25T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
}, "properties": { "provisioningState": "Succeeded",
- "forceUpdateTag": "20200625T025902Z",
- "azPowerShellVersion": "3.0",
+ "forceUpdateTag": "20220625T025902Z",
+ "azPowerShellVersion": "8.3",
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "arguments": "-name \\\"John Dole\\\"", "retentionInterval": "P1D",
The output is similar to:
"status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts", "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2020-06-25T02:59:07.5951401Z",
- "endTime": "2020-06-25T03:00:16.7969234Z",
- "expirationTime": "2020-06-26T03:00:16.7969234Z"
+ "startTime": "2022-06-25T02:59:07.5951401Z",
+ "endTime": "2022-06-25T03:00:16.7969234Z",
+ "expirationTime": "2022-06-26T03:00:16.7969234Z"
}, "outputs": { "text": "Hello John Dole"
azure-signalr Howto Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md
Title: Access Key Vault in private network through Shared Private Endpoints
+ Title: Access Key Vault in a private network through shared private endpoints
-description: How to access key vault in private network through Shared Private Endpoints
+description: Learn how Azure SignalR Service can use shared private endpoints to avoid exposing your key vault on a public network.
Last updated 09/23/2022
-# Access Key Vault in private network through Shared Private Endpoints
+# Access Key Vault in a private network through shared private endpoints
-Azure SignalR Service can access your Key Vault in private network through Shared Private Endpoints. In this way you don't have to expose your Key Vault on public network.
+Azure SignalR Service can access your Azure Key Vault instance in a private network through shared private endpoints. In this way, you don't have to expose your key vault on a public network.
- :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" :::
-## Shared Private Link Resources Management
+## Management of shared private link resources
-Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and aren't directly visible to you.
+Private endpoints of secured resources that are created through Azure SignalR Service APIs are called *shared private link resources*. This is because you're "sharing" access to a resource, such as a key vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside an Azure SignalR Service execution environment and aren't directly visible to you.
> [!NOTE]
> The examples in this article are based on the following assumptions:
-> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
-> * The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+> * The resource ID of the Azure SignalR Service instance is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
+> * The resource ID of the key vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
-The rest of the examples show how the *contoso-signalr* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than public network.
+The examples show how the *contoso-signalr* service can be configured so that its outbound calls to the key vault go through a private endpoint rather than a public network.
-### Step 1: Create a shared private link resource to the Key Vault
+## Create a shared private link resource to the key vault
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
1. In the Azure portal, go to your Azure SignalR Service resource.
-1. In the menu pane, select **Networking**. Switch to **Private access** tab.
-1. Click **Add shared private endpoint**.
+1. On the menu pane, select **Networking**. Switch to the **Private access** tab.
+1. Select **Add shared private endpoint**.
- :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
+ :::image type="content" alt-text="Screenshot of the button for adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
-1. Click **Add**.
+1. Select the target linked resource either by selecting from your owned resources or by filling in a resource ID.
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
-1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
+1. Confirm that the shared private endpoint resource is now in a **Succeeded** provisioning state. The connection state is **Pending** at the target resource side.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
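The digest omits the command body here. A sketch of the call, assuming the `sharedPrivateLinkResources` ARM path shown later in this article, a hypothetical endpoint name `kv-pe`, and a group ID of `vault` for Key Vault (verify both against the current API reference):

```dotnetcli
az rest --method put \
  --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/kv-pe?api-version=2021-06-01-preview \
  --body '{
    "properties": {
      "groupId": "vault",
      "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv",
      "requestMessage": "please approve"
    }
  }'
```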
The process of creating an outbound private endpoint is a long-running (asynchro
You can poll this URI periodically to obtain the status of the operation.
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperation` header value:
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview
```
-Wait until the status changes to "Succeeded" before proceeding to the next steps.
+Wait until the status changes to **Succeeded** before you proceed to the next steps.
--
-### Step 2a: Approve the private endpoint connection for the Key Vault
+## Approve the private endpoint connection for the key vault
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab of your Key Vault and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, select the **Networking** tab for your key vault and go to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
- :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
+1. Select the private endpoint that Azure SignalR Service created. Then select **Approve**.
-1. Select the private endpoint that Azure SignalR Service created. Click **Approve**.
+ :::image type="content" alt-text="Screenshot of the Azure portal that shows the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
+
+1. Make sure that the private endpoint connection appears, as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
- Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+ :::image type="content" alt-text="Screenshot of the Azure portal that shows an Approved status on the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
- :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
+### [Azure CLI](#tab/azure-cli)
-#### [Azure CLI](#tab/azure-cli)
-
-1. List private endpoint connections.
+1. List private endpoint connections:
```dotnetcli
az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults'
Wait until the status changes to "Succeeded" before proceeding to the next steps
]
```
-1. Approve the private endpoint connection.
+1. Approve the private endpoint connection:
```dotnetcli
az network private-endpoint-connection approve --id <private-endpoint-connection-id>
```
Wait until the status changes to "Succeeded" before proceeding to the next steps
--
-### Step 2b: Query the status of the shared private link resource
+## Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure SignalR Service. You can check the state using either Azure portal or Azure CLI.
+It can take a few minutes for the approval to propagate to Azure SignalR Service. You can check the state by using either the Azure portal or the Azure CLI.
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
:::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
```
-This would return a JSON, where the connection state would show up as "status" under the "properties" section.
+This command returns JSON that shows the connection state as the `status` value in the `properties` section.
```json
{
This would return a JSON, where the connection state would show up as "status" u
```
-If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+If the provisioning state (`properties.provisioningState`) of the resource is `Succeeded` and the connection state (`properties.status`) is `Approved`, the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
-- At this point, the private endpoint between Azure SignalR Service and Azure Key Vault is established.
-Now you can configure features like custom domain as usual. **You don't have to use a special domain for Key Vault**. DNS resolution is automatically handled by Azure SignalR Service.
+Now you can configure features like custom domain as usual. *You don't have to use a special domain for Key Vault*. Azure SignalR Service automatically handles DNS resolution.
## Next steps Learn more: + [What are private endpoints?](../private-link/private-endpoint-overview.md)
-+ [Configure custom domain](howto-custom-domain.md)
++ [Configure a custom domain](howto-custom-domain.md)
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
Below are the indexing type options with details of their insights provided. To
### Audio only
-- **Basic**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, named entities (brands, locations, people), and topics.
+- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles.
- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and topics.
- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and articles.
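These presets are chosen at upload/index time. A hedged sketch of an upload call against the Video Indexer REST API, assuming `BasicAudio` is the preset value for the basic audio option (the location, account ID, video URL, and token are placeholders; verify the preset name in the upload-video API reference):

```bash
# Upload a video and index it with the basic audio-only preset.
curl -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?name=myVideo&videoUrl=<url-encoded-video-url>&indexingPreset=BasicAudio&accessToken=<access-token>"
```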
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 05/16/2022 Last updated : 11/02/2022
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Possible causes** | When you've reached the maximum permissible limit for an operation in a span of 24 hours, this error appears. This error usually appears when there are at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state. In fact, Azure Backup service will retry the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it will trigger configure protection jobs for each of the protected items and sometimes may hit the maximum limit permissible for such operations per day.
**Recommended action** | Azure Backup service will automatically retry this operation after 24 hours.
+### UserErrorInvalidBackint
+
+**Error message** | Found invalid hdbbackint executable.
+ |
+**Possible causes** | 1. The operation to change the Backint path from `/opt/msawb/bin` to `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` failed due to insufficient storage space in the new location. <br><br> 2. The *hdbbackint* utility located at `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` doesn't have executable permissions or correct ownership.
+**Recommended action** | 1. Ensure that there's free space available on `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` or the path where you want to save backups. <br><br> 2. Ensure that the *sapsys* group has appropriate permissions on the `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` file by running `chmod 755` on it, as shown in the sketch below.
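A minimal sketch of the permission and ownership fix, assuming the typical `<sid>adm:sapsys` ownership for HANA files (substitute your system's SID):

```bash
# Make hdbbackint executable and owned by the HANA admin user and the sapsys group.
chmod 755 /usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint
chown <sid>adm:sapsys /usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint

# Verify that the target file system has free space.
df -h /usr/sap/<sid>/SYS/global/hdb/opt
```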
+
## Restore checks
### Single Container Database (SDC) restore
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
Title: Geo-code mapping
description: Learn about geo-codes mapped with the respective regions.
Previously updated : 11/01/2022 Last updated : 03/07/2022
This sample XML gives you insight into the geo-codes mapped to their respective regions. Use these geo-codes to create and add custom DNS zones for a private endpoint for a Recovery Services vault.
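For example, creating one of these zones with the Azure CLI might look like the following, assuming the `privatelink.<geo-code>.backup.windowsazure.com` naming pattern used for Recovery Services private endpoints (substitute a geo-code from the XML below; `eus` is used here as an illustration):

```azurecli-interactive
# Create a private DNS zone for the vault's geo-code.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name "privatelink.eus.backup.windowsazure.com"
```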
-## Fetch mapping details
-
-To fetch the geo-code mapping list, run the following command:
-
-```azurecli-interactive
- az cli list-locations
-```
- ## Mapping details ```xml
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
### Nov 2022
-* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to use latest Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md).
+* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to get started using the latest release of Multivariate Anomaly Detection](how-to/create-resource.md).
### June 2022
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Yes, you can disable the autoscale feature through Azure portal or CLI and retur
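As a sketch, toggling the feature from the CLI might look like this, assuming autoscale is surfaced as the `dynamicThrottlingEnabled` property on the resource (verify the property name for your service before using it):

```azurecli-interactive
# Disable autoscale (dynamic throttling) on a Cognitive Services resource.
az resource update \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.CognitiveServices/accounts/<resource-name>" \
  --set properties.dynamicThrottlingEnabled=false
```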
Autoscale feature is available for the following
+* [Cognitive Services multi-key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Canomaly-detector%2Clanguage-service%2Ccomputer-vision%2Cwindows)
* [Computer Vision](computer-vision/index.yml)
* [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
+* [Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview)
+* [Content Moderator](/azure/cognitive-services/content-moderator/overview)
+* [Custom Vision (Prediction)](/azure/cognitive-services/custom-vision-service/overview)
+* [Immersive Reader](/azure/applied-ai-services/immersive-reader/overview)
+* [LUIS](/azure/cognitive-services/luis/what-is-luis)
+* [Metrics Advisor](/azure/applied-ai-services/metrics-advisor/overview)
+* [Personalizer](/azure/cognitive-services/personalizer/what-is-personalizer)
+* [QnAMaker](/azure/cognitive-services/qnamaker/overview/overview)
* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0)
### Can I test this feature using a free subscription?
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per second per deployment | 5 |
+| Requests per second per deployment | 10 |
| Max fine-tuned model deployments | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
| Total number of training jobs per resource | 100 |
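When a client exceeds the per-deployment request rate, calls return HTTP 429, and a client-side backoff loop is the usual mitigation. A hedged bash sketch (the endpoint, deployment name, key, and API version are placeholders; check the current API version for your resource):

```bash
# Retry a completion request with exponential backoff on HTTP 429.
for delay in 1 2 4 8; do
  code=$(curl -s -o response.json -w "%{http_code}" \
    -X POST "https://<resource>.openai.azure.com/openai/deployments/<deployment>/completions?api-version=<api-version>" \
    -H "api-key: <your-key>" -H "Content-Type: application/json" \
    -d '{"prompt": "Hello", "max_tokens": 5}')
  [ "$code" != "429" ] && break
  sleep "$delay"   # back off before retrying
done
```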
cognitive-services Concept Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-feature-evaluation.md
- Title: Feature evaluation - Personalizer-
-description: When you run an Evaluation in your Personalizer resource from the Azure portal, Personalizer provides information about what features of context and actions are influencing the model.
--
-ms.
--- Previously updated : 07/29/2019--
-# Feature evaluation
-
-When you run an Evaluation in your Personalizer resource from the [Azure portal](https://portal.azure.com), Personalizer provides information about what features of context and actions are influencing the model.
-
-This is useful in order to:
-
-* Imagine additional features you could use, getting inspiration from what features are more important in the model.
-* See what features aren't important, and potentially remove them or further analyze what may be affecting usage.
-* Provide guidance to editorial or curation teams about new content or products worth bringing into the catalog.
-* Troubleshoot common problems and mistakes that happen when sending features to Personalizer.
-
-The more important features have stronger weights in the model. Because these features have stronger weight, they tend to be present when Personalizer obtains higher rewards.
-
-## Getting feature importance evaluation
-
-To see feature importance results, you must run an evaluation. The evaluation creates human-readable feature labels based on the feature names observed during the evaluation period.
-
-The resulting information about feature importance represents the current Personalizer online model. The evaluation analyzes feature importance of the model saved at the end date of the evaluation period, after undergoing all the training done during the evaluation, with the current online learning policy.
-
-The feature importance results don't represent other policies and models tested or created during the evaluation. The evaluation won't include features sent to Personalizer after the end of the evaluation period.
-
-## How to interpret the feature importance evaluation
-
-Personalizer evaluates features by creating "groups" of features that have similar importance. One group can be said to have overall stronger importance than others, but within the group, ordering of features is alphabetically.
-
-Information about each Feature includes:
-
-* Whether the feature comes from Context or Actions
-* Feature Key and Value
-
-For example, an ice cream shop ordering app may see `Context.Weather:Hot` as a very important feature.
-
-Personalizer displays correlations of features that, when taken into account together, produce higher rewards.
-
-For example, you may see `Context.Weather:Hot` *with* `Action.MenuItem:IceCream` as well as `Context.Weather:Cold` *with* `Action.MenuItem:WarmTea:`.
-
-## Actions you can take based on feature evaluation
-
-### Imagine additional features you could use
-
-Get inspiration from the more important features in the model. For example, if you see "Context.MobileBattery:Low" in a video mobile app, you may think that connection type may also make customers choose to see one video clip over another, then add features about connectivity type and bandwidth into your app.
-
-### See what features aren't important
-
-Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature isn't apparent to the user.
-
-For example, a video site could see that "Action.VideoResolution=4k" is a low-importance feature, contradicting user research. The cause could be that the application doesn't even mention or show the video resolution, so users wouldn't change their behavior based on it.
-
-### Provide guidance to editorial or curation teams
-
-Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what is it about products, articles or content that drives behavior. For example, the video application scenario may show that there's an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
-
-### Troubleshoot common problems and mistakes
-
-Common problems and mistakes can be fixed by changing your application code so it won't send inappropriate or incorrectly formatted features to Personalizer.
-
-Common mistakes when sending features include the following:
-
-* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, IP Addresses) shouldn't be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other UserID number. In most scenarios this is also problematic.
-* With large numbers of users, it's unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
-* Sending date-time fields as precise timestamps instead of featurized time values. Having features such as Context.TimeStamp.Day=Monday or "Context.TimeStamp.Hour"="13" is more useful. There will be at most 7 or 24 feature values for each. But `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is so precise that there will be no way to learn from it because it will never happen again.
-
-## Next steps
-
-Understand [scalability and performance](concepts-scalability-performance.md) with Personalizer.
-
cognitive-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-feature-evaluation.md
+
+ Title: Personalizer feature evaluations
+
+description: When you run a Feature Evaluation in your Personalizer resource from the Azure portal, Personalizer creates a report containing Feature Scores, a measure of how influential each feature was to the model during the evaluation period.
++
+ms.
+++ Last updated : 09/22/2022++
+# Evaluate feature importances
+
+You can assess how important each feature was to Personalizer's machine learning model by conducting a _feature evaluation_ on your historical log data. Feature evaluations are useful to:
+
+* Understand which features are most or least important to the model.
+* Brainstorm extra features that may be beneficial to learning, by deriving inspiration from what features are currently important in the model.
+* Identify potentially unimportant or non-useful features that should be considered for further analysis or removal.
+* Troubleshoot common problems and errors that may occur when designing features and sending them to Personalizer. For example, using GUIDs, timestamps, or other features that are generally _sparse_ may be problematic. Learn more about [improving features](concepts-features.md).
+
+## What is a feature evaluation?
+
+Feature evaluations are conducted by training and running a copy of your current model configuration on historically collected log data in a specified time period. Features are ignored one at a time to measure the difference in model performance with and without each feature. Because the feature evaluations are performed on historical data, there's no guarantee that these patterns will be observed in future data. However, these insights may still be relevant to future data if your logged data has captured sufficient variability or non-stationary properties of your data. Your current model's performance isn't affected by running a feature evaluation.
+
+A _feature importance_ score is a measure of the relative impact of the feature on the reward over the evaluation period. Feature importance scores are a number between 0 (least important) and 100 (most important) and are shown in the feature evaluation. Since the evaluation is run over a specific time period, the feature importances can change as additional data is sent to Personalizer and as your users, scenarios, and data change over time.
+
+## Creating a feature evaluation
+
+To obtain feature importance scores, you must create a feature evaluation over a period of logged data to generate a report containing the feature importance scores. This report is viewable in the Azure portal. To create a feature evaluation:
+
+1. Go to the [Azure portal](https://portal.azure.com) website
+1. Select your Personalizer resource
+1. Select the _Monitor_ section from the side navigation pane
+1. Select the _Features_ tab
+1. Select "Create report" and a new screen should appear
+1. Choose a name for your report
+1. Choose _start_ and _end_ times for your evaluation period
+1. Select "Create report"
+
+![Screenshot that shows how to create a Feature Evaluation in your Personalizer resource by clicking on "Monitor" blade, the "Feature" tab, then "Create a report".](media/feature-evaluation/create-report.png)
++
+![Screenshot that shows in the creation window and how to fill in the fields for your report including the name, start date, and end date.](media/feature-evaluation/create-report-window.png)
+
+Next, your report name should appear in the reports table below. Creating a feature evaluation is a long-running process, where the time to completion depends on the volume of data sent to Personalizer during the evaluation period. While the report is being generated, the _Status_ column will indicate "Running" for your evaluation, and will update to "Succeeded" once completed. Check back periodically to see if your evaluation has finished.
+
+You can run multiple feature evaluations over various periods of time that your Personalizer resource has log data. Make sure that your [data retention period](how-to-settings.md#data-retention) is set sufficiently long to enable you to perform evaluations over older data.
+
+## Interpreting feature importance scores
+
+### Features with a high importance score
+
+Features with higher importance scores were more influential to the model during the evaluation period as compared to the other features. Important features can provide inspiration for designing additional features to be included in the model. For example, if you see the context features "IsWeekend" or "IsWeekday" have high importance for grocery shopping, it may be the case that holidays or long-weekends may also be important factors, so you may want to consider adding features that capture this information.
+
+### Features with a low importance score
+
+Features with low importance scores are good candidates for further analysis. Not all low-scoring features are necessarily _bad_ or useless, because low scores can occur for one or more of several reasons. The list below can help you get started with analyzing why your features may have low scores:
+
+* The feature was rarely observed in the data during the evaluation period.
+ <!-- * Check The _Feature occurrences_ in your feature evaluation. If it's low in comparison to other features, this may indicate that feature was not present often enough for the model to determine if it's valuable or not. -->
+ * If the number of occurrences of this feature is low in comparison to other features, this may indicate that the feature wasn't present often enough for the model to determine if it's valuable or not.
+* The feature values didn't have a lot of diversity or variation.
+ <!-- * Check The _Number of unique values_ in your feature evaluation. If it's lower than you would expect, this may indicate that the feature did not vary much during the evaluation period and won't provide significant insight. -->
+ * If the number of unique values for this feature is lower than you would expect, this may indicate that the feature didn't vary much during the evaluation period and won't provide significant insight.
+
+* The feature values were too noisy (random), or too distinct, and provided little value.
+ <!-- * Check the _Number of unique values_ in your feature evaluation. If it's higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period. -->
+ * Check the _Number of unique values_ in your feature evaluation. If the number of unique values for this feature is higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period.
+* There's a data or formatting issue.
+ * Check to make sure the features are formatted and sent to Personalizer in the way you expect.
+* The feature may not be valuable to model learning and performance if the feature score is low and the reasons above do not apply.
+ * Consider removing the feature as it's not helping your model maximize the average reward.
+
+Removing features with low importance scores can help speed up model training by reducing the amount of data needed to learn. It can also potentially improve the performance of the model. However, this isn't guaranteed and further analysis may be needed. [Learn more about designing context and action features.](concepts-features.md)
+
+### Common issues and steps to improve features
+
+- **Sending features with high cardinality.** Features with high cardinality are those that have many distinct values that are not likely to repeat over many events. For example, personal information specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
+
+- **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if not personal information) will likely add more noise to the model and isn't recommended.
+
+- **Features are too sparse. Values are distinct and rarely occur more than a few times**. Precise timestamps down to the second are very sparse. They can be made denser (and therefore more effective) by grouping times into "morning", "midday", or "afternoon", for example.
+
+Location information also typically benefits from creating broader classifications. For example, a latitude-longitude coordinate such as Lat: 47.67402° N, Long: 122.12154° W is too precise and forces the model to learn latitude and longitude as distinct dimensions. When you're trying to personalize based on location information, it helps to group location information into larger sectors. An easy way to do that is to choose an appropriate rounding precision for the lat-long numbers, and combine latitude and longitude into "areas" by making them one string. For example, a good way to represent Lat: 47.67402° N, Long: 122.12154° W in regions approximately a few kilometers wide would be "location":"47.7 , 122.1".
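A minimal sketch of both densification ideas, using hypothetical helper names (`partOfDay`, `locationArea`) that aren't part of the Personalizer SDK:

```javascript
// Group a precise timestamp into a coarse part-of-day bucket.
function partOfDay(date) {
  const hour = date.getHours();
  if (hour < 12) return "morning";
  if (hour < 17) return "midday";
  return "afternoon";
}

// Combine latitude and longitude into one coarse "area" string by rounding
// to one decimal place (regions roughly a few kilometers wide).
function locationArea(lat, long) {
  return `${lat.toFixed(1)} , ${long.toFixed(1)}`;
}

const contextFeatures = [
  { time: partOfDay(new Date()) },
  { location: locationArea(47.67402, 122.12154) } // -> "47.7 , 122.1"
];
```

These denser values repeat across many events, which gives the model enough observations of each value to learn from.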
+
+- **Expand feature sets with extrapolated information**
+You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
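As a sketch of that kind of extrapolation, the snippet below derives "weekend" and holiday attributes from a raw timestamp. The holiday set and function name are illustrative stand-ins for whatever calendar or weather service you integrate:

```javascript
// Hypothetical month-day strings for holidays relevant to your scenario.
const KNOWN_HOLIDAYS = new Set(["10-31", "12-25"]);

function expandTimeFeatures(date) {
  const day = date.getDay(); // 0 = Sunday, 6 = Saturday
  const monthDay =
    `${String(date.getMonth() + 1).padStart(2, "0")}-` +
    `${String(date.getDate()).padStart(2, "0")}`;
  return {
    dayType: day === 0 || day === 6 ? "weekend" : "weekday",
    isHoliday: KNOWN_HOLIDAYS.has(monthDay)
  };
}

// e.g. expandTimeFeatures(new Date(2022, 9, 31)) // months are 0-based
// -> { dayType: "weekday", isHoliday: true }
```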
++
+## Next steps
+
+[Analyze policy performances with an offline evaluation](how-to-offline-evaluation.md) with Personalizer.
+
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
Last updated 02/20/2020
# Analyze your learning loop with an offline evaluation
-Learn how to complete an offline evaluation and understand the results.
+Learn how to create an offline evaluation and interpret the results.
-Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior, learn what features are contributing most to personalization, and discover new machine learning values automatically.
+Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior over a period of logged (historical) data, and assess how well other model configuration settings may perform for your model.
+
+When you create an offline evaluation, the _Optimization discovery_ option will run offline evaluations over a variety of learning policy values to find one that may improve the performance of your model. You can also provide additional policies to assess in the offline evaluation.
Read about [Offline Evaluations](concepts-offline-evaluation.md) to learn more. ## Prerequisites
-* A configured Personalizer loop
-* The Personalizer loop must have a representative amount of data - as a ballpark we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files you can compare and test in the same evaluation.
+* A configured Personalizer resource
+* The Personalizer resource must have a representative amount of logged data - as a ballpark figure, we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files that you wish to test and compare in this evaluation.
## Run an offline evaluation 1. In the [Azure portal](https://azure.microsoft.com/free/cognitive-services), locate your Personalizer resource. 1. In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**. ![In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.](./media/offline-evaluation/create-new-offline-evaluation.png)
-1. Configure the following values:
+1. Fill out the options in the _Create an evaluation_ window:
* An evaluation name. * Start and end date - these are dates that specify the range of data to use in the evaluation. This data must be present in the logs, as specified in the [Data Retention](how-to-settings.md) value.
- * Optimization Discovery set to **yes**.
+ * Set _Optimization discovery_ to **yes**, if you wish Personalizer to attempt to find more optimal learning policies.
+ * Add learning settings - upload a learning policy file if you wish to evaluate a custom or previously exported policy.
> [!div class="mx-imgBorder"] > ![Choose offline evaluation settings](./media/offline-evaluation/create-an-evaluation-form.png)
-1. Start the Evaluation by selecting **Ok**.
+1. Start the Evaluation by selecting **Start evaluation**.
## Review the evaluation results Evaluations can take a long time to run, depending on the amount of data to process, number of learning policies to compare, and whether an optimization was requested.
-Once completed, you can select the evaluation from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
+1. Once completed, you can select the evaluation from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
-1. Review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
+1. Next, review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
> [!div class="mx-imgBorder"] > [![Review evaluation results](./media/offline-evaluation/evaluation-results.png)](./media/offline-evaluation/evaluation-results.png#lightbox)
-1. Select **Apply** to apply the policy that improves the model best for your data.
+You'll see various learning policies on the chart, along with their estimated average reward, confidence intervals, and options to download or apply a specific policy.
+- "Online" - Personalizer's current policy
+- "Baseline1" - Your application's baseline policy
+- "BaselineRand" - A policy of taking actions at random
+- "Inter-len#" or "Hyper#" - Policies created by Optimization discovery.
+
+Select **Apply** to apply the policy that improves the model best for your data.
+ ## Next steps
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
Personalizer processes the following types of data:
To understand more about what information you typically use with Personalizer, see [Features are information about Actions and Context](concepts-features.md).
-[!TIP] You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores. To make informed decisions about what information to use with Personalizer, see the [Personalizer responsible use guidelines](responsible-use-cases.md).
-
+> [!TIP] You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores.
## How does Personalizer process data?
Personalizer processes data as follows:
4. After the rank and reward information for events is correlated, it's removed from transient caches and placed in more permanent storage. It remains in permanent storage until the number of days specified in the Data Retention setting has gone by, at which time the information is deleted. If you choose not to specify a number of days in the Data Retention setting, this data will be saved as long as the Personalizer Azure Resource is not deleted or until you choose to Clear Data via the UI or APIs. You can change the Data Retention setting at any time. 5. Personalizer continuously trains internal Personalizer AI models specific to this Personalizer loop by using the data in the permanent storage and machine learning configuration parameters in [Learning settings](concept-active-learning.md). 6. Personalizer creates [offline evaluations either](concepts-offline-evaluation.md) automatically or on demand.
-Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](concept-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
+Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](how-to-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
### Independence of Personalizer loops
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
A `CallClient` instance is required for this operation. When you have a `CallCli
```javascript const callClient = new CallClient(options);
-const environmentInfo = await callClient.getEnvironmentInfo();
+const environmentInfo = await callClient.feature(Features.DebugInfo).getEnvironmentInfo();
``` The `getEnvironmentInfo` method asynchronously returns an object of type `EnvironmentInfo`.
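For instance, you might gate calling features on the returned values. A minimal sketch follows; property names such as `isSupportedEnvironment` and `environment` are assumed from the documented `EnvironmentInfo` shape, so verify them against your SDK version:

```javascript
// Branch on the EnvironmentInfo returned above.
if (!environmentInfo.isSupportedEnvironment) {
  const { platform, browser, browserVersion } = environmentInfo.environment;
  console.warn(`Unsupported environment: ${platform} / ${browser} ${browserVersion}`);
  // Fall back to a limited experience or ask the user to switch browsers.
}
```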
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
Create a doc with the *product* properties for the `adventureworks` database:
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_doc":::
-Create an doc in the collect by calling [``Collection.UpdateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
+Create a doc in the collection by calling [``Collection.updateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
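As a rough sketch of that upsert pattern (the `collection` and `product` names here are placeholders, not the quickstart's actual sample code):

```javascript
// Upsert: update the matching doc if it exists, otherwise insert it.
const result = await collection.updateOne(
  { name: product.name },   // filter that identifies the doc
  { $set: product },        // fields to set on the matched or new doc
  { upsert: true }          // insert when no doc matches the filter
);
console.log(`matched: ${result.matchedCount}, upserted: ${result.upsertedCount}`);
```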
### Get a doc
Troubleshooting:
## Run the code
-This app creates a API for MongoDB database and collection and creates a doc and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
+This app creates an API for MongoDB database and collection, creates a doc, and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
To run the app, use a terminal to navigate to the application directory and run the application.
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
Previously updated : 10/18/2022 Last updated : 11/01/2022 # Change data capture in Azure Data Factory and Azure Synapse Analytics
When you perform data integration and ETL processes in the cloud, your jobs can
### Native change data capture in mapping data flow
-The changed data including inserted, updated and deleted rows can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you will see the changes happened on the source database to be automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data.
+The changed data, including inserted, updated, and deleted rows, can be automatically detected and extracted by an ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes, since the mapping data flow uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform that reference a database dataset in a mapping data flow, the changes that happened on the source database are automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need for an Alter Row transformation, because ADF is able to automatically detect the row markers.
**Supported connectors** - [SAP CDC](connector-sap-change-data-capture.md)
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
Previously updated : 08/03/2022 Last updated : 11/01/2022 # Alter row transformation in mapping data flow
Alter Row transformations only operate on database, REST, or Azure Cosmos DB sin
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4vJYc]
+> [!NOTE]
+> An Alter Row transformation is not needed for Change Data Capture data flows that use native CDC sources like SQL Server or SAP. In those instances, ADF will automatically detect the row marker so Alter Row policies are unnecessary.
+ ## Specify a default row policy Create an Alter Row transformation and specify a row policy with a condition of `true()`. Each row that doesn't match any of the previously defined expressions will be marked for the specified row policy. By default, each row that doesn't match any conditional expression will be marked for `Insert`.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 10/26/2022 Last updated : 11/01/2022 # Sink transformation in mapping data flow
For example, if I specify a single key column of `column1` in a cache sink calle
**Write to activity output** The cached sink can optionally write your output data to the input of the next pipeline activity. This will allow you to quickly and easily pass data out of your data flow activity without needing to persist the data in a data store.
+## Update method
+
+For database sink types, the Settings tab includes an "Update method" property. The default is insert, but the tab also includes checkbox options for update, upsert, and delete. To utilize those additional options, you'll need to add an [Alter Row transformation](data-flow-alter-row.md) before the sink. The Alter Row transformation allows you to define the conditions for each of the database actions. If your source is a native CDC-enabled source, then you can set the update methods without an Alter Row transformation, because ADF is already aware of the row markers for insert, update, upsert, and delete.
+ ## Field mapping Similar to a select transformation, on the **Mapping** tab of the sink, you can decide which incoming columns will get written. By default, all input columns, including drifted columns, are mapped. This behavior is known as *automapping*.
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
To accomplish the scenario, you need to create a pipeline with the following ite
| **Operations** | **Types and Units** | | | | | Run Pipeline | 2 Activity runs **per execution** (1 for trigger run, 1 for activity runs) = 480 activity runs, rounded up since the calculator only allows increments of 1000. |
-| Data Flow Assumptions: General purpose 16 vCore hours **per execution** = 10 min + 10 min TTL | 20 min \ 60 min |
+| Data Flow Assumptions: General purpose 16 vCore hours **per execution** = 10 min | 10 min \ 60 min |
## Pricing calculator example
-**Total scenario pricing for 30 days: $350.76**
+**Total scenario pricing for 30 days: $175.88**
:::image type="content" source="media/pricing-concepts/scenario-4a-pricing-calculator.png" alt-text="Screenshot of the orchestration section of the pricing calculator configured to transform data in a blob store with mapping data flows." lightbox="media/pricing-concepts/scenario-4a-pricing-calculator.png":::
To accomplish the scenario, you need to create a pipeline with the following ite
- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md) - [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md) - [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)-- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
# Quickstart: Get started with Azure Data Factory
-> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
-> * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
-> * [Current version](quickstart-create-data-factory-rest-api.md)
- [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] Welcome to Azure Data Factory! This getting started article will let you create your first data factory and pipeline within 5 minutes. The ARM template below will create and configure everything you need to try it out. Then you only need to navigate to your demo data factory and make one more click to trigger the pipeline, which moves some sample data from one Azure blob storage to another.
All of the resources referenced above will be created in the new resource group,
1. In the resource group, you will see the new data factory, Azure blob storage account, and managed identity that were created by the deployment. :::image type="content" source="media/quickstart-get-started/resource-group-contents.png" alt-text="A screenshot of the contents of the resource group created for the demo.":::
-1. Select the data factory in the resource group to view it. Then select the **Open Azure Data Factory Studio** button to continue.
- :::image type="content" source="media/quickstart-get-started/open-data-factory-studio.png" alt-text="A screenshot of the Azure portal on the newly created data factory page, highlighting the location of the Open Azure Data Factory Studio button.":::
+1. Select the data factory in the resource group to view it. Then select the **Launch Studio** button to continue.
+ :::image type="content" source="media/quickstart-get-started/launch-adf-studio.png" alt-text="A screenshot of the Azure portal on the newly created data factory page, highlighting the location of the Open Azure Data Factory Studio button.":::
1. Select the **Author** tab <img src="media/quickstart-get-started/author-button.png" alt="Author tab"/> and then the **Pipeline** created by the template. Then check the source data by selecting **Open**.
All of the resources referenced above will be created in the new resource group,
1. In this quickstart, the pipeline has only one activity type: Copy. Click on the pipeline name and you can see the details of the copy activity's run results.
- :::image type="content" source="media/quickstart-get-started/copy-activity-run-results.png" alt-text="Screenshot of the run results of a copy activity in the data factorying monitoring tab.":::
+ :::image type="content" source="media/quickstart-get-started/copy-activity-run-results.png" alt-text="Screenshot of the run results of a copy activity in the data factory monitoring tab.":::
1. Click on details, and the detailed copy process is displayed. From the results, data read and written size are the same, and 1 file was read and written, which also proves all the data has been successfully copied to the destination.
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
Previously updated : 03/01/2022 Last updated : 10/26/2022 # Source control in Azure Data Factory
For more info about connecting Azure Repos to your organization's Active Directo
Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, versioning. A single GitHub account can have multiple repositories, but a GitHub repository can be associated with only one data factory. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources.
-The GitHub integration with Data Factory supports both public GitHub (that is, [https://github.com](https://github.com)) and GitHub Enterprise. You can use both public and private GitHub repositories with Data Factory as long you have read and write permission to the repository in GitHub. ADFΓÇÖs GitHub enterprise server integration only works with [officially supported versions of GitHub enterprise server.](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases)
+The GitHub integration with Data Factory supports public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud, and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. ADF's GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server.](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases)
> [!NOTE] > If you are using Microsoft Edge, GitHub Enterprise version less than 2.1.4 does not work with it. GitHub officially supports >=3.0 and these all should be fine for ADF. As GitHub changes its minimum version, ADF supported versions will also change. ### GitHub settings ++ :::image type="content" source="media/author-visually/github-integration-image2.png" alt-text="GitHub repository settings"::: The configuration pane shows the following GitHub repository settings:
The configuration pane shows the following GitHub repository settings:
| **Setting** | **Description** | **Value** | |: |: |: | | **Repository Type** | The type of the Azure Repos code repository. | GitHub |
-| **Use GitHub Enterprise** | Checkbox to select GitHub Enterprise | unselected (default) |
-| **GitHub Enterprise URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise** is selected | `<your GitHub enterprise url>` |
-| **GitHub account** | Your GitHub account name. This name can be found from https:\//github.com/{account name}/{repository name}. Navigating to this page prompts you to enter GitHub OAuth credentials to your GitHub account. | `<your GitHub account name>` |
-| **Repository Name** | Your GitHub code repository name. GitHub accounts contain Git repositories to manage your source code. You can create a new repository or use an existing repository that's already in your account. | `<your repository name>` |
-| **Collaboration branch** | Your GitHub collaboration branch that is used for publishing. By default, it's main. Change this setting in case you want to publish resources from another branch. | `<your collaboration branch>` |
+| **Use GitHub Enterprise Server** | Checkbox to select GitHub Enterprise Server.| unselected (default) |
+| **GitHub Enterprise Server URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise Server** is selected | `<your GitHub Enterprise Server URL>` |
+| **GitHub repository owner** | GitHub organization or account that owns the repository. This name can be found from https:\//github.com/{owner}/{repository name}. Navigating to this page prompts you to enter GitHub OAuth credentials to your GitHub organization or account. If you select **Use GitHub Enterprise Server**, a dialog box appears where you enter your access token. | `<your GitHub repository owner name>` |
+| **Repository Name** | Your GitHub code repository name. GitHub accounts contain Git repositories to manage your source code. You can create a new repository or use an existing repository that's already in your account. Specify your GitHub code repository name when you select **Select repository**. | `<your repository name>` |
+|**Git repository link**| Your GitHub code repository link. Specify your GitHub code repository link when you select **Use repository link**. |`<your repository link>`|
+| **Collaboration branch** | Your GitHub collaboration branch that is used for publishing. By default, it's main. Change this setting in case you want to publish resources from another branch. You can also create a new collaboration branch here. | `<your collaboration branch>` |
+| **Publish branch** |The branch in your repository where publishing related ARM templates are stored and updated.| `<your publish branch name>`|
| **Root folder** | Your root folder in your GitHub collaboration branch. |`<your root folder name>` |
-| **Import existing Data Factory resources to repository** | Specifies whether to import existing data factory resources from the UX authoring canvas into a GitHub repository. Select the box to import your data factory resources into the associated Git repository in JSON format. This action exports each resource individually (that is, the linked services and datasets are exported into separate JSONs). When this box isn't selected, the existing resources aren't imported. | Selected (default) |
-| **Branch to import resource into** | Specifies into which branch the data factory resources (pipelines, datasets, linked services etc.) are imported. You can import resources into one of the following branches: a. Collaboration b. Create new c. Use Existing | |
+| **Import existing resources to repository** | Specifies whether to import existing data factory resources from the UX authoring canvas into a GitHub repository. Select the box to import your data factory resources into the associated Git repository in JSON format. This action exports each resource individually (that is, the linked services and datasets are exported into separate JSONs). When this box isn't selected, the existing resources aren't imported. | Selected (default) |
+| **Import resource into this branch** | Specifies into which branch the data factory resources (pipelines, datasets, linked services etc.) are imported. | |
### GitHub organizations Connecting to a GitHub organization requires the organization to grant permission to Azure Data Factory. A user with ADMIN permissions on the organization must perform the below steps to allow data factory to connect.
-#### Connecting to GitHub for the first time in Azure Data Factory
+#### Connecting to public GitHub or GitHub Enterprise Cloud for the first time in Azure Data Factory
-If you're connecting to GitHub from Azure Data Factory for the first time, follow these steps to connect to a GitHub organization.
+If you're connecting to public GitHub or GitHub Enterprise Cloud from Azure Data Factory for the first time, follow these steps to connect to a GitHub organization.
1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to log in to GitHub will appear. 1. Log in using your user credentials.
If you're connecting to GitHub from Azure Data Factory for the first time, follo
Once you follow these steps, your factory will be able to connect to both public and private repositories within your organization. If you are unable to connect, try clearing the browser cache and retrying.
-#### Already connected to GitHub using a personal account
+#### Already connected to public GitHub or GitHub Enterprise Cloud using a personal account
-If you have already connected to GitHub and only granted permission to access a personal account, follow the below steps to grant permissions to an organization.
+If you have already connected to public GitHub or GitHub Enterprise Cloud and only granted permission to access a personal account, follow the steps below to grant permissions to an organization.
1. Go to GitHub and open **Settings**.
If you have already connected to GitHub and only granted permission to access a
Once you follow these steps, your factory will be able to connect to both public and private repositories within your organization.
+#### Connecting to GitHub Enterprise Server
+
+If you connect to GitHub Enterprise Server, you need to use a personal access token for authentication. Learn how to create a personal access token in [Creating a personal access token](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
+
+> [!Note]
+> GitHub Enterprise Server is in your self-hosted private environment, so you need to have full control over the firewall, network policies, and VPN when you use this authentication. For more information, see [About GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.6/admin/overview/about-github-enterprise-server#about-github-enterprise-server).
+++ ### Known GitHub limitations - You can store script and data files in a GitHub repository. However, you have to upload the files manually to Azure Storage. A Data Factory pipeline does not automatically upload script or data files stored in a GitHub repository to Azure Storage.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
If you enabled the integration, but still don't see the extension running on you
### What are the licensing requirements for Microsoft Defender for Endpoint?
-Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**. Alternatively, you can [purchase licenses for Defender for Endpoint](https://www.microsoft.com/en-us/security/business/get-started/contact-us) for servers separately.
+Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**.
### Do I need to buy a separate anti-malware solution to protect my machines? No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To have full visibility to Microsoft Defender for Servers security content, ensu
- Additional extensions should be enabled on the Arc-connected machines. - Microsoft Defender for Endpoint - VA solution (TVM/ Qualys)
- - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA). Ensure the selected workspace has a security solution installed.
- The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent.
+ The LA agent and AMA are currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the LA agent and AMA.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
Title: 'Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud' description: Learn how to connect your GitHub repositories to Defender for Cloud. Previously updated : 09/20/2022 Last updated : 11/02/2022
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
:::image type="content" source="media/quickstart-onboard-github/select-github.png" alt-text="Screenshot that shows you where to select, to select GitHub." lightbox="media/quickstart-onboard-github/select-github.png":::
-1. Enter a name, select your subscription, resource group, and region.
+1. Enter a name (limit of 20 characters), select your subscription, resource group, and region.
> [!NOTE] > The subscription will be the location where Defender for DevOps will create and store the GitHub connection.
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
1. Select **Next: Authorize connection**.
-1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect
+1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect.
> [!NOTE] > The authorization will auto-login using the session from your browser tab. After you select Authorize, if you do not see the GitHub organizations you expect to see, check whether you are logged in to MDC in one browser tab and logged in to GitHub in another browser tab.
+ > After authorization, if you wait too long to install the DevOps application, the session will time out and you will receive an error message.
1. Select **Install**.
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 08/24/2022 Last updated : 11/02/2022
shown in your environment depend on the resources you're protecting and your cus
configuration. Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
-the Microsoft cloud security benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
+The Microsoft cloud security benchmark is the Microsoft-authored set of guidelines for security
and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
You can now monitor your cloud security compliance posture per cloud in a single
Microsoft cloud security benchmark is automatically assigned to your Azure subscriptions and AWS accounts when you onboard Defender for Cloud.
-Learn more about the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+Learn more about the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).
### Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Title: Understanding security policies, initiatives, and recommendations in Micr
description: Learn about security policies, initiatives, and recommendations in Microsoft Defender for Cloud. Previously updated : 06/06/2022 Last updated : 11/02/2022 # What are security policies, initiatives, and recommendations?
A security initiative defines the desired configuration of your workloads and he
Like security policies, Defender for Cloud initiatives are also created in Azure Policy. You can use [Azure Policy](../governance/policy/overview.md) to manage your policies, build initiatives, and assign initiatives to multiple subscriptions or for entire management groups.
-The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Microsoft cloud security benchmark. This benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Microsoft cloud security benchmark. This benchmark is the Microsoft-authored set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
Defender for Cloud offers the following options for working with security initiatives and policies:
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies
description: Learn how to work with security policies in Microsoft Defender for Cloud. Previously updated : 01/25/2022 Last updated : 10/31/2022 # Manage security policies
To view your security policies in Defender for Cloud:
1. To view and edit the default initiative, select it and proceed as described below.
- :::image type="content" source="./media/security-center-policies/policy-screen.png" alt-text="Effective policy screen.":::
- This **Security policy** screen reflects the action taken by the policies assigned on the subscription or management group you selected. * Use the links at the top to open a policy **assignment** that applies on the subscription or management group. These links let you access the assignment and edit or disable the policy. For example, if you see that a particular policy assignment is effectively denying endpoint protection, use the link to edit or disable the policy.
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Title: Alert types and descriptions
-description: Review Defender for IoT Alert descriptions.
Previously updated : 12/13/2021-
+ Title: OT monitoring alert types and descriptions
+description: Learn more about the alerts that are triggered for traffic on OT networks.
Last updated : 11/01/2022+
-# Alert types and descriptions
+# OT monitoring alert types and descriptions
This article provides information on the alert types, descriptions, and severities that may be generated from the Defender for IoT engines. This information can be used to help map alerts into playbooks; define Forwarding rules, Exclusion rules, and custom alerts; and define the appropriate rules within a SIEM. Alerts appear in the Alerts window, which allows you to manage the alert event. ### Alert news
-New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the Support page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
+New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the **Support** page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
-You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
+You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-m
| Alert type | Description | |-|-|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
-| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
-| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
+| **Policy violation alerts** | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
+| **Protocol violation alerts** | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
+| **Operational alerts** | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
+| **Malware alerts** | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
+| **Anomaly alerts** | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
+## Supported alert categories
+
+Each alert has one of the following categories:
+
+ :::column span="":::
+ - Abnormal Communication Behavior
+ - Abnormal HTTP Communication Behavior
+ - Authentication
+ - Backup
+ - Bandwidth Anomalies
+ - Buffer overflow
+ - Command Failures
+ - Configuration changes
+ - Custom Alerts
+ - Discovery
+ - Firmware change
+ - Illegal commands
+ :::column-end:::
+ :::column span="":::
+ - Internet Access
+ - Operation Failures
+ - Operational issues
+ - Programming
+ - Remote access
+ - Restart/Stop Commands
+ - Scan
+ - Sensor traffic
+ - Suspicion of malicious activity
+ - Suspicion of Malware
+ - Unauthorized Communication Behavior
+ - Unresponsive
+ :::column-end:::
## Policy engine alerts
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | Category | |--|--|--|--|
-| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 login failures in 5 minutes | Major | Authentication |
-| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
-| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
-| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
-| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
-| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Modbus Address Range Violation | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
-| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Port Discovery | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
-| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Asset Detected | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
-| New LLDP Device Configuration | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
-| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
-| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
-| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
-| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
-| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Database Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
-| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
-| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
-| Unauthorized Name Query | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
-| Unauthorized PLC Configuration Read | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
-| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
-| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
-| Unauthorized PLC Programming | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
-| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
-| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| Unauthorized Operation was detected by a User Defined Rule | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | Custom Alerts |
-| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
-| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication |
+| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
+| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
+| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
+| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| * **Illegal HTTP Communication** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
+| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
+| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
+| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
+| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
+| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
+| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
+| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
+| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
+| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * **Unauthorized HTTP SOAP Action** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * **Unauthorized HTTP User Agent** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
+| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
+| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
+| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
+| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
+| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
+| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
+| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
+| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
+| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
+| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
+| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | Custom Alerts |
+| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
+| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
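
Most of the policy-engine alerts above follow a single detection pattern: a combination of traffic parameters (source, destination, protocol, command) is checked against the baseline the sensor learned during learning mode, and any combination outside that baseline is flagged as unauthorized. The sketch below is a minimal illustration of that matching logic, not Defender for IoT's implementation; the `TrafficEvent` fields, addresses, and alert text are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficEvent:
    """One observed traffic parameter combination (hypothetical fields)."""
    source: str
    destination: str
    protocol: str
    command: str

# Parameter combinations authorized as learned traffic during learning mode.
learned_baseline = {
    TrafficEvent("10.0.0.5", "10.0.0.9", "MODBUS", "Read Holding Registers"),
    TrafficEvent("10.0.0.5", "10.0.0.9", "MODBUS", "Write Single Coil"),
}

def check_event(event: TrafficEvent) -> str | None:
    """Return alert text if the combination was never learned, else None."""
    if event in learned_baseline:
        return None  # Matches learned traffic; no alert.
    return (f"Unauthorized: {event.command} over {event.protocol} "
            f"from {event.source} to {event.destination}")

# A write from a workstation never seen during learning raises an alert.
print(check_event(TrafficEvent("10.0.0.77", "10.0.0.9", "MODBUS", "Write Single Coil")))
```

In this model, authorizing an alerted combination simply means adding its tuple to `learned_baseline`, after which the same traffic no longer triggers.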
## Anomaly engine alerts
Anomaly engine alerts describe detected anomalies in network activity.
| Title | Description | Severity | Category |
|--|--|--|--|
-| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
-| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
-| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
-| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
-| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 login attempts in 1 minute | Critical | Authentication |
-| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
-| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/Stop Commands |
-| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 login attempts in 10 minutes | Critical | Authentication |
-| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-| * Illegal HTTP Header Content | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
-| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
-| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
-| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
-| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length of 32768 | Critical | Abnormal Communication Behavior |
-| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
+| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
+| * **Abnormal HTTP Header Length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| * **Abnormal Number of Parameters in HTTP Header** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
+| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
+| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
+| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
+| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
+| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication |
+| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
+| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/Stop Commands |
+| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication |
+| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
+| * **Illegal HTTP Header Content** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
+| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
+| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
+| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
+| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
+| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
+| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length of 32768 | Critical | Abnormal Communication Behavior |
+| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
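
Many of the anomaly-engine alerts above are defined by an explicit count threshold over a time window, such as 20 sign-in attempts in 1 minute for **Excessive Login Attempts** or 10 restarts in 1 hour for **Excessive Restart Rate of an Outstation**. A sliding-window counter is one common way to evaluate such a rule; the sketch below shows the general technique only, under the assumption that events arrive with timestamps, and is not the engine's actual implementation.

```python
from collections import deque

class SlidingWindowThreshold:
    """Flag when at least `limit` events occur within `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.timestamps: deque[float] = deque()

    def record(self, timestamp: float) -> bool:
        """Record one event; return True if the threshold is now reached."""
        self.timestamps.append(timestamp)
        # Drop events that have aged out of the window.
        while self.timestamps and timestamp - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.limit

# An "Excessive Login Attempts"-style rule: 20 sign-in attempts in 1 minute.
rule = SlidingWindowThreshold(limit=20, window_seconds=60)
alerts = [rule.record(t) for t in range(30)]  # one attempt per second
print(alerts.index(True))  # 19: the threshold is reached on the 20th attempt
```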
## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or out-of-state conditions, compared to common industry standards.
| Title | Description | Severity | Category |
|--|--|--|--|
-| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets was sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
-| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
-| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Illegal BACNet message | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal Connection Attempt on Port 0 | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
-| Illegal DNP3 Operation | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal Protocol Version | The source device initiated an invalid request. | Major | Illegal Commands |
-| Incorrect Parameter Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor | Illegal Commands |
-| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor | Illegal Commands |
-| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning | Illegal Commands |
-| Modbus Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
-| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Data Address Parameter | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Data Value Parameter | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Function Code | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Information Object Address | The destination device received an invalid request. | Major | Illegal Commands |
-| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major | Illegal Commands |
-| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning | Illegal Commands |
-| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
+| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets was sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
+| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
+| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
+| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands |
+| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands |
+| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands |
+| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
+| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands |
+| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
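
Several of the protocol-violation alerts above amount to validating a single field against the protocol specification, for example **Illegal MODBUS Operation (Function Code Zero)** and **Usage of a Reserved Function Code**. The sketch below illustrates that kind of check for a Modbus request function code; the reserved-code subset and the mapping from codes to alert titles are assumptions made for this example, not the product's parser.

```python
# Illustrative subset of Modbus function codes reserved for legacy products;
# see the Modbus application protocol specification for the authoritative list.
RESERVED_FUNCTION_CODES = {9, 10, 13, 14, 41, 42, 90, 91, 125, 126, 127}

def check_modbus_function_code(code: int) -> str | None:
    """Classify a Modbus request function code; return an alert title or None."""
    if not 0 <= code <= 255:
        raise ValueError("Modbus function codes are a single byte")
    if code == 0:
        # Function code zero is never a valid Modbus request.
        return "Illegal MODBUS Operation (Function Code Zero)"
    if code > 127:
        # The high bit marks exception responses, which a client shouldn't send.
        return "Illegal MODBUS Operation (Exception Raised by Master)"
    if code in RESERVED_FUNCTION_CODES:
        return "Usage of a Reserved Function Code"
    return None  # Public or user-defined function code; no violation detected.

print(check_modbus_function_code(0))    # Illegal MODBUS Operation (Function Code Zero)
print(check_modbus_function_code(3))    # None; Read Holding Registers is legal
print(check_modbus_function_code(126))  # Usage of a Reserved Function Code
```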
## Malware engine alerts
Malware engine alerts describe detected malicious network activity.
| Title | Description | Severity | Category |
|--|--|--|--|
-| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
-| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial of Service (DoS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 SYN attempts in 1 minute | Critical | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Flame) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (WannaCry) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| Suspicion of NotPetya Malware - Illegal SMB Parameters Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of NotPetya Malware - Illegal SMB Transaction Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity |
-| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and reactions when one is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity |
+| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup
## Operational engine alerts
Operational engine alerts describe detected operational incidents, or malfunctioning entities.
| Title | Description | Severity | Category |
|--|--|--|--|
-| An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
-| Change of Device Configuration | A configuration change was detected on a source device. | Minor | Configuration Changes |
-| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
-| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
-| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures |
-| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
-| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
-| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
-| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
-|* HTTP Client Error | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
-| Illegal IP Address | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
-| Master-Slave Authentication Error | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
-| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
-| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues |
-| OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
-| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
-| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major | Configuration Changes |
-| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
-| Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
-| * RPC Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
-| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
-| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
-| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
+| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
+| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes |
+| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
+| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
+| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major | Command Failures |
+| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
+| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
+| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
+| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
+| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
| * **HTTP Client Error** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
+| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
+| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
+| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
+| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major | Operational Issues |
+| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
+| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
+| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes |
+| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
+| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
+| * **RPC Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
+| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
+| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
+| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
\* The alert is disabled by default, but can be enabled again. To enable the alert, navigate to the Support page, find the alert and select **Enable**. You need administrative level permissions to access the Support page.
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Syslog CEF output format | Description |
|--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
-| Hostname | Sensor IP address |
+| Date and time | Date and time that the syslog server machine received the information. (Added by Syslog server) |
+| Hostname | Sensor hostname (Added by Syslog server) |
| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert <br />start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device. <br />dst_mac= MAC address of the destination device. (Optional) <br />cat= The alert group associated with the alert. |

| Syslog LEEF output format | Description |
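To make the CEF format described above concrete, here's a purely illustrative example of a single forwarded alert line. Every value below (sensor name, version, addresses, timestamp, UUID) is an invented placeholder rather than real sensor output, and the exact field order should be confirmed against your own forwarded messages:

```text
CEF:0|Microsoft Defender for IoT/CyberX|sensor-01|22.2.5|Microsoft Defender for IoT Alert|Suspicion of Malicious Activity|8|msg=Suspicious network activity was detected. protocol=TCP severity=Major type=Anomaly UUID=00000000-aaaa-bbbb-cccc-000000000000 start=2022-10-31 14:02:00 src_ip=10.1.1.10 dst_ip=10.1.1.20 cat=Suspicion of Malicious Activity
```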
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
The following alert details are displayed by default in the grid:
| **Source device** | The IP address, MAC, or device name. |
| **Tactics** | The MITRE ATT&CK stage. |
-**To view additional information:**
+### View more alert details
1. Select **Edit columns** from the Alerts page.
1. In the Edit Columns dialog box, select **Add Column** and choose an item to add. The following items are available:
For example, filter alerts by **Category**:
:::image type="content" source="media/how-to-view-manage-cloud-alerts/category-filter.png" alt-text="Screenshot of the Category filter option in Alerts page in the Azure portal.":::
-Supported categories include:
-
- :::column span="":::
- - Abnormal Communication Behavior
- - Abnormal HTTP Communication Behavior
- - Authentication
- - Backup
- - Bandwidth Anomalies
- - Buffer overflow
- - Command Failures
- - Configuration changes
- - Custom Alerts
- - Discovery
- - Firmware change
- - Illegal commands
- :::column-end:::
- :::column span="":::
- - Internet Access
- - Operation Failures
- - Operational issues
- - Programming
- - Remote access
- - Restart/Stop Commands
- - Scan
- - Sensor traffic
- - Suspicion of malicious activity
- - Suspicion of Malware
- - Unauthorized Communication Behavior
- - Unresponsive
- :::column-end:::
### Group alerts displayed

Use the **Group by** menu at the top right to collapse the grid into subsections according to specific parameters.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
+## October 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
+
+### Enhanced OT monitoring alert reference
+
+Our alert reference article now includes the following details for each alert:
+
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity, or to configure SIEM rules that generate incidents based on specific activities.
+
+- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
## September 2022

|Service area |Updates |
Unicode characters are now supported when working with sensor certificate passphrases.
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
Service methods return strongly typed objects wherever possible. However, becaus
API metrics such as requests, latency, and failure rate can be viewed in the [Azure portal](https://portal.azure.com/).
-From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's menu to bring up the **Metrics** page.
--
-From here, you can view the metrics for your instance and create custom views.
+For information about viewing and managing metrics with Azure Monitor, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). For a full list of API metrics available for Azure Digital Twins, see [Azure Digital Twins API request metrics](how-to-monitor.md#api-request-metrics).
## Next steps
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoin
When an endpoint can't deliver an event within a certain time period or after trying to deliver the event a certain number of times, it can send the undelivered event to a storage account. This process is known as *dead-lettering*.
-You can set up the necessary storage resources using the [Azure portal](https://portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt). However, to create an endpoint with dead-lettering enabled, you'll need use the [Azure Digital Twins CLI](/cli/azure/dt) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
+You can set up the necessary storage resources using the [Azure portal](https://portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt). However, to create an endpoint with dead-lettering enabled, you'll need to use the [Azure Digital Twins CLI](/cli/azure/dt) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
To learn more about dead-lettering, see [Endpoints and event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
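As a rough sketch of how an endpoint with dead-lettering can be created from the CLI (all resource names below are placeholders, and parameter spellings should be confirmed with `az dt endpoint create --help` for your CLI version):

```azurecli
az dt endpoint create eventgrid \
    --dt-name <your-Azure-Digital-Twins-instance> \
    --endpoint-name <endpoint-name> \
    --eventgrid-resource-group <Event-Grid-resource-group> \
    --eventgrid-topic <Event-Grid-topic-name> \
    --deadletter-sas-uri "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>"
```

The dead-letter SAS URI points at the storage container that receives the undeliverable events.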
When you implement or update a filter, the change may take a few minutes to be r
Routing metrics such as count, latency, and failure rate can be viewed in the [Azure portal](https://portal.azure.com/).
-From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's navigation menu on the left to bring up the **Metrics** page.
--
-From here, you can view the metrics for your instance and create custom views.
-
-For more on viewing Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+For information about viewing and managing metrics with Azure Monitor, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). For a full list of routing metrics available for Azure Digital Twins, see [Azure Digital Twins routing metrics](how-to-monitor.md#routing-metrics).
## Next steps

Read about the different types of event messages you can receive:
-* [Event notifications](concepts-event-notifications.md)
+* [Event notifications](concepts-event-notifications.md)
digital-twins How To Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-alerts.md
-
-# Mandatory fields.
-Title: Monitor with alerts
-description: Learn how to troubleshoot Azure Digital Twins by setting up alerts based on service metrics.
-Previously updated: 03/10/2022
-# Monitor Azure Digital Twins with alerts
-
-In this article, you'll learn how to set up *alerts* in the [Azure portal](https://portal.azure.com). These alerts will notify you when configurable conditions you've defined based on the metrics of your Azure Digital Twins instance are met, allowing you to take important actions.
-
-Azure Digital Twins collects [metrics](how-to-monitor-metrics.md) for your service instance that give information about the state of your resources. You can use these metrics to assess the overall health of Azure Digital Twins service and the resources connected to it.
-
-Alerts proactively notify you when important conditions are found in your metrics data. They allow you to identify and address issues before the users of your system notice them. You can read more about alerts in [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-
-## Turn on alerts
-
-Here's how to enable alerts for your Azure Digital Twins instance:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
-
-2. Select **Alerts** from the menu, then **+ New alert rule**.
-
- :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot of the Azure portal showing the button to create a new alert rule in the Alerts section of an Azure Digital Twin instance." lightbox="media/how-to-monitor-alerts/alerts-pre.png":::
-
-3. On the **Create alert rule** page that follows, you can follow the prompts to define conditions, actions to be triggered, and alert details.
- * **Scope** details should fill automatically with the details for your instance
- * You'll define **Condition** and **Action group** details to customize alert triggers and responses. For more information about this process, see the [Select conditions](#select-conditions) section later in this article.
- * In the **Alert rule details** section, enter a name and optional description for your rule.
- - You can select the **Enable alert rule upon creation** checkbox if you want the alert to become active as soon as it's created.
- - You can select the **Automatically resolve alerts** checkbox if you want to resolve the alert when the condition isn't met anymore.
- - This section is also where you select a **subscription**, **resource group**, and **Severity** level.
-
-4. Select the **Create alert rule** button to create your alert rule.
-
- :::image type="content" source="media/how-to-monitor-alerts/create-alert-rule.png" alt-text="Screenshot of the Azure portal showing the Create Alert Rule page with sections for scope, condition, action group, and alert rule details." lightbox="media/how-to-monitor-alerts/create-alert-rule.png":::
-
-For a guided walkthrough of filling out these fields, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). Below are some examples of what the steps will look like for Azure Digital Twins.
-
-## Select conditions
-
-Here's an excerpt from the **Select condition** process illustrating what types of alert signals are available for Azure Digital Twins. On this page you can filter the type of signal, and select the signal that you want from a list.
--
-After selecting a signal, you'll be asked to configure the logic of the alert. You can filter on a dimension, set a threshold value for your alert, and set the frequency of checks for the condition. Here's an example of setting up an alert for when the average Routing Failure Rate metric goes above 5%.
--
-## Verify success
-
-After setting up alerts, they'll show up back on the **Alerts** page for your instance.
-
-
-## Next steps
-
-* For more information about alerts with Azure Monitor, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
-* To see how to enable diagnostics logging for your Azure Digital Twins metrics, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-metrics.md
-
-# Mandatory fields.
-Title: Monitor with metrics
-description: Learn how to view Azure Digital Twins metrics in Azure Monitor to troubleshoot and oversee your instance.
-Previously updated: 03/10/2022
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# Monitor Azure Digital Twins with metrics
-
-The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of the Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what is going on with your Azure Digital Twins and help analyze the root causes of issues without needing to contact Azure support.
-
-Metrics are enabled by default. You can view Azure Digital Twins metrics from the [Azure portal](https://portal.azure.com).
-
-## View the metrics
-
-1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
-
-2. Find your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com) (you can open the page for it by typing its name into the portal search bar).
-
- From the instance's menu, select **Metrics**.
-
- :::image type="content" source="media/how-to-monitor-metrics/azure-digital-twins-metrics.png" alt-text="Screenshot showing the metrics page for Azure Digital Twins in the Azure portal.":::
-
- This page displays the metrics for your Azure Digital Twins instance. You can also create custom views of your metrics by selecting the ones you want to see from the list.
-
-3. You can choose to send your metrics data to an Event Hubs endpoint or an Azure Storage account by selecting **Diagnostics settings** from the menu, then **Add diagnostic setting**.
-
- :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page and button to add in the Azure portal.":::
-
- For more information about this process, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
-
-4. You can choose to set up alerts for your metrics data by selecting **Alerts** from the menu, then **+ New alert rule**.
- :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot showing the Alerts page and button to add in the Azure portal.":::
-
- For more information about this process, see [Monitor with alerts](how-to-monitor-alerts.md).
-
-## List of metrics
-
-Azure Digital Twins provides several metrics to give you an overview of the health of your instance and its associated resources. You can also combine information from multiple metrics to paint a bigger picture of the state of your instance.
-
-The following tables describe the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
-
-#### Metrics for tracking service limits
-
-You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
-
-To set up tracking, use the [alerts](how-to-monitor-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
-| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
-
-#### API request metrics
-
-Metrics having to do with API requests:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
-| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
-| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
-
-#### Billing metrics
-
-Metrics having to do with billing:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
-| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
-| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
-
-For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
-
-#### Ingress metrics
-
-Metrics having to do with data ingress:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
-| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
-| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
-
-#### Routing metrics
-
-Metrics having to do with routing:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-
-## Dimensions
-
-Dimensions help identify more details about the metrics. Some of the routing metrics provide information per endpoint. The table below lists possible values for these dimensions.
-
-| Dimension | Values |
-| | |
-| Authentication | OAuth |
-| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
-| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
-| Protocol | HTTPS |
-| Result | Success, <br>Failure |
-| Status Code | 200, 404, 500, and so on. |
-| Status Code Class | 2xx, 4xx, 5xx, and so on. |
-| Status Text | Internal Server Error, Not Found, and so on. |
-
-## Next steps
-
-To learn more about managing recorded metrics for Azure Digital Twins, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor.md
+
+# Mandatory fields.
+ Title: Monitor your instance
+
+description: Monitor Azure Digital Twins instances with metrics, alerts, and diagnostics.
+Last updated: 10/31/2022
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Monitor Azure Digital Twins with metrics, alerts, and diagnostics
+
+Azure Digital Twins integrates with [Azure Monitor](../azure-monitor/overview.md) to provide metrics and diagnostic information that you can use to monitor your Azure Digital Twins resources. **Metrics** are enabled by default, and give you information about the state of Azure Digital Twins resources in your Azure subscription. **Alerts** can proactively notify you when certain conditions are found in your metrics data. You can also collect **diagnostic logs** for your service instance to monitor its performance, access, and other data.
+
+These monitoring features can help you assess the overall health of the Azure Digital Twins service and the resources connected to it. You can use them to understand what is happening in your Azure Digital Twins instance, and analyze root causes of issues without needing to contact Azure support.
+
+They can be accessed from the [Azure portal](https://portal.azure.com), grouped under the **Monitoring** heading for the Azure Digital Twins resource.
++
+## Metrics and alerts
+
+For general information about viewing Azure resource **metrics**, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) in the Azure Monitor documentation. For general information about configuring **alerts** for Azure metrics, see [Create a new alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric).
+
+The rest of this section describes the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
+
+### Metrics for tracking service limits
+
+You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
+
+To set up tracking, use the [alerts](../azure-monitor/alerts/alerts-overview.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
+| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
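As one hedged example of such a threshold alert, the following Azure CLI sketch raises an alert when the twin count exceeds 100,000. The threshold is an arbitrary illustrative number (pick a percentage of the published limit that fits your solution), and all names are placeholders:

```azurecli
az monitor metrics alert create \
    --name twin-count-near-limit \
    --resource-group <resource-group> \
    --scopes <Azure-Digital-Twins-instance-resource-ID> \
    --condition "total TwinCount > 100000" \
    --window-size 1h \
    --evaluation-frequency 15m \
    --description "Twin count is approaching the per-instance service limit"
```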
+
+### API request metrics
+
+Metrics having to do with API requests:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
+| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
+| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
+
+### Billing metrics
+
+Metrics having to do with billing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
+
+For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
+
+### Ingress metrics
+
+Metrics having to do with data ingress:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
+| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
+| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+
+### Routing metrics
+
+Metrics having to do with routing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+
+### Metric dimensions
+
+Dimensions help identify more details about the metrics. Some of the routing metrics provide information per endpoint. The table below lists possible values for these dimensions.
+
+| Dimension | Values |
+| | |
+| Authentication | OAuth |
+| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
+| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
+| Protocol | HTTPS |
+| Result | Success, <br>Failure |
+| Status Code | 200, 404, 500, and so on. |
+| Status Code Class | 2xx, 4xx, 5xx, and so on. |
+| Status Text | Internal Server Error, Not Found, and so on. |
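To see how these dimensions play out in practice, here's a hedged Azure CLI sketch that retrieves the MessagesRouted metric split by endpoint type. The resource ID is a placeholder, and it's assumed the dimension is addressable as `EndpointType` in filter expressions:

```azurecli
az monitor metrics list \
    --resource <Azure-Digital-Twins-instance-resource-ID> \
    --metric MessagesRouted \
    --aggregation Total \
    --interval PT1H \
    --filter "EndpointType eq '*'"
```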
+
+## Diagnostics logs
+
+For general information about Azure **diagnostics settings**, including how to enable them, see [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md). For information about querying diagnostic logs using **Log Analytics**, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
+
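For instance, a minimal sketch of a diagnostic setting that sends two of the log categories described below to a Log Analytics workspace might look like this (the setting name and resource IDs are placeholders):

```azurecli
az monitor diagnostic-settings create \
    --name adt-diagnostics \
    --resource <Azure-Digital-Twins-instance-resource-ID> \
    --workspace <Log-Analytics-workspace-resource-ID> \
    --logs '[{"category": "ADTEventRoutesOperation", "enabled": true},
             {"category": "ADTQueryOperation", "enabled": true}]'
```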
+The rest of this section describes the diagnostic log categories that Azure Digital Twins can collect, and their schemas.
+
+### Log categories
+
+Here are more details about the categories of logs that Azure Digital Twins collects.
+
+| Log category | Description |
+| | |
+| ADTModelsOperation | Log all API calls related to Models |
+| ADTQueryOperation | Log all API calls related to Queries |
+| ADTEventRoutesOperation | Log all API calls related to Event Routes and egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs, and Service Bus |
+| ADTDigitalTwinsOperation | Log all API calls related to individual twins |
+
+Each log category consists of write, read, delete, and action operations. These operations map to REST API calls as follows:
+
+| Event type | REST API operations |
+| | |
+| Write | PUT and PATCH |
+| Read | GET |
+| Delete | DELETE |
+| Action | POST |
+
+Here's a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
+
+>[!NOTE]
+> Each log category contains several operations/REST API calls. In the table below, each log category maps to all operations/REST API calls underneath it until the next log category is listed.
+
+| Log category | Operation | REST API calls and other events |
+| | | |
+| ADTModelsOperation | Microsoft.DigitalTwins/models/write | Digital Twin Models Update API |
+| | Microsoft.DigitalTwins/models/read | Digital Twin Models Get By ID and List APIs |
+| | Microsoft.DigitalTwins/models/delete | Digital Twin Models Delete API |
+| | Microsoft.DigitalTwins/models/action | Digital Twin Models Add API |
+| ADTQueryOperation | Microsoft.DigitalTwins/query/action | Query Twins API |
+| ADTEventRoutesOperation | Microsoft.DigitalTwins/eventroutes/write | Event Routes Add API |
+| | Microsoft.DigitalTwins/eventroutes/read | Event Routes Get By ID and List APIs |
+| | Microsoft.DigitalTwins/eventroutes/delete | Event Routes Delete API |
+| | Microsoft.DigitalTwins/eventroutes/action | Failure while attempting to publish events to an endpoint service (not an API call) |
+| ADTDigitalTwinsOperation | Microsoft.DigitalTwins/digitaltwins/write | Digital Twins Add, Add Relationship, Update, Update Component |
+| | Microsoft.DigitalTwins/digitaltwins/read | Digital Twins Get By ID, Get Component, Get Relationship by ID, List Incoming Relationships, List Relationships |
+| | Microsoft.DigitalTwins/digitaltwins/delete | Digital Twins Delete, Delete Relationship |
+| | Microsoft.DigitalTwins/digitaltwins/action | Digital Twins Send Component Telemetry, Send Telemetry |
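+
+Once these categories are flowing to a Log Analytics workspace, you can query them by category and operation. A hedged sketch, assuming the logs land in the standard `AzureDiagnostics` table (the `az monitor log-analytics` commands may require the log-analytics CLI extension):
+
+```azurecli
+# Count egress failures from the last day, grouped by operation name.
+az monitor log-analytics query \
+    --workspace <workspace-guid> \
+    --analytics-query "AzureDiagnostics | where ResourceProvider == 'MICROSOFT.DIGITALTWINS' and Category == 'ADTEventRoutesOperation' | summarize count() by OperationName" \
+    --timespan P1D
+```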
+
+### Log schemas
+
+Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
+
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
+
+#### API log schemas
+
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, except for the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+The schema contains information pertinent to API calls to an Azure Digital Twins instance.
+
+Here are the field and property descriptions for API logs.
+
+| Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `OperationVersion` | String | The API Version used during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultType` | String | Outcome of the event |
+| `ResultSignature` | String | HTTP status code for the event |
+| `ResultDescription` | String | Additional details about the event |
+| `DurationMs` | String | How long it took to perform the event in milliseconds |
+| `CallerIpAddress` | String | A masked source IP address for the event |
+| `CorrelationId` | Guid | Unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `RequestUri` | Uri | The endpoint used during the event |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+
+Below are example JSON bodies for these types of logs.
+
+##### ADTDigitalTwinsOperation
+
+```json
+{
+ "time": "2020-03-14T21:11:14.9918922Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/digitaltwins/write",
+ "operationVersion": "2020-10-31",
+ "category": "DigitalTwinOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": 8,
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+##### ADTModelsOperation
+
+```json
+{
+ "time": "2020-10-29T21:12:24.2337302Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/models/write",
+ "operationVersion": "2020-10-31",
+ "category": "ModelsOperation",
+ "resultType": "Success",
+ "resultSignature": "201",
+ "resultDescription": "",
+ "durationMs": "80",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+##### ADTQueryOperation
+
+```json
+{
+ "time": "2020-12-04T21:11:44.1690031Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/query/action",
+ "operationVersion": "2020-10-31",
+ "category": "QueryOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": "314",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+##### ADTEventRoutesOperation
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that isn't of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+```json
+ {
+ "time": "2020-10-30T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/write",
+ "operationVersion": "2020-10-31",
+ "category": "EventRoutesOperation",
+ "resultType": "Success",
+ "resultSignature": "204",
+ "resultDescription": "",
+ "durationMs": 42,
+ "callerIpAddress": "212.100.32.*",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+ }
+```
+
+#### Egress log schemas
+
+The following is the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These logs contain details related to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+
+|Field name | Data type | Description |
+|--||-|
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultDescription` | String | Additional details about the event |
+| `CorrelationId` | Guid | Customer-provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
+
+Here's an example JSON body for an `ADTEventRoutesOperation` log that is of the `Microsoft.DigitalTwins/eventroutes/action` type.
+
+```json
+{
+ "time": "2020-11-05T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/action",
+ "operationVersion": "",
+ "category": "EventRoutesOperation",
+ "resultType": "",
+ "resultSignature": "",
+ "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "durationMs": -1,
+ "callerIpAddress": "",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "",
+ "properties": {
+ "endpointName": "myEventHub"
+ },
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+## Next steps
+
+Read more about Azure Monitor and its capabilities in the [Azure Monitor documentation](../azure-monitor/overview.md).
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
When a limit is reached, any requests beyond it are throttled by the service, wh
To manage the throttling, here are some recommendations for working with limits. * Use retry logic. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) implement retry logic for failed requests, so if you're working with a provided SDK, this functionality is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying. A minimal sketch of this pattern appears after this list.
-* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](how-to-monitor-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Monitor with alerts](how-to-monitor-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
+* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](../azure-monitor/essentials/data-platform-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Create a new alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
* Deploy at scale across multiple instances. Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances. >[!NOTE]
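
A minimal sketch of the retry recommendation, assuming a plain REST call with a bearer token (the instance URL, token variable, and retry count are all hypothetical):

```bash
# Retry an Azure Digital Twins API call, honoring the Retry-After header on HTTP 429.
for attempt in 1 2 3 4 5; do
  status=$(curl -s -D headers.txt -o response.json -w "%{http_code}" -X POST \
    -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
    -d '{"query": "SELECT * FROM digitaltwins"}' \
    "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31")
  [ "$status" != "429" ] && break
  # Wait the number of seconds the service suggests, defaulting to 5 if the header is absent.
  delay=$(grep -i '^retry-after:' headers.txt | tr -d '\r' | awk '{print $2}')
  sleep "${delay:-5}"
done
```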
digital-twins Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md
If you're experiencing delays or other performance issues when working with Azur
## Isolate the source of the delay
-Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Monitor with metrics](how-to-monitor-metrics.md).
+Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For more about Azure Monitor metrics for Azure Digital Twins, see [Azure Digital Twins metrics and alerts](how-to-monitor.md#metrics-and-alerts).
## Check regions
If your solution uses Azure Digital Twins in combination with other Azure servic
## Check logs
-Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Monitor with diagnostic logs](how-to-monitor-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
+Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
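+
+As one hedged way to measure those latencies, assuming the logs are routed to a Log Analytics workspace and land in the standard `AzureDiagnostics` table, you can aggregate the `DurationMs` field per operation (the `az monitor log-analytics` commands may require the log-analytics CLI extension):
+
+```azurecli
+# Average API duration per operation over the last day.
+az monitor log-analytics query \
+    --workspace <workspace-guid> \
+    --analytics-query "AzureDiagnostics | where ResourceProvider == 'MICROSOFT.DIGITALTWINS' | summarize avg(DurationMs) by OperationName" \
+    --timespan P1D
+```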
## Check API frequency
If you're still experiencing performance issues after troubleshooting with the s
Follow these steps:
-1. Gather [metrics](how-to-monitor-metrics.md) and [logs](how-to-monitor-diagnostics.md) for your instance.
+1. Gather [metrics](how-to-monitor.md#metrics-and-alerts) and [logs](how-to-monitor.md#diagnostics-logs) for your instance.
2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Next steps
-Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting:
-* [Monitor with metrics](how-to-monitor-metrics.md)
-* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
-* [Monitor with alerts](how-to-monitor-alerts.md)
-* [Monitor resource health](how-to-monitor-resource-health.md)
+Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting in [Monitor your Azure Digital Twins instance](how-to-monitor.md).
digital-twins Troubleshoot Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-resource-health.md
+
+# Mandatory fields.
+ Title: Troubleshoot resource health
+
+description: Learn how to use Azure Resource Health to check the health of your Azure Digital Twins instance.
++ Last updated : 11/1/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Troubleshoot Azure Digital Twins resource health
+
+[Azure Service Health](../service-health/index.yml) is a suite of experiences that can help you diagnose and get support for service problems that affect your Azure resources. It contains resource health, service health, and status information, and reports on both current and past health information.
+
+## Use Azure Resource Health
+
+[Azure Resource Health](../service-health/resource-health-overview.md) can help you monitor whether your Azure Digital Twins instance is up and running. You can also use it to learn whether a regional outage is impacting the health of your instance.
+
+To check the health of your instance, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. From your instance's menu, select **Resource health** under **Support + troubleshooting**. This will take you to the page for viewing resource health history.
+
+ :::image type="content" source="media/troubleshoot-resource-health/resource-health.png" alt-text="Screenshot showing the 'Resource health' page. There is a 'Health history' section showing a daily report from the last nine days.":::
+
+In the image above, this instance is showing as **Available**, and has been for the past nine days. To learn more about the Available status and the other status types that may appear, see [Resource Health overview](../service-health/resource-health-overview.md).
+
+You can also learn more about the different checks that go into resource health for different types of Azure resources in [Resource types and health checks in Azure resource health](../service-health/resource-health-checks-resource-types.md).
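+
+The same availability status can be read programmatically through the Resource Health REST API. Here's a hedged sketch using `az rest`; the API version shown is an assumption, so check the Resource Health reference for the current one:
+
+```azurecli
+# Query the current availability status of an Azure Digital Twins instance.
+az rest --method get \
+    --url "https://management.azure.com<digital-twins-resource-id>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01"
+```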
+
+## Use Azure Service Health
+
+[Azure Service Health](../service-health/service-health-overview.md) can help you check the health of the entire Azure Digital Twins service in a certain region, and be aware of events like ongoing service issues and upcoming planned maintenance.
+
+To check service health, sign in to the [Azure portal](https://portal.azure.com) and navigate to the **Service Health** service. You can find it by typing "service health" into the portal search bar.
+
+You can then filter service issues by subscription, region, and service.
+
+For more information on using Azure Service Health, see [Service Health overview](../service-health/service-health-overview.md).
+
+## Use Azure status
+
+The [Azure status](../service-health/azure-status-overview.md) page provides a global view of the health of Azure services and regions. While Azure Service Health and Azure Resource Health are personalized to your specific resource, Azure status has a larger scope and can be useful to understand incidents with wide-ranging impact.
+
+To check Azure status, navigate to the [Azure status](https://azure.status.microsoft/status/) page. The page displays a table of Azure services along with health indicators per region. You can view Azure Digital Twins by searching for its table entry on the page.
+
+For more information on using the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md).
+
+## Next steps
+
+Read about other ways to monitor your Azure Digital Twins instance in [Monitor your Azure Digital Twins instance](how-to-monitor.md).
education-hub Set Up Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/set-up-access.md
Last updated 06/30/2020
-# Setting up access for Azure Dev tools
+# Setting up access for Azure Dev Tools for Teaching
There are two ways to access your subscription so that you can deploy software to your students and outfit your labs: 1. By downloading software and keys from the Visual Studio Subscription Portal.
event-grid Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/compare-messaging-services.md
Title: Compare Azure messaging services description: Describes the three Azure messaging services - Azure Event Grid, Event Hubs, and Service Bus. Recommends which service to use for different scenarios. Previously updated : 04/26/2022 Last updated : 11/01/2022 # Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
An event is a lightweight notification of a condition or a state change. The pub
Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn't have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself. Discrete events are ideal for serverless solutions that need to scale.
-A series of events report a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
+A series of events reports a condition and is analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
### Message A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
It has the following characteristics:
- Serverless - At least once delivery of an event
-Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever that is deployed, on-prem or on the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
+Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever that is deployed, on-premises or on the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
## Azure Event Hubs Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to various stream-processing infrastructures and analytics services. It's available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing, and repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.
It has the following characteristics:
For more information, see [Event Hubs overview](../event-hubs/event-hubs-about.md). ## Azure Service Bus
-Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that cannot be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.
+Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that can't be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.
Service Bus is a brokered messaging system. It stores messages in a "broker" (for example, a queue) until the consuming party is ready to receive the messages. It has the following characteristics:
For more information, see [Service Bus overview](../service-bus-messaging/servic
## Use the services together In some cases, you use the services side by side to fulfill distinct roles. For example, an e-commerce site can use Service Bus to process the order, Event Hubs to capture site telemetry, and Event Grid to respond to events like an item was shipped.
-In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to a data warehouse, see [Stream big data into a data warehouse](event-grid-event-hubs-integration.md). The following image shows the workflow for streaming the data.
+In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to Azure Synapse Analytics, see [Stream big data into Azure Synapse Analytics](event-grid-event-hubs-integration.md). The following image shows the workflow for streaming the data.
## Next steps See the following articles:
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
Last updated 10/25/2022
# Subscribe to events published by SAP
-This article describes steps to subscribe to events published by an SAP S/4HANA system.
+This article describes steps to subscribe to events published by an SAP S/4HANA system.
+
+> [!NOTE]
+> See the blog post [New SAP events on Azure Event Grid](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/new-sap-events-on-azure-event-grid/ba-p/3663372) for the announcement of this feature.
## High-level steps
SAP's BETA program started in October 2022 and will last a couple of months. The
If you have any questions, you can contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>. ## Next steps
-See [subscribe to partner events](subscribe-to-partner-events.md).
+See [subscribe to partner events](subscribe-to-partner-events.md).
event-hubs Event Hubs Capture Enable Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-enable-through-portal.md
Title: Event Hubs - Capture streaming events using Azure portal description: This article describes how to enable capturing of events streaming through Azure Event Hubs by using the Azure portal. Previously updated : 10/27/2021 Last updated : 10/27/2022
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Data streaming with Azure Event Hubs using the Kafka protocol' description: 'Quickstart: This article provides information on how to stream into Azure Event Hubs using the Kafka protocol and APIs.' Previously updated : 09/26/2022 Last updated : 11/02/2022
When you create an Event Hubs namespace, the Kafka endpoint for the namespace is
## Send and receive messages with Kafka in Event Hubs +
+### [Connection string](#tab/connection-string)
+
+1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
+
+1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
+
+ ```properties
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the producer code and stream events into Event Hubs:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestProducer"
+ ```
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
+
+1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
+
+ ```properties
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the consumer code and process events from event hub using your Kafka clients:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestConsumer"
+ ```
+
+If your Event Hubs Kafka cluster has events, you'll now start receiving them from the consumer.
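+
+If you want to smoke-test the endpoint without building the Java samples, the stock Apache Kafka console tools accept the same configuration files. This is a sketch under the assumption that the Kafka CLI tools are installed locally and that your event hub (Kafka topic) is named `test`:
+
+```shell
+kafka-console-producer.sh --bootstrap-server NAMESPACENAME.servicebus.windows.net:9093 --topic test --producer.config src/main/resources/producer.config
+kafka-console-consumer.sh --bootstrap-server NAMESPACENAME.servicebus.windows.net:9093 --topic test --consumer.config src/main/resources/consumer.config --from-beginning
+```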
+ ### [Passwordless (Recommended)](#tab/passwordless) 1. Enable a system-assigned managed identity for the virtual machine. For more information about configuring managed identity on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize r
:::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/producer-consumer-output.png" alt-text="Screenshot showing the Producer and Consumer app windows showing the events.":::
-### [Connection string](#tab/connection-string)
-
-1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
-
-1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
-1. Run the producer code and stream events into Event Hubs:
-
- ```shell
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestProducer"
- ```
-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
-
-1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
-1. Run the consumer code and process events from event hub using your Kafka clients:
-
- ```java
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestConsumer"
- ```
-
-If your Event Hubs Kafka cluster has events, you will now start receiving them from the consumer.
- ## Next steps
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 10/26/2022 Last updated : 11/02/2022
either **Compliant**, **Non-compliant**, or **Exempt**. If either **name** or **
property in the definition, then all included and non-exempt resources are considered applicable and are evaluated.
-The compliance percentage is determined by dividing **Compliant** and **Exempt** resources by _total
+The compliance percentage is determined by dividing **Compliant**, **Exempt**, and **Unknown** resources by _total
resources_. _Total resources_ is defined as the sum of the **Compliant**, **Non-compliant**, **Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct
-resources that are **Compliant** or **Exempt** divided by the sum of all distinct resources. In the
+resources that are **Compliant**, **Exempt**, or **Unknown**, divided by the sum of all distinct resources. In the
image below, there are 20 distinct resources that are applicable and only one is **Non-compliant**. The overall resource compliance is 95% (19 out of 20).
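
Restated as a formula (an editorial paraphrase of the definitions above, not notation used by Azure Policy):

```latex
\text{overall compliance \%} =
  \frac{N_{\text{Compliant}} + N_{\text{Exempt}} + N_{\text{Unknown}}}{N_{\text{total distinct resources}}} \times 100,
\qquad \text{for example } \frac{19}{20} \times 100 = 95\%.
```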
hpc-cache Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/customer-keys.md
Title: Use customer-manged keys to encrypt data in Azure HPC Cache
+ Title: Use customer-managed keys to encrypt data in Azure HPC Cache
description: How to use Azure Key Vault with Azure HPC Cache to control encryption key access instead of using the default Microsoft-managed encryption keys-+ Previously updated : 07/15/2021- Last updated : 11/02/2022+ # Use customer-managed encryption keys for Azure HPC Cache
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
Title: Quickstart - Provision an X.509 certificate simulated device to Microsoft
description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service Previously updated : 05/31/2022 Last updated : 11/01/2022
In this section, you'll prepare a development environment that's used to build t
3. Copy the tag name for the latest release of the Azure IoT C SDK.
-4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
In this section, you'll prepare a development environment that's used to build t
``` >[!TIP]
- >If `cmake` does not find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
7. When the build succeeds, the last few output lines look similar to the following output:
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-csharp"
-1. In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-csharp.git
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
+```
::: zone-end ::: zone pivot="programming-language-nodejs"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-node.git
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-node.git
+```
::: zone-end ::: zone pivot="programming-language-python"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+```
::: zone-end
Keep the Git Bash prompt open. You'll need it later in this quickstart.
::: zone pivot="programming-language-csharp"
-The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
1. To generate the PKCS12 formatted file expected by the sample, enter the following command:
You won't need the Git Bash prompt for the rest of this quickstart. However, you
::: zone pivot="programming-language-nodejs"
-6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
+6. The sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl rsa -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl rsa -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+
+
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
+8. Copy the device certificate and unencrypted private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
```bash cp device-cert.pem ./azure-iot-sdk-node/provisioning/device/samples
- cp device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
+ cp unencrypted-device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
``` You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
In this section, you'll use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-1. Copy the **ID Scope** and **Global device endpoint** values.
+1. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK. ```cmd
- cd ./azure-iot-sdk-node/provisioning/device/samples
+ cd .\azure-iot-sdk-node\provisioning\device\samples
npm install ```
-1. Edit the **register_x509.js** file and make the following changes:
+ The sample uses five environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this quickstart, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `PROVISIONING_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `CERTIFICATE_FILE` | The path to your device certificate file. |
+ | `KEY_FILE` | The path to your device private key file. |
+
+1. Add environment variables for the global device endpoint and ID scope. Replace `<id-scope>` with the value you copied in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. If you followed the steps in this quickstart to generate a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=my-x509-device
+ ```
- * Replace `provisioning host` with the **Global Device Endpoint** noted in **Step 1** above.
- * Replace `id scope` with the **ID Scope** noted in **Step 1** above.
- * Replace `registration id` with the **Registration ID** noted in the previous section.
- * Replace `cert filename` and `key filename` with the files you generated previously, *device-cert.pem* and *device-key.pem*.
+1. Set the environment variables for the device certificate and (unencrypted) device private key files.
-1. Save the file.
+ ```cmd
+ set CERTIFICATE_FILE=.\device-cert.pem
+ set KEY_FILE=.\unencrypted-device-key.pem
+ ```
1. Run the sample and verify that the device was provisioned successfully.
In this section, you'll use your Windows command prompt.
node register_x509.js ```
->[!TIP]
->The [Azure IoT Hub Node.js Device SDK](https://github.com/Azure/azure-iot-sdk-node) provides an easy way to simulate a device. For more information, see [Device concepts](./concepts-service.md).
+ You should see output similar to the following:
+
+ ```output
+ registration succeeded
+ assigned hub=contoso-hub-2.azure-devices.net
+ deviceId=my-x509-device
+ Client connected
+ send status: MessageEnqueued
+ ```
::: zone-end
In this section, you'll use your Windows command prompt.
| Variable name | Description | | :- | :- |
- | `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
- | `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
- | `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
- | `X509_CERT_FILE` | Your device certificate filename |
- | `X509_KEY_FILE` | The private key filename for your device certificate |
+ | `PROVISIONING_HOST` | The global endpoint used for connecting to your DPS instance. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `DPS_X509_REGISTRATION_ID` | The registration ID for your device. It must also match the subject name on the device certificate. |
+ | `X509_CERT_FILE` | The path to your device certificate file. |
+ | `X509_KEY_FILE` | The path to your device certificate private key file. |
| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). | 1. Add the environment variables for the global device endpoint and ID Scope.
In this section, you'll use your Windows command prompt.
set PROVISIONING_IDSCOPE=<ID scope for your DPS resource> ```
-1. The registration ID for the IoT device must match subject name on its device certificate. If you generated a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
-
-1. Set the environment variable for the registration ID as follows:
+1. Set the environment variable for the registration ID. The registration ID for the IoT device must match the subject name on its device certificate. If you followed the steps in this quickstart to generate a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
```cmd set DPS_X509_REGISTRATION_ID=my-x509-device
In this section, you'll use your Windows command prompt.
set PASS_PHRASE=1234 ```
-1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
-
-1. Save your changes.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes.
1. Run the sample. The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
If you plan to continue working on and exploring the device client sample, don't
## Next steps
-To learn how to enroll your X.509 device programmatically:
+To learn how to provision multiple X.509 devices using an enrollment group:
> [!div class="nextstepaction"]
-> [Azure quickstart - Enroll X.509 devices to Azure IoT Hub Device Provisioning Service](quick-enroll-device-x509.md)
+> [Tutorial: Provision multiple X.509 devices using an enrollment group](tutorial-custom-hsm-enrollment-group-x509.md)
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a custom Hardware Security Module (HSM) and a DPS enrollment group
-description: This tutorial shows how to use X.509 certificates to provision multiple devices through an enrollment group in your Azure IoT Hub Device Provisioning Service (DPS) instance. The devices are simulated using the C device SDK and a custom Hardware Security Module (HSM).
+ Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a DPS enrollment group
+description: This tutorial shows how to use X.509 certificates to provision multiple devices through an enrollment group in your Azure IoT Hub Device Provisioning Service (DPS) instance.
Previously updated : 07/12/2022 Last updated : 11/01/2022
-#Customer intent: As a new IoT developer, I want provision groups of devices using X.509 certificate chains and the C SDK.
+zone_pivot_groups: iot-dps-set1
+#Customer intent: As a new IoT developer, I want provision groups of devices using X.509 certificate chains and the Azure IoT device SDK.
# Tutorial: Provision multiple X.509 devices using enrollment groups
The Azure IoT Hub Device Provisioning Service supports three forms of authentica
* [Trusted platform module (TPM)](concepts-tpm-attestation.md) * [Symmetric keys](./concepts-symmetric-key-attestation.md) This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it is strongly recommended to help protect sensitive information like your device certificate's private key.+
+A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it is strongly recommended to help protect sensitive information like your device certificate's private key.
In this tutorial, you'll complete the following objectives:
In this tutorial, you'll complete the following objectives:
> * Create a certificate chain of trust to organize a set of devices using X.509 certificates. > * Complete proof of possession with a signing certificate used with the certificate chain. > * Create a new group enrollment that uses the certificate chain.
-> * Set up the development environment for provisioning a device using code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
-> * Provision a device using the certificate chain with the custom Hardware Security Module (HSM) sample in the SDK.
+> * Set up the development environment.
+> * Provision a device using the certificate chain using sample code in the Azure IoT device SDK.
## Prerequisites
In this tutorial, you'll complete the following objectives:
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md). + The following prerequisites are for a Windows development environment used to simulate the devices. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation. * Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015, Visual Studio 2017, and Visual Studio 19 are also supported.
The following prerequisites are for a Windows development environment used to si
>[!IMPORTANT] >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this tutorial. Make sure to use the latest version of CMake. ++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/doc/devbox_setup.md) in the SDK documentation.
+
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+
+ ```cmd
+ dotnet --info
+ ```
+++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/blob/main/doc/node-devbox-setup.md) in the SDK documentation.
+
+* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
+++
+The following prerequisites are for a Windows development environment.
+
+* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
+++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-java) in the SDK documentation.
+
+* Install the [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure) or later on your machine.
+
+* Download and install [Maven](https://maven.apache.org/install.html).
++ * Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository. * Make sure [OpenSSL](https://www.openssl.org/) is installed on your machine. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`.
The following prerequisites are for a Windows development environment used to si
The steps in this tutorial assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
-## Prepare the Azure IoT C SDK development environment
+## Prepare your development environment
+ In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used to provision devices with DPS.
In this section, you'll prepare a development environment used to build the [Azu
-- Build files have been written to: C:/azure-iot-sdk-c/cmake ``` ++
+In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
+```
+++
+In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-node.git
+```
+++
+In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+```
+++
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
+ ```
+
+2. Go to the root `azure-iot-sdk-java` directory and build the project to download all needed packages.
+
+ ```cmd
+ cd azure-iot-sdk-java
+ mvn install -DskipTests=true
+ ```
++ ## Create an X.509 certificate chain In this section, you'll generate an X.509 certificate chain of three certificates for testing each device in this tutorial. The certificates have the following hierarchy.
In this section, you create two device certificates and their full chain certifi
cat ./certs/device-01.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/device-01-full-chain.cert.pem ```
-1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this text as the certificate chain with in the custom HSM device code later in this tutorial for `device-01`.
+1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this certificate chain later in this tutorial to provision `device-01`.
The full chain text has the following format:
In this section, you create two device certificates and their full chain certifi
-----END CERTIFICATE----- ```
-1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your GitBash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
+1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your Git Bash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
# [Windows](#tab/windows)
Your signing certificates are now trusted on the Windows-based device and the fu
1. In the **Add Enrollment Group** panel, enter the following information, then select **Save**.
- | Field | Value |
- | :-- | :-- |
- | **Group name** | For this tutorial, enter **custom-hsm-x509-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
- | **Attestation Type** | Select **Certificate** |
- | **IoT Edge device** | Select **False** |
- | **Certificate Type** | Select **Intermediate Certificate** |
- | **Primary certificate .pem or .cer file** | Navigate to the intermediate certificate that you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified. DPS trusts that root once it's verified. DPS can verify that the intermediate provided with this enrollment group is truly signed by the trusted root. DPS will trust each intermediate truly signed by that root certificate, and therefore be able to verify and trust leaf certificates signed by the intermediate. |
+ * **Group name**: For this tutorial, enter **custom-hsm-x509-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+ * **Attestation Type**: Select **Certificate**.
+ * **IoT Edge device**: Select **False**.
+ * **Certificate Type**: Select **Intermediate Certificate**.
+ * **Primary certificate .pem or .cer file**: Navigate to the intermediate certificate that you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*) and upload it.
+
+ Your intermediate certificate is signed by the root certificate that you already uploaded and verified. Because DPS trusts that root certificate, it trusts any intermediate certificate that's either directly signed by the root or whose signing chain contains the root. DPS permits any device whose certificate signing chain contains both this intermediate certificate and the verified root certificate to register through this enrollment group.
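+
+ To confirm these chain relationships yourself, you can print the subject and issuer of each certificate in a full chain file. The following minimal sketch assumes the third-party Python [cryptography](https://cryptography.io) package (version 22.0 or later), which isn't otherwise used in this tutorial. The device certificate should be issued by the intermediate, and the intermediate by the root.
+
+ ```python
+ # Sketch: print "subject <- issuer" for each certificate in a full chain PEM.
+ from cryptography import x509
+
+ with open("./certs/device-01-full-chain.cert.pem", "rb") as f:
+     chain = x509.load_pem_x509_certificates(f.read())
+
+ for cert in chain:
+     print(cert.subject.rfc4514_string(), "<-", cert.issuer.rfc4514_string())
+ ```
+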
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png" alt-text="Screenshot that shows adding an enrollment group in the portal.":::
-## Configure the provisioning device code
+## Prepare and run the device provisioning code
In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it will be assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section. +
+In this section, you'll use your Git Bash prompt and the Visual Studio IDE.
+
+### Configure the provisioning device code
+
+In this section, you update the sample code with your Device Provisioning Service instance information.
+ 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service instance and note the **ID Scope** value. :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot that shows the ID scope on the DPS overview pane.":::
In this section, you update the sample code with your Device Provisioning Servic
7. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
-## Configure the custom HSM stub code
+### Configure the custom HSM stub code
The specifics of interacting with actual secure hardware-based storage vary depending on the device hardware. The certificate chains used by the simulated devices in this tutorial will be hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods used in this sample would then be implemented to read the secrets from that hardware-based storage. While HSM hardware isn't required, it's recommended to protect sensitive information like the certificate's private key. If an actual HSM were called by the sample, the private key wouldn't be present in the source code. Having the key in the source code exposes it to anyone who can view the code. The key is included in this tutorial only to assist with learning.
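Conceptually, the stub pattern resembles the following minimal sketch, shown in Python purely for illustration (the actual sample implements it in C): a provider that returns the device secrets, with hardcoded values standing in for reads from secure hardware.

```python
# Illustrative sketch only; the real sample implements this pattern in C.
class CustomHsmStub:
    """Stands in for hardware-based secure storage during development."""

    # Hardcoded for learning purposes only. Never ship secrets in source code.
    _CERT_CHAIN = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
    _PRIVATE_KEY = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"

    def get_certificate_chain(self) -> str:
        # A production implementation would read this from the HSM instead.
        return self._CERT_CHAIN

    def get_private_key(self) -> str:
        # With a real HSM, the private key never leaves the hardware; signing
        # would be delegated to the device rather than exporting the key.
        return self._PRIVATE_KEY
```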
-To update the custom HSM stub code to simulate the identity of the device with ID `device-01`, perform the following steps:
+To update the custom HSM stub code to simulate the identity of the device with ID `device-01`:
1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
To update the custom HSM stub code to simulate the identity of the device with I
Press enter key to exit: ``` ++
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (.pfx). The full chain certificates you created previously are in the PEM format. To convert the full chain certificates to PKCS#12 format, enter the following commands in your Git Bash prompt from the directory where you previously ran the OpenSSL commands.
+
+* device-01
+
+ ```bash
+ openssl pkcs12 -inkey ./private/device-01.key.pem -in ./certs/device-01-full-chain.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/device-01-full-chain.cert.pfx
+ ```
+
+* device-02
+
+ ```bash
+ openssl pkcs12 -inkey ./private/device-02.key.pem -in ./certs/device-02-full-chain.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/device-02-full-chain.cert.pfx
+ ```
+
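+Optionally, you can confirm that a converted .pfx file opens with the expected password and contains the device certificate plus the CA chain. The following minimal sketch assumes the third-party Python [cryptography](https://cryptography.io) package, which isn't otherwise required by this tutorial:
+
+```python
+# Sketch: load the PKCS#12 bundle and list the certificates it contains.
+from cryptography.hazmat.primitives.serialization import pkcs12
+
+with open("./certs/device-01-full-chain.cert.pfx", "rb") as f:
+    key, cert, ca_certs = pkcs12.load_key_and_certificates(f.read(), b"1234")
+
+print(cert.subject.rfc4514_string())                   # CN=device-01
+print([c.subject.rfc4514_string() for c in ca_certs])  # intermediate and root
+```
+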
+In the rest of this section, you'll use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
+
+3. In your Windows command prompt, change to the X509Sample directory. This directory is located at *.\azure-iot-sdk-csharp\provisioning\device\samples\Getting Started\X509Sample*, relative to the directory where you cloned the samples on your computer.
+
+4. Enter the following command to build and run the X.509 device provisioning sample. Replace `<id-scope>` with the ID Scope that you copied in step 2, and replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+    dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234
+ ```
+
+ The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub. You should see output similar to the following:
+
+ ```output
+ Loading the certificate...
+ Found certificate: 3E5AA3C234B2032251F0135E810D75D38D2AA477 CN=Azure IoT Hub CA Cert Test Only; PrivateKey: False
+ Found certificate: 81FE182C08D18941CDEEB33F53F8553BA2081E60 CN=Azure IoT Hub Intermediate Cert Test Only; PrivateKey: False
+ Found certificate: 5BA1DB226D50EBB7A6A6071CED4143892855AE43 CN=device-01; PrivateKey: True
+ Using certificate 5BA1DB226D50EBB7A6A6071CED4143892855AE43 CN=device-01
+ Initializing the device provisioning client...
+ Initialized for registration Id device-01.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device device-01 registered to contoso-hub-2.azure-devices.net.
+ Creating X509 authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ ```
+
+ >[!NOTE]
+    > If you don't specify a certificate and password on the command line, the certificate file will default to *./certificate.pfx* and you'll be prompted for your password.
+    >
+    > Additional parameters can be passed to change the TransportType (-t) and the GlobalDeviceEndpoint (-g). For a full list of parameters, type `dotnet run -- --help`.
+
+5. To register your second device, re-run the sample using its full chain certificate.
+
+ ```cmd
+    dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234
+ ```
+++
+In the following steps, use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-node\provisioning\device\samples
+ npm install
+ ```
+
+1. In the *provisioning\device\samples* folder, open *register_x509.js* and review the code.
+
+ The sample defaults to MQTT as the transport protocol. If you want to use a different protocol, comment out the following line and uncomment the line for the appropriate protocol.
+
+ ```javascript
+ var ProvisioningTransport = require('azure-iot-provisioning-device-mqtt').Mqtt;
+ ```
+
+ The sample uses five environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this tutorial, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `PROVISIONING_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `CERTIFICATE_FILE` | The path to your device full chain certificate file. |
+ | `KEY_FILE` | The path to your device certificate private key file. |
+
+ The `ProvisioningDeviceClient.register()` method attempts to register your device.
+
+1. Add environment variables for the global device endpoint and ID scope. Replace `<id-scope>` with the value you copied in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. For this tutorial, *device-01* is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=device-01
+ ```
+
+1. Set the environment variables for the device full chain certificate and device private key files you generated previously. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+ set CERTIFICATE_FILE=<your-certificate-folder>\certs\device-01-full-chain.cert.pem
+ set KEY_FILE=<your-certificate-folder>\private\device-01.key.pem
+ ```
+
+1. Run the sample and verify that the device was provisioned successfully.
+
+ ```cmd
+ node register_x509.js
+ ```
+
+ You should see output similar to the following:
+
+ ```output
+ registration succeeded
+ assigned hub=contoso-hub-2.azure-devices.net
+ deviceId=device-01
+ Client connected
+ send status: MessageEnqueued
+ ```
+
+1. Update the environment variables for your second device (`device-02`) according to the table below and run the sample again.
+
+ | Environment Variable | Value |
+    | :-- | :-- |
+ | PROVISIONING_REGISTRATION_ID | `device-02` |
+ | CERTIFICATE_FILE | *\<your-certificate-folder\>\certs\device-02-full-chain.cert.pem* |
+ | KEY_FILE | *\<your-certificate-folder\>\private\device-02.key.pem* |
+++
+In the following steps, use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope**.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-python\samples\async-hub-scenarios
+ ```
+
+ This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this tutorial, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `DPS_X509_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `X509_CERT_FILE` | The path to your device full chain certificate file. |
+ | `X509_KEY_FILE` | The path to your device certificate private key file. |
+ | `PASS_PHRASE` | The pass phrase used to encrypt the private key file (if used). Not needed for this tutorial. |
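+
+    For reference, the following condensed sketch (assuming the `azure-iot-device` package that the sample uses) shows how these variables feed into the DPS registration call. See *provision_x509.py* for the complete flow.
+
+    ```python
+    import asyncio
+    import os
+
+    from azure.iot.device import X509
+    from azure.iot.device.aio import ProvisioningDeviceClient
+
+    async def main():
+        x509 = X509(
+            cert_file=os.getenv("X509_CERT_FILE"),
+            key_file=os.getenv("X509_KEY_FILE"),
+            pass_phrase=os.getenv("PASS_PHRASE"),
+        )
+        client = ProvisioningDeviceClient.create_from_x509_certificate(
+            provisioning_host=os.getenv("PROVISIONING_HOST"),
+            registration_id=os.getenv("DPS_X509_REGISTRATION_ID"),
+            id_scope=os.getenv("PROVISIONING_IDSCOPE"),
+            x509=x509,
+        )
+        result = await client.register()
+        print(result.status)  # "assigned" when provisioning succeeds
+
+    asyncio.run(main())
+    ```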
+
+1. Add the environment variables for the global device endpoint and ID Scope. You copied the ID scope for your instance in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<ID scope for your DPS resource>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. For this tutorial, *device-01* is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set DPS_X509_REGISTRATION_ID=device-01
+ ```
+
+1. Set the environment variables for the device full chain certificate and device private key files you generated previously. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+ set X509_CERT_FILE=<your-certificate-folder>\certs\device-01-full-chain.cert.pem
+ set X509_KEY_FILE=<your-certificate-folder>\private\device-01.key.pem
+ ```
+
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
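+
+    For example, a minimal sketch of that substitution, with a placeholder `main()` coroutine standing in for the sample's logic:
+
+    ```python
+    import asyncio
+
+    async def main():
+        await asyncio.sleep(0)  # placeholder for the sample's provisioning logic
+
+    # asyncio.run(main()) requires Python 3.7 or later.
+    # On Python 3.6, drive the event loop directly instead:
+    loop = asyncio.get_event_loop()
+    loop.run_until_complete(main())
+    ```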
+
+1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
+
+ ```cmd
+ python provision_x509.py
+ ```
+
+ You should see output similar to the following:
+
+ ```output
+ The complete registration result is
+ device-01
+ contoso-hub-2.azure-devices.net
+ initialAssignment
+ null
+ Will send telemetry from the provisioned device
+ sending message #1
+ sending message #2
+ sending message #3
+ sending message #4
+ sending message #5
+ sending message #6
+ sending message #7
+ sending message #8
+ sending message #9
+ sending message #10
+ done sending message #1
+ done sending message #2
+ done sending message #3
+ done sending message #4
+ done sending message #5
+ done sending message #6
+ done sending message #7
+ done sending message #8
+ done sending message #9
+ done sending message #10
+ ```
+
+1. Update the environment variables for your second device (`device-02`) according to the table below and run the sample again.
+
+ | Environment Variable | Value |
+    | :-- | :-- |
+ | DPS_X509_REGISTRATION_ID | `device-02` |
+ | X509_CERT_FILE | *\<your-certificate-folder\>\certs\device-02-full-chain.cert.pem* |
+ | X509_KEY_FILE | *\<your-certificate-folder\>\private\device-02.key.pem* |
+++
+In the following steps, you'll use both your Windows command prompt and your Git Bash prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope**.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
+ ```
+
+1. Enter the provisioning service and X.509 identity information in the sample code. This information is used during provisioning to attest the simulated device before device registration.
+
+    1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
+
+ 1. Update the following values. For `idScope`, use the **ID Scope** that you copied previously. For global endpoint, use the **Global device endpoint**. This endpoint is the same for every DPS instance, `global.azure-devices-provisioning.net`.
+
+ ```java
+ private static final String idScope = "[Your ID scope here]";
+ private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
+ ```
+
+ 1. The sample defaults to using HTTPS as the transport protocol. If you want to change the protocol, comment out the following line and uncomment the line for the protocol you want to use.
+
+ ```java
+ private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
+ ```
+
+ 1. Update the value of the `leafPublicPem` constant string with the value of your device certificate, *device-01.cert.pem*.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String leafPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+        "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+        ...
+        "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+        "-----END CERTIFICATE-----";
+ ```
+
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/device-01.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Update the string value of the `leafPrivateKey` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+ The syntax of the private key text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String leafPrivateKey = "-----BEGIN PRIVATE KEY-----\n" +
+        "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n" +
+        ...
+        "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n" +
+        "-----END PRIVATE KEY-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./private/device-01.key.pem
+ ```
+
+ Copy and paste the output private key text for the constant value.
+
+ 1. Add a `rootPublicPem` constant string with the value of your root CA certificate, *azure-iot-test-only.root.ca.cert.pem*. You can add it just after the `leafPrivateKey` constant.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String rootPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+        "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+        ...
+        "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+        "-----END CERTIFICATE-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `rootPublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.root.ca.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Add an `intermediatePublicPem` constant string with the value of your intermediate CA certificate, *azure-iot-test-only.intermediate.cert.pem*. You can add it just after the previous constant.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String intermediatePublicPem = "-----BEGIN CERTIFICATE-----\n" +
+        "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+        ...
+        "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+        "-----END CERTIFICATE-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `intermediatePublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.intermediate.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Find the following lines in the `main` method.
+
+ ```java
+ // For group enrollment uncomment this line
+ //signerCertificatePemList.add("<Your Signer/intermediate Certificate Here>");
+ ```
+
+ Add these two lines directly beneath them to add your intermediate and root CA certificates to the signing chain. Your signing chain should include the whole certificate chain up to and including a certificate that you've verified with DPS.
+
+ ```java
+ signerCertificatePemList.add(intermediatePublicPem);
+ signerCertificatePemList.add(rootPublicPem);
+ ```
+
+ > [!NOTE]
+ > The order that the signing certificates are added is important. The sample will fail if it's changed.
+
+ 1. Save your changes.
+
+1. Build the sample, and then go to the `target` folder.
+
+ ```cmd
+ mvn clean install
+ cd target
+ ```
+
+1. The build outputs a .jar file in the `target` folder with the following file name format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. Execute the .jar file. You may need to replace the version in the command below.
+
+ ```cmd
+ java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar
+ ```
+
+ The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```output
+ Starting...
+ Beginning setup.
+ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
+ 2022-10-21 10:41:20,476 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Initialized a ProvisioningDeviceClient instance using SDK version 2.0.2
+ 2022-10-21 10:41:20,479 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Starting provisioning thread...
+ Waiting for Provisioning Service to register
+ 2022-10-21 10:41:20,482 INFO (global.azure-devices-provisioning.net-4f8279ac-CxnPendingConnectionId-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Opening the connection to device provisioning service...
+ 2022-10-21 10:41:20,652 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Connection to device provisioning service opened successfully, sending initial device registration message
+ 2022-10-21 10:41:20,680 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-RegisterTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.RegisterTask] - Authenticating with device provisioning service using x509 certificates
+ 2022-10-21 10:41:21,603 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Waiting for device provisioning service to provision this device...
+ 2022-10-21 10:41:21,605 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Current provisioning status: ASSIGNING
+ 2022-10-21 10:41:24,868 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Device provisioning service assigned the device successfully
+ IotHUb Uri : contoso-hub-2.azure-devices.net
+ Device ID : device-01
+ 2022-10-21 10:41:30,514 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-21 10:41:30,526 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-21 10:41:30,533 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.1.2
+ 2022-10-21 10:41:30,590 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
+ 2022-10-21 10:41:30,625 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
+ 2022-10-21 10:41:31,452 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
+ 2022-10-21 10:41:31,453 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/device-01/messages/devicebound/#
+ 2022-10-21 10:41:31,523 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/device-01/messages/devicebound/# was acknowledged
+ 2022-10-21 10:41:31,525 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
+ 2022-10-21 10:41:31,528 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
+ 2022-10-21 10:41:31,531 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
+ 2022-10-21 10:41:31,532 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
+ 2022-10-21 10:41:31,535 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
+ 2022-10-21 10:41:31,536 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
+ 2022-10-21 10:41:31,537 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
+ Sending message from device to IoT Hub...
+ 2022-10-21 10:41:31,539 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ Press any key to exit...
+ 2022-10-21 10:41:31,540 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ 2022-10-21 10:41:31,844 DEBUG (MQTT Call: device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ 2022-10-21 10:41:31,846 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) with status OK
+ Message sent!
+ ```
+
+1. Update the constants for your second device (`device-02`) according to the table below, rebuild, and run the sample again.
+
+ | Constant | File to use |
+    | :-- | :-- |
+ | `leafPublicPem` | *./certs/device-02.cert.pem* |
+ | `leafPrivateKey` | *./private/device-02.key.pem* |
++ ## Confirm your device provisioning registration Examine the registration records of the enrollment group to see the registration details for your devices:
When you're finished testing and exploring this device client sample, use the fo
1. Close the device client sample output window on your machine.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service instance. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and select **Delete** at the top of the pane.
+### Delete your enrollment group
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+1. Select your DPS instance.
+
+1. In the **Settings** menu, select **Manage enrollments**.
+
+1. Select the **Enrollment Groups** tab.
+
+1. Select the enrollment group you used for this tutorial, *custom-hsm-x509-devices*.
+
+1. On the **Enrollment Group Details** page, select the **Registration Records** tab. Then select the check box next to the **Device Id** column header to select all of the registration records for the enrollment group. Select **Delete Registrations** at the top of the page to delete the registration records.
+
+ > [!IMPORTANT]
+ > Deleting an enrollment group doesn't delete the registration records associated with it. These orphaned records will count against the [registrations quota](about-iot-dps.md#quotas-and-limits) for the DPS instance. For this reason, it's a best practice to delete all registration records associated with an enrollment group before you delete the enrollment group itself.
+
+1. Go back to the **Manage Enrollments** page and make sure the **Enrollment Groups** tab is selected.
+
+1. Select the check box next to the *GROUP NAME* of the enrollment group you used for this tutorial, *custom-hsm-x509-devices*.
+
+1. At the top of the page, select **Delete**.
+
+### Delete registered CA certificates from DPS
+
+1. Select **Certificates** from the left-hand menu of your DPS instance. For each certificate you uploaded and verified in this tutorial, select the certificate, select **Delete**, and then confirm your choice to remove it.
+
+### Delete device registration(s) from IoT Hub
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+2. Select your IoT hub.
+
+3. In the **Explorers** menu, select **IoT devices**.
-1. Select **Certificates** in DPS. For each certificate you uploaded and verified in this tutorial, select the certificate and select **Delete** to remove it.
+4. Select the check box next to the *DEVICE ID* of the devices you registered in this tutorial. For example, *device-01* and *device-02*.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. Open **IoT devices** for your hub. Select the check box next to the *DEVICE ID* of the device that you registered in this tutorial. Select **Delete** at the top of the pane.
+5. At the top of the page, select **Delete**.
## Next steps
-In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices across multiple IoT hubs, see:
+In this tutorial, you provisioned X.509 devices to your IoT hub using an enrollment group. To learn how to provision IoT devices across multiple hubs, continue to the next tutorial.
> [!div class="nextstepaction"] > [How to use allocation policies](how-to-use-allocation-policies.md)
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
This section describes how to start and verify the Device Update agent as a modu
1. Open a Terminal window, and enter the command below. ```shell
- sudo systemctl restart adu-agent
+ sudo systemctl restart deviceupdate-agent
``` 1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md). ```shell
- sudo systemctl status adu-agent
+ sudo systemctl status deviceupdate-agent
``` You should see status OK.
key-vault Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication.md
A security principal is an object that represents a user, group, service, or app
* A **group** security principal identifies a set of users created in Azure Active Directory. Any roles or permissions assigned to the group are granted to all of the users within the group.
-* A **service principal** is a type of security principal that identifies an application or service, which is to say, a piece of code rather than a user or group. A service principal's object ID is known as its **client ID** and acts like its username. The service principal's **client secret** acts like its password.
+* A **service principal** is a type of security principal that identifies an application or service, which is to say, a piece of code rather than a user or group. A service principal's object ID acts like its username; the service principal's **client secret** acts like its password.
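+
+For example, the following minimal sketch (assuming the `azure-identity` and `azure-keyvault-secrets` Python packages, with placeholder values) shows a service principal authenticating to Key Vault with its client secret:
+
+```python
+from azure.identity import ClientSecretCredential
+from azure.keyvault.secrets import SecretClient
+
+credential = ClientSecretCredential(
+    tenant_id="<tenant-id>",          # placeholder values throughout
+    client_id="<client-id>",          # acts like the service principal's username
+    client_secret="<client-secret>",  # acts like its password
+)
+client = SecretClient(
+    vault_url="https://<vault-name>.vault.azure.net", credential=credential
+)
+```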
For applications, there are two ways to obtain a service principal:
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/byok-specification.md
Use the **az keyvault key create** command to create KEK with key operations set
az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --vault-name ContosoKeyVaultHSM ```
+> [!NOTE]
+> Services support different KEK lengths; Azure SQL, for instance, only supports key lengths of [2048 or 3072 bits](/azure/azure-sql/database/transparent-data-encryption-byok-overview#requirements-for-configuring-customer-managed-tde). Consult the documentation for your service for specifics.
+ ### Step 2: Retrieve the public key of the KEK Download the public key portion of the KEK and store it into a PEM file.
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
Azure Load Testing supports the following client metrics:
|||||-| |`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Response time or elapsed time, in milliseconds. Learn more about [elapsed time in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). | |`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
-|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) | Percentage of failed requests. |
+|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) | Percentage of failed requests. |
|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) | Number of requests per second. | |`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) | Total number of requests. |
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit | |||
-| Concurrent engine instances | 5-100 <sup>1</sup> | 5000 |
-| Engine instances per test run | 1-45 <sup>1</sup> | 5000 |
+| Concurrent engine instances | 5-100 <sup>1</sup> | 1000 |
+| Engine instances per test run | 1-45 <sup>1</sup> | 45 |
<sup>1</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit | |||
-| Concurrent test runs | 5-25 <sup>2</sup> | 5000 |
+| Concurrent test runs | 5-25 <sup>2</sup> | 1000 |
| Test duration | 3 hours | <sup>2</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create workflows with single-tenant Azure Logic Apps (Standard) in the Azure portal
-description: Create automated workflows to integrate apps, data, services, and systems with single-tenant Azure Logic Apps (Standard) in the Azure portal.
+ Title: Create Standard workflows in single-tenant Azure Logic Apps with the Azure portal
+description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using the Azure portal.
ms.suite: integration Previously updated : 10/26/2022 Last updated : 11/01/2022
-#Customer intent: As a developer, I want to create an automated integration workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
+# Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in the Azure portal
As you progress, you'll complete these high-level tasks:
* To deploy your **Logic App (Standard)** resource to an [App Service Environment v3 (ASEv3)](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
+* Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you and to have no runtime impact. However, if these exceptions apply to you, or for more information about Azure Functions v4 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
+ ## Best practices and recommendations For optimal designer responsiveness and performance, review and follow these guidelines:
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Title: Create workflows with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
-description: Create automated workflows to integrate apps, data, services, and systems with single-tenant Azure Logic Apps (Standard) in Visual Studio Code.
+ Title: Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code
+description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using Visual Studio Code.
ms.suite: integration Previously updated : 09/06/2022 Last updated : 11/01/2022 +
+# Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
-# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
+# Create a Standard logic app workflow for single-tenant Azure Logic Apps using Visual Studio Code
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. When you use this extension, you create a Standard logic app resource and workflow that provides the following capabilities:
+This how-to guide shows how to create an example integration workflow that runs in single-tenant Azure Logic Apps by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. Before you create this workflow, you'll create a Standard logic app resource, which provides the following capabilities:
* Your logic app can include multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless). * Workflows in the same logic app and tenant run in the same process as the Azure Logic Apps runtime, so they share the same resources and provide better performance.
-* You can locally create, run, and test workflows in the Visual Studio Code development environment. You can deploy your logic app locally, to Azure, which includes the single-tenant Azure Logic Apps environment or App Service Environment v3 (ASEv3) - Windows plans only, and on-premises using containers, due to the Azure Logic Apps containerized runtime.
+* You can locally create, run, and test workflows using the Visual Studio Code development environment.
-For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+ When you're ready, you can deploy your logic app to Azure where your workflow can run in the single-tenant Azure Logic Apps environment or in an App Service Environment v3 (ASEv3 - Windows plans only). You can also deploy and run your workflow anywhere that Kubernetes can run, including Azure, Azure Kubernetes Service, on premises, or even other cloud providers, due to the Azure Logic Apps containerized runtime. For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md#resource-environment-differences).
While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
+ * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
- These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
+ * If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install.
- > [!IMPORTANT]
- > If you have an installation that's earlier than these versions, uninstall that version first,
- > or make sure that the PATH environment variable points at the version that you download and install.
+  * Azure Functions v3 support ends in late 2022. Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you and to have no runtime impact. However, if these exceptions apply to you, or for more information about Azure Functions v3 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
* [Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
Title: Usage metering, billing, and pricing
-description: Learn how usage metering, billing, and pricing models work in Azure Logic Apps.
+description: Learn how usage metering, billing, and pricing work in Azure Logic Apps.
ms.suite: integration Previously updated : 08/20/2022 Last updated : 11/02/2022
+# As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
-# Usage metering, billing, and pricing models for Azure Logic Apps
+# Usage metering, billing, and pricing for Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
The following table summarizes how the Consumption model handles metering and bi
### Trigger and action operations in the Consumption model
-Except for the initial number of free built-in operation executions, per Azure subscription, that a workflow can run, the Consumption model meters and bills an operation based on *each execution*, whether or not the overall workflow successfully runs, finishes, or is even instantiated. An operation usually makes a single execution [unless the operation has retry attempts enabled](#other-operation-behavior). In turn, an execution usually makes a single call [unless the operation supports and enables chunking or pagination to get large amounts of data](logic-apps-handle-large-messages.md). If chunking or pagination is enabled, an operation execution might have to make multiple calls. The Consumption model meters and bills an operation *per execution, not per call*.
+Except for the initial number of free built-in operation executions, per Azure subscription, that a workflow can run, the Consumption model meters and bills an operation based on *each execution*, whether or not the overall workflow successfully runs, finishes, or is even instantiated. An operation usually makes a single execution [unless the operation has retry attempts enabled](#other-operation-behavior). In turn, an execution usually makes a single call [unless the operation supports and enables chunking or pagination to get large amounts of data](logic-apps-handle-large-messages.md). If chunking or pagination is enabled, an operation execution might have to make multiple calls.
-For example, suppose a workflow starts with a polling trigger that gets records by regularly making outbound calls to an endpoint. The outbound call is metered and billed as a single execution, whether or not the trigger fires or is skipped, such as when a trigger checks an endpoint but doesn't find any data or events. The trigger state controls whether or not the workflow instance is created and run. Now, suppose the operation also supports and has enabled chunking or pagination. If the operation has to make 10 calls to finish getting all the data, the operation is still metered and billed as a *single execution*, despite making multiple calls.
+The Consumption model meters and bills an operation *per execution, not per call*. For example, suppose a workflow starts with a polling trigger that gets records by regularly making outbound calls to an endpoint. The outbound call is metered and billed as a single execution, whether or not the trigger fires or is skipped, such as when a trigger checks an endpoint but doesn't find any data or events. The trigger state controls whether or not the workflow instance is created and run. Now, suppose the operation also supports and has enabled chunking or pagination. If the operation has to make 10 calls to finish getting all the data, the operation is still metered and billed as a *single execution*, despite making multiple calls.
+
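+As a back-of-the-envelope illustration (not an official pricing calculator), the following sketch shows why the number of calls doesn't change the metered count for a polling trigger:
+
+```python
+# Billed executions for a polling trigger under the Consumption model.
+polls_per_hour = 60        # the trigger checks its endpoint once a minute
+hours_per_month = 730      # approximate hours in a month
+calls_per_execution = 10   # for example, pagination takes 10 calls per check
+
+# Metering is per execution, not per call, so calls_per_execution is irrelevant.
+billed_executions = polls_per_hour * hours_per_month
+print(billed_executions)   # 43800 executions metered, whether or not any fire
+```
+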
+> [!NOTE]
+>
+> By default, triggers that return an array have a **Split On** setting that's already enabled.
+> This setting results in a trigger event, which you can review in the trigger history, and a
+> workflow instance *for each* array item. All the workflow instances run in parallel so that
+> the array items are processed at the same time. Billing applies to all trigger events, even in
+> scenarios where the trigger doesn't instantiate and start the workflow, regardless of whether the
+> trigger state is **Succeeded**, **Failed**, or **Skipped**.
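For reference, here's a minimal sketch of how the **Split On** setting appears as the `splitOn` property in a Consumption workflow definition. The trigger name, connector type, and array path are hypothetical:

```json
"triggers": {
   "When_records_are_available": {
      "type": "ApiConnection",
      "splitOn": "@triggerBody()?['value']",
      "recurrence": {
         "frequency": "Minute",
         "interval": 15
      }
   }
}
```

Each item in the array that `splitOn` references produces its own trigger event and workflow instance, which is why every such event is billable.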
The following table summarizes how the Consumption model handles metering and billing for these operation types when used with a logic app and workflow in multi-tenant Azure Logic Apps:
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
In this case, we want to execute a batch endpoint using a service principal alre
# [REST](#tab/rest)
-You can use the REST API of Azure Machine Learning to start a batch endpoints job using the user's credential. Follow these steps:
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
1. Use the login service from Azure to get an authorization token. Authorization tokens are issued to a particular scope. The resource type for Azure Machine Learning is `https://ml.azure.com`. The request would look as follows:
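A sketch of such a token request, using the Azure AD client credentials grant; the tenant ID, client ID, and client secret placeholders are illustrative:

```http
POST https://login.microsoftonline.com/{tenant-id}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={client-id}&client_secret={client-secret}&resource=https://ml.azure.com
```

The `access_token` value in the response is then passed as a bearer token when invoking the batch endpoint.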
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-troubleshoot-batch-endpoints.md
The following section contains common problems and solutions you may see during
### No module named 'azureml'
+__Message logged__: `No module named 'azureml'`.
+ __Reason__: Azure Machine Learning Batch Deployments require the package `azureml-core` to be installed. __Solution__: Add `azureml-core` to your conda dependencies file.
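As a sketch, a conda dependencies file that includes the package might look like the following; the environment name and versions are illustrative:

```yaml
name: batch-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      # azureml-core is required by Azure Machine Learning batch deployments
      - azureml-core
```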
__Message logged__: There is no succeeded mini batch item returned from run(). P
__Reason__: The batch endpoint failed to provide data in the expected format to the `run()` method. This may be due to corrupted files being read or incompatibility of the input data with the signature of the model (MLflow). __Solution__: To understand what may be happening, go to __Outputs + Logs__ and open the file at `logs > user > stdout > 10.0.0.X > process000.stdout.txt`. Look for error entries like `Error processing input file`. There you should find details about why the input file can't be correctly read.+
+### Audiences in JWT are not allowed
+
+__Context__: When invoking a batch endpoint using its REST APIs.
+
+__Reason__: The access token used to invoke the REST API for the endpoint/deployment was issued for a different audience/service. Azure Active Directory tokens are issued for specific actions.
+
+__Solution__: When generating an authentication token to be used with the Batch Endpoint REST API, ensure the `resource` parameter is set to `https://ml.azure.com`. Notice that this resource is different from the one you use to manage the endpoint through the REST API. All Azure resources (including batch endpoints) use the resource `https://management.azure.com` for management operations. Ensure you use the right resource URI in each case, and note that if you want to use the management API and the job invocation API at the same time, you'll need two tokens. For details see: [Authentication on batch endpoints (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest).
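For example, a control-plane token request (a sketch using the same client credentials flow; placeholders are illustrative) differs from the job invocation token only in its `resource` value:

```http
POST https://login.microsoftonline.com/{tenant-id}/oauth2/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={client-id}&client_secret={client-secret}&resource=https://management.azure.com
```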
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Specify the storage output location to any datastore and path. By default, batch
## Next steps -- [How to deploy online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)-- [How to deploy batch endpoints with the Azure CLI](batch-inference/how-to-use-batch-endpoint.md)
+- [How to deploy online endpoints with the Azure CLI and Python SDK](how-to-deploy-managed-online-endpoints.md)
+- [How to deploy batch endpoints with the Azure CLI and Python SDK](batch-inference/how-to-use-batch-endpoint.md)
- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md) - [Deploy models with REST](how-to-deploy-with-rest.md) - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models to an endpoint. --++ Last updated 08/15/2022
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
description: Learn about how to use TLS/SSL to configure secure Kubernetes onlin
-+ Last updated 10/10/2022
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Plans are not supported for the following offer types:
- Consulting service - Dynamics 365 Business Central - Dynamics 365 Operations Apps-- Power BI app-- Power BI Visual ## Plan information
This table provides pricing information that's specific to various offer types
| IoT Edge module | <ul><li>[Plan an IoT Edge module offer](marketplace-iot-edge.md#licensing-options)</li></ul> | | Managed service | <ul><li>[Plan a Managed Service offer](plan-managed-service-offer.md#plans-and-pricing)</li><li>[Create plans for a Managed Service offer](create-managed-service-offer-plans.md#define-pricing-and-availability) | | Power BI app | <ul><li>[Plan a Power BI App offer](marketplace-power-bi.md#licensing-options)</li></ul> |
+| Power BI visual | <ul><li>[Create a Power BI visual offer](power-bi-visual-offer-setup.md#setup-details)</li></ul> |
| Software as a Service (SaaS) | <ul><li>[SaaS pricing models](plan-saas-offer.md#saas-pricing-models)</li><li>[SaaS billing](plan-saas-offer.md#saas-billing)</li><li>[Create plans for a SaaS offer](create-new-saas-offer-plans.md#define-a-pricing-model)</li></ul> |
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
Last updated 10/12/2022
>[!Note] > This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
-This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL Flexible server.
-This feature will enable customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 with a select of button without any data movement or the need of any application connection string changes.
+This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL - Flexible server.
+This feature enables customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 without any data movement or the need to make any application connection string changes.
>[!Important]
-> - Major version upgrade for Azure database for MySQL Flexible Server is available in public preview.
-> - Major version upgrade is currently not available for Burstable SKU 5.7 servers.
-> - Duration of downtime will vary based on the size of your database instance and the number of tables on the database.
-> - Upgrading major MySQL version is irreversible. Your deployment might fail if validation identifies the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make necessary configuration changes on the server and try upgrade again
+> - Major version upgrade for Azure Database for MySQL - Flexible Server is available in public preview.
+> - Major version upgrade is currently unavailable for version 5.7 servers based on the Burstable SKU.
+> - Duration of downtime varies based on the size of the database instance and the number of tables it contains.
+> - Upgrading the major MySQL version is irreversible. Your deployment might fail if validation identifies that the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make the necessary configuration changes on the server and try the upgrade again.
## Prerequisites - Read replicas with MySQL version 5.7 should be upgraded before the primary server for replication to be compatible between different MySQL versions; read more at [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html). - Before you upgrade your production servers, we strongly recommend that you test your application compatibility and verify your database compatibility with features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.-- Trigger [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform major version upgrade on your production server, which can be used to [rollback to version 5.7](./how-to-restore-server-portal.md) from the full on-demand backup taken.
+- Trigger an [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform the major version upgrade on your production server; the backup can be used to [roll back to version 5.7](./how-to-restore-server-portal.md) if needed.
-## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure portal
+## Perform a planned major version upgrade from MySQL 5.7 to MySQL 8.0 using the Azure portal
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server using the Azure portal, follow these steps.
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
>[!Important] > We recommend that you first perform the upgrade on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](./how-to-restore-server-portal.md).
-2. From the overview page, select the Upgrade button in the toolbar
+2. On the **Overview** page, in the toolbar, select **Upgrade**.
>[!Important] > Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
This feature will enable customers to perform in-place upgrades of their MySQL 5
:::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade.":::
-3. In the Upgrade sidebar, verify Major Upgrade version to upgrade i.e 8.0.
+3. In the **Upgrade** sidebar, in the **MySQL version to upgrade** text box, verify the major MySQL version you want to upgrade to (8.0).
:::image type="content" source="./media/how-to-upgrade/2-how-to-upgrade.png" alt-text="Screenshot showing Upgrade.":::
-4. For Primary Server, select on confirmation checkbox, to confirm that all your replica servers are upgraded before primary server. Once confirmed that all your replicas are upgraded, Upgrade button will be enabled. For your read-replicas and standalone servers, Upgrade button will be enabled by default.
-
- :::image type="content" source="./media/how-to-upgrade/3-how-to-upgrade.png" alt-text="Screenshot showing confirmation.":::
+ Before you can upgrade your primary server, you first need to have upgraded any associated read replica servers. Until this is completed, **Upgrade** will be disabled.
-5. Once Upgrade button is enabled, you can select on Upgrade button to proceed with deployment.
+4. On the primary server, select the confirmation message to verify that all replica servers have been upgraded, and then select **Upgrade**.
:::image type="content" source="./media/how-to-upgrade/4-how-to-upgrade.png" alt-text="Screenshot showing upgrade.":::
-## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure CLI
+ On read replica and standalone servers, **Upgrade** is enabled by default.
+
+## Perform a planned major version upgrade from MySQL 5.7 to MySQL 8.0 using the Azure CLI
-Follow these steps to perform major version upgrade for your Azure Database of MySQL 5.7 server using Azure CLI.
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server using the Azure CLI, follow these steps.
-1. Install [Azure CLI](/cli/azure/install-azure-cli) for Windows or use [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) for Windows or use the [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
   This upgrade requires version 2.40.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
-2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command.
+2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command.
   ```azurecli
   az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --version 8.0
   ```
-3. Under confirmation prompt, type "y" for confirming or "n" to stop the upgrade process and enter.
+3. At the confirmation prompt, type **y** to confirm or **n** to stop the upgrade process, and then press Enter.
+
+## Perform a major version upgrade from MySQL 5.7 to MySQL 8.0 on a read replica server using the Azure portal
+
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server to MySQL 8.0 on a read replica using the Azure portal, follow these steps.
-## Perform major version upgrade from MySQL 5.7 to MySQL 8.0 on read replica using Azure portal
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
-1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
+2. On the **Overview** page, in the toolbar, select **Upgrade**.
-2. From the Overview page, select the Upgrade button in the toolbar.
>[!Important] > Before upgrading, review the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0. >Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them on your current Flexible Server 5.7 by using the Server Parameters blade in the Azure portal to avoid deployment failure.
-3. In the Upgrade section, select Upgrade button to upgrade Azure database for MySQL 5.7 read replica server to 8.0 server.
+3. In the **Upgrade** section, select **Upgrade** to upgrade an Azure Database for MySQL 5.7 read replica server to MySQL 8.0.
-4. A notification will confirm that upgrade is successful.
+ A notification appears to confirm that the upgrade was successful.
-5. From the Overview page, confirm that your Azure database for MySQL read replica server version is 8.0.
+4. On the **Overview** page, confirm that your Azure Database for MySQL read replica server is running version 8.0.
-6. Now go to your primary server and perform major version upgrade on it.
+5. Now, go to your primary server and perform the major version upgrade on it.
## Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server to MySQL 8.0 with minimal downtime using read replica servers, follow these steps.
1. In the Azure portal, select your existing Azure Database for MySQL 5.7 server. 2. Create a [read replica](./how-to-read-replicas-portal.md) from your primary server.
-3. Upgrade your [read replica to version](#perform-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-azure-cli) 8.0.
+3. [Upgrade](#perform-a-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-the-azure-cli) your read replica to version 8.0.
+
+4. After you confirm that the replica server is running version 8.0, stop your application from connecting to your primary server.
-4. Once you confirm that the replica server is running on version 8.0, stop your application from connecting to your primary server.
+5. Check replication status to ensure that the replica has caught up with the primary so that all data is in sync and that no new operations are being performed on the primary.
-5. Check replication status, and make sure replica is all caught up with primary, so all the data is in sync and ensure there are no new operations performed in primary.
-Confirm with the show slave status command on the replica server to view the replication status.
+6. To view the replication status, run the `SHOW SLAVE STATUS` command on the replica server:
   ```sql
   SHOW SLAVE STATUS\G
   ```
- If the state of Slave_IO_Running and Slave_SQL_Running are "yes" and the value of Seconds_Behind_Master is "0", replication is working well. Seconds_Behind_Master indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. Once you confirm Seconds_Behind_Master is "0" it's safe to stop replication.
   If the state of Slave_IO_Running and Slave_SQL_Running is **yes** and the value of Seconds_Behind_Master is **0**, replication is working well. Seconds_Behind_Master indicates how late the replica is. If the value isn't **0**, then the replica is still processing updates. After you confirm that the value of Seconds_Behind_Master is **0**, it's safe to stop replication.
-6. Promote your read replica to primary by stopping replication.
+7. Promote your read replica to primary by stopping replication.
-7. Set Server Parameter read_only to 0 that is, OFF to start writing on promoted primary.
+8. Set the server parameter `read_only` to **0** (OFF) to start writing on the promoted primary. (A sketch of doing this with the Azure CLI appears after these steps.)
- Point your application to the new primary (former replica) which is running server 8.0. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+9. Point your application to the new primary (former replica), which is running MySQL 8.0. Each server has a unique connection string; update your application to point to the (former) replica instead of the source.
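As referenced in step 8, here's a sketch of setting the parameter with the Azure CLI; the resource group and server names are placeholders:

```azurecli
az mysql flexible-server parameter set --resource-group testgroup --server-name testsvr --name read_only --value OFF
```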
>[!Note]
-> This scenario will have downtime during steps 4, 5 and 6 only.
+> This scenario incurs downtime only during steps 4 through 7.
## Frequently asked questions -- Will this cause downtime of the server and if so, how long?
+- **Will this cause downtime of the server and if so, how long?**
+ To have minimal downtime during upgrades, follow the steps in [Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas). The server will be unavailable during the upgrade process, so we recommend that you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, the storage size provisioned (IOPS provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend that you first perform the upgrade on a restored copy of the server. -- When will this upgrade feature be GA?
- The GA of this feature will be planned by December 2022. However, the feature is production ready and fully supported by Azure so you should run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server so you can estimate the downtime during upgrade, and perform application compatibility test before you run it on production.
+- **When will this upgrade feature be GA?**
+
+ GA of this feature is planned for December 2022. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest that you first run and test it on a restored copy of the server so you can estimate the downtime during the upgrade, and perform application compatibility tests before you run it in production.
+
+- **What happens to my backups after upgrade?**
-- What happens to my backups after upgrade? All backups (automated/on-demand) taken before the major version upgrade, when used for restoration, will always restore to a server running the older version (5.7). All backups (automated/on-demand) taken after the major version upgrade will restore to a server running the upgraded version (8.0). We highly recommend taking an on-demand backup before you perform the major version upgrade for an easy rollback. ## Next steps-- Learn more on [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
+- Learn more about [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
- Learn about what's new in [MySQL version 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
cd ~/drl/data/pub/gsfcdata/aqua/modis/
## Next steps
+To easily deploy downstream components necessary to receive and process spaceborne earth observation data using Azure Orbital Ground Station, see:
+- [Azure Orbital Integration](https://github.com/Azure/azure-orbital-integration)
+ For an end-to-end implementation that involves extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics, see: - [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
A comprehensive list of features of Azure Native ISV Services is listed below.
### Integrations -- Log and metrics: Use Microsoft Azure Monitor for collecting telemetry across all Azure environments.
+- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service in just a few steps. You can configure auto-discovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create additional infrastructure or write custom code.
- VNet injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks. - Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Azure Event Grid| All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Grid.](../event-grid/network-security.md) | |Azure Service Bus | All public regions<br/>All Government regions | Supported with premium tier of Azure Service Bus. [Select for tiers](../service-bus-messaging/service-bus-premium-messaging.md) | GA <br/> [Learn how to create a private endpoint for Azure Service Bus.](../service-bus-messaging/private-link-service.md) | | Azure API Management | All public regions<br/> All Government regions | | Preview <br/> [Connect privately to API Management using a private endpoint.](../event-grid/network-security.md) |
+| Azure Logic Apps | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Logic Apps.](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint) |
### Internet of Things (IoT)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com | | Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com | | Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
+| Azure Health Data Services (Microsoft.HealthcareApis/workspaces) / healthcareworkspace | workspace.privatelink.azurehealthcareapis.com </br> fhir.privatelink.azurehealthcareapis.com </br> dicom.privatelink.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
+>[!Note]
+>In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following list for region codes:
+>- [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx)
+ ### Government | Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.us | azurehdinsight.us |
+>[!Note]
+>In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following list for region codes:
+>- [US Gov](../azure-government/documentation-government-developer-guide.md)
+ ### China | Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
The following information lists the known limitations to the use of private endp
| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. | | Destination port ranges supported up to a factor of 250K. | Destination port ranges are supported as a multiplication of SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> 1 source * 1 destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. | | Source port filtering is interpreted as * | Source port filtering isn't actively used as valid scenario of traffic filtering for traffic destined to a private endpoint. |
-| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
| Dual port NSG rules are unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to *deny all* instead of to denying specific ports. </br><br>For more information, see the UDR rule example in the next table. | The following table shows an example of a dual port NSG rule:
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-custom-classification-and-classification-rule.md
Title: Create a custom classification and classification rule description: Learn how to create custom classifications to define data types in your data estate that are unique to your organization in Microsoft Purview.--++
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Title: Create and manage credentials for scans description: Learn about the steps to create and manage credentials in Microsoft Purview.--++
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
Title: How to manage multicloud data sources description: Learn how to register new data sources, manage collections of data sources, and view sources in Microsoft Purview.--++
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
Title: Connect to and manage Azure Files description: This guide describes how to connect to Azure Files in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Files source.--++
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Title: Connect to and manage multiple Azure sources description: This guide describes how to connect to multiple Azure sources in Microsoft Purview at once, and use Microsoft Purview's features to scan and manage your sources.--++
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
Title: 'Connect to and manage dedicated SQL pools (formerly SQL DW)' description: This guide describes how to connect to dedicated SQL pools (formerly SQL DW) in Microsoft Purview, and use Microsoft Purview's features to scan and manage your dedicated SQL pools source.--++
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
Title: Connect to and manage on-premises SQL server instances description: This guide describes how to connect to on-premises SQL server instances in Microsoft Purview, and use Microsoft Purview's features to scan and manage your on-premises SQL server source.--++
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Title: Connect to and manage Azure Synapse Analytics workspaces description: This guide describes how to connect to Azure Synapse Analytics workspaces in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Synapse Analytics workspace source.--++
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md
Title: Troubleshoot your connections in Microsoft Purview description: This article explains the steps to troubleshoot your connections in Microsoft Purview.--++
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Title: 'Tutorial: Register and scan an on-premises SQL Server' description: This tutorial describes how to register an on-prem SQL Server to Microsoft Purview, and scan the server using a self-hosted IR. --++
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 10/12/2022 Last updated : 10/31/2022
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/locations/read | Reads a location wide operation. | > | Microsoft.NetApp/locations/checknameavailability/action | Check if resource name is available | > | Microsoft.NetApp/locations/checkfilepathavailability/action | Check if file path is available |
-> | Microsoft.NetApp/locations/checkinventory/action | Checks ReservedCapacity inventory. |
> | Microsoft.NetApp/locations/operationresults/read | Reads an operation result resource. | > | Microsoft.NetApp/locations/quotaLimits/read | Reads a Quotalimit resource type. | > | Microsoft.NetApp/locations/RegionInfo/read | Reads a regionInfo resource. |
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
The data source definition specifies the data to index, credentials, and policie
api-key: [admin key] {
- "name" : "hotel-mysql-ds"
+ "name" : "hotel-mysql-ds",
"description" : "[Description of MySQL data source]", "type" : "mysql", "credentials" : {
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Title: How to work with search results
-description: Structure and sort search results, get a document count, and add content navigation to search results in Azure Cognitive Search.
+description: Define search result composition, get a document count, sort results, and add content navigation to search results in Azure Cognitive Search.
- Previously updated : 07/22/2022+ Last updated : 11/02/2022 # How to work with search results in Azure Cognitive Search
Parameters on the query determine:
+ Selection of fields within results + Count of matches found in the index for the query + Number of results in the response (up to 50, by default)
-+ Sort order of results
++ Sort order + Highlighting of terms within a result, matching on either the whole or partial term in the body ## Result composition
Occasionally, the substance and not the structure of results are unexpected. For
## Counting matches
-The count parameter returns the number of documents in the index that are considered a match for the query. To return the count, add **`$count=true`** to the query request. There is no maximum value imposed by the search service. Depending on your query and the content of your documents, the count could be as high as every document in the index.
+The count parameter returns the number of documents in the index that are considered a match for the query. To return the count, add **`$count=true`** to the query request. There's no maximum value imposed by the search service. Depending on your query and the content of your documents, the count could be as high as every document in the index.
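For example, the following request returns the total match count alongside the results; the `hotels` index name is hypothetical:

```http
GET https://[service name].search.windows.net/indexes/hotels/docs?api-version=2020-06-30&search=beach+access&$count=true
```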
Count is accurate when the index is stable. If the system is actively adding, updating, or deleting documents, the count will be approximate, excluding any documents that aren't fully indexed.
To control the paging of all documents returned in a result set, add `$top` and
+ Return the second set, skipping the first 15 to get the next 15: `$top=15&$skip=15`. Repeat for the third set of 15: `$top=15&$skip=30`
-The results of paginated queries aren't guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there is no caching or snapshot of results, such as those found in a general purpose database).
+The results of paginated queries aren't guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there's no caching or snapshot of results, such as those found in a general purpose database).
Following is an example of how you might get duplicates. Assume an index with four documents:
Notice that document 2 is fetched twice. This is because the new document 5 has
## Ordering results
-In a full text search query, results can be ranked by a search score, a semantic re-ranker score (if using [semantic search](semantic-search-overview.md)), or by an **`$orderby`** expression in the query request.
+In a full text search query, results can be ranked by a search score, a semantic reranker score (if using [semantic search](semantic-search-overview.md)), or by an **`$orderby`** expression in the query request that specifies an explicit sort order.
-A @search.score equal to 1.00 indicates an un-scored or un-ranked result set, where the 1.0 score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over un-scored results, an **`$orderby`** expression will help you achieve that objective.
+Sorting methodologies aren't designed to be used together. For example, if you're sorting with **`$orderby`** for primary sorting, you can't apply a secondary sort based on search score (because the search score will be uniform).
+
+### Ordering by search score
For full text search queries, results are automatically ranked by a search score, calculated based on term frequency and proximity in a document (derived from [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), with higher scores going to documents having more or stronger matches on a search term.
+The "@search.score" range is 0 up to (but not including) 1.00. A "@search.score" equal to 1.00 indicates an unscored or unranked result set, where the 1.0 score is uniform across all results. Unscored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over unscored results, an **`$orderby`** expression will help you achieve that objective.
+ Search scores convey general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur. | Cause | Description |
Search scores convey general sense of relevance, reflecting the strength of matc
| Multiple replicas | For services using multiple replicas, queries are issued against each replica in parallel. The index statistics used to calculate a search score are calculated on a per-replica basis, with results merged and ordered in the query response. Replicas are mostly mirrors of each other, but statistics can differ due to small differences in state. For example, one replica might have deleted documents contributing to their statistics, which were merged out of other replicas. Typically, differences in per-replica statistics are more noticeable in smaller indexes. For more information about this condition, see [Concepts: search units, replicas, partitions, shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards) in the capacity planning documentation. | | Identical scores | If multiple documents have the same score, any one of them might appear first. |
-### How to get consistent ordering
+### Ordering by the semantic reranker
+
+If you're using [semantic search](semantic-search-overview.md), the "@search.rerankerScore" determines the sort order of your results.
+
+The "@search.rerankerScore" range is 1 to 4.00, where a higher score indicates a stronger semantic match.
+
+### Ordering with $orderby
If consistent ordering is an application requirement, you can explicitly define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as "sortable" can be used to order results. Fields commonly used in an **`$orderby`** include rating, date, and location. Filtering by location requires that the filter expression calls the [**`geo.distance()` function**](search-query-odata-geo-spatial-functions.md?#order-by-examples), in addition to the field name.
-Another approach that promotes order consistency is using a [custom scoring profile](index-add-scoring-profiles.md). Scoring profiles give you more control over the ranking of items in search results, with the ability to boost matches found in specific fields. The additional scoring logic can help override minor differences among replicas because the search scores for each document are farther apart. We recommend the [ranking algorithm](index-ranking-similarity.md) for this approach.
+Numeric fields (Edm.Double, Edm.Int32, Edm.Int64) are sorted in numeric order (for example, 1, 2, 10, 11, 20).
+
+String fields (Edm.String, Edm.ComplexType subfields) are sorted in either [ASCII sort order](https://en.wikipedia.org/wiki/ASCII#Printable_characters) or [Unicode sort order](https://en.wikipedia.org/wiki/List_of_Unicode_characters), depending on the language. You can't sort collections of any type.
+
++ Numeric content in string fields is sorted alphabetically (1, 10, 11, 2, 20).
+
++ Upper case strings are sorted ahead of lower case (APPLE, Apple, BANANA, Banana, apple, banana). You can assign a [text normalizer](search-normalizers.md) to preprocess the text before sorting to change this behavior. Using the lowercase tokenizer on a field has no effect on sorting behavior because Cognitive Search sorts on a non-analyzed copy of the field.
+
++ Strings that lead with diacritics appear last (Äpfel, Öffnen, Üben).
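For example, the following request (a sketch; `Rating` and `Location` are hypothetical sortable fields) sorts primarily by rating and breaks ties by distance:

```http
GET https://[service name].search.windows.net/indexes/hotels/docs?api-version=2020-06-30&search=*&$orderby=Rating desc,geo.distance(Location, geography'POINT(-122.131577 47.678581)') asc
```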
+### Use a scoring profile to influence relevance
+
+Another approach that promotes order consistency is using a [custom scoring profile](index-add-scoring-profiles.md). Scoring profiles give you more control over the ranking of items in search results, with the ability to boost matches found in specific fields. The extra scoring logic can help override minor differences among replicas because the search scores for each document are farther apart. We recommend the [ranking algorithm](index-ranking-similarity.md) for this approach.
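A minimal sketch of such a profile in an index definition; the profile and field names are hypothetical:

```json
"scoringProfiles": [
  {
    "name": "boost-description",
    "text": {
      "weights": {
        "HotelName": 2,
        "Description": 5
      }
    }
  }
]
```

A query opts in by passing the profile name in the `scoringProfile` parameter.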
## Hit highlighting Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match isn't immediately obvious.
-Notice that highlighting is applied to individual terms. There is no highlight capability for the contents of an entire field. If you want highlighting over a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
+Notice that highlighting is applied to individual terms. There's no highlight capability for the contents of an entire field. If you want to highlight a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
Hit highlighting instructions are provided on the [query request](/rest/api/searchservice/search-documents). Queries that trigger query expansion in the engine, such as fuzzy and wildcard search, have limited support for hit highlighting.
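For example, this request body sketch asks for highlights on a hypothetical `Description` field with custom tags:

```json
{
  "search": "divine secrets",
  "highlight": "Description",
  "highlightPreTag": "<b>",
  "highlightPostTag": "</b>"
}
```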
In a keyword search, each term is scanned for independently. A query for "divine
### Keyword search highlighting
-Within a highlighted field, formatting is applied to whole terms. For example, on a match against "The Divine Secrets of the Ya-Ya Sisterhood", formatting is applied to each term separately, even though they are consecutive.
+Within a highlighted field, formatting is applied to whole terms. For example, on a match against "The Divine Secrets of the Ya-Ya Sisterhood", formatting is applied to each term separately, even though they're consecutive.
```json "@odata.count": 39,
search Search Query Odata Orderby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-orderby.md
Previously updated : 07/18/2022 Last updated : 11/02/2022 # OData $orderby syntax in Azure Cognitive Search
-In Azure Cognitive Search, the **$orderby** parameter specifies custom sort order for search results. This article describes the OData syntax of **$orderby** and provides examples.
+In Azure Cognitive Search, the **$orderby** parameter specifies a custom sort order for search results. This article describes the OData syntax of **$orderby** and provides examples.
-Field path construction and constants are described in the [OData language overview in Azure Cognitive Search](query-odata-filter-orderby-syntax.md). For more information about sorting and search results composition, see [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md).
+Field path construction and constants are described in the [OData language overview in Azure Cognitive Search](query-odata-filter-orderby-syntax.md). For more information about sorting behaviors, see [Ordering results](search-pagination-page-layout.md#ordering-results).
## Syntax
Sort hotels in descending order by search.score and rating, and then in ascendin
```
$orderby=search.score() desc,Rating desc,geo.distance(Location, geography'POINT(-122.131577 47.678581)') asc
```
-## Next steps
+## See also
- [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md)
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 03/16/2022 Last updated : 11/02/2022 # Return a semantic answer in Azure Cognitive Search
All prerequisites that apply to [semantic queries](semantic-how-to-query-request
+ Query strings entered by the user must be recognizable as a question (what, where, when, how).
-+ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned.
++ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned. ## What is a semantic answer?
Answers are returned as an independent, top-level object in the query response p
<a name="query-params"></a>
-## Formulate a query rest for "answers"
+## Formulate a REST query for "answers"
-The approach for listing fields in priority order has changed recently, with "semanticConfiguration" replacing "searchFields". If you're currently using searchFields, update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
+The approach for listing fields in priority order has changed, with "semanticConfiguration" replacing "searchFields". If you're currently using "searchFields", update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration) To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "semanticConfiguration", and the "answers" parameters. Specifying these parameters doesn't guarantee an answer, but the request must include them for answer processing to occur.
-The "semanticConfiguration" parameter is crucial to returning a high-quality answer.
+The "semanticConfiguration" parameter is required. It's defined in a search index, and then referenced in a query, as shown below.
```json {
The "semanticConfiguration" parameter is crucial to returning a high-quality ans
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#searchfields) for details.
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
The "searchFields" parameter is crucial to returning a high-quality answer, both
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#searchfields) for details.
++ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#2buse-searchfields-for-field-prioritization) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
Within @search.answers:
+ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general you will see the same documents in the top positions within each array.
-Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Create a semantic query](semantic-how-to-query-request.md) for details.
+Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Configure semantic ranking](semantic-how-to-query-request.md) for details.
## Tips for producing high-quality answers
For best results, return semantic answers on a document corpus having the follow
+ [Semantic search overview](semantic-search-overview.md) + [Semantic ranking algorithm](semantic-ranking.md) + [Similarity ranking algorithm](index-ranking-similarity.md)
-+ [Create a semantic query](semantic-how-to-query-request.md)
++ [Configure semantic ranking](semantic-how-to-query-request.md)
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Title: Create a semantic query
+ Title: Configure semantic search
-description: Set a semantic query type to attach the deep learning models to query processing, inferring intent and context as part of search rank and relevance.
+description: Set a semantic query type to attach the deep learning models of semantic search.
- Previously updated : 12/17/2021+ Last updated : 11/01/2022
-# Create a query that invokes semantic ranking and returns semantic captions
+# Configure semantic ranking and return captions in search results
> [!IMPORTANT]
-> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. These features are billable. For more information about, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST APIs, and beta SDKs. This feature is billable. See [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-Semantic search is a premium feature in Azure Cognitive Search that invokes a semantic ranking algorithm over a result set and returns semantic captions (and optionally [semantic answers](semantic-answers.md)), with highlights over the most relevant terms and phrases. Both captions and answers are returned in query requests formulated using the "semantic" query type.
+In this article, you'll learn how to invoke a semantic ranking algorithm over a result set, promoting the most semantically relevant results to the top of the stack. You can also get semantic captions, with highlights over the most relevant terms and phrases, and [semantic answers](semantic-answers.md).
-Captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what part of your content has the characteristics of a caption or answer, but it does not compose new sentences or phrases. For this reason, content that includes explanations or definitions work best for semantic search.
+There are two main activities to perform:
+++ Add a semantic configuration to an index++ Add parameters to a query request ## Prerequisites
-+ A Cognitive Search service at a Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), located in one of these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. If you have an existing S1 or greater service in one of these regions, you can enable semantic search on your service without having to create a new one.
++ A search service on Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), in these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe.
-+ [Semantic search enabled on your search service](semantic-search-overview.md#enable-semantic-search).
+ If you have an existing S1 or greater service in one of these regions, you can enable semantic search without having to create a new service.
-+ An existing search index with content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
++ Semantic search [enabled on your search service](semantic-search-overview.md#enable-semantic-search).
-+ A search client for sending queries and updating indexes.
++ An existing search index with rich content in a [supported query language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), another web client, or code that makes REST calls to the preview APIs. [Search explorer](search-explorer.md) in Azure portal can be used to submit a semantic query. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
++ Review the [Semantic search overview](semantic-search-overview.md) if you need an introduction to the feature.
-+ A [query request](/rest/api/searchservice/preview-api/search-documents) must include `queryType=semantic` and other parameters described in this article.
+> [!NOTE]
+> Captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what part of your content has the characteristics of a caption or answer, but it doesn't compose new sentences or phrases. For this reason, content that includes explanations or definitions works best for semantic search.
-## What's a semantic query type?
+## 1 - Choose a client
-In Cognitive Search, a query is a parameterized request that determines query processing and the shape of the response. A *semantic query* has [parameters](#query-using-rest) that invoke the semantic reranking model that can assess the context and meaning of matching results, promote more relevant matches to the top, and return semantic answers and captions.
+You'll need a search client that supports preview APIs on the query request. Here are some options:
-The approach for listing fields in priority order has changed recently, with semanticConfiguration replacing searchFields. If you are currently using searchFields, please update your code to the 2021-04-30-Preview API version and use semanticConfiguration instead.
++ [Search explorer](search-explorer.md) in Azure portal, recommended for initial exploration.
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
++ [Postman Desktop App](https://www.postman.com/downloads/) using the [2021-04-30-Preview REST APIs](/rest/api/searchservice/preview-api/). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests.
-The following request is representative of a minimal semantic query (without answers).
++ [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5) in the Azure SDK for .NET Preview.
-```http
-POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2021-04-30-Preview
-{
- "search": " Where was Alan Turing born?",
- "queryType": "semantic",
- "semanticConfiguration": "my-semantic-config",
- "queryLanguage": "en-us"
-}
-```
++ [Azure.Search.Documents 11.3.0b6](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-search-documents/11.3.0b6/azure.search.documents.aio.html) in the Azure SDK for Python.
-### [**searchFields**](#tab/searchFields)
+## 2 - Create a semantic configuration
-The following request is representative of a minimal semantic query (without answers).
-
-```http
-POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview
-{
- "search": " Where was Alan Turing born?",
- "queryType": "semantic",
- "searchFields": "title,url,body",
- "queryLanguage": "en-us"
-}
-```
---
-As with all queries in Cognitive Search, the request targets the documents collection of a single index. Furthermore, a semantic query undergoes the same sequence of parsing, analysis, scanning, and scoring as a non-semantic query.
+> [!IMPORTANT]
+> A semantic configuration is required for the 2021-04-30-Preview REST APIs, Search explorer, and some versions of the beta SDKs. If you're using the 2020-06-30-preview REST API, skip this step and use the ["searchFields" approach for field prioritization](#2buse-searchfields-for-field-prioritization) instead.
-The difference lies in relevance and scoring. As defined in this preview release, a semantic query is one whose *results* are reranked using a semantic language model, providing a way to surface the matches deemed most relevant by the semantic ranker, rather than the scores assigned by the default similarity ranking algorithm.
+A *semantic configuration* specifies how fields are used in semantic ranking. It gives the underlying models hints about which index fields are most important for semantic ranking, captions, highlights, and answers.
-Only the top 50 matches from the initial results can be semantically ranked, and all results include captions in the response. Optionally, you can specify an **`answer`** parameter on the request to extract a potential answer. For more information, see [Semantic answers](semantic-answers.md).
+You'll add a semantic configuration to your [index definition](/rest/api/searchservice/preview-api/create-or-update-index). The tabbed sections below provide instructions for the REST APIs, Azure portal, and the .NET SDK Preview.
-## Create a semantic configuration
+You can add or update a semantic configuration at any time without rebuilding your index. When you issue a query, you'll add a "semanticConfiguration" parameter (one per query) that specifies which configuration to use for the query.
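For example, a query request references a configuration by name. A minimal fragment, using the "my-semantic-config" name from the examples in this article:

```json
"semanticConfiguration": "my-semantic-config",
```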
-> [!NOTE]
-> Semantic configurations are a new addition to the 2021-04-30-Preview API and are now required for semantic queries. If using 2020-06-30-Preview, **searchFields** is used instead of **semanticConfiguration**. We recommend upgrading to 2021-04-30-Preview and using **semanticConfiguration** for best results.
+1. Review the properties you'll need to specify. A semantic configuration has a name and at least one each of the following properties:
-In order to get the best results from semantic search, it's important to give the underlying models hints about which fields in your index are most important for semantic ranking, captions, highlights, and answers. To provide that information, you'll need to create a semantic configuration.
+ + **Title field** - A title field should be a concise description of the document, ideally a string that is under 25 words. This field could be the title of the document, name of the product, or item in your search index. If you don't have a title in your search index, leave this field blank.
+ + **Content fields** - Content fields should contain text in natural language form. Common examples of content are the body of a document, the description of a product, or other free-form text.
+ + **Keyword fields** - Keyword fields should be a list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item.
-A semantic configuration contains properties to list three different types of fields, which map back to the inputs the underlying models for semantic search expect:
+ You can only specify one title field but you can specify as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields may get truncated.
-+ **Title field** - A title field should be a concise description of the document, ideally a string that is under 25 words. This could be the title of the document, name of the product, or item in your search index. If you don't have a title in your search index, leave this field blank.
-+ **Content fields** - Content fields should contain text in natural language form. Common examples of content are the text of a document, the description of a product, or other free-form text.
-+ **Keyword fields** - Keyword fields should be a list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item.
+1. For the above properties, determine which fields to assign.
-You can only specify a single title field as part of your semantic configuration but you can specify as many content and keyword fields as you like. However, it's important that you list the content and keyword fields in priority order because lower priority fields may get truncated. Fields listed first will be given higher priority.
+ A field must be a [supported data type](/rest/api/searchservice/supported-data-types) and it should contain strings. If you happen to include an invalid field, there's no error, but those fields won't be used in semantic ranking.
-You're only required to specify one field between `titleField`, `prioritizedContentFields`, and `prioritizedKeywordsFields`, but it's best to add the fields to your semantic configuration if they exist in your search index.
+ | Data type | Example from hotels-sample-index |
+ |--|-|
+ | Edm.String | HotelName, Category, Description |
+ | Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
+ | Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-Similar to [scoring profiles](index-add-scoring-profiles.md), semantic configurations are a part of your [index definition](/rest/api/searchservice/preview-api/create-or-update-index) and can be updated at any time without rebuilding your index. When you issue a query, you'll add the `semanticConfiguration` that specifies which semantic configuration to use for the query.
+ > [!NOTE]
+ > Subfields of Collection(Edm.ComplexType) fields aren't currently supported by semantic search and won't be used for semantic ranking, captions, or answers.
### [**Azure portal**](#tab/portal)
Similar to [scoring profiles](index-add-scoring-profiles.md), semantic configura
### [**REST API**](#tab/rest)
- ```json
-"semantic": {
- "configurations": [
- {
- "name": "my-semantic-config",
- "prioritizedFields": {
- "titleField": {
- "fieldName": "hotelName"
- },
- "prioritizedContentFields": [
- {
- "fieldName": "description"
- },
- {
- "fieldName": "description_fr"
- }
- ],
- "prioritizedKeywordsFields": [
- {
- "fieldName": "tags"
- },
- {
- "fieldName": "category"
+1. Formulate a [Create or Update Index](/rest/api/searchservice/preview-api/create-or-update-index?branch=main) request.
+
+1. Add a semantic configuration to the index definition, perhaps after `scoringProfiles` or `suggesters`.
+
+ ```json
+ "semantic": {
+ "configurations": [
+ {
+ "name": "my-semantic-config",
+ "prioritizedFields": {
+ "titleField": {
+ "fieldName": "hotelName"
+ },
+ "prioritizedContentFields": [
+ {
+ "fieldName": "description"
+ },
+ {
+ "fieldName": "description_fr"
+ }
+ ],
+ "prioritizedKeywordsFields": [
+ {
+ "fieldName": "tags"
+ },
+ {
+ "fieldName": "category"
+ }
+ ]
}
- ]
- }
+ }
+ ]
}
- ]
- }
-```
+ ```
### [**.NET SDK**](#tab/sdk)
+Use the [SemanticConfiguration class](/dotnet/api/azure.search.documents.indexes.models.semanticconfiguration?view=azure-dotnet-preview&preserve-view=true) in the Azure SDK for .NET Preview.
+
+```c#
+var definition = new SearchIndex(indexName, searchFields);
adminClient.CreateOrUpdateIndex(definition);
-To see an example of creating a semantic configuration and using it to issue a semantic query, check out the
+> [!TIP]
+> To see an example of creating a semantic configuration and using it to issue a semantic query, check out the
[semantic search Postman sample](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/semantic-search).
-### Allowed data types
+## 2b - Use searchFields for field prioritization
-When selecting fields for your semantic configuration, choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types). If you happen to include an invalid field, there is no error, but those fields won't be used in semantic ranking.
+This step is only for solutions using the 2020-06-30-Preview REST API or a beta SDK that doesn't support semantic configurations. Instead of setting field prioritization in the index through a semantic configuration, you'll set the priority at query time, using the "searchFields" parameter of a query.
-| Data type | Example from hotels-sample-index |
-|--|-|
-| Edm.String | HotelName, Category, Description |
-| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
-| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-
-> [!NOTE]
-> Subfields of Collection(Edm.ComplexType) fields are not currently supported by semantic search and won't be used for semantic ranking, captions, or answers.
-
-## Query in Azure portal
-
-[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To create a semantic query in the portal, follow the steps below:
-
-1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
-
-1. Click **Search explorer** at the top of the overview page.
-
-1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-
-1. In Search explorer, set query options that enable semantic queries, semantic configurations, and spell correction. You can also paste the required query parameters into the query string.
--
-## Query using REST
-
-Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) to formulate the request programmatically. A response includes captions and highlighting automatically. If you want spelling correction or answers in the response, add **`speller`** or **`answers`** to the request.
-
-The following example uses the [hotels-sample-index](search-get-started-portal.md) to create a semantic query request with spell check, semantic answers, and captions:
-
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-
-```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview     
-{
- "search": "newer hotel near the water with a great restaurant",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "semanticConfiguration": "my-semantic-config",
- "speller": "lexicon",
- "answers": "extractive|count-3",
- "captions": "extractive|highlight-true",
- "highlightPreTag": "<strong>",
- "highlightPostTag": "</strong>",
- "select": "HotelId,HotelName,Description,Category",
- "count": true
-}
-```
-
-The following table summarizes the parameters used in a semantic query. For a list of all parameters in a request, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents)
-
-| Parameter | Type | Description |
-|--|-|-|
-| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
-| queryLanguage | String | Required for semantic queries. The lexicon you specify applies equally to semantic ranking, captions, answers, and spell check. For more information, see [supported languages (REST API reference)](/rest/api/searchservice/preview-api/search-documents#queryLanguage). |
-| semanticConfiguration | String | Required for semantic queries. The name of your [semantic configuration](#create-a-semantic-configuration). </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Create a semantic configuration](#create-a-semantic-configuration). |
-| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
-| answers |String | Optional parameters that specify whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of ten. The default is one. This example shows a count of three answers: `extractive|count-3`. For more information, see [Return semantic answers](semantic-answers.md).|
-| captions |String | Optional parameters that specify whether semantic captions are included in the result. Currently, only "extractive" is implemented. Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`. For more information, see [Return semantic answers](semantic-answers.md).|
-
-### [**searchFields**](#tab/searchFields)
+Using "searchFields" for field prioritization was an early implementation detail that won't be supported once semantic search exits public preview. We encourage you to use semantic configurations if your application requirements allow it.
```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview     
-{
- "search": "newer hotel near the water with a great restaurant",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "searchFields": "HotelName,Category,Description",
- "speller": "lexicon",
- "answers": "extractive|count-3",
- "highlightPreTag": "<strong>",
- "highlightPostTag": "</strong>",
- "select": "HotelId,HotelName,Description,Category",
- "count": true
+POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview
+{
+ "search": " Where was Alan Turing born?",
+ "queryType": "semantic",
+ "searchFields": "title,url,body",
+ "queryLanguage": "en-us"
+}
```
-The following table summarizes the parameters used in a semantic query. For a list of all parameters in a request, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents)
-
-| Parameter | Type | Description |
-|--|-|-|
-| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
-| queryLanguage | String | Required for semantic queries. The lexicon you specify applies equally to semantic ranking, captions, answers, and spell check. For more information, see [supported languages (REST API reference)](/rest/api/searchservice/preview-api/search-documents#queryLanguage). |
-| searchFields | String | A comma-delimited list of searchable fields. Specifies the fields over which semantic ranking occurs, from which captions and answers are extracted. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Step 2: Set searchFields](#searchfields). |
-| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
-| answers |String | Optional parameters that specify whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of ten. The default is one. This example shows a count of three answers: `extractive|count-3`. For more information, see [Return semantic answers](semantic-answers.md).|
----
-### Formulate the request
-
-This section steps through query formulation.
-
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-
-#### Step 1: Set queryType and queryLanguage
-
-Add the following parameters to the rest. Both parameters are required.
-
-```json
-"queryType": "semantic",
-"queryLanguage": "en-us",
-```
-
-The queryLanguage must be a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage) and it must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then queryLanguage should also be French language variant.
+Field order is critical because the semantic ranker limits the amount of content it can process while still delivering a reasonable response time. Content from fields at the start of the list are more likely to be included; content from the end could be truncated if the maximum limit is reached. For more information, see [Pre-processing during semantic ranking](semantic-ranking.md#pre-processing).
-In a query request, if you are also using [spell correction](speller-how-to-add.md), the queryLanguage you set applies equally to speller, answers, and captions. There is no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you are using that feature, you must set queryLanguage to one from that list.
++ If you're specifying just one field, choose a descriptive field where the answer to semantic queries might be found, such as the main content of a document.
-While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
++ For two or more fields in searchFields:
-<a name="searchfields"></a>
+ + The first field should always be concise (such as a title or name), ideally a string that is under 25 words.
-#### Step 2: Set semanticConfiguration
+ + If the index has a URL field that is human readable such as `www.domain.com/name-of-the-document-and-other-details` (rather than machine focused, such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there's no concise title field).
-Add a semanticConfiguration to the request. A semantic configuration is required and important for getting the best results from semantic search.
+ + Follow the above fields with other descriptive fields, where the answer to semantic queries may be found, such as the main content of a document.
-```json
-"semanticConfiguration": "my-semantic-config",
-```
+When setting "searchFields", choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types):
-The [semantic configuration](#create-a-semantic-configuration) is used to tell semantic search's models which fields are most important for reranking search results based on semantic similarity.
+| Data type | Example from hotels-sample-index |
+|--|-|
+| Edm.String | HotelName, Category, Description |
+| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
+| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
+If you happen to include an invalid field, there's no error, but those fields won't be used in semantic ranking.
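For example, applying the ordering guidance above to the hotels-sample-index, a prioritized "searchFields" value might look like the following sketch (the field choices are illustrative):

```json
"searchFields": "HotelName,Description,Tags",
```

HotelName serves as the concise title-like field, followed by the content-rich Description field and the Tags keyword collection.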
-#### Step 3: Remove or bracket query features that bypass relevance scoring
+## 3 - Avoid features that bypass relevance scoring
-Several query capabilities in Cognitive Search do not undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you will not get relevance scores or semantic ranking on your results:
+Several query capabilities in Cognitive Search don't undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you won't get relevance scores or semantic ranking on your results:
+ Filters, fuzzy search queries, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content. Search scores for all of the above query forms are a uniform 1.0, and won't provide meaningful input for semantic ranking.

+ Sorting (orderBy clauses) on specific fields will also override search scores and the semantic score. Given that the semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned.
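As an illustration, the following sketch combines "queryType": "semantic" with an "orderby" clause (the sort expression is hypothetical) and would fail with HTTP 400 rather than silently skipping semantic ranking:

```json
{
  "search": "newer hotel near the water",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "orderby": "Rating desc"
}
```

Remove the sort expression, or switch to a non-semantic query type, if you need explicit ordering.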
-#### Step 4: Add answers and captions
-
-Optionally, add "answers" and "captions" if you want to include additional processing that provides an answer and captions. For details about this parameter, see [How to specify semantic answers](semantic-answers.md).
-
-```json
-"answers": "extractive|count-3",
-"captions": "extractive|highlight-true",
-```
-
-Answers (and captions) are extracted from passages found in fields listed in the semantic configuration. This is why you want to include content-rich fields in the prioritizedContentFields of a semantic configuration, so that you can get the best answers and captions in a response. Answers are not guaranteed on every request. To get an answer, the query must look like a question and the content must include text that looks like an answer.
+## 4 - Set up the query
-#### Step 5: Add other parameters
+Your next step is adding parameters to the query request. To be successful, your query should be full text search (using the "search" parameter to pass in a string), and the index should contain text fields with rich semantic content.
-Set any other parameters that you want in the request. Parameters such as [speller](speller-how-to-add.md), [select](search-query-odata-select.md), and count improve the quality of the request and readability of the response.
+### [**Azure portal**](#tab/portal-query)
-```json
-"speller": "lexicon",
-"select": "HotelId,HotelName,Description,Category",
-"count": true,
-"highlightPreTag": "<mark>",
-"highlightPostTag": "</mark>",
-```
+[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To configure semantic ranking in the portal, follow the steps below:
-Highlight styling is applied to captions in the response. You can use the default style, or optionally customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
-
-### [**searchFields**](#tab/searchFields)
-
-#### Step 1: Set queryType and queryLanguage
-
-Add the following parameters to the rest. Both parameters are required.
-
-```json
-"queryType": "semantic",
-"queryLanguage": "en-us",
-```
-
-The queryLanguage must be a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage) and it must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then queryLanguage should also be French language variant.
-
-In a query request, if you are also using [spell correction](speller-how-to-add.md), the queryLanguage you set applies equally to speller, answers, and captions. There is no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you are using that feature, you must set queryLanguage to one from that list.
-
-While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
+1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
-<a name="searchfields"></a>
+1. Select **Search explorer** at the top of the overview page.
-#### Step 2: Set searchFields
+1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-Add searchFields to the request. It's optional but strongly recommended.
+1. In Search explorer, set query options that enable semantic queries, semantic configurations, and spell correction. You can also paste the required query parameters into the query string.
-```json
-"searchFields": "HotelName,Category,Description",
-```
-The searchFields parameter is used to identify passages to be evaluated for "semantic similarity" to the query. For the preview, we do not recommend leaving searchFields blank as the model requires a hint as to which fields are the most important to process.
+### [**REST API**](#tab/rest-query)
-In contrast with other parameters, searchFields is not new. You might already be using searchFields in existing code for simple or full Lucene queries. If so, revisit how the parameter is used so that you can check for field order when switching to a semantic query type.
+Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) to formulate the request.
-##### Allowed data types
+A response includes an "@search.rerankerScore" automatically. If you want captions, spelling correction, or answers in the response, add "captions", "speller", or "answers" to the request.
-When setting searchFields, choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types). If you happen to include an invalid field, there is no error, but those fields won't be used in semantic ranking.
+The following example uses the [hotels-sample-index](search-get-started-portal.md) to demonstrate semantic ranking with spell check, semantic answers, and captions.
-| Data type | Example from hotels-sample-index |
-|--|-|
-| Edm.String | HotelName, Category, Description |
-| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
-| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
+1. Paste the following request into a web client as a template. Replace the service name and index name with valid values.
-##### Order of fields in searchFields
+ ```http
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview     
+ {
+ "queryType": "semantic",
+ "queryLanguage": "en-us",
+ "search": "newer hotel near the water with a great restaurant",
+ "semanticConfiguration": "my-semantic-config",
+ "searchFields": "",
+ "speller": "lexicon",
+ "answers": "extractive|count-3",
+ "captions": "extractive|highlight-true",
+ "highlightPreTag": "<strong>",
+ "highlightPostTag": "</strong>",
+ "select": "HotelId,HotelName,Description,Category",
+ "count": true
+ }
+ ```
-Field order is critical because the semantic ranker limits the amount of content it can process while still delivering a reasonable response time. Content from fields at the start of the list are more likely to be included; content from the end could be truncated if the maximum limit is reached. For more information, see [Pre-processing during semantic ranking](semantic-ranking.md#pre-processing).
+1. Set "queryType" to "semantic".
-+ If you're specifying just one field, choose a descriptive field where the answer to semantic queries might be found, such as the main content of a document.
+ In other queries, the "queryType" is used to specify the query parser. In semantic search, it's set to "semantic". For the "search" field, you can specify queries that conform to the [simple syntax](query-simple-syntax.md).
-+ For two or more fields in searchFields:
+1. Set "queryLanguage" to a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
- + The first field should always be concise (such as a title or name), ideally a string that is under 25 words.
+ The "queryLanguage" must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then "queryLanguage" should also be French language variant.
- + If the index has a URL field that is human readable such as `www.domain.com/name-of-the-document-and-other-details`, (rather than machine focused, such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there is no concise title field).
+ In a query request, if you're also using [spell correction](speller-how-to-add.md), the "queryLanguage" you set applies equally to speller, answers, and captions. There's no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you're using that feature, you must set queryLanguage to one from that list.
- + Follow the above fields with other descriptive fields, where the answer to semantic queries may be found, such as the main content of a document.
+ While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
-#### Step 3: Remove or bracket query features that bypass relevance scoring
+1. Set "search" to a full text search query based on the [simple syntax](query-simple-syntax.md). Semantic search is an extension of full text search, so while this parameter isn't required, you won't get an expected outcome if it's null.
-Several query capabilities in Cognitive Search do not undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you will not get graduated relevance scores that feed into the semantic re-ranking of results:
+1. Set "semanticConfiguration" to a [predefined semantic configuration](#2create-a-semantic-configuration) that's embedded in your index, assuming your client supports it. For some clients and API versions, "semanticConfiguration" is required and important for getting the best results from semantic search.
-+ Empty search (`search=0`), wildcard search, fuzzy search, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content, returning an un-scored result set. An un-scored result set assigns a uniform 1.0 on each match, and won't provide meaningful input for semantic ranking. Up to 50 documents will still be passed to the re-ranker, but the document selection is arbitrary.
+1. Set "searchFields" to a prioritized list of searchable string fields. If you didn't use a semantic configuration, this field provides important hints to the underlying models as to which fields the most important. If you do have a semantic configuration, setting this parameter is still useful because it scopes the query to high-value fields.
-+ Sorting (orderBy clauses) on specific fields will also override search scores and semantic score. Given that semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned.
+ In contrast with other parameters, searchFields isn't new. You might already be using "searchFields" in existing code for simple or full Lucene queries. If so, revisit how the parameter is used so that you can check for field order when switching to a semantic query type.
-#### Step 4: Add answers
+1. Set "speller" to correct misspelled terms before they reach the search engine. This parameter is optional and not specific to semantic queries. For more information, see [Add spell correction to queries](speller-how-to-add.md).
-Optionally, add "answers" if you want to include additional processing that provides an answer. For details about this parameter, see [How to specify semantic answers](semantic-answers.md).
+1. Set "answers" to specify whether [semantic answers](semantic-answers.md) are included in the result. Currently, the only valid value for this parameter is "extractive". Answers can be configured to return a maximum of 10. The default is one. This example shows a count of three answers: `extractive|count-3`.
-```json
-"answers": "extractive|count-3",
-```
+ Answers are extracted from passages found in fields listed in the semantic configuration. This behavior is why you want to include content-rich fields in the prioritizedContentFields of a semantic configuration, so that you can get the best answers and captions in a response. Answers aren't guaranteed on every request. To get an answer, the query must look like a question and the content must include text that looks like an answer.
-Answers (and captions) are extracted from passages found in fields listed in searchFields. This is why you want to include content-rich fields in searchFields, so that you can get the best answers in a response. Answers are not guaranteed on every request. The query must look like a question, and the content must include text that looks like an answer.
+1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
-#### Step 5: Add other parameters
+ Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
-Set any other parameters that you want in the request. Parameters such as [speller](speller-how-to-add.md), [select](search-query-odata-select.md), and count improve the quality of the request and readability of the response.
+1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions.
-```json
-"speller": "lexicon",
-"select": "HotelId,HotelName,Description,Category",
-"count": true,
-"highlightPreTag": "<mark>",
-"highlightPostTag": "</mark>",
-```
+ Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
-Highlight styling is applied to captions in the response. You can use the default style, or optionally customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
+1. Set ["select"](search-query-odata-select.md) to specify which fields are returned in the response, and "count" to return the number of matches in the index. These parameters improve the quality of the request and readability of the response.
-
+1. Send the request to execute the query and return results.
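Pulling the required parameters together, a minimal semantic query, without spell check, answers, or captions, can be as small as the following sketch:

```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview
{
  "search": "newer hotel near the water",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config"
}
```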
-## Query using Azure SDKs
+### [**.NET SDK**](#tab/dotnet-query)
-Beta versions of the Azure SDKs include support for semantic search. Because the SDKs are beta versions, there is no documentation or samples, but you can refer to the REST API section above for insights on how the APIs should work.
+Beta versions of the Azure SDKs include support for semantic search. Because the SDKs are beta versions, there's no documentation or samples, but you can refer to the REST API section above for insights on how the APIs should work.
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
+The following beta versions support semantic configuration:
| Azure SDK | Package |
|--|--|
Beta versions of the Azure SDKs include support for semantic search. Because the
| JavaScript | [azure/search-documents 11.3.0-beta.5](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.5) |
| Python | [azure-search-documents 11.3.0b6](https://pypi.org/project/azure-search-documents/11.3.0b6/) |
-### [**searchFields**](#tab/searchFields)
+These beta versions use "searchFields" for field prioritization:
| Azure SDK | Package |
|--|--|
Beta versions of the Azure SDKs include support for semantic search. Because the
-## Evaluate the response
+## 5 - Evaluate the response
-As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. It includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
+Only the top 50 matches from the initial results can be semantically ranked. As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. A response includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
-In a semantic query, the response has additional elements: a new semantically ranked relevance score, captions in plain text and with highlights, and optionally an answer.
+In semantic search, the response has more elements: a new semantically ranked relevance score, an optional caption in plain text and with highlights, and an optional [answer](semantic-answers.md). If your results don't include these extra elements, then your query might be misconfigured. As a first step towards troubleshooting the problem, check the semantic configuration to ensure it's specified in both the index definition and query.
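For reference, the semantic additions in each result have the following shape (the values here are illustrative, not actual output):

```json
"@search.rerankerScore": 2.613,
"@search.captions": [
  {
    "text": "Oceanside hotel with a secluded beach and an on-site restaurant.",
    "highlights": "Oceanside hotel with a secluded beach and an on-site <strong>restaurant</strong>."
  }
]
```

Answers, when requested and found, appear once per response in a top-level "@search.answers" array.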
-In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This is useful when individual fields are too dense for the search results page.
+In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This approach is useful when individual fields are too dense for the search results page.
-The response for the above example query returns the following match as the top pick. Captions are returned automatically, with plain text and highlighted versions. Answers are omitted from the example because one could not be determined for this particular query and corpus.
+The response for the above example query returns the following match as the top pick. Captions are returned because the "captions" property is set, with plain text and highlighted versions. Answers are omitted from the example because one couldn't be determined for this particular query and corpus.
```json
"@odata.count": 35,
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Before scoring for relevance, content must be reduced to a manageable number of
Whatever the document count, whether one or 50, the initial result set establishes the first iteration of the document corpus for semantic ranking.
-1. Next, across the corpus, the contents of each field in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration) are extracted and combined into a long string.
+1. Next, across the corpus, the contents of each field in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) are extracted and combined into a long string.
1. After string consolidation, any strings that are excessively long are trimmed to ensure the overall length meets the input requirements of the summarization step.
A [semantic answer](semantic-answers.md) will also be returned if you specified
## Next steps
-Semantic ranking is offered on Standard tiers, in specific regions. For more information about availability and sign up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the ranking and response structures of semantic search. To get started, [Create a semantic query](semantic-how-to-query-request.md).
+Semantic ranking is offered on Standard tiers, in specific regions. For more information about availability and sign-up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the ranking and response structures of semantic search. To get started, [Configure semantic ranking](semantic-how-to-query-request.md).
Alternatively, review the following articles about default ranking. Semantic ranking depends on the similarity ranker to return the initial results. Knowing about query execution and ranking will give you a broad understanding of how the entire process works.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Semantic search is a collection of features that improve the quality of search r
||-|
| [Semantic re-ranking](semantic-ranking.md) | Uses the context or semantic meaning of a query to compute a new relevance score over existing results. |
| [Semantic captions and highlights](semantic-how-to-query-request.md) | Extracts sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. Captions that summarize a result are useful when individual content fields are too dense for the results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. |
-| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document have text with the characteristics of an answer. |
+| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document has text with the characteristics of an answer. |
## How semantic ranking works
To re-enable semantic search, rerun the above request, setting "semanticSearch"
## Next steps
-[Enable semantic search](#enable-semantic-search) for your search service and follow the documentation on how to [create a semantic query](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
+[Enable semantic search](#enable-semantic-search) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/speller-how-to-add.md
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
## Spell correction with semantic search
-This query, with typos in every term except one, undergoes spelling corrections to return relevant results. To learn more, see [Create a semantic query](semantic-how-to-query-request.md).
+This query, with typos in every term except one, undergoes spelling corrections to return relevant results. To learn more, see [Configure semantic ranking](semantic-how-to-query-request.md).
```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Previously updated : 08/30/2022
Last updated : 11/01/2022
ms.devlang: csharp
# 2 - Create and load Search Index with .NET

Continue to build your Search-enabled website by:
-* Creating a Search resource with the VS Code extension
-* Creating a new index and importing data with .NET using the sample script and Azure SDK [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/).
+* Create a Search resource with the VS Code extension
+* Create a new index
+* Import data with .NET using the sample script and Azure SDK [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/).
## Create an Azure Search resource
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio code showing the Azure explorer bar, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio code showing the Azure explorer bar, right-click on your Search resource and select Copy Admin Key.":::
-1. Keep this admin key, you will need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
+1. Keep this admin key; you'll need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
## Prepare the bulk import script for Search
The script uses the Azure SDK for Cognitive Search:
* [NuGet package Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/) * [Reference Documentation](/dotnet/api/overview/azure/search)
-1. In Visual Studio Code, open the `Program.cs` file in the subdirectory, `search-website/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK:
+1. In Visual Studio Code, open the `Program.cs` file in the subdirectory, `search-website-functions-v4/bulk-insert`, and replace the following variables with your own values to authenticate with the Azure Search SDK:
* YOUR-SEARCH-RESOURCE-NAME * YOUR-SEARCH-ADMIN-KEY
- :::code language="csharp" source="~/azure-search-dotnet-samples/search-website/bulk-insert/Program.cs" highlight="16-19" :::
+ :::code language="csharp" source="~/azure-search-dotnet-samples/search-website-functions-v4/bulk-insert/Program.cs" highlight="16-19, 21-23, 32, 49" :::
-1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website/bulk-insert`, then run the following command to install the dependencies.
+1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, then run the following command to install the dependencies.
```bash
dotnet restore
```
The script uses the Azure SDK for Cognitive Search:
## Run the bulk import script for Search
-1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-insert`, to run the following bash command to run the `Program.cs` script:
+1. Continue using the integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, and run the following bash command to execute the `Program.cs` script:
```bash
dotnet run
```
The script uses the Azure SDK for Cognitive Search:
## Review the new Search Index
-Once the upload completes, the Search Index is ready to use. Review your new Index.
-
-1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
-
-1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
-
-## Copy your Search resource name
-
-Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
-
-> [!CAUTION]
-> While you may be tempted to use your Search admin key in the Azure Function, that isn't following the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
## Rollback bulk import file changes
-Use the following git command in the VS Code integrated terminal at the `bulk-insert` directory, to rollback the changes. They are not needed to continue the tutorial and you shouldn't save or push these secrets to your repo.
-```git
-git checkout .
-```
+## Copy your Search resource name
## Next steps
search Tutorial Csharp Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-deploy-static-web-app.md
Previously updated : 08/30/2022
Last updated : 11/01/2022
ms.devlang: csharp

# 3 - Deploy the search-enabled .NET website
-Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
-
-The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
-
-## Create a Static Web App in Visual Studio Code
-
-1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
-
-1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
-
-1. If you see a pop-up window in VS Code asking which branch you want to deploy from, select the default branch, usually **master** or **main**.
-
- This setting means only changes you commit to that branch are deployed to your static web app.
-
-1. If you see a pop-up window asking you to commit your changes, do not do this. The secrets from the bulk import step should not be committed to the repository.
-
- To rollback the changes, in VS Code select the Source Control icon in the Activity bar, then select each changed file in the Changes list and select the **Discard changes** icon.
-
-1. Follow the prompts to provide the following information:
-
- |Prompt|Enter|
- |--|--|
- |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name such as, `joansmith-azure-search-dotnet-samples`. |
- |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
- |Select a SKU| Select the free SKU for this tutorial.|
- |Choose build preset to configure default project structure.|Select **Custom**|
- |Select the location of your application code|`search-website`<br><br>This is the path, from the root of the repository, to your Azure Static web app. |
- |Select the location of your Azure Function code|`search-website/api`<br><br>This is the path, from the root of the repository, to your Azure Function app. |
- |Enter the path of your build output...|`build`<br><br>This is the path, from your Azure Static web app, to your generated files.|
- |Select a location for new resources.|Select a region close to you.|
-
-1. The resource is created, select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo.
-
- The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
-
- Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
-
-## Get Cognitive Search query key in Visual Studio Code
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
-
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
-
-1. Keep this query key; you'll need to use it in the next section. The query key can query your index.
-
-## Add configuration settings in Azure portal
-
-The Azure Function app won't return Search data until the Search secrets are in settings.
-
-1. Select **Azure** from the Activity Bar.
-1. Right-click on your Static web app resource then select **Open in Portal**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Right-click on your JavaScript Static web app resource then select Open in Portal.":::
-
-1. Select **Configuration** then select **+ Add**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Configuration then select Add for your JavaScript app.":::
-
-1. Add each of the following settings:
-
- |Setting|Your Search resource value|
- |--|--|
- |SearchApiKey|Your Search query key|
- |SearchServiceName|Your Search resource name|
- |SearchIndexName|`good-books`|
- |SearchFacets|`authors*,language_code`|
-
- Azure Cognitive Search requires different syntax for filtering collections than it does for strings. Add a `*` after a field name to denote that the field is of type `Collection(Edm.String)`. This allows the Azure Function to add filters correctly to queries.
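For example, the two field types take different OData filter expressions; a brief sketch (the author value is illustrative):

```csharp
// language_code is a plain Edm.String field, so simple equality works.
string languageFilter = "language_code eq 'en'";

// authors is Collection(Edm.String), so the filter iterates the collection.
string authorsFilter = "authors/any(a: a eq 'Jane Doe')";
```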
-
-1. Select **Save** to save the settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Save to save the settings for your JavaScript app..":::
-
-1. Return to VS Code.
-1. Refresh your Static web app to see the Static web app's application settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/visual-studio-code-extension-fresh-resource.png" alt-text="Refresh your JavaScript Static web app to see the Static web app's application settings.":::
-
-## Use search in your Static web app
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Static web apps` area and find the Static web app you created for this tutorial.
-1. Right-click the Static Web App name and select **Browse site**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
-
-1. Select **Open** in the pop-up dialog.
-1. In the website search bar, enter a search query such as `code` _slowly_, so the suggest feature can suggest book titles. Select a suggestion or continue entering your own query. Press Enter when you've completed your search query.
-1. Review the results then select one of the books to see more details.
-
-## Clean up resources
-
-To clean up the resources created in this tutorial, delete the resource group.
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
-1. Right-click the resource group name then select **Delete**.
- This deletes both the Search and Static web app resources.
-1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
- ## Next steps
search Tutorial Csharp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md
Previously updated : 08/30/2022 Last updated : 11/01/2022 ms.devlang: csharp
ms.devlang: csharp
This tutorial builds a website to search through a catalog of books, then deploys the website to an Azure Static Web App. The application is available:
-* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website)
+* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## What does the sample do? -
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the search index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all of that book's details stored in the search index.
--
-The search experience includes:
-
-* Search – provides search functionality for the application.
-* Suggest – provides suggestions as the user is typing in the search bar.
-* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4) includes the following:
|App|Purpose|GitHub<br>Repository<br>Location| |--|--|--|
-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/src)|
-|Server|Azure .NET Function app (business layer) - calls the Azure Cognitive Search API using .NET SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/api)|
-|Bulk insert|.NET file to create the index and add documents to it.|[/search-website/bulk-insert](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/bulk-insert)|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/client)|
+|Server|Azure .NET Function app (business layer) - calls the Azure Cognitive Search API using .NET SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/api)|
+|Bulk insert|.NET file to create the index and add documents to it.|[/search-website-functions-v4/bulk-insert](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/bulk-insert)|
## Set up your development environment Install the following for your local development environment. -- [.NET 5](https://dotnet.microsoft.com/download/dotnet/5.0)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0)
- [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
- [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch) - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional:
Forking the sample repository is critical to be able to deploy the Static Web Ap
Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
-1. At a bash terminal, download the sample application to your local computer.
+1. At a Bash terminal, download your forked sample application to your local computer.
Replace `YOUR-GITHUB-ALIAS` with your GitHub alias.
Forking the sample repository is critical to be able to deploy the Static Web Ap
git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-dotnet-samples ```
-1. In Visual Studio Code, open your local folder of the cloned repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
+1. At the same Bash terminal, go into your forked repository for this website search example:
-## Create a resource group for your Azure resources
+ ```bash
+ cd azure-search-dotnet-samples
+ ```
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In Resources, select Add (**+**), and then select **Create Resource Group**.
+1. Use the Visual Studio Code command `code .` to open your forked repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
-1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
-1. Select a location close to you.
-1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
+ ```bash
+ code .
+ ```
+
+## Create a resource group for your Azure resources
- Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
## Next steps
search Tutorial Csharp Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-search-query-integration.md
Previously updated : 04/23/2021 Last updated : 11/01/2022 ms.devlang: csharp # 4 - .NET Search integration cheat sheet
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know.
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know.
The application is available:
-* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website)
+* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## Azure SDK Azure.Search.Documents
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
## Configure secrets in a local.settings.json file
-1. Create a new file named `local.settings.json` at `./api/` and copy the following JSON object into the file.
-
- ```json
- {
- "IsEncrypted": false,
- "Values": {
- "AzureWebJobsStorage": "",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "SearchApiKey": "YOUR_SEARCH_QUERY_KEY",
- "SearchServiceName": "YOUR_SEARCH_RESOURCE_NAME",
- "SearchIndexName": "good-books"
- }
- }
- ```
-
-1. Change the following to your own Search resource values (a sketch of using them follows this list):
- * YOUR_SEARCH_RESOURCE_NAME
- * YOUR_SEARCH_QUERY_KEY
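At run time, Azure Functions exposes each entry under `Values` as an environment variable. A minimal sketch (assuming `Azure.Search.Documents`; not the sample's exact code) of building a client from those settings:

```csharp
using System;
using Azure;
using Azure.Search.Documents;

// Each "Values" entry in local.settings.json surfaces as an environment variable.
string serviceName = Environment.GetEnvironmentVariable("SearchServiceName");
string apiKey = Environment.GetEnvironmentVariable("SearchApiKey");
string indexName = Environment.GetEnvironmentVariable("SearchIndexName");

var searchClient = new SearchClient(
    new Uri($"https://{serviceName}.search.windows.net"),
    indexName,
    new AzureKeyCredential(apiKey));
```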
## Azure Function: Search the catalog
-The `Search` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Search.cs) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The `Search` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Search.cs) takes a search term and searches across the documents in the Search Index, returning a list of matches.
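At its core, such a function boils down to a single `SearchAsync` call. A hedged sketch, assuming a `searchClient` built as in the settings example above (the search term and page size are illustrative):

```csharp
using System;
using Azure.Search.Documents;
using Azure.Search.Documents.Models;

// Size is an illustrative page size; IncludeTotalCount returns the match count.
var options = new SearchOptions { Size = 10, IncludeTotalCount = true };
SearchResults<SearchDocument> results =
    (await searchClient.SearchAsync<SearchDocument>("code", options)).Value;

// Stream the matching documents back; "title" is a field in the good-books index.
await foreach (SearchResult<SearchDocument> result in results.GetResultsAsync())
{
    Console.WriteLine(result.Document["title"]);
}
```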
The Azure Function pulls in the Search configuration information, and fulfills the query. ## Client: Search from the catalog Call the Azure Function in the React client with the following code. ## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
-The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/bulk-insert/BookSearchIndex.cs) used during bulk upload.
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/bulk-insert/BookSearchIndex.cs) used during bulk upload.
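In SDK terms, the suggester name is passed alongside the partial search term; a short sketch with the same assumed `searchClient`:

```csharp
using System;
using Azure.Search.Documents.Models;

// "sg" must match the suggester name defined in the index schema.
SuggestResults<SearchDocument> suggestions =
    (await searchClient.SuggestAsync<SearchDocument>("cod", "sg")).Value;

foreach (SearchSuggestion<SearchDocument> suggestion in suggestions.Results)
{
    Console.WriteLine(suggestion.Text);
}
```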
## Client: Suggestions from the catalog
-The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `\client\src\components\SearchBar\SearchBar.js` as part of component initialization:
## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Lookup.cs) takes a ID and returns the document object from the Search Index.
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Lookup.cs) takes an ID and returns the document object from the Search Index.
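The SDK call behind a lookup is `GetDocumentAsync`; a short sketch with the same assumed `searchClient` (the key value is illustrative):

```csharp
using System;
using Azure.Search.Documents.Models;

// Retrieve one document by its key; "1" stands in for a real document ID.
SearchDocument book =
    (await searchClient.GetDocumentAsync<SearchDocument>("1")).Value;
Console.WriteLine(book["title"]);
```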
## Client: Get specific document
-This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization:
+This function API is called in the React app at `\client\src\pages\Details\Detail.js` as part of component initialization:
## C# models to support function app The following models are used to support the functions in this app. ## Next steps
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio Code showing Azure explorer, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio Code showing Azure explorer, right-click on your Search resource and select Copy Admin Key.":::
1. Keep this admin key; you'll need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
The script uses the Azure SDK for Cognitive Search:
* YOUR-SEARCH-RESOURCE-NAME * YOUR-SEARCH-ADMIN-KEY
- :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" highlight="16,17" :::
+ :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" highlight="14,16,17,27-38,83,92,119" :::
1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, and run the following command to install the dependencies.
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
Previously updated : 11/17/2021 Last updated : 11/02/2022 ms.devlang: python
ms.devlang: python
# 2 - Create and load Search Index with Python Continue to build your Search-enabled website by:
-* Creating a Search resource with the VS Code extension
-* Creating a new index and importing data with Python using the sample script and Azure SDK [azure-search-documents](https://pypi.org/project/azure-search-documents/).
+* Create a Search resource with the VS Code extension
+* Create a new index
+* Import data with Python using the [sample script](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/bulk-upload/bulk-upload.py) and Azure SDK [azure-search-documents](https://pypi.org/project/azure-search-documents/).
## Create an Azure Cognitive Search resource
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer, right-click on your Search resource and select Copy Admin Key.":::
-1. Keep this admin key, you will need to use it to create objects in [a later section](#prepare-the-bulk-import-script-for-search).
+1. Keep this admin key; you'll need to use it to create objects in [a later section](#prepare-the-bulk-import-script-for-search).
## Prepare the bulk import script for Search
The script uses the Azure SDK for Cognitive Search:
* [PYPI package azure-search-documents](https://pypi.org/project/azure-search-documents/) * [Reference Documentation](/python/api/azure-search-documents)
-1. In Visual Studio Code, open the `bulk_upload.py` file in the subdirectory, `search-website/bulk-upload`, replace the following variables with your own values to authenticate with the Azure Search SDK:
+1. In Visual Studio Code, open the `bulk_upload.py` file in the subdirectory, `search-website-functions-v4/bulk-upload`, and replace the following variables with your own values to authenticate with the Azure Search SDK:
* YOUR-SEARCH-SERVICE-NAME * YOUR-SEARCH-SERVICE-ADMIN-API-KEY
- :::code language="python" source="~/azure-search-python-samples/search-website/bulk-upload/bulk-upload.py" highlight="20,21,69,83,135" :::
+ :::code language="python" source="~/azure-search-python-samples/search-website-functions-v4/bulk-upload/bulk-upload.py" highlight="20-22,46-48,53-54,75-80,83,69,83,135,142" :::
-1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-upload`, and run the following command to install the dependencies.
+1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, and run the following command to install the dependencies.
# [macOS/Linux](#tab/linux-install)
The script uses the Azure SDK for Cognitive Search:
## Run the bulk import script for Search
-1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-upload`, to run the following bash command to run the `bulk_upload.py` script:
+1. Continue using the integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, to run the following bash command, which runs the `bulk_upload.py` script:
# [macOS/Linux](#tab/linux-run)
The script uses the Azure SDK for Cognitive Search:
1. As the code runs, the console displays progress.
-1. When the upload is complete, the last statement printed to the console is "Done. Press any key to close the terminal.".
+1. When the upload is complete, the last statement printed to the console is "Done! Upload complete".
## Review the new Search Index
-Once the upload completes, the search index is ready to use. Review your new index.
-1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
+## Rollback bulk import file changes
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
-
-1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
## Copy your Search resource name
-Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
-
-> [!CAUTION]
-> While you may be tempted to use your Search admin key in the Azure Function, that isn't following the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
## Next steps
search Tutorial Python Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-deploy-static-web-app.md
Previously updated : 08/30/2022 Last updated : 11/02/2022 ms.devlang: python # 3 - Deploy the search-enabled Python website
-Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
-
-The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
-
-## Create a Static Web App in Visual Studio Code
-
-1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
-
-1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
-
-1. Follow the 8 prompts to provide the following information:
-
- |Prompt|Enter|
- |--|--|
 |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name, such as `joansmith-azure-search-javascript-samples`. |
- |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
- |Select a SKU| Select the free SKU for this tutorial.|
- |Choose build preset to configure default project structure.|Select **Custom**|
- |Select the location of your application code|`search-website`<br><br>This is the path, from the root of the repository, to your Azure Static web app. |
- |Select the location of your Azure Function code|`search-website/api`<br><br>This is the path, from the root of the repository, to your Azure Function app. |
- |Enter the path of your build output...|`build`<br><br>This is the path, from your Azure Static web app, to your generated files.|
- |Select a location for new resources.|Select a region close to you.|
-
-1. After the resource is created, select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo.
-
- The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
-
- Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
-
-## Get Cognitive Search query key in VS Code
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
-
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
-
-1. Keep this query key; you'll need to use it in the next section. The query key can query your index.
-
-## Add configuration settings in Azure portal
-
-The Azure Function app won't return search data until the search secrets are in settings.
-
-1. Select **Azure** from the Activity Bar.
-1. Right-click on your Static web app resource then select **Open in Portal**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Right-click on your Python Static web app resource then select Open in Portal.":::
-
-1. Select **Configuration** then select **+ Add**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Configuration then select Add for your Python app.":::
-
-1. Add each of the following settings:
-
- |Setting|Your Search resource value|
- |--|--|
- |SearchApiKey|Your search query key|
- |SearchServiceName|Your search resource name|
- |SearchIndexName|`good-books`|
- |SearchFacets|`authors*,language_code`|
-
- Azure Cognitive Search requires different syntax for filtering collections than it does for strings. For the authors* facet, adding a * after a field name denotes that the field is of type Collection(Edm.String). This allows the Azure Function to add filters correctly to queries.
-
-1. Select **Save** to save the settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Save to save the settings.":::
-
-1. Return to VS Code.
-1. Refresh your static web app to see the static web app's application settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/visual-studio-code-extension-fresh-resource.png" alt-text="Refresh your Static web app to see the Static web app's application settings.":::
-
-## Use search in your Static web app
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side Bar, **right-click on your Azure subscription** under the `Static web apps` area and find the static web app you created for this tutorial.
-1. Right-click your static web app name and select **Browse site**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
-
-1. Select **Open** in the pop-up dialog.
-1. In the website search bar, enter a search query such as `code` _slowly_, so the suggest feature can suggest book titles. Select a suggestion or continue entering your own query. Press Enter when you've completed your search query.
-1. Review the results then select one of the books to see more details.
-
-## Clean up resources
-
-To clean up the resources created in this tutorial, delete the resource group.
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
-1. Right-click the resource group name then select **Delete**.
- This deletes both the Search and Static web app resources.
-1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
- ## Next steps
search Tutorial Python Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-overview.md
Previously updated : 08/30/2022 Last updated : 11/02/2022 ms.devlang: python
ms.devlang: python
This tutorial builds a website to search through a catalog of books, then deploys the website to an Azure Static Web App. The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## What does the sample do?
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses your search index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all of that book's details stored in the search index.
--
-The search experience includes:
-
-* Search – provides search functionality for the application.
-* Suggest – provides suggestions as the user is typing in the search bar.
-* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4) includes the following:
|App|Purpose|GitHub<br>Repository<br>Location| |--|--|--|
-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/src)|
-|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using Python SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/src)|
-|Bulk insert|Python file to create the index and add documents to it.|[/search-website/bulk-upload](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/bulk-upload)|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/client)|
+|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using Python SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/api)|
+|Bulk insert|Python file to create the index and add documents to it.|[/search-website-functions-v4/bulk-upload](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/bulk-upload)|
## Set up your development environment
Install the following for your local development environment.
- [Python 3.9](https://www.python.org/downloads/) - [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
- - [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+ - [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps) - Optional: - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash).
Forking the sample repository is critical to be able to deploy the static web ap
## Create a resource group for your Azure resources
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In Resources, select Add (**+**), and then select **Create Resource Group**.
-
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
-1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
-1. Select a location close to you.
-1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
-
- Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
## Next steps
search Tutorial Python Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-search-query-integration.md
Previously updated : 11/17/2021 Last updated : 11/02/2022 ms.devlang: python # 4 - Python Search integration cheat sheet
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your Python app, this article explains what you need to know.
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your Python app, this article explains what you need to know.
The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books) ## Azure SDK azure-search-documents
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
The Azure Function app settings environment variables are pulled in from a file, `__init__.py`, shared between the three API functions. ## Azure Function: Search the catalog
-The Search [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Search/__init__.py) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The Search [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Search/__init__.py) takes a search term and searches across the documents in the Search Index, returning a list of matches.
-Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Search/function.json) bindings.
+Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Search/function.json) bindings.
The Azure Function pulls in the search configuration information, and fulfills the query. ## Client: Search from the catalog Call the Azure Function in the React client with the following code. ## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Suggest/__init__.py) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Suggest/__init__.py) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
-The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/bulk-upload/good-books-index.json) used during bulk upload.
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/bulk-upload/good-books-index.json) used during bulk upload.
-Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Suggest/function.json) bindings.
+Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Suggest/function.json) bindings.
## Client: Suggestions from the catalog
-The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `client\src\components\SearchBar\SearchBar.js` as part of component initialization:
## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Lookup/__init__.py) takes a ID and returns the document object from the Search Index.
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Lookup/__init__.py) takes an ID and returns the document object from the Search Index.
-Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Lookup/function.json) bindings.
+Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Lookup/function.json) bindings.
## Client: Get specific document
-This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization:
+This function API is called in the React app at `client\src\pages\Details\Detail.js` as part of component initialization:
## Next steps
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Month | Feature | Description | |-||-|
-| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries. Public preview in the portal and preview REST APIs.|
+| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries and in the Azure portal.|
| November | [Azure Files indexer (preview)](./search-file-storage-integration.md) | Public preview in the portal and preview REST APIs.| | July | [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Public preview announcement. | | July | [Role-based access control for data plane (preview)](search-security-rbac.md) | Public preview announcement. |
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Your machine must meet the following requirements:
- Your Linux machine must have a minimum of **4 CPU cores and 8 GB RAM**. > [!NOTE]
- > - A single log forwarder machine using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
+ > - A single log forwarder machine with the above hardware configuration and using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
- **Operating system**
sentinel Create Nrt Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md
Title: Work with near-real-time (NRT) detection analytics rules in Microsoft Sen
description: This article explains how to view and create near-real-time (NRT) detection analytics rules in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 11/02/2022 - # Work with near-real-time (NRT) detection analytics rules in Microsoft Sentinel
You create NRT rules the same way you create regular [scheduled-query analytics
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-1. Select **Create** from the button bar, then **NRT query rule** from the drop-down list.
+1. Select **Create** from the button bar, then **NRT query rule (preview)** from the drop-down list.
- :::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Create a new NRT rule.":::
+ :::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Screenshot shows how to create a new NRT rule." lightbox="media/create-nrt-rules/create-nrt-rule.png":::
1. Follow the instructions of the [**analytics rule wizard**](detect-threats-custom.md). The configuration of NRT rules is in most ways the same as that of scheduled analytics rules.
- - You can refer to [**watchlists**](watchlists.md) and [**threat intelligence feeds**](understand-threat-intelligence.md) in your query logic.
+ - You can refer to [**watchlists**](watchlists.md) in your query logic.
- You can use all of the alert enrichment methods: [**entity mapping**](map-data-fields-to-entities.md), [**custom details**](surface-custom-details-in-alerts.md), and [**alert details**](customize-alert-details.md).
You create NRT rules the same way you create regular [scheduled-query analytics
In this document, you learned how to create near-real-time (NRT) analytics rules in Microsoft Sentinel. -- Learn more about about [near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md).
+- Learn more about [near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md).
- Explore other [analytics rule types](detect-threats-built-in.md).
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
Title: Detect threats quickly with near-real-time (NRT) analytics rules in Micro
description: This article explains how the new near-real-time (NRT) analytics rules can help you detect threats quickly in Microsoft Sentinel. Previously updated : 11/09/2021 Last updated : 11/02/2022 - # Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel
The following limitations currently govern the use of NRT rules:
1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
- 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
+ 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists.
1. You cannot use unions or joins.
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
- Title: Archive for What's new in Microsoft Sentinel
-description: A description of what's new and changed in Microsoft Sentinel from six months ago and earlier.
--- Previously updated : 08/31/2022--
-# Archive for What's new in Microsoft Sentinel
-
-The primary [What's new in Sentinel](whats-new.md) release notes page contains updates for the last six months, while this page contains older items.
-
-For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog/label-name/What's%20New).
-
-Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
--
-> [!TIP]
-> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Microsoft Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use.
->
-> You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
--
-## December 2021
--- [Apache Log4j Vulnerability Detection solution](#apache-log4j-vulnerability-detection-solution-public-preview)-- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)-- [Continuous Threat Monitoring for GitHub solution](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)--
-### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
-
-The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-deploy.md) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
-
-For more information, see [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](iot-advanced-threat-monitoring.md).
--
-### Ingest GitHub logs into your Microsoft Sentinel workspace (Public preview)
-
-Use the new [Continuous Threat Monitoring for GitHub](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftcorporation1622712991604.sentinel4github?tab=Overview) solution and [data connector](data-connectors-reference.md#github-preview) to ingest your GitHub logs into your Microsoft Sentinel workspace.
-
-The **Continuous Threat Monitoring for GitHub** solution includes a data connector, relevant analytics rules, and a workbook that you can use to visualize your log data.
-
-For example, view the number of users that were added or removed from GitHub repositories, how many repositories were created, forked, or cloned, in the selected time frame.
-
-> [!NOTE]
-> The **Continuous Threat Monitoring for GitHub** solution is supported for GitHub enterprise licenses only.
->
-
-For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md) and [instructions](data-connectors-reference.md#github-preview) for installing the GitHub data connector.
-
-### Apache Log4j Vulnerability Detection solution (Public preview)
-
-Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the Log4j 2 vulnerable component.
-
-The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
-
-For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
-
-## November 2021
--- [Incident advanced search now available in GA](#incident-advanced-search-now-available-in-ga)-- [Amazon Web Services S3 connector now available (Public preview)](#amazon-web-services-s3-connector-now-available-public-preview)-- [Windows Forwarded Events connector now available (Public preview)](#windows-forwarded-events-connector-now-available-public-preview)-- [Near-real-time (NRT) threat detection rules now available (Public preview)](#near-real-time-nrt-threat-detection-rules-now-available-public-preview)-- [Fusion engine now detects emerging and unknown threats (Public preview)](#fusion-engine-now-detects-emerging-and-unknown-threats-public-preview)-- [Fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)-- [Free trial updates](#free-trial-updates)-- [Content hub and new solutions (Public preview)](#content-hub-and-new-solutions-public-preview)-- [Continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)-- [Enriched threat intelligence with Geolocation and WhoIs data (Public preview)](#enriched-threat-intelligence-with-geolocation-and-whois-data-public-preview)-- [Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)](#use-notebooks-with-azure-synapse-analytics-in-microsoft-sentinel-public-preview)-- [Enhanced Notebooks area in Microsoft Sentinel](#enhanced-notebooks-area-in-microsoft-sentinel)-- [Microsoft Sentinel renaming](#microsoft-sentinel-renaming)-- [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-microsoft-sentinel)-
-### Incident advanced search now available in GA
-
-Searching for incidents using the advanced search functionality is now generally available.
-
-The advanced incident search provides the ability to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Amazon Web Services S3 connector now available (Public preview)
-
-You can now connect Microsoft Sentinel to your Amazon Web Services (AWS) S3 storage bucket, in order to ingest logs from a variety of AWS services.
-
-For now, you can use this connection to ingest VPC Flow Logs and GuardDuty findings, as well as AWS CloudTrail.
-
-For more information, see [Connect Microsoft Sentinel to S3 Buckets to get Amazon Web Services (AWS) data](connect-aws.md).
-
-### Windows Forwarded Events connector now available (Public preview)
-
-You can now stream event logs from Windows Servers connected to your Microsoft Sentinel workspace using Windows Event Collection / Windows Event Forwarding (WEC / WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
--- **Scalability:** If you've enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.--- **Speed:** The AMA can send data at an improved rate of 5 K EPS, allowing for faster data refresh.--- **Efficiency:** The AMA allows you to design complex Data Collection Rules (DCR) to filter the logs at their source, choosing the exact events to stream to your workspace. DCRs help lower your network traffic and your ingestion costs by leaving out undesired events.--- **Coverage:** WEC / WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.-
-We recommend using this connector with the [Microsoft Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
-
-Learn more about the [Windows Forwarded Events connector](data-connectors-reference.md#windows-forwarded-events-preview).
-
-### Near-real-time (NRT) threat detection rules now available (Public preview)
-
-When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection - closer to that of an on-premises SIEM - and the ability to shorten response times in specific scenarios.
-
-Microsoft SentinelΓÇÖs [near-real-time analytics rules](detect-threats-built-in.md#nrt) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.
-
-Learn more about [NRT rules](near-real-time-rules.md) and [how to use them](create-nrt-rules.md).
-
-### Fusion engine now detects emerging and unknown threats (Public preview)
-
-In addition to detecting attacks based on [predefined scenarios](fusion-scenario-reference.md), Microsoft Sentinel's ML-powered Fusion engine can help you find the emerging and unknown threats in your environment by applying extended ML analysis and by correlating a broader scope of anomalous signals, while keeping the alert fatigue low.
-
-The Fusion engine's ML algorithms constantly learn from existing attacks and apply analysis based on how security analysts think. It can therefore discover previously undetected threats from millions of anomalous behaviors across the kill-chain throughout your environment, which helps you stay one step ahead of the attackers.
-
-Learn more about [Fusion for emerging threats](fusion.md#fusion-for-emerging-threats).
-
-Also, the [Fusion analytics rule is now more configurable](configure-fusion-rules.md), reflecting its increased functionality.
-
-### Get fine-tuning recommendations for your analytics rules (Public preview)
-
-Fine-tuning threat detection rules in your SIEM can be a difficult, delicate, and continuous process of balancing between maximizing your threat detection coverage and minimizing false positive rates. Microsoft Sentinel simplifies and streamlines this process by using machine learning to analyze billions of signals from your data sources, as well as your responses to incidents over time. It deduces patterns and provides actionable recommendations and insights that can significantly lower your tuning overhead, letting you focus on detecting and responding to actual threats.
-
-[Tuning recommendations and insights](detection-tuning.md) are now built in to your analytics rules.
-
-### Free trial updates
-
-Microsoft Sentinel's free trial continues to support new or existing Log Analytics workspaces at no additional cost for the first 31 days.
-
-We're evolving our free trial experience to include the following updates:
-
-- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
-
- Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.
-
-- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no additional cost. Existing workspaces include any workspaces created more than three days ago.
-
- Only the Microsoft Sentinel charges are waived during the 31-day trial period.
-
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
-
-> [!TIP]
-> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
->
-
-For more information, see [Plan and manage costs for Microsoft Sentinel](billing.md).
-
-### Content hub and new solutions (Public preview)
-
-Microsoft Sentinel now provides a **Content hub**, a centralized location to find and deploy Microsoft Sentinel out-of-the-box (built-in) content and solutions to your Microsoft Sentinel workspace. Find the content you need by filtering for content type, support models, categories and more, or use the powerful text search.
-
-Under **Content management**, select **Content hub**. Select a solution to view more details on the right, and then click **Install** to install it in your workspace.
--
-The following list includes highlights of new, out-of-the-box solutions added to the Content hub:
-
- :::column span="":::
- - Microsoft Sentinel Training Lab
- - Cisco ASA
- - Cisco Duo Security
- - Cisco Meraki
- - Cisco StealthWatch
- - Digital Guardian
 - Dynamics 365
- - GCP Cloud DNS
- :::column-end:::
- :::column span="":::
- - GCP CloudMonitor
- - GCP Identity and Access Management
- - FalconForce
- - FireEye NX
- - Flare Systems Firework
- - Forescout
 - Fortinet FortiGate
 - Imperva Cloud WAF
- :::column-end:::
- :::column span="":::
- - Insider Risk Management (IRM)
- - IronNet CyberSecurity Iron Defense
- - Lookout
- - McAfee Network Security Platform
- - Microsoft MITRE ATT&CK Solution for Cloud
- - Palo Alto PAN-OS
- :::column-end:::
- :::column span="":::
- - Rapid7 Nexpose / Insight VM
- - ReversingLabs
- - RSA SecurID
- - Semperis
- - Tenable Nessus Scanner
- - Vectra Stream
- - Zero Trust
- :::column-end:::
-
-For more information, see:
-
-- [Learn about Microsoft Sentinel solutions](sentinel-solutions.md)
-- [Discover and deploy Microsoft Sentinel solutions](sentinel-solutions-deploy.md)
-- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
-
-### Enable continuous deployment from your content repositories (Public preview)
-
-The new Microsoft Sentinel **Repositories** page provides the ability to manage and deploy your custom content from GitHub or Azure DevOps repositories, as an alternative to managing them in the Azure portal. This capability introduces a more streamlined and automated approach for managing and deploying content across Microsoft Sentinel workspaces.
-
-If you store your custom content in an external repository in order to maintain it outside of Microsoft Sentinel, now you can connect that repository to your Microsoft Sentinel workspace. Content you add, create, or edit in your repository is automatically deployed to your Microsoft Sentinel workspaces, and will be visible from the various Microsoft Sentinel galleries, such as the **Analytics**, **Hunting**, or **Workbooks** pages.
-
-For more information, see [Deploy custom content from your repository](ci-cd.md).
-
-### Enriched threat intelligence with Geolocation and WhoIs data (Public preview)
-
-Now, any threat intelligence data that you bring in to Microsoft Sentinel via data connectors and logic app playbooks, or create in Microsoft Sentinel, is automatically enriched with GeoLocation and WhoIs information.
-
-GeoLocation and WhoIs data can provide more context for investigations where the selected indicator of compromise (IOC) is found.
-
-For example, use GeoLocation data to find details like *Organization* or *Country* for the indicator, and WhoIs data to find details like *Registrar* and *Record creation* dates.
-
-You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of compromise that you've imported into Microsoft Sentinel. Details for the indicator are shown on the right, including any Geolocation and WhoIs data available.
-
-For example:
--
-> [!TIP]
-> The Geolocation and WhoIs information come from the Microsoft Threat Intelligence service, which you can also access via API. For more information, see [Enrich entities with geolocation data via API](geolocation-data-api.md).
->
-
-For more information, see:
-
-- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
-- [Understand threat intelligence integrations](threat-intelligence-integration.md)
-- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
-- [Connect threat intelligence platforms](connect-threat-intelligence-tip.md)
-
-### Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)
-
-Microsoft Sentinel now integrates Jupyter notebooks with Azure Synapse for large-scale security analytics scenarios.
-
-Until now, Jupyter notebooks in Microsoft Sentinel have been integrated with Azure Machine Learning. This functionality supports users who want to incorporate notebooks, popular open-source machine learning toolkits, and libraries such as TensorFlow, as well as their own custom models, into security workflows.
-
-The new Azure Synapse integration provides extra analytic horsepower, such as:
-
-- **Security big data analytics**, using a cost-optimized, fully managed Azure Synapse Apache Spark compute pool.
-
-- **Cost-effective Data Lake access** to build analytics on historical data via Azure Data Lake Storage Gen2, which is a set of capabilities dedicated to big data analytics, built on top of Azure Blob Storage.
-
-- **Flexibility to integrate data sources** into security operation workflows from multiple sources and formats.
-
-- **PySpark, a Python-based API** for using the Spark framework in combination with Python, reducing the need to learn a new programming language if you're already familiar with Python.
-
-To support this integration, we added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
-
-For more information, see [Integrate notebooks with Azure Synapse](notebooks-with-synapse.md).
-
-### Enhanced Notebooks area in Microsoft Sentinel
-
-The **Notebooks** area in Microsoft Sentinel also now has an **Overview** tab, where you can find basic information about notebooks, and a new **Notebook types** column in the **Templates** tab to indicate the type of each notebook displayed. For example, notebooks might have types of **Getting started**, **Configuration**, **Hunting**, and now **Synapse**.
-
-For example:
--
-For more information, see [Use Jupyter notebooks to hunt for security threats](notebooks.md).
-
-### Microsoft Sentinel renaming
-
-Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see updates rolled out across the portal, documentation, and other resources in parallel.
-
-Earlier entries in this article and the older [Archive for What's new in Sentinel](whats-new-archive.md) continue to use the name *Azure* Sentinel, as that was the service name when those features were new.
-
-For more information, see our [blog on recent security enhancements](https://aka.ms/secblg11).
-
-### Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel
-
-The new **Microsoft Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
-
-Once deployed, any access or operation with the honeytoken keys and secrets generate incidents that you can investigate in Microsoft Sentinel.
-
-Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
-
-The **Microsoft Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
-
-For more information, see [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
-
-## October 2021
-
-- [Windows Security Events connector using Azure Monitor Agent now in GA](#windows-security-events-connector-using-azure-monitor-agent-now-in-ga)
-- [Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)](#defender-for-office-365-events-now-available-in-the-microsoft-365-defender-connector-public-preview)
-- [Playbook templates and gallery now available (Public preview)](#playbook-templates-and-gallery-now-available-public-preview)
-- [Template versioning for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
-- [DHCP normalization schema (Public preview)](#dhcp-normalization-schema-public-preview)
-
-### Windows Security Events connector using Azure Monitor Agent now in GA
-
-The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available. For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA).
-
-### Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)
-
-In addition to those from Microsoft Defender for Endpoint, you can now ingest raw [advanced hunting events](/microsoft-365/security/defender/advanced-hunting-overview) from [Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/overview) through the [Microsoft 365 Defender connector](connect-microsoft-365-defender.md). [Learn more](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
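-
-Once collected, these raw events land in dedicated workspace tables such as **EmailEvents**. A minimal sketch of querying them (the threat-type filter is illustrative):
-
 ```kusto
 // Top sender domains among messages Defender for Office 365 flagged as phishing
 EmailEvents
 | where TimeGenerated > ago(7d)
 | where ThreatTypes has "Phish"
 | summarize Messages = count() by SenderFromDomain
 | top 10 by Messages
 ```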
-
-### Playbook templates and gallery now available (Public preview)
-
-A playbook template is a pre-built, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
-
-Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of a [Microsoft Sentinel solution](sentinel-solutions.md), or in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
-
-For more information, see [Create and customize playbooks from built-in templates](use-playbook-templates.md).
-
-### Manage template versions for your scheduled analytics rules (Public preview)
-
-When you create analytics rules from [built-in Microsoft Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
-
-However, rules created from templates ***do*** remember which templates they came from, which allows you two advantages:
-
-- If you made changes to a rule when creating it from a template (or at any time after that), you can always revert the rule back to its original version (as a copy of the template).
-
-- If a template is updated, you'll be notified and you can choose to update your rules to the new version of their templates, or leave them as they are.
-
-[Learn how to manage these tasks](manage-analytics-rule-templates.md), and what to keep in mind. These procedures apply to any [Scheduled](detect-threats-built-in.md#scheduled) analytics rules created from templates.
-
-### DHCP normalization schema (Public preview)
-
-The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which is used to describe events reported by a DHCP server and is used by Microsoft Sentinel to enable source-agnostic analytics.
-
-Events described in the DHCP normalization schema include serving requests for DHCP IP address leases from client systems and updating a DNS server with the leases granted.
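-
-Content written against the schema queries a normalizing parser rather than source tables directly. A minimal sketch, assuming an ASIM DHCP parser is deployed to the workspace as imDhcp (parser names and availability vary by deployment), and using the schema's common fields:
-
 ```kusto
 // Clients with failed DHCP requests, regardless of which DHCP server product reported them
 imDhcp
 | where TimeGenerated > ago(1d)
 | where EventResult == "Failure"
 | summarize FailedRequests = count() by SrcIpAddr, SrcHostname
 ```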
-
-For more information, see:
-
-- [Microsoft Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
-- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md)
-
-## September 2021
-
-- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
-- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
-- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
-
-### Data connector health enhancements (Public preview)
-
-Microsoft Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Microsoft Sentinel health feature](monitor-sentinel-health.md) in your Microsoft Sentinel workspace, when the first success or failure health event is generated.
-
-For more information, see [Monitor the health of your data connectors with this Microsoft Sentinel workbook](monitor-data-connector-health.md).
-
-> [!NOTE]
-> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
->
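-
-Because *SentinelHealth* is an ordinary Log Analytics table, you can also query it directly, for example to spot failing connectors. A minimal sketch (column names as documented for the health table - verify them against your workspace):
-
 ```kusto
 // Recent health failures, grouped by the reporting resource and operation
 SentinelHealth
 | where TimeGenerated > ago(3d)
 | where Status == "Failure"
 | summarize Failures = count() by SentinelResourceName, OperationName
 | order by Failures desc
 ```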
--
-### New in docs: scaling data connector documentation
-
-As we continue to add more and more built-in data connectors for Microsoft Sentinel, we reorganized our data connector documentation to reflect this scaling.
-
-For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
-
-Check the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
-
-For more information, see:
-
-- **Conceptual information**: [Connect data sources](connect-data-sources.md)
-
-- **Generic how-to articles**:
-
- - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
- - [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
- - [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md)
- - [Collect data from Linux-based sources using Syslog](connect-syslog.md)
- - [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md)
- - [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md)
- - [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md)
-
-### Azure Storage account connector changes
-
-Due to some changes made within the Azure Storage account resource configuration itself, the connector also needs to be reconfigured.
-The storage account (parent) resource has within it other (child) resources for each type of storage: files, tables, queues, and blobs.
-
-When configuring diagnostics for a storage account, you must select and configure, in turn:
-- The parent account resource, exporting the **Transaction** metric.
-- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).
-
-You'll only see the storage types that you actually have defined resources for.
--
-## August 2021
-
-- [Advanced incident search (Public preview)](#advanced-incident-search-public-preview)
-- [Fusion detection for Ransomware (Public preview)](#fusion-detection-for-ransomware-public-preview)
-- [Watchlist templates for UEBA data](#watchlist-templates-for-ueba-data-public-preview)
-- [File event normalization schema (Public preview)](#file-event-normalization-schema-public-preview)
-- [New in docs: Best practice guidance](#new-in-docs-best-practice-guidance)
-
-### Advanced incident search (Public preview)
-
-By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Microsoft Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For example:
--
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Fusion detection for Ransomware (Public preview)
-
-Microsoft Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
-
-Incidents are generated for alerts that are possibly associated with ransomware activities, when they occur during a specific time frame and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host or device and to evade detection.
-
-Supported data connectors include:
-
-- [Azure Defender (Azure Security Center)](connect-defender-for-cloud.md)
-- [Microsoft Defender for Endpoint](./data-connectors-reference.md#microsoft-defender-for-endpoint)
-- [Microsoft Defender for Identity](./data-connectors-reference.md#microsoft-defender-for-identity)
-- [Microsoft Cloud App Security](./data-connectors-reference.md#microsoft-defender-for-cloud-apps)
-- [Microsoft Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
-
-For more information, see [Multiple alerts possibly related to Ransomware activity detected](fusion.md#fusion-for-ransomware).
-
-### Watchlist templates for UEBA data (Public preview)
-
-Microsoft Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
-
-After UEBA watchlists are populated with data, you can correlate that data with analytics rules, view it in the entity pages and investigation graphs as insights, create custom uses such as to track VIP or sensitive users, and more.
-
-Watchlist templates currently include:
-
-- **VIP Users**. A list of user accounts of employees that have high impact value in the organization.
-- **Terminated Employees**. A list of user accounts of employees that have been, or are about to be, terminated.
-- **Service Accounts**. A list of service accounts and their owners.
-- **Identity Correlation**. A list of related user accounts that belong to the same person.
-- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
-- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
-
-For more information, see [Create watchlists in Microsoft Sentinel](watchlists-create.md) and [Built-in watchlist schemas](watchlist-schemas.md).
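-
-Once a template-based watchlist is populated, you can query it with the `_GetWatchlist()` function and join it against your logs. A minimal sketch using the **VIP Users** template - the watchlist alias ('VIPUsers') and the use of `SearchKey` for the UPN are assumptions that depend on how you saved the watchlist:
-
 ```kusto
 // Surface recent sign-ins by users on the VIP Users watchlist
 let vips = _GetWatchlist('VIPUsers') | project UserPrincipalName = tostring(SearchKey);
 SigninLogs
 | where UserPrincipalName in (vips)
 | project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
 ```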
---
-### File Event normalization schema (Public preview)
-
-The Microsoft Sentinel Information Model (ASIM) now supports a File Event normalization schema, which is used to describe file activity, such as creating, modifying, or deleting files or documents. File events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
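-
-As with the other ASIM schemas, content targets a normalizing parser rather than individual source tables. A minimal sketch, assuming the workspace-deployed imFileEvent unifying parser (availability is deployment-dependent):
-
 ```kusto
 // Accounts deleting an unusually large number of files in an hour, across all file activity sources
 imFileEvent
 | where EventType == "FileDeleted"
 | summarize Deletions = count() by ActorUsername, bin(TimeGenerated, 1h)
 | where Deletions > 100   // placeholder threshold
 ```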
-
-For more information, see:
-
-- [Microsoft Sentinel File Event normalization schema reference (Public preview)](file-event-normalization-schema.md)
-- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md)
-
-### New in docs: Best practice guidance
-
-In response to multiple requests from customers and our support teams, we added a series of best practice guidance to our documentation.
-
-For more information, see:
-
-- [Prerequisites for deploying Microsoft Sentinel](prerequisites.md)
-- [Best practices for Microsoft Sentinel](best-practices.md)
-- [Microsoft Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)
-- [Design your Microsoft Sentinel workspace architecture](design-your-workspace-architecture.md)
-- [Microsoft Sentinel sample workspace designs](sample-workspace-designs.md)
-- [Data collection best practices](best-practices-data.md)
-
-> [!TIP]
-> You can find more guidance added across our documentation in relevant conceptual and how-to articles. For more information, see [Best practice references](best-practices.md#best-practice-references).
->
-
-## July 2021
-
-- [Microsoft Threat Intelligence Matching Analytics (Public preview)](#microsoft-threat-intelligence-matching-analytics-public-preview)
-- [Use Azure AD data with Microsoft Sentinel's IdentityInfo table (Public preview)](#use-azure-ad-data-with-microsoft-sentinels-identityinfo-table-public-preview)
-- [Enrich Entities with geolocation data via API (Public preview)](#enrich-entities-with-geolocation-data-via-api-public-preview)
-- [Support for ADX cross-resource queries (Public preview)](#support-for-adx-cross-resource-queries-public-preview)
-- [Watchlists are in general availability](#watchlists-are-in-general-availability)
-- [Support for data residency in more geos](#support-for-data-residency-in-more-geos)
-- [Bidirectional sync in Azure Defender connector (Public preview)](#bidirectional-sync-in-azure-defender-connector-public-preview)
-
-### Microsoft Threat Intelligence Matching Analytics (Public preview)
-
-Microsoft Sentinel now provides the built-in **Microsoft Threat Intelligence Matching Analytics** rule, which matches Microsoft-generated threat intelligence data with your logs. This rule generates high-fidelity alerts and incidents, with appropriate severities based on the context of the logs detected. After a match is detected, the indicator is also published to your Microsoft Sentinel threat intelligence repository.
-
-The **Microsoft Threat Intelligence Matching Analytics** rule currently matches domain indicators against the following log sources:
-
-- [CEF](connect-common-event-format.md)
-- [DNS](./data-connectors-reference.md#windows-dns-server-preview)
-- [Syslog](connect-syslog.md)
-
-For more information, see [Detect threats using matching analytics (Public preview)](use-matching-analytics-to-detect-threats.md).
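-
-You can spot-check the same kind of matching manually. A minimal sketch that compares active domain indicators in your TI repository against Windows DNS query logs (table and column availability depends on your connectors):
-
 ```kusto
 // Cross-check active domain IOCs against DNS queries
 let iocDomains = ThreatIntelligenceIndicator
     | where Active == true and isnotempty(DomainName)
     | project DomainName = tolower(DomainName);
 DnsEvents
 | where tolower(Name) in (iocDomains)
 | project TimeGenerated, Computer, ClientIP, Name
 ```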
-
-### Use Azure AD data with Microsoft Sentinel's IdentityInfo table (Public preview)
-
-As attackers often use the organization's own user and service accounts, data about those accounts, including their identification and privileges, is crucial for analysts in the course of an investigation.
-
-Now, having [UEBA enabled](enable-entity-behavior-analytics.md) in your Microsoft Sentinel workspace also synchronizes Azure AD data into the new **IdentityInfo** table in Log Analytics. Synchronization between Azure AD and the **IdentityInfo** table creates a snapshot of your user profile data that includes user metadata, group information, and the Azure AD roles assigned to each user.
-
-Use the **IdentityInfo** table during investigations and when fine-tuning analytics rules for your organization to reduce false positives.
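-
-For example, you can use the table's role data to separate privileged-account activity from the rest. A minimal sketch (the role filter is illustrative):
-
 ```kusto
 // Successful sign-ins by users who hold the Global Administrator role
 let admins = IdentityInfo
     | summarize arg_max(TimeGenerated, *) by AccountUPN
     | where AssignedRoles has "Global Administrator"
     | project AccountUPN;
 SigninLogs
 | where ResultType == "0"        // successful sign-ins only
 | where UserPrincipalName in (admins)
 | project TimeGenerated, UserPrincipalName, IPAddress, AppDisplayName
 ```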
-
-For more information, see [IdentityInfo table](ueba-reference.md#identityinfo-table) in the UEBA enrichments reference and [Use UEBA data to analyze false positives](investigate-with-ueba.md#use-ueba-data-to-analyze-false-positives).
-
-### Enrich entities with geolocation data via API (Public preview)
-
-Microsoft Sentinel now offers an API to enrich your data with geolocation information. Geolocation data can then be used to analyze and investigate security incidents.
-
-For more information, see [Enrich entities in Microsoft Sentinel with geolocation data via REST API (Public preview)](geolocation-data-api.md) and [Classify and analyze data using entities in Microsoft Sentinel](entities.md).
--
-### Support for ADX cross-resource queries (Public preview)
-
-The hunting experience in Microsoft Sentinel now supports [ADX cross-resource queries](../azure-monitor/logs/azure-monitor-data-explorer-proxy.md#cross-query-your-log-analytics-or-application-insights-resources-and-azure-data-explorer).
-
-Although Log Analytics remains the primary data storage location for performing analysis with Microsoft Sentinel, there are cases where ADX is required to store data due to cost, retention periods, or other factors. This capability enables customers to hunt over a wider set of data and view the results in the [Microsoft Sentinel hunting experiences](hunting.md), including hunting queries, [livestream](livestream.md), and the Log Analytics search page.
-
-To query data stored in ADX clusters, use the `adx()` function to specify the ADX cluster, database name, and desired table. You can then query the output as you would any other table. For more information, see the pages linked above.
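-
-For example, a hunting query can reach into an ADX cluster alongside workspace tables. The cluster URI, database, and table below are placeholders:
-
 ```kusto
 // Hunt over archived sign-in data stored in ADX, directly from Microsoft Sentinel
 adx('https://mycluster.westeurope.kusto.windows.net/SecurityArchive').SigninLogsArchive
 | where TimeGenerated between (ago(180d) .. ago(90d))
 | summarize SignIns = count() by UserPrincipalName
 | top 20 by SignIns
 ```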
----
-### Watchlists are in general availability
-
-The [watchlists](watchlists.md) feature is now generally available. Use watchlists to enrich alerts with business data, to create allowlists or blocklists against which to check access events, and to help investigate threats and reduce alert fatigue.
-
-### Support for data residency in more geos
-
-Microsoft Sentinel now supports full data residency in the following additional geos:
-
-Brazil, Norway, South Africa, Korea, Germany, United Arab Emirates (UAE), and Switzerland.
-
-See the [complete list of supported geos](quickstart-onboard.md#geographical-availability-and-data-residency) for data residency.
-
-### Bidirectional sync in Azure Defender connector (Public preview)
-
-The Azure Defender connector now supports bi-directional syncing of alerts' status between Defender and Microsoft Sentinel. When you close a Sentinel incident containing a Defender alert, the alert will automatically be closed in the Defender portal as well.
-
-See this [complete description of the updated Azure Defender connector](connect-defender-for-cloud.md).
-
-## June 2021
-
-- [Upgrades for normalization and the Microsoft Sentinel Information Model](#upgrades-for-normalization-and-the-microsoft-sentinel-information-model)
-- [Updated service-to-service connectors](#updated-service-to-service-connectors)
-- [Export and import analytics rules (Public preview)](#export-and-import-analytics-rules-public-preview)
-- [Alert enrichment: alert details (Public preview)](#alert-enrichment-alert-details-public-preview)
-- [More help for playbooks!](#more-help-for-playbooks)
-- [New documentation reorganization](#new-documentation-reorganization)
-
-### Upgrades for normalization and the Microsoft Sentinel Information Model
-
-The Microsoft Sentinel Information Model enables you to use and create source-agnostic content, simplifying your analysis of the data in your Microsoft Sentinel workspace.
-
-In this month's update, we've enhanced our normalization documentation, providing new levels of detail and full DNS, process event, and authentication normalization schemas.
-
-For more information, see:
-
-- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md) (updated)
-- [Microsoft Sentinel Authentication normalization schema reference (Public preview)](authentication-normalization-schema.md) (new!)
-- [Microsoft Sentinel data normalization schema reference](./network-normalization-schema.md)
-- [Microsoft Sentinel DNS normalization schema reference (Public preview)](dns-normalization-schema.md) (new!)
-- [Microsoft Sentinel Process Event normalization schema reference (Public preview)](process-events-normalization-schema.md) (new!)
-- [Microsoft Sentinel Registry Event normalization schema reference (Public preview)](registry-event-normalization-schema.md) (new!)
-
-### Updated service-to-service connectors
-
-Two of our most-used connectors have been the beneficiaries of major upgrades.
-
-- The [Windows security events connector (Public preview)](connect-windows-security-events.md) is now based on the new Azure Monitor Agent (AMA), allowing you far more flexibility in choosing which data to ingest, and giving you maximum visibility at minimum cost.
-
-- The [Azure activity logs connector](./data-connectors-reference.md#azure-activity) is now based on the diagnostics settings pipeline, giving you more complete data, greatly reduced ingestion lag, and better performance and reliability.
-
-The upgrades are not automatic. Users of these connectors are encouraged to enable the new versions.
-
-### Export and import analytics rules (Public preview)
-
-You can now export your analytics rules to JSON-format Azure Resource Manager (ARM) template files, and import rules from these files, as part of managing and controlling your Microsoft Sentinel deployments as code. Any type of [analytics rule](detect-threats-built-in.md) - not just **Scheduled** - can be exported to an ARM template. The template file includes all the rule's information, from its query to its assigned MITRE ATT&CK tactics.
-
-For more information, see [Export and import analytics rules to and from ARM templates](import-export-analytics-rules.md).
-
-### Alert enrichment: alert details (Public preview)
-
-In addition to enriching your alert content with entity mapping and custom details, you can now custom-tailor the way alerts - and by extension, incidents - are presented and displayed, based on their particular content. Like the other alert enrichment features, this is configurable in the [analytics rule wizard](detect-threats-custom.md).
-
-For more information, see [Customize alert details in Microsoft Sentinel](customize-alert-details.md).
--
-### More help for playbooks!
-
-Two new documents can help you get started or get more comfortable with creating and working with playbooks.
-- [Authenticate playbooks to Microsoft Sentinel](authenticate-playbooks-to-sentinel.md) helps you understand the different authentication methods by which Logic Apps-based playbooks can connect to and access information in Microsoft Sentinel, and when it's appropriate to use each one.
-- [Use triggers and actions in playbooks](playbook-triggers-actions.md) explains the difference between the **incident trigger** and the **alert trigger** and which to use when, and shows you some of the different actions you can take in playbooks in response to incidents, including how to access the information in [custom details](playbook-triggers-actions.md#work-with-custom-details).
-
-Playbook documentation also explicitly addresses the multi-tenant MSSP scenario.
-
-### New documentation reorganization
-
-This month we've reorganized our [Microsoft Sentinel documentation](index.yml), restructuring into intuitive categories that follow common customer journeys. Use the filtered docs search and updated landing page to navigate through Microsoft Sentinel docs.
--
-## May 2021
-
-- [Microsoft Sentinel PowerShell module](#microsoft-sentinel-powershell-module)
-- [Alert grouping enhancements](#alert-grouping-enhancements)
-- [Microsoft Sentinel solutions (Public preview)](#microsoft-sentinel-solutions-public-preview)
-- [Continuous Threat Monitoring for SAP solution (Public preview)](#continuous-threat-monitoring-for-sap-solution-public-preview)
-- [Threat intelligence integrations (Public preview)](#threat-intelligence-integrations-public-preview)
-- [Fusion over scheduled alerts (Public preview)](#fusion-over-scheduled-alerts-public-preview)
-- [SOC-ML anomalies (Public preview)](#soc-ml-anomalies-public-preview)
-- [IP Entity page (Public preview)](#ip-entity-page-public-preview)
-- [Activity customization (Public preview)](#activity-customization-public-preview)
-- [Hunting dashboard (Public preview)](#hunting-dashboard-public-preview)
-- [Incident teams - collaborate in Microsoft Teams (Public preview)](#microsoft-sentinel-incident-teamcollaborate-in-microsoft-teams-public-preview)
-- [Zero Trust (TIC3.0) workbook](#zero-trust-tic30-workbook)
-
-### Microsoft Sentinel PowerShell module
-
-The official Microsoft Sentinel PowerShell module to automate daily operational tasks has been released as GA!
-
-You can download it here: [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.SecurityInsights/).
-
-For more information, see the PowerShell documentation: [Az.SecurityInsights](/powershell/module/az.securityinsights/)
-
-### Alert grouping enhancements
-
-Now you can configure your analytics rule to group alerts into a single incident, not only when they match a specific entity type, but also when they match a specific alert name, severity, or other custom details for a configured entity.
-
-In the **Incident settings** tab of the analytics rule wizard, turn on alert grouping, and then select the **Group alerts into a single incident if the selected entity types and details match** option.
-
-Then, select your entity type and the relevant details you want to match:
--
-For more information, see [Alert grouping](detect-threats-custom.md#alert-grouping).
-
-### Microsoft Sentinel solutions (Public preview)
-
-Microsoft Sentinel now offers **packaged content** [solutions](sentinel-solutions-catalog.md) that include combinations of one or more data connectors, workbooks, analytics rules, playbooks, hunting queries, parsers, watchlists, and other components for Microsoft Sentinel.
-
-Solutions provide improved in-product discoverability, single-step deployment, and end-to-end product scenarios. For more information, see [Centrally discover and deploy built-in content and solutions](sentinel-solutions-deploy.md).
-
-### Continuous Threat Monitoring for SAP solution (Public preview)
-
-Microsoft Sentinel now includes the **Continuous Threat Monitoring for SAP** solution, enabling you to monitor SAP systems for sophisticated threats within the business and application layers.
-
-The SAP data connector streams 14 application logs from the entire SAP system landscape, and collects logs from both Advanced Business Application Programming (ABAP) via NetWeaver RFC calls and file storage data via OSSAP Control interface. The SAP data connector adds to Microsoft Sentinel's ability to monitor the SAP underlying infrastructure.
-
-To ingest SAP logs into Microsoft Sentinel, you must have the Microsoft Sentinel SAP data connector installed on your SAP environment. After the SAP data connector is deployed, deploy the rich SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
-
-For more information, see [Deploying SAP continuous threat monitoring](sap/deployment-overview.md).
-
-### Threat intelligence integrations (Public preview)
-
-Microsoft Sentinel gives you a few different ways to [use threat intelligence](./understand-threat-intelligence.md) feeds to enhance your security analysts' ability to detect and prioritize known threats.
-
-You can now use one of many newly available integrated threat intelligence platform (TIP) products, connect to TAXII servers to take advantage of any STIX-compatible threat intelligence source, and make use of any custom solutions that can communicate directly with the [Microsoft Graph Security tiIndicators API](/graph/api/resources/tiindicator).
-
-You can also connect to threat intelligence sources from playbooks, in order to enrich incidents with TI information that can help direct investigation and response actions.
-
-For more information, see [Threat intelligence integration in Microsoft Sentinel](threat-intelligence-integration.md).
-
-### Fusion over scheduled alerts (Public preview)
-
-The **Fusion** machine-learning correlation engine can now detect multi-stage attacks using alerts generated by a set of [scheduled analytics rules](detect-threats-custom.md) in its correlations, in addition to the alerts imported from other data sources.
-
-For more information, see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md).
-
-### SOC-ML anomalies (Public preview)
-
-Microsoft Sentinel's SOC-ML machine learning-based anomalies can identify unusual behavior that might otherwise evade detection.
-
-SOC-ML uses analytics rule templates that can be put to work right out of the box. While anomalies don't necessarily indicate malicious or even suspicious behavior by themselves, they can be used to improve the fidelity of detections, investigations, and threat hunting.
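-
-Anomalies produced by these rules surface in the **Anomalies** table, so you can fold them into hunting queries and detections like any other signal. A minimal sketch (the score threshold is illustrative - verify column names against your workspace):
-
 ```kusto
 // Recent high-scoring anomalies, grouped by the rule that produced them
 Anomalies
 | where TimeGenerated > ago(48h)
 | where Score >= 7
 | summarize Count = count() by RuleName
 | order by Count desc
 ```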
-
-For more information, see [Use SOC-ML anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md).
-
-### IP Entity page (Public preview)
-
-Microsoft Sentinel now supports the IP address entity, and you can now view IP entity information in the new IP entity page.
-
-Like the user and host entity pages, the IP page includes general information about the IP, a list of activities the IP has been found to be a part of, and more, giving you an ever-richer store of information to enhance your investigation of security incidents.
-
-For more information, see [Entity pages](entity-pages.md).
-
-### Activity customization (Public preview)
-
-Speaking of entity pages, you can now create custom activities for your entities that will be tracked and displayed on their respective entity pages, alongside the out-of-the-box activities you've seen there until now.
-
-For more information, see [Customize activities on entity page timelines](customize-entity-activities.md).
-
-### Hunting dashboard (Public preview)
-
-The **Hunting** blade has gotten a refresh. The new dashboard lets you run all your queries, or a selected subset, in a single click.
-
-Identify where to start hunting by looking at result count, spikes, or the change in result count over a 24-hour period. You can also sort and filter by favorites, data source, MITRE ATT&CK tactic and technique, results, or results delta. View the queries that do not yet have the necessary data sources connected, and get recommendations on how to enable these queries.
-
-For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
-
-### Microsoft Sentinel incident team - collaborate in Microsoft Teams (public preview)
-
-Microsoft Sentinel now supports a direct integration with Microsoft Teams, enabling you to collaborate seamlessly across the organization and with external stakeholders.
-
-Directly from the incident in Microsoft Sentinel, create a new *incident team* to use for central communication and coordination.
-
-Incident teams are especially helpful when used as a dedicated conference bridge for high-severity, ongoing incidents. Organizations that already use Microsoft Teams for communication and collaboration can use the Microsoft Sentinel integration to bring security data directly into their conversations and daily work.
-
-In Microsoft Teams, the new team's **Incident page** tab always has the most updated and recent data from Microsoft Sentinel, ensuring that your teams have the most relevant data right at hand.
-
-[ ![Incident page in Microsoft Teams.](media/collaborate-in-microsoft-teams/incident-in-teams.png) ](media/collaborate-in-microsoft-teams/incident-in-teams.png#lightbox)
-
-For more information, see [Collaborate in Microsoft Teams (Public preview)](collaborate-in-microsoft-teams.md).
-
-### Zero Trust (TIC3.0) workbook
-
-The new, Microsoft Sentinel Zero Trust (TIC3.0) workbook provides an automated visualization of [Zero Trust](/security/zero-trust/) principles, cross-walked to the [Trusted Internet Connections](https://www.cisa.gov/trusted-internet-connections) (TIC) framework.
-
-We know that compliance isn't just an annual requirement, and organizations must monitor configurations over time like a muscle. Microsoft Sentinel's Zero Trust workbook uses the full breadth of Microsoft security offerings across Azure, Office 365, Teams, Intune, Azure Virtual Desktop, and many more.
-
-[ ![Zero Trust workbook.](media/zero-trust-workbook.gif) ](media/zero-trust-workbook.gif#lightbox)
-
-**The Zero Trust workbook**:
-
-- Enables Implementers, SecOps Analysts, Assessors, Security and Compliance Decision Makers, MSSPs, and others to gain situational awareness for cloud workloads' security posture.
-- Features over 75 control cards, aligned to the TIC 3.0 security capabilities, with selectable GUI buttons for navigation.
-- Is designed to augment staffing through automation, artificial intelligence, machine learning, query/alerting generation, visualizations, tailored recommendations, and respective documentation references.
-
-For more information, see [Visualize and monitor your data](monitor-your-data.md).
-
-## April 2021
-
-- [Azure Policy-based data connectors](#azure-policy-based-data-connectors)
-- [Incident timeline (Public preview)](#incident-timeline-public-preview)
-
-### Azure Policy-based data connectors
-
-Azure Policy allows you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Microsoft Sentinel.
-
-Continuing our efforts to bring the power of [Azure Policy](../governance/policy/overview.md) to the task of data collection configuration, we are now offering another Azure Policy-enhanced data collector, for [Azure Storage account](./data-connectors-reference.md#azure-storage-account) resources, released to public preview.
-
-Also, two of our in-preview connectors, for [Azure Key Vault](./data-connectors-reference.md#azure-key-vault) and [Azure Kubernetes Service](./data-connectors-reference.md#azure-kubernetes-service-aks), have now been released to general availability (GA), joining our [Azure SQL Databases](./data-connectors-reference.md#azure-sql-databases) connector.
-
-### Incident timeline (Public preview)
-
-The first tab on an incident details page is now the **Timeline**, which shows a timeline of alerts and bookmarks in the incident. An incident's timeline can help you understand the incident better and reconstruct the timeline of attacker activity across the related alerts and bookmarks.
-
-- Select an item in the timeline to see its details, without leaving the incident context.
-- Filter the timeline content to show alerts or bookmarks only, or items of a specific severity or MITRE tactic.
-- Select the **System alert ID** link to view the entire record, or the **Events** link to see the related events in the **Logs** area.
-
-For example:
--
-For more information, see [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md).
---
-## March 2021
-
-- [Set workbooks to automatically refresh while in view mode](#set-workbooks-to-automatically-refresh-while-in-view-mode)
-- [New detections for Azure Firewall](#new-detections-for-azure-firewall)
-- [Automation rules and incident-triggered playbooks (Public preview)](#automation-rules-and-incident-triggered-playbooks-public-preview) (including all-new playbook documentation)
-- [New alert enrichments: enhanced entity mapping and custom details (Public preview)](#new-alert-enrichments-enhanced-entity-mapping-and-custom-details-public-preview)
-- [Print your Microsoft Sentinel workbooks or save as PDF](#print-your-microsoft-sentinel-workbooks-or-save-as-pdf)
-- [Incident filters and sort preferences now saved in your session (Public preview)](#incident-filters-and-sort-preferences-now-saved-in-your-session-public-preview)
-- [Microsoft 365 Defender incident integration (Public preview)](#microsoft-365-defender-incident-integration-public-preview)
-- [New Microsoft service connectors using Azure Policy](#new-microsoft-service-connectors-using-azure-policy)
-
-### Set workbooks to automatically refresh while in view mode
-
-Microsoft Sentinel users can now use the new [Azure Monitor ability](https://techcommunity.microsoft.com/t5/azure-monitor/azure-workbooks-set-it-to-auto-refresh/ba-p/2228555) to automatically refresh workbook data during a view session.
-
-In each workbook or workbook template, select :::image type="icon" source="media/whats-new/auto-refresh-workbook.png" border="false"::: **Auto refresh** to display your interval options. Select the option you want to use for the current view session, and select **Apply**.
-
-- Supported refresh intervals range from **5 minutes** to **1 day**.
-- By default, auto refresh is turned off. To optimize performance, auto refresh is also turned off each time you close a workbook, and does not run in the background. Turn auto refresh back on as needed the next time you open the workbook.
-- Auto refresh is paused while you're editing a workbook, and auto refresh intervals are restarted each time you switch back to view mode from edit mode.
-
- Intervals are also restarted if you manually refresh the workbook by selecting the :::image type="icon" source="media/whats-new/manual-refresh-button.png" border="false"::: **Refresh** button.
-
-For more information, see [Visualize and monitor your data](monitor-your-data.md) and the [Azure Monitor documentation](../azure-monitor/visualize/workbooks-overview.md).
-
-### New detections for Azure Firewall
-
-Several out-of-the-box detections for Azure Firewall have been added to the [Analytics](./understand-threat-intelligence.md) area in Microsoft Sentinel. These new detections allow security teams to get alerts if machines on the internal network attempt to query or connect to internet domain names or IP addresses that are associated with known IOCs, as defined in the detection rule query.
-
-The new detections include:
-
-- [Solorigate Network Beacon](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/Solorigate-Network-Beacon.yaml)
-- [Known GALLIUM domains and hashes](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/GalliumIOCs.yaml)
-- [Known IRIDIUM IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/IridiumIOCs.yaml)
-- [Known Phosphorus group domains/IP](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/PHOSPHORUSMarch2019IOCs.yaml)
-- [THALLIUM domains included in DCU takedown](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ThalliumIOCs.yaml)
-- [Known ZINC related malware hash](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/ZincJan272021IOCs.yaml)
-- [Known STRONTIUM group domains](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/STRONTIUMJuly2019IOCs.yaml)
-- [NOBELIUM - Domain and IP IOCs - March 2021](https://github.com/Azure/Azure-Sentinel/blob/master/Detections/MultipleDataSources/NOBELIUM_DomainIOCsMarch2021.yaml)
-
-Detections for Azure Firewall are continuously added to the built-in template gallery. To get the most recent detections for Azure Firewall, under **Rule Templates**, filter the **Data Sources** by **Azure Firewall**:
--
-For more information, see [New detections for Azure Firewall in Microsoft Sentinel](https://techcommunity.microsoft.com/t5/azure-network-security/new-detections-for-azure-firewall-in-azure-sentinel/ba-p/2244958).
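-
-These detections follow a common pattern: parse Azure Firewall entries out of the **AzureDiagnostics** table and match them against a list of indicators. A heavily simplified sketch of that pattern (the domains are placeholders, not real IOCs):
-
 ```kusto
 // Skeleton of an Azure Firewall IOC match against DNS proxy logs
 let badDomains = dynamic(["malicious-example.com", "evil-example.net"]);  // placeholder IOCs
 AzureDiagnostics
 | where Category == "AzureFirewallDnsProxy"
 | where msg_s has_any (badDomains)
 | project TimeGenerated, Resource, msg_s
 ```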
-
-### Automation rules and incident-triggered playbooks (Public preview)
-
-Automation rules are a new concept in Microsoft Sentinel, allowing you to centrally manage the automation of incident handling. Besides letting you assign playbooks to incidents (not just to alerts as before), automation rules also allow you to automate responses for multiple analytics rules at once, automatically tag, assign, or close incidents without the need for playbooks, and control the order of actions that are executed. Automation rules will streamline automation use in Microsoft Sentinel and will enable you to simplify complex workflows for your incident orchestration processes.
-
-Learn more with this [complete explanation of automation rules](automate-incident-handling-with-automation-rules.md).
-
-As mentioned above, playbooks can now be activated with the incident trigger in addition to the alert trigger. The incident trigger provides your playbooks a bigger set of inputs to work with (since the incident includes all the alert and entity data as well), giving you even more power and flexibility in your response workflows. Incident-triggered playbooks are activated by being called from automation rules.
-
-Learn more about [playbooks' enhanced capabilities](automate-responses-with-playbooks.md), and how to [craft a response workflow](tutorial-respond-threats-playbook.md) using playbooks together with automation rules.
-
-### New alert enrichments: enhanced entity mapping and custom details (Public preview)
-
-Enrich your alerts in two new ways to make them more usable and more informative.
-
-Start by taking your entity mapping to the next level. You can now map almost 20 kinds of entities, from users, hosts, and IP addresses, to files and processes, to mailboxes, Azure resources, and IoT devices. You can also use multiple identifiers for each entity, to strengthen their unique identification. This gives you a much richer data set in your incidents, providing for broader correlation and more powerful investigation. [Learn the new way to map entities](map-data-fields-to-entities.md) in your alerts.
-
-[Read more about entities](entities.md) and see the [full list of available entities and their identifiers](entities-reference.md).
-
-Give your investigative and response capabilities an even greater boost by customizing your alerts to surface details from your raw events. Bring event content visibility into your incidents, giving you ever greater power and flexibility in responding to and investigating security threats. [Learn how to surface custom details](surface-custom-details-in-alerts.md) in your alerts.
---
-### Print your Microsoft Sentinel workbooks or save as PDF
-
-Now you can print Microsoft Sentinel workbooks, which also enables you to export them to PDF and save them locally or share them.
-
-In your workbook, select the options menu > :::image type="icon" source="media/whats-new/print-icon.png" border="false"::: **Print content**. Then select your printer, or select **Save as PDF** as needed.
--
-For more information, see [Visualize and monitor your data](monitor-your-data.md).
-
-### Incident filters and sort preferences now saved in your session (Public preview)
-
-Now your incident filters and sort preferences are saved throughout your Microsoft Sentinel session, even while you navigate to other areas of the product.
-As long as you're still in the same session, navigating back to the [Incidents](investigate-cases.md) area in Microsoft Sentinel shows your filters and sorting just as you left them.
-
-> [!NOTE]
-> Incident filters and sorting are not saved after leaving Microsoft Sentinel or refreshing your browser.
-
-### Microsoft 365 Defender incident integration (Public preview)
-
-Microsoft Sentinel's [Microsoft 365 Defender (M365D)](/microsoft-365/security/mtp/microsoft-threat-protection) incident integration allows you to stream all M365D incidents into Microsoft Sentinel and keep them synchronized between both portals. Incidents from M365D (formerly known as Microsoft Threat Protection or MTP) include all associated alerts, entities, and relevant information, providing you with enough context to perform triage and preliminary investigation in Microsoft Sentinel. Once in Sentinel, incidents remain bi-directionally synced with M365D, allowing you to take advantage of the benefits of both portals in your incident investigation.
-
-Using both Microsoft Sentinel and Microsoft 365 Defender together gives you the best of both worlds. You get the breadth of insight that a SIEM gives you across your organization's entire scope of information resources, and also the depth of customized and tailored investigative power that an XDR delivers to protect your Microsoft 365 resources, both of these coordinated and synchronized for seamless SOC operation.
-
-For more information, see [Microsoft 365 Defender integration with Microsoft Sentinel](microsoft-365-defender-sentinel-integration.md).
-
-### New Microsoft service connectors using Azure Policy
-
-[Azure Policy](../governance/policy/overview.md) is an Azure service that allows you to use policies to enforce and control the properties of a resource. The use of policies ensures that resources stay compliant with your IT governance standards.
-
-Among the properties of resources that can be controlled by policies are the creation and handling of diagnostics and auditing logs. Microsoft Sentinel now uses Azure Policy to allow you to apply a common set of diagnostics logs settings to all (current and future) resources of a particular type whose logs you want to ingest into Microsoft Sentinel. Thanks to Azure Policy, you'll no longer have to set diagnostics logs settings resource by resource.
-
-Azure Policy-based connectors are now available for the following Azure services:
-- [Azure Key Vault](./data-connectors-reference.md#azure-key-vault) (public preview)
-- [Azure Kubernetes Service](./data-connectors-reference.md#azure-kubernetes-service-aks) (public preview)
-- [Azure SQL databases/servers](./data-connectors-reference.md#azure-sql-databases) (GA)
-
-Customers will still be able to send the logs manually for specific instances and don't have to use the policy engine.
--
-## February 2021
-
-- [Cybersecurity Maturity Model Certification (CMMC) workbook](#cybersecurity-maturity-model-certification-cmmc-workbook)
-- [Third-party data connectors](#third-party-data-connectors)
-- [UEBA insights in the entity page (Public preview)](#ueba-insights-in-the-entity-page-public-preview)
-- [Improved incident search (Public preview)](#improved-incident-search-public-preview)
-
-### Cybersecurity Maturity Model Certification (CMMC) workbook
-
-The Microsoft Sentinel CMMC Workbook provides a mechanism for viewing log queries aligned to CMMC controls across the Microsoft portfolio, including Microsoft security offerings, Office 365, Teams, Intune, Azure Virtual Desktop and many more.
-
-The CMMC workbook enables security architects, engineers, security operations analysts, managers, and IT professionals to gain situational awareness of the security posture of cloud workloads. There are also recommendations for selecting, designing, deploying, and configuring Microsoft offerings for alignment with respective CMMC requirements and practices.
-
-Even if you aren't required to comply with CMMC, the CMMC workbook is helpful in building Security Operations Centers, developing alerts, visualizing threats, and providing situational awareness of workloads.
-
-Access the CMMC workbook in the Microsoft Sentinel **Workbooks** area. Select **Template**, and then search for **CMMC**.
---
-For more information, see:
-
-- [Microsoft Sentinel Cybersecurity Maturity Model Certification (CMMC) Workbook](https://techcommunity.microsoft.com/t5/public-sector-blog/azure-sentinel-cybersecurity-maturity-model-certification-cmmc/ba-p/2110524)
-- [Visualize and monitor your data](monitor-your-data.md)
-
-
-### Third-party data connectors
-
-Our collection of third-party integrations continues to grow, with thirty connectors being added in the last two months. Here's a list:
-- [Agari Phishing Defense and Brand Protection](./data-connectors-reference.md#agari-phishing-defense-and-brand-protection-preview)
-- [Akamai Security Events](./data-connectors-reference.md#akamai-security-events-preview)
-- [Alsid for Active Directory](./data-connectors-reference.md#alsid-for-active-directory)
-- [Apache HTTP Server](./data-connectors-reference.md#apache-http-server)
-- [Aruba ClearPass](./data-connectors-reference.md#aruba-clearpass-preview)
-- [Blackberry CylancePROTECT](connect-data-sources.md)
-- [Broadcom Symantec DLP](./data-connectors-reference.md#broadcom-symantec-data-loss-prevention-dlp-preview)
-- [Cisco Firepower eStreamer](connect-data-sources.md)
-- [Cisco Meraki](./data-connectors-reference.md#cisco-meraki-preview)
-- [Cisco Umbrella](./data-connectors-reference.md#cisco-umbrella-preview)
-- [Cisco Unified Computing System (UCS)](./data-connectors-reference.md#cisco-unified-computing-system-ucs-preview)
-- [ESET Enterprise Inspector](connect-data-sources.md)
-- [ESET Security Management Center](connect-data-sources.md)
-- [Google Workspace (formerly G Suite)](./data-connectors-reference.md#google-workspace-g-suite-preview)
-- [Imperva WAF Gateway](./data-connectors-reference.md#imperva-waf-gateway-preview)
-- [Juniper SRX](./data-connectors-reference.md#juniper-srx-preview)
-- [Netskope](connect-data-sources.md)
-- [NXLog DNS Logs](./data-connectors-reference.md#nxlog-dns-logs-preview)
-- [NXLog Linux Audit](./data-connectors-reference.md#nxlog-linuxaudit-preview)
-- [Onapsis Platform](connect-data-sources.md)
-- [Proofpoint On Demand Email Security (POD)](./data-connectors-reference.md#proofpoint-on-demand-pod-email-security-preview)
-- [Qualys Vulnerability Management Knowledge Base](connect-data-sources.md)
-- [Salesforce Service Cloud](./data-connectors-reference.md#salesforce-service-cloud-preview)
-- [SonicWall Firewall](connect-data-sources.md)
-- [Sophos Cloud Optix](./data-connectors-reference.md#sophos-cloud-optix-preview)
-- [Squid Proxy](./data-connectors-reference.md#squid-proxy-preview)
-- [Symantec Endpoint Protection](connect-data-sources.md)
-- [Thycotic Secret Server](./data-connectors-reference.md#thycotic-secret-server-preview)
-- [Trend Micro XDR](connect-data-sources.md)
-- [VMware ESXi](./data-connectors-reference.md#vmware-esxi-preview)
-
-### UEBA insights in the entity page (Public preview)
-
-The Microsoft Sentinel entity details pages provide an [Insights pane](entity-pages.md#entity-insights), which displays behavioral insights on the entity and helps you quickly identify anomalies and security threats.
-
-If you have [UEBA enabled](ueba-reference.md), and have selected a timeframe of at least four days, this Insights pane will now also include the following new sections for UEBA insights:
-
-|Section |Description |
-|||
-|**UEBA Insights** | Summarizes anomalous user activities: <br>- Across geographical locations, devices, and environments<br>- Across time and frequency horizons, compared to user's own history <br>- Compared to peers' behavior <br>- Compared to the organization's behavior |
-|**User Peers Based on Security Group Membership** | Lists the user's peers based on Azure AD Security Groups membership, providing security operations teams with a list of other users who share similar permissions. |
-|**User Access Permissions to Azure Subscription** | Shows the user's access permissions to the Azure subscriptions accessible directly, or via Azure AD groups / service principals. |
-|**Threat Indicators Related to The User** | Lists a collection of known threats relating to IP addresses represented in the user's activities. Threats are listed by threat type and family, and are enriched by Microsoft's threat intelligence service. |
--
-### Improved incident search (Public preview)
-
-We've improved the Microsoft Sentinel incident searching experience, enabling you to navigate faster through incidents as you investigate a specific threat.
-
-When searching for incidents in Microsoft Sentinel, you're now able to search by the following incident details:
-
-- ID
-- Title
-- Product
-- Owner
-- Tag
-
-## January 2021
-
-- [Analytics rule wizard: Improved query editing experience (Public preview)](#analytics-rule-wizard-improved-query-editing-experience-public-preview)
-- [Az.SecurityInsights PowerShell module (Public preview)](#azsecurityinsights-powershell-module-public-preview)
-- [SQL database connector](#sql-database-connector)
-- [Dynamics 365 connector (Public preview)](#dynamics-365-connector-public-preview)
-- [Improved incident comments](#improved-incident-comments)
-- [Dedicated Log Analytics clusters](#dedicated-log-analytics-clusters)
-- [Logic apps managed identities](#logic-apps-managed-identities)
-- [Improved rule tuning with the analytics rule preview graphs](#improved-rule-tuning-with-the-analytics-rule-preview-graphs-public-preview)
-
-
-### Analytics rule wizard: Improved query editing experience (Public preview)
-
-The Microsoft Sentinel Scheduled analytics rule wizard now provides the following enhancements for writing and editing queries:
-
-- An expandable editing window, providing you with more screen space to view your query.
-- Keyword highlighting in your query code.
-- Expanded autocomplete support.
-- Real-time query validations. Errors in your query now show as a red block in the scroll bar, and as a red dot in the **Set rule logic** tab name. Additionally, a query with errors cannot be saved.
-
-For more information, see [Create custom analytics rules to detect threats](detect-threats-custom.md).
-### Az.SecurityInsights PowerShell module (Public preview)
-
-Microsoft Sentinel now supports the new [Az.SecurityInsights](https://www.powershellgallery.com/packages/Az.SecurityInsights/) PowerShell module.
-
-The **Az.SecurityInsights** module supports common Microsoft Sentinel use cases, like interacting with incidents to change statuses, severity, owner, and so on, adding comments and labels to incidents, and creating bookmarks.
-
-Although we recommend using [Azure Resource Manager (ARM)](../azure-resource-manager/templates/index.yml) templates for your CI/CD pipeline, the **Az.SecurityInsights** module is useful for post-deployment tasks, and is targeted for SOC automation. For example, your SOC automation might include steps to configure data connectors, create analytics rules, or add automation actions to analytics rules.
-
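As a rough illustration of those post-deployment tasks, the sketch below lists an incident and adds a comment with **Az.SecurityInsights**; the resource group and workspace names are placeholders, and parameter names should be verified against the module version you install:

```powershell
# A minimal sketch with Az.SecurityInsights; names are placeholders, and
# parameter sets may differ slightly between module versions.
Install-Module Az.SecurityInsights -Scope CurrentUser

$rg = "<resource-group>"
$ws = "<sentinel-workspace>"

# Fetch one incident from the workspace; an incident's Name is its GUID.
$incident = Get-AzSentinelIncident -ResourceGroupName $rg -WorkspaceName $ws |
    Select-Object -First 1

# Add a triage comment to that incident.
New-AzSentinelIncidentComment -ResourceGroupName $rg -WorkspaceName $ws `
    -IncidentId $incident.Name -Message "Triage started by SOC automation."
```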
-For more information, including a full list and description of the available cmdlets, parameter descriptions, and examples, see the [Az.SecurityInsights PowerShell documentation](/powershell/module/az.securityinsights/).
-
-### SQL database connector
-
-Microsoft Sentinel now provides an Azure SQL database connector, which enables you to stream your databases' auditing and diagnostic logs into Microsoft Sentinel and continuously monitor activity in all your instances.
-
-Azure SQL is a fully managed, Platform-as-a-Service (PaaS) database engine that handles most database management functions, such as upgrading, patching, backups, and monitoring, without user involvement.
-
-For more information, see [Connect Azure SQL database diagnostics and auditing logs](./data-connectors-reference.md#azure-sql-databases).
-
-### Dynamics 365 connector (Public preview)
-
-Microsoft Sentinel now provides a connector for Microsoft Dynamics 365, which lets you collect your Dynamics 365 applications' user, admin, and support activity logs into Microsoft Sentinel. You can use this data to help you audit the entirety of data processing actions taking place and analyze it for possible security breaches.
-
-For more information, see [Connect Dynamics 365 activity logs to Microsoft Sentinel](./data-connectors-reference.md#dynamics-365).
-
-### Improved incident comments
-
-Analysts use incident comments to collaborate on incidents, documenting processes and steps manually or as part of a playbook.
-
-Our improved incident commenting experience enables you to format your comments and edit or delete existing comments.
-
-For more information, see [Automatically create incidents from Microsoft security alerts](create-incidents-from-alerts.md).
-### Dedicated Log Analytics clusters
-
-Microsoft Sentinel now supports dedicated Log Analytics clusters as a deployment option. We recommend considering a dedicated cluster if you:
-
-- **Ingest over 1 TB per day** into your Microsoft Sentinel workspace
-- **Have multiple Microsoft Sentinel workspaces** in your Azure enrollment
-
-Dedicated clusters enable you to use features like customer-managed keys, lockbox, double encryption, and faster cross-workspace queries when you have multiple workspaces on the same cluster.
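Cluster creation itself happens at the Log Analytics level. A minimal sketch with the Az.OperationalInsights module follows; the names, location, and capacity value are illustrative, and the minimum reservation depends on current service limits:

```powershell
# A minimal sketch: create a dedicated Log Analytics cluster.
# Names and location are placeholders; -SkuCapacity is the reserved
# ingestion in GB per day, per current service limits.
New-AzOperationalInsightsCluster -ResourceGroupName "<resource-group>" `
    -ClusterName "<cluster-name>" `
    -Location "eastus2" `
    -SkuCapacity 1000
```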
-
-For more information, see [Azure Monitor logs dedicated clusters](../azure-monitor/logs/logs-dedicated-clusters.md).
-
-### Logic apps managed identities
-
-Microsoft Sentinel now supports managed identities for the Microsoft Sentinel Logic Apps connector, enabling you to grant permissions directly to a specific playbook to operate on Microsoft Sentinel instead of creating extra identities.
-
-- **Without a managed identity**, the Logic Apps connector requires a separate identity with a Microsoft Sentinel RBAC role in order to run on Microsoft Sentinel. The separate identity can be an Azure AD user or a Service Principal, such as an Azure AD registered application.
-
-- **Turning on managed identity support in your Logic App** registers the Logic App with Azure AD and provides an object ID. Use the object ID in Microsoft Sentinel to assign the Logic App an Azure RBAC role in your Microsoft Sentinel workspace, as in the sketch after this list.
-
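A minimal sketch of that role assignment with the Az.Resources module; the object ID and scope shown are placeholders, and you should substitute the least-privileged Microsoft Sentinel role your playbook actually needs:

```powershell
# A minimal sketch: grant the Logic App's system-assigned identity a
# Microsoft Sentinel role on the workspace. All values are placeholders.
$objectId = "<logic-app-managed-identity-object-id>"
$scope    = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"

New-AzRoleAssignment -ObjectId $objectId `
    -RoleDefinitionName "Microsoft Sentinel Responder" `
    -Scope $scope
```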
-For more information, see:
-
-- [Authenticating with Managed Identity in Azure Logic Apps](../logic-apps/create-managed-service-identity.md)
-- [Microsoft Sentinel Logic Apps connector documentation](/connectors/azuresentinel)
-
-### Improved rule tuning with the analytics rule preview graphs (Public preview)
-
-Microsoft Sentinel now helps you better tune your analytics rules, increasing their accuracy and decreasing noise.
-
-After editing an analytics rule on the **Set rule logic** tab, find the **Results simulation** area on the right.
-
-Select **Test with current data** to have Microsoft Sentinel run a simulation of the last 50 runs of your analytics rule. A graph is generated to show the average number of alerts that the rule would have generated, based on the raw event data evaluated.
-
-For more information, see [Define the rule query logic and configure settings](detect-threats-custom.md#define-the-rule-query-logic-and-configure-settings).
-
-## December 2020
-
-- [80 new built-in hunting queries](#80-new-built-in-hunting-queries)
-- [Log Analytics agent improvements](#log-analytics-agent-improvements)
-
-### 80 new built-in hunting queries
-
-Microsoft Sentinel's built-in hunting queries empower SOC analysts to reduce gaps in current detection coverage and ignite new hunting leads.
-
-This update for Microsoft Sentinel includes new hunting queries that provide coverage across the MITRE ATT&CK framework matrix:
-
-- **Collection**
-- **Command and Control**
-- **Credential Access**
-- **Discovery**
-- **Execution**
-- **Exfiltration**
-- **Impact**
-- **Initial Access**
-- **Persistence**
-- **Privilege Escalation**
-
-The added hunting queries are designed to help you find suspicious activity in your environment. While they may return legitimate activity and potentially malicious activity, they can be useful in guiding your hunting.
-
-If, after running these queries, you're confident in the results, you may want to convert them to analytics rules or add hunting results to existing or new incidents.
-
-All of the added queries are available via the Microsoft Sentinel Hunting page. For more information, see [Hunt for threats with Microsoft Sentinel](hunting.md).
-
-### Log Analytics agent improvements
-
-Microsoft Sentinel users benefit from the following Log Analytics agent improvements:
-
-- **Support for more operating systems**, including CentOS 8, Red Hat 8, and SUSE Linux 15.
-- **Support for Python 3** in addition to Python 2.
-
-Microsoft Sentinel uses the Log Analytics agent to send events to your workspace, including Windows Security events, Syslog events, CEF logs, and more.
-
-> [!NOTE]
-> The Log Analytics agent is sometimes referred to as the OMS Agent or the Microsoft Monitoring Agent (MMA).
->
-
-For more information, see the [Log Analytics documentation](../azure-monitor/agents/log-analytics-agent.md) and the [Log Analytics agent release notes](https://github.com/microsoft/OMS-Agent-for-Linux/releases).
-## November 2020
-
-- [Monitor your Playbooks' health in Microsoft Sentinel](#monitor-your-playbooks-health-in-microsoft-sentinel)
-- [Microsoft 365 Defender connector (Public preview)](#microsoft-365-defender-connector-public-preview)
-
-### Monitor your Playbooks' health in Microsoft Sentinel
-
-Microsoft Sentinel playbooks are based on workflows built in [Azure Logic Apps](../logic-apps/index.yml), a cloud service that helps you schedule, automate, and orchestrate tasks, business processes, and workflows. Playbooks can be automatically invoked when an incident is created, or when triaging and working with incidents.
-
-To provide insights into the health, performance, and usage of your playbooks, we've added a [workbook](../azure-monitor/visualize/workbooks-overview.md) named **Playbooks health monitoring**.
-
-Use the **Playbooks health monitoring** workbook to monitor the health of your playbooks, or look for anomalies in the number of succeeded or failed runs.
-
-The **Playbooks health monitoring** workbook is now available in the Microsoft Sentinel Templates gallery:
--
-For more information, see:
-
-- [Logic Apps documentation](../logic-apps/monitor-logic-apps-log-analytics.md#set-up-azure-monitor-logs)
-
-- [Azure Monitor documentation](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace)
-
-### Microsoft 365 Defender connector (Public preview)
-
-The Microsoft 365 Defender connector for Microsoft Sentinel enables you to stream advanced hunting logs (a type of raw event data) from Microsoft 365 Defender into Microsoft Sentinel.
-
-With the integration of [Microsoft Defender for Endpoint (MDATP)](/windows/security/threat-protection/) into the [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) security umbrella, you can now collect your Microsoft Defender for Endpoint advanced hunting events using the Microsoft 365 Defender connector, and stream them straight into new purpose-built tables in your Microsoft Sentinel workspace.
-
-The Microsoft Sentinel tables are built on the same schema that's used in the Microsoft 365 Defender portal, and provide you with complete access to the full set of advanced hunting logs.
-
-For more information, see [Connect data from Microsoft 365 Defender to Microsoft Sentinel](connect-microsoft-365-defender.md).
-
-> [!NOTE]
-> Microsoft 365 Defender was formerly known as Microsoft Threat Protection or MTP. Microsoft Defender for Endpoint was formerly known as Microsoft Defender Advanced Threat Protection or MDATP.
->
-
-## Next steps
-
-> [!div class="nextstepaction"]
->[On-board Microsoft Sentinel](quickstart-onboard.md)
-
-> [!div class="nextstepaction"]
->[Get visibility into alerts](get-visibility.md)
service-bus-messaging Service Bus Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-sas.md
Title: Azure Service Bus access control with Shared Access Signatures description: Overview of Service Bus access control using Shared Access Signatures, with details about SAS authorization with Azure Service Bus. Previously updated : 04/26/2022 Last updated : 11/01/2022 ms.devlang: csharp
If you are using **Azure CLI**, use the [`az servicebus namespace authorization-
The scenarios described as follows include configuration of authorization rules, generation of SAS tokens, and client authorization.
-For a sample of a Service Bus application that illustrates the configuration and uses SAS authorization, see [Shared Access Signature authentication with Service Bus](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/ManagingEntities/SASAuthorizationRule).
+For a sample of a Service Bus application that illustrates the configuration and uses SAS authorization, see [Shared Access Signature authentication with Service Bus](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/samples/Sample07_CrudOperations.md).
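As a quick illustration of token generation, the sketch below signs the URL-encoded resource URI and expiry with HMAC-SHA256, which is the documented SAS token format; the namespace URI, key name, and key are placeholders:

```powershell
# A minimal sketch: generate a Service Bus SAS token in PowerShell.
# The resource URI, key name, and key below are placeholders.
$resourceUri = "https://<namespace>.servicebus.windows.net/<entity>"
$keyName     = "RootManageSharedAccessKey"
$key         = "<shared-access-key>"

# Token expiry, expressed as seconds since the Unix epoch (one hour from now).
$expiry = [DateTimeOffset]::UtcNow.AddHours(1).ToUnixTimeSeconds()

# Sign "<url-encoded-uri>`n<expiry>" with HMAC-SHA256 over the key.
$encodedUri   = [Uri]::EscapeDataString($resourceUri)
$stringToSign = "$encodedUri`n$expiry"
$hmac         = [System.Security.Cryptography.HMACSHA256]::new([Text.Encoding]::UTF8.GetBytes($key))
$signature    = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

# Assemble the token in the SharedAccessSignature format.
"SharedAccessSignature sr=$encodedUri&sig=$([Uri]::EscapeDataString($signature))&se=$expiry&skn=$keyName"
```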
## Access Shared Access Authorization rules on an entity
site-recovery Vmware Azure Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-failback.md
Title: Fail back VMware VMs/physical servers from Azure with Azure Site Recovery description: Learn how to fail back to the on-premises site after failover to Azure, during disaster recovery of VMware VMs and physical servers to Azure. Last updated 05/27/2021 # Fail back VMware VMs to on-premises site
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Previously updated : 06/14/2022 Last updated : 11/02/2022
View the JSON for inventory rules by selecting the **Code view** tab in the **Bl
## Inventory run
-A blob inventory run is automatically scheduled every day. It can take up to 24 hours for an inventory run to complete. For hierarchical namespace enabled accounts, a run can take as long as two days, and depending on the number of files being processed, the run might not complete by end of that two days. If a run does not complete successfully, check subsequent runs to see if they complete before contacting support. The performance of a run can vary, so if a run doesn't complete, it's possible that subsequent runs will.
+If you configure a rule to run daily, it's scheduled to run every day. If you configure a rule to run weekly, it's scheduled to run each week on Sunday, UTC time.
-Inventory policies are read or written in full. Partial updates aren't supported.
+Most inventory runs complete within 24 hours. For hierarchical namespace enabled accounts, a run can take as long as two days, and depending on the number of files being processed, the run might not complete by the end of those two days. The maximum amount of time that a run can take before it fails is six days.
+
+Runs don't overlap, so a run must complete before another run of the same rule can begin. For example, if a rule is scheduled to run daily, but the previous day's run of that same rule is still in progress, then a new run will not be initiated that day. Rules that are scheduled to run weekly will run each Sunday regardless of whether a previous run succeeds or fails. If a run does not complete successfully, check subsequent runs to see if they complete before contacting support. The performance of a run can vary, so if a run doesn't complete, it's possible that subsequent runs will.
+
+Inventory policies are read or written in full. Partial updates aren't supported. Inventory rules are evaluated daily. Therefore, if you change the definition of a rule, but the rules of a policy have already been evaluated for that day, then your updates won't be evaluated until the following day.
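As a rough sketch of writing a policy in full with the Az.Storage cmdlets, the example below defines a daily block-blob rule and replaces the account's inventory policy; the container, account, and schema field names are illustrative:

```powershell
# A minimal sketch with Az.Storage; names and schema fields are illustrative.
# Define a daily inventory rule for block blobs, written as CSV.
$rule = New-AzStorageBlobInventoryPolicyRule -Name "DailyBlockBlobInventory" `
    -Destination "inventory-results" `
    -Format Csv `
    -Schedule Daily `
    -BlobType blockBlob `
    -BlobSchemaField Name, BlobType, Content-Length, Last-Modified

# Writing the policy replaces it in full; include every rule you want kept.
Set-AzStorageBlobInventoryPolicy -ResourceGroupName "<resource-group>" `
    -StorageAccountName "<storage-account>" `
    -Rule $rule
```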
> [!IMPORTANT] > If you enable firewall rules for your storage account, inventory requests might be blocked. You can unblock these requests by providing exceptions for trusted Microsoft services. For more information, see the Exceptions section in [Configure firewalls and virtual networks](../common/storage-network-security.md#exceptions).
The following table describes the schema of the `BlobInventoryPolicyCompleted` e
|Field|Type|Description|
|||
-|scheduleDateTime|string|The time that the inventory policy was scheduled.|
+|scheduleDateTime|string|The time that the inventory rule was scheduled.|
|accountName|string|The storage account name.|
|ruleName|string|The rule name.|
|policyRunStatus|string|The status of inventory run. Possible values are `Succeeded`, `PartiallySucceeded`, and `Failed`.|
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
When you connect to Blob Storage by using an SFTP client, you might be prompted
> [!div class="mx-tdBreakAll"]
> | Region | Host key type | SHA 256 fingerprint <sup>1</sup> | Public key |
> |||||
-> | West Europe | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
-> | West Europe | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
-> | West Europe | ecdsa-sha2-nistp256 | `0WNMHmCNJE1YFBpHNeADuT5h+PfJ/jJPtUDHCxCSrO0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANx85rJLXM8QZi33y8fzvUbH+O5Cujn0oJFDGQrwhGJQTHsjIhd5bhFFgDvJ64/4SGrtP1LHDKLwr9+ltzgxIE=` |
-> | West Europe | ecdsa-sha2-nistp384 | `90g+JfQChjbb3OOV0YIGSVTkNotnefCV2NcSuMdPrzY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJgtrLFy2zsyhNvXlwHUmDBw1De++05pr1ZTxOIVnB17XZix0Euwq/wZTs0cE01c5/kYdAp+gQHEz594e7AQXBTCTqUiIS1a4+IXzfiCcShVfMsLFBvzjm9Yn8qgW9Ofg==` |
-> | East US | rsa-sha2-256 | `F6pNN5Py68/1hVRGEoCwpY5H7vWhXZM/4L442lY4ydE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAiUB94zwLf0e/++OeiAjE0X7Od2nuqyLyAqpOb7nfQUAOWyqgRL04yaan6R2Ir2YtI0FRwA6yRETUBf2+NuVhIONgLNsgPw3RakL1BUqAEzZAyF4sOjWnYE5/s/1KmYOE052SefzMciqjgkBV2+YrPW1CLivNhL4d1vuQh05kADLgHJiAVD6BqSM7Z6VoLhW+hfP4JklyQAojCF6ejXW7ZGWdqQGKLCUhdaOPSRAxjOmr9gZxJ69OvdJT2Cy6KO1YQt2gY2GbPs+4uAeNrz40swffjut4zn1NILImpHi8PTM+wcGYzbW4Nn7t5lhvT9kmX9BkSYXLVTlI9p1neT9t` |
-> | East US | rsa-sha2-512 | `MIpoRIiCtEKI23MN+S2bLqm5GKClzgmRpMnh90DaHx8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Ut7Rq7Vak26F29czig5auq334N9mNaJmdtWoT32uDzWjZNw/N8uxXQS51oSeD7c0oXWIMBklH0AS8JR1xvMUGVnv5aRXwubicQ6z4poG5RSudYDA3BjMs61LZUKZH/DRj7qR/KUBMNieT1X+0DbopZkO9etxXdKx+VqJaK3fRC5Zflxj5Z9Stfx/XlaBXptDdqnInHZAUbZxnNziPYrBOuXYl5/Cd6W4lR7dBsMCbjINSIShvrhPpVfd3qOv/xPpU172nqkOx2VsV4mrfqqg62ZdcenLJDYsiXd/AVNUAL+dvzmj1/3/yVtFwadA2l83Em6CgGpqUmvK6brY3bPh` |
-> | East US | ecdsa-sha2-nistp256 | `ixDeCdmQOB9ROqdJiVdXyFVsRqLmJJUb2M4shrWj8gI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNrdcVT12fftnDcjYL8K3zLX3bdMwYLjNu2ZJ1kUZwpVHSjNc+1KWB2CGHca+cMLuPSao4TxjIX0drn9/o+GHMQ=` |
-> | East US | ecdsa-sha2-nistp384 | `DPTC6EIORrsxzpGt6IZzAN67nlZUXzg5ANQ3QGz987Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEP3CUvPVWNVnFuojR43KRxTQt1xiClbgDzqN/s9F5luivP+Gh0QrK5UHf6diEju4ZQ9k2O10MEDs6c46g4fT56rY8CQkeBsaaBq8WYLRhSQsFZ6SZuw14oFNodniAO33g==` |
-> | West India | rsa-sha2-256 | `Fkh7r/tOJy1cZC6nI75VsO1sS3ugMvJ56U02uGGJHFo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDHCzLI51bbBLWK7TcXvXvEHaLQMzuYKEwyoS1/oC5EN3NsLZl4BV5d2zbLETFDjsky/btWiAkCvHuzxealxGgzw69ll90aWSOEY/epaYJvueOTvGy4+rJY8Xyc64VdHml8n3EEZTQmBEi3Tn6bViLwvC0iT2/noLeYGXh0/NL0T3BeblwSm3cNXyemkBQO/zyYcchqRtKJu8w8brYVZYFINlTeBu4LyDP1k9DMtuewGoeH8SmvDxUmiIGh2VDlPmXe3IkMR0nSgz10jMl3F0fei7ZJ+8zdCVbBuIqsJf+koJa/q9npstWGMFddMX3nR0A3HnG4v5aCAGVmfl11iC0J` |
-> | West India | rsa-sha2-512 | `xDtcgfElRGUUgWlU9tRmSQ58WEOKoUSKrHFDruhgDIM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCXehufp18nKehU4/GOWMkXJ87t22TyG5bNdVCPO2AgLJ88FBwZJvDurLgdPRDRuJImysbD7ucwk2WoDNC39q0TWtCRyIKTXfwvPmyG+JZKkT+/QfslMqiAXAPIQtVr2iXTeuHmn3tk+PksGXnTwb3oFV4wv40Wi1CbwvtCkUsBSujq4AR7BqksPnAqPrAyw+fFR3w4iD3EdtHBdIVULez3lkpMH/d04rf2bjh6lpI9YUdcdAmTGYeMtsf/ef8z0G2xpN2aniLCoCPQP85cooKq7YEhBDR8Lzem3vWnqS3gPc4rUrCJoDkGm0iL/4GCWRyG+RPi70WSdVysJ+HIm0Ct` |
-> | West India | ecdsa-sha2-nistp256 | `t+PVPMSVEgQ3FPNploXz7mO25PFiEwzxutMjypoA2DM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCzR5dhW3wfN5bRqLfeZ2hlj7iRerE4lF5jk+iQl6HJHKXIsH6lQ63Wyg7wOzF65jNnvubAJoEmzyyYig+D3A+w=` |
-> | West India | ecdsa-sha2-nistp384 | `pLODd+3JNeLVcPYYnI0rSWoemhMWws0jLc3J8cV6+GU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL2PEknfZpPAT4ejqBJW8InHPELP1G7hGvroW5J3evJr8Qrr//voa6aH8ZF7Ak0HcVVOOCSzfjcEpZYjjrXrzuCOekU48DkSF8i1kKqV4iXejNNQ1ohDCbsiAyoxQMY9cA==` |
-> | East US 2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
-> | East US 2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
-> | East US 2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
-> | East US 2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
-> | West US | rsa-sha2-256 | `kqxoK1j6vHU8o8XyaiV87tZFEX9nE6o/yU0lOR5S6lE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAd7gh0mFw3iRAKusX3ai6OE0KO5O2CMlezvZOAJEH88fzWQ/zp0RZ1j7zJ8sbwslA6v3oRQ7Cx9ptAMTrL8SW4CZYcwETlfL3ZP39Llh+t7rZovIgvCDU0tijYvsa1W0T9XZgcwWEm6cWQzdm+i9U0KUdh7KgsubPAhGQ7xrOVEqgB9MYMofSSdIfKMt8K7xOSam6mhWiTSSIEGgeMTIZ9TgXkgAEJ8TNl3QHRoM8HxMnRFjtkXbT3EeSg6VOqi69Cei3hrmS64qvdzt2WwoTQwTFjxHocWGgA+Ow53wqWt8iYgOudpoB1neXiIcF4p0CN8zjvXNiRbZPg9lXFM9R` |
-> | West US | rsa-sha2-512 | `/PP9B/9KEa+QUyump1Yt05Lfk0LY/eyQhHyojh5zMEg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8R8bFe8QSTYKK+4evMpnlB8y0rQCqikTyviqD4rva7i4f1f/JxmptJQ/wkipHPXk6E7Du6oK/iJaZ+wjZ03tNIWwAGn0SdlTvWuwQwigK9k3JRlLYO+Uj/SSnBQWf8Dmp+cA6RDalteHpM2KwaUK65BHYC75bWKHaNntadTIU4kQ0BvFzmNRcJWL6otd5RkdYXjJWHu21zcv4EpRHGmVCD0na+UWce6UGDbLDtsZVJd2Q7IyeTrXpWxEO0fFN2Gu9gINfWC1FpuffGaqWSa4nK69n39lUKz4PUdu6Owmd9aNbLXknvtnW4+xGbX6oQa8wYulINHjdNz8Ez6nOiNZ9` |
-> | West US | ecdsa-sha2-nistp256 | `peqBbfcWZRW4QzLi69HicUUTwdtfW7/E9WGkgRMheAo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcTos/zmSn15kzn1Lk8N8QQh9hzOwqOSOf/bCpu6AQbWJtvjf1pHMuZlS2PpIV7G+/ImxXGpqpHqQlcD+Lg8Ro=` |
-> | West US | ecdsa-sha2-nistp384 | `sg63Cc3Mvnn9hoapGaEuZByscUEMa+xgw/3ruz49szk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzX2t9ALjFwpcBLm4B0+/D47PMrDya0KTva5w4E5GZNb5OwQujQvtUS2owd8BcKdMBeXx2S7qbcw6cQFffRxE+ZTr4J+3GoCmDM0PqraXxJHBRxyeK6vlrSR8ojRzIApA==` |
-> | East US 2 EUAP | rsa-sha2-256 | `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
-> | East US 2 EUAP | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
-> | East US 2 EUAP | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
-> | East US 2 EUAP | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
-> | Australia Central | rsa-sha2-256 | `q2pDjwwgUuAMU3irDl2D+sbH8wQpPB5LHBOFFzwM9Sk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDnqOrNklxmyreRYe7N72ylBCensxxPTBWX/CfbdbGfEbcGRtMGHReeojkvf4iJ7mDMZRzecgYxZ9o2bwTH9UImNkuZTsFNH6APuJ075WyxoDgdBX1UAQ3eE6BrCNI0BcwLakU9lq0rNhmxMpt/quBXxxWbRieKR9liTOg5CGSqoUPo7TpwaZQBltJCEf7rN5wGUlHV49iuiJIasSldYT6F1c3vS4bJb2sdIvVnKVLq+yTMzaPzWn34BD+KHx/pkB+s7/vQtdMfBBEdgEdPVvMPsyXtIKhx4Q79LnfZT19RDY8KW1mJrbPo67oEcjJYTXSZTKysjCUNmNNrnXvp6sHd` |
-> | Australia Central | rsa-sha2-512 | `+tdLVAC4I+7DhQn9JguFBPu0/Hdt5Ru2zjuOOat+Opw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCnd0ETMwpU8w7uwWu1AWDv6COIwLKMmxXu+/1rR429cNXuPrBkMldUfI7NKVaiwnu1kLPuJsgUDkvs/fc7lxx2l5i6mYBWJOXcWzAfXSBfB1a+1SK+2tDPYT3j4/W/KRW74DFPokWTINre22UVc+8sbzkmdtX/FzZdVcqI4+xJSjwdsp2hbzcsVWkxWhrFzKmBU40m5E/YwKQwAcbkzmX6AN5O8s66TQs2uPkRuTItDWI3ShW7QzW05jb6W8TeYdkouZ5PY0Yz/h3/oysFzo4VaUc0y3JP98KRWNXPiBrmvphpKnU1TQrjvVkYEsiCBHMOUnNVHdR1oIHd2zPRneK5` |
> | Australia Central | ecdsa-sha2-nistp256 | `m2HCt3ESvMLlVBMwuo9jsQd9hJzPc/fe0WOJcoqO3RA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBElXRuNJbnDPWZF84vNtTjt4I/842dWBPvPi2fkgOV//2e/Y9gh0koVVAYp6MotNodg4L9MS7IfV9nnFSKaJW3o=` | > | Australia Central | ecdsa-sha2-nistp384 | `uoYLwsgkLp4D5diAulDKlLb7C5nT4gMCyf9MFvjr7qg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARO/VI5TyirrsNZZkI2IBS0TelywsJKj71zjBGB8+mmki+mmdtooSTPgH0zmmyWb/z3iJG+BnEEv/58zIvJ+cXsVoRChzN+ewvsqdfzfCqVrzwyro52x5ymB08yBwDYig==` |
-> | North Central US | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
-> | North Central US | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
-> | North Central US | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
-> | North Central US | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
-> | Brazil South | rsa-sha2-256 | `qNzxx1kid41tZGcmbbyZrzlCIPJ9TFa20pUqvRbcjro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC04g5K8emsS4NpL6jCT3wlpi6Msb5ax6QGlefO3IKp3wDKWAEqN+PvqBdrNp1PsitTKeyRSCLofq9k2wzeAMzV2n3UVqmUpNf9Q0Yd8SuXPhKG6VhqG2hL5+ztrlVTMI2Ak18SLaAEA1x7y9Z1lkEYGvCzJQaAw5EG8kd7XHGaI9nSCJ7RFOdJQF/40gq8z6E+bWW9Xs55JpWQ0i44i/ZvQUEiv5nyAa7D86y23wk1pTIFkRT99Kwdua0GtyUlcgCRDDTOzsCTn4qTo/MAF1Uq/ol4G0ZxwKnAEkazSZ1c+zEmh6GJNwT64nWBZ+pt5Rp3ugW+iDc/mIlXtxEV2k7V` |
-> | Brazil South | rsa-sha2-512 | `KAmGT8A7nRdxxQD7gulgmGTJvRhRdWPVDdagGCDmJug=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6W0FiaS21Dze6Sr6CTB8rLBu1T1Zej+11m7Kt283PSkQNYmjDDPUx0wSgylHoElTnFcXG+eFMznnjxDqkH+GnYBmlpW3nxxdTYD/MrdP4dX9ibPCFjDupIDJ4thv+9xWCw/0RpGc1NlUx2YmenDVMFJtYqjB1IDa2UUEeUHeQa1qmiBs1tbBQdlws1MCFnfldliB5H+cO4xnbAUjOlaa01k7GKqPf0H75+R83VcIcFw8hSuCvgMT+86H6jRRfqiIzE7WGbQBTPQs0rGcvxcGR3oGOmtB2UmOD232XTEk+sG3q2RxtPKWTz8wz1Tt2c1BOxmtuXTtzXnigZjB2t8y5` |
+> | Australia Central | rsa-sha2-256 | `q2pDjwwgUuAMU3irDl2D+sbH8wQpPB5LHBOFFzwM9Sk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDnqOrNklxmyreRYe7N72ylBCensxxPTBWX/CfbdbGfEbcGRtMGHReeojkvf4iJ7mDMZRzecgYxZ9o2bwTH9UImNkuZTsFNH6APuJ075WyxoDgdBX1UAQ3eE6BrCNI0BcwLakU9lq0rNhmxMpt/quBXxxWbRieKR9liTOg5CGSqoUPo7TpwaZQBltJCEf7rN5wGUlHV49iuiJIasSldYT6F1c3vS4bJb2sdIvVnKVLq+yTMzaPzWn34BD+KHx/pkB+s7/vQtdMfBBEdgEdPVvMPsyXtIKhx4Q79LnfZT19RDY8KW1mJrbPo67oEcjJYTXSZTKysjCUNmNNrnXvp6sHd` |
+> | Australia Central | rsa-sha2-512 | `+tdLVAC4I+7DhQn9JguFBPu0/Hdt5Ru2zjuOOat+Opw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCnd0ETMwpU8w7uwWu1AWDv6COIwLKMmxXu+/1rR429cNXuPrBkMldUfI7NKVaiwnu1kLPuJsgUDkvs/fc7lxx2l5i6mYBWJOXcWzAfXSBfB1a+1SK+2tDPYT3j4/W/KRW74DFPokWTINre22UVc+8sbzkmdtX/FzZdVcqI4+xJSjwdsp2hbzcsVWkxWhrFzKmBU40m5E/YwKQwAcbkzmX6AN5O8s66TQs2uPkRuTItDWI3ShW7QzW05jb6W8TeYdkouZ5PY0Yz/h3/oysFzo4VaUc0y3JP98KRWNXPiBrmvphpKnU1TQrjvVkYEsiCBHMOUnNVHdR1oIHd2zPRneK5` |
+> | Australia Central 2 | ecdsa-sha2-nistp256 | `m7Go9P1bwcPHAcZzRSXdwYroDIdZzt0jhxkJW42YGKY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHp76felOL7GAHcJoW6vcCS83jkeR6RdFCwUk0Jf6v7SFoqYNZfTaryy2n0vwG1W1dAyHvOjB1+gzTZOkHN/cAI=` |
+> | Australia Central 2 | ecdsa-sha2-nistp384 | `9Jc39OueTg3pQcq8KJgzsmPlVXxILG24Euw27on7SkY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEduOE61sSP2BozvJ6QLtRDZ7j0TenX7PjcpPVtYIQuKQ+h3qakXFwFnj8N3m8+LpTXYO41mgX7N02Rl12QvD7lDpUgHUChaNpUcMcSpm5qvguLyG6XZg2BDNd6pyx+fpw==` |
+> | Australia Central 2 | rsa-sha2-256 | `sqVq1zdzD3OiAbnDjs70/why2c3UZOPMTuk5sXeOu4Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDKNZVZ5RVnGa0fYSn+Nx3tnt526fmMf+VufOBOy5/hEnqV6mPKXMiDijx2gFhKY4nyy957jYUwcqp1XasweWX6ISuhfg4QWcygW0HgmVdlSDezobPDueuP0WdhVsG3vXGbEYnrZOUR5kQHagX/wWf6Diy1J5Cn2ojIKGuSuGY/9bu3bnZzKt08fj+gQCEd1GxpPoBUfjF/73MM57IRhdmv919rsGD5nsyZCBmqFoKlLH/gKYZ4B3hylqf/6gER7OeZmG2S/U/fRAN0hVK7RkHNf2CFoCmuxXS6r87BznT5vF3nmd7tsf0akaxLjfWRbKLMWWyZkzU4/jijpbDDuu1x` |
+> | Australia Central 2 | rsa-sha2-512 | `p6vLHCTBcKOkqz7eiVCT6pLuIg7h4Jp41lvL/WOQLWM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDcqD2zICW1RLKweRXMG9wtOGxA5unQO/nd9yslfOIo54Ef0dlhAXGFPmCd3Yj60Gt/CIpqguzKLGm4D3nf19KjXE8V59cD7/lN6mVrFrm+6CU44JAzKN9ERUelxhSQKi/dsDR773wt4jsAt4SLBRrs19RC2fkYnxZgC/LzNZKXXY3FFb06uwheJjGOHyeQJbGpaV3hlelhOSV1UF2JAB8v6d8+9+S+b666EcpQ70JtxtA8h1s30hqhTKgYdRYMPfz7lqKXvact2NBXlqYRPod5cLW7lYBb2LzqTk1D44d8cwDknX2pYQJpgeFwJhB6SO9mF/Ot+jk+jV/CxUI55DPd` |
+> | Australia East | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
+> | Australia East | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
+> | Australia East | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
+> | Australia East | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
+> | Australia Southeast | ecdsa-sha2-nistp256 | `4xc49pnNg4t/tr91pdtbZLDkqzQVCguwyUc16ACuYTc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdswzJ+/Bw5ia/wRMaa0llZOjlz67MyZXkq7Ye38XMSHbS4k/GwM0AzdX+qFEwR00lxZCmpHH28SS+RyCdIzO0=` |
+> | Australia Southeast | ecdsa-sha2-nistp384 | `DEyjMyyAYkegwLtMBROR/8STr1kNoQzEV+EZbAMhb1s=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJRZx6caZTXnbTW/zRzKfoKC4LGzvD5fnr2p8yGWxDq27CjUEMxkctWcT6XS3AstF2MLMTOFp/UkkSr8eP0vXI8g99YDeFGXtoBiFSIgYF2Aiu/kpfEu3feiIUl3SVzxKw==` |
+> | Australia Southeast | rsa-sha2-256 | `YafIMxux7NtoKCrjQ2eDxaoRKHBzpanatwsYbRhrDKQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7omLu37G00gLoGvrPOJXpRcI5GTszUSldKjrARq0WeJIXEdekaSTz5qv2kSN/JaBDJiO9E4AJFI9q5AvchdmMVs4I59EIJ0bsR9lK+9eRP4071EEc8pb3u/EPFaZQ8plKkvINJrdK6p0R2FhlFxa7wrRlKybenF1v7aU3Vw79RQYYXaZifiNrIQFB8XQy3QQj2DcWoEEOjbOgZir9XzPBvmeR8LLEWPTLYangYd3TsQtybDpP6acpOKaGYDEyXiA8Lxv8O276LUBny6katPrNvfGZScxn6vbTEZyog+By8vyXMWKEbC1Qc/ecBBk5obFzbUnq3RP1VXLJspo99cex` |
+> | Australia Southeast | rsa-sha2-512 | `FpFCo9sNUkdnD1kOSkWDIfnasYhExvRr1pJlE631QrA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmuW2VZAhR6IoIOr32WnLlsr/rt3y4bPFpFcNhXaLifCenkflj9BufX3lk5aEXadcemfKlUJJdwBTvTt1j4+X3P2ecCZX1/GSsRKSTuiivuOgkPxk3UlfggkgN9flE9EdUxHi/jN/OQ9CjGtHxxk72NJSMNAjvIe0Ixs7TfqqyEytYAcirYcSGcc0r70juiiWozflXlt+bS7mXvkxpqMjjIivX+wTAizzzJRaC6WcRbjQAkL2GP6UCFfBI1o9NBfXbz+qvs1KTmNA0ugRQ7g6MdiNOePHrvoF1JgTlCxEjy+/IqPiC8nNQUVCW6/gcATQoDQn0n9Lwm1ekycS35xEh` |
> | Brazil South | ecdsa-sha2-nistp256 | `rbOdmodk5Aq+nxHt04TN7g6WyuwbW5o+sDbj86l6jp8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqueXBigofrM5Kp2eA4wd4XxHcwcNgNFWGgEd0EoNdKWt9NroU47bN43f79Y5vPiSa4prKW1ccMBl40nNN4S4=` | > | Brazil South | ecdsa-sha2-nistp384 | `cenQeg58JZ+Dvu3AC7P7lC/Jq7V3+YPaS37/BBn3OlQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBhfnlfXV9/m6ZgOoqmSgX3VPnRdTOraBhMv8v7lEN1lWwyBpiWepu52KS0jR1RhttfXB+n+p6i2+9djJ1zT7fHq4sNn/d/3k2J6IjJlymZ32GwPvDk+fGefupUtabvRQ==` |
-> | UK West | rsa-sha2-256 | `2NQ5z6fQjt4SZKdViPS+I2kX7GoXOx3fVE81t8/BCVE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNq0xtA0tdZmkSDTNgA05YLH5ZuLFKD7RbruzuL4KVU2In0DQUtJkVqRXIaB3f+cEBTs9QrMUqolOdCCunhzosr5FvCO3I6HZ8BLnVNshtUBf2C1aT9yonlkdiIyc2pCHonds8vHKC4SBNu3Jr584bhyan8NuzJqzPCnKTdHwyWjf8m5mB4liK/ka4QGiaLLYTAjCCXmaXXOVZI2u0yDcJQXAjAP5niCOQaPHgdGk6oSjs0YKB29V+lIdB8twUnBaJA9jgECM2brywksmXrAyUPnIFD6AVEiFZsUH3iwgFAH7O6PLZTOSgJuu994CNwigrOXTbABfpH2YMjvUF///5` |
-> | UK West | rsa-sha2-512 | `MrfRlQmVjukl5Q5KvQ6YDYulC3EWnGH9StlLnR2JY7Q=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQClZODHJMnlU29q0Dk1iwWFO0Sa0whbWIvUlJJFLgOKF5hGBz9t9L6JhKFd1mKzDJYnP9FXaK1x9kk7l0Rl+u1A4BJMsIIhESuUBYq62atL5po18YOQX5zv8mt0ou2aFlUDJiZQ4yuWyKd44jJCD5xUaeG8QVV4A8IgxKIUu2erV5hvfVDCmSK07OCuDudZGlYcRDOFfhu8ewu/qNd7M0LCU5KvTwAvAq55HiymifqrMJdXDhnjzojNs4gfudiwjeTFTXCYg02uV/ubR1iaSAKeLV649qxJekwsCmusjsEGQF5qMUkezl2WbOQcRsAVrajjqMoW/w1GEFiN6c70kYil` |
-> | UK West | ecdsa-sha2-nistp256 | `bNYdYVgicvl1yaOR/1xLqocxT8bamjezGFqFdO6Od0I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWKoJuxB3EO5bKjxnviF+QTv3PBSViD1SNKbfj0qYfAjObQKZuiqcFYeDoPrkhk9jfan2jU6oCEN4+KDwivz3k=` |
-> | UK West | ecdsa-sha2-nistp384 | `6V8vLtRf6I5BjuLgToJ1cROM72UqPD+SC0N9L9WG6PA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+7R/5qSfsXACmseiErhfwhiE7Rref/TNHONqiFlAZq2KCW3w3u8+O4gpJEflibMFP/Mj5YeoygdPUwflFNcST9K+vnkEL3/lqzoGOarGBYIKtEZwixv3qlBR+KyoRUkw==` |
-> | West Central US | rsa-sha2-256 | `aSNxepEhr3CEijxbPB4D5I+vj8Um7OO6UtpzJ/iVeRg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDWmd8Zd7dCfamYd/c1i4wYhhRnaIgUmK7z/o8ehr4bzJgWRbjrxMtbkD2y7ImjE2NIBG5xglz6v9z4CFNjCKUmoUl7+Le3Rsc5sJ/JmHAmEXb0uiDMzhq9f6Qztp+Pb9uqLfsPmm6pt1WOcpu+KNpiGtPWTL21sJApv6JPKU+msUrrCIekutsHtW6044YPXNVOnvUXv08BaPFhbpeGZ4zkrji0mCdGfz2RNcgLw0y3ZzgUuv0Lw+xV0/xwanJu4IOFI1X9Ab7NnoGMkqN/upBLJ4lRhjYVTNEv01IX2/r5WZzTn4c38Nfw4Ma3hR0BiLMTFfklFVGg2R64Z7IILoB` |
-> | West Central US | rsa-sha2-512 | `vVHVYoH1kU1IZk+uZnStj3Qv2UCyOR9qVxJfmTc20jQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Q8Tvvnea8hdaqt+SZr4XN1JIeR43nX6vrdhcS6yyfRgaTcEdKbAKQbwj9Fu3kq80c4F+SNzh1KQWlqLu3MJHSaSdQLN9RaHO1Dd+iVK1WgZtsPM9+6U7wupMZq8Hdmao5sqaMT5lj7g+win2J+Wibz7t8YwS7g2Xi+ode8tFPFKduZ5WvKLjI0EiAS4mvcyWEWca142E8fxV9TobUjAICfgtL4vCpmLYKnSL/kUgplD0ow86k/MHp9zghDLVSVDj8MGMra+IJEpgHOUrFNnuyua2WSJVuXR2ITfaecRKrGg7Z4IJzExPoQzDIWdCHptiGLAqvtKT0NE2rPj9U4Rp` |
-> | West Central US | ecdsa-sha2-nistp256 | `rkHjcTK2BvryQAFvjugTHdbpYBGfOdbBUNOuzctzJqM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMjEAUTIttG+f5eocMzRIhRx5GjHH7yYPzh/h9rp9Yb3c9q2Yxw/j35JNWxpGwpkb9W1QG86Hjt4xbB+7q/D8c=` |
-> | West Central US | ecdsa-sha2-nistp384 | `gS9SYvaH6dCqyugArvFb13cwi8q90glNaK+fyfPie+Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0HqM8ubcDBRMwuruX5zqCWSp1DaLcS9cA9ndXbQHzb2gJ5bJkjzxZEeIOM+AHPJB8UUZoD12It4tCRCCOkFnKgruT61hXbn0GSg4zjpTslLRYsbJzJ/q6F2DjlsOnvQQ==` |
-> | Central US | rsa-sha2-256 | `GOPn34T1cCkLHO0xjLwmkEphxKKBQIjIf9QE1OAk3lU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9oA4N2MxbHjcdSrOlJOdIPjTB2LpQUMwJJj+aI2KEALYDGWWJnv0E14XjY1/M35jk8z0hX4MHGE/MEocSsTVdFRdWdW9CKTWT6eICpg9frTj6wfkB/Dxz/BAYb/YXq5OMrFlkFJUG8FMp9N80W6UWichcltmSrCpRi5N3ZGpVXEYhJF+I0mCH7Yheoq2KzIG2iWU/EJT5fui4t51wD8CQ1NWG8/THnNr0gjCr3AtB+ZPAl/6N7i2vO3FlZEHUj6BHoQ4dhIGjGCkgFDNU6RpdifqMJRihP9fSMOq4qksch1TE5sOnp0sOaP/RQvChb4oXB8Pru+j45RxPzIvzzOZZ` |
-> | Central US | rsa-sha2-512 | `VLhZbZjHzoNRMyRSa3GYvk2rgacjmldxQ2YNzvsMpZA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPnuJixcZd6gAIifABQ377Mn0ZootRmdJs1J3R8/u7mbdfmpX2ItI0VfgMh4BzGEdgCqewx4BjADhfXRurfimuP8P9PLRq89AHX2V+IfeizLZkrnrxKiijjGEz640gORzzwIp2X+bmnBABWzEZjSNOeE3CKVr4ONvH80bYGFFqR4+arOelDqWEgxktM1QBlId7xR7efmtEGAuAhFbZVaqjBNsbqyiR/hlkMQfmWn1bjGSoenUoPojc7UAp9+Xf6ujkhCihRV/O4A69tVvp5E0Qv5MJ1Qj3kzAYbHQcIQ2l47MQq1wdZYxkYBHmH5leAjHgQbbccPalOLSbLRYjF169` |
+> | Brazil South | rsa-sha2-256 | `qNzxx1kid41tZGcmbbyZrzlCIPJ9TFa20pUqvRbcjro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC04g5K8emsS4NpL6jCT3wlpi6Msb5ax6QGlefO3IKp3wDKWAEqN+PvqBdrNp1PsitTKeyRSCLofq9k2wzeAMzV2n3UVqmUpNf9Q0Yd8SuXPhKG6VhqG2hL5+ztrlVTMI2Ak18SLaAEA1x7y9Z1lkEYGvCzJQaAw5EG8kd7XHGaI9nSCJ7RFOdJQF/40gq8z6E+bWW9Xs55JpWQ0i44i/ZvQUEiv5nyAa7D86y23wk1pTIFkRT99Kwdua0GtyUlcgCRDDTOzsCTn4qTo/MAF1Uq/ol4G0ZxwKnAEkazSZ1c+zEmh6GJNwT64nWBZ+pt5Rp3ugW+iDc/mIlXtxEV2k7V` |
+> | Brazil South | rsa-sha2-512 | `KAmGT8A7nRdxxQD7gulgmGTJvRhRdWPVDdagGCDmJug=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6W0FiaS21Dze6Sr6CTB8rLBu1T1Zej+11m7Kt283PSkQNYmjDDPUx0wSgylHoElTnFcXG+eFMznnjxDqkH+GnYBmlpW3nxxdTYD/MrdP4dX9ibPCFjDupIDJ4thv+9xWCw/0RpGc1NlUx2YmenDVMFJtYqjB1IDa2UUEeUHeQa1qmiBs1tbBQdlws1MCFnfldliB5H+cO4xnbAUjOlaa01k7GKqPf0H75+R83VcIcFw8hSuCvgMT+86H6jRRfqiIzE7WGbQBTPQs0rGcvxcGR3oGOmtB2UmOD232XTEk+sG3q2RxtPKWTz8wz1Tt2c1BOxmtuXTtzXnigZjB2t8y5` |
+> | Brazil Southeast | ecdsa-sha2-nistp256 | `dhADLmzQOE0mPcctS3wV+x2AUlv1GviguDQgSbGn/qs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYuseeJN3CvtyPSKOz5FSu7PoNul+o6/LB62/MW9CUW+3AmqtVANVox1XQ8eX/YhL0a5+brjmveZPQS6M09YyQ=` |
+> | Brazil Southeast | ecdsa-sha2-nistp384 | `mjFHELtgAJkPTWO4dc7pzVVnJ6WLfAXGvXN1Wq2+NPs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIwFI6bRmgPe0tN7Qwc03PMrlpBn+wBChcuofyWlKVd/Ec6t2dxHr/0ev0dqyKS2nbK7CAOQxSrV1NVYnYZKql/eC2sPqI1oxz7DzUtRnNKrXcH714ViN3RIY3DZA6rJOw==` |
+> | Brazil Southeast | rsa-sha2-256 | `D+S7uHDWy0VPrRg9mlaK70PBPghBRCR1ru/KEsKpcjA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz86hzEBpBBVJqClTRb7dNolds/LShOM4jucPRawZrlKGEpeKv70Khk8BdI4697fORKavxjWK0O9tQpAJHtVGSv3Ajwb9MB7ERKncBVx/xfdiedrJNmF0G+p97XlCiwkAqePT/9IFaMy1OFqwl6LF7p7I0iBKZX0UgePwdiHtJnK0foTfsASNY4AEVcXHEuaulLFJKUjrr6ootblEbPBnC6IxTPj9oD+Nm0rtlCeD5JtCRFgKUj3LWybEog/AnnMNQDQ+vMPbsCnfsW/J/HQc+ebx3NtcumL+PIxqJw2Pk6mRpDdL+vs2nw/PtnPkdJ7DjIQYLypBSi3AFIONSlO15` |
+> | Brazil Southeast | rsa-sha2-512 | `C+p2eAPf5uec0yG+aeoVAaLOAAf0p8gbBNss3xfawPQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDV3WmETlQwzfuYoOsPAqfB9Z2gxsNecbpuwIBYwitLYKmJnT9Q3SNSgqnBiI1TKWyEweerdQaPnEvz9TeynGqSmLyGT0JJXQXFQCjTCgRHP4WD0Q+V7HWHnWYQ5c2e8tKEVA1jWt57dcrFlrGKEsywuMeEX21V13qQxK2acXVRWJPWgQCVwtiNpToc/cILOqL5XXKnSA81Ex7iRqw8QRAGdIozkryisucy+cStdJX6q+YUE5L62ENV8qMwJdwUGywEpKhXRg5VQKN0ciFqvyT/3cZQVF+NkUFGPnOi0bk4JzHxWxmQNTIwE7bmPsuniw5njD3ota/IPUHV2og190Xx` |
+> | Canada Central | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
+> | Canada Central | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORAcpaBXKmSUyCLbAOzghHvH8NKzk0khR0QGHdru0kiFiE16uz9j07aV9AiQQ3PRyRZzsf+dnheD7zuEZAewRiWc54Vg8v8QVi9VUkOHCeSNaYxzaDTcMsKP/A7lR2AOQ==` |
+> | Canada Central | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
+> | Canada Central | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
+> | Canada East | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
+> | Canada East | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
+> | Canada East | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
+> | Canada East | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
+> | Central India | ecdsa-sha2-nistp256 | `zBKGtf770MPVvxgqLl/4pJinGPJDlvh/mM963AwH6rs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjHx8+PF0VBspl6l9Xa3BGyJwSx2eDX0qTDnhrdayzHMWsHGX3vz0wr7oMeBVdQ26dOckExa6iPrEDSt8foV1M=` |
+> | Central India | ecdsa-sha2-nistp384 | `PzKXWvO/DR/KnUElcVWIwSdabp6ZJqce37DJZzNl3Sk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJwEy1f+GYN4rxhlCAkXGgqAU1S7ssI4JPEJs8z1mcs8dDAAVy1cqdsir9yZ9RSZzOz/BOIubZsG137G2+po0Pz0FfJ0jZVGzlx1UHXu7OMuKQ7d2+1TkPpBYFy6PiCa3w==` |
+> | Central India | rsa-sha2-256 | `OcX6wPaofbb+UG/lLYr30CPhntKUXyTW7PUAhC6iDN0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWuKbOJR8ZZqhE3k2HMBWO99rNWoHwUa+PVWPQHyCELyLR19hfrygNL9uugMQKTvPYPWx8VM6PrQBrvioifktc/HMNRsoOxlBifQETfRgBseXcIWorNlslfFhBnSn6ZGn8q4XICGgZ1hWUj9z1PUmcM2LZDjJS33LLzd23uIdLePizAliJAzlPyea8JNpCVjfmwnNwtuxXc48uAUXlmX+e0ZXRwuEGble8c1PbrWWTWU4xhWNJ+MInyvIGv9s6cGN7+fxAFaUAJS0wNEa3poCnhyNxrckvaqiI3WhPQ8Hefy2DpXTY03mdxCz8PZPcLWdQU3H5nmuAc/pypnc0Avax` |
+> | Central India | rsa-sha2-512 | `HSgc5u8s+QILdyBq6wGJkxRcK5nxj81gxvpkR5bcH6k=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDSO/R/yw8q33yLkSHOw0Bi2WKDWQPrll8skh3hdRUB6wtw9dvtQFEV3suvFJsTVvAbnGBe2Fjgi69X0zkIygxg74XuQsx7GZO6gyaKDwljyanFoCzer+OzFSpDcVJ0zOfhY99uHeYT6k4leb2ngABqjiqieDHMZ9JQX12KOK3cAks/oytrNUo9krGb1Nyv5BYu4dWXHmuFgtigDd043khaARfdWkg88lKgb6G9k+vQTGKphLnFMqhada/aP8GsaA2Dq5d/LH5P5CTU7MRPA8TuuyLOtbv8FtQ2TyaAXhYCplCQELtto1yXZ79WVjQE/uKuX8xK5M2rfOH+H5ck/Rxl` |
> | Central US | ecdsa-sha2-nistp256 | `qN1Fm+zcCQ4xEkNTarKiQduCd9S+Aq3vH8BlfCaqL74=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN6KpNy9XBIlV6jsqyRDSxPO2niTAEesFjIScsq8q36bZpKTXOLV4MjML0rOTD4VLm0mPGCwhY5riLZ743fowWA=` | > | Central US | ecdsa-sha2-nistp384 | `9no3/m09BEugyFxhaChilKiwyOaykwghTlV+dWfPT6c=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiEYrlw9pskKzDp/6HsA2g0uMXNxJKrO5n16cHwXS1lVlgYMK3mmQdA+CjzMdJflvVw7sZO2caApr+sxI3rMmGWCt5gNvBaU6E9oUN8kdcNDvsfFavCb3vguOgtgbvHTg==` |
-> | North Europe | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
-> | North Europe | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
-> | North Europe | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
-> | North Europe | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
-> | UAE North | rsa-sha2-256 | `Vazz+KIADh85GQHAylrlI1tTY8/ckoRqTe/kbLXPmd0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDRGQHLLR9ruI0GcNF2u3EpS2CbHdZlqcgSR1bkaOXA9ZufHyxuhIpzG2IgYQ8wrjGzIilYds6UIH7CAw9FApKLNpLR6qdm8qjM0tJiyHLm3KloU27FfjCQjE9JhmsbTWCRH3N52A9HXIdiVCE3BBSoXhg/mF+3cvm1JvabKr1twoyfbUgDFuF7fDyhSxJ/MTig8SpgzWqcd5J+wbzjXG0ob2yWVhwtrcB6k97g25p77EKXo3VhSs0jN7VR+SAHupVwWsUgx4fZzi2I5xTUTBdOXW+e3EiXytPL2N5N/MtFKVY/JVhFkKkcTRgeuOds51tkByteSkc32kakcUxw6CjJ` |
-> | UAE North | rsa-sha2-512 | `NDeTZPUor2OuTdgSjLLhSaqJiTJUdfwTAzpkjNbNQUY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAx9LfiyVmWwGD/rjQeHiHTMWYaE/mMP6rxmfs9/I4wEFkaTBbc4qewxUlrB1jd7Se2a0kljI3lqQJ9h+gjtH/IaVTZOKCOZD8yV9Dh4ZENRqH/TOVz6LCvZifVbjUtxRtbvOuh1lJIVBSBFciNr0HThFMnTEIwcs5V48EFIT6eS9Krggu+cWAX2RbjM0VQnIgkA5BeM33MjSjNz86zhO+e7e1lhflPKL5RTIswtWbwatgkyvfM33pJql/zJz+3/usSpIA/pgWw23c8WziYXiHPTShJXN+N+9iLKf9YUkpzQUZSaRw8XDPyjJNx327Lot0Bh4YLpe37R0SrOvitBsN` |
-> | UAE North | ecdsa-sha2-nistp256 | `vAuGgsr0IQnOLUaWCCOBt+Jg0DV9C6rqHhnoJnwORM8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYpnxgANJNJ4IIvSwvrRtjgbejCpTc3D+l5iob7dBK4KQ7MB40rq+CtdBDGZ1J7d6oCevW6gb1SIxU/PxCuvMI=` |
-> | UAE North | ecdsa-sha2-nistp384 | `A5fa4Pzkdl0H2kVJxlNiEQkOhPzBYkrfQrcviQUUWUA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOz4ENDgFpo0547D5XCRCJLg8brp+iUyId2IdEhZAhuNX9spxlVe6uSkiQbd+8D5hHPVNuLFTFx7v2wXObycM8tr/WGejn/934BvSUhM6lDpU+d5n+ZcxEEhp4gDiy1l+Q==` |
-> | Germany Westcentral | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
-> | Germany Westcentral | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
-> | Germany Westcentral | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
-> | Germany Westcentral | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
-> | Switzerland West | rsa-sha2-256 | `yoVjbjB+U4Cp/ZpMgKKuji9T2pIFtdeXnJudyeNvPs0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFl9NO3CJyKTdYxDIgCjygwIxlT1ppJQm/ykv2zDz6C7mjweiuuwhVM3LRua3WyP5mbgl3qYm+PHlA7UyIMY5jtsg7GaSfhiBSGZAdfgfDgOp3qRkgyep84P69SLb2b0hwgsPVkx8eWLDDVbOEdQLLx7TVndyxtdw+X4bZs6UdEcLMvLUWl7v3SoD5oiuJN6vOJPQl0VBeEaK/uhujjFgnlEu7/31rYEKQ8vQBbx22a4kIyBtUSAGo/VfKGRWF9oXL7Umh2xHAPwNbGwP+DdCKUY27wWG7Qe18O+QS9AOu0yL4+MRIHZg8ODLQsk0Hp3q8Iw2JjohSkk4lcjHYgb69` |
-> | Switzerland West | rsa-sha2-512 | `UgWxFaVY0YYMiNQ82Wt3D1LDg3xta1DfRUUKWjZYllk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6svukqfg7147raZZrA1bZFOO/EDFgi+WRsxoOfH/EEWGmZ89QQ5m855TpsTPZ5ZARQD9kxrYEtqefcSPuWgth4Ze5PNVwRfAwedsSfnYwHZqHRlRM54bOQ6Img7T292ERl4KNJUI7SLyF+kKB7eXqp5nMBrTZ4rSHXoeibv2yZAph0cyf4V/NnfRj6KZSf6YDs0LW1VuovWAC6S7mpBjwtabBmd1gIiJleWhB7Jj48yiyh0m7L9oIoR4NRiuFC535JwqCYhrgFwujuk6iIR9ScRdayEr6gVcv6tBms3MyR16ytA/MHRxYHfPKb1kHUrpFjDQZZZswoDJDnhQGOm8Z` |
-> | Switzerland West | ecdsa-sha2-nistp256 | `5MyZiuHQIMDh/+QEnbr3Zm6/HnsLpYT2GXetsWD6M8Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj5nXHEjkVlLcf9R9fPQw9k2QGyUUP6NrFRj1gbxKzwHsgG2YKWDdOJiyguiro0xV9+JRdW3VC49/psIYUFDPA=` |
-> | Switzerland West | ecdsa-sha2-nistp384 | `nS9RIUnm5ULmNIG+d7qSeIl/kNzuJxAX9/PcwfCxcB0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB/Ps4Wp15xhNenavSHZijwVXdZcvhzVq8IcfHR3+Gz3tKLed36OdHRTdWpvjrg0mENw4L1mEZnHnDx96WMtA+FfagGWXMVMMfcyM4riIedemHsz45KAR2suqcdkNHfdVA==` |
-> | Sweden Central | rsa-sha2-256 | `feu0rEf3KhvHGfhxEjcuFcPtIl+f0ZVzOMXyxy+f8f4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOimUzZHr0DxrjdWEPQqkrBudLW2P2dvPE9DoaXSNbehU13bxzsF6lzO65JBPh9rlNwwyt2yWtrR4XI0Qh/QSXmBntefOeH6BZVrN06aHrsd1dQBr4UFT5chCwy6Keu0ARW3fY8kO9lycTmMIeoiaYahicxyRRC8WLs0cSCH8tO0dA2aoaMxafBWqR6D5dNzu00rIcsCxvyjtN3Y8C4fw3YnNvPB/qWHdZ4aNcu7sQMRhCYVNPqX9UNGeXkbw8gHf9uL9dFu1c+P+VFIEs5bIecgT5HiGvtuXsWRdtEcM1v3mrRnNdmeWWQIqXzLrs5svipMIbnYXekhhLYHIlVo4d` |
-> | Sweden Central | rsa-sha2-512 | `5fx+Ic5p/MMR6TZvjj2yrb4HMHwc1TgM4x1xQw4aD3Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2nRaxWTg4KGLClTZLQ5QgPZPyQ/XYbH4prjhg1uK7m/JKlmJw5LjmIUVKnlXS38qTKpWpJZyGU/eBCa5FPQODvoAXfNncgtIQxd7j00P8aO2tho+uIxSgiTCte8sgrAyx22uIJlORJn2x1cBFBJrlgQDJOKEAs9IakMNdLvlfjJV405gk7pstF4eeIANRWC3eOTrMs0O1gCTt2rnWR5BNQJu8swj9FEWreNQ3PvUliM6Ig6u8b+4d8ryYGuzh5+E8wy/aNxlowkoCI4D/+dBnH43pSYyjhrVx966JMlrJZjDmbgtygkJI+FoEEfBoFlrpIGfisqIX41Np9ZRre4Ux` |
-> | Sweden Central | ecdsa-sha2-nistp256 | `6HikgYBMSL9VguDq9bmwRaVXdOIUKEQUf4hlOjfvv6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErZhZNNmDhMKbSUXLB1VcTmR7pXcXWAqfFpdI81OP1FeCxBtpRNpIeWMyVoP3FeO3yWcODLm/ZkK7BICFpjleo=` |
-> | Sweden Central | ecdsa-sha2-nistp384 | `apRb96GLQ3LZ3E+rt2dyr9imMFDXYbaZERiireEO6ks=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKA5kwsqDKzZWmQCjIFBBjZun3rjg62pv8BOULwvwImaPvMFuR2OipExQZIyKSbR7wS9HA4/QKVA5rLRrSGpYvOBG438/7fwVZy5rOj3GXq6X7Havr1ExRXwsw5rJ56acA==` |
-> | East Asia | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
-> | East Asia | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
+> | Central US | rsa-sha2-256 | `GOPn34T1cCkLHO0xjLwmkEphxKKBQIjIf9QE1OAk3lU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9oA4N2MxbHjcdSrOlJOdIPjTB2LpQUMwJJj+aI2KEALYDGWWJnv0E14XjY1/M35jk8z0hX4MHGE/MEocSsTVdFRdWdW9CKTWT6eICpg9frTj6wfkB/Dxz/BAYb/YXq5OMrFlkFJUG8FMp9N80W6UWichcltmSrCpRi5N3ZGpVXEYhJF+I0mCH7Yheoq2KzIG2iWU/EJT5fui4t51wD8CQ1NWG8/THnNr0gjCr3AtB+ZPAl/6N7i2vO3FlZEHUj6BHoQ4dhIGjGCkgFDNU6RpdifqMJRihP9fSMOq4qksch1TE5sOnp0sOaP/RQvChb4oXB8Pru+j45RxPzIvzzOZZ` |
+> | Central US | rsa-sha2-512 | `VLhZbZjHzoNRMyRSa3GYvk2rgacjmldxQ2YNzvsMpZA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPnuJixcZd6gAIifABQ377Mn0ZootRmdJs1J3R8/u7mbdfmpX2ItI0VfgMh4BzGEdgCqewx4BjADhfXRurfimuP8P9PLRq89AHX2V+IfeizLZkrnrxKiijjGEz640gORzzwIp2X+bmnBABWzEZjSNOeE3CKVr4ONvH80bYGFFqR4+arOelDqWEgxktM1QBlId7xR7efmtEGAuAhFbZVaqjBNsbqyiR/hlkMQfmWn1bjGSoenUoPojc7UAp9+Xf6ujkhCihRV/O4A69tVvp5E0Qv5MJ1Qj3kzAYbHQcIQ2l47MQq1wdZYxkYBHmH5leAjHgQbbccPalOLSbLRYjF169` |
> | East Asia | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
> | East Asia | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
-> | South Africa North | rsa-sha2-256 | `qU1qry+E/fBbRtDoO+CdKiLxxKNfGaI9gAplekDpYvk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2UBC1KeTx8/tQIxVEBUypcu/5n3B/g0zqE7tFmPYMFYngrXqEysIzgAdpiu2+ZX/vY8AF/0UkhYec/X/rwKQL8CCVwYqa2hufbSrX/qSuUHZd/95LFB2Nh+hJ23fn3EK8Gpgo/Xkmx9YVZoaQPGPsWVWVKjU6aVpM54cd6iuDT3y9SAnqbUMqgwwz3mK7bQGFPrbUVOUwVIcYKZD9HMNZhpo8HpjllKYIt1AFy4db8lSrLyuX8Nn/U7XAlPUndUCpKsAfWw8SemyuxSHziFDHF5xo8eLU+QYxdtzirgDAgEYWv9aa0TSx5Q2Mq8XJ7POffQxKj44ocHzmMGq/wPS1` |
-> | South Africa North | rsa-sha2-512 | `1/ogzd+xjh3itFg3IpAYA2pwj1o3DprEabjObSpY/DY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDLAkEygbVyp189UwvaslGRgaqcGWXaYJVq+gUB0906xkkjGoJeqSgTW5C/77vOk0zBCZM3yBgtDFZL1d6lze1QJZ6kGGPynJa5SeyydAds9G745yaFFuE53zJUyMy+y5I1ytfx003PKvk8+fHZK3rPYYr+LKm2u+9BmnuDB/0t561oFg1ZiMCPgNnDdUwkya2EtsJAifkUaBlYmzBZAFbIYyGfb898utZHyI+ix2TrMS/RHEDIchG8qSBMpOPmcpa29ADVsmAQDd5ds5D7WjirfMXwBxgJTMyuy+N9rJRgHoqDnt/GsgI2GtoPM7YSET8uYug941hAvFm5TI/dW3YR` |
-> | South Africa North | ecdsa-sha2-nistp256 | `e6v7pRdZE0i1U2/VePcQLguy7d+bHXdQf3RZ4jhae+g=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEQemJxERZKre+A+MAs0T0R7++E6XanZ7uiKXZEFCyDgqjVcjk8Xvtrpk5pqo4+tMWM7DbtE0sgm1XmKhDSWFs=` |
-> | South Africa North | ecdsa-sha2-nistp384 | `NmxPlXzK2GpozWY374nvAFnYUBwJ2cCs9v/VEnk0N6Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKgEuS9xgExVxicW0HMK4RLO5ZC6S0ZyENe5XVVJY0WKZ5IfIXEhVTkYXMnbtrYIdfrTdDuHstoWY9uu4bS8PtFDheNn3MyNfObqpoBPAh1qJdwfJgzo5e7pEoxVORUMnw==` |
-> | UK South | rsa-sha2-256 | `3nrDdWUOwG0XgfrFgW27xhWSizttjabHXTRX8AOHmGw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdLm+9OROp5zrc6nLKBJWNrTnUeCeo8n1v9Y3qWicwYMqmRs/sS9t5V3ABWnus4TxH3bqgnQW3OqWLgOHse/3S+K1wGERmBbEdKOl7A7kQ9QgDkWEZoftwJ9hp+AMVTfCYhcOOsG+gW021difNx+WW2O5TldL31pk+UvdhnQKRHLX31cqx5vuUmiwq4mlbBx+rY8B/xngP2bzx/oYXdy1I9fZbWWAQ6FwJBav1sSWL0l7snRdOsy5ASeMnYollEw1IATwYeUv8g3PzrryZuru+7gu/Ku9w8d5jbFyI6Up4KLwjs/gZNuqQ5dif7utiQYbVe4L0TPWOmuLA25JJRZaF` |
-> | UK South | rsa-sha2-512 | `Csnl8SFblkdpVVsJC1jNVSyc2eDWdCBVQj9t6J3KHvw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIwNEfrP6Httmm5GoxwprQ57AyD6b3EOVe5pTGQWIOzxnrIw2KnDPL07KNa33xZOmtXro5PYyhr5eNXUkFiQMEe+RblilZSNAvc4MHbp2TVD0L9N7Pdy2SetoF4m5BCXdC48kZntqgkpzXoDbFiaAVln5zQCHB5fOuBPS1id8+k3zqG0o+K0MHb6qcbYV8gdQeOn/PlJzKE4M0Ie8na3aWHdGvfJjDdK/hNN0J+eUK8qIb9KCJkSMDj/l3rnue9L8XgeKKA2Pkvh3nch4VBXCcCsDVhgSf+aoiJ0Fy8GVOTk2s7QDMzD9y37D9V2OPl66q4pjFGOfK0mJmrgqxWNy5` |
-> | UK South | ecdsa-sha2-nistp256 | `weMVzOmQnlMdMp5XBoU9SdN5meBbx/8nvA8dB45w8Ck=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEnBllEm4/HsTP+ZMhlc8YnSAYWF23tibZDqGxf0yBRTU/ncuaavuQdIJ5TcJb0NcXG7skEmq3StwHT0FPMWN8Y=` |
-> | UK South | ecdsa-sha2-nistp384 | `HpsZ8zoOCCsUbpD3nAOtxpuKIvn0L8KGyg1KMLuMUqU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd/672brwX1kOhH31ZTdBRj+bcEmemcdmTEe0J88cJ3RRQy7nDFs25UrnR+h3P0ov9Uq24EJQS8auxRgNCUJ3i3ZH9QjcwX/MDRFPnUrNosH8NkcPmJ/pezVeMJLqs3Qw==` |
-> | Australia Southeast | rsa-sha2-256 | `YafIMxux7NtoKCrjQ2eDxaoRKHBzpanatwsYbRhrDKQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7omLu37G00gLoGvrPOJXpRcI5GTszUSldKjrARq0WeJIXEdekaSTz5qv2kSN/JaBDJiO9E4AJFI9q5AvchdmMVs4I59EIJ0bsR9lK+9eRP4071EEc8pb3u/EPFaZQ8plKkvINJrdK6p0R2FhlFxa7wrRlKybenF1v7aU3Vw79RQYYXaZifiNrIQFB8XQy3QQj2DcWoEEOjbOgZir9XzPBvmeR8LLEWPTLYangYd3TsQtybDpP6acpOKaGYDEyXiA8Lxv8O276LUBny6katPrNvfGZScxn6vbTEZyog+By8vyXMWKEbC1Qc/ecBBk5obFzbUnq3RP1VXLJspo99cex` |
-> | Australia Southeast | rsa-sha2-512 | `FpFCo9sNUkdnD1kOSkWDIfnasYhExvRr1pJlE631QrA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmuW2VZAhR6IoIOr32WnLlsr/rt3y4bPFpFcNhXaLifCenkflj9BufX3lk5aEXadcemfKlUJJdwBTvTt1j4+X3P2ecCZX1/GSsRKSTuiivuOgkPxk3UlfggkgN9flE9EdUxHi/jN/OQ9CjGtHxxk72NJSMNAjvIe0Ixs7TfqqyEytYAcirYcSGcc0r70juiiWozflXlt+bS7mXvkxpqMjjIivX+wTAizzzJRaC6WcRbjQAkL2GP6UCFfBI1o9NBfXbz+qvs1KTmNA0ugRQ7g6MdiNOePHrvoF1JgTlCxEjy+/IqPiC8nNQUVCW6/gcATQoDQn0n9Lwm1ekycS35xEh` |
-> | Australia Southeast | ecdsa-sha2-nistp256 | `4xc49pnNg4t/tr91pdtbZLDkqzQVCguwyUc16ACuYTc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdswzJ+/Bw5ia/wRMaa0llZOjlz67MyZXkq7Ye38XMSHbS4k/GwM0AzdX+qFEwR00lxZCmpHH28SS+RyCdIzO0=` |
-> | Australia Southeast | ecdsa-sha2-nistp384 | `DEyjMyyAYkegwLtMBROR/8STr1kNoQzEV+EZbAMhb1s=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJRZx6caZTXnbTW/zRzKfoKC4LGzvD5fnr2p8yGWxDq27CjUEMxkctWcT6XS3AstF2MLMTOFp/UkkSr8eP0vXI8g99YDeFGXtoBiFSIgYF2Aiu/kpfEu3feiIUl3SVzxKw==` |
-> | France South | rsa-sha2-256 | `aywTR4RYJBQrwWsiALXc1lDDHpJ34jIEnq3DQhYny0g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDELY4UcRAMkJpEBZT40Oh5TIxI6o6Enmlv+KxWkkcyFcNJlFtaF2Hl+afWlysrg+lB5Un4XpveWY64pl7a/dSju7aPfujcXowELIPqFSoWW7xQ+jkfJdyI0daa0l2h2oNCPqWnx8+04Vx5kcb2GktlNG4RMLx7Q6COJgQ3pGHtyfZ5fnmrWNBsuv4mvsXp0u1KGWX6s2LZtO+BpKE6DegSNLMVapAZ0ju8pagqtm6aeWEtqmkAvsI0U31qhL25FQX4DzjIbGzXd6I25AJcSXcpnwQefsaOwO/ztvIKeIf3i/h2rXdigXV1wyhvIdKm1uWwj6ph4XvOiHMZhsRUe02B` |
-> | France South | rsa-sha2-512 | `+y5oZsLMVG6kfdlHltp475WoKuqhFbTZnvY0KvLyOpA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmsS9WimMMG95CMXFZiStR/peQU1VA6dklMbGmYwLqpxLNxxsaQuQi6NpyU6/TS8C3CX0832v1uutW38IfQGrQfcTGdAz+GjKverzaSXqZGgTMh/JSj06rxreSKvRjYae596aPdxX5P+9YVuTEeTMSdzeklpxaElPfOoZ7Ba5A2iCnB/5l/piHiN8qlXBPmfGLdZrTUFtgRkE4Ie4zaoWo19611XgUDMDX4N4be/qilb95cUBE73ceXwdVKJ3QVQinZgbwWFUq0fMlyd8ZNb9XN6bwXH7K6cLS6HYGgG6uJhkYSAqpAZK2pOFn3MCh8gw2BkM/Rg+1ahqPNAzGPVz9` |
+> | East Asia | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
+> | East Asia | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
+> | East US | ecdsa-sha2-nistp256 | `ixDeCdmQOB9ROqdJiVdXyFVsRqLmJJUb2M4shrWj8gI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNrdcVT12fftnDcjYL8K3zLX3bdMwYLjNu2ZJ1kUZwpVHSjNc+1KWB2CGHca+cMLuPSao4TxjIX0drn9/o+GHMQ=` |
+> | East US | ecdsa-sha2-nistp384 | `DPTC6EIORrsxzpGt6IZzAN67nlZUXzg5ANQ3QGz987Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEP3CUvPVWNVnFuojR43KRxTQt1xiClbgDzqN/s9F5luivP+Gh0QrK5UHf6diEju4ZQ9k2O10MEDs6c46g4fT56rY8CQkeBsaaBq8WYLRhSQsFZ6SZuw14oFNodniAO33g==` |
+> | East US | rsa-sha2-256 | `F6pNN5Py68/1hVRGEoCwpY5H7vWhXZM/4L442lY4ydE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAiUB94zwLf0e/++OeiAjE0X7Od2nuqyLyAqpOb7nfQUAOWyqgRL04yaan6R2Ir2YtI0FRwA6yRETUBf2+NuVhIONgLNsgPw3RakL1BUqAEzZAyF4sOjWnYE5/s/1KmYOE052SefzMciqjgkBV2+YrPW1CLivNhL4d1vuQh05kADLgHJiAVD6BqSM7Z6VoLhW+hfP4JklyQAojCF6ejXW7ZGWdqQGKLCUhdaOPSRAxjOmr9gZxJ69OvdJT2Cy6KO1YQt2gY2GbPs+4uAeNrz40swffjut4zn1NILImpHi8PTM+wcGYzbW4Nn7t5lhvT9kmX9BkSYXLVTlI9p1neT9t` |
+> | East US | rsa-sha2-512 | `MIpoRIiCtEKI23MN+S2bLqm5GKClzgmRpMnh90DaHx8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Ut7Rq7Vak26F29czig5auq334N9mNaJmdtWoT32uDzWjZNw/N8uxXQS51oSeD7c0oXWIMBklH0AS8JR1xvMUGVnv5aRXwubicQ6z4poG5RSudYDA3BjMs61LZUKZH/DRj7qR/KUBMNieT1X+0DbopZkO9etxXdKx+VqJaK3fRC5Zflxj5Z9Stfx/XlaBXptDdqnInHZAUbZxnNziPYrBOuXYl5/Cd6W4lR7dBsMCbjINSIShvrhPpVfd3qOv/xPpU172nqkOx2VsV4mrfqqg62ZdcenLJDYsiXd/AVNUAL+dvzmj1/3/yVtFwadA2l83Em6CgGpqUmvK6brY3bPh` |
+> | East US 2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
+> | East US 2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
+> | East US 2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
+> | East US 2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
+> | East US 2 EUAP | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
+> | East US 2 EUAP | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
+> | East US 2 EUAP | rsa-sha2-256 | `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
+> | East US 2 EUAP | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
+> | France Central | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
+> | France Central | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
+> | France Central | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
+> | France Central | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
> | France South | ecdsa-sha2-nistp256 | `LHWlPtDIQAHBlMkOagvMJUO0Wr41aGgM+B/zjsDXCl0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdj2SxQdvNbizz8DPcRSZHLyw5dOtQbzNgjedSmFwOqiRuZ2Vzu88m2v5achBwIj9gp0Ga14T7YMGyAm04OOA0=` |
> | France South | ecdsa-sha2-nistp384 | `btqtCD/hJWVahHWz/qftHV3B+ezJPY1I3JEI/WpgOuQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB2rbgGSTtFMciVSpWMvmGGTu8p1vGYfS2nlm+5pAM85A4Em1mYlgHfVZx+SdG5FSYcsX4vTWt4Yw2OnDmxV3W0ycrKBs4Bx3ASX4rx3oZezVafHsUUV0ErM+LmdmKfH8g==` |
-> | West US 2 | rsa-sha2-256 | `ktnBebdoHk7eDo2tvJuv36XnMtfauhvF/r0uSo6DBfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDoskHzExtM+YSXGK6cBgmnLlXsZLWXkEexPKC7wHdt0kSqkIk9F31wD+2LefZzaTGfAmY5/EWrOsyBJvIgOoksH+ZPMnE9+TOWqy6vsS+Ml/ITvUkWajS1bKDPDSoIrCM1rQ9PlbgMQFg4o0FfyxLVCP7hcgvHO+aycOxkiDqtvwANvIn2Qwt7xwpIv1Mnc4OpcBSxbigb7ISlrvR9XWivE/piWjXS3IEYkGv7VitAlmWEoTt9L7K94bYol2nCXSXJ33X6xVVwVNpdxVtnUQBIRinN+vkOccgG0jvWtWPtrMjyDg/lyvr6lBdO/CQy4VO4VrIBuL6pjsS8KfIfTxKd` |
-> | West US 2 | rsa-sha2-512 | `i8v3Xxh/phaa5EyZEr5NM4nTSC/Rz7Nz0KJLdvAL0Ls=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOo5f0ACypThvoDEokPfzGJUxbkyMoQKca9AgEb3YkQ/lsYCfLtfGxMr2FTOGQyx5wfhOrN0B2SpI4DBgF3B0YSLK0omZRY7fpVPspWWHrsbTOJm/Fn7bWCM+p63xurZ6RUPCA6J1gXd3xbdW7WQXLGBJZ6fjG7PbqphIOfFtwcs/JvjhjhvleHrXOtfGw9b4Jr8W1ldtgKslGCU1mnUhOWWXUi+AhwGFTI0G/AShlpX8ywulk2R+fxet3SNGNQmjydnNkcsrBI/EMytO1kwoJB3KmLHEeijaQzK7iJxRDZEHlHWos6G7jwaGPI4rV5/S1N+xnG+OhCDYAUbunp5R` |
-> | West US 2 | ecdsa-sha2-nistp256 | `rt5kaA0neIFDIWTP2MjWk9cOSapzEyafirEgPGt+9HM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKEKP+1QZf3GfEvkNZtzoKr05iAwGq+yPhUsVdyA7uKnwvTwZAi7NBr4hMkGIIdgQlGrMNNXKS0V+rhMNI1sH48=` |
-> | West US 2 | ecdsa-sha2-nistp384 | `g0vDKd4G5MKnxWewbnCYahCv1lZgfnZeEXfPAhv+trs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1+/Qu9Y1BqqV3seN0+0ITYCueuv0TFAnfG9z1Io8VYjmxLvdmaDvTi9kJ0ShjFJRKjbCfYKNekqYZDW4gBfmE9EyvMPI6VXPVLNY3TQ/z+Y7qO/oa28cSirW9fhp7vbA==` |
-> | South India | rsa-sha2-256 | `5gFLJvQvQodZxKBi3DnGywpf9dliWguiMTqcgkTmtu8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDlxVnaYnmg1cK+g/PI1jB1fgQQJiX39ZmfBss3mSW3kUxP3KWhm7lHBTkrbnfhVHnGpP6GcGFy09YBQa6UiyVpD8p8APtx0j9Jp8m3yhhgqOIjup0C7crl49NqMVryOZmCLOvA7KTyTxxV37GpRI+ffqQ8LOO+anWVWVaJlVCYBMct/OVhA7ePXblcbJg5eu5JjUiWW+cPdVqAqWojNHZzzprCFEBTCvYaZtzBx4kFGiipPmJSN6yvBPEfnA7Lzr/T9iXV/XkmI1txuJRBasoQMt+4jCZG25sCCN8y4iuUJCioUELr//TWaDyTsQAR4MbRW+L/GSIM9VUY4Uc+Impp` |
-> | South India | rsa-sha2-512 | `T4mrHCEHbFNAQSng//m0Viu/hXfi11JMnyA0PqAuTtg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz9tQa7D4dyrULCLH75yKwH27AQMRNWFUqgUQQXYHR1MYegLf7JEmFn126bxgEHPRO0bNwBM9S626Gcr1R1uDI/luL6uvG0Q57k+Pmv7HNQtv12J3fAuxuhSPcppE5IE5QR94Qgd1RzGXv954TK1Z+kCXHyLA583XTQ4btOEwqUo/16tSCqaoTSdyNp17q8BrOCPaTWMqT774lSGELIDc6RaGKHRu/Qa+F5FRMswdZt5YJDEKtlKdvbyIiSfIP2GZGhWBiSW2D6xpnzSjstR3LfRfFek/ryGkDPu5c5HNaVJwc1fatP6ggAbhVCcyDgWiCCpEBICV2wnPpGfDUdbRh` |
-> | South India | ecdsa-sha2-nistp256 | `7PQhzR5S6sEFYkn2s3GxK6k8bwHgAy0000zb07YvI44=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgZw/ouE23XQnzO8bBPSCJp/KR+N/xfuJS5QtWU/PzlNLmSYS20b65GRP6ThwZdaigMhwHOEc8twpJ7aA7LBu0=` |
-> | South India | ecdsa-sha2-nistp384 | `sXR2nhTTNof58ne5K+Xjm9Pu8miEbKJn4Bo9NYoqQs4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwbzUI8q9f5YTLIs6ddRTPlHdb35xrbsJeOQII/nEXhlNjzpdL9XnDJjQunQL2vg6XND1pfp3TNBJ9LF3mud442LbpwSt9B7EZD8tQ5u0+2NeNjn8JnCu6/tdvS+xoNiA==` |
-> | Japan West | rsa-sha2-256 | `DRVsSje7BbYlVJCfXqLzIzncbVU4/ETFGzdxFwocl8E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDl/rlTgQpomq4FmJKSR2fjgAklV818RcjR/e/C1VUJVpbntJoWUlBhKYDFPTVQaHXDTK5HyJU5APsdy6CJo8ia32qc2E/573LDNk4dgFFrh+KFRiD+ULt3IH15i1DieVw61MAVOvzh+DmTJHPLaTufXoQ62YACm3yC1st1kXv4bawfXs0ssmeqrBcCOQvMvW/DexnnGXO6QXYTcjUktNrO2h2dd355n5FP4fcsBEdGmfT79HYPM6ZoqkItRZEO6Nel65KxtenAwQub8SK3iJgFyJwd3zIH4OCHp3z4tcGXw5yNAX15dJMSnls0zvzhx0f4ThwfgB4t1g9jVb47Ig7B` |
-> | Japan West | rsa-sha2-512 | `yLl9t2jlkrTVWAxsZ59Wbpq+ZCnwHfdMW8foUmMvGwI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9zrpnjY7c0dHpE1BMv+sUp+hmvkBl3zPW/uCInYM5SgtViSQqn/DowySeq+2qMEnOpHGZ8DnEjq55PmHEumkYGWUUAs38xVGdvRZk6yU7TxGU42GBz0fT/sdungPHLQ2WvqTZYOFqBeulRaWsSBgovrOnQEa2bNTejK9m353/dmAtKHfu68zVT+XYADrT3PY5KZ1tpKJA0ZO9/ScUvXEAYs20WSYRZBcNDoSC9xz4K8hv9/6w3O3k0LyBKMFM5ZW8WVDfpZx1X0GBCypqS+RNZuVvx81h3nxVAZSx80CygYcV4UHml7wtnWDYEIBSyVRsJWVNGBlQrQ4voNdoTrk5` |
+> | France South | rsa-sha2-256 | `aywTR4RYJBQrwWsiALXc1lDDHpJ34jIEnq3DQhYny0g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDELY4UcRAMkJpEBZT40Oh5TIxI6o6Enmlv+KxWkkcyFcNJlFtaF2Hl+afWlysrg+lB5Un4XpveWY64pl7a/dSju7aPfujcXowELIPqFSoWW7xQ+jkfJdyI0daa0l2h2oNCPqWnx8+04Vx5kcb2GktlNG4RMLx7Q6COJgQ3pGHtyfZ5fnmrWNBsuv4mvsXp0u1KGWX6s2LZtO+BpKE6DegSNLMVapAZ0ju8pagqtm6aeWEtqmkAvsI0U31qhL25FQX4DzjIbGzXd6I25AJcSXcpnwQefsaOwO/ztvIKeIf3i/h2rXdigXV1wyhvIdKm1uWwj6ph4XvOiHMZhsRUe02B` |
+> | France South | rsa-sha2-512 | `+y5oZsLMVG6kfdlHltp475WoKuqhFbTZnvY0KvLyOpA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmsS9WimMMG95CMXFZiStR/peQU1VA6dklMbGmYwLqpxLNxxsaQuQi6NpyU6/TS8C3CX0832v1uutW38IfQGrQfcTGdAz+GjKverzaSXqZGgTMh/JSj06rxreSKvRjYae596aPdxX5P+9YVuTEeTMSdzeklpxaElPfOoZ7Ba5A2iCnB/5l/piHiN8qlXBPmfGLdZrTUFtgRkE4Ie4zaoWo19611XgUDMDX4N4be/qilb95cUBE73ceXwdVKJ3QVQinZgbwWFUq0fMlyd8ZNb9XN6bwXH7K6cLS6HYGgG6uJhkYSAqpAZK2pOFn3MCh8gw2BkM/Rg+1ahqPNAzGPVz9` |
+> | Germany North | ecdsa-sha2-nistp256 | `F4o8Z9llB5SRp0faYFwKMQtNw/+JYFKZdNoIuO7XUU0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMoIo/OXMP7W5a5QRDAVBo+9YQg4YBrl3J7xM91PUGUiALDE1Iw8Uq4e1gLiSNA6A46om5yY/6oGj4iqEk8Ar8Y=` |
+> | Germany North | ecdsa-sha2-nistp384 | `BgW5e9lciYG1oIxolnVUcpdh3JpN/eQxfOyeyuZ6ZjI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ69kH0urhCdcMMaqpID2m+u8MECowtNlYjYXoSUn6oEhj7VPxvCRZi5R02vHrtrTJslsrbpgYHXz+/jSLplKpccQGJFaZso9WWgEJH1k7tJOuOv0NIjoBTv7fY5IxeAvQ==` |
+> | Germany North | rsa-sha2-256 | `ppHnlruDLR73KzW/m7yc3dHQ0JvxzuC1QKJWHPom9KU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNNjCcDxjL3ess1QQEkb9n5bPYpxXpekd32ZX4oTcXXFDOu+tz/jpA8JZL8lOBAcBQ5n+mZF0Pot1o+B1JxQOHHiEZdcdKtLtPWrI2OQyxZnvo7sCBeSk+r/j3mjqpvq3+KpwoTZKpYF/oNRXVHU4VFs+MzvqWd6vgLXsDwtJrriojtkrWy0bTa4NjjN/+olsITxDmR0TGAu+epCJptdpKjTcgcn25QuIKy37/zVW8BJ5QsZmIRwvlCYxj11UOAoDcbapJcnzJYpOmQTNpdzkazjViX17DZW17Jmfhc6Dk3H+TEseilkbq1ZjsUyGBBxklWHid7+BgKVXOoIgG6+0x` |
+> | Germany North | rsa-sha2-512 | `m/OFTRHkc3HxfhCKk1+jY1rPJrT9t4FYtQ/Wmo3MOUE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDkN3CN1VITaHy/CduQaZIkuKSC/+oX19sYntRdgCblJlIzUBmiGlhmnKXyhA29lwWWAYxSbUu0jEJUZfQ6xdQ4uALOb815DLNZtVrxqSm4SjvP5anNa7zRyCFfo4V8M4i6ji6NB+u+PwH5DOhxKLu6/Ml9pF8hWyfLRft8cg4wORLLhwGt2+agizq7N7vF2nmLBojmS0MMmpH5ON/NFshYIDNKPEeK9ehpaARf4fuXm440Zqzy/FfpptSspJIhbY2zsg4qGQgYGZyuRxkLzYgtD/uKW5ieFwXPn+tvVeVzezZTmGMoDlkPX18HSsuNaRkdnwpX8yk1/uoBCsuOFSph` |
+> | Germany Westcentral | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
+> | Germany Westcentral | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
+> | Germany Westcentral | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
+> | Germany Westcentral | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
+> | Japan East | ecdsa-sha2-nistp256 | `IFt/j4bH2Jc0UvhUUADfcy3TvesQO+vhVdY4KPBeZY8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVq+uiJXmIlYS367Ir9AFq/mL3iliLgUNIWqdLSh7XV+R8UJUz1jpcT1F6sJlCdGovM3R5xW/PrTQOr3DmikyI=` |
+> | Japan East | ecdsa-sha2-nistp384 | `9XLsxg1xqDtoZOsvWZ/m74I8HwdOw9dx7rqbYGZokqA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFh7i1cfUoXeyAgXs+LxFGo7NwrO2vjDwCmONLuPMnwPT+Ujt7xelTlAW72G3aPeG2eoLgr6zkE48VguyhzSSQKy7fSpLkJCKt9s0DZg2w0+Bqs44XuB43ao6ZnxbMelJQ==` |
+> | Japan East | rsa-sha2-256 | `P3w0fZQMpmRcFBtnIQH2R88eWc+fYudlPy7fT5NaQbY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCZucqkz4UicI20DdIyMMeuFs+xUxMytNp7QaqufmA2SgUOoM387jesl27rwvadT6PlJmzFIBCSnFzjWe5xYy3GE59hv4Q3Fp3HMr5twlvAdYc5Ns5BEBEKiU0m88VPIXgsXfoWbF0wzhChx8duxHgG4Cm+F8SOsEw/yvl+Z/d42U9YzliQ1AafNj4siFVcAkoytxKZZgIqIL4VUI322uc93K5OBi9lgBqciFnvLMiVjxTWS/wXtVEjORFqbuTAu/gM4FuKHqKzD1o39hvBenyZF2BjIAfkiE6iYqROd75KaVfZlBSOOIIgrkdhvyj9IfaZFYs3HkLc7XgawYe6JVPR` |
+> | Japan East | rsa-sha2-512 | `4adNtgbPGYD+r/yLQZfuSpkirI9zD5ase01a+G7ppDw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCjHai98wsFv0iy+RPFPxcSv8fvTs3hN/YnuPxesS21tUtf0j5t8BTZiicFg6MLOQJxT4jv5AfwEwlfTqvSj3db6lZaUf/7qs/X9aN1gSoQNnUvALgnQDYGjNYO8frhR7S0/D/WggQo2YKMAeNLRScT7Pg/MJaOI12UhoUloCXbTAP1c85hYx0TGKlGWpFjfen/2fwYEKR1vuqaQxj+amRatnG+k18KWsqvHKze8I2D19cn5fp2VkqXzh6zQ1s5AMc5B9qIF48NIec9FAemb9pXzOoYBDFna0qNT4dfeWOQK6tM/Ll10jafaw2P32dGBF8MQKXB2sxtcC0nU4EEtS5d` |
> | Japan West | ecdsa-sha2-nistp256 | `VYWgC6A4ol1B7MolIeKuF2zhhgdzQAoGBj5WgnQj9XE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLIuhTo1et/bNvYUj+sanWnQEqiqs9ArKANIWG19F9Db6HVMtX8Y4i7qX6eFxXhZL17YB2cEpReNSEjs+DcEw4=` |
> | Japan West | ecdsa-sha2-nistp384 | `+gvZrOQRq3lVOUqDqgsSawKvj6v/IWraGInqvqOmC6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3ZiyS1p7F1xdf6sJ3ebarzA5DbQl1HazzLUJCqnrA84U8yliLvPolUQJw4aYORIb5pMgijsN3v9l0spRYwjGHxbJZY/V6tmcaGbNPekJWzgXA1DY35EbFYJTkxh/Yezw==` |
-> | Norway East | rsa-sha2-256 | `vmcel/XXoNut7YsRo79fP5WAKYhTQUOrUcwnbasj/fQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Y1b2Bomv8tc/JwPgW0jR5YQhF031XOk4G0l3FOdZWY31L8fLTW6rOaJdizOnWCvMwYQK39tyHe6deN9TZESobh0kVVuCWaZNI6NUR0PSHi0OfbUkuV0gm/nwtwJkH5G9QbtiJ5miNb4Ys3+467/7JkqFZmqN6vBLhL9RVInO00LPYkUGtGfTv+/hmsPDGzSAujNDCFybti4c+wMgkrIH6/uqenGfA1zW3AjBYN2bBBDZopzysLHNJYQi3nQHQSiD4Mdl7IGZtJQeC/tH9CKH5R4U4jdPN1TmvNMuaBR/Etw4+v0vrDALG1aTmWJ7kJiBXEZKoWq/vWRfLzhxd4oB` |
-> | Norway East | rsa-sha2-512 | `JZPRhXmx44tEnXp+wPvexDys1tSYq9EDkolj9k6nRCM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC11j19LeEqRzJOs8sWeNarue+bknE3vvkvSnsewApVMQH35t9kpqRGMSr6RTU2QCYDiQTCKI2vzLSTLGoizoPBiY/7lvdylDRCbeEpuFUkgvKZrapkJ6JqKOySPpFNhqCs27rdY5dJ2C7/nmTL/kvcyhXFXZT2lJaOIdRSKv/1Q3DAWQ9icNGbDokQDubF5etlkquqTV6r/ioFuh7hdKE+fJooyHa2oYTD+j5cNDKBxrJWBEidOe2HwplR4lYPggUcVtGu9aoSVIMmswztFF6+MNIdOT1kdvHewKLjkVB1hbIHl/E+uexsyMGcCg5fPy7dDIipFi1aED+6R7CnAynJ` |
-> | Norway East | ecdsa-sha2-nistp256 | `mE43kdFMTV2ioIOQxwwHD7z+VvI4lvLCYW8ZRDtWCxI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWP6vJCbOhnvdmr7gPe8awR/E+Bx+c8fhjeFLRwp6/0xvhcywT9a1AFp7FdAhkVahNKuNMU1dZ0WTbOEhEGvdg=` |
-> | Norway East | ecdsa-sha2-nistp384 | `cKF2asIQufOuV0C/wau4exb9ioVTrGUJjJDWfj+fcxg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGb8w8jVrPU1n68/hz9lblILow6YA9SPOYh5r9ClAW0VdaVvCIR/9cvQCHljOMJQbWwfQOcBXUQkO5yI4kgAN3oCTwLpFYcCNEK6RVug9Q5ULQh1MRcGCy3IcUcmvnYdg==` |
-> | France Central | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
-> | France Central | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
-> | France Central | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
-> | France Central | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
-> | West US 3 | rsa-sha2-256 | `pOKzaf3mrTJhfdR/9dbodyNza30TpQrYRFwKAndeaMo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC0KEDBaFSLsI28jdc854Rq6AL9Ku8g8L+OWQfWvb1ooBChMMd/oqVvFF9hkLzJ8nFPQw7+esVKys5uFwRTpBNuobF/RVtY0zLsNd+jkPxoUhs7Yl0hI2XXAPdp3uCsID56O+OrB7XbOsPCrJ2aXfiaRheRQg84/92c357uQ/epsva8XCMjIIGOAyEL6d4mnCNJ2Y0mXPJT1lfswoC8i2GSUKdJZhTLCe9zVDvTCTWuZJSH3A8nM3RVtnNgMXfNjh2blwW9YFv5BrMOXA205fahuDcPjwvXo9OMfEneDsrODmiEGYzbYLby/5/KPzz5OVn7BDJma6HL0z07i3PmEzXN` |
-> | West US 3 | rsa-sha2-512 | `KKcoWCeuJeepexnJCxoFqKJM88XrpsPKavXOoNFEGuY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNzhiVgDjCIarGEjKgmSxRh4vWjV6PxFbNK3cD0M4jWGlxPx/otJNEXCMee0hW29b7bwo2+aiyv3AEt7JYTeM/G9SHmenU6MTpqD/lC/LABtqTB7EV9FIFkc8MbbOvEkdTnRJw1d09MTqqwbkR9wq297AWggSzCuPDqMq+268UzsthMzODRVqW3yTr3M6vhlBCPfN5ptcvYwqRaa7Yhe4bdRZ+xYB5I2+ZMkalfn7SQiySSgAGjUJxrxK+LnJKSi32CfqTU8KjWNjCc40eAqexLFjg6AN9BtC0+ZYcD2KQmeqJ8oRCWw9r4CsaduSmcjc7XD75RKGdArjYzjeiVSlt` |
-> | West US 3 | ecdsa-sha2-nistp256 | `j4NlZP/wOXKnM3MkNEcTksqwBdF0Z46+rdi2Ic1Oj54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETvvRvehAQ2Ol0FfTt649/4Xsd0DQQ7vyZ666B92wRhvyziGIrOhy8klXHcijmRYRz3EjTHyXHZ4W8kcSKB4Lo=` |
-> | West US 3 | ecdsa-sha2-nistp384 | `DkJet/6Pm6EXfpz2Ut6aahJ94OvjG3R7+dlK0H4O1ts=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEu+HpgDp0a02miiJjD5qVcMcjWiZg5iIExECqD/KQVkfyraJ3WZ8P28JwB+IYlEGa2SHQxScDjG2t3iOSuU9BtpA0KK5PGtu3ZxhN1UmZbQgz6ANov7/+WHChg7/lhK0Q==` |
-> | Central India | rsa-sha2-256 | `OcX6wPaofbb+UG/lLYr30CPhntKUXyTW7PUAhC6iDN0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWuKbOJR8ZZqhE3k2HMBWO99rNWoHwUa+PVWPQHyCELyLR19hfrygNL9uugMQKTvPYPWx8VM6PrQBrvioifktc/HMNRsoOxlBifQETfRgBseXcIWorNlslfFhBnSn6ZGn8q4XICGgZ1hWUj9z1PUmcM2LZDjJS33LLzd23uIdLePizAliJAzlPyea8JNpCVjfmwnNwtuxXc48uAUXlmX+e0ZXRwuEGble8c1PbrWWTWU4xhWNJ+MInyvIGv9s6cGN7+fxAFaUAJS0wNEa3poCnhyNxrckvaqiI3WhPQ8Hefy2DpXTY03mdxCz8PZPcLWdQU3H5nmuAc/pypnc0Avax` |
-> | Central India | rsa-sha2-512 | `HSgc5u8s+QILdyBq6wGJkxRcK5nxj81gxvpkR5bcH6k=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDSO/R/yw8q33yLkSHOw0Bi2WKDWQPrll8skh3hdRUB6wtw9dvtQFEV3suvFJsTVvAbnGBe2Fjgi69X0zkIygxg74XuQsx7GZO6gyaKDwljyanFoCzer+OzFSpDcVJ0zOfhY99uHeYT6k4leb2ngABqjiqieDHMZ9JQX12KOK3cAks/oytrNUo9krGb1Nyv5BYu4dWXHmuFgtigDd043khaARfdWkg88lKgb6G9k+vQTGKphLnFMqhada/aP8GsaA2Dq5d/LH5P5CTU7MRPA8TuuyLOtbv8FtQ2TyaAXhYCplCQELtto1yXZ79WVjQE/uKuX8xK5M2rfOH+H5ck/Rxl` |
-> | Central India | ecdsa-sha2-nistp256 | `zBKGtf770MPVvxgqLl/4pJinGPJDlvh/mM963AwH6rs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjHx8+PF0VBspl6l9Xa3BGyJwSx2eDX0qTDnhrdayzHMWsHGX3vz0wr7oMeBVdQ26dOckExa6iPrEDSt8foV1M=` |
-> | Central India | ecdsa-sha2-nistp384 | `PzKXWvO/DR/KnUElcVWIwSdabp6ZJqce37DJZzNl3Sk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJwEy1f+GYN4rxhlCAkXGgqAU1S7ssI4JPEJs8z1mcs8dDAAVy1cqdsir9yZ9RSZzOz/BOIubZsG137G2+po0Pz0FfJ0jZVGzlx1UHXu7OMuKQ7d2+1TkPpBYFy6PiCa3w==` |
-> | Korea South | rsa-sha2-256 | `J1W5chMr9yRceU2fqpywvhEQLG7jC6avayPoqUDQTXHtB2oTlQy2rQB` |
-> | Korea South | rsa-sha2-512 | `sHzKpDvhndbXaRAfJUskmpCCB3HgPbsDFI/9HFrSi3U=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCfGUmJIogHgbhxjEunkOALMjG77m+jgZqujO3MwTIQxQNd/mDeNDQaWDBVb2FJrw15TD3uvkctztGn2ear3lLOfPFt0NjYAaZ8u5g9JYCtdZUTo5CETQFU/sfbu2P2RJ/vIucMMg8HuuuIMO059+etsDZ5dZHu9cySfwbz/XtGA0jDaTlWG0ZDT+evOE0KmFABjgMFWyPnupzmSEXAjzlD/muGeeUhtXUB8F6HVUCXLz7ffzgYiYj+1OB0eZlG/cF8+aW7MOpnWvfpBxwm16soSE1gmZnXhPrz/KXlqPmEhgIhq7Cwk54r3rgfg/wCqFw+1JcbNOv5d4levu/aA7pt` |
+> | Japan West | rsa-sha2-256 | `DRVsSje7BbYlVJCfXqLzIzncbVU4/ETFGzdxFwocl8E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDl/rlTgQpomq4FmJKSR2fjgAklV818RcjR/e/C1VUJVpbntJoWUlBhKYDFPTVQaHXDTK5HyJU5APsdy6CJo8ia32qc2E/573LDNk4dgFFrh+KFRiD+ULt3IH15i1DieVw61MAVOvzh+DmTJHPLaTufXoQ62YACm3yC1st1kXv4bawfXs0ssmeqrBcCOQvMvW/DexnnGXO6QXYTcjUktNrO2h2dd355n5FP4fcsBEdGmfT79HYPM6ZoqkItRZEO6Nel65KxtenAwQub8SK3iJgFyJwd3zIH4OCHp3z4tcGXw5yNAX15dJMSnls0zvzhx0f4ThwfgB4t1g9jVb47Ig7B` |
+> | Japan West | rsa-sha2-512 | `yLl9t2jlkrTVWAxsZ59Wbpq+ZCnwHfdMW8foUmMvGwI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9zrpnjY7c0dHpE1BMv+sUp+hmvkBl3zPW/uCInYM5SgtViSQqn/DowySeq+2qMEnOpHGZ8DnEjq55PmHEumkYGWUUAs38xVGdvRZk6yU7TxGU42GBz0fT/sdungPHLQ2WvqTZYOFqBeulRaWsSBgovrOnQEa2bNTejK9m353/dmAtKHfu68zVT+XYADrT3PY5KZ1tpKJA0ZO9/ScUvXEAYs20WSYRZBcNDoSC9xz4K8hv9/6w3O3k0LyBKMFM5ZW8WVDfpZx1X0GBCypqS+RNZuVvx81h3nxVAZSx80CygYcV4UHml7wtnWDYEIBSyVRsJWVNGBlQrQ4voNdoTrk5` |
+> | Jio India Central | ecdsa-sha2-nistp256 | `zAZ0A1pk0Xz8Vr/DEf8ztPaLCivXxfajlKMtWqAEzgU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDow29ds+BRDNTZNW70CEoxUjLiUF+IHgaDRaO+dAWwxL13d+MqTIYY4I0D7vgVvh0OegmYLXIWpCdR8LvVT7zA=` |
+> | Jio India Central | ecdsa-sha2-nistp384 | `OTG7jxUSj+XrdL28JpYAhsfr6tfO7vtnfzWCxkC/jmQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/Bb3/3u/UIcYGRLSl7YvOObb43LO5Ksi0ewWJU+MPsPWZr7OTTPs76TdwXMvD8+QuY8U9JxgQQrNmvbpabmbGENkllEgjGlev5P2mHy/IZZAUQhAeeCinCRvTsiOOoLw==` |
+> | Jio India Central | rsa-sha2-256 | `DmNCjG1VJxWWmrXw5USD0pAnJAbEAVonkUtzRFKEEFI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/x6T0nye3elqPzK8IF+Q70bLn2zg4MVJpK3P6YurtsRH8cv5+NEHyP0LWdeQWqKa9ivQRIQb8mHS+9KDMxOnzZraUeaaJLcXI0YV512kqzdevsEbH6BSmy8HhZHcRyXqH0PjxLcWJ5Wn9+caNhiVC40Oks7yrrZpAVbddzD9y/eJfguMVWiu1c8iZpYORss1QYo7JqVvEB6pLY03NXWM+xti1RSs+C6IEblQkPvnT3ELni9T1eZOACi12KGZHVLU9n27Nyg/fPjRheYSkw/lkkKDG0zvIQ7jr/k8SCHGcvtDYwRlFErFdGYBlIE888le2oDNNoKjJuhzN6S7ddpzp` |
+> | Jio India Central | rsa-sha2-512 | `m2P7vnysl2adTz/0P6ebSR7Xx8AubkYkex6cmD9C0ys=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDQHFDt8zTk+Hqh912v0U8CVTgAPUb8Kmuec+2orydM/InG+/zSuqQHsCZaD2mhEg8kevU8k2veF5z2sbko5TR/cghGg5dXlzz4YaKiNdNyKIGm2MdynXJofAtiktGhcB6ummctHqATfGSgkLJHtLvstzTVbVK1zgxXcB8hA52c2EPB1cN1TkAKEyiYNX7fKFe1EEPCxdx3fC/UyApKdD+D432HCW/g8Syj/n7asdB8EQqcoCT3ajh2wG2Qq0ZxjVbbrFImlr0VoTqLImJ4kZ9d2G7Rq2jqrlfESLAxKVDaqj+SjyWpzb3MHFSnwJZybCKXyTt+7BXpKeeGAcHpTs3F` |
+> | Jio India West | ecdsa-sha2-nistp256 | `mBx6CZ+6DseVrwsKfkNPh9NdrVLgwhHT4DEq9cYfZrg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXqhYQKwmkGb8qRq52ulEkXrNVjzVU4sGEuRFn4xXK8xanasbEea3iMOTihzMDflMwgTDmVGoTKtDXy8tQ+Y8k=` |
+> | Jio India West | ecdsa-sha2-nistp384 | `lwQX9Yfn7uDz/8gXpG4sZcWLCAoXIpkpSYlgh8NpK1E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLKY2+wwHIzFOfiKFKFHyfiqjUrscm0qwYTAirNPE1GI6OwAjconeX072ecY3/1G0dE7kAUaWbCKWSO3DqM4r6O+AewzxJoey85zMexW23g2lXFH7HkYn9rldURoUdk31A==` |
+> | Jio India West | rsa-sha2-256 | `hcy1XbIniEZloraGrvecJCvlw6zZhTOrzgMJag5b5DA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOBU9e1Ae68+ScLUA5O1gaZ3eq0EGqBIEqL3+QuN8LYpF3Bi/+m43kgjhgiOx5imPK6peHHaaT/nEBQFJKFtWyn8q2kspcDy1xvJfG8Jaks1GQG33djOItiHlKjRWMcyWFvisFE2vVkp3uO0xG4nMDLM2rFazkax+6XA5cf2iW2SfL6Trs4v1waakU/jQLA7vsrx14S+wGEdVINTSPeh5DHqkLzTa3m2tpXVcUA4CG8uQZM8E/3/y0BuIW0Ahl/P6dx35W1Al7gnaTqmx7+idcc/YVe0auorZWWdyclf1sjnAw6U8uMhWmQ0dZgDehDtshlHyx84vvJ1JOJs0+6S2l` |
+> | Jio India West | rsa-sha2-512 | `LPctDLIz/vqg4POMOPqI1yD9EE9sNS1gxY6cJoX+gEY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOH+IZFFfJN4lpFFpvp5x1lRzuOxLXs0WfpcCIACMOhCor2tkaa/MHlmPIbAqgZgth5NZIWpYkPAv7GpzPBOwTp3Bg5lUM7MXSayO/5+eJjMhB5PUCJ0We8Kfgf/U+vbaMIg9R8gJKutXrANd3sAWXMwWqKUw+ZX/AC7h58w04gb1s+lNOQbfhpqkw8+mrOj2eKH8zHYUJQBUYEyDHqirj565r7HhBtEZImn/ioJS+nYT5Zl/SNtW/ehhUsARG9p6O4wSy20Ysdk7b9Ur2YL0RyFa6QhWQeKktKPVFQuMMLRkYX7dv35uAKq8YN833lLjGESYNdCzYmGTJXk5KYZ8B` |
+> | Korea Central | ecdsa-sha2-nistp256 | `XjVSEyGlBtkONdvdw11tA0X1nKpw5nlCvN/0vXEy1Gc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYiomaLB3ROxZvdfqiilpZ+2XJiPDeIIv4/fnQRZxnCBCFrUm7ATB6bMBSUTd00WfMhnOGj4hKRGFjkE+7SPy4=` |
+> | Korea Central | ecdsa-sha2-nistp384 | `p/jQTk9VsbsKZYk09opQVQi3HzvJ/GOKjALEMYmCRHw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN3NA7U4ZC576ibkC/ACWl6/jHvQixx+C6i2gUeYxp7Tq6k4YLn7Gr1GNr+XIitT6Jy5lNgTkTqmuLTOp7Bx9rGIw9Or37EGf7keUM42Urtd+9xF1dyVKbBw0pIuUSSy+w==` |
+> | Korea Central | rsa-sha2-256 | `Ek+yOmuAfsZhTF4w7ToRcWdOevgZPYXCxLiM10q44oA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUTae7QtAd3lmH+4lKJNEBNWnPUB+PELE9f4us5GxP8rGYRar1v3ZGXiP2gzPF1km1cGNrPvBChlwFMjW+O5HavIFYugVIe8NzfI7S3t+kgTylXegSo1cWen18MAZe6Q5vxqqFzfs+ZChWEa/P37lTXVkLVOYCe5NJUPm8Zvip7DHB2vk25Fk3HMHG9M50KNj1Hp4etPI7yiLNLNCh5V410mf3xhZChMUrH6PMl/A+sVv68ulcVeIZ68eMuQktxz1ULohBdSExZGmknVrwfF/fLTKWxHlVBjB3yDlLIJO3nTFKaQ4RzPa/0If+FcbY+hIdzSjIAK6W3fRlbVuWHYR` |
+> | Korea Central | rsa-sha2-512 | `KAji7Q8E2lT3+lSe7h74L6rfPnLEfGVzYZ/xyM96//0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDxZYb5eIWhBmWSwNU6G9FFDRgqlZjYYorMSXJ4swHm4YYHKGZTf4JOE5d87MNtkVgKe2942TQxA1t2TaENlmNejeVG5QZ4to+nVnwsFov2iqAYChoI6GlhpwzyPsO0RkqLB8mvhoKMel1sNGfmxjxYVFt4OSPHDzNIU4XjGfW24YURx/xRkLU1M9zBNADDx+41EMNRT7aBXrKW9MzsxkfCM3bYwjdBbI2Yi2nUqARm+e/sBPLTqVfjuMFvosacYc43MqepFSQoZE5snwYxkLJzltAbxNUysJs277isnGgezh9p5T2MCxtCERU0lvp7M52hd1p75QEtNrdadfDprzT9` |
> | Korea South | ecdsa-sha2-nistp256 | `XM5xNHAWcYsC5WxEMUMIFCoNJU/g90kjk/rfLdqK7aw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHTHpO85vgsI6/SEJWgEhP5VDTikLrNrSi6myIqoJvRx6x8+doTPkH87L/bOe/pTU/rCgkuPi1kXTC7iUTSzZYk=` |
> | Korea South | ecdsa-sha2-nistp384 | `6T8uMI9gcG3HtjYUYqNNxi99ksghHvsDitIYpdQ4BL4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgPPIDWZqvB/kuIguFnmCws7F4vzb6QG7pqSG/L9E1VfhlJBeKfngQwyUJxzS2tCSwXlto/1/W302g0HQSIzCtsR4vSbx827Hu2pGMGECPJmNrN3g82P8M0zz7y3dSJPA==` |
-> | South Central US | rsa-sha2-256 | `n7P8NrxY8pWNSaNIh8tSZxi9rXi11g3JuzWZF93Ws4g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD4PgB8PxPPpGfvrIUGSiiDFIfkRk2/u1DmhcoGoIfW+/KR8KC2JA0kY4Yj+AceGnDUiBbSPz7lcmy2eGATfCCL6fC5swgJoDoYDJiJoaKACuVA0Nk2y0OeO58kS6qVHGX/XHzx8+IkfGdlhUUttcga7RNeppT5iqSz49q9x6Ly42yrV3DIAkOgh+f9SsMMfR6dQQmvWN3HYDOtiO2DvVN+ZenViQVcsynspF3z4ysk53ZYw5YcLhZu8JFw4u0F6QJAznR6TfNqIlhSjR1ub8DiHvIwrmDNf8TgG5kPVGhIcibYPf+y0B0M8nr9OKCxZzUTlXX4Xcnx+VOQ1e1qGHvV` |
-> | South Central US | rsa-sha2-512 | `B2oOtHpXzwezblrKxGcNBc3QJLQG/TiVgOjnmNorqkA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+LJA8W3BcwITzJv6CAkx/0HBPdy3LjKPK2NQgV9mxSMw8mhz4Ere59u2vRsVFcdW6iAeGrH66VF6mJSCgUKiYnyZAfTp1O6p6DnUg4tktMQFo4BEwSz1S5SGDuRhpWvoKjzvljESf/vZBqgms7nMRWe3MGuvlUWBqB+2CnJ7bxhvGQCdBTQeoPO9EZKYKi/fPlcxBmLFGcZnRRpB6nu/Cxhhj1aHLJdjqCd+4ahtjBHeFrPxeQv9gTJ1B+EipJZu7WgPZOTI8iZaIcnCbhuGOy0iOFXeuexC9/ptHDW9UEgKVLyZ4UIPJkSLFVgW5NRujWyZ/thc5+EfHY9Db3UAl` |
+> | Korea South | rsa-sha2-256 | `J1W5chMr9yRceU2fqpywvhEQLG7jC6avayPoqUDQTXHtB2oTlQy2rQB` |
+> | Korea South | rsa-sha2-512 | `sHzKpDvhndbXaRAfJUskmpCCB3HgPbsDFI/9HFrSi3U=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCfGUmJIogHgbhxjEunkOALMjG77m+jgZqujO3MwTIQxQNd/mDeNDQaWDBVb2FJrw15TD3uvkctztGn2ear3lLOfPFt0NjYAaZ8u5g9JYCtdZUTo5CETQFU/sfbu2P2RJ/vIucMMg8HuuuIMO059+etsDZ5dZHu9cySfwbz/XtGA0jDaTlWG0ZDT+evOE0KmFABjgMFWyPnupzmSEXAjzlD/muGeeUhtXUB8F6HVUCXLz7ffzgYiYj+1OB0eZlG/cF8+aW7MOpnWvfpBxwm16soSE1gmZnXhPrz/KXlqPmEhgIhq7Cwk54r3rgfg/wCqFw+1JcbNOv5d4levu/aA7pt` |
+> | North Central US | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
+> | North Central US | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
+> | North Central US | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
+> | North Central US | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
+> | North Europe | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
+> | North Europe | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
+> | North Europe | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
+> | North Europe | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
+> | Norway East | ecdsa-sha2-nistp256 | `mE43kdFMTV2ioIOQxwwHD7z+VvI4lvLCYW8ZRDtWCxI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWP6vJCbOhnvdmr7gPe8awR/E+Bx+c8fhjeFLRwp6/0xvhcywT9a1AFp7FdAhkVahNKuNMU1dZ0WTbOEhEGvdg=` |
+> | Norway East | ecdsa-sha2-nistp384 | `cKF2asIQufOuV0C/wau4exb9ioVTrGUJjJDWfj+fcxg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGb8w8jVrPU1n68/hz9lblILow6YA9SPOYh5r9ClAW0VdaVvCIR/9cvQCHljOMJQbWwfQOcBXUQkO5yI4kgAN3oCTwLpFYcCNEK6RVug9Q5ULQh1MRcGCy3IcUcmvnYdg==` |
+> | Norway East | rsa-sha2-256 | `vmcel/XXoNut7YsRo79fP5WAKYhTQUOrUcwnbasj/fQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Y1b2Bomv8tc/JwPgW0jR5YQhF031XOk4G0l3FOdZWY31L8fLTW6rOaJdizOnWCvMwYQK39tyHe6deN9TZESobh0kVVuCWaZNI6NUR0PSHi0OfbUkuV0gm/nwtwJkH5G9QbtiJ5miNb4Ys3+467/7JkqFZmqN6vBLhL9RVInO00LPYkUGtGfTv+/hmsPDGzSAujNDCFybti4c+wMgkrIH6/uqenGfA1zW3AjBYN2bBBDZopzysLHNJYQi3nQHQSiD4Mdl7IGZtJQeC/tH9CKH5R4U4jdPN1TmvNMuaBR/Etw4+v0vrDALG1aTmWJ7kJiBXEZKoWq/vWRfLzhxd4oB` |
+> | Norway East | rsa-sha2-512 | `JZPRhXmx44tEnXp+wPvexDys1tSYq9EDkolj9k6nRCM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC11j19LeEqRzJOs8sWeNarue+bknE3vvkvSnsewApVMQH35t9kpqRGMSr6RTU2QCYDiQTCKI2vzLSTLGoizoPBiY/7lvdylDRCbeEpuFUkgvKZrapkJ6JqKOySPpFNhqCs27rdY5dJ2C7/nmTL/kvcyhXFXZT2lJaOIdRSKv/1Q3DAWQ9icNGbDokQDubF5etlkquqTV6r/ioFuh7hdKE+fJooyHa2oYTD+j5cNDKBxrJWBEidOe2HwplR4lYPggUcVtGu9aoSVIMmswztFF6+MNIdOT1kdvHewKLjkVB1hbIHl/E+uexsyMGcCg5fPy7dDIipFi1aED+6R7CnAynJ` |
+> | Norway West | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` |
+> | Norway West | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
+> | Norway West | rsa-sha2-256 | `Ea3Vj3EfZYM25AX1IAty30AD+lhXYZsgtPGEFzNtjOk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDuxOcTdADdJHI8MFrXV00XKbKVjXpirS3ZPzzIxw0mIFxFTArJEpXJeRfb0OZzQ1IABDwoasp1u+IhnY1Uv2VQ8mYAXtC3He08+7+EXJgFU/xQ8qFfM4eioAuXpxR7M7qV/0golNT4dvvLrY4zHxbSWmVB7cYJAeIjDU8dKISWFvMYjnRuiI7RYtxh/JI5ZfImU65Vfxi26vqWm51QDyF5+FmmXLUHpMFFuW8i/g8wSE1C3Qk+NZ3YJDlHjYqasPm4QidX8rHQ1xyMX9+ouzBZArNrVfrA4/ozoKGnPhe4GFzpuwdppkP4Ciy+H6t1/de/8fo9zkNgUJWHQrxzT4Lt` |
+> | Norway West | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` |
+> | Poland Central | ecdsa-sha2-nistp256 | `aX1HJKXvnL8pJ1upt1OnBQT0vLbQXDrBeThar32gyEs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOTFAOA/iJnf5S+3tGqyGEpFspwR86HChkrkloJnehNvYhecP4tGhJx5Z15j9TJqHWEzpBFPIcxF+O9tStiv+oQ=` |
+> | Poland Central | ecdsa-sha2-nistp384 | `jNH6sSVNE+1NhyZzA3tzk0RaJpZoLVZHd8yjQG64DDw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFoLS+6QCyjyibWZvldjErzY9ptf+LXhyeQQDu7K+UajFsLk7xzx4vIRLsPJ+UhRyu81Lwo/pxcgoDX6uyB2M82JfQAWF+jniU7RfC/QzO5Jxbsj4mlY1kVO+R7/vdLTyQ==` |
+> | Poland Central | rsa-sha2-256 | `Ph2MhHZIZtRk76qOvea61JQGRMyxbHeYqbQYo1bDorc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCplMMhYJaBSEOXRYRvUACL1zoisjy7BRVdsKORsnqKtMimDqvl8UY304znr9Rn2DBT55EzRQIPs4V6tKwUMe4+FBm9Ef32/jxRdlJ7bM/eMRwFwmo4PxJ1pVpP8TYkpLcXXx5T+zCtphkSXUBHrZRas0OLJIw6ooj9rt60PeCvEIl9HBA8sMt8u7882KKGIZra7C1PK/0/rKub+7oRBEgXoxZxKYFmu72CJV4/4FmxQsYpqcwKaFgMnDYEzpJexL+XlGJ+GkeX8tngy38lwlwGdxi6s6w9e20TUSYtbfPJE8OBq08cHN1OhpbL3bS2Ynr5QkFwHIcwa0seSuXJCIj1` |
+> | Poland Central | rsa-sha2-512 | `aSOu8q60R2fx2C9eoCX3ReG/wKQbXzHf5XoTaEww6GQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4c5mGbfEkwSgXnhzF4zrguh9X1aHMn1p6pTwJhCjGTQ54ZIFYgfA294RXTYJdL84Xi++qCXHeENVeTWfD9dRlz+KDCOST4JpHauGKnKUF3udsHNNItai88CpDHj8JM6YYxfUR4/BHCNJQ8BrVnvrljWaj7SYJhyUuwChZkTeycZSQPOVJRoHKAnfI+KVZGfQp6dfJx1M11Ojz6a72E6cDDeu8YBNEGiWfYARTi0FJWpy36CsA6aLjXkWTLgM4ZD7vIhLOCLholei+zR43jpZUNKRe7Ym4nSliRsrlEsYkblxsIxotpLt9Al+ftn7GBAjU4HwhC13o8K3yWw0z3daR` |
+> | Qatar Central | ecdsa-sha2-nistp256 | `QOdUXQx3Bi3ss/976Q0n+aIt/vkWjfmGH4fsgk1mBvw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJz1f9SCaXyUAatHKEr/sfY2uRJWtftsigePCckBp+l/VenEVY22vVwstmrIeu02JKz1+IfePfGQ2bWOprpodXA=` |
+> | Qatar Central | ecdsa-sha2-nistp384 | `znqSno+29X1UUZV3ljgE7qSoYZtAybbH4dWNoSZIg6w=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkIRyyU0RVr0/xTE1pce28UeVStaqyw0daAWkChabp9SQb9ONmJ5UFzZ0p3bvcy2ZWeYiJCvg63qKojPomVCwT8ZtRtgeewRMWPS6kKAJDQfzl8r05dNjwbd8Y+1BerHw==` |
+> | Qatar Central | rsa-sha2-256 | `iHCboIvdshFEnYt/6+vvLQnjyUZQ550Pm7dkFX/Q43o=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDW+RNosbUkJxEwcZ0i22DPBTOgStdqEdaL+jRzzi8xs6n9hR2I8Mnv6PR+ujaejqAzXVmI5LLnMrQA9efsUR4F0Is5ruJgrK6f2ORiLsaYj7PgTOsoaItdjWxXHFQ7hZA1FmYLgody3Js68akvGkp8NwnW9goFq3qBrtpHRcvxFxWixeNTy4a4azVjmoN8SfZxiPa0mBT61fjpVttUrb+sJeZ3jo6Ox2ZQxc0My8kPY+8J1qNxjsoCUirHZsgsmYTM5F7lWSdszB7h2irIiMEi+cmcowhez6LJd3TcDxnElOz2Wva/wSNo0JJx/VLdZvP06hJTxIw2QsX2uwI7lyF1` |
+> | Qatar Central | rsa-sha2-512 | `EMxIi2rduXMod/OMKHrHRZKo9t9oYUdnw3sw8Txyaj8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDqTnxkToyGf9z/+6fXJ+DvHvKqADITDu+JqvJX2kaPSbkxEBvR1uW/jFT3DD7SL8ZS8qm8HD1MYyoiHE6yvM+K9md83GMNqBiuxIceHH7uW5mEUt25j519R7a/fQUXApt5ZXZTG5e9eUSP0W9r/HvwA+LkE66gDwamPZrF6OkBQnu3DEK1AcZNufM31lnFBlu0yzdLMFZh/L6yXRi9sh0ATf7aZeR2lgGuTuoaOUAx3F2xTt5lRNGpy8O4HV8uZKW0EsEcGYANguOEqiNEgjiw1sHIZ4XPZSYe+sXAkafVl6X07nu9CpEncrRnTcQIfZXnwbneOetDWlhZH/vk38ZJ` |
+> | South Africa North | ecdsa-sha2-nistp256 | `e6v7pRdZE0i1U2/VePcQLguy7d+bHXdQf3RZ4jhae+g=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEQemJxERZKre+A+MAs0T0R7++E6XanZ7uiKXZEFCyDgqjVcjk8Xvtrpk5pqo4+tMWM7DbtE0sgm1XmKhDSWFs=` |
+> | South Africa North | ecdsa-sha2-nistp384 | `NmxPlXzK2GpozWY374nvAFnYUBwJ2cCs9v/VEnk0N6Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKgEuS9xgExVxicW0HMK4RLO5ZC6S0ZyENe5XVVJY0WKZ5IfIXEhVTkYXMnbtrYIdfrTdDuHstoWY9uu4bS8PtFDheNn3MyNfObqpoBPAh1qJdwfJgzo5e7pEoxVORUMnw==` |
+> | South Africa North | rsa-sha2-256 | `qU1qry+E/fBbRtDoO+CdKiLxxKNfGaI9gAplekDpYvk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2UBC1KeTx8/tQIxVEBUypcu/5n3B/g0zqE7tFmPYMFYngrXqEysIzgAdpiu2+ZX/vY8AF/0UkhYec/X/rwKQL8CCVwYqa2hufbSrX/qSuUHZd/95LFB2Nh+hJ23fn3EK8Gpgo/Xkmx9YVZoaQPGPsWVWVKjU6aVpM54cd6iuDT3y9SAnqbUMqgwwz3mK7bQGFPrbUVOUwVIcYKZD9HMNZhpo8HpjllKYIt1AFy4db8lSrLyuX8Nn/U7XAlPUndUCpKsAfWw8SemyuxSHziFDHF5xo8eLU+QYxdtzirgDAgEYWv9aa0TSx5Q2Mq8XJ7POffQxKj44ocHzmMGq/wPS1` |
+> | South Africa North | rsa-sha2-512 | `1/ogzd+xjh3itFg3IpAYA2pwj1o3DprEabjObSpY/DY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDLAkEygbVyp189UwvaslGRgaqcGWXaYJVq+gUB0906xkkjGoJeqSgTW5C/77vOk0zBCZM3yBgtDFZL1d6lze1QJZ6kGGPynJa5SeyydAds9G745yaFFuE53zJUyMy+y5I1ytfx003PKvk8+fHZK3rPYYr+LKm2u+9BmnuDB/0t561oFg1ZiMCPgNnDdUwkya2EtsJAifkUaBlYmzBZAFbIYyGfb898utZHyI+ix2TrMS/RHEDIchG8qSBMpOPmcpa29ADVsmAQDd5ds5D7WjirfMXwBxgJTMyuy+N9rJRgHoqDnt/GsgI2GtoPM7YSET8uYug941hAvFm5TI/dW3YR` |
+> | South Africa West | ecdsa-sha2-nistp256 | `pr1KB8apI+FNQLKkzvUXx0/waiqBGZPEXMNglKimUwA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvbvOfXQjT+/3+jQtW3FBAnPnaypYSUhZMkTTSfd7RQMmSxsLNmDooERhVuUTa7XCTlpDNTSPdnnaa6P1a+F6A=` |
+> | South Africa West | ecdsa-sha2-nistp384 | `A3RfMOd6dGgUlcrkXL1YRKNXIdAB8M1lF9qwmy6PjFg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaJmo4QGmo6pbLHOXh06Rz9inntdxmuOtVxlJBO1i/ZK5les/AuaILMW7oQCxOKvZs/xI+P0MWRfrNgWSSapy5hNuTkbl8IqO4pH/lO//zdaHmVBC1kPnujDM9znJs6Rg==` |
+> | South Africa West | rsa-sha2-256 | `aMMzaNmXR+V1NrwLmovyvKwfbKQ6aAKYiA5n8ETYQmU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDGhe98UTnljsYaeJwtP3ABvT/hZP6Mp1r5beyJ2SWpdqZSZaKC+UQlWLu6WhLxLZ+5snB+YAlC56u4qOdDHLoid6vbAR/FPIcJlvQfcFJD88nihv9sq1kUX3JXrh0ZUrl2/Zj71aNlM/RL1OnXK/Pg2E+wu4EfnQTrzlWMhR8bxlQA0jH1zmfFN/6BTwP2if29TNlQkWuW3uq3rccY1GA6n0QtlucanPNRzsBtAzsH5/oFuB5R4sD/Msw0itvWuQP4e0y+Vdov1My/rjK19xLce6AhWmmhwkn5qxHdIy158C4cWnSkQvkYzPnwsi7KT9WRH7vfr8qD9zlA5mO+IDxJ` |
+> | South Africa West | rsa-sha2-512 | `Uc7QB0fT4NGyBp34GCAt8G4j1ZBXh/3Wa2YRlILu818=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCijtmaOHIcXjI07fVugz1M33+amlOEqdqtVgOlLmFRKSehPW2+6iGpAjQVwzsYOx32Hp5O07xj/PhiFsbBBqZXGHmuSIOJYa7tQSFvwclO+JW/kuoELXQLwnHxUfPyq4tYoj83GSZ5k/KRlEtbmjEwcozMQVya/7MzulAeV4nN6PDxoLjXlfGEQU2ZCGz2neeisQEM8+hZNuEH+O9O03g7CW8bwiI1Y70/bnNq95xJ5F7lRpwtJNWlx+kmUiNpfXOUPxZAUsny7z1Ka5XKEB1fDP8E/jAtrSWrRPDJew8lFpQeWukwB5tf3F3bh1SuSKaSQqKBArnSpJizWxp0brZZ` |
> | South Central US | ecdsa-sha2-nistp256 | `Wg9hTlPmrRH9aC9lTSf8hGFqa85AnW3jqvSXjmHAdg4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnEz4iwyq7aaBNKiABce+CsVIUfiw9Jw3pp6pGbL6cUaJs9mEVg1RMLHgPg2I+7XV0doisYhYb/XtufxzGCe94=` |
> | South Central US | ecdsa-sha2-nistp384 | `rgRhPelmxAix6TBDahmGqXnKjdImdI3MnDPVc6qhF2o=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKXGKbWfVe18G9gbCxFQiBGkGYM9LktSPKkRI18WRQ50qyuxVRXRDoV+iIEJyCQTpuFTPprQ6glQYeF+ztEb4MZaXpVrcs1/Og191dcEtty3UWuJBCrv/t1kezlwBWKyXg==` |
-> | Korea Central | rsa-sha2-256 | `Ek+yOmuAfsZhTF4w7ToRcWdOevgZPYXCxLiM10q44oA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUTae7QtAd3lmH+4lKJNEBNWnPUB+PELE9f4us5GxP8rGYRar1v3ZGXiP2gzPF1km1cGNrPvBChlwFMjW+O5HavIFYugVIe8NzfI7S3t+kgTylXegSo1cWen18MAZe6Q5vxqqFzfs+ZChWEa/P37lTXVkLVOYCe5NJUPm8Zvip7DHB2vk25Fk3HMHG9M50KNj1Hp4etPI7yiLNLNCh5V410mf3xhZChMUrH6PMl/A+sVv68ulcVeIZ68eMuQktxz1ULohBdSExZGmknVrwfF/fLTKWxHlVBjB3yDlLIJO3nTFKaQ4RzPa/0If+FcbY+hIdzSjIAK6W3fRlbVuWHYR` |
-> | Korea Central | rsa-sha2-512 | `KAji7Q8E2lT3+lSe7h74L6rfPnLEfGVzYZ/xyM96//0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDxZYb5eIWhBmWSwNU6G9FFDRgqlZjYYorMSXJ4swHm4YYHKGZTf4JOE5d87MNtkVgKe2942TQxA1t2TaENlmNejeVG5QZ4to+nVnwsFov2iqAYChoI6GlhpwzyPsO0RkqLB8mvhoKMel1sNGfmxjxYVFt4OSPHDzNIU4XjGfW24YURx/xRkLU1M9zBNADDx+41EMNRT7aBXrKW9MzsxkfCM3bYwjdBbI2Yi2nUqARm+e/sBPLTqVfjuMFvosacYc43MqepFSQoZE5snwYxkLJzltAbxNUysJs277isnGgezh9p5T2MCxtCERU0lvp7M52hd1p75QEtNrdadfDprzT9` |
-> | Korea Central | ecdsa-sha2-nistp256 | `XjVSEyGlBtkONdvdw11tA0X1nKpw5nlCvN/0vXEy1Gc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYiomaLB3ROxZvdfqiilpZ+2XJiPDeIIv4/fnQRZxnCBCFrUm7ATB6bMBSUTd00WfMhnOGj4hKRGFjkE+7SPy4=` |
-> | Korea Central | ecdsa-sha2-nistp384 | `p/jQTk9VsbsKZYk09opQVQi3HzvJ/GOKjALEMYmCRHw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN3NA7U4ZC576ibkC/ACWl6/jHvQixx+C6i2gUeYxp7Tq6k4YLn7Gr1GNr+XIitT6Jy5lNgTkTqmuLTOp7Bx9rGIw9Or37EGf7keUM42Urtd+9xF1dyVKbBw0pIuUSSy+w==` |
-> | Southeast Asia | rsa-sha2-256 | `f0cyRMVxUgtpsa9J6pwAMysk2MY/sybo5ioPjhy9LZk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWPK6PAGMTdzNkwKZt+A3Dhbnete6jyLLboOXWdv/QdhvjR2pNCMhGuWUxadaiLUxzZM7IvugSLGexQlZi5aCJ06DpaVYqZk/Q8l+QUydp9TfNg/kP+0OJXCJ6XdsVggboDIfrEN8ku4nfasD4QTo2tnmqZhmbIDUr38SP16PsH2bQAi2lZKg4DfWgnSFyj5sbMSDLljBEY6JQkLGiPcbqlYEN4kjB5mudE9c/ts6Jn1fhizBwJY/pE3kOydq8dCMXYFMZ6NafPacCi7Pe5zcTKfi/daioVlSXQhWK3jNzCVENonF2xWSPH+1T5F2IOV0wb0HL2l8d02x5Bw2Su4aF` |
-> | Southeast Asia | rsa-sha2-512 | `vh8Uh40NCD3iHVh5KEcURUZrT3hictlF9pMDEoK5Rxk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdL+E/W2RpmJiWMRg5EtMs0AE7BF2Qb5jnXXaIbwqr5/BGuUPLm43eVJJt5R0BmEJe2lYfYLAzinC9MhsxKSTHIt5u8QleyIAxI759M3DWZwFSKngjsHFRe/SvZOzc7gvtR7osdnVaXCTXY5NccLT34gDybEbjlmp+SEvSZZmXyy2wmUR3O022euBifKN0t9Tk1mkLYhbfRySQi0ZADWazjd7loM9ZHArVe8y9oDrs7QYX4eHIVRbgtsBbkR3g9zP3VWVMERFyi6cU0Dyvue8DCx9YzNsdmKjkB2dvYTMVcUkad81pbO81jpLb1wL25WPHIPHqTOLZhdn9JxLn245Z` |
+> | South Central US | rsa-sha2-256 | `n7P8NrxY8pWNSaNIh8tSZxi9rXi11g3JuzWZF93Ws4g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD4PgB8PxPPpGfvrIUGSiiDFIfkRk2/u1DmhcoGoIfW+/KR8KC2JA0kY4Yj+AceGnDUiBbSPz7lcmy2eGATfCCL6fC5swgJoDoYDJiJoaKACuVA0Nk2y0OeO58kS6qVHGX/XHzx8+IkfGdlhUUttcga7RNeppT5iqSz49q9x6Ly42yrV3DIAkOgh+f9SsMMfR6dQQmvWN3HYDOtiO2DvVN+ZenViQVcsynspF3z4ysk53ZYw5YcLhZu8JFw4u0F6QJAznR6TfNqIlhSjR1ub8DiHvIwrmDNf8TgG5kPVGhIcibYPf+y0B0M8nr9OKCxZzUTlXX4Xcnx+VOQ1e1qGHvV` |
+> | South Central US | rsa-sha2-512 | `B2oOtHpXzwezblrKxGcNBc3QJLQG/TiVgOjnmNorqkA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+LJA8W3BcwITzJv6CAkx/0HBPdy3LjKPK2NQgV9mxSMw8mhz4Ere59u2vRsVFcdW6iAeGrH66VF6mJSCgUKiYnyZAfTp1O6p6DnUg4tktMQFo4BEwSz1S5SGDuRhpWvoKjzvljESf/vZBqgms7nMRWe3MGuvlUWBqB+2CnJ7bxhvGQCdBTQeoPO9EZKYKi/fPlcxBmLFGcZnRRpB6nu/Cxhhj1aHLJdjqCd+4ahtjBHeFrPxeQv9gTJ1B+EipJZu7WgPZOTI8iZaIcnCbhuGOy0iOFXeuexC9/ptHDW9UEgKVLyZ4UIPJkSLFVgW5NRujWyZ/thc5+EfHY9Db3UAl` |
+> | South India | ecdsa-sha2-nistp256 | `7PQhzR5S6sEFYkn2s3GxK6k8bwHgAy0000zb07YvI44=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgZw/ouE23XQnzO8bBPSCJp/KR+N/xfuJS5QtWU/PzlNLmSYS20b65GRP6ThwZdaigMhwHOEc8twpJ7aA7LBu0=` |
+> | South India | ecdsa-sha2-nistp384 | `sXR2nhTTNof58ne5K+Xjm9Pu8miEbKJn4Bo9NYoqQs4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwbzUI8q9f5YTLIs6ddRTPlHdb35xrbsJeOQII/nEXhlNjzpdL9XnDJjQunQL2vg6XND1pfp3TNBJ9LF3mud442LbpwSt9B7EZD8tQ5u0+2NeNjn8JnCu6/tdvS+xoNiA==` |
+> | South India | rsa-sha2-256 | `5gFLJvQvQodZxKBi3DnGywpf9dliWguiMTqcgkTmtu8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDlxVnaYnmg1cK+g/PI1jB1fgQQJiX39ZmfBss3mSW3kUxP3KWhm7lHBTkrbnfhVHnGpP6GcGFy09YBQa6UiyVpD8p8APtx0j9Jp8m3yhhgqOIjup0C7crl49NqMVryOZmCLOvA7KTyTxxV37GpRI+ffqQ8LOO+anWVWVaJlVCYBMct/OVhA7ePXblcbJg5eu5JjUiWW+cPdVqAqWojNHZzzprCFEBTCvYaZtzBx4kFGiipPmJSN6yvBPEfnA7Lzr/T9iXV/XkmI1txuJRBasoQMt+4jCZG25sCCN8y4iuUJCioUELr//TWaDyTsQAR4MbRW+L/GSIM9VUY4Uc+Impp` |
+> | South India | rsa-sha2-512 | `T4mrHCEHbFNAQSng//m0Viu/hXfi11JMnyA0PqAuTtg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz9tQa7D4dyrULCLH75yKwH27AQMRNWFUqgUQQXYHR1MYegLf7JEmFn126bxgEHPRO0bNwBM9S626Gcr1R1uDI/luL6uvG0Q57k+Pmv7HNQtv12J3fAuxuhSPcppE5IE5QR94Qgd1RzGXv954TK1Z+kCXHyLA583XTQ4btOEwqUo/16tSCqaoTSdyNp17q8BrOCPaTWMqT774lSGELIDc6RaGKHRu/Qa+F5FRMswdZt5YJDEKtlKdvbyIiSfIP2GZGhWBiSW2D6xpnzSjstR3LfRfFek/ryGkDPu5c5HNaVJwc1fatP6ggAbhVCcyDgWiCCpEBICV2wnPpGfDUdbRh` |
> | Southeast Asia | ecdsa-sha2-nistp256 | `q7OsE02p9SZ6E63b+Mxri1wbI5WfkdWcIJgAP2+WTg8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEbvjkwSA0RQuT2nQf8ABKc21s/kcC/7I5431oNEwQPZQ8S18RAKktv6ti19Ju8op6NOZZ3Up9lOn3iybxHgy+s=` |
> | Southeast Asia | ecdsa-sha2-nistp384 | `HpneuSwbRG7eiqHGEAkSXF0HtjvccoT3OIgeQbPDzoE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGAMUN+0oyuXuf6rkS+eopeoISA2US3UrgAovMwoqAeYSPoHKy9n/WKczsHPy/G+FKsXM4VlMHtNhEAxYwjtueF0Sb2GRZFzngeXMfVZPVL5Twph/pT6ZJnUD8iloW0Mw==` |
-> | Australia East | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
-> | Australia East | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
-> | Australia East | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
-> | Australia East | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
-> | Japan East | rsa-sha2-256 | `P3w0fZQMpmRcFBtnIQH2R88eWc+fYudlPy7fT5NaQbY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCZucqkz4UicI20DdIyMMeuFs+xUxMytNp7QaqufmA2SgUOoM387jesl27rwvadT6PlJmzFIBCSnFzjWe5xYy3GE59hv4Q3Fp3HMr5twlvAdYc5Ns5BEBEKiU0m88VPIXgsXfoWbF0wzhChx8duxHgG4Cm+F8SOsEw/yvl+Z/d42U9YzliQ1AafNj4siFVcAkoytxKZZgIqIL4VUI322uc93K5OBi9lgBqciFnvLMiVjxTWS/wXtVEjORFqbuTAu/gM4FuKHqKzD1o39hvBenyZF2BjIAfkiE6iYqROd75KaVfZlBSOOIIgrkdhvyj9IfaZFYs3HkLc7XgawYe6JVPR` |
-> | Japan East | rsa-sha2-512 | `4adNtgbPGYD+r/yLQZfuSpkirI9zD5ase01a+G7ppDw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCjHai98wsFv0iy+RPFPxcSv8fvTs3hN/YnuPxesS21tUtf0j5t8BTZiicFg6MLOQJxT4jv5AfwEwlfTqvSj3db6lZaUf/7qs/X9aN1gSoQNnUvALgnQDYGjNYO8frhR7S0/D/WggQo2YKMAeNLRScT7Pg/MJaOI12UhoUloCXbTAP1c85hYx0TGKlGWpFjfen/2fwYEKR1vuqaQxj+amRatnG+k18KWsqvHKze8I2D19cn5fp2VkqXzh6zQ1s5AMc5B9qIF48NIec9FAemb9pXzOoYBDFna0qNT4dfeWOQK6tM/Ll10jafaw2P32dGBF8MQKXB2sxtcC0nU4EEtS5d` |
-> | Japan East | ecdsa-sha2-nistp256 | `IFt/j4bH2Jc0UvhUUADfcy3TvesQO+vhVdY4KPBeZY8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVq+uiJXmIlYS367Ir9AFq/mL3iliLgUNIWqdLSh7XV+R8UJUz1jpcT1F6sJlCdGovM3R5xW/PrTQOr3DmikyI=` |
-> | Japan East | ecdsa-sha2-nistp384 | `9XLsxg1xqDtoZOsvWZ/m74I8HwdOw9dx7rqbYGZokqA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFh7i1cfUoXeyAgXs+LxFGo7NwrO2vjDwCmONLuPMnwPT+Ujt7xelTlAW72G3aPeG2eoLgr6zkE48VguyhzSSQKy7fSpLkJCKt9s0DZg2w0+Bqs44XuB43ao6ZnxbMelJQ==` |
-> | Canada East | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
-> | Canada East | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
-> | Canada East | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
-> | Canada East | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
-> | Canada Central | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
-> | Canada Central | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
-> | Canada Central | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
-> | Canada Central | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORAcpaBXKmSUyCLbAOzghHvH8NKzk0khR0QGHdru0kiFiE16uz9j07aV9AiQQ3PRyRZzsf+dnheD7zuEZAewRiWc54Vg8v8QVi9VUkOHCeSNaYxzaDTcMsKP/A7lR2AOQ==` |
-> | Switzerland North | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
-> | Switzerland North | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
+> | Southeast Asia | rsa-sha2-256 | `f0cyRMVxUgtpsa9J6pwAMysk2MY/sybo5ioPjhy9LZk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWPK6PAGMTdzNkwKZt+A3Dhbnete6jyLLboOXWdv/QdhvjR2pNCMhGuWUxadaiLUxzZM7IvugSLGexQlZi5aCJ06DpaVYqZk/Q8l+QUydp9TfNg/kP+0OJXCJ6XdsVggboDIfrEN8ku4nfasD4QTo2tnmqZhmbIDUr38SP16PsH2bQAi2lZKg4DfWgnSFyj5sbMSDLljBEY6JQkLGiPcbqlYEN4kjB5mudE9c/ts6Jn1fhizBwJY/pE3kOydq8dCMXYFMZ6NafPacCi7Pe5zcTKfi/daioVlSXQhWK3jNzCVENonF2xWSPH+1T5F2IOV0wb0HL2l8d02x5Bw2Su4aF` |
+> | Southeast Asia | rsa-sha2-512 | `vh8Uh40NCD3iHVh5KEcURUZrT3hictlF9pMDEoK5Rxk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdL+E/W2RpmJiWMRg5EtMs0AE7BF2Qb5jnXXaIbwqr5/BGuUPLm43eVJJt5R0BmEJe2lYfYLAzinC9MhsxKSTHIt5u8QleyIAxI759M3DWZwFSKngjsHFRe/SvZOzc7gvtR7osdnVaXCTXY5NccLT34gDybEbjlmp+SEvSZZmXyy2wmUR3O022euBifKN0t9Tk1mkLYhbfRySQi0ZADWazjd7loM9ZHArVe8y9oDrs7QYX4eHIVRbgtsBbkR3g9zP3VWVMERFyi6cU0Dyvue8DCx9YzNsdmKjkB2dvYTMVcUkad81pbO81jpLb1wL25WPHIPHqTOLZhdn9JxLn245Z` |
+> | Sweden Central | ecdsa-sha2-nistp256 | `6HikgYBMSL9VguDq9bmwRaVXdOIUKEQUf4hlOjfvv6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErZhZNNmDhMKbSUXLB1VcTmR7pXcXWAqfFpdI81OP1FeCxBtpRNpIeWMyVoP3FeO3yWcODLm/ZkK7BICFpjleo=` |
+> | Sweden Central | ecdsa-sha2-nistp384 | `apRb96GLQ3LZ3E+rt2dyr9imMFDXYbaZERiireEO6ks=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKA5kwsqDKzZWmQCjIFBBjZun3rjg62pv8BOULwvwImaPvMFuR2OipExQZIyKSbR7wS9HA4/QKVA5rLRrSGpYvOBG438/7fwVZy5rOj3GXq6X7Havr1ExRXwsw5rJ56acA==` |
+> | Sweden Central | rsa-sha2-256 | `feu0rEf3KhvHGfhxEjcuFcPtIl+f0ZVzOMXyxy+f8f4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOimUzZHr0DxrjdWEPQqkrBudLW2P2dvPE9DoaXSNbehU13bxzsF6lzO65JBPh9rlNwwyt2yWtrR4XI0Qh/QSXmBntefOeH6BZVrN06aHrsd1dQBr4UFT5chCwy6Keu0ARW3fY8kO9lycTmMIeoiaYahicxyRRC8WLs0cSCH8tO0dA2aoaMxafBWqR6D5dNzu00rIcsCxvyjtN3Y8C4fw3YnNvPB/qWHdZ4aNcu7sQMRhCYVNPqX9UNGeXkbw8gHf9uL9dFu1c+P+VFIEs5bIecgT5HiGvtuXsWRdtEcM1v3mrRnNdmeWWQIqXzLrs5svipMIbnYXekhhLYHIlVo4d` |
+> | Sweden Central | rsa-sha2-512 | `5fx+Ic5p/MMR6TZvjj2yrb4HMHwc1TgM4x1xQw4aD3Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2nRaxWTg4KGLClTZLQ5QgPZPyQ/XYbH4prjhg1uK7m/JKlmJw5LjmIUVKnlXS38qTKpWpJZyGU/eBCa5FPQODvoAXfNncgtIQxd7j00P8aO2tho+uIxSgiTCte8sgrAyx22uIJlORJn2x1cBFBJrlgQDJOKEAs9IakMNdLvlfjJV405gk7pstF4eeIANRWC3eOTrMs0O1gCTt2rnWR5BNQJu8swj9FEWreNQ3PvUliM6Ig6u8b+4d8ryYGuzh5+E8wy/aNxlowkoCI4D/+dBnH43pSYyjhrVx966JMlrJZjDmbgtygkJI+FoEEfBoFlrpIGfisqIX41Np9ZRre4Ux` |
+> | Sweden South | ecdsa-sha2-nistp256 | `8C148yiGdrJCGF6HpDzINhGkB5AAyWDqkauJClRqCZs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEREKXJT7obM0RXGFrUPmJOoEpJD8T+QT29UEt3/jZrUzVzqXLV/9+VK0xqv1suljhUoUoClBdKqx5E/Sv1kSV4=` |
+> | Sweden South | ecdsa-sha2-nistp384 | `ra8+vb8aSkTBsO0KAxDrl2lN9p41BxymtRU6Seby83M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMby6y3wzWnzE304DjregQcSqKTsoMx2vPGk7OlBtjFKoubZlBRQH4jQrtPbJv/Hpf8f+D0JmvPe5G75yZFG1BcP5eB4aonAr0NNCw+3sCb50JVpoT4yoT787KKYf+5qg==` |
+> | Sweden South | rsa-sha2-256 | `kS1NUherycqJAYe8KZi8AnqIQ9UdDbpoEpcdHlZ702g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ+Imy6VuOvZsel9SCoMmej4kFvP8MDgDY9EdgfkgpjEfOSk+vmXBMCFtthI7sHRggkuXQE5v6OkOPwuWuVWjAWmclfFIz+TTNE5dUUY6L+UMipDEcwFxtufnY3AW0v2MW5lOFHWbx3w7605yb2AFQuZjvngkjdelhDpVpX9a0XdPa7zUYBwXdxWeteH+i4ZJ62sjlBGzYRjFhK/y1rUKR3BVR5xtP9ofzqE1n/TRLpViU8iy4bpsQntTWa71xVoTFtE29h3ESw4QG2lRCwk7NIf8efyNdR25+YpVGIysAxXG2smGAi2W/YXUjteCE7k3IU+ehHJdWKB3spUBSoF/V` |
+> | Sweden South | rsa-sha2-512 | `G+oX014UJXR0t1xHrCi715XuoHBkBxJMdH8hmVMilJc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCa5Ny0EUd8yLOgzczm6Zge+D39VY7hpG+et2ln0i/HdYLd1aijEiF/0RDgnJYxZM4RhPZHxrVZXJXLsLa2T+ud+cqifvsjudsUSCzWNY3pHAwKBTSuu8Po+TrJXx8b+ogg+EhTh1BZQzIVQbtLwqRFJ3beLtvhp+V1pPWOoXRiN6Rq+x6ciT37jOdp033rbEM3AtzWdRBvRxUiVxKoRXcDYwAAIb3joaZ26p69Vj7HpD0HAf7w9f70zIwIzqrW4RcHcP+RbDVzNukK8gWP66OgSKrAQgRmibS6SEJx4kgkaghiQfm1k1bXkTnlKlz956DHkTkpMQe21/eW1Prs+q1` |
> | Switzerland North | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
> | Switzerland North | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
-> | UAE Central | rsa-sha2-256 | `GW5lrSx75BsjFe4y4vwJFdg454fndPjm4ez2mYsG3zs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQiEpj9zkZ8F3iDkDDbZV4A3+1RC/0Un6HZVYv5MCVYKqsVzmyn+7rbseUTkZMO/EqgF8+VWlwSU5C2JOesZtKXAgNzXBSOER3NbiucB5v1b1cC+8Qo4C2+iTHXyJSKxV0bTz55crCfhKO1KTQw3uZoYh6jE9xI1RzCI1J4qP+afZQQhn3H+7q+8kTMhmlQrfKuMWennoWZih+uTe9LPHjlvzwYiXkS2sOIlKtx8eLDJJg2ONl7YKSE4XVq7K33807Gz5sCD/ZV+Bn+NyP2yX14QKcyI97pkrFdcJf2DZi7LdTuEVPx3qK/rHzmzotwe6ne6sfV+FJpowUUTbKgT5` |
-> | UAE Central | rsa-sha2-512 | `zflL4olL2bga9JCxPA/qfvT2jSYmIfr2RY6GagpUjkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAtxSG7lHzGFclWVuErRZZo6VG5uaWy1ikhb67rJSXdTLuSGDU+4Boj4wKxK0EyVKXpdQ3VrIwC4rOEy/lKAlnI2PrkrMjluau2aetlwW0hCBKAcgEOpMeMJJxCvv9EVatmEhvCe0ARyVM539058da9LzoZ2geFnFIbh3t8fNCaJZTNSS5PW1SLkspSqYXUYJWzu8Kx9l3LTzlmJT1DukKLIKj5ZDwuzOIN5m1ePYp4MzfIeBN6ys8df8HqXLoEXE+vOZWOzwkPVWoTsYvwB8j9+FHECAVf4Gcm8sPvRZA/RKDn1dGW2THzVw/VI/F87fFC7stLmZJ1v+a9TTFE649` |
+> | Switzerland North | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
+> | Switzerland North | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
+> | Switzerland West | ecdsa-sha2-nistp256 | `5MyZiuHQIMDh/+QEnbr3Zm6/HnsLpYT2GXetsWD6M8Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj5nXHEjkVlLcf9R9fPQw9k2QGyUUP6NrFRj1gbxKzwHsgG2YKWDdOJiyguiro0xV9+JRdW3VC49/psIYUFDPA=` |
+> | Switzerland West | ecdsa-sha2-nistp384 | `nS9RIUnm5ULmNIG+d7qSeIl/kNzuJxAX9/PcwfCxcB0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB/Ps4Wp15xhNenavSHZijwVXdZcvhzVq8IcfHR3+Gz3tKLed36OdHRTdWpvjrg0mENw4L1mEZnHnDx96WMtA+FfagGWXMVMMfcyM4riIedemHsz45KAR2suqcdkNHfdVA==` |
+> | Switzerland West | rsa-sha2-256 | `yoVjbjB+U4Cp/ZpMgKKuji9T2pIFtdeXnJudyeNvPs0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFl9NO3CJyKTdYxDIgCjygwIxlT1ppJQm/ykv2zDz6C7mjweiuuwhVM3LRua3WyP5mbgl3qYm+PHlA7UyIMY5jtsg7GaSfhiBSGZAdfgfDgOp3qRkgyep84P69SLb2b0hwgsPVkx8eWLDDVbOEdQLLx7TVndyxtdw+X4bZs6UdEcLMvLUWl7v3SoD5oiuJN6vOJPQl0VBeEaK/uhujjFgnlEu7/31rYEKQ8vQBbx22a4kIyBtUSAGo/VfKGRWF9oXL7Umh2xHAPwNbGwP+DdCKUY27wWG7Qe18O+QS9AOu0yL4+MRIHZg8ODLQsk0Hp3q8Iw2JjohSkk4lcjHYgb69` |
+> | Switzerland West | rsa-sha2-512 | `UgWxFaVY0YYMiNQ82Wt3D1LDg3xta1DfRUUKWjZYllk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6svukqfg7147raZZrA1bZFOO/EDFgi+WRsxoOfH/EEWGmZ89QQ5m855TpsTPZ5ZARQD9kxrYEtqefcSPuWgth4Ze5PNVwRfAwedsSfnYwHZqHRlRM54bOQ6Img7T292ERl4KNJUI7SLyF+kKB7eXqp5nMBrTZ4rSHXoeibv2yZAph0cyf4V/NnfRj6KZSf6YDs0LW1VuovWAC6S7mpBjwtabBmd1gIiJleWhB7Jj48yiyh0m7L9oIoR4NRiuFC535JwqCYhrgFwujuk6iIR9ScRdayEr6gVcv6tBms3MyR16ytA/MHRxYHfPKb1kHUrpFjDQZZZswoDJDnhQGOm8Z` |
> | UAE Central | ecdsa-sha2-nistp256 | `P3KxgoZgjHHxid66gbkRETjPsHUsNiPt5/TFU0Kby6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvHAXCWC9HGJnr5SRW8I1zZWsyHIczEdPpzmafrU8drYmhpRxlD6HlKnY7iXqfq8bOIK063tpVOsPbrVevAKPs=` |
> | UAE Central | ecdsa-sha2-nistp384 | `E+jKxd6hnfVIXPQYreABXpZB7tppZnWUxAelvEDh874=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDLyroqceuIpmDQk/gvHHzFup7NZbyzjXMdGrkDvZDE2H+6XTthCGSVNVmwqdyHE4yGw88jgW1TfWTAZxCxTfXD+xF72iYyBAsejgiyYY/0x9NKM/lrtw8mnRtkZzLyrA==` |
-> | Germany North | rsa-sha2-256 | `ppHnlruDLR73KzW/m7yc3dHQ0JvxzuC1QKJWHPom9KU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNNjCcDxjL3ess1QQEkb9n5bPYpxXpekd32ZX4oTcXXFDOu+tz/jpA8JZL8lOBAcBQ5n+mZF0Pot1o+B1JxQOHHiEZdcdKtLtPWrI2OQyxZnvo7sCBeSk+r/j3mjqpvq3+KpwoTZKpYF/oNRXVHU4VFs+MzvqWd6vgLXsDwtJrriojtkrWy0bTa4NjjN/+olsITxDmR0TGAu+epCJptdpKjTcgcn25QuIKy37/zVW8BJ5QsZmIRwvlCYxj11UOAoDcbapJcnzJYpOmQTNpdzkazjViX17DZW17Jmfhc6Dk3H+TEseilkbq1ZjsUyGBBxklWHid7+BgKVXOoIgG6+0x` |
-> | Germany North | rsa-sha2-512 | `m/OFTRHkc3HxfhCKk1+jY1rPJrT9t4FYtQ/Wmo3MOUE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDkN3CN1VITaHy/CduQaZIkuKSC/+oX19sYntRdgCblJlIzUBmiGlhmnKXyhA29lwWWAYxSbUu0jEJUZfQ6xdQ4uALOb815DLNZtVrxqSm4SjvP5anNa7zRyCFfo4V8M4i6ji6NB+u+PwH5DOhxKLu6/Ml9pF8hWyfLRft8cg4wORLLhwGt2+agizq7N7vF2nmLBojmS0MMmpH5ON/NFshYIDNKPEeK9ehpaARf4fuXm440Zqzy/FfpptSspJIhbY2zsg4qGQgYGZyuRxkLzYgtD/uKW5ieFwXPn+tvVeVzezZTmGMoDlkPX18HSsuNaRkdnwpX8yk1/uoBCsuOFSph` |
-> | Germany North | ecdsa-sha2-nistp256 | `F4o8Z9llB5SRp0faYFwKMQtNw/+JYFKZdNoIuO7XUU0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMoIo/OXMP7W5a5QRDAVBo+9YQg4YBrl3J7xM91PUGUiALDE1Iw8Uq4e1gLiSNA6A46om5yY/6oGj4iqEk8Ar8Y=` |
-> | Germany North | ecdsa-sha2-nistp384 | `BgW5e9lciYG1oIxolnVUcpdh3JpN/eQxfOyeyuZ6ZjI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ69kH0urhCdcMMaqpID2m+u8MECowtNlYjYXoSUn6oEhj7VPxvCRZi5R02vHrtrTJslsrbpgYHXz+/jSLplKpccQGJFaZso9WWgEJH1k7tJOuOv0NIjoBTv7fY5IxeAvQ==` |
-> | Australia Central 2 | rsa-sha2-256 | `sqVq1zdzD3OiAbnDjs70/why2c3UZOPMTuk5sXeOu4Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDKNZVZ5RVnGa0fYSn+Nx3tnt526fmMf+VufOBOy5/hEnqV6mPKXMiDijx2gFhKY4nyy957jYUwcqp1XasweWX6ISuhfg4QWcygW0HgmVdlSDezobPDueuP0WdhVsG3vXGbEYnrZOUR5kQHagX/wWf6Diy1J5Cn2ojIKGuSuGY/9bu3bnZzKt08fj+gQCEd1GxpPoBUfjF/73MM57IRhdmv919rsGD5nsyZCBmqFoKlLH/gKYZ4B3hylqf/6gER7OeZmG2S/U/fRAN0hVK7RkHNf2CFoCmuxXS6r87BznT5vF3nmd7tsf0akaxLjfWRbKLMWWyZkzU4/jijpbDDuu1x` |
-> | Australia Central 2 | rsa-sha2-512 | `p6vLHCTBcKOkqz7eiVCT6pLuIg7h4Jp41lvL/WOQLWM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDcqD2zICW1RLKweRXMG9wtOGxA5unQO/nd9yslfOIo54Ef0dlhAXGFPmCd3Yj60Gt/CIpqguzKLGm4D3nf19KjXE8V59cD7/lN6mVrFrm+6CU44JAzKN9ERUelxhSQKi/dsDR773wt4jsAt4SLBRrs19RC2fkYnxZgC/LzNZKXXY3FFb06uwheJjGOHyeQJbGpaV3hlelhOSV1UF2JAB8v6d8+9+S+b666EcpQ70JtxtA8h1s30hqhTKgYdRYMPfz7lqKXvact2NBXlqYRPod5cLW7lYBb2LzqTk1D44d8cwDknX2pYQJpgeFwJhB6SO9mF/Ot+jk+jV/CxUI55DPd` |
-> | Australia Central 2 | ecdsa-sha2-nistp256 | `m7Go9P1bwcPHAcZzRSXdwYroDIdZzt0jhxkJW42YGKY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHp76felOL7GAHcJoW6vcCS83jkeR6RdFCwUk0Jf6v7SFoqYNZfTaryy2n0vwG1W1dAyHvOjB1+gzTZOkHN/cAI=` |
-> | Australia Central 2 | ecdsa-sha2-nistp384 | `9Jc39OueTg3pQcq8KJgzsmPlVXxILG24Euw27on7SkY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEduOE61sSP2BozvJ6QLtRDZ7j0TenX7PjcpPVtYIQuKQ+h3qakXFwFnj8N3m8+LpTXYO41mgX7N02Rl12QvD7lDpUgHUChaNpUcMcSpm5qvguLyG6XZg2BDNd6pyx+fpw==` |
-> | South Africa West | rsa-sha2-256 | `aMMzaNmXR+V1NrwLmovyvKwfbKQ6aAKYiA5n8ETYQmU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDGhe98UTnljsYaeJwtP3ABvT/hZP6Mp1r5beyJ2SWpdqZSZaKC+UQlWLu6WhLxLZ+5snB+YAlC56u4qOdDHLoid6vbAR/FPIcJlvQfcFJD88nihv9sq1kUX3JXrh0ZUrl2/Zj71aNlM/RL1OnXK/Pg2E+wu4EfnQTrzlWMhR8bxlQA0jH1zmfFN/6BTwP2if29TNlQkWuW3uq3rccY1GA6n0QtlucanPNRzsBtAzsH5/oFuB5R4sD/Msw0itvWuQP4e0y+Vdov1My/rjK19xLce6AhWmmhwkn5qxHdIy158C4cWnSkQvkYzPnwsi7KT9WRH7vfr8qD9zlA5mO+IDxJ` |
-> | South Africa West | rsa-sha2-512 | `Uc7QB0fT4NGyBp34GCAt8G4j1ZBXh/3Wa2YRlILu818=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCijtmaOHIcXjI07fVugz1M33+amlOEqdqtVgOlLmFRKSehPW2+6iGpAjQVwzsYOx32Hp5O07xj/PhiFsbBBqZXGHmuSIOJYa7tQSFvwclO+JW/kuoELXQLwnHxUfPyq4tYoj83GSZ5k/KRlEtbmjEwcozMQVya/7MzulAeV4nN6PDxoLjXlfGEQU2ZCGz2neeisQEM8+hZNuEH+O9O03g7CW8bwiI1Y70/bnNq95xJ5F7lRpwtJNWlx+kmUiNpfXOUPxZAUsny7z1Ka5XKEB1fDP8E/jAtrSWrRPDJew8lFpQeWukwB5tf3F3bh1SuSKaSQqKBArnSpJizWxp0brZZ` |
-> | South Africa West | ecdsa-sha2-nistp256 | `pr1KB8apI+FNQLKkzvUXx0/waiqBGZPEXMNglKimUwA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvbvOfXQjT+/3+jQtW3FBAnPnaypYSUhZMkTTSfd7RQMmSxsLNmDooERhVuUTa7XCTlpDNTSPdnnaa6P1a+F6A=` |
-> | South Africa West | ecdsa-sha2-nistp384 | `A3RfMOd6dGgUlcrkXL1YRKNXIdAB8M1lF9qwmy6PjFg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaJmo4QGmo6pbLHOXh06Rz9inntdxmuOtVxlJBO1i/ZK5les/AuaILMW7oQCxOKvZs/xI+P0MWRfrNgWSSapy5hNuTkbl8IqO4pH/lO//zdaHmVBC1kPnujDM9znJs6Rg==` |
-> | Jio India West | rsa-sha2-256 | `hcy1XbIniEZloraGrvecJCvlw6zZhTOrzgMJag5b5DA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOBU9e1Ae68+ScLUA5O1gaZ3eq0EGqBIEqL3+QuN8LYpF3Bi/+m43kgjhgiOx5imPK6peHHaaT/nEBQFJKFtWyn8q2kspcDy1xvJfG8Jaks1GQG33djOItiHlKjRWMcyWFvisFE2vVkp3uO0xG4nMDLM2rFazkax+6XA5cf2iW2SfL6Trs4v1waakU/jQLA7vsrx14S+wGEdVINTSPeh5DHqkLzTa3m2tpXVcUA4CG8uQZM8E/3/y0BuIW0Ahl/P6dx35W1Al7gnaTqmx7+idcc/YVe0auorZWWdyclf1sjnAw6U8uMhWmQ0dZgDehDtshlHyx84vvJ1JOJs0+6S2l` |
-> | Jio India West | rsa-sha2-512 | `LPctDLIz/vqg4POMOPqI1yD9EE9sNS1gxY6cJoX+gEY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOH+IZFFfJN4lpFFpvp5x1lRzuOxLXs0WfpcCIACMOhCor2tkaa/MHlmPIbAqgZgth5NZIWpYkPAv7GpzPBOwTp3Bg5lUM7MXSayO/5+eJjMhB5PUCJ0We8Kfgf/U+vbaMIg9R8gJKutXrANd3sAWXMwWqKUw+ZX/AC7h58w04gb1s+lNOQbfhpqkw8+mrOj2eKH8zHYUJQBUYEyDHqirj565r7HhBtEZImn/ioJS+nYT5Zl/SNtW/ehhUsARG9p6O4wSy20Ysdk7b9Ur2YL0RyFa6QhWQeKktKPVFQuMMLRkYX7dv35uAKq8YN833lLjGESYNdCzYmGTJXk5KYZ8B` |
-> | Jio India West | ecdsa-sha2-nistp256 | `mBx6CZ+6DseVrwsKfkNPh9NdrVLgwhHT4DEq9cYfZrg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXqhYQKwmkGb8qRq52ulEkXrNVjzVU4sGEuRFn4xXK8xanasbEea3iMOTihzMDflMwgTDmVGoTKtDXy8tQ+Y8k=` |
-> | Jio India West | ecdsa-sha2-nistp384 | `lwQX9Yfn7uDz/8gXpG4sZcWLCAoXIpkpSYlgh8NpK1E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLKY2+wwHIzFOfiKFKFHyfiqjUrscm0qwYTAirNPE1GI6OwAjconeX072ecY3/1G0dE7kAUaWbCKWSO3DqM4r6O+AewzxJoey85zMexW23g2lXFH7HkYn9rldURoUdk31A==` |
-> | Sweden South | rsa-sha2-256 | `kS1NUherycqJAYe8KZi8AnqIQ9UdDbpoEpcdHlZ702g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ+Imy6VuOvZsel9SCoMmej4kFvP8MDgDY9EdgfkgpjEfOSk+vmXBMCFtthI7sHRggkuXQE5v6OkOPwuWuVWjAWmclfFIz+TTNE5dUUY6L+UMipDEcwFxtufnY3AW0v2MW5lOFHWbx3w7605yb2AFQuZjvngkjdelhDpVpX9a0XdPa7zUYBwXdxWeteH+i4ZJ62sjlBGzYRjFhK/y1rUKR3BVR5xtP9ofzqE1n/TRLpViU8iy4bpsQntTWa71xVoTFtE29h3ESw4QG2lRCwk7NIf8efyNdR25+YpVGIysAxXG2smGAi2W/YXUjteCE7k3IU+ehHJdWKB3spUBSoF/V` |
-> | Sweden South | rsa-sha2-512 | `G+oX014UJXR0t1xHrCi715XuoHBkBxJMdH8hmVMilJc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCa5Ny0EUd8yLOgzczm6Zge+D39VY7hpG+et2ln0i/HdYLd1aijEiF/0RDgnJYxZM4RhPZHxrVZXJXLsLa2T+ud+cqifvsjudsUSCzWNY3pHAwKBTSuu8Po+TrJXx8b+ogg+EhTh1BZQzIVQbtLwqRFJ3beLtvhp+V1pPWOoXRiN6Rq+x6ciT37jOdp033rbEM3AtzWdRBvRxUiVxKoRXcDYwAAIb3joaZ26p69Vj7HpD0HAf7w9f70zIwIzqrW4RcHcP+RbDVzNukK8gWP66OgSKrAQgRmibS6SEJx4kgkaghiQfm1k1bXkTnlKlz956DHkTkpMQe21/eW1Prs+q1` |
-> | Sweden South | ecdsa-sha2-nistp256 | `8C148yiGdrJCGF6HpDzINhGkB5AAyWDqkauJClRqCZs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEREKXJT7obM0RXGFrUPmJOoEpJD8T+QT29UEt3/jZrUzVzqXLV/9+VK0xqv1suljhUoUoClBdKqx5E/Sv1kSV4=` |
-> | Sweden South | ecdsa-sha2-nistp384 | `ra8+vb8aSkTBsO0KAxDrl2lN9p41BxymtRU6Seby83M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMby6y3wzWnzE304DjregQcSqKTsoMx2vPGk7OlBtjFKoubZlBRQH4jQrtPbJv/Hpf8f+D0JmvPe5G75yZFG1BcP5eB4aonAr0NNCw+3sCb50JVpoT4yoT787KKYf+5qg==` |
-> | Jio India Central | rsa-sha2-256 | `DmNCjG1VJxWWmrXw5USD0pAnJAbEAVonkUtzRFKEEFI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/x6T0nye3elqPzK8IF+Q70bLn2zg4MVJpK3P6YurtsRH8cv5+NEHyP0LWdeQWqKa9ivQRIQb8mHS+9KDMxOnzZraUeaaJLcXI0YV512kqzdevsEbH6BSmy8HhZHcRyXqH0PjxLcWJ5Wn9+caNhiVC40Oks7yrrZpAVbddzD9y/eJfguMVWiu1c8iZpYORss1QYo7JqVvEB6pLY03NXWM+xti1RSs+C6IEblQkPvnT3ELni9T1eZOACi12KGZHVLU9n27Nyg/fPjRheYSkw/lkkKDG0zvIQ7jr/k8SCHGcvtDYwRlFErFdGYBlIE888le2oDNNoKjJuhzN6S7ddpzp` |
-> | Jio India Central | rsa-sha2-512 | `m2P7vnysl2adTz/0P6ebSR7Xx8AubkYkex6cmD9C0ys=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDQHFDt8zTk+Hqh912v0U8CVTgAPUb8Kmuec+2orydM/InG+/zSuqQHsCZaD2mhEg8kevU8k2veF5z2sbko5TR/cghGg5dXlzz4YaKiNdNyKIGm2MdynXJofAtiktGhcB6ummctHqATfGSgkLJHtLvstzTVbVK1zgxXcB8hA52c2EPB1cN1TkAKEyiYNX7fKFe1EEPCxdx3fC/UyApKdD+D432HCW/g8Syj/n7asdB8EQqcoCT3ajh2wG2Qq0ZxjVbbrFImlr0VoTqLImJ4kZ9d2G7Rq2jqrlfESLAxKVDaqj+SjyWpzb3MHFSnwJZybCKXyTt+7BXpKeeGAcHpTs3F` |
-> | Jio India Central | ecdsa-sha2-nistp256 | `zAZ0A1pk0Xz8Vr/DEf8ztPaLCivXxfajlKMtWqAEzgU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDow29ds+BRDNTZNW70CEoxUjLiUF+IHgaDRaO+dAWwxL13d+MqTIYY4I0D7vgVvh0OegmYLXIWpCdR8LvVT7zA=` |
-> | Jio India Central | ecdsa-sha2-nistp384 | `OTG7jxUSj+XrdL28JpYAhsfr6tfO7vtnfzWCxkC/jmQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/Bb3/3u/UIcYGRLSl7YvOObb43LO5Ksi0ewWJU+MPsPWZr7OTTPs76TdwXMvD8+QuY8U9JxgQQrNmvbpabmbGENkllEgjGlev5P2mHy/IZZAUQhAeeCinCRvTsiOOoLw==` |
-> | Brazil Southeast | rsa-sha2-256 | `D+S7uHDWy0VPrRg9mlaK70PBPghBRCR1ru/KEsKpcjA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz86hzEBpBBVJqClTRb7dNolds/LShOM4jucPRawZrlKGEpeKv70Khk8BdI4697fORKavxjWK0O9tQpAJHtVGSv3Ajwb9MB7ERKncBVx/xfdiedrJNmF0G+p97XlCiwkAqePT/9IFaMy1OFqwl6LF7p7I0iBKZX0UgePwdiHtJnK0foTfsASNY4AEVcXHEuaulLFJKUjrr6ootblEbPBnC6IxTPj9oD+Nm0rtlCeD5JtCRFgKUj3LWybEog/AnnMNQDQ+vMPbsCnfsW/J/HQc+ebx3NtcumL+PIxqJw2Pk6mRpDdL+vs2nw/PtnPkdJ7DjIQYLypBSi3AFIONSlO15` |
-> | Brazil Southeast | rsa-sha2-512 | `C+p2eAPf5uec0yG+aeoVAaLOAAf0p8gbBNss3xfawPQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDV3WmETlQwzfuYoOsPAqfB9Z2gxsNecbpuwIBYwitLYKmJnT9Q3SNSgqnBiI1TKWyEweerdQaPnEvz9TeynGqSmLyGT0JJXQXFQCjTCgRHP4WD0Q+V7HWHnWYQ5c2e8tKEVA1jWt57dcrFlrGKEsywuMeEX21V13qQxK2acXVRWJPWgQCVwtiNpToc/cILOqL5XXKnSA81Ex7iRqw8QRAGdIozkryisucy+cStdJX6q+YUE5L62ENV8qMwJdwUGywEpKhXRg5VQKN0ciFqvyT/3cZQVF+NkUFGPnOi0bk4JzHxWxmQNTIwE7bmPsuniw5njD3ota/IPUHV2og190Xx` |
-> | Brazil Southeast | ecdsa-sha2-nistp256 | `dhADLmzQOE0mPcctS3wV+x2AUlv1GviguDQgSbGn/qs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYuseeJN3CvtyPSKOz5FSu7PoNul+o6/LB62/MW9CUW+3AmqtVANVox1XQ8eX/YhL0a5+brjmveZPQS6M09YyQ=` |
-> | Brazil Southeast | ecdsa-sha2-nistp384 | `mjFHELtgAJkPTWO4dc7pzVVnJ6WLfAXGvXN1Wq2+NPs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIwFI6bRmgPe0tN7Qwc03PMrlpBn+wBChcuofyWlKVd/Ec6t2dxHr/0ev0dqyKS2nbK7CAOQxSrV1NVYnYZKql/eC2sPqI1oxz7DzUtRnNKrXcH714ViN3RIY3DZA6rJOw==` |
-> | Norway West | rsa-sha2-256 | `Ea3Vj3EfZYM25AX1IAty30AD+lhXYZsgtPGEFzNtjOk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDuxOcTdADdJHI8MFrXV00XKbKVjXpirS3ZPzzIxw0mIFxFTArJEpXJeRfb0OZzQ1IABDwoasp1u+IhnY1Uv2VQ8mYAXtC3He08+7+EXJgFU/xQ8qFfM4eioAuXpxR7M7qV/0golNT4dvvLrY4zHxbSWmVB7cYJAeIjDU8dKISWFvMYjnRuiI7RYtxh/JI5ZfImU65Vfxi26vqWm51QDyF5+FmmXLUHpMFFuW8i/g8wSE1C3Qk+NZ3YJDlHjYqasPm4QidX8rHQ1xyMX9+ouzBZArNrVfrA4/ozoKGnPhe4GFzpuwdppkP4Ciy+H6t1/de/8fo9zkNgUJWHQrxzT4Lt` |
-> | Norway West | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` |
-> | Norway West | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` |
-> | Norway West | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
-> | US Gov Virginia | ecdsa-sha2-nistp256 | `RQCpx04JVJt2SWSlBdpItBBpxGCPnMxkv6TBrwtwt54` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7FjQs4/JsT0BS3Fk8gnOFGNRmNIKH0/pAFpUnTdh7mci4FvCS2Wl/pOi3Vzjcq+IaMa9kUuZZ94QejGQ7nY/U=` |
-> | US Gov Virginia | ecdsa-sha2-nistp384 | `eR/fcgyjTj13I9qAif2SxSfoixS8vuPh++3emjUdZWU` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKtxuygqAi2rrc+mX2GzMqHXHQwhspWFthBveUglUB8mAELFBSwEQwyETZpMuUKgFd//fia6NTfpq2d2CWPUcNjLu041n0f3ZUbDIh8To3zT7K+5nthxWURz3vWEXdPlKQ==` |
-> | US Gov Virginia | rsa-sha2-256 | `/ItawLaQuYeKzMjZWbHOrUk1NWnsd63zPsWVFVtTWK0` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC87Alyx0GHEYiPTqsLcGI2bjwk/iaSKrJmQOBClBrS23wwyH/7rc/yDlyc3X8jqLvE6E8gx7zc+y3yPcWP1/6XwA8fVPyrY+v8JYlHL/nWiadFCXYc8p3s8aNeGQwqKsaObMGw55T/bPnm7vRpQNlFFLA9dtz42tTyQg+BvNVFJAIb8/YOMTLYG+Q9ZGfPEmdP6RrLvf2vM19R/pIxJVq5Xynt2hJp1dUiHim/D+x9aesARoW/dMFmsFscHQnjPbbCjU5Zk977IMIbER2FMHBcPAKGRnKVS9Z7cOKl/C71s0PeeNWNrqDLnPYd60ndRCrVmXAYLUAeE6XR8fFb2SPd` |
-> | US Gov Virginia | rsa-sha2-512 | `0SbDc5jI2bioFnP9ljPzMsAEYty0QiLbsq1qvWBHGK4` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNu4Oori191gsGb8rlj1XCrGW/Qtnj6rrSQK2iy7mtdzv9yyND1GLWyNKkKo4F3+MAUX3GCMIYlHEv1ucl7JrJQ58/u7pR59wN18Ehf+tU8i1EirQWRhlgvkbFfV9BPb7m6SOhfmOKSzgc1dEnTawskCXe+5Auk33SwtWEFh560N5YGC5vvTiXEuEovblg/RQRwj+` |
-> | US Gov Arizona | ecdsa-sha2-nistp256 | `NVCEDFMJplIVFSg34krIni9TGspma70KOmlYuvCVj7M` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKM1pvnkaX5Z9yaJANtlYVZYilpg0I+MB1t2y2pXCRJWy8TSTH/1xDLSsN29QvkZN68cs5774CtazYsLUjpsK04=` |
-> | US Gov Arizona | ecdsa-sha2-nistp384 | `CsqmZyqRDf5YKVt52zDgl6MOlfzvhvlJ0W+afH7TS5o` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwIkowKaWm5o8cyM4r6jW39uHf9oS3A5aVqnpZMWBU48LrONSeQBTj0oW7IGFRujBVASn/ejk25kwaNAzm9HT4ATBFToE3YGqPVoLtJO27wGvlGdefmAvv7q5Y7AEilhw==` |
-> | US Gov Arizona | rsa-sha2-256 | `lzreQ6XfJG0sLQVXC9X52O76E0D/7dzETSoreA9cPsI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCt8cRUseER/kSeSzD6i2rxlxHinn2uqVFtoQQGeyW2g8CtfgzjOr4BVB7Z6Bs2iIkzNGgbnKWOj8ROBmAV4YBesEgf7ZXI+YD5vXtgDCV+Mnp1pwlN8mC6ood4dh+6pSOg2dSauYSN59zRUEjnwOwmmETSUWXcjIs2fWXyneYqUZdd5hojj5mbHliqvuvu0D6IX/Id7CRh9VA13VNAp1fJ8TPUyT7d2xiBhUNWgpMB3Y96V/LNXjKHWtd9gCm96apgx215ev+wAz6BzbrGB19K5c5bxd6XGqCvm924o/y2U5TUE8kTniSFPwT/dNFSGxdBtXk23ng1yrfYE/48CcS5` |
-> | US Gov Arizona | rsa-sha2-512 | `dezlFAhCxrM3XwuCFW4PEWTzPShALMW/5qIHYSRiTZQ` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIAphA39+aUBaDkAhjJwhZK37mKfH0Xk3W3hepz+NwJ5V/NtrHgAHtnlrWiq/F7mDM0Xa++p7mbJNAhq9iT2vhQLX/hz8ibBRz8Kz6PutYuOtapftWz7trUJXMAI1ASOWjHbOffxeQwhUt2n0HmojFp4CoeYIoLIJiZNl8SkTJir3kUjHunIvvKRcIS0FBjEG9OfdJlo0k3U2nj5QLCORw8LzxfmqjmapRRfGQct/XmkJQM5bjUTcLW7vCkrx+EtHbnHtG+q+msnoP/GIwO3qMEgRvgxRnTctV82T8hmOz+6w1loO6B8qwAFt6tnsq2+zQvNdvOwRz/o+X8YWLGIzN` |
-> | US Gov Texas | ecdsa-sha2-nistp256 | `osmHklvhKEbYW8ViKXaF0uG+bnYlCSp1XEInnzoYaWs` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjvs/Cy4EODF21qEafVDBjL4JQ5s4m87htOESPjMAvNoZ3vfRtJy81MB7Fk6IqJcavqwFas8e3FNRcWBVseOqM=` |
-> | US Gov Texas | ecdsa-sha2-nistp384 | `MIJbuk4de6NBeStxcfCaU0o8zAemBErm4GSFFwoyivQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxPcJV0UdTiqah2XeXvfGgIU8zQkmb6oeJxRtZnumlbu5DfrhaMibo3VgSK7HUphavc6DORSAKdFHoGnPHBO981FWmd9hqxJztn2KKpdyZALfhjgu0ySN2gso7kUpaxIA==` |
-> | US Gov Texas | rsa-sha2-256 | `IL6063PFm771JPM4bDuaKiireq8L7AZP+B9/DaiJ2sI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDUTuQSTyQiJdXfDt9wfn9EpePO0SPMd+AtBNhYx1sTUbWNzBpHygfJlt2n0itodnFQ3d0fGZgxE/wHdG6zOy77pWU8i95YcxjdF+DMMY3j87uqZ8ZFk4t0YwIooAHvaBqw/PwtHYnTBr82T383pAasJTiFEd3GNDYIRgW5TZ4nnA26VoNUlUBaUXPUBfPvvqLrgcv8GBvV/MESSJTQDz1UegCqd6dGGfwdn2CWhkSjGcl17le/suND/fC5ZrvTkRNWfyeJlDkN4F+UpSUfvalBLV+QYv4ZJxsT4VagQ9n6wTBTDAvMu3CTP8XmAYEIGLf9YCbjxcTC+UywaL1Nk++x` |
-> | US Gov Texas | rsa-sha2-512 | `NZo9nBE/L1k6QyUcQZ5GV/0yg6rU2RTUFl+zvlvZvB4` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwNs5md1kYKAxFruSF+I4qS1IOuKw6LS9oJpcASnXpPi//PI5aXlLpy5AmeePEHgF+O0pSNs6uGWC+/T2kYsYkTvIieSQEzyXfV+ZDVqCHBZuezoM0tQxc9tMLr8dUExow1QY5yizj35s1hPHjr2EQThCLhl5M0g3s+ktKMb77zNX7DA3eKhRnK/ulOtMmewrGDg9/ooOa7ZWIIPPY0mUDs5Get/EWF1KCOABOacdkXZOPoUaD0fTEOhU+xd66CBRuk9SIFGWmQw2GiBoeF0432sEAfc3ZptyzSmCamjtsfihFeHXUij8MH8UiTZopV3JjUO6xN7MCx9BJFcRxtEQF` |
-> | US DoD East | ecdsa-sha2-nistp256 | `dk3jE5LOhsxfdaeeRPmuQ33z/ZO55XRLo8FA3I6YqAk` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7vMN0MTHRlUB8/35XBfYIhk8RZjwHyh6GrIDHgsjQPiZKUO/blq6qZ57WRmWmo7F+Rtw6Rfiub53a6+yZfgB4=` |
-> | US DoD East | ecdsa-sha2-nistp384 | `6nTqoKVqqpBl7k9m/6joVb+pIqKvdssxO5JRPkiPYeE` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOwn2WSEmmec+DlJjPe0kjrdEmN/6tIQhN8HxQMq/G81c/FndVVFo97HQBYzo1SxCLCwZJRYQwFef3FWBzKFK7bqtpB055LM58FZv59QNCIXxF+wafqWolrKNGyL8k2Vvw==` |
-> | US DoD East | rsa-sha2-256 | `xzDw4ZHUTvtpy/GElnkDg95GRD8Wwj7+AuvCUcpIEVo` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDrAT5kTs5GXMoc+fSX1VScJ4uOFAaeA7i1CVZyWCcVNJrz2iHyZAncdxJ86BS8O2DceOpzjiFHr6wvg2OrFmByamDAVQCZQLPm+XfYV7Xk0cxZYk5RzNDQV87hEPYprNgZgPuM3tLyHVg76Zhx5LDhX7QujOIVIxQLkJaMJ/GIT+tOWzPOhxpWOGEXiifi4MNp/0uwyKbueoX7V933Bu2fz0VMJdKkprS5mXnZdcM9Y/ZvPFeKaX55ussBgcdfjaeK3emwdUUy4SaLMaTG6b1TgVaTQehMvC8ufZ3qfpwSGnuHrz1t7gKdB3w7/Q7UFXtBatWroZ10dnyZ/9Nn4V5R` |
-> | US DoD East | rsa-sha2-512 | `3rvLtZPtROldWm2TCI//vI8IW0RGSbvlrHSU4e4BQcA` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDrAT5kTs5GXMoc+fSX1VScJ4uOFAaeA7i1CVZyWCcVNJrz2iHyZAncdxJ86BS8O2DceOpzjiFHr6wvg2OrFmByamDAVQCZQLPm+XfYV7Xk0cxZYk5RzNDQV87hEPYprNgZgPuM3tLyHVg76Zhx5LDhX7QujOIVIxQLkJaMJ/GIT+tOWzPOhxpWOGEXiifi4MNp/0uwyKbueoX7V933Bu2fz0VMJdKkprS5mXnZdcM9Y/ZvPFeKaX55ussBgcdfjaeK3emwdUUy4SaLMaTG6b1TgVaTQehMvC8ufZ3qfpwSGnuHrz1t7gKdB3w7/Q7UFXtBatWroZ10dnyZ/9Nn4V5R` |
-> | US DoD Central | ecdsa-sha2-nistp256 | `03WHYAk6NEf2qYT62cwilvrkQ8rZCwdi+9M6yTZ9zjc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVsp8VO4aE6PwKD4nKZDU0xNx2CyNvw7xU3/KjXgTPWqNpbOlr6JmHG67ozOj+JUtLRMX15cLbDJgX9G9/EZd8=` |
-> | US DoD Central | ecdsa-sha2-nistp384 | `do10RyIoAbeuNClEvjfq5OvNTbcjKO6PPaCm1cGiFDA` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKYiTs82RA54EX24BESc5hFy5Zd+bPo4UTI/QFn+koMnv2QWSc9SYIumaVtl0bIWnEvdlOA4F2IJ1hU5emvDHM2syOPxK7wTPms9uLtOJBNekQaAUw61CJZ4LWlPQorYNQ==` |
-> | US DoD Central | rsa-sha2-256 | `htGg4hqLQo4QQ92GBDJBqo7KfMwpKpzs9KyB07jyT9w` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDVHNOQQpJY9Etaxa+XKttw4qkhS9ZsZBpNIsEM4UmfAq6yMmtXo1EXZ/LDt4uALIcHdt3tuEkt0kZ/d3CB+0oQggqaBXcr9ueJBofoyCwoW+QcPho5GSE5ecoFEMLG/u4RIXhDTIms/8MDiCvbquUBbR3QBh5I2d6mKJJej0cBeAH/Sh7+U+30hJqnrDm4BMA2F6Hztf19nzAmw7LotlH5SLMEOGVdzl28rMeDZ+O3qwyZJJyeXei1BiYFmOZDg4FjG9sEDwMTRnTQHNj2drNtRqWt46kjQ1MjEscoy8N/MlcZtGj1tKURL909l3tUi3fIth4eAxMaAkq023/mOK1x` |
-> | US DoD Central | rsa-sha2-512 | `ho5JpqNw8wV20XjrDWy/zycyUMwUASinQd0gj8AJbkE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCT/6XYwIYUBHLTaHW8q7jE2fdLMWZpf1ohdrUXkfSksL3V8NeZ3j12Jm/MyZo4tURpPPcWJKT+0zcEyon9/AfBi6lpxhKUZQfgWQo7fUBDy1K4hyVt9IcnmNb22kX8y3Y6u/afeqCR8ukPd0uBhRYyzZWvyHzfVjXYSkw2ShxCRRQz4RjaljoSPPZIGFa2faBG8NQgyuCER8mZ72T3aq8YSUmWvpSojzfLr7roAEJdPHyRPFzM/jy1FSEanEuf6kF1Y+i1AbbH0dFDLU7AdxfCB4sHSmy6Xxnk7yYg5PYuxog7MH27wbg4+3+qUhBNcoNU33RNF9TdfVU++xNhOTH1` |
+> | UAE Central | rsa-sha2-256 | `GW5lrSx75BsjFe4y4vwJFdg454fndPjm4ez2mYsG3zs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQiEpj9zkZ8F3iDkDDbZV4A3+1RC/0Un6HZVYv5MCVYKqsVzmyn+7rbseUTkZMO/EqgF8+VWlwSU5C2JOesZtKXAgNzXBSOER3NbiucB5v1b1cC+8Qo4C2+iTHXyJSKxV0bTz55crCfhKO1KTQw3uZoYh6jE9xI1RzCI1J4qP+afZQQhn3H+7q+8kTMhmlQrfKuMWennoWZih+uTe9LPHjlvzwYiXkS2sOIlKtx8eLDJJg2ONl7YKSE4XVq7K33807Gz5sCD/ZV+Bn+NyP2yX14QKcyI97pkrFdcJf2DZi7LdTuEVPx3qK/rHzmzotwe6ne6sfV+FJpowUUTbKgT5` |
+> | UAE Central | rsa-sha2-512 | `zflL4olL2bga9JCxPA/qfvT2jSYmIfr2RY6GagpUjkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAtxSG7lHzGFclWVuErRZZo6VG5uaWy1ikhb67rJSXdTLuSGDU+4Boj4wKxK0EyVKXpdQ3VrIwC4rOEy/lKAlnI2PrkrMjluau2aetlwW0hCBKAcgEOpMeMJJxCvv9EVatmEhvCe0ARyVM539058da9LzoZ2geFnFIbh3t8fNCaJZTNSS5PW1SLkspSqYXUYJWzu8Kx9l3LTzlmJT1DukKLIKj5ZDwuzOIN5m1ePYp4MzfIeBN6ys8df8HqXLoEXE+vOZWOzwkPVWoTsYvwB8j9+FHECAVf4Gcm8sPvRZA/RKDn1dGW2THzVw/VI/F87fFC7stLmZJ1v+a9TTFE649` |
+> | UAE North | ecdsa-sha2-nistp256 | `vAuGgsr0IQnOLUaWCCOBt+Jg0DV9C6rqHhnoJnwORM8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYpnxgANJNJ4IIvSwvrRtjgbejCpTc3D+l5iob7dBK4KQ7MB40rq+CtdBDGZ1J7d6oCevW6gb1SIxU/PxCuvMI=` |
+> | UAE North | ecdsa-sha2-nistp384 | `A5fa4Pzkdl0H2kVJxlNiEQkOhPzBYkrfQrcviQUUWUA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOz4ENDgFpo0547D5XCRCJLg8brp+iUyId2IdEhZAhuNX9spxlVe6uSkiQbd+8D5hHPVNuLFTFx7v2wXObycM8tr/WGejn/934BvSUhM6lDpU+d5n+ZcxEEhp4gDiy1l+Q==` |
+> | UAE North | rsa-sha2-256 | `Vazz+KIADh85GQHAylrlI1tTY8/ckoRqTe/kbLXPmd0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDRGQHLLR9ruI0GcNF2u3EpS2CbHdZlqcgSR1bkaOXA9ZufHyxuhIpzG2IgYQ8wrjGzIilYds6UIH7CAw9FApKLNpLR6qdm8qjM0tJiyHLm3KloU27FfjCQjE9JhmsbTWCRH3N52A9HXIdiVCE3BBSoXhg/mF+3cvm1JvabKr1twoyfbUgDFuF7fDyhSxJ/MTig8SpgzWqcd5J+wbzjXG0ob2yWVhwtrcB6k97g25p77EKXo3VhSs0jN7VR+SAHupVwWsUgx4fZzi2I5xTUTBdOXW+e3EiXytPL2N5N/MtFKVY/JVhFkKkcTRgeuOds51tkByteSkc32kakcUxw6CjJ` |
+> | UAE North | rsa-sha2-512 | `NDeTZPUor2OuTdgSjLLhSaqJiTJUdfwTAzpkjNbNQUY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAx9LfiyVmWwGD/rjQeHiHTMWYaE/mMP6rxmfs9/I4wEFkaTBbc4qewxUlrB1jd7Se2a0kljI3lqQJ9h+gjtH/IaVTZOKCOZD8yV9Dh4ZENRqH/TOVz6LCvZifVbjUtxRtbvOuh1lJIVBSBFciNr0HThFMnTEIwcs5V48EFIT6eS9Krggu+cWAX2RbjM0VQnIgkA5BeM33MjSjNz86zhO+e7e1lhflPKL5RTIswtWbwatgkyvfM33pJql/zJz+3/usSpIA/pgWw23c8WziYXiHPTShJXN+N+9iLKf9YUkpzQUZSaRw8XDPyjJNx327Lot0Bh4YLpe37R0SrOvitBsN` |
+> | UK South | ecdsa-sha2-nistp256 | `weMVzOmQnlMdMp5XBoU9SdN5meBbx/8nvA8dB45w8Ck=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEnBllEm4/HsTP+ZMhlc8YnSAYWF23tibZDqGxf0yBRTU/ncuaavuQdIJ5TcJb0NcXG7skEmq3StwHT0FPMWN8Y=` |
+> | UK South | ecdsa-sha2-nistp384 | `HpsZ8zoOCCsUbpD3nAOtxpuKIvn0L8KGyg1KMLuMUqU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd/672brwX1kOhH31ZTdBRj+bcEmemcdmTEe0J88cJ3RRQy7nDFs25UrnR+h3P0ov9Uq24EJQS8auxRgNCUJ3i3ZH9QjcwX/MDRFPnUrNosH8NkcPmJ/pezVeMJLqs3Qw==` |
+> | UK South | rsa-sha2-256 | `3nrDdWUOwG0XgfrFgW27xhWSizttjabHXTRX8AOHmGw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdLm+9OROp5zrc6nLKBJWNrTnUeCeo8n1v9Y3qWicwYMqmRs/sS9t5V3ABWnus4TxH3bqgnQW3OqWLgOHse/3S+K1wGERmBbEdKOl7A7kQ9QgDkWEZoftwJ9hp+AMVTfCYhcOOsG+gW021difNx+WW2O5TldL31pk+UvdhnQKRHLX31cqx5vuUmiwq4mlbBx+rY8B/xngP2bzx/oYXdy1I9fZbWWAQ6FwJBav1sSWL0l7snRdOsy5ASeMnYollEw1IATwYeUv8g3PzrryZuru+7gu/Ku9w8d5jbFyI6Up4KLwjs/gZNuqQ5dif7utiQYbVe4L0TPWOmuLA25JJRZaF` |
+> | UK South | rsa-sha2-512 | `Csnl8SFblkdpVVsJC1jNVSyc2eDWdCBVQj9t6J3KHvw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIwNEfrP6Httmm5GoxwprQ57AyD6b3EOVe5pTGQWIOzxnrIw2KnDPL07KNa33xZOmtXro5PYyhr5eNXUkFiQMEe+RblilZSNAvc4MHbp2TVD0L9N7Pdy2SetoF4m5BCXdC48kZntqgkpzXoDbFiaAVln5zQCHB5fOuBPS1id8+k3zqG0o+K0MHb6qcbYV8gdQeOn/PlJzKE4M0Ie8na3aWHdGvfJjDdK/hNN0J+eUK8qIb9KCJkSMDj/l3rnue9L8XgeKKA2Pkvh3nch4VBXCcCsDVhgSf+aoiJ0Fy8GVOTk2s7QDMzD9y37D9V2OPl66q4pjFGOfK0mJmrgqxWNy5` |
+> | UK West | ecdsa-sha2-nistp256 | `bNYdYVgicvl1yaOR/1xLqocxT8bamjezGFqFdO6Od0I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWKoJuxB3EO5bKjxnviF+QTv3PBSViD1SNKbfj0qYfAjObQKZuiqcFYeDoPrkhk9jfan2jU6oCEN4+KDwivz3k=` |
+> | UK West | ecdsa-sha2-nistp384 | `6V8vLtRf6I5BjuLgToJ1cROM72UqPD+SC0N9L9WG6PA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+7R/5qSfsXACmseiErhfwhiE7Rref/TNHONqiFlAZq2KCW3w3u8+O4gpJEflibMFP/Mj5YeoygdPUwflFNcST9K+vnkEL3/lqzoGOarGBYIKtEZwixv3qlBR+KyoRUkw==` |
+> | UK West | rsa-sha2-256 | `2NQ5z6fQjt4SZKdViPS+I2kX7GoXOx3fVE81t8/BCVE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNq0xtA0tdZmkSDTNgA05YLH5ZuLFKD7RbruzuL4KVU2In0DQUtJkVqRXIaB3f+cEBTs9QrMUqolOdCCunhzosr5FvCO3I6HZ8BLnVNshtUBf2C1aT9yonlkdiIyc2pCHonds8vHKC4SBNu3Jr584bhyan8NuzJqzPCnKTdHwyWjf8m5mB4liK/ka4QGiaLLYTAjCCXmaXXOVZI2u0yDcJQXAjAP5niCOQaPHgdGk6oSjs0YKB29V+lIdB8twUnBaJA9jgECM2brywksmXrAyUPnIFD6AVEiFZsUH3iwgFAH7O6PLZTOSgJuu994CNwigrOXTbABfpH2YMjvUF///5` |
+> | UK West | rsa-sha2-512 | `MrfRlQmVjukl5Q5KvQ6YDYulC3EWnGH9StlLnR2JY7Q=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQClZODHJMnlU29q0Dk1iwWFO0Sa0whbWIvUlJJFLgOKF5hGBz9t9L6JhKFd1mKzDJYnP9FXaK1x9kk7l0Rl+u1A4BJMsIIhESuUBYq62atL5po18YOQX5zv8mt0ou2aFlUDJiZQ4yuWyKd44jJCD5xUaeG8QVV4A8IgxKIUu2erV5hvfVDCmSK07OCuDudZGlYcRDOFfhu8ewu/qNd7M0LCU5KvTwAvAq55HiymifqrMJdXDhnjzojNs4gfudiwjeTFTXCYg02uV/ubR1iaSAKeLV649qxJekwsCmusjsEGQF5qMUkezl2WbOQcRsAVrajjqMoW/w1GEFiN6c70kYil` |
+> | US DoD Central | ecdsa-sha2-nistp256 | `03WHYAk6NEf2qYT62cwilvrkQ8rZCwdi+9M6yTZ9zjc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCVsp8VO4aE6PwKD4nKZDU0xNx2CyNvw7xU3/KjXgTPWqNpbOlr6JmHG67ozOj+JUtLRMX15cLbDJgX9G9/EZd8=` |
+> | US DoD Central | ecdsa-sha2-nistp384 | `do10RyIoAbeuNClEvjfq5OvNTbcjKO6PPaCm1cGiFDA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKYiTs82RA54EX24BESc5hFy5Zd+bPo4UTI/QFn+koMnv2QWSc9SYIumaVtl0bIWnEvdlOA4F2IJ1hU5emvDHM2syOPxK7wTPms9uLtOJBNekQaAUw61CJZ4LWlPQorYNQ==` |
+> | US DoD Central | rsa-sha2-256 | `htGg4hqLQo4QQ92GBDJBqo7KfMwpKpzs9KyB07jyT9w=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDVHNOQQpJY9Etaxa+XKttw4qkhS9ZsZBpNIsEM4UmfAq6yMmtXo1EXZ/LDt4uALIcHdt3tuEkt0kZ/d3CB+0oQggqaBXcr9ueJBofoyCwoW+QcPho5GSE5ecoFEMLG/u4RIXhDTIms/8MDiCvbquUBbR3QBh5I2d6mKJJej0cBeAH/Sh7+U+30hJqnrDm4BMA2F6Hztf19nzAmw7LotlH5SLMEOGVdzl28rMeDZ+O3qwyZJJyeXei1BiYFmOZDg4FjG9sEDwMTRnTQHNj2drNtRqWt46kjQ1MjEscoy8N/MlcZtGj1tKURL909l3tUi3fIth4eAxMaAkq023/mOK1x` |
+> | US DoD Central | rsa-sha2-512 | `ho5JpqNw8wV20XjrDWy/zycyUMwUASinQd0gj8AJbkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCT/6XYwIYUBHLTaHW8q7jE2fdLMWZpf1ohdrUXkfSksL3V8NeZ3j12Jm/MyZo4tURpPPcWJKT+0zcEyon9/AfBi6lpxhKUZQfgWQo7fUBDy1K4hyVt9IcnmNb22kX8y3Y6u/afeqCR8ukPd0uBhRYyzZWvyHzfVjXYSkw2ShxCRRQz4RjaljoSPPZIGFa2faBG8NQgyuCER8mZ72T3aq8YSUmWvpSojzfLr7roAEJdPHyRPFzM/jy1FSEanEuf6kF1Y+i1AbbH0dFDLU7AdxfCB4sHSmy6Xxnk7yYg5PYuxog7MH27wbg4+3+qUhBNcoNU33RNF9TdfVU++xNhOTH1` |
+> | US DoD East | ecdsa-sha2-nistp256 | `dk3jE5LOhsxfdaeeRPmuQ33z/ZO55XRLo8FA3I6YqAk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7vMN0MTHRlUB8/35XBfYIhk8RZjwHyh6GrIDHgsjQPiZKUO/blq6qZ57WRmWmo7F+Rtw6Rfiub53a6+yZfgB4=` |
+> | US DoD East | ecdsa-sha2-nistp384 | `6nTqoKVqqpBl7k9m/6joVb+pIqKvdssxO5JRPkiPYeE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOwn2WSEmmec+DlJjPe0kjrdEmN/6tIQhN8HxQMq/G81c/FndVVFo97HQBYzo1SxCLCwZJRYQwFef3FWBzKFK7bqtpB055LM58FZv59QNCIXxF+wafqWolrKNGyL8k2Vvw==` |
+> | US DoD East | rsa-sha2-256 | `3rvLtZPtROldWm2TCI//vI8IW0RGSbvlrHSU4e4BQcA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDv+66WtA3nXV5IWgTMPK9ZMfPzDaC/Z1MXoeTKhv0+kV+bpHq30EBcmxfNriTUa8JZBjbzJ0QMRD+lwpV1XLI1a26JQs3Gi1Rn+Cn+mMQzUocsgNN+0mG1ena2anemwh4dXTawTbm3YRmb5N1aSvxMWcMSyBtRzs7menLh/yiqFLr+qEYPhkdlaxxv4LKPUXIJ1HFMEq/6LkpWq61PczRrdAMZG9OJuFe/4iOXKLmxswXbwcvo6ZQPM6Yov1vljovQP2Iu4PYXPWOIHZe4Vb90IuitCcxpGYUs0lxm4swDRaIx0g+RLaNGQ7/f/l+uzbXvkLqdzr5u6gLYbb8+H6qp` |
+> | US DoD East | rsa-sha2-512 | `xzDw4ZHUTvtpy/GElnkDg95GRD8Wwj7+AuvCUcpIEVo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDrAT5kTs5GXMoc+fSX1VScJ4uOFAaeA7i1CVZyWCcVNJrz2iHyZAncdxJ86BS8O2DceOpzjiFHr6wvg2OrFmByamDAVQCZQLPm+XfYV7Xk0cxZYk5RzNDQV87hEPYprNgZgPuM3tLyHVg76Zhx5LDhX7QujOIVIxQLkJaMJ/GIT+tOWzPOhxpWOGEXiifi4MNp/0uwyKbueoX7V933Bu2fz0VMJdKkprS5mXnZdcM9Y/ZvPFeKaX55ussBgcdfjaeK3emwdUUy4SaLMaTG6b1TgVaTQehMvC8ufZ3qfpwSGnuHrz1t7gKdB3w7/Q7UFXtBatWroZ10dnyZ/9Nn4V5R` |
+> | US Gov Arizona | ecdsa-sha2-nistp256 | `NVCEDFMJplIVFSg34krIni9TGspma70KOmlYuvCVj7M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKM1pvnkaX5Z9yaJANtlYVZYilpg0I+MB1t2y2pXCRJWy8TSTH/1xDLSsN29QvkZN68cs5774CtazYsLUjpsK04=` |
+> | US Gov Arizona | ecdsa-sha2-nistp384 | `CsqmZyqRDf5YKVt52zDgl6MOlfzvhvlJ0W+afH7TS5o=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwIkowKaWm5o8cyM4r6jW39uHf9oS3A5aVqnpZMWBU48LrONSeQBTj0oW7IGFRujBVASn/ejk25kwaNAzm9HT4ATBFToE3YGqPVoLtJO27wGvlGdefmAvv7q5Y7AEilhw==` |
+> | US Gov Arizona | rsa-sha2-256 | `lzreQ6XfJG0sLQVXC9X52O76E0D/7dzETSoreA9cPsI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCt8cRUseER/kSeSzD6i2rxlxHinn2uqVFtoQQGeyW2g8CtfgzjOr4BVB7Z6Bs2iIkzNGgbnKWOj8ROBmAV4YBesEgf7ZXI+YD5vXtgDCV+Mnp1pwlN8mC6ood4dh+6pSOg2dSauYSN59zRUEjnwOwmmETSUWXcjIs2fWXyneYqUZdd5hojj5mbHliqvuvu0D6IX/Id7CRh9VA13VNAp1fJ8TPUyT7d2xiBhUNWgpMB3Y96V/LNXjKHWtd9gCm96apgx215ev+wAz6BzbrGB19K5c5bxd6XGqCvm924o/y2U5TUE8kTniSFPwT/dNFSGxdBtXk23ng1yrfYE/48CcS5` |
+> | US Gov Arizona | rsa-sha2-512 | `dezlFAhCxrM3XwuCFW4PEWTzPShALMW/5qIHYSRiTZQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIAphA39+aUBaDkAhjJwhZK37mKfH0Xk3W3hepz+NwJ5V/NtrHgAHtnlrWiq/F7mDM0Xa++p7mbJNAhq9iT2vhQLX/hz8ibBRz8Kz6PutYuOtapftWz7trUJXMAI1ASOWjHbOffxeQwhUt2n0HmojFp4CoeYIoLIJiZNl8SkTJir3kUjHunIvvKRcIS0FBjEG9OfdJlo0k3U2nj5QLCORw8LzxfmqjmapRRfGQct/XmkJQM5bjUTcLW7vCkrx+EtHbnHtG+q+msnoP/GIwO3qMEgRvgxRnTctV82T8hmOz+6w1loO6B8qwAFt6tnsq2+zQvNdvOwRz/o+X8YWLGIzN` |
+> | US Gov Iowa | ecdsa-sha2-nistp256 | `nGg8jzH0KstWIW2icfYiP5MSC0k6tiI07u580CIsOdo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGlFqr2aCuW5EE5thPlbGbqifEdGhGiwFQyto9OUwQ7TPSmxTEwspiqI7sF4BSJARo9ZTHw2QiTkprSsEihCAlE=` |
+> | US Gov Iowa | ecdsa-sha2-nistp384 | `Dg+iVLxNGWu0DUgxBG4omcB9UlTjXvUnlCyDxLMli4E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAsubBoJjCp1gO26Xl0t0t0pHFuKybFFpE7wd4iozG0FINjCd4bFTEawmZs3yOJZSiVzLiP1cUotj2rkBK3dkbBw+ruX0DG1vTNT24D6k54LhzoMB0aXilDtwYQKWE+luw==` |
+> | US Gov Iowa | rsa-sha2-256 | `gzizFNptqVrw4CHf17tWFMBuzbpz2KqDwZLu/4OrUX8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDMv5Y4DdrKzfz2ZDn1UXKB6ItW9ekAIwflwgilf8CJxenEWINEK5bkEPgOz2eIxuThh9qE8rSR/XRJu3GfgSl9ATlUbl+HppXSF7S1V1DIlZbhA75JU/blUZ1tTTowrjwSn8dpnR2GQcBhywmdbra7QcJyHb+QuY9ZGXOu3ESETQBCD6eUsPoHCdQRtKk1H6zQELRPDi/qWCYhdNULx4j19CdItjMWPHfQPV9JEGGFxfBzDkWaUIDymsex44tLLxe9/tT8XlD/prT/zCLV0QE/UYxYI3h9R9zL7OJ5a92J72dBRPbptXIhz7UVeSBojNXnnOf+HnwAVbt1Fi/iiEQJ` |
+> | US Gov Iowa | rsa-sha2-512 | `Izq7UgGmtMU/EHG+uhoaAtNKkpWxnbjeeLCqRpIsuWA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDofdiTcVwmbYyk9RRTSuI6MPoX7L03a6eKemHMkTx2t7WtP7KqC9PlnmQ2Jo5VoaybMWdxLZ+CE8cVi70tKDCNgD8nAjKizm0iMk2AO5iKcj8ucyGojOngXO4JGgrf1mUlnQnTlLaC1nL487RDEez5rryLETGSGmmTkvIGNeSJUWIWqwDeUMg1FUnugyOeUmRpY7bl/PlUfZAm9rJJZ5DwiDGjn6dokk7S/huORGyUWeDVYGCSQug6VRC1UxnJclckgRIJ2qMoAZln4VdqZtpT3pBXaZqOdY52TQSAdi345bEHSCaGxyTdT14k3XjI/9q8BZ9IX7K4fbJCX0dbLHJp` |
+> | US Gov Texas | ecdsa-sha2-nistp256 | `osmHklvhKEbYW8ViKXaF0uG+bnYlCSp1XEInnzoYaWs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNjvs/Cy4EODF21qEafVDBjL4JQ5s4m87htOESPjMAvNoZ3vfRtJy81MB7Fk6IqJcavqwFas8e3FNRcWBVseOqM=` |
+> | US Gov Texas | ecdsa-sha2-nistp384 | `MIJbuk4de6NBeStxcfCaU0o8zAemBErm4GSFFwoyivQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGxPcJV0UdTiqah2XeXvfGgIU8zQkmb6oeJxRtZnumlbu5DfrhaMibo3VgSK7HUphavc6DORSAKdFHoGnPHBO981FWmd9hqxJztn2KKpdyZALfhjgu0ySN2gso7kUpaxIA==` |
+> | US Gov Texas | rsa-sha2-256 | `IL6063PFm771JPM4bDuaKiireq8L7AZP+B9/DaiJ2sI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDUTuQSTyQiJdXfDt9wfn9EpePO0SPMd+AtBNhYx1sTUbWNzBpHygfJlt2n0itodnFQ3d0fGZgxE/wHdG6zOy77pWU8i95YcxjdF+DMMY3j87uqZ8ZFk4t0YwIooAHvaBqw/PwtHYnTBr82T383pAasJTiFEd3GNDYIRgW5TZ4nnA26VoNUlUBaUXPUBfPvvqLrgcv8GBvV/MESSJTQDz1UegCqd6dGGfwdn2CWhkSjGcl17le/suND/fC5ZrvTkRNWfyeJlDkN4F+UpSUfvalBLV+QYv4ZJxsT4VagQ9n6wTBTDAvMu3CTP8XmAYEIGLf9YCbjxcTC+UywaL1Nk++x` |
+> | US Gov Texas | rsa-sha2-512 | `NZo9nBE/L1k6QyUcQZ5GV/0yg6rU2RTUFl+zvlvZvB4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwNs5md1kYKAxFruSF+I4qS1IOuKw6LS9oJpcASnXpPi//PI5aXlLpy5AmeePEHgF+O0pSNs6uGWC+/T2kYsYkTvIieSQEzyXfV+ZDVqCHBZuezoM0tQxc9tMLr8dUExow1QY5yizj35s1hPHjr2EQThCLhl5M0g3s+ktKMb77zNX7DA3eKhRnK/ulOtMmewrGDg9/ooOa7ZWIIPPY0mUDs5Get/EWF1KCOABOacdkXZOPoUaD0fTEOhU+xd66CBRuk9SIFGWmQw2GiBoeF0432sEAfc3ZptyzSmCamjtsfihFeHXUij8MH8UiTZopV3JjUO6xN7MCx9BJFcRxtEQF` |
+> | US Gov Virginia | ecdsa-sha2-nistp256 | `RQCpx04JVJt2SWSlBdpItBBpxGCPnMxkv6TBrwtwt54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD7FjQs4/JsT0BS3Fk8gnOFGNRmNIKH0/pAFpUnTdh7mci4FvCS2Wl/pOi3Vzjcq+IaMa9kUuZZ94QejGQ7nY/U=` |
+> | US Gov Virginia | ecdsa-sha2-nistp384 | `eR/fcgyjTj13I9qAif2SxSfoixS8vuPh++3emjUdZWU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKtxuygqAi2rrc+mX2GzMqHXHQwhspWFthBveUglUB8mAELFBSwEQwyETZpMuUKgFd//fia6NTfpq2d2CWPUcNjLu041n0f3ZUbDIh8To3zT7K+5nthxWURz3vWEXdPlKQ==` |
+> | US Gov Virginia | rsa-sha2-256 | `/ItawLaQuYeKzMjZWbHOrUk1NWnsd63zPsWVFVtTWK0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC87Alyx0GHEYiPTqsLcGI2bjwk/iaSKrJmQOBClBrS23wwyH/7rc/yDlyc3X8jqLvE6E8gx7zc+y3yPcWP1/6XwA8fVPyrY+v8JYlHL/nWiadFCXYc8p3s8aNeGQwqKsaObMGw55T/bPnm7vRpQNlFFLA9dtz42tTyQg+BvNVFJAIb8/YOMTLYG+Q9ZGfPEmdP6RrLvf2vM19R/pIxJVq5Xynt2hJp1dUiHim/D+x9aesARoW/dMFmsFscHQnjPbbCjU5Zk977IMIbER2FMHBcPAKGRnKVS9Z7cOKl/C71s0PeeNWNrqDLnPYd60ndRCrVmXAYLUAeE6XR8fFb2SPd` |
+> | US Gov Virginia | rsa-sha2-512 | `0SbDc5jI2bioFnP9ljPzMsAEYty0QiLbsq1qvWBHGK4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNu4Oori191gsGb8rlj1XCrGW/Qtnj6rrSQK2iy7mtdzv9yyND1GLWyNKkKo4F3+MAUX3GCMIYlHEv1ucl7JrJQ58/u7pR59wN18Ehf+tU8i1EirQWRhlgvkbFfV9BPb7m6SOhfmOKSzgc1dEnTawskCXe+5Auk33SwtWEFh560N5YGC5vvTiXEuEovblg/RQRwj+9oQD1kurYAelyr76jC/uqTTLBTlN7k0DBtuH305f7gkcxn+5Tx1eCvRSpsxD7lAbIoCvQjf95QvOzbqRHl6wOeEwm03uK8p9BLuzxlIc0TTh4CE8KrO5bciwTVi1xq7gvqh912q0OvWpg3XBh` |
+> | West Central US | ecdsa-sha2-nistp256 | `rkHjcTK2BvryQAFvjugTHdbpYBGfOdbBUNOuzctzJqM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMjEAUTIttG+f5eocMzRIhRx5GjHH7yYPzh/h9rp9Yb3c9q2Yxw/j35JNWxpGwpkb9W1QG86Hjt4xbB+7q/D8c=` |
+> | West Central US | ecdsa-sha2-nistp384 | `gS9SYvaH6dCqyugArvFb13cwi8q90glNaK+fyfPie+Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0HqM8ubcDBRMwuruX5zqCWSp1DaLcS9cA9ndXbQHzb2gJ5bJkjzxZEeIOM+AHPJB8UUZoD12It4tCRCCOkFnKgruT61hXbn0GSg4zjpTslLRYsbJzJ/q6F2DjlsOnvQQ==` |
+> | West Central US | rsa-sha2-256 | `aSNxepEhr3CEijxbPB4D5I+vj8Um7OO6UtpzJ/iVeRg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDWmd8Zd7dCfamYd/c1i4wYhhRnaIgUmK7z/o8ehr4bzJgWRbjrxMtbkD2y7ImjE2NIBG5xglz6v9z4CFNjCKUmoUl7+Le3Rsc5sJ/JmHAmEXb0uiDMzhq9f6Qztp+Pb9uqLfsPmm6pt1WOcpu+KNpiGtPWTL21sJApv6JPKU+msUrrCIekutsHtW6044YPXNVOnvUXv08BaPFhbpeGZ4zkrji0mCdGfz2RNcgLw0y3ZzgUuv0Lw+xV0/xwanJu4IOFI1X9Ab7NnoGMkqN/upBLJ4lRhjYVTNEv01IX2/r5WZzTn4c38Nfw4Ma3hR0BiLMTFfklFVGg2R64Z7IILoB` |
+> | West Central US | rsa-sha2-512 | `vVHVYoH1kU1IZk+uZnStj3Qv2UCyOR9qVxJfmTc20jQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Q8Tvvnea8hdaqt+SZr4XN1JIeR43nX6vrdhcS6yyfRgaTcEdKbAKQbwj9Fu3kq80c4F+SNzh1KQWlqLu3MJHSaSdQLN9RaHO1Dd+iVK1WgZtsPM9+6U7wupMZq8Hdmao5sqaMT5lj7g+win2J+Wibz7t8YwS7g2Xi+ode8tFPFKduZ5WvKLjI0EiAS4mvcyWEWca142E8fxV9TobUjAICfgtL4vCpmLYKnSL/kUgplD0ow86k/MHp9zghDLVSVDj8MGMra+IJEpgHOUrFNnuyua2WSJVuXR2ITfaecRKrGg7Z4IJzExPoQzDIWdCHptiGLAqvtKT0NE2rPj9U4Rp` |
+> | West Europe | ecdsa-sha2-nistp256 | `0WNMHmCNJE1YFBpHNeADuT5h+PfJ/jJPtUDHCxCSrO0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANx85rJLXM8QZi33y8fzvUbH+O5Cujn0oJFDGQrwhGJQTHsjIhd5bhFFgDvJ64/4SGrtP1LHDKLwr9+ltzgxIE=` |
+> | West Europe | ecdsa-sha2-nistp384 | `90g+JfQChjbb3OOV0YIGSVTkNotnefCV2NcSuMdPrzY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJgtrLFy2zsyhNvXlwHUmDBw1De++05pr1ZTxOIVnB17XZix0Euwq/wZTs0cE01c5/kYdAp+gQHEz594e7AQXBTCTqUiIS1a4+IXzfiCcShVfMsLFBvzjm9Yn8qgW9Ofg==` |
+> | West Europe | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
+> | West Europe | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
+> | West India | ecdsa-sha2-nistp256 | `t+PVPMSVEgQ3FPNploXz7mO25PFiEwzxutMjypoA2DM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCzR5dhW3wfN5bRqLfeZ2hlj7iRerE4lF5jk+iQl6HJHKXIsH6lQ63Wyg7wOzF65jNnvubAJoEmzyyYig+D3A+w=` |
+> | West India | ecdsa-sha2-nistp384 | `pLODd+3JNeLVcPYYnI0rSWoemhMWws0jLc3J8cV6+GU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL2PEknfZpPAT4ejqBJW8InHPELP1G7hGvroW5J3evJr8Qrr//voa6aH8ZF7Ak0HcVVOOCSzfjcEpZYjjrXrzuCOekU48DkSF8i1kKqV4iXejNNQ1ohDCbsiAyoxQMY9cA==` |
+> | West India | rsa-sha2-256 | `Fkh7r/tOJy1cZC6nI75VsO1sS3ugMvJ56U02uGGJHFo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDHCzLI51bbBLWK7TcXvXvEHaLQMzuYKEwyoS1/oC5EN3NsLZl4BV5d2zbLETFDjsky/btWiAkCvHuzxealxGgzw69ll90aWSOEY/epaYJvueOTvGy4+rJY8Xyc64VdHml8n3EEZTQmBEi3Tn6bViLwvC0iT2/noLeYGXh0/NL0T3BeblwSm3cNXyemkBQO/zyYcchqRtKJu8w8brYVZYFINlTeBu4LyDP1k9DMtuewGoeH8SmvDxUmiIGh2VDlPmXe3IkMR0nSgz10jMl3F0fei7ZJ+8zdCVbBuIqsJf+koJa/q9npstWGMFddMX3nR0A3HnG4v5aCAGVmfl11iC0J` |
+> | West India | rsa-sha2-512 | `xDtcgfElRGUUgWlU9tRmSQ58WEOKoUSKrHFDruhgDIM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCXehufp18nKehU4/GOWMkXJ87t22TyG5bNdVCPO2AgLJ88FBwZJvDurLgdPRDRuJImysbD7ucwk2WoDNC39q0TWtCRyIKTXfwvPmyG+JZKkT+/QfslMqiAXAPIQtVr2iXTeuHmn3tk+PksGXnTwb3oFV4wv40Wi1CbwvtCkUsBSujq4AR7BqksPnAqPrAyw+fFR3w4iD3EdtHBdIVULez3lkpMH/d04rf2bjh6lpI9YUdcdAmTGYeMtsf/ef8z0G2xpN2aniLCoCPQP85cooKq7YEhBDR8Lzem3vWnqS3gPc4rUrCJoDkGm0iL/4GCWRyG+RPi70WSdVysJ+HIm0Ct` |
+> | West US | ecdsa-sha2-nistp256 | `peqBbfcWZRW4QzLi69HicUUTwdtfW7/E9WGkgRMheAo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcTos/zmSn15kzn1Lk8N8QQh9hzOwqOSOf/bCpu6AQbWJtvjf1pHMuZlS2PpIV7G+/ImxXGpqpHqQlcD+Lg8Ro=` |
+> | West US | ecdsa-sha2-nistp384 | `sg63Cc3Mvnn9hoapGaEuZByscUEMa+xgw/3ruz49szk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzX2t9ALjFwpcBLm4B0+/D47PMrDya0KTva5w4E5GZNb5OwQujQvtUS2owd8BcKdMBeXx2S7qbcw6cQFffRxE+ZTr4J+3GoCmDM0PqraXxJHBRxyeK6vlrSR8ojRzIApA==` |
+> | West US | rsa-sha2-256 | `kqxoK1j6vHU8o8XyaiV87tZFEX9nE6o/yU0lOR5S6lE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAd7gh0mFw3iRAKusX3ai6OE0KO5O2CMlezvZOAJEH88fzWQ/zp0RZ1j7zJ8sbwslA6v3oRQ7Cx9ptAMTrL8SW4CZYcwETlfL3ZP39Llh+t7rZovIgvCDU0tijYvsa1W0T9XZgcwWEm6cWQzdm+i9U0KUdh7KgsubPAhGQ7xrOVEqgB9MYMofSSdIfKMt8K7xOSam6mhWiTSSIEGgeMTIZ9TgXkgAEJ8TNl3QHRoM8HxMnRFjtkXbT3EeSg6VOqi69Cei3hrmS64qvdzt2WwoTQwTFjxHocWGgA+Ow53wqWt8iYgOudpoB1neXiIcF4p0CN8zjvXNiRbZPg9lXFM9R` |
+> | West US | rsa-sha2-512 | `/PP9B/9KEa+QUyump1Yt05Lfk0LY/eyQhHyojh5zMEg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8R8bFe8QSTYKK+4evMpnlB8y0rQCqikTyviqD4rva7i4f1f/JxmptJQ/wkipHPXk6E7Du6oK/iJaZ+wjZ03tNIWwAGn0SdlTvWuwQwigK9k3JRlLYO+Uj/SSnBQWf8Dmp+cA6RDalteHpM2KwaUK65BHYC75bWKHaNntadTIU4kQ0BvFzmNRcJWL6otd5RkdYXjJWHu21zcv4EpRHGmVCD0na+UWce6UGDbLDtsZVJd2Q7IyeTrXpWxEO0fFN2Gu9gINfWC1FpuffGaqWSa4nK69n39lUKz4PUdu6Owmd9aNbLXknvtnW4+xGbX6oQa8wYulINHjdNz8Ez6nOiNZ9` |
+> | West US 2 | ecdsa-sha2-nistp256 | `rt5kaA0neIFDIWTP2MjWk9cOSapzEyafirEgPGt+9HM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKEKP+1QZf3GfEvkNZtzoKr05iAwGq+yPhUsVdyA7uKnwvTwZAi7NBr4hMkGIIdgQlGrMNNXKS0V+rhMNI1sH48=` |
+> | West US 2 | ecdsa-sha2-nistp384 | `g0vDKd4G5MKnxWewbnCYahCv1lZgfnZeEXfPAhv+trs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1+/Qu9Y1BqqV3seN0+0ITYCueuv0TFAnfG9z1Io8VYjmxLvdmaDvTi9kJ0ShjFJRKjbCfYKNekqYZDW4gBfmE9EyvMPI6VXPVLNY3TQ/z+Y7qO/oa28cSirW9fhp7vbA==` |
+> | West US 2 | rsa-sha2-256 | `ktnBebdoHk7eDo2tvJuv36XnMtfauhvF/r0uSo6DBfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDoskHzExtM+YSXGK6cBgmnLlXsZLWXkEexPKC7wHdt0kSqkIk9F31wD+2LefZzaTGfAmY5/EWrOsyBJvIgOoksH+ZPMnE9+TOWqy6vsS+Ml/ITvUkWajS1bKDPDSoIrCM1rQ9PlbgMQFg4o0FfyxLVCP7hcgvHO+aycOxkiDqtvwANvIn2Qwt7xwpIv1Mnc4OpcBSxbigb7ISlrvR9XWivE/piWjXS3IEYkGv7VitAlmWEoTt9L7K94bYol2nCXSXJ33X6xVVwVNpdxVtnUQBIRinN+vkOccgG0jvWtWPtrMjyDg/lyvr6lBdO/CQy4VO4VrIBuL6pjsS8KfIfTxKd` |
+> | West US 2 | rsa-sha2-512 | `i8v3Xxh/phaa5EyZEr5NM4nTSC/Rz7Nz0KJLdvAL0Ls=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOo5f0ACypThvoDEokPfzGJUxbkyMoQKca9AgEb3YkQ/lsYCfLtfGxMr2FTOGQyx5wfhOrN0B2SpI4DBgF3B0YSLK0omZRY7fpVPspWWHrsbTOJm/Fn7bWCM+p63xurZ6RUPCA6J1gXd3xbdW7WQXLGBJZ6fjG7PbqphIOfFtwcs/JvjhjhvleHrXOtfGw9b4Jr8W1ldtgKslGCU1mnUhOWWXUi+AhwGFTI0G/AShlpX8ywulk2R+fxet3SNGNQmjydnNkcsrBI/EMytO1kwoJB3KmLHEeijaQzK7iJxRDZEHlHWos6G7jwaGPI4rV5/S1N+xnG+OhCDYAUbunp5R` |
+> | West US 3 | ecdsa-sha2-nistp256 | `j4NlZP/wOXKnM3MkNEcTksqwBdF0Z46+rdi2Ic1Oj54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETvvRvehAQ2Ol0FfTt649/4Xsd0DQQ7vyZ666B92wRhvyziGIrOhy8klXHcijmRYRz3EjTHyXHZ4W8kcSKB4Lo=` |
+> | West US 3 | ecdsa-sha2-nistp384 | `DkJet/6Pm6EXfpz2Ut6aahJ94OvjG3R7+dlK0H4O1ts=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEu+HpgDp0a02miiJjD5qVcMcjWiZg5iIExECqD/KQVkfyraJ3WZ8P28JwB+IYlEGa2SHQxScDjG2t3iOSuU9BtpA0KK5PGtu3ZxhN1UmZbQgz6ANov7/+WHChg7/lhK0Q==` |
+> | West US 3 | rsa-sha2-256 | `pOKzaf3mrTJhfdR/9dbodyNza30TpQrYRFwKAndeaMo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC0KEDBaFSLsI28jdc854Rq6AL9Ku8g8L+OWQfWvb1ooBChMMd/oqVvFF9hkLzJ8nFPQw7+esVKys5uFwRTpBNuobF/RVtY0zLsNd+jkPxoUhs7Yl0hI2XXAPdp3uCsID56O+OrB7XbOsPCrJ2aXfiaRheRQg84/92c357uQ/epsva8XCMjIIGOAyEL6d4mnCNJ2Y0mXPJT1lfswoC8i2GSUKdJZhTLCe9zVDvTCTWuZJSH3A8nM3RVtnNgMXfNjh2blwW9YFv5BrMOXA205fahuDcPjwvXo9OMfEneDsrODmiEGYzbYLby/5/KPzz5OVn7BDJma6HL0z07i3PmEzXN` |
+> | West US 3 | rsa-sha2-512 | `KKcoWCeuJeepexnJCxoFqKJM88XrpsPKavXOoNFEGuY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNzhiVgDjCIarGEjKgmSxRh4vWjV6PxFbNK3cD0M4jWGlxPx/otJNEXCMee0hW29b7bwo2+aiyv3AEt7JYTeM/G9SHmenU6MTpqD/lC/LABtqTB7EV9FIFkc8MbbOvEkdTnRJw1d09MTqqwbkR9wq297AWggSzCuPDqMq+268UzsthMzODRVqW3yTr3M6vhlBCPfN5ptcvYwqRaa7Yhe4bdRZ+xYB5I2+ZMkalfn7SQiySSgAGjUJxrxK+LnJKSi32CfqTU8KjWNjCc40eAqexLFjg6AN9BtC0+ZYcD2KQmeqJ8oRCWw9r4CsaduSmcjc7XD75RKGdArjYzjeiVSlt` |
+
storage Storage Encrypt Decrypt Blobs Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md
+
+ Title: Encrypt and decrypt blobs using Azure Key Vault
+
+description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault.
+ Last updated : 11/2/2022
+ms.devlang: csharp
+++
+# Tutorial: Encrypt and decrypt blobs using Azure Key Vault
+
+In this tutorial, you learn how to use client-side encryption to encrypt and decrypt blobs using a key stored in Azure Key Vault.
+
+Azure Blob Storage supports both service-side and client-side encryption. For most scenarios, Microsoft recommends using service-side encryption features for ease of use in protecting your data. To learn more about service-side encryption, see [Azure Storage encryption for data at rest](../common/storage-service-encryption.md).
+
+The [Azure Blob Storage client library for .NET](/dotnet/api/overview/azure/storage) supports encrypting data within client applications before uploading it to Azure Storage, and decrypting it while downloading to the client. The library also integrates with [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) for key management.
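+
+The following sketch shows how these pieces can fit together: a key retrieved from Key Vault is wired into the blob client's client-side encryption options. The vault URI, storage URI, and key name are placeholders, and the option names may differ slightly across SDK versions; the steps later in this tutorial build up a working version.
+
+```csharp
+using Azure.Identity;
+using Azure.Security.KeyVault.Keys;
+using Azure.Security.KeyVault.Keys.Cryptography;
+using Azure.Storage;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Specialized;
+
+// Authenticate with Azure AD; DefaultAzureCredential picks up your developer sign-in locally.
+var credential = new DefaultAzureCredential();
+
+// Retrieve the key encryption key from the key vault.
+// CryptographyClient implements IKeyEncryptionKey, so it can wrap the generated content key.
+var keyClient = new KeyClient(new Uri("https://<keyvault-name>.vault.azure.net/"), credential);
+KeyVaultKey key = await keyClient.GetKeyAsync("<key-name>");
+var cryptoClient = new CryptographyClient(key.Id, credential);
+
+// Configure client-side encryption: each blob's content key is wrapped with the Key Vault key.
+var encryptionOptions = new ClientSideEncryptionOptions(ClientSideEncryptionVersion.V2_0)
+{
+    KeyEncryptionKey = cryptoClient,
+    KeyResolver = new KeyResolver(credential),
+    KeyWrapAlgorithm = "RSA-OAEP"
+};
+
+// Blob clients created from this service client encrypt on upload and decrypt on download.
+var options = new SpecializedBlobClientOptions { ClientSideEncryption = encryptionOptions };
+var blobServiceClient = new BlobServiceClient(
+    new Uri("https://<storage-account>.blob.core.windows.net"), credential, options);
+```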
+
+This tutorial shows you how to:
+
+> [!div class="checklist"]
+> * Configure permissions for an Azure Key Vault resource
+> * Create a console application to interact with resources using .NET client libraries
+> * Add a key to a key vault
+> * Configure client-side encryption options using a key stored in a key vault
+> * Create a blob service client object with client-side encryption enabled
+> * Upload an encrypted blob, then download and decrypt the blob
+
+## Prerequisites
+
+- Azure subscription - [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- Azure storage account - [create a storage account](../common/storage-account-create.md)
+- Key vault - create one using [Azure portal](/azure/key-vault/general/quick-create-portal), [Azure CLI](/azure/key-vault/general/quick-create-cli), or [PowerShell](/azure/key-vault/general/quick-create-powershell)
+- [Visual Studio 2022](https://visualstudio.microsoft.com) installed
+
+## Assign a role to your Azure AD user
+
+When developing locally, make sure that the user account that is accessing the key vault has the correct permissions. You'll need the [Key Vault Crypto Officer role](/azure/role-based-access-control/built-in-roles#key-vault-crypto-officer) to create a key and perform actions on keys in a key vault. You can assign Azure RBAC roles to a user using the Azure portal, Azure CLI, or Azure PowerShell. You can learn more about the available scopes for role assignments on the [scope overview](../../../articles/role-based-access-control/scope-overview.md) page.
+
+In this scenario, you'll assign permissions to your user account, scoped to the key vault, to follow the [Principle of Least Privilege](../../../articles/active-directory/develop/secure-least-privileged-access.md). This practice gives users only the minimum permissions needed and creates more secure production environments.
+
+The following example shows how to assign the **Key Vault Crypto Officer** role to your user account, which provides the access you'll need to complete this tutorial.
+
+> [!IMPORTANT]
+> In most cases it will take a minute or two for the role assignment to propagate in Azure, but in rare cases it may take up to eight minutes. If you receive authentication errors when you first run your code, wait a few moments and try again.
+
+### [Azure portal](#tab/roles-azure-portal)
+
+1. In the Azure portal, locate your key vault using the main search bar or left navigation.
+
+2. On the key vault overview page, select **Access control (IAM)** from the left-hand menu.
+
+3. On the **Access control (IAM)** page, select the **Role assignments** tab.
+
+4. Select **+ Add** from the top menu and then **Add role assignment** from the resulting drop-down menu.
+
+ :::image type="content" source="./media/storage-blob-encryption-keyvault/assign-role-key-vault.png" lightbox="./media/storage-blob-encryption-keyvault/assign-role-key-vault.png" alt-text="A screenshot showing how to assign a role in Azure portal.":::
+
+5. Use the search box to filter the results to the desired role. For this example, search for *Key Vault Crypto Officer* and select the matching result and then choose **Next**.
+
+6. Under **Assign access to**, select **User, group, or service principal**, and then choose **+ Select members**.
+
+7. In the dialog, search for your Azure AD username (usually your *user@domain* email address) and then choose **Select** at the bottom of the dialog.
+
+8. Select **Review + assign** to go to the final page, and then **Review + assign** again to complete the process.
+
+### [Azure CLI](#tab/roles-azure-cli)
+
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the `az keyvault show` command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az keyvault show --resource-group '<your-resource-group-name>' --name '<your-unique-keyvault-name>' --query id
+```
+
+Copy the output `Id` from the preceding command. You can then assign roles using the [az role](/cli/azure/role) command of the Azure CLI.
+
+```azurecli
+az role assignment create --assignee "<user@domain>" \
+ --role "Key Vault Crypto Officer" \
+ --scope "<your-resource-id>"
+```
+
+### [PowerShell](#tab/roles-powershell)
+
+To assign a role at the resource level using Azure PowerShell, you first must retrieve the resource ID using the `Get-AzResource` command.
+
+```azurepowershell
+Get-AzResource -ResourceGroupName "<yourResourceGroupname>" -Name "<yourKeyVaultName>"
+```
+
+Copy the `Id` value from the preceding command output. You can then assign roles using the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command in PowerShell.
+
+```azurepowershell
+New-AzRoleAssignment -SignInName <user@domain> `
+ -RoleDefinitionName "Key Vault Crypto Officer" `
+ -Scope <yourKeyVaultId>
+```
+++
+## Set up your project
+
+1. In a console window (such as PowerShell or Bash), use the `dotnet new` command to create a new console app with the name *BlobEncryptionKeyVault*. This command creates a simple "Hello World" C# project with a single source file: *Program.cs*.
+
+ ```dotnetcli
+ dotnet new console -n BlobEncryptionKeyVault
+ ```
+
+1. Switch to the newly created *BlobEncryptionKeyVault* directory.
+
+ ```console
+ cd BlobEncryptionKeyVault
+ ```
+
+1. Open the project in your desired code editor. To open the project in:
+ * Visual Studio, locate and double-click the `BlobEncryptionKeyVault.csproj` file.
+ * Visual Studio Code, run the following command:
+
+ ```bash
+ code .
+ ```
++
+To interact with Azure services in this example, install the following client libraries using `dotnet add package`.
+
+### [.NET CLI](#tab/packages-dotnetcli)
+
+```dotnetcli
+dotnet add package Azure.Identity
+dotnet add package Azure.Security.KeyVault.Keys
+dotnet add package Azure.Storage.Blobs
+```
+
+### [PowerShell](#tab/packages-powershell)
+
+```powershell
+Install-Package Azure.Identity
+Install-Package Azure.Security.KeyVault.Keys
+Install-Package Azure.Storage.Blobs
+```
++
+Add the following `using` directives to the top of *Program.cs*:
+
+```csharp
+using Azure;
+using Azure.Core;
+using Azure.Identity;
+using Azure.Security.KeyVault.Keys;
+using Azure.Security.KeyVault.Keys.Cryptography;
+using Azure.Storage;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+using Azure.Storage.Blobs.Specialized;
+```
+
+## Set environment variable
+
+This application looks for an environment variable called `KEY_VAULT_NAME` to retrieve the name of your key vault. To set the environment variable, open a console window and follow the instructions for your operating system. Replace `<your-key-vault-name>` with the name of your key vault.
+
+**Windows:**
+
+You can set environment variables for Windows from the command line. However, when using this approach the values are accessible to all applications running on that operating system and may cause conflicts if you aren't careful. Environment variables can be set at either user or system level:
+
+```cmd
+setx KEY_VAULT_NAME "<your-key-vault-name>"
+````
+After you add the environment variable in Windows, you must start a new instance of the command window. If you're using Visual Studio on Windows, you may need to relaunch Visual Studio after creating the environment variable for the change to be detected.
+
+**Linux:**
+
+```bash
+export KEY_VAULT_NAME=<your-key-vault-name>
+```
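+
+Before moving on, it can help to fail fast if the variable isn't visible to the process. The following is a minimal sketch; the check and error message are our own addition, not part of the tutorial's sample:
+
+```csharp
+// Fail fast with a clear message if the environment variable isn't set
+// (for example, because the terminal or IDE wasn't restarted).
+string? keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
+if (string.IsNullOrEmpty(keyVaultName))
+{
+    throw new InvalidOperationException(
+        "KEY_VAULT_NAME is not set. Set it, then restart your terminal or IDE.");
+}
+```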
+
+## Add a key in Azure Key Vault
+
+In this example, we create a key and add it to the key vault using the Azure Key Vault client library. You can also create and add a key to a key vault using [Azure CLI](/azure/key-vault/keys/quick-create-cli#add-a-key-to-key-vault), [Azure portal](/azure/key-vault/keys/quick-create-portal#add-a-key-to-key-vault), or [PowerShell](/azure/key-vault/keys/quick-create-powershell#add-a-key-to-key-vault).
+
+In the sample below, we create a [KeyClient](/dotnet/api/azure.security.keyvault.keys.keyclient) object for the specified vault. The `KeyClient` object is then used to create a new RSA key in the specified vault.
+
+```csharp
+var keyName = "testRSAKey";
+var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
+
+// URI for the key vault resource
+var keyVaultUri = $"https://{keyVaultName}.vault.azure.net";
+
+TokenCredential tokenCredential = new DefaultAzureCredential();
+
+// Create a KeyClient object
+var keyClient = new KeyClient(new Uri(keyVaultUri), tokenCredential);
+
+// Add a key to the key vault
+var key = await keyClient.CreateKeyAsync(keyName, KeyType.Rsa);
+```
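+
+If you need control over the key's properties, the same client also exposes an RSA-specific overload. A minimal sketch follows; the 2048-bit size is just an illustrative choice, not a value from this tutorial:
+
+```csharp
+// Optional: create the RSA key with explicit options instead of the defaults.
+var rsaOptions = new CreateRsaKeyOptions(keyName) { KeySize = 2048 };
+KeyVaultKey rsaKey = await keyClient.CreateRsaKeyAsync(rsaOptions);
+```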
+
+## Create key and key resolver instances
+
+Next, we'll use the key we just added to the vault to create the cryptography client and key resolver instances. [CryptographyClient](/dotnet/api/azure.security.keyvault.keys.cryptography.cryptographyclient) implements [IKeyEncryptionKey](/dotnet/api/azure.core.cryptography.ikeyencryptionkey) and is used to perform cryptographic operations with keys stored in Azure Key Vault. [KeyResolver](/dotnet/api/azure.security.keyvault.keys.cryptography.keyresolver) implements [IKeyEncryptionResolver](/dotnet/api/azure.core.cryptography.ikeyencryptionkeyresolver) and retrieves key encryption keys from the key identifier and resolves the key.
+```csharp
+// Cryptography client and key resolver instances using Azure Key Vault client library
+CryptographyClient cryptoClient = keyClient.GetCryptographyClient(key.Value.Name, key.Value.Properties.Version);
+KeyResolver keyResolver = new (tokenCredential);
+```
+
+If you have an existing key in the vault that you'd like to use for encryption, you can create the cryptography client instance by passing in the key URI (the key resolver is still created from the credential, as shown above):
+```csharp
+var keyVaultKeyUri = $"https://{keyVaultName}.vault.azure.net/keys/{keyName}";
+CryptographyClient cryptoClient = new CryptographyClient(new Uri(keyVaultKeyUri), tokenCredential);
+```
+
+## Configure encryption options
+
+Now we need to configure the encryption options to be used for blob upload and download. To use client-side encryption, we first create a `ClientSideEncryptionOptions` object and set it on client creation with `SpecializedBlobClientOptions`.
+
+The [ClientSideEncryptionOptions](/dotnet/api/azure.storage.clientsideencryptionoptions) class provides the client configuration options for connecting to Blob Storage using client-side encryption. [KeyEncryptionKey](/dotnet/api/azure.storage.clientsideencryptionoptions.keyencryptionkey) is required for upload operations and is used to wrap the generated content encryption key. [KeyResolver](/dotnet/api/azure.storage.clientsideencryptionoptions.keyresolver) is required for download operations and fetches the correct key encryption key to unwrap the downloaded content encryption key. [KeyWrapAlgorithm](/dotnet/api/azure.storage.clientsideencryptionoptions.keywrapalgorithm) is required for uploads and specifies the algorithm identifier to use when wrapping the content encryption key.
+
+> [!IMPORTANT]
+> Due to a security vulnerability in version 1, it's recommended to construct the `ClientSideEncryptionOptions` object using `ClientSideEncryptionVersion.V2_0` for the version parameter. To learn more about mitigating the vulnerability in your apps, see [Mitigate the security vulnerability in your applications](client-side-encryption.md#mitigate-the-security-vulnerability-in-your-applications). For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog).
+
+```csharp
+// Configure the encryption options to be used for upload and download
+ClientSideEncryptionOptions encryptionOptions = new (ClientSideEncryptionVersion.V2_0)
+{
+ KeyEncryptionKey = cryptoClient,
+ KeyResolver = keyResolver,
+ // String value that the client library will use when calling IKeyEncryptionKey.WrapKey()
+ KeyWrapAlgorithm = "RSA-OAEP"
+};
+
+// Set the encryption options on the client options.
+BlobClientOptions options = new SpecializedBlobClientOptions() { ClientSideEncryption = encryptionOptions };
+```
+
+## Configure client object to use client-side encryption
+
+In this example, we apply the client-side encryption configuration options to a `BlobServiceClient` object. When applied at the service client level, these encryption options are passed from the service client to container clients, and from container clients to blob clients. When the `BlobClient` object performs an upload or download operation, the Azure Blob Storage client libraries use envelope encryption to encrypt and decrypt blobs on the client side. Envelope encryption encrypts a key with one or more additional keys.
+
+```csharp
+// Create a blob client with client-side encryption enabled.
+// Attempting to construct a BlockBlobClient, PageBlobClient, or AppendBlobClient from a BlobContainerClient
+// with client-side encryption options present will throw, as this functionality is only supported with BlobClient.
+// accountName is the name of your storage account
+Uri blobUri = new ($"https://{accountName}.blob.core.windows.net");
+BlobClient blob = new BlobServiceClient(blobUri, tokenCredential, options).GetBlobContainerClient("test-container").GetBlobClient("testBlob");
+```
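+
+The sample assumes that the *test-container* container already exists in the storage account. If it might not, you can create it up front; a minimal sketch:
+
+```csharp
+// Ensure the container exists before uploading (no-op if it already does).
+BlobContainerClient container = new BlobServiceClient(blobUri, tokenCredential, options)
+    .GetBlobContainerClient("test-container");
+await container.CreateIfNotExistsAsync();
+```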
+
+## Encrypt blob and upload
+When the `BlobClient` object calls an upload method, several steps occur to perform the client-side encryption:
+1. The Azure Storage client library generates a random initialization vector (IV) of 16 bytes and a random content encryption key (CEK) of 32 bytes, and performs envelope encryption of the blob data using this information.
+1. Blob data is encrypted using the CEK.
+1. The CEK is then wrapped (encrypted) using the key encryption key (KEK) we specified in `ClientSideEncryptionOptions`. In this example, the KEK is an asymmetric key pair stored in the specified Azure Key Vault resource. The blob client itself never has access to the KEK; it just invokes the key wrapping algorithm that is provided by Key Vault.
+1. The encrypted blob data is then uploaded to the storage account.
+
+Add the following code to encrypt a blob and upload it to your Azure storage account:
+
+```csharp
+// Upload the encrypted contents to the blob
+Stream blobContent = BinaryData.FromString("Ready for encryption, Captain.").ToStream();
+await blob.UploadAsync(blobContent);
+```
+
+Once the blob is uploaded, you can view the blob in your storage account to see the encrypted contents along with the encryption metadata.
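+
+To inspect that metadata programmatically, you can use a plain `BlobClient` without encryption options so the service returns the ciphertext and metadata exactly as stored. The following is a minimal sketch reusing `blobUri` and `tokenCredential` from earlier; in current library versions, the encryption material lives in a metadata entry named `encryptiondata`:
+
+```csharp
+// A plain BlobClient (no ClientSideEncryptionOptions) returns the blob as stored.
+BlobClient rawBlob = new BlobServiceClient(blobUri, tokenCredential)
+    .GetBlobContainerClient("test-container")
+    .GetBlobClient("testBlob");
+
+// Print all metadata entries, including the "encryptiondata" entry
+// written by the client library during the encrypted upload.
+BlobProperties properties = await rawBlob.GetPropertiesAsync();
+foreach (var entry in properties.Metadata)
+{
+    Console.WriteLine($"{entry.Key}: {entry.Value}");
+}
+```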
+
+## Decrypt blob and download
+
+The Azure Storage client library assumes that the user is managing the KEK either locally or in a key vault. The user doesn't need to know the specific key that was used for encryption. The key resolver specified in `ClientSideEncryptionOptions` will be used to resolve key identifiers when blob data is downloaded and decrypted.
+
+When the `BlobClient` object calls a download method, several steps occur to decrypt the encrypted blob data:
+
+1. The client library downloads the encrypted blob data, including encryption metadata, from the storage account.
+1. The wrapped CEK is then unwrapped (decrypted) using the KEK. The client library doesn't have access to the KEK during this process, but only invokes the key unwrapping algorithm specified in `ClientSideEncryptionOptions`. The private key of the RSA key pair remains in the key vault, so the encrypted key from the blob metadata that contains the CEK is sent to the key vault for decryption.
+1. The client library uses the CEK to decrypt the encrypted blob data.
+
+Add the following code to download and decrypt the blob that you previously uploaded.
+
+```csharp
+// Download and decrypt the encrypted contents from the blob
+Response<BlobDownloadInfo> response = await blob.DownloadAsync();
+BlobDownloadInfo downloadInfo = response.Value;
+Console.WriteLine((await BinaryData.FromStreamAsync(downloadInfo.Content)).ToString());
+```
+
+## Next steps
+
+In this tutorial, you learned how to use .NET client libraries to perform client-side encryption for blob upload and download operations.
+
+For a broad overview of client-side encryption for blobs, including instructions for migrating encrypted data to version 2, see [Client-side encryption for blobs](client-side-encryption.md).
+
+For more information about Azure Key Vault, see the [Azure Key Vault overview page](../../key-vault/general/overview.md).
storage Storage Use Azcopy V10 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-v10.md
description: AzCopy is a command-line utility that you can use to copy data to,
Previously updated : 09/23/2022 Last updated : 09/29/2022
To obtain the link, run this command:
| Operating system | Command |
|--|--|
-| **Linux** | `curl -s -D- https://aka.ms/downloadazcopy-v10-linux | grep ^Location` |
-| **Windows (PowerShell Core 7)** | `(Invoke-WebRequest https://aka.ms/downloadazcopy-v10-windows -MaximumRedirection 0 -ErrorAction silentlycontinue -SkipHttpErrorCheck).headers.location[0]` |
-| **Windows (PowerShell 5.1)** | `(Invoke-WebRequest https://aka.ms/downloadazcopy-v10-windows -MaximumRedirection 0 -ErrorAction silentlycontinue ).headers.location` |
-
+| **Linux** | `curl -s -D- https://aka.ms/downloadazcopy-v10-linux \| grep ^Location` |
+| **Windows PowerShell** | `(Invoke-WebRequest -Uri https://aka.ms/downloadazcopy-v10-windows -MaximumRedirection 0 -ErrorAction SilentlyContinue).headers.location` |
+| **PowerShell 6.1+** | `(Invoke-WebRequest -Uri https://aka.ms/downloadazcopy-v10-windows -MaximumRedirection 0 -ErrorAction SilentlyContinue -SkipHttpErrorCheck).headers.location` |
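+
+The same lookup can also be scripted from .NET. Below is a minimal C# sketch of the same idea, not part of the original article: automatic redirects are disabled so the `Location` header returned by the `aka.ms` alias stays observable (the alias shown is the Linux one from the table above).
+
+```csharp
+using System;
+using System.Net.Http;
+
+// Disable automatic redirects so the 30x Location header stays visible.
+using var handler = new HttpClientHandler { AllowAutoRedirect = false };
+using var client = new HttpClient(handler);
+
+using HttpResponseMessage response =
+    await client.GetAsync("https://aka.ms/downloadazcopy-v10-linux");
+
+// The direct download URL for the latest AzCopy release.
+Console.WriteLine(response.Headers.Location);
+```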
> [!NOTE]
> For Linux, `--strip-components=1` on the `tar` command removes the top-level folder that contains the version name, and instead extracts the binary directly into the current folder. This allows the script to be updated with a new version of `azcopy` by only updating the `wget` URL. The URL appears in the output of this command. Your script can then download AzCopy by using that URL.
-| Operating system | Command |
-|--|--|
-| **Linux** | `wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux && tar -xf azcopy_v10.tar.gz --strip-components=1` |
-| **Windows** | `Invoke-WebRequest https://azcopyvnext.azureedge.net/release20190517/azcopy_windows_amd64_10.1.2.zip -OutFile azcopyv10.zip <<Unzip here>>` |
+**Linux**
+```bash
+wget -O azcopy_v10.tar.gz https://aka.ms/downloadazcopy-v10-linux && tar -xf azcopy_v10.tar.gz --strip-components=1
+```
+**Windows PowerShell**
+```PowerShell
+Invoke-WebRequest -Uri 'https://azcopyvnext.azureedge.net/release20220315/azcopy_windows_amd64_10.14.1.zip' -OutFile 'azcopyv10.zip'
+Expand-Archive -Path '.\azcopyv10.zip' -DestinationPath '.\'
+$AzCopy = (Get-ChildItem -path '.\' -Recurse -File -Filter 'azcopy.exe').FullName
+# Invoke AzCopy
+& $AzCopy
+```
+**PowerShell 6.1+**
+```PowerShell
+Invoke-WebRequest -Uri 'https://azcopyvnext.azureedge.net/release20220315/azcopy_windows_amd64_10.14.1.zip' -OutFile 'azcopyv10.zip'
+$AzCopy = (Expand-Archive -Path '.\azcopyv10.zip' -DestinationPath '.\' -PassThru | Where-Object {$_.Name -eq 'azcopy.exe'}).FullName
+# Invoke AzCopy
+& $AzCopy
+```
#### Escape special characters in SAS tokens
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
To access an Azure file share, the user of the file share must be authenticated
For customers migrating from on-premises file servers, or creating new file shares in Azure Files intended to behave like Windows file servers or NAS appliances, domain joining your storage account to **Customer-owned Active Directory** is the recommended option. To learn more about domain joining your storage account to a customer-owned Active Directory, see [Azure Files Active Directory overview](storage-files-active-directory-overview.md).
-If you intend to use the storage account key to access your Azure file shares, we recommend using service endpoints as described in the [Networking](#networking) section.
+If you intend to use the storage account key to access your Azure file shares, we recommend using private endpoints or service endpoints as described in the [Networking](#networking) section.
## Networking

Directly mounting your Azure file share often requires some thought about networking configuration because:
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 10/12/2022 Last updated : 11/2/2022
To help you plan your deployment for each of the stages, below are the results o
| Number of objects | 25 million objects |
| Dataset Size | ~4.7 TiB |
| Average File Size | ~200 KiB (Largest File: 100 GiB) |
-| Initial cloud change enumeration | 20 objects per second |
+| Initial cloud change enumeration | 80 objects per second |
| Upload Throughput | 20 objects per second per sync group |
| Namespace Download Throughput | 400 objects per second |

### Initial one-time provisioning

**Initial cloud change enumeration**: When a new sync group is created, initial cloud change enumeration is the first step to execute. In this process, the system enumerates all the items in the Azure file share. During this process, there's no sync activity: no items are downloaded from the cloud endpoint to the server endpoint, and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once initial cloud change enumeration completes.
-The rate of performance is 20 objects per second. Customers can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formulae to get the time in days.
+The rate of performance is 80 objects per second. Customers can estimate the time it will take to complete initial cloud change enumeration by determining the number of items in the cloud share and using the following formula to get the time in days.
- **Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(20 * 60 * 60 * 24)**
+ **Time (in days) for initial cloud enumeration = (Number of objects in cloud endpoint)/(80 * 60 * 60 * 24)**
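+
+For example, at 80 objects per second, a namespace like the 25 million object test configuration above takes roughly 25,000,000 / (80 * 60 * 60 * 24) ≈ 3.6 days to enumerate.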
**Initial sync of data from Windows Server to Azure File share**: Many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast, and the majority of time is spent syncing changes from the Windows Server into the Azure file share(s).
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-terminology.md
To use Spark analytics, create and use **serverless Apache Spark pools** in your
There are two ways within Synapse to use Spark:
-* **Spark Notebooks** for doing data Data Science and Engineering use Scala, PySpark, C#, and SparkSQL
+* **Spark Notebooks** for doing Data Science and Engineering use Scala, PySpark, C#, and SparkSQL
* **Spark job definitions** for running batch Spark jobs using jar files.

## SynapseML
synapse-analytics Quickstart Transform Data Using Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-transform-data-using-spark-job-definition.md
On this panel, you can reference to the Spark job definition to run.
|Main class name| The fully qualified identifier or the main class that is in the main definition file. <br> Sample: `WordCount`|
|Command-line arguments| You can add command-line arguments by clicking the **New** button. Note that adding command-line arguments will override the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
|Apache Spark pool| You can select Apache Spark pool from the list.|
+ |Python code reference| Additional Python code files used for reference in the main definition file. |
+ |Reference files | Additional files used for reference in the main definition file. |
|Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.|
|Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
|Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
synapse-analytics Develop Openrowset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-openrowset.md
FROM
OPENROWSET( BULK ( 'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=2000/*.parquet',
- 'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=2010/*.parquet',
+ 'https://azureopendatastorage.blob.core.windows.net/censusdatacontainer/release/us_population_county/year=2010/*.parquet'
), FORMAT='PARQUET' )
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
Previously updated : 05/24/2022 Last updated : 11/02/2022 -+ # Known limitations and issues with Azure Synapse Link for SQL
This is the list of known limitations for Azure Synapse Link for SQL.
* Azure Synapse Link for SQL isn't supported on Free, Basic or Standard tier with fewer than 100 DTUs.
* Azure Synapse Link for SQL isn't supported on SQL Managed Instances.
* Service principal isn't supported for authenticating to source Azure SQL DB, so when creating Azure SQL DB linked Service, choose SQL authentication, user-assigned managed identity (UAMI) or service assigned managed Identity (SAMI).
+* If the Azure SQL Database logical server has both a SAMI and UAMI configured, Azure Synapse Link will use SAMI.
* Azure Synapse Link can't be enabled on the secondary database once a GeoDR failover has happened if the secondary database has a different name from the primary database.
* If you enabled Azure Synapse Link for SQL on your database as a Microsoft Azure Active Directory (Azure AD) user, Point-in-time restore (PITR) will fail. PITR will only work when you enable Azure Synapse Link for SQL on your database as a SQL user.
* If you create a database as an Azure AD user and enable Azure Synapse Link for SQL, a SQL authentication user (for example, even sysadmin role) won't be able to disable/make changes to Azure Synapse Link for SQL artifacts. However, another Azure AD user will be able to enable/disable Azure Synapse Link for SQL on the same database. Similarly, if you create a database as an SQL authentication user, enabling/disabling Azure Synapse Link for SQL as an Azure AD user won't work.
This is the list of known limitations for Azure Synapse Link for SQL.
* When using asynchronous replicas, transactions need to be written to all replicas prior to them being published to Azure Synapse Link for SQL.
* Azure Synapse Link for SQL isn't supported on databases with database mirroring enabled.
* Restoring an Azure Synapse Link for SQL-enabled database from on-premises to Azure SQL Managed Instance isn't supported.
-* Azure Synapse Link for SQL is not supported on databases that are using Managed Instance Link.
+
+> [!CAUTION]
+> Azure Synapse Link for SQL is not supported on databases that are also using Azure SQL Managed Instance Link. In these scenarios, when the managed instance transitions to read-write mode, you may encounter transaction log full issues.
## Known issues

### Deleting an Azure Synapse Analytics workspace with a running link could cause log in source database to fill
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
You can create a Synapse notebook activity directly from the Synapse pipeline ca
Drag and drop **Synapse notebook** under **Activities** onto the Synapse pipeline canvas. Select the Synapse notebook activity box and configure the notebook content for the current activity in the **settings**. You can select an existing notebook from the current workspace or add a new one.
+If you select an existing notebook from the current workspace, you can click the **Open** button to directly open the notebook's page.
(Optional) You can also reconfigure the Spark pool, Executor size, Dynamically allocate executors, Min executors, Max executors, and Driver size in settings. The settings reconfigured here replace the settings of the configure session in the notebook. If nothing is set in the settings of the current notebook activity, it runs with the settings of the configure session in that notebook.

> [!div class="mx-imgBorder"]
virtual-desktop App Attach Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-azure-portal.md
Before you get started, you must disable automatic updates for MSIX app attach a
```cmd
rem Disable Store auto update:
-reg add HKLM\Software\Policies\Microsoft\WindowsStore /v AutoDownload /t REG_DWORD /d 0 /f
-Schtasks /Change /Tn "\Microsoft\Windows\WindowsUpdate\Automatic app update" /Disable
+reg add HKLM\Software\Policies\Microsoft\WindowsStore /v AutoDownload /t REG_DWORD /d 2 /f
Schtasks /Change /Tn "\Microsoft\Windows\WindowsUpdate\Scheduled Start" /Disable

rem Disable Content Delivery auto download apps that they want to promote to users:
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-cli.md
Title: Create virtual machines in a Flexible scale set using Azure CLI
-description: Learn how to create a virtual machine scale set in Flexible orchestration mode using Azure CLI.
+description: Learn how to create a Virtual Machine Scale Set in Flexible orchestration mode using Azure CLI.
Previously updated : 08/05/2021 Last updated : 11/01/2022 # Create virtual machines in a scale set using Azure CLI
-This article steps through using the Azure CLI to create a virtual machine scale set.
+This article steps through using the Azure CLI to create a Virtual Machine Scale Set.
Make sure that you've installed the latest [Azure CLI](/cli/azure/install-az-cli2) and are logged in to an Azure account with [az login](/cli/azure/reference-index).
Create a resource group with [az group create](/cli/azure/group) as follows:
```azurecli-interactive
az group create --name myResourceGroup --location eastus
```
-## Create a virtual machine scale set
-Now create a virtual machine scale set with [az vmss create](/cli/azure/vmss). The following example creates a scale set with an instance count of *2*, and generates SSH keys.
+## Create a Virtual Machine Scale Set
+Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set with an instance count of *2*, and generates SSH keys.
```azurecli-interactive az vmss create \
az vmss create \
## Clean up resources
-To remove your scale set and additional resources, delete the resource group and all its resources with [az group delete](/cli/azure/group). The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without an additional prompt to do so.
+To remove your scale set and other resources, delete the resource group and all its resources with [az group delete](/cli/azure/group). The `--no-wait` parameter returns control to the prompt without waiting for the operation to complete. The `--yes` parameter confirms that you wish to delete the resources without another prompt to do so.
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Migration Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-migration-resources.md
Title: Migrate deployments and resources to virtual machine scale sets in Flexible orchestration
-description: Learn how to migrate deployments and resources to virtual machine scale sets in Flexible orchestration.
+ Title: Migrate deployments and resources to Virtual Machine Scale Sets in Flexible orchestration
+description: Learn how to migrate deployments and resources to Virtual Machine Scale Sets in Flexible orchestration.
Previously updated : 03/31/2022 Last updated : 11/01/2022
-# Migrate deployments and resources to virtual machine scale sets in Flexible orchestration
+# Migrate deployments and resources to Virtual Machine Scale Sets in Flexible orchestration
-Like availability sets, virtual machine scale sets allows you to spread virtual machines across multiple fault domains. Virtual machine scale sets with Flexible orchestration allows you to combine the scalability of [virtual machine scale sets in Uniform orchestration mode](../virtual-machine-scale-sets/overview.md) with the regional availability guarantees of [availability sets](../virtual-machines/availability-set-overview.md). This article goes over migration considerations when switching to Flexible orchestration mode for virtual machine scale sets.
+Like availability sets, Virtual Machine Scale Sets allows you to spread virtual machines across multiple fault domains. Virtual Machine Scale Sets with Flexible orchestration allows you to combine the scalability of [Virtual Machine Scale Sets in Uniform orchestration mode](../virtual-machine-scale-sets/overview.md) with the regional availability guarantees of [availability sets](../virtual-machines/availability-set-overview.md). This article goes over migration considerations when switching to Flexible orchestration mode for Virtual Machine Scale Sets.
## Update availability set deployments templates and scripts
-First, you need to create a virtual machine scale set with no auto scaling profile via [Azure CLI](flexible-virtual-machine-scale-sets-cli.md), [Azure PowerShell](flexible-virtual-machine-scale-sets-powershell.md), or [ARM Template](flexible-virtual-machine-scale-sets-rest-api.md). Azure portal only allows creating a virtual machine scale set with an autoscaling profile. If you do not want or need an autoscaling profile and you want to create a scale set using [Azure portal](flexible-virtual-machine-scale-sets-portal.md), you can set the initial capacity to 0.
+First, you need to create a Virtual Machine Scale Set with no auto scaling profile via [Azure CLI](flexible-virtual-machine-scale-sets-cli.md), [Azure PowerShell](flexible-virtual-machine-scale-sets-powershell.md), or [ARM Template](flexible-virtual-machine-scale-sets-rest-api.md). Azure portal only allows creating a Virtual Machine Scale Set with an autoscaling profile. If you don't want or need an autoscaling profile and you want to create a scale set using [Azure portal](flexible-virtual-machine-scale-sets-portal.md), you can set the initial capacity to 0.
-You must specify the fault domain count for the virtual machine scale set. For regional (non-zonal) deployments, virtual machine scale sets offers the same fault domain guarantees as availability sets. However, you can scale up to 1000 instances. For zonal deployments where you are spreading instances across multiple availability zones, the fault domain count must be set to 1.
+You must specify the fault domain count for the Virtual Machine Scale Set. For regional (non-zonal) deployments, Virtual Machine Scale Sets offers the same fault domain guarantees as availability sets. However, you can scale up to 1000 instances. For zonal deployments where you're spreading instances across multiple availability zones, the fault domain count must be set to 1.
-Update domains have been deprecated in Flexible Orchestration mode. Most platform updates with general purpose SKUs are performed with Live Migration and do not require instance reboot. On the occasion that a platform maintenance requires instances to be rebooted, updates are applied fault domain by fault domain.
+Update domains have been deprecated in Flexible Orchestration mode. Most platform updates with general purpose SKUs are performed with Live Migration and don't require instance reboot. On the occasion that a platform maintenance requires instances to be rebooted, updates are applied fault domain by fault domain.
-Flexible orchestration for virtual machine scale sets also supports deploying instances across multiple availability zones. You may want to consider updating your VM deployments to spread across multiple availability zones.
+Flexible orchestration for Virtual Machine Scale Sets also supports deploying instances across multiple availability zones. You may want to consider updating your VM deployments to spread across multiple availability zones.
-The last step in this process is to create a virtual machine. Instead of specifying an availability set, specify the virtual machine scale set. Optionally, you can specify the availability zone or fault domain in which you wish to place the VM.
+The last step in this process is to create a virtual machine. Instead of specifying an availability set, specify the Virtual Machine Scale Set. Optionally, you can specify the availability zone or fault domain in which you wish to place the VM.
## Migrate existing availability set VMs
-There is currently no automated tooling to directly move existing instances in an Availability Set to a virtual machine scale set. However, there are several strategies you can utilize to migrate existing instances to a Flexible scale set:
+There's currently no automated tooling to directly move existing instances in an Availability Set to a Virtual Machine Scale Set. However, there are several strategies you can utilize to migrate existing instances to a Flexible scale set:
### Blue/green or side by side migration
There is currently no automated tooling to directly move existing instances in a
### Replace VM instances 1. Note the parameters you want to keep from the virtual machine (name, NIC ID, OS and data disk IDs, VM configuration settings, fault domain placement, etc.)
-1. Delete the availability set virtual machine. The NICs and disks for the VM will not be deleted
+1. Delete the availability set virtual machine. The NICs and disks for the VM won't be deleted
1. Create a new virtual machine object, using the parameters from the original VM - NIC ID - OS and Data disks
There is currently no automated tooling to directly move existing instances in a
## Update Uniform scale sets deployment templates and scripts
-Update Uniform virtual machine scale sets deployment templates and scripts to use Flexible orchestration. Change the following elements in your templates to successfully complete the process.
+Update Uniform Virtual Machine Scale Sets deployment templates and scripts to use Flexible orchestration. Change the following elements in your templates to successfully complete the process.
- Remove `LoadBalancerNATPool` (not valid for flex) - Remove overprovisioning parameter (not valid for flex)
Update Uniform virtual machine scale sets deployment templates and scripts to us
## Migrate existing Uniform scale sets
-There is currently no automated tooling to directly move existing instances or upgrade a Uniform scale set to a Flexible virtual machine scale set. However, here is a strategy you can utilize to migrate existing instances to a Flexible scale set:
+There's currently no automated tooling to directly move existing instances or upgrade a Uniform scale set to a Flexible Virtual Machine Scale Set. However, here's a strategy you can utilize to migrate existing instances to a Flexible scale set:
### Blue/green or side by side migration
There is currently no automated tooling to directly move existing instances or u
## Flexible scale sets considerations
-Virtual machine scale sets with Flexible orchestration allows you to combine the scalability of [virtual machine scale sets in Uniform orchestration](../virtual-machine-scale-sets/overview.md) with the regional availability guarantees of availability sets. The following are key considerations when deciding to work with the Flexible orchestration mode.
+Virtual Machine Scale Sets with Flexible orchestration allows you to combine the scalability of [Virtual Machine Scale Sets in Uniform orchestration](../virtual-machine-scale-sets/overview.md) with the regional availability guarantees of availability sets. The following are key considerations when deciding to work with the Flexible orchestration mode.
### Create scalable network connectivity <!-- the following is an important link to use in FLEX documentation to reference this section: /virtual-machines/flexible-virtual-machine-scale-sets-migration-resources.md#create-scalable-network-connectivity -->
-Networking outbound access behavior will vary depending on how you choose to create virtual machines within your scale set. **Manually added VM instances** have default outbound connectivity access. **Implicitly created VM instances** do not have default access.
+Networking outbound access behavior will vary depending on how you choose to create virtual machines within your scale set. **Manually added VM instances** have default outbound connectivity access. **Implicitly created VM instances** don't have default access.
-In order to enhance default network security, **virtual machine instances created implicitly via the autoscaling profile do not have default outbound access**. In order to use virtual machine scale sets with implicitly created VM instances, outbound access must be explicitly defined through one of the following methods:
+In order to enhance default network security, **virtual machine instances created implicitly via the autoscaling profile don't have default outbound access**. In order to use Virtual Machine Scale Sets with implicitly created VM instances, outbound access must be explicitly defined through one of the following methods:
- For most scenarios, we recommend [NAT Gateway attached to the subnet](../virtual-network/nat-gateway/quickstart-create-nat-gateway-portal.md). - For scenarios with high security requirements or when using Azure Firewall or Network Virtual Appliance (NVA), you can specify a custom User Defined Route as next hop through firewall.
In order to enhance default network security, **virtual machine instances create
Common scenarios that will require explicit outbound connectivity include: -- Windows VM activation will require that you have defined outbound connectivity from the VM instance to the Windows Activation Key Management Service (KMS). See [Troubleshoot Windows VM activation problems](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems) for more information.
+- Windows VM activation will require that you have defined outbound connectivity from the VM instance to the Windows Activation Key Management Service (KMS). For more information, see [Troubleshoot Windows VM activation problems](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems).
- Access to storage accounts or Key Vault. Connectivity to Azure services can also be established via [Private Link](../private-link/private-link-overview.md). - Windows updates. - Access to Linux package managers.
-See [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md) for more details on defining outbound connectivity.
+For more information, see [Default outbound access in Azure](../virtual-network/ip-services/default-outbound-access.md).
-With single instance VMs where you explicitly create the NIC, default outbound access is provided. Virtual machine scale sets in Uniform Orchestration mode also has default outbound connectivity.
+With single instance VMs where you explicitly create the NIC, default outbound access is provided. Virtual Machine Scale Sets in Uniform Orchestration mode also has default outbound connectivity.
> [!IMPORTANT]
-> Confirm that you have explicit outbound network connectivity. Learn more about this in [virtual networks and virtual machines in Azure](../virtual-network/network-overview.md) and make sure you are following Azure's networking [best practices](../virtual-network/concepts-and-best-practices.md).
+> Confirm that you have explicit outbound network connectivity. Learn more about this in [virtual networks and virtual machines in Azure](../virtual-network/network-overview.md) and make sure you're following Azure's networking [best practices](../virtual-network/concepts-and-best-practices.md).
### Assign fault domain during VM creation
-You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it is recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land.
+You can choose the number of fault domains for the Flexible orchestration scale set. By default, when you add a VM to a Flexible scale set, Azure evenly spreads instances across fault domains. While it's recommended to let Azure assign the fault domain, for advanced or troubleshooting scenarios you can override this default behavior and specify the fault domain where the instance will land.
```azurecli-interactive
az vm create --vmss "myVMSS" --platform-fault-domain 1
```

### Instance naming
-When you create a VM and add it to a Flexible scale set, you have full control over instance names within the Azure Naming convention rules. When VMs are automatically added to the scale set via autoscaling, you provide a prefix and Azure appends a unique number to the end of the name.
+When you create a VM and add it to a Flexible scale set, you have full control over instance names within the Azure Naming convention rules. When VMs are automatically added to the scale set via autoscaling, you provide a prefix, and Azure appends a unique number to the end of the name.
### List scale sets VM API changes

Virtual Machine Scale Sets allows you to list the instances that belong to the scale set. With Flexible orchestration, the list Virtual Machine Scale Sets VM command provides a list of scale sets VM IDs. You can then call the GET Virtual Machine Scale Sets VM commands to get more details on how the scale set is working with the VM instance. To get the full details of the VM, use the standard GET VM commands or [Azure Resource Graph](../governance/resource-graph/overview.md).
Querying resources with [Azure Resource Graph](../governance/resource-graph/over
### Scale sets VM batch operations
-Use the standard VM commands to start, stop, restart, delete instances, instead of the Virtual Machine Scale Set VM APIs. The Virtual Machine Scale Set VM Batch operations (start all, stop all, reimage all, etc.) are not used with Flexible orchestration mode.
+Use the standard VM commands to start, stop, restart, delete instances, instead of the Virtual Machine Scale Set VM APIs. The Virtual Machine Scale Set VM Batch operations (start all, stop all, reimage all, etc.) aren't used with Flexible orchestration mode.
### Monitor application health
Application health monitoring allows your application to provide Azure with a he
### Retrieve boot diagnostics data
-Use the standard VM APIs and commands to retrieve instance Boot Diagnostics data and screenshots. The Virtual Machine Scale Sets VM boot diagnostic APIs and commands are not used with Flexible orchestration mode instances.
+Use the standard VM APIs and commands to retrieve instance Boot Diagnostics data and screenshots. The Virtual Machine Scale Sets VM boot diagnostic APIs and commands aren't used with Flexible orchestration mode instances.
### VM extensions
Use extensions targeted for standard virtual machines, instead of extensions tar
### Protect instances from delete
-Virtual machine scale sets in Flexible orchestration mode do not currently have instance protection options. If you have autoscale enabled on a virtual machine scale set, some VMs might be at risk of deletion during the scaling in process. If you want to protect certain VM instances from deletion, use [Azure Resource Manager lock](../azure-resource-manager/management/lock-resources.md).
+Virtual Machine Scale Sets in Flexible orchestration mode don't currently have instance protection options. If you have autoscale enabled on a Virtual Machine Scale Set, some VMs might be at risk of deletion during the scaling in process. If you want to protect certain VM instances from deletion, use [Azure Resource Manager lock](../azure-resource-manager/management/lock-resources.md).
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-portal.md
Title: Create virtual machines in a Flexible scale set using Azure portal
-description: Learn how to create a virtual machine scale set in Flexible orchestration mode in the Azure portal.
+description: Learn how to create a Virtual Machine Scale Set in Flexible orchestration mode in the Azure portal.
Previously updated : 10/25/2021 Last updated : 11/01/2022 # Create virtual machines in a scale set using Azure portal
-This article steps through using Azure portal to create a virtual machine scale set.
+This article steps through using Azure portal to create a Virtual Machine Scale Set.
## Log in to Azure

Log in to the Azure portal at https://portal.azure.com.
-## Create a virtual machine scale set
+## Create a Virtual Machine Scale Set
You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
-1. In the Azure portal search bar, search for and select **Virtual machine scale sets**.
-1. Select **Create** on the **Virtual machine scale sets** page.
+1. In the Azure portal search bar, search for and select **Virtual Machine Scale Sets**.
+1. Select **Create** on the **Virtual Machine Scale Sets** page.
1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and create a new resource group called *myVMSSResourceGroup*.
1. Under **Scale set details**, set *myScaleSet* for your scale set name and select a **Region** that is close to your area.
You can deploy a scale set with a Windows Server image or Linux image such as RH
- If you select a Linux OS disk image, you can instead choose **SSH public key**. You can use an existing key or create a new one. In this example, we will have Azure generate a new key pair for us. For more information on generating key pairs, see [create and use SSH keys](../virtual-machines/linux/mac-create-ssh-keys.md).
1. Select **Next: Disks** to move to the disk configuration options. For this quickstart, leave the default disk configurations.
You can deploy a scale set with a Windows Server image or Linux image such as RH
1. In **Select a load balancer**, select a load balancer or create a new one.
1. For **Select a backend pool**, select **Create new**, type *myBackendPool*, then select **Create**.
1. Select **Next: Scaling** to move to the scaling configurations.
1. On the **Scaling** page, set the **initial instance count** field to *5*. You can set this number up to 1000.
1. For the **Scaling policy**, keep it *Manual*.
1. When you're done, select **Review + create**.
1. After it passes validation, select **Create** to deploy the scale set.
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-powershell.md
Title: Create virtual machines in a scale set using Azure PowerShell
-description: Learn how to create a virtual machine scale set in Flexible orchestration mode using PowerShell.
+description: Learn how to create a Virtual Machine Scale Set in Flexible orchestration mode using PowerShell.
Previously updated : 08/05/2021 Last updated : 11/01/2022 # Create virtual machines in a scale set using PowerShell
-This article steps through using PowerShell to create a virtual machine scale set.
+This article steps through using PowerShell to create a Virtual Machine Scale Set.
## Launch Azure Cloud Shell
Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.
New-AzResourceGroup -Name 'myVMSSResourceGroup' -Location 'EastUS'
```
-## Create a virtual machine scale set
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). The following example creates a scale set with an instance count of *2* running Windows Server 2019 Datacenter edition.
+## Create a Virtual Machine Scale Set
+Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). The following example creates a scale set with an instance count of *two* running Windows Server 2019 Datacenter edition.
```azurepowershell-interactive New-AzVmss `
New-AzVmss `
```

## Clean up resources
-When you delete a resource group, all resources contained within, such as the VM instances, virtual network, and disks, are also deleted. The `-Force` parameter confirms that you wish to delete the resources without an additional prompt to do so. The `-AsJob` parameter returns control to the prompt without waiting for the operation to complete.
+When you delete a resource group, all resources contained within, such as the VM instances, virtual network, and disks, are also deleted. The `-Force` parameter confirms that you wish to delete the resources without another prompt to do so. The `-AsJob` parameter returns control to the prompt without waiting for the operation to complete.
```azurepowershell-interactive
Remove-AzResourceGroup -Name "myResourceGroup" -Force -AsJob
```
virtual-machine-scale-sets Flexible Virtual Machine Scale Sets Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/flexible-virtual-machine-scale-sets-rest-api.md
Title: Create virtual machines in a Flexible scale set using an ARM template
-description: Learn how to create a virtual machine scale set in Flexible orchestration mode using an ARM template.
+description: Learn how to create a Virtual Machine Scale Set in Flexible orchestration mode using an ARM template.
Previously updated : 08/05/2021 Last updated : 11/01/2022 # Create virtual machines in a scale set using an ARM template
-This article steps through using an ARM template to create a virtual machine scale set.
+This article steps through using an ARM template to create a Virtual Machine Scale Set.
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-ARM templates let you deploy groups of related resources. In a single template, you can create the virtual machine scale set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
## Review the template
These resources are defined in the template:
### Define a scale set
-To create a scale with a template, you define the appropriate resources. The core parts of the virtual machine scale set resource type are:
+To create a scale set with a template, you define the appropriate resources. The core parts of the Virtual Machine Scale Set resource type are:
| Property | Description of property | Example template value |
|-|-|-|
virtual-machine-scale-sets Instance Generalized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-generalized-image-version.md
Previously updated : 04/26/2022 Last updated : 11/01/2022 # Create a scale set from a generalized image > [!IMPORTANT]
-> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+> You can't currently create a Flexible Virtual Machine Scale Set from an image shared by another tenant.
Create a scale set from a generalized image version stored in an [Azure Compute Gallery](../virtual-machines/shared-image-galleries.md). If you want to create a scale set using a specialized image version, see [Create scale set instances from a specialized image](instance-specialized-image-version-cli.md).
virtual-machine-scale-sets Instance Specialized Image Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/instance-specialized-image-version.md
Previously updated : 04/26/2022 Last updated : 11/01/2022
# Create a scale set using a specialized image version with the Azure CLI > [!IMPORTANT]
-> You can't currently create a virtual machine scale set using Flexible orchestration mode from an image shared by another tenant.
+> You can't currently create a Virtual Machine Scale Set using Flexible orchestration mode from an image shared by another tenant.
Create a scale set from a [specialized image version](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) stored in an Azure Compute Gallery. If you want to create a scale set using a generalized image version, see [Create a scale set from a generalized image](instance-generalized-image-version-cli.md).
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
Previously updated : 05/23/2022 Last updated : 11/01/2022
# Orchestration modes API comparison

> [!NOTE]
-> We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-This article compares the API differences between Uniform and [Flexible orchestration](..\virtual-machines\flexible-virtual-machine-scale-sets.md) modes for virtual machine scale sets. To learn more about Uniform and Flexible virtual machine scale sets, see [orchestration modes](virtual-machine-scale-sets-orchestration-modes.md).
+This article compares the API differences between Uniform and [Flexible orchestration](..\virtual-machines\flexible-virtual-machine-scale-sets.md) modes for Virtual Machine Scale Sets. To learn more about Uniform and Flexible Virtual Machine Scale Sets, see [orchestration modes](virtual-machine-scale-sets-orchestration-modes.md).
## Instance view

| Uniform API | Flexible alternative |
|-|-|
-| Virtual machine scale sets Instance View | Get instance view on individual VMs; Use Resource Graph to query power state |
+| Virtual Machine Scale Sets Instance View | Get instance view on individual VMs; Use Resource Graph to query power state |
## Scale set lifecycle batch operations
This article compares the API differences between Uniform and [Flexible orchestr
**Uniform API:**
-Virtual machine scale sets VM Get or Update Instance:
+Virtual Machine Scale Sets VM Get or Update Instance:
- [Get](/rest/api/compute/virtualmachinescalesetvms/get)
- [Update](/rest/api/compute/virtualmachinescalesetvms/update)
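As a hedged illustration, either call can also be issued generically with `az rest`; the subscription ID, resource group, scale set name, and instance ID below are placeholders:

```azurecli-interactive
# Get the model of instance 0 of a Uniform scale set (IDs are placeholders)
az rest --method get \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet/virtualMachines/0?api-version=2022-08-01"
```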
resources
**Uniform API:**
-Virtual machine scale sets Operations:
+Virtual Machine Scale Sets Operations:
- [Update Instances](/rest/api/compute/virtual-machine-scale-sets/update-instances)
- [Deallocate](/rest/api/compute/virtual-machine-scale-sets/deallocate)
- [Perform Maintenance](/rest/api/compute/virtual-machine-scale-sets/perform-maintenance)
Virtual machines Operations:
**Uniform API:**
-Virtual machine scale sets VM Extension:
+Virtual Machine Scale Sets VM Extension:
- [Create Or Update](/rest/api/compute/virtual-machine-scale-set-vm-extensions/create-or-update)
- [Delete](/rest/api/compute/virtual-machine-scale-set-vm-extensions/delete)
- [Get](/rest/api/compute/virtual-machine-scale-set-vm-extensions/get)
Invoke operations on individual VMs.
**Uniform API:**
-Uniform virtual machine scale sets APIs:
+Uniform Virtual Machine Scale Sets APIs:
- [Convert To Single Placement Group](/rest/api/compute/virtual-machine-scale-sets/convert-to-single-placement-group)
- [Force Recovery Service Fabric Platform Update Domain Walk](/rest/api/compute/virtual-machine-scale-sets/force-recovery-service-fabric-platform-update-domain-walk)

**Flexible alternative:**
-Not supported on Flexible virtual machine scale sets.
+Not supported on Flexible Virtual Machine Scale Sets.
## Next steps
virtual-machine-scale-sets Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/overview.md
Title: Azure virtual machine scale sets overview
-description: Learn about Azure virtual machine scale sets and how to automatically scale your applications
+ Title: Azure Virtual Machine Scale Sets overview
+description: Learn about Azure Virtual Machine Scale Sets and how to automatically scale your applications
Previously updated : 06/30/2020 Last updated : 11/01/2022
-# What are virtual machine scale sets?
+# What are Virtual Machine Scale Sets?
-Azure virtual machine scale sets let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide the following key benefits:
+Azure Virtual Machine Scale Sets let you create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide the following key benefits:
- Easy to create and manage multiple VMs
- Provides high availability and application resiliency by distributing VMs across availability zones or fault domains
- Allows your application to automatically scale as resource demand changes
Learn more about the differences between Uniform scale sets and Flexible scale s
> The orchestration mode is defined when you create the scale set and cannot be changed or updated later.
-## Why use virtual machine scale sets?
+## Why use Virtual Machine Scale Sets?
To provide redundancy and improved performance, applications are typically distributed across multiple instances. Customers may access your application through a load balancer that distributes requests to one of the application instances. If you need to perform maintenance or update an application instance, your customers must be distributed to another available application instance. To keep up with extra customer demand, you may need to increase the number of application instances that run your application.
-Azure virtual machine scale sets provide the management capabilities for applications that run across many VMs, automatic scaling of resources, and load balancing of traffic. Scale sets provide the following key benefits:
+Azure Virtual Machine Scale Sets provide the management capabilities for applications that run across many VMs, automatic scaling of resources, and load balancing of traffic. Scale sets provide the following key benefits:
- **Easy to create and manage multiple VMs** - When you have many VMs that run your application, it's important to maintain a consistent configuration across your environment. For reliable performance of your application, the VM size, disk configuration, and application installs should match across all VMs.
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets
-description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources.
Previously updated : 09/12/2022
+ Title: Built-in policy definitions for Azure Virtual Machine Scale Sets
+description: Lists Azure Policy built-in policy definitions for Azure Virtual Machine Scale Sets. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 11/01/2022
-# Azure Policy built-in definitions for Azure virtual machine scale sets
+# Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
-definitions for Azure virtual machine scale sets. For additional Azure Policy built-ins for other
-services, see
-[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+definitions for Azure Virtual Machine Scale Sets. For more Azure Policy built-ins for other
+services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
The name of each built-in policy definition links to the policy definition in the Azure portal. Use
-the link in the **Version** column to view the source on the
-[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+the link in the **Version** column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
## Microsoft.Compute
virtual-machine-scale-sets Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/proximity-placement-groups.md
Title: Proximity placement groups for virtual machine scale sets
-description: Learn about creating proximity placement groups for Windows virtual machine scale sets in Azure.
+ Title: Proximity placement groups for Virtual Machine Scale Sets
+description: Learn about creating proximity placement groups for Windows Virtual Machine Scale Sets in Azure.
- Previously updated : 07/01/2019
+ Last updated : 11/01/2022
To get VMs as close as possible, achieving the lowest possible latency, you can
A proximity placement group is a logical grouping used to make sure that Azure compute resources are physically located close to each other. Proximity placement groups are useful for workloads where low latency is a requirement.

- Low latency between stand-alone VMs.
-- Low Latency between VMs in a single availability set or a virtual machine scale set.
+- Low latency between VMs in a single availability set or a Virtual Machine Scale Set.
- Low latency between stand-alone VMs, VMs in multiple Availability Sets, or multiple scale sets. You can have multiple compute resources in a single placement group to bring together a multi-tiered application.
- Low latency between multiple application tiers using different hardware types. For example, running the backend using M-series in an availability set and the front end on a D-series instance, in a scale set, in a single proximity placement group.

## Using Proximity Placement Groups
-A proximity placement group is a resource in Azure. You need to create one before using it with other resources. Once created, it could be used with virtual machines, availability sets, or virtual machine scale sets.
+A proximity placement group is a resource in Azure. You need to create one before using it with other resources. Once created, it can be used with virtual machines, availability sets, or Virtual Machine Scale Sets.
You specify a proximity placement group when creating compute resources by providing the proximity placement group ID. You can also move an existing resource into a proximity placement group. When moving a resource into a proximity placement group, you should stop (deallocate) the asset first, since it may be redeployed into a different data center in the region to satisfy the colocation constraint.
-In the case of availability sets and virtual machine scale sets, you should set the proximity placement group at the resource level rather than the individual virtual machines.
+For availability sets and Virtual Machine Scale Sets, you should set the proximity placement group at the resource level rather than on the individual virtual machines.
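As a minimal sketch with the Azure CLI (resource names and image are illustrative), the group is created first and then referenced through the scale set's `--ppg` parameter:

```azurecli-interactive
# Create the proximity placement group, then attach the scale set to it at the resource level
az ppg create --resource-group myResourceGroup --name myPPG --location eastus
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image UbuntuLTS \
  --ppg myPPG
```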
A proximity placement group is a colocation constraint rather than a pinning mechanism. It is pinned to a specific data center with the deployment of the first resource to use it. Once all resources using the proximity placement group have been stopped (deallocated) or deleted, it is no longer pinned. Therefore, when using a proximity placement group with multiple VM series, it is important to specify all the required types upfront in a template when possible or follow a deployment sequence which will improve your chances for a successful deployment. If your deployment fails, restart the deployment with the VM size which has failed as the first size to be deployed.
virtual-machine-scale-sets Quick Create Bicep Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-bicep-windows.md
Title: Quickstart - Create a Windows virtual machine scale set with Bicep
+ Title: Quickstart - Create a Windows Virtual Machine Scale Set with Bicep
description: Learn how to quickly create a Windows virtual machine scale set with Bicep to deploy a sample app and configure autoscale rules
Previously updated : 06/28/2022 Last updated : 11/01/2022
-# Quickstart: Create a Windows virtual machine scale set with Bicep
+# Quickstart: Create a Windows Virtual Machine Scale Set with Bicep
> [!NOTE]
-> This quickstart uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchesration modes for virtual machine scale sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
+> This quickstart uses Uniform Orchestration mode. We recommend using Flexible Orchestration for new workloads. For more information, see [Orchestration modes for Virtual Machine Scale Sets in Azure](virtual-machine-scale-sets-orchestration-modes.md).
-A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the virtual machine scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the virtual machine scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with Bicep.
+A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the Virtual Machine Scale Set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the Virtual Machine Scale Set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with Bicep.
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
The following resources are defined in the Bicep file:
### Define a scale set
-To create a virtual machine scale set with a Bicep file, you define the appropriate resources. The core parts of the virtual machine scale set resource type are:
+To create a Virtual Machine Scale Set with a Bicep file, you define the appropriate resources. The core parts of the Virtual Machine Scale Set resource type are:
| Property | Description of property | Example template value |
|---|---|---|
To create a virtual machine scale set with a Bicep file, you define the appropri
| osProfile.adminUsername | The username for each VM instance | azureuser |
| osProfile.adminPassword | The password for each VM instance | P@ssw0rd! |
-To customize a virtual machine scale set Bicep file, you can change the VM size or initial capacity. Another option is to use a different platform or a custom image.
+To customize a Virtual Machine Scale Set Bicep file, you can change the VM size or initial capacity. Another option is to use a different platform or a custom image.
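For instance, a parameterized deployment from the Azure CLI could look like this sketch; the file name and parameter names are assumptions for illustration, not values defined by this quickstart:

```azurecli-interactive
# Deploy the Bicep file, overriding hypothetical parameters such as capacity and VM size
az deployment group create \
  --resource-group exampleRG \
  --template-file main.bicep \
  --parameters vmssName=myScaleSet instanceCount=5 vmSize=Standard_DS1_v2
```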
### Add a sample application
-To test your virtual machine scale set, install a basic web application. When you deploy a virtual machine scale set, VM extensions can provide post-deployment configuration and automation tasks, such as installing an app. Scripts can be downloaded from [GitHub](https://azure.microsoft.com/resources/templates/vmss-windows-webapp-dsc-autoscale/) or provided to the Azure portal at extension run-time. To apply an extension to your virtual machine scale set, add the `extensionProfile` section to the resource example above. The extension profile typically defines the following properties:
+To test your Virtual Machine Scale Set, install a basic web application. When you deploy a Virtual Machine Scale Set, VM extensions can provide post-deployment configuration and automation tasks, such as installing an app. Scripts can be downloaded from [GitHub](https://azure.microsoft.com/resources/templates/vmss-windows-webapp-dsc-autoscale/) or provided to the Azure portal at extension run-time. To apply an extension to your Virtual Machine Scale Set, add the `extensionProfile` section to the resource example above. The extension profile typically defines the following properties:
- Extension type
- Extension publisher
An install script is downloaded from GitHub, as defined in `url`. The extension
- Replace *\<vmss-name\>* with the name of the virtual machine scale set. It must be 3-61 characters in length and globally unique across Azure. You'll be prompted to enter `adminPassword`.
+ Replace *\<vmss-name\>* with the name of the Virtual Machine Scale Set. It must be 3-61 characters in length and globally unique across Azure. You'll be prompted to enter `adminPassword`.
> [!NOTE]
- > When the deployment finishes, you should see a message indicating the deployment succeeded. It can take 10-15 minutes for the virtual machine scale set to be created and apply the extension to configure the app.
+ > When the deployment finishes, you should see a message indicating the deployment succeeded. It can take 10-15 minutes for the Virtual Machine Scale Set to be created and apply the extension to configure the app.
## Validate the deployment
-To see your virtual machine scale set in action, access the sample web application in a web browser. Obtain the public IP address of your load balancer using Azure CLI or Azure PowerShell.
+To see your Virtual Machine Scale Set in action, access the sample web application in a web browser. Obtain the public IP address of your load balancer using Azure CLI or Azure PowerShell.
# [CLI](#tab/CLI)
Remove-AzResourceGroup -Name exampleRG
## Next steps
-In this quickstart, you created a Windows virtual machine scale set with a Bicep file and used the PowerShell DSC extension to install a basic ASP.NET app on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a Windows Virtual Machine Scale Set with a Bicep file and used the PowerShell DSC extension to install a basic ASP.NET app on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-powershell.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-powershell.md)
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
Title: Quickstart - Create a virtual machine scale set with Azure CLI
-description: Get started with your deployments by learning how to quickly create a virtual machine scale set with Azure CLI.
+ Title: Quickstart - Create a Virtual Machine Scale Set with Azure CLI
+description: Get started with your deployments by learning how to quickly create a Virtual Machine Scale Set with Azure CLI.
Previously updated : 03/27/2018 Last updated : 11/01/2022
-# Quickstart: Create a virtual machine scale set with the Azure CLI
+# Quickstart: Create a Virtual Machine Scale Set with the Azure CLI
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets

> [!NOTE]
-> The following article is for Uniform virtual machine scale sets. We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with the Azure CLI.
+A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with the Azure CLI.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
Before you can create a scale set, create a resource group with [az group create
az group create --name myResourceGroup --location eastus
```
-Now create a virtual machine scale set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet* that is set to automatically update as changes are applied, and generates SSH keys if they do not exist in *~/.ssh/id_rsa*. These SSH keys are used if you need to log in to the VM instances. To use an existing set of SSH keys, instead use the `--ssh-key-value` parameter and specify the location of your keys.
+Now create a Virtual Machine Scale Set with [az vmss create](/cli/azure/vmss). The following example creates a scale set named *myScaleSet* that is set to automatically update as changes are applied, and generates SSH keys if they do not exist in *~/.ssh/id_rsa*. These SSH keys are used if you need to log in to the VM instances. To use an existing set of SSH keys, instead use the `--ssh-key-value` parameter and specify the location of your keys.
```azurecli-interactive
az vmss create \
az group delete --name myResourceGroup --yes --no-wait
## Next steps
-In this quickstart, you created a basic scale set and used the Custom Script Extension to install a basic NGINX web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a basic scale set and used the Custom Script Extension to install a basic NGINX web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-cli.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-cli.md)
virtual-machine-scale-sets Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-portal.md
Title: Quickstart - Create a virtual machine scale set in the Azure portal
+ Title: Quickstart - Create a Virtual Machine Scale Set in the Azure portal
description: Get started with your deployments by learning how to quickly create a virtual machine scale set in the Azure portal.
Previously updated : 06/30/2020 Last updated : 11/01/2022
-# Quickstart: Create a virtual machine scale set in the Azure portal
+# Quickstart: Create a Virtual Machine Scale Set in the Azure portal
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets

> [!NOTE]
-> The following article is for Uniform virtual machine scale sets. We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a virtual machine scale set in the Azure portal.
+A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set in the Azure portal.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
First, create a public Standard Load Balancer by using the portal. The name and
![Create a load balancer](./media/virtual-machine-scale-sets-create-portal/load-balancer.png)
-## Create virtual machine scale set
+## Create Virtual Machine Scale Set
You can deploy a scale set with a Windows Server image or Linux image such as RHEL, CentOS, Ubuntu, or SLES.
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**. Select **Create** on the **Virtual machine scale sets** page, which will open the **Create a virtual machine scale set** page.
+1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual Machine Scale Sets**. Select **Create** on the **Virtual Machine Scale Sets** page, which will open the **Create a Virtual Machine Scale Set** page.
1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and select *myVMSSResourceGroup* from the resource group list.
1. Type *myScaleSet* as the name for your scale set.
1. In **Region**, select a region that is close to your area.
When no longer needed, delete the resource group, scale set, and all related res
## Next steps
-In this quickstart, you created a basic scale set in the Azure portal. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a basic scale set in the Azure portal. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-powershell.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-powershell.md)
virtual-machine-scale-sets Quick Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-powershell.md
Title: Quickstart - Create a virtual machine scale set with Azure PowerShell
+ Title: Quickstart - Create a Virtual Machine Scale Set with Azure PowerShell
description: Get started with your deployments by learning how to quickly create a virtual machine scale set with Azure PowerShell.
Previously updated : 11/08/2018 Last updated : 11/01/2022
-# Quickstart: Create a virtual machine scale set with Azure PowerShell
+# Quickstart: Create a Virtual Machine Scale Set with Azure PowerShell
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets

> [!NOTE]
-> The following article is for Uniform virtual machine scale sets. We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A virtual machine scale set allows you to deploy and manage a set of autoscaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with Azure PowerShell.
+A Virtual Machine Scale Set allows you to deploy and manage a set of autoscaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with Azure PowerShell.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
Before you can create a scale set, create a resource group with [New-AzResourceG
New-AzResourceGroup -ResourceGroupName "myResourceGroup" -Location "EastUS"
```
-Now create a virtual machine scale set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). The following example creates a scale set named *myScaleSet* that uses the *Windows Server 2016 Datacenter* platform image. The Azure network resources for virtual network, public IP address, and load balancer are automatically created. When prompted, you can set your own administrative credentials for the VM instances in the scale set:
+Now create a Virtual Machine Scale Set with [New-AzVmss](/powershell/module/az.compute/new-azvmss). The following example creates a scale set named *myScaleSet* that uses the *Windows Server 2016 Datacenter* platform image. The Azure network resources for virtual network, public IP address, and load balancer are automatically created. When prompted, you can set your own administrative credentials for the VM instances in the scale set:
```azurepowershell-interactive
New-AzVmss `
Update-AzVmss `
## Allow traffic to application
- To allow access to the basic web application, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure virtual machine scale sets](virtual-machine-scale-sets-networking.md).
+ To allow access to the basic web application, create a network security group with [New-AzNetworkSecurityRuleConfig](/powershell/module/az.network/new-aznetworksecurityruleconfig) and [New-AzNetworkSecurityGroup](/powershell/module/az.network/new-aznetworksecuritygroup). For more information, see [Networking for Azure Virtual Machine Scale Sets](virtual-machine-scale-sets-networking.md).
```azurepowershell-interactive
# Get information about the scale set
Remove-AzResourceGroup -Name "myResourceGroup" -Force -AsJob
## Next steps
-In this quickstart, you created a basic scale set and used the Custom Script Extension to install a basic IIS web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a basic scale set and used the Custom Script Extension to install a basic IIS web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-powershell.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-powershell.md)
virtual-machine-scale-sets Quick Create Template Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-linux.md
Title: Quickstart - Create a Linux virtual machine scale set with an Azure Resource Manager template
+ Title: Quickstart - Create a Linux Virtual Machine Scale Set with an Azure Resource Manager template
description: Learn how to quickly create a Linux virtual machine scale set with an Azure Resource Manager template that deploys a sample app and configures autoscale rules
Previously updated : 03/27/2020 Last updated : 11/01/2022
-# Quickstart: Create a Linux virtual machine scale set with an ARM template
+# Quickstart: Create a Linux Virtual Machine Scale Set with an ARM template
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets

> [!NOTE]
-> The following article is for Uniform virtual machine scale sets. We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with an Azure Resource Manager template (ARM template).
+A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with an Azure Resource Manager template (ARM template).
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-ARM templates let you deploy groups of related resources. In a single template, you can create the virtual machine scale set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, or Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
These resources are defined in the template:
### Define a scale set
-To create a scale with a template, you define the appropriate resources. The core parts of the virtual machine scale set resource type are:
+To create a scale set with a template, you define the appropriate resources. The core parts of the Virtual Machine Scale Set resource type are:
| Property | Description of property | Example template value |
|---|---|---|
az group delete --name myResourceGroup --yes --no-wait
## Next steps
-In this quickstart, you created a Linux scale set with an ARM template and used the Custom Script Extension to install a basic Python web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a Linux scale set with an ARM template and used the Custom Script Extension to install a basic Python web server on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-cli.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-cli.md)
virtual-machine-scale-sets Quick Create Template Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-template-windows.md
Previously updated : 03/27/2020 Last updated : 11/01/2022
-# Quickstart: Create a Windows virtual machine scale set with an ARM template
+# Quickstart: Create a Windows Virtual Machine Scale Set with an ARM template
**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets

> [!NOTE]
-> The following article is for Uniform virtual machine scale sets. We recommend using Flexible virtual machine scale sets for new workloads. Learn more about this new orchestration mode in our [Flexible virtual machine scale sets overview](flexible-virtual-machine-scale-sets.md).
+> The following article is for Uniform Virtual Machine Scale Sets. We recommend using Flexible Virtual Machine Scale Sets for new workloads. Learn more about this new orchestration mode in our [Flexible Virtual Machine Scale Sets overview](flexible-virtual-machine-scale-sets.md).
-A virtual machine scale set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a virtual machine scale set and deploy a sample application with an Azure Resource Manager template (ARM template).
+A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scaling virtual machines. You can scale the number of VMs in the scale set manually, or define rules to autoscale based on resource usage like CPU, memory demand, or network traffic. An Azure load balancer then distributes traffic to the VM instances in the scale set. In this quickstart, you create a Virtual Machine Scale Set and deploy a sample application with an Azure Resource Manager template (ARM template).
[!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
-ARM templates let you deploy groups of related resources. In a single template, you can create the virtual machine scale set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
+ARM templates let you deploy groups of related resources. In a single template, you can create the Virtual Machine Scale Set, install applications, and configure autoscale rules. With the use of variables and parameters, this template can be reused to update existing, or create additional, scale sets. You can deploy templates through the Azure portal, Azure CLI, Azure PowerShell, or from continuous integration / continuous delivery (CI/CD) pipelines.
If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
These resources are defined in these templates:
### Define a scale set
-To create a scale with a template, you define the appropriate resources. The core parts of the virtual machine scale set resource type are:
+To create a scale set with a template, you define the appropriate resources. The core parts of the Virtual Machine Scale Set resource type are:
| Property | Description of property | Example template value |
|---|---|---|
Remove-AzResourceGroup -Name "myResourceGroup" -Force -AsJob
## Next steps
-In this quickstart, you created a Windows scale set with an ARM template and used the PowerShell DSC extension to install a basic ASP.NET app on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure virtual machine scale sets.
+In this quickstart, you created a Windows scale set with an ARM template and used the PowerShell DSC extension to install a basic ASP.NET app on the VM instances. To learn more, continue to the tutorial for how to create and manage Azure Virtual Machine Scale Sets.
> [!div class="nextstepaction"]
-> [Create and manage Azure virtual machine scale sets](tutorial-create-and-manage-powershell.md)
+> [Create and manage Azure Virtual Machine Scale Sets](tutorial-create-and-manage-powershell.md)
virtual-machine-scale-sets Share Images Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/share-images-across-tenants.md
Previously updated : 04/05/2019 Last updated : 11/01/2022
## Create a scale set using Azure CLI

> [!IMPORTANT]
-> You can't currently create a Flexible virtual machine scale set from an image shared by another tenant.
+> You can't currently create a Flexible Virtual Machine Scale Set from an image shared by another tenant.
Sign in the service principal for tenant 1 using the appID, the app key, and the ID of tenant 1. You can use `az account show --query "tenantId"` to get the tenant IDs if needed.
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
Previously updated : 10/12/2022 Last updated : 11/01/2022
+
# Spot Priority Mix for high availability and cost savings (preview)
Azure allows you to have the flexibility of running a mix of uninterruptible reg
You can configure a custom percentage distribution across Spot and regular VMs. The platform automatically orchestrates each scale-out and scale-in operation to achieve the desired distribution by selecting an appropriate number of VMs to create or delete. You can also optionally configure the number of base regular uninterruptible VMs you would like to maintain in the Virtual Machine Scale Set during any scale operation.
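To make the scale-out math concrete, here's a hedged sketch using the CLI flags shown later in this article; the comments assume the percentage applies only to instances above the base count:

```azurecli-interactive
# Assumption: 2 base regular VMs, and 50% of instances above that base stay regular.
# Scaling to 12 instances would then yield 2 + 5 = 7 regular VMs and 5 Spot VMs.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image UbuntuLTS \
  --orchestration-mode Flexible \
  --instance-count 12 \
  --priority Spot \
  --regular-priority-count 2 \
  --regular-priority-percentage 50
```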
-## Template
+## [Template](#tab/template-1)
You can set your Spot Priority Mix by using a template to add the following properties to a scale set with Flexible orchestration using a Spot priority VM profile:
You can set your Spot Priority Mix by using a template to add the following prop
You can refer to this [ARM template example](https://paste.microsoft.com/f84d2f83-f6bf-4d24-aa03-175b0c43da32) for more context.
-## Azure portal
+## [Portal](#tab/portal-1)
You can set your Spot Priority Mix in the Scaling tab of the Virtual Machine Scale Sets creation process in the Azure portal. The following steps will instruct you on how to access this feature during that process.
-1. Log in to the [Azure portal](https://portal.azure.com) through the [public preview access link](https://aka.ms/SpotMix).
+1. Log in to the [Azure portal](https://portal.azure.com).
1. In the search bar, search for and select **Virtual machine scale sets**.
1. Select **Create** on the **Virtual machine scale sets** page.
1. In the **Basics** tab, fill out the required fields and select **Flexible** as the **Orchestration** mode.
You can set your Spot Priority Mix in the Scaling tab of the Virtual Machine Sca
1. Continue through the Virtual Machine Scale Set creation process.
-## Azure CLI
+## [Azure CLI](#tab/cli-1)
You can set your Spot Priority Mix using Azure CLI by setting the `priority` flag to `Spot` and including the `regular-priority-count` and `regular-priority-percentage` flags.
az vmss create -n myScaleSet \
--single-placement-group False \
```
-## Azure PowerShell
+## [Azure PowerShell](#tab/powershell-1)
You can set your Spot Priority Mix using Azure PowerShell by setting the `Priority` flag to `Spot` and including the `BaseRegularPriorityCount` and `RegularPriorityPercentage` flags.
New-AzVmss `
```
++
## Next steps

> [!div class="nextstepaction"]
virtual-machine-scale-sets Spot Vm Size Recommendation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-vm-size-recommendation.md
Previously updated : 06/15/2022 Last updated : 11/01/2022
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
For the same example above, if you create a standard Ephemeral OS disk VM you wo
> For more information on [how to deploy a trusted launch VM](trusted-launch-portal.md)
-## Confidential VMs using Ephemeral OS disks (preview)
+## Confidential VMs using Ephemeral OS disks
AMD-based Confidential VMs cater to high security and confidentiality requirements of customers. These VMs provide a strong, hardware-enforced boundary to help meet your security needs. There are limitations to using Confidential VMs. Check the [region](../confidential-computing/confidential-vm-overview.md#regions), [size](../confidential-computing/confidential-vm-overview.md#size-support) and [OS supported](../confidential-computing/confidential-vm-overview.md#os-support) limitations for confidential VMs. The virtual machine guest state (VMGS) blob contains the security information of the confidential VM. For confidential VMs using Ephemeral OS disks, **1 GiB** from the **OS cache** or **temp storage**, based on the chosen placement option, is reserved by default for VMGS. The lifecycle of the VMGS blob is tied to that of the OS disk.
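As a hedged aside showing the placement options themselves (the confidential-VM-specific flags are omitted here; names, image, and size are placeholders), an Ephemeral OS disk VM can be created from the CLI like this:

```azurecli-interactive
# Place the ephemeral OS disk on the VM cache; ResourceDisk would use temp storage instead
az vm create \
  --resource-group myResourceGroup \
  --name myEphemeralVM \
  --image UbuntuLTS \
  --size Standard_DS3_v2 \
  --ephemeral-os-disk true \
  --ephemeral-os-disk-placement CacheDisk
```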
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
Customize properties:
- **restartTimeout** - Restart timeout specified as a string of magnitude and unit. For example, `5m` (5 minutes) or `2h` (2 hours). The default is: `5m`.

> [!NOTE]
-> There's no Linux restart customizer. If you're installing drivers, or components that require a restart, you can install them and invoke a restart using the Shell customizer. There's a 20min SSH timeout to the build VM.
+> There is no Linux restart customizer.
### PowerShell customizer
virtual-machines High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-suse-nfs-simple-mount.md
vm-windows Previously updated : 09/27/2022 Last updated : 11/01/2022
The example configurations and installation commands use the following instance
| Instance name | Instance number |
| - | - |
| ASCS | 00 |
-| Evaluated Receipt Settlement (ERS) | 01 |
+| Enqueue Replication Server (ERS) | 01 |
| Primary Application Server (PAS) | 02 |
| Additional Application Server (AAS) | 03 |
| SAP system identifier | NW1 |
This article assumes that you've already deployed an [Azure virtual network](../
> If you need additional IP addresses for your VMs, deploy and attach a second network interface controller (NIC). Don't add secondary IP addresses to the primary NIC. [Azure Load Balancer Floating IP doesn't support this scenario](../../../load-balancer/load-balancer-multivip-overview.md#limitations).

2. For your virtual IPs, deploy and configure an [Azure load balancer](../../../load-balancer/load-balancer-overview.md). We recommend that you use a [Standard load balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md).
- 1. Create front-end IP address 0.27.0.9 for the ASCS instance:
+ 1. Create front-end IP address 10.27.0.9 for the ASCS instance:
1. Open the load balancer, select **Frontend IP pool**, and then select **Add**.
1. Enter the name of the new front-end IP pool (for example, **frontend.NW1.ASCS**).
1. Set **Assignment** to **Static** and enter the IP address (for example, **10.27.0.9**).
The instructions in this section are applicable only if you're using Azure NetAp
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector

# Remove Autostart from the ERS profile.
- Autostart = 1
+ # Autostart = 1
```

6. **[A]** Configure `keepalive`.
The instructions in this section are applicable only if you're using Azure NetAp
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sapstartsrv_NW1_ASCS00 ocf:suse:SAPStartSrv \
- params InstanceName=NW1_ASCS00_nw1ascs
+ params InstanceName=NW1_ASCS00_sapascs
sudo crm configure primitive rsc_sapstartsrv_NW1_ERS01 ocf:suse:SAPStartSrv \
- params InstanceName=NW1_ERS01_nw1ers
+ params InstanceName=NW1_ERS01_sapers
# If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
The instructions in this section are applicable only if you're using Azure NetAp
AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true \
meta priority=1000
- sudo crm configure modgroup g-NW1_ASCS add rsc_sapstartsrv_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sapstartsrv_NW1_ASCS00
sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sapstartsrv_NW1_ERS01
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
The instructions in this section are applicable only if you're using Azure NetAp
sudo crm configure property maintenance-mode="true"

sudo crm configure primitive rsc_sapstartsrv_NW1_ASCS00 ocf:suse:SAPStartSrv \
- params InstanceName=NW1_ASCS00_nw1ascs
+ params InstanceName=NW1_ASCS00_sapascs
sudo crm configure primitive rsc_sapstartsrv_NW1_ERS01 ocf:suse:SAPStartSrv \
- params InstanceName=NW1_ERS01_nw1ers
+ params InstanceName=NW1_ERS01_sapers
# If you're using NFS on Azure Files or NFSv3 on Azure NetApp Files:
sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
The instructions in this section are applicable only if you're using Azure NetAp
params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" \
AUTOMATIC_RECOVER=false IS_ERS=true MINIMAL_PROBE=true
- sudo crm configure modgroup g-NW1_ASCS add rsc_sapstartsrv_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sapstartsrv_NW1_ASCS00
sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
sudo crm configure modgroup g-NW1_ERS add rsc_sapstartsrv_NW1_ERS01
sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
virtual-network-manager How To Configure Cross Tenant Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-configure-cross-tenant-cli.md
# Configure cross-tenant connection in Azure Virtual Network Manager
-In this article, you'll learn how-to create cross-tenant connections in Azure Virtual Network Manager using [Azure CLI](/cli/azure/network/manager/scope-connection). Cross-tenant support allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant, and verify connection. Last, you'll add virtual networks from different tenants and verify. Once completed, You can centrally manage the resources of other tenants from a central network manager instance.
+In this article, you'll learn how to create [cross-tenant connections](concept-cross-tenant.md) in Azure Virtual Network Manager using [Azure CLI](/cli/azure/network/manager/scope-connection). Cross-tenant support allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions. First, you'll create the scope connection on the central network manager. Then you'll create the network manager connection on the connecting tenant, and verify the connection. Last, you'll add virtual networks from different tenants and verify them. Once completed, you can centrally manage the resources of other tenants from a central network manager instance.
> [!IMPORTANT]
> Azure Virtual Network Manager is currently in public preview.
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
The benefits of accelerated networking only apply to the VM that it's enabled on
The following versions of Windows are supported:
+- **Windows Server 2022**
- **Windows Server 2019 Standard/Datacenter**
- **Windows Server 2016 Standard/Datacenter**
- **Windows Server 2012 R2 Standard/Datacenter**
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureStack** | Azure Stack Bridge services. </br> This tag represents the Azure Stack Bridge service endpoint per region. | Outbound | No | Yes |
| **AzureTrafficManager** | Azure Traffic Manager probe IP addresses.<br/><br/>For more information on Traffic Manager probe IP addresses, see [Azure Traffic Manager FAQ](../traffic-manager/traffic-manager-faqs.md). | Inbound | No | Yes |
| **AzureUpdateDelivery** | For accessing Windows Updates. <br/><br/>**Note**: This tag provides access to Windows Update metadata services. To successfully download updates, you must also enable the **AzureFrontDoor.FirstParty** service tag and configure outbound security rules with the protocol and port defined as follows: <ul><li>AzureUpdateDelivery: TCP, port 443</li><li>AzureFrontDoor.FirstParty: TCP, port 80</li></ul> | Outbound | No | No |
-| AzureWebSubPub | AzureWebSubPub | Both | Yes | No |
+| AzureWebPubSub | AzureWebPubSub | Both | Yes | No |
| **BatchNodeManagement** | Management traffic for deployments dedicated to Azure Batch. | Both | No | Yes |
| **ChaosStudio** | Azure Chaos Studio. <br/><br/>**Note**: If you have enabled Application Insights integration on the Chaos Agent, the AzureMonitor tag is also required. | Both | Yes | No |
| **CognitiveServicesManagement** | The address ranges for traffic for Azure Cognitive Services. | Both | No | No |
virtual-wan Virtual Wan Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md
Yes, BGP communities generated by on-premises will be preserved in Virtual WAN.
The Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. This will enable the virtual hub router to be availability zone aware. If you navigate to your Virtual WAN hub resource and see this message and button, you can upgrade your router to the latest version by clicking the button. Azure-wide Cloud Services-based infrastructure is being deprecated. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal.
-You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of up to 30 minutes per hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update unless you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet. In this case, you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON.
+You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update unless one of the following is true:
+* If any of your spoke virtual networks are located in a different region than the hub, then you will need to delete and recreate these respective VNet connections after performing the upgrade. This will ensure you have connectivity to these spoke VNets.
+* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON (see the example below).
If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there is still a working setup.
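To find the virtual hub router's new IP addresses after the upgrade, you can read the `virtualRouterIps` field from the hub's resource JSON. A minimal sketch with the Azure CLI (the `virtual-wan` extension); the resource group and hub names are hypothetical placeholders:

```azurecli
# Hypothetical resource group and hub names; substitute your own.
# Print the virtual hub router's IP addresses from the hub resource JSON.
az network vhub show \
  --resource-group MyResourceGroup \
  --name MyVirtualHub \
  --query virtualRouterIps \
  --output json
```

After recreating the BGP peering, reconfigure the spoke NVA to peer with the addresses this command returns.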
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
The following features are currently in gated public preview. After working with
||||||
|1|Virtual hub router upgrade: Compatibility with NVA in a hub.|For deployments with an NVA provisioned in the hub, the virtual hub router can't be upgraded to VMSS.| July 2022|The Virtual WAN team is working on a fix that will allow Virtual hub routers to be upgraded to VMSS, even if an NVA is provisioned in the hub. After upgrading, users will have to re-peer the NVA with the hub router's new IP addresses (instead of having to delete the NVA).|
|2|Virtual hub router upgrade: Compatibility with NVA in a spoke VNet.|For deployments with an NVA provisioned in a spoke VNet, the customer will have to delete and recreate the BGP peering with the spoke NVA.|March 2022|The Virtual WAN team is working on a fix to remove the need for users to delete and recreate the BGP peering with a spoke NVA after upgrading.|
-|3|Virtual hub router upgrade: More than 100 Spoke VNets connected to the Virtual hub.|If there are more than 100 spoke VNets connected to the virtual hub, then the virtual hub router can't be upgraded.|September 2022|The Virtual WAN team is working on removing this limitation of 100 spoke VNets connected to the virtual hub during the router upgrade.|
+ ## Next steps
web-application-firewall Custom Waf Rules Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/custom-waf-rules-overview.md
Previously updated : 04/20/2022 Last updated : 11/02/2022
If you want to use **or** between two different conditions, then the two conditions must be in different rules.
Regular expressions are also supported in custom rules, just like in the CRS rulesets. For examples, see Examples 3 and 5 in [Create and use custom web application firewall rules](create-custom-waf-rules.md).
+> [!CAUTION]
+> Any redirect rules applied at the application gateway level will bypass WAF custom rules. See [Application Gateway redirect overview](https://learn.microsoft.com/azure/application-gateway/redirect-overview) for more information about redirect rules.
+ ## Allowing vs. blocking

Allowing and blocking traffic is simple with custom rules. For example, you can block all traffic coming from a range of IP addresses. You can make another rule to allow traffic if the request comes from a specific browser.
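As a hedged sketch of the blocking case, the following Azure CLI commands create a custom rule that blocks a source IP range. The resource group, policy name, rule name, priority, and address range are hypothetical placeholders, not values from this article. An allow-rule counterpart appears later, under Action.

```azurecli
# Hypothetical names and address range; substitute your own.
# Create a custom rule whose action is Block.
az network application-gateway waf-policy custom-rule create \
  --resource-group MyResourceGroup \
  --policy-name MyWafPolicy \
  --name BlockSuspectRange \
  --priority 10 \
  --rule-type MatchRule \
  --action Block

# Match requests whose source address falls in the given IPv4 range.
az network application-gateway waf-policy custom-rule match-condition add \
  --resource-group MyResourceGroup \
  --policy-name MyWafPolicy \
  --name BlockSuspectRange \
  --match-variables RemoteAddr \
  --operator IPMatch \
  --values 203.0.113.0/24
```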
Currently, this must be **MatchRule**.
Must be one of the variables:
- RemoteAddr – IPv4 Address/Range of the remote computer connection
-- RequestMethod – HTTP Request method (GET, POST, PUT, DELETE, and so on.)
+- RequestMethod – HTTP Request method
- QueryString – Variable in the URI
- PostArgs – Arguments sent in the POST body. Custom Rules using this match variable are only applied if the 'Content-Type' header is set to 'application/x-www-form-urlencoded' or 'multipart/form-data'. The additional content type `application/json` is supported with CRS version 3.2 or greater, the bot protection rule set, and geo-match custom rules.
- RequestUri – URI of the request
Must be one of the following operators:
- IPMatch - only used when the Match Variable is *RemoteAddr*, and only supports IPv4
- Equal – input is the same as the MatchValue
-- Any – It should not have a MatchValue. It is recommended for Match Variable with a valid Selector.
+- Any – It shouldn't have a MatchValue. It's recommended for a Match Variable with a valid Selector.
- Contains
- LessThan
- GreaterThan
A list of strings with names of transformations to do before the match is attempted.
List of values to match against, which can be thought of as being *OR*'ed. For example, it could be IP addresses or other strings. The value format depends on the previous operator.
+Supported HTTP request method values include:
+- GET
+- HEAD
+- POST
+- OPTIONS
+- PUT
+- DELETE
+- PATCH
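To illustrate how MatchValue pairs with a match variable, here is a minimal, hypothetical Azure CLI sketch that matches on the HTTP request method. The resource group, policy, and rule names are placeholders, and the listed values are OR'ed, as described above.

```azurecli
# Hypothetical names; substitute your own. The custom rule itself must already exist.
# Match requests whose HTTP method equals any of the listed values (OR semantics).
az network application-gateway waf-policy custom-rule match-condition add \
  --resource-group MyResourceGroup \
  --policy-name MyWafPolicy \
  --name MyCustomRule \
  --match-variables RequestMethod \
  --operator Equal \
  --values PUT DELETE
```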
+ ### Action [required]

In WAF policy detection mode, if a custom rule is triggered, the action is always logged regardless of the action value set on the custom rule.

-- Allow – Authorizes the transaction, skipping all other rules. The specified request is added to the allow list and once matched, the request stops further evaluation and is sent to the backend pool. Rules that are on the allow list aren't evaluated for any further custom rules or managed rules.
+- Allow – Authorizes the transaction, skipping all other rules. The specified request is added to the allowlist and once matched, the request stops further evaluation and is sent to the backend pool. Rules that are on the allowlist aren't evaluated for any further custom rules or managed rules.
- Block - Blocks or logs the transaction based on SecDefaultAction (detection/prevention mode).
- - Prevention mode - Blocks the transaction based on SecDefaultAction. Just like the Allow action, once the request is evaluated and added to the block list, evaluation is stopped and the request is blocked. Any request after that meets the same conditions won't be evaluated and will just be blocked.
+ - Prevention mode - Blocks the transaction based on SecDefaultAction. Just like the Allow action, once the request is evaluated and added to the blocklist, evaluation is stopped and the request is blocked. Any request after that meets the same conditions won't be evaluated and will just be blocked.
 - Detection mode - Logs the transaction based on SecDefaultAction, after which evaluation is stopped. Any request after that meets the same conditions won't be evaluated and will just be logged.
- Log – Lets the rule write to the log, but lets the rest of the rules run for evaluation. The other custom rules are evaluated in order of priority, followed by the managed rules.
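Completing the allow/block pair sketched earlier, here is a hedged example of an Allow rule that matches a browser by its User-Agent header. All names, the priority, and the header value are hypothetical placeholders, and the `RequestHeaders.User-Agent` selector syntax is an assumption about how the CLI addresses a specific header.

```azurecli
# Hypothetical names and User-Agent substring; substitute your own.
# Create a custom rule whose action is Allow; matching requests skip
# all remaining custom and managed rules and go to the backend pool.
az network application-gateway waf-policy custom-rule create \
  --resource-group MyResourceGroup \
  --policy-name MyWafPolicy \
  --name AllowKnownBrowser \
  --priority 5 \
  --rule-type MatchRule \
  --action Allow

# Match on the User-Agent request header; the selector after the dot
# names the header (assumed syntax, verify against your CLI version).
az network application-gateway waf-policy custom-rule match-condition add \
  --resource-group MyResourceGroup \
  --policy-name MyWafPolicy \
  --name AllowKnownBrowser \
  --match-variables RequestHeaders.User-Agent \
  --operator Contains \
  --values Chrome
```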
web-application-firewall Waf Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/shared/waf-azure-policy.md
There are several built-in Azure Policy definitions to manage WAF resources. A b
4. **Web Application Firewall (WAF) should use the specified mode for Application Gateway**: Mandates that 'Detection' or 'Prevention' mode be active on all Web Application Firewall policies for Application Gateway. The policy definition has three effects: Audit, Deny, and Disabled. Audit tracks when a WAF does not fit the specified mode. Deny prevents any WAF from being created if it is not in the correct mode. Disabled turns off the policy assignment.
+5. **Azure Application Gateway should have Resource logs enabled**: Mandates the enabling of Resource logs and Metrics on all Application Gateways, including WAF. The policy definition has two effects: AuditIfNotExists and Disabled. AuditIfNotExists tracks when an Application Gateway does not have resource logs and metrics enabled, and notifies the user that the Application Gateway does not comply. Disabled turns off the policy assignment.
+
+6. **Azure Front Door should have Resource logs enabled**: Mandates the enabling of Resource logs and Metrics on Azure Front Door Service, including WAF. The policy definition has two effects: AuditIfNotExists and Disabled. AuditIfNotExists tracks when a Front Door service does not have resource logs and metrics enabled, and notifies the user that the service does not comply. Disabled turns off the policy assignment.
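These built-in definitions can be assigned from the portal as described below, or scripted. A minimal, hypothetical Azure CLI sketch follows; the display name is taken from item 4 above, while the assignment name, subscription ID, and resource group are placeholders.

```azurecli
# Look up the built-in definition's ID by its display name (display name taken from
# this article; confirm it matches your environment before relying on it).
definition_id=$(az policy definition list \
  --query "[?displayName=='Web Application Firewall (WAF) should use the specified mode for Application Gateway'].id | [0]" \
  --output tsv)

# Assign the definition at a resource group scope; the scope and names are placeholders.
az policy assignment create \
  --name waf-required-mode \
  --policy "$definition_id" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup"
```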
+ ## Launch an Azure Policy

1. On the Azure home page, type Policy in the search bar and click the Azure Policy icon.