Updates from: 11/03/2022 02:08:38
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Authentication Temporary Access Pass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-temporary-access-pass.md
Users managing their security information at [https://aka.ms/mysecurityinfo](https://aka.ms/mysecurityinfo)
![Screenshot of how users can manage a Temporary Access Pass in My Security Info.](./media/how-to-authentication-temporary-access-pass/tap-my-security-info.png)

### Windows device setup
-Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello For Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the devices joined state:
-- During Azure AD Join setup, users can authenticate with a TAP (no password required) and setup Windows Hello for Business.-- On already Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business. -- On Hybrid Azure AD Joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+Users with a Temporary Access Pass can navigate the setup process on Windows 10 and 11 to perform device join operations and configure Windows Hello for Business. Temporary Access Pass usage for setting up Windows Hello for Business varies based on the device's joined state.
+
+For Azure AD Joined devices:
+- During the Azure AD Join setup process, users can authenticate with a TAP (no password required) to join the device and register Windows Hello for Business.
+- On already joined devices, users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
+- If the [Web sign-in](https://learn.microsoft.com/windows/client-management/mdm/policy-csp-authentication#authentication-enablewebsignin) feature on Windows is also enabled, the user can use TAP to sign in to the device. This is intended only for completing initial device setup, or for recovery when the user doesn't know or have a password.
+
+For Hybrid Azure AD Joined devices:
+- Users must first authenticate with another method such as a password, smartcard or FIDO2 key, before using TAP to set up Windows Hello for Business.
![Screenshot of how to enter Temporary Access Pass when setting up Windows 10.](./media/how-to-authentication-temporary-access-pass/windows-10-tap.png)
If MFA is required for the resource tenant, the guest user needs to perform MFA
### Expiration

An expired or deleted Temporary Access Pass can't be used for interactive or non-interactive authentication.
-Users need to reauthenticate with different authentication methods after the Temporary Access Pass is expired or deleted.
+Users need to reauthenticate with different authentication methods after the Temporary Access Pass is expired or deleted.
+
+The lifetime of tokens (session tokens, refresh tokens, access tokens, and so on) obtained through a Temporary Access Pass sign-in is limited to the Temporary Access Pass lifetime. As a result, when a Temporary Access Pass expires, any tokens associated with it expire as well.
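As a sketch of this capping behavior (a hypothetical helper, not an Azure AD API), the effective expiry of any token issued during a Temporary Access Pass session is the earlier of the token's normal expiry and the pass's own expiry:

```python
from datetime import datetime, timedelta, timezone

def effective_token_expiry(token_issued_at: datetime,
                           token_lifetime: timedelta,
                           tap_expires_at: datetime) -> datetime:
    """Tokens issued during a Temporary Access Pass session never outlive
    the pass itself: the effective expiry is the earlier of the token's
    normal expiry and the TAP expiry."""
    normal_expiry = token_issued_at + token_lifetime
    return min(normal_expiry, tap_expires_at)

issued = datetime(2022, 11, 3, 9, 0, tzinfo=timezone.utc)
tap_end = datetime(2022, 11, 3, 10, 0, tzinfo=timezone.utc)  # 1-hour TAP

# A refresh token that would normally last 90 days is capped at the TAP expiry.
print(effective_token_expiry(issued, timedelta(days=90), tap_end))  # 2022-11-03 10:00:00+00:00
```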
## Delete an expired Temporary Access Pass
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-applications-are-added.md
Previously updated : 12/01/2020 Last updated : 10/26/2022 -+ # How and why applications are added to Azure AD
-There are two representations of applications in Azure AD:
+There are two representations of applications in Azure Active Directory (Azure AD):
-* [Application objects](app-objects-and-service-principals.md#application-object) - Although there are [exceptions](#notes-and-exceptions), application objects can be considered the definition of an application.
-* [Service principals](app-objects-and-service-principals.md#service-principal-object) - Can be considered an instance of an application.
-Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
+- [Application objects](app-objects-and-service-principals.md#application-object) - Although there are [exceptions](#notes-and-exceptions), application objects can be considered the definition of an application.
+- [Service principals](app-objects-and-service-principals.md#service-principal-object) - Can be considered an instance of an application.
+ Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
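The relationship above can be sketched as a minimal data model (illustrative names only, not Microsoft Graph resource definitions): one application object in the home directory, referenced by a service principal in each directory through the application ID.

```python
from dataclasses import dataclass, field

@dataclass
class ApplicationObject:
    """The definition of the app; exists only in its home directory."""
    app_id: str
    home_tenant: str
    display_name: str

@dataclass
class ServicePrincipal:
    """An instance of the app in one directory; points back via app_id."""
    app_id: str
    tenant: str
    role_assignments: list = field(default_factory=list)

# One application object in the home tenant...
app = ApplicationObject(app_id="app-123", home_tenant="contoso", display_name="Example app")

# ...referenced by a service principal in every directory where the app acts.
sps = [ServicePrincipal(app_id=app.app_id, tenant=t) for t in ("contoso", "fabrikam")]
```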
## What are application objects and where do they come from?
-You can manage [application objects](app-objects-and-service-principals.md#application-object) in the Azure portal through the [App Registrations](https://aka.ms/appregistrations) experience. Application objects describe the application to Azure AD and can be considered the definition of the application, allowing the service to know how to issue tokens to the application based on its settings. The application object will only exist in its home directory, even if it's a multi-tenant application supporting service principals in other directories. The application object may include any of the following (as well as additional information not mentioned here):
+You can manage [application objects](app-objects-and-service-principals.md#application-object) in the Azure portal through the [App registrations](https://aka.ms/appregistrations) experience. Application objects describe the application to Azure AD and can be considered the definition of the application, allowing the service to know how to issue tokens to the application based on its settings. The application object will only exist in its home directory, even if it's a multi-tenant application supporting service principals in other directories. The application object may include (but is not limited to) any of the following:
-* Name, logo, and publisher
-* Redirect URIs
-* Secrets (symmetric and/or asymmetric keys used to authenticate the application)
-* API dependencies (OAuth)
-* Published APIs/resources/scopes (OAuth)
-* App roles (RBAC)
-* SSO metadata and configuration
-* User provisioning metadata and configuration
-* Proxy metadata and configuration
+- Name, logo, and publisher
+- Redirect URIs
+- Secrets (symmetric and/or asymmetric keys used to authenticate the application)
+- API dependencies (OAuth)
+- Published APIs/resources/scopes (OAuth)
+- App roles
+- Single sign-on (SSO) metadata and configuration
+- User provisioning metadata and configuration
+- Proxy metadata and configuration
Application objects can be created through multiple pathways, including:
-* Application registrations in the Azure portal
-* Creating a new application using Visual Studio and configuring it to use Azure AD authentication
-* When an admin adds an application from the app gallery (which will also create a service principal)
-* Using the Microsoft Graph API or PowerShell to create a new application
-* Many others including various developer experiences in Azure and in API explorer experiences across developer centers
+- Application registrations in the Azure portal
+- Creating a new application using Visual Studio and configuring it to use Azure AD authentication
+- When an admin adds an application from the app gallery (which will also create a service principal)
+- Using the Microsoft Graph API or PowerShell to create a new application
+- Many others including various developer experiences in Azure and in API explorer experiences across developer centers
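As a sketch of the Microsoft Graph pathway, the request body for `POST /v1.0/applications` needs little more than a display name; the name and audience below are illustrative, and a real call also requires an authenticated HTTP request.

```python
import json

# Minimal request body for creating an application via Microsoft Graph
# (POST /v1.0/applications). Only displayName is required; the service
# fills in defaults such as the generated appId.
payload = {
    "displayName": "Example app",      # hypothetical name
    "signInAudience": "AzureADMyOrg",  # single-tenant (the default)
}
body = json.dumps(payload)
print(body)
```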
## What are service principals and where do they come from?
-You can manage [service principals](app-objects-and-service-principals.md#service-principal-object) in the Azure portal through the [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) experience. Service principals are what govern an application connecting to Azure AD and can be considered the instance of the application in your directory. For any given application, it can have at most one application object (which is registered in a "home" directory) and one or more service principal objects representing instances of the application in every directory in which it acts.
+You can manage [service principals](app-objects-and-service-principals.md#service-principal-object) in the Azure portal through the [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) experience. Service principals are what govern an application connecting to Azure AD and can be considered the instance of the application in your directory. For any given application, it can have at most one application object (which is registered in a "home" directory), and one or more service principal objects representing instances of the application in every directory in which it acts.
The service principal can include:
-* A reference back to an application object through the application ID property
-* Records of local user and group application-role assignments
-* Records of local user and admin permissions granted to the application
- * For example: permission for the application to access a particular user's email
-* Records of local policies including Conditional Access policy
-* Records of alternate local settings for an application
- * Claims transformation rules
- * Attribute mappings (User provisioning)
- * Directory-specific app roles (if the application supports custom roles)
- * Directory-specific name or logo
+- A reference back to an application object through the application ID property
+- Records of local user and group application-role assignments
+- Records of local user and admin permissions granted to the application
+ - For example: permission for the application to access a particular user's email
+- Records of local policies including Conditional Access policy
+- Records of alternate local settings for an application
+ - Claims transformation rules
+ - Attribute mappings (User provisioning)
+ - Directory-specific app roles (if the application supports custom roles)
+ - Directory-specific name or logo
Like application objects, service principals can also be created through multiple pathways including:
-* When users sign in to a third-party application integrated with Azure AD
- * During sign-in, users are asked to give permission to the application to access their profile and other permissions. The first person to give consent causes a service principal that represents the application to be added to the directory.
-* When users sign in to Microsoft online services like [Microsoft 365](https://products.office.com/)
- * When you subscribe to Microsoft 365 or begin a trial, one or more service principals are created in the directory representing the various services that are used to deliver all of the functionality associated with Microsoft 365.
- * Some Microsoft 365 services like SharePoint create service principals on an ongoing basis to allow secure communication between components including workflows.
-* When an admin adds an application from the app gallery (this will also create an underlying app object)
-* Add an application to use the [Azure AD Application Proxy](../app-proxy/application-proxy.md)
-* Connect an application for single sign on using SAML or password single sign-on (SSO)
-* Programmatically via the Microsoft Graph API or PowerShell
+- When users sign in to a third-party application integrated with Azure AD
+ - During sign-in, users are asked to give permission to the application to access their profile and other permissions. The first person to give consent causes a service principal that represents the application to be added to the directory.
+- When users sign in to Microsoft online services like [Microsoft 365](https://products.office.com/)
+ - When you subscribe to Microsoft 365 or begin a trial, one or more service principals are created in the directory representing the various services that are used to deliver all of the functionality associated with Microsoft 365.
+ - Some Microsoft 365 services like SharePoint create service principals on an ongoing basis to allow secure communication between components including workflows.
+- When an admin adds an application from the app gallery (this will also create an underlying app object)
+- Add an application to use the [Azure AD Application Proxy](../app-proxy/application-proxy.md)
+- Connect an application for SSO using SAML or password SSO
+- Programmatically via the Microsoft Graph API or PowerShell
## How are application objects and service principals related to each other?
An application has one application object in its home directory that is referenc
In the preceding diagram, Microsoft maintains two directories internally (shown on the left) that it uses to publish applications:
-* One for Microsoft Apps (Microsoft services directory)
-* One for pre-integrated third-party applications (App gallery directory)
+- One for Microsoft Apps (Microsoft services directory)
+- One for pre-integrated third-party applications (App gallery directory)
-Application publishers/vendors who integrate with Azure AD are required to have a publishing directory (shown on the right as "Some SaaS Directory").
+Application publishers/vendors who integrate with Azure AD are required to have a publishing directory (shown on the right as "Some software as a service (SaaS) Directory").
Applications that you add yourself (represented as **App (yours)** in the diagram) include:
-* Apps you developed (integrated with Azure AD)
-* Apps you connected for single-sign-on
-* Apps you published using the Azure AD application proxy
+- Apps you developed (integrated with Azure AD)
+- Apps you connected for SSO
+- Apps you published using the Azure AD application proxy
### Notes and exceptions
-* Not all service principals point back to an application object. When Azure AD was originally built the services provided to applications were more limited and the service principal was sufficient for establishing an application identity. The original service principal was closer in shape to the Windows Server Active Directory service account. For this reason, it's still possible to create service principals through different pathways, such as using Azure AD PowerShell, without first creating an application object. The Microsoft Graph API requires an application object before creating a service principal.
-* Not all of the information described above is currently exposed programmatically. The following are only available in the UI:
- * Claims transformation rules
- * Attribute mappings (User provisioning)
-* For more detailed information on the service principal and application objects, see the Microsoft Graph API reference documentation:
- * [Application](/graph/api/resources/application)
- * [Service Principal](/graph/api/resources/serviceprincipal)
+- Not all service principals point back to an application object. When Azure AD was originally built the services provided to applications were more limited, and the service principal was sufficient for establishing an application identity. The original service principal was closer in shape to the Windows Server Active Directory service account. For this reason, it's still possible to create service principals through different pathways, such as using Azure AD PowerShell, without first creating an application object. The Microsoft Graph API requires an application object before creating a service principal.
+- Not all of the information described above is currently exposed programmatically. The following are only available in the UI:
+ - Claims transformation rules
+ - Attribute mappings (User provisioning)
+- For more detailed information on the service principal and application objects, see the Microsoft Graph API reference documentation:
+ - [Application](/graph/api/resources/application)
+ - [Service Principal](/graph/api/resources/serviceprincipal)
## Why do applications integrate with Azure AD?
-Applications are added to Azure AD to leverage one or more of the services it provides including:
+Applications are added to Azure AD to use one or more of the services it provides including:
-* Application authentication and authorization
-* User authentication and authorization
-* SSO using federation or password
-* User provisioning and synchronization
-* Role-based access control - Use the directory to define application roles to perform role-based authorization checks in an application
-* OAuth authorization services - Used by Microsoft 365 and other Microsoft applications to authorize access to APIs/resources
-* Application publishing and proxy - Publish an application from a private network to the internet
-* Directory schema extension attributes - [Extend the schema of service principal and user objects](active-directory-schema-extensions.md) to store additional data in Azure AD
+- Application authentication and authorization
+- User authentication and authorization
+- SSO using federation or password
+- User provisioning and synchronization
+- Role-based access control (RBAC) - Use the directory to define application roles to perform role-based authorization checks in an application
+- OAuth authorization services - Used by Microsoft 365 and other Microsoft applications to authorize access to APIs/resources
+- Application publishing and proxy - Publish an application from a private network to the internet
+- Directory schema extension attributes - [Extend the schema of service principal and user objects](active-directory-schema-extensions.md) to store additional data in Azure AD
## Who has permission to add applications to my Azure AD instance?
-While there are some tasks that only global administrators can do (such as adding applications from the app gallery and configuring an application to use the Application Proxy) by default all users in your directory have rights to register application objects that they are developing and discretion over which applications they share/give access to their organizational data through consent. If a person is the first user in your directory to sign in to an application and grant consent, that will create a service principal in your tenant; otherwise, the consent grant information will be stored on the existing service principal.
+While there are some tasks that only global administrators can do (such as adding applications from the app gallery, and configuring an application to use the Application Proxy), by default all users in your directory have rights to register application objects that they're developing, and discretion over which applications they share or give access to their organizational data through consent. If a person is the first user in your directory to sign in to an application and grant consent, that will create a service principal in your tenant. Otherwise, the consent grant information will be stored on the existing service principal.
-Allowing users to register and consent to applications might initially sound concerning, but keep the following in mind:
+Allowing users to register and consent to applications might initially sound concerning, but keep the following reasons in mind:
-
-* Applications have been able to leverage Windows Server Active Directory for user authentication for many years without requiring the application to be registered or recorded in the directory. Now the organization will have improved visibility to exactly how many applications are using the directory and for what purpose.
-* Delegating these responsibilities to users negates the need for an admin-driven application registration and publishing process. With Active Directory Federation Services (ADFS) it was likely that an admin had to add an application as a relying party on behalf of their developers. Now developers can self-service.
-* Users signing in to applications using their organization accounts for business purposes is a good thing. If they subsequently leave the organization they will automatically lose access to their account in the application they were using.
-* Having a record of what data was shared with which application is a good thing. Data is more transportable than ever and it's useful to have a clear record of who shared what data with which applications.
-* API owners who use Azure AD for OAuth decide exactly what permissions users are able to grant to applications and which permissions require an admin to agree to. Only admins can consent to larger scopes and more significant permissions, while user consent is scoped to the users' own data and capabilities.
-* When a user adds or allows an application to access their data, the event can be audited so you can view the Audit Reports within the Azure portal to determine how an application was added to the directory.
+- Applications have been able to use Windows Server Active Directory for user authentication for many years without requiring the application to be registered or recorded in the directory. Now the organization will have improved visibility to exactly how many applications are using the directory and for what purpose.
+- Delegating these responsibilities to users negates the need for an admin-driven application registration and publishing process. With Active Directory Federation Services (ADFS) it was likely that an admin had to add an application as a relying party on behalf of their developers. Now developers can self-service.
+- Users signing in to applications using their organization accounts for business purposes is a good thing. If they subsequently leave the organization they'll automatically lose access to their account in the application they were using.
+- Having a record of what data was shared with which application is a good thing. Data is more transportable than ever and it's useful to have a clear record of who shared what data with which applications.
+- API owners who use Azure AD for OAuth decide exactly what permissions users are able to grant to applications and which permissions require an admin to agree to. Only admins can consent to larger scopes and more significant permissions, while user consent is scoped to the users' own data and capabilities.
+- When a user adds or allows an application to access their data, the event can be audited so you can view the Audit Reports within the Azure portal to determine how an application was added to the directory.
If you still want to prevent users in your directory from registering applications and from signing in to applications without administrator approval, there are two settings that you can change to turn off those capabilities:
-* To change the user consent settings in your organization, see [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
+- To change the user consent settings in your organization, see [Configure how users consent to applications](../manage-apps/configure-user-consent.md).
-* To prevent users from registering their own applications:
- 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) section under Azure Active Directory
+- To prevent users from registering their own applications:
+ 1. In the Azure portal, go to the [User settings](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/UserSettings) section under **App registrations**
2. Change **Users can register applications** to **No**.
-> [!NOTE]
-> Microsoft itself uses the default configuration allowing users to register applications and only allows user consent for a very limited set of permissions.
- <!--Image references-->
-[apps_service_principals_directory]:../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
+
+[apps_service_principals_directory]: ../media/active-directory-how-applications-are-added/HowAppsAreAddedToAAD.jpg
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
Previously updated : 07/20/2022 Last updated : 11/02/2022
active-directory Application Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/application-model.md
Previously updated : 09/27/2021 Last updated : 11/02/2022
active-directory Authentication Vs Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-vs-authorization.md
Previously updated : 08/26/2022 Last updated : 11/02/2022
active-directory Howto Modify Supported Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-modify-supported-accounts.md
Previously updated : 11/15/2020 Last updated : 11/02/2022 -+ # Customer intent: As an application developer, I need to know how to modify which account types can sign in to or access my application or API.
To specify a different setting for the account types supported by an existing ap
1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which the app is registered.
1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations**, then select your application.
-1. Now, specify who can use the application, sometimes referred to as the *sign-in audience*.
-
- | Supported account types | Description |
- |-|-|
- | **Accounts in this organizational directory only** | Select this option if you're building an application for use only by users (or guests) in *your* tenant.<br><br>Often called a *line-of-business* (LOB) application, this is a **single-tenant** application in the Microsoft identity platform. |
- | **Accounts in any organizational directory** | Select this option if you'd like users in *any* Azure AD tenant to be able to use your application. This option is appropriate if, for example, you're building a software-as-a-service (SaaS) application that you intend to provide to multiple organizations.<br><br>This is known as a **multi-tenant** application in the Microsoft identity platform. |
-1. Select **Save**.
+1. Under **Manage**, select **App registrations**, select your application, and then select **Manifest** to use the manifest editor.
+1. Download the manifest JSON file locally.
+1. Now, specify who can use the application, sometimes referred to as the *sign-in audience*. Find the *signInAudience* property in the manifest JSON file and set it to one of the following property values:
+
+ | Property value | Supported account types | Description |
+ |-|-|-|
+ | **AzureADMyOrg** | Accounts in this organizational directory only (Microsoft only - Single tenant) | All user and guest accounts in your directory can use your application or API. Use this option if your target audience is internal to your organization. |
+ | **AzureADMultipleOrgs** | Accounts in any organizational directory (Any Azure AD directory - Multitenant) | All users with a work or school account from Microsoft can use your application or API. This includes schools and businesses that use Office 365. Use this option if your target audience is business or educational customers and to enable multitenancy. |
+ | **AzureADandPersonalMicrosoftAccount** | Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox) | All users with a work or school, or personal Microsoft account can use your application or API. It includes schools and businesses that use Office 365 as well as personal accounts that are used to sign in to services like Xbox and Skype. Use this option to target the widest set of Microsoft identities and to enable multitenancy.|
+ | **PersonalMicrosoftAccount** | Personal Microsoft accounts only | Only personal accounts that are used to sign in to services like Xbox and Skype can use your application or API. |
+1. Save your changes to the JSON file locally, then select **Upload** in the manifest editor to upload the updated manifest JSON file.
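The local edit in the steps above can be sketched as follows, assuming a pared-down manifest with only the relevant keys (real manifests contain many more properties, and the `appId` below is a placeholder):

```python
import json

# Hypothetical fragment of a downloaded manifest JSON file.
manifest = {
    "appId": "00000000-0000-0000-0000-000000000000",
    "signInAudience": "AzureADMyOrg",
}

# Switch the app from single-tenant to multitenant before re-uploading.
manifest["signInAudience"] = "AzureADMultipleOrgs"

updated_json = json.dumps(manifest, indent=2)
print(updated_json)
```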
### Why changing to multi-tenant can fail
active-directory Reference V2 Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md
Previously updated : 03/30/2021 Last updated : 10/28/2022 -+ # Customer intent: As a developer, I want to know whether there's a Microsoft Authentication Library (MSAL) available for the language/framework I'm using to build my application, and whether the library is GA or in preview.
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Previously updated : 09/27/2021 Last updated : 11/1/2022 -+ #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform. # Security tokens
-A centralized identity provider is especially useful for apps that have users located around the globe who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as [access tokens](developer-glossary.md#access-token), [refresh tokens](developer-glossary.md#refresh-token), and [ID tokens](developer-glossary.md#id-token). Security tokens allow a [client application](developer-glossary.md#client-application) to access protected resources on a [resource server](developer-glossary.md#resource-server).
+A centralized identity provider is especially useful for apps that have worldwide users who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as [access tokens](developer-glossary.md#access-token), [refresh tokens](developer-glossary.md#refresh-token), and [ID tokens](developer-glossary.md#id-token). Security tokens allow a [client application](developer-glossary.md#client-application) to access protected resources on a [resource server](developer-glossary.md#resource-server).
-**Access token**: An access token is a security token that's issued by an [authorization server](developer-glossary.md#authorization-server) as part of an [OAuth 2.0](active-directory-v2-protocols.md) flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Access tokens are validated by resources to grant access to a client app. To learn more about how the Microsoft identity platform issues access tokens, see [Access tokens](access-tokens.md).
+**Access token**: An access token is a security token issued by an [authorization server](developer-glossary.md#authorization-server) as part of an [OAuth 2.0](active-directory-v2-protocols.md) flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Access tokens are validated by resources to grant access to a client app. To learn more about how the Microsoft identity platform issues access tokens, see [Access tokens](access-tokens.md).
**Refresh token**: Because access tokens are valid for only a short period of time, authorization servers will sometimes issue a refresh token at the same time the access token is issued. The client application can then exchange this refresh token for a new access token when needed. To learn more about how the Microsoft identity platform uses refresh tokens to revoke permissions, see [Refresh tokens](refresh-tokens.md).

**ID token**: ID tokens are sent to the client application as part of an [OpenID Connect](v2-protocols-oidc.md) flow. They can be sent alongside or instead of an access token. ID tokens are used by the client to authenticate the user. To learn more about how the Microsoft identity platform issues ID tokens, see [ID tokens](id-tokens.md).
-> [!NOTE]
-> This article discusses security tokens used by the OAuth2 and OpenID Connect protocols. Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [Azure Active Directory SAML token reference](reference-saml-tokens.md).
+Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [Azure Active Directory SAML token reference](reference-saml-tokens.md).
## Validate security tokens

It's up to the app for which the token was generated, the web app that signed in the user, or the web API being called to validate the token. The token is signed by the authorization server with a private key. The authorization server publishes the corresponding public key. To validate a token, the app verifies the signature by using the authorization server's public key to validate that the signature was created using the private key.
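To make these validation steps concrete, here's a minimal Python sketch that decodes a token's header and payload and checks the audience and expiry claims. It deliberately skips signature verification: a real app must verify the signature against the authorization server's published public keys, typically with a JWT library. The demo token and all of its claim values are hypothetical.

```python
import base64
import json
import time

def b64url_decode(segment: str) -> bytes:
    # JWTs use base64url without padding; restore padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def inspect_token(token: str, expected_audience: str) -> dict:
    """Decode a JWT's header and payload and run basic lifetime/audience checks.
    NOTE: this does NOT verify the signature; production code must validate it
    against the authorization server's published public keys."""
    header_seg, payload_seg, _signature_seg = token.split(".")
    header = json.loads(b64url_decode(header_seg))
    claims = json.loads(b64url_decode(payload_seg))
    if claims["aud"] != expected_audience:
        raise ValueError("token was issued for a different audience")
    if claims["exp"] < time.time():
        raise ValueError("token has expired")
    return {"alg": header["alg"], "claims": claims}

def make_demo_token() -> str:
    # Hypothetical unsigned token for illustration only.
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {"aud": "api://my-app", "exp": int(time.time()) + 3600, "sub": "user-123"}
    enc = lambda obj: base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")
    return f"{enc(header)}.{enc(claims)}.fake-signature"

result = inspect_token(make_demo_token(), "api://my-app")
```

In practice you would delegate all of this, including signature and issuer checks, to a maintained JWT/OIDC library rather than hand-rolling it.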
-Tokens are valid for only a limited amount of time. Usually, the authorization server provides a pair of tokens, such as:
+Tokens are valid for only a limited amount of time, so the authorization server frequently provides a pair of tokens:
* An access token, which accesses the application or protected resource.
* A refresh token, which is used to refresh the access token when the access token is close to expiring.
A claim consists of key-value pairs that provide information such as the:
* Security Token Server that generated the token.
* Date when the token was generated.
-* Subject (such as the user--except for daemons).
+* Subject (like the user, but not daemons).
* Audience, which is the app for which the token was generated.
-* App (the client) that asked for the token. In the case of web apps, this app might be the same as the audience.
+* App (the client) that asked for the token. For web apps, this app might be the same as the audience.
To learn more about how the Microsoft identity platform implements tokens and claim information, see [Access tokens](access-tokens.md) and [ID tokens](id-tokens.md).
active-directory Single And Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-and-multi-tenant-apps.md
Previously updated : 10/13/2021 Last updated : 11/02/2022
active-directory V2 Permissions And Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-permissions-and-consent.md
- Title: Microsoft identity platform scopes, permissions, & consent
-description: Learn about authorization in the Microsoft identity platform endpoint, including scopes, permissions, and consent.
-Previously updated : 04/21/2022
-# Permissions and consent in the Microsoft identity platform
-
-Applications that integrate with the Microsoft identity platform follow an authorization model that gives users and administrators control over how data can be accessed. The implementation of the authorization model has been updated on the Microsoft identity platform. It changes how an app must interact with the Microsoft identity platform. This article covers the basic concepts of this authorization model, including scopes, permissions, and consent.
-
-## Scopes and permissions
-
-The Microsoft identity platform implements the [OAuth 2.0](active-directory-v2-protocols.md) authorization protocol. OAuth 2.0 is a method through which a third-party app can access web-hosted resources on behalf of a user. Any web-hosted resource that integrates with the Microsoft identity platform has a resource identifier, or *application ID URI*.
-
-Here are some examples of Microsoft web-hosted resources:
-
-* Microsoft Graph: `https://graph.microsoft.com`
-* Microsoft 365 Mail API: `https://outlook.office.com`
-* Azure Key Vault: `https://vault.azure.net`
-
-The same is true for any third-party resources that have integrated with the Microsoft identity platform. Any of these resources also can define a set of permissions that can be used to divide the functionality of that resource into smaller chunks. As an example, [Microsoft Graph](https://graph.microsoft.com) has defined permissions to do the following tasks, among others:
-
-* Read a user's calendar
-* Write to a user's calendar
-* Send mail as a user
-
-Because of these types of permission definitions, the resource has fine-grained control over its data and how API functionality is exposed. A third-party app can request these permissions from users and administrators, who must approve the request before the app can access data or act on a user's behalf.
-
-When a resource's functionality is chunked into small permission sets, third-party apps can be built to request only the permissions that they need to perform their function. Users and administrators can know what data the app can access. And they can be more confident that the app isn't behaving with malicious intent. Developers should always abide by the principle of least privilege, asking for only the permissions they need for their applications to function.
-
-In OAuth 2.0, these types of permission sets are called *scopes*. They're also often referred to as *permissions*. In the Microsoft identity platform, a permission is represented as a string value. An app requests the permissions it needs by specifying the permission in the `scope` query parameter. The Microsoft identity platform supports several well-defined [OpenID Connect scopes](#openid-connect-scopes) as well as resource-based permissions (each permission is indicated by appending the permission value to the resource's identifier or application ID URI). For example, the permission string `https://graph.microsoft.com/Calendars.Read` is used to request permission to read users' calendars in Microsoft Graph.
-
-An app most commonly requests these permissions by specifying the scopes in requests to the Microsoft identity platform authorize endpoint. However, some high-privilege permissions can be granted only through administrator consent. They can be requested or granted by using the [administrator consent endpoint](#admin-restricted-permissions). Keep reading to learn more.
-
-In requests to the authorization, token or consent endpoints for the Microsoft Identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
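As a sketch of this defaulting rule, the hypothetical helper below (not part of any SDK) expands a bare permission value to its Graph-qualified form while leaving fully qualified scopes and the well-known OpenID Connect scopes untouched:

```python
GRAPH = "https://graph.microsoft.com"

def normalize_scope(scope: str, default_resource: str = GRAPH) -> str:
    """Expand a bare permission value to a full resource-qualified scope.
    Scopes that already carry a resource identifier (an https URI), and the
    well-known OpenID Connect scopes, are returned unchanged."""
    oidc_scopes = {"openid", "profile", "email", "offline_access"}
    if scope in oidc_scopes or scope.startswith("https://"):
        return scope
    return f"{default_resource}/{scope}"
```

For example, `normalize_scope("User.Read")` mirrors how the endpoint interprets `scope=User.Read`.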
-
-## Permission types
-
-The Microsoft identity platform supports two types of permissions: *delegated permissions* and *application permissions*.
-
-* **Delegated permissions** are used by apps that have a signed-in user present. For these apps, either the user or an administrator consents to the permissions that the app requests. The app is delegated with the permission to act as a signed-in user when it makes calls to the target resource.
-
- Some delegated permissions can be consented to by nonadministrators. But some high-privileged permissions require [administrator consent](#admin-restricted-permissions). To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure Active Directory (Azure AD)](../roles/permissions-reference.md).
-
-* **Application permissions** are used by apps that run without a signed-in user present, for example, apps that run as background services or daemons. Only [an administrator can consent to](#requesting-consent-for-an-entire-tenant) application permissions.
-
-_Effective permissions_ are the permissions that your app has when it makes requests to the target resource. It's important to understand the difference between the delegated permissions and application permissions that your app is granted, and the effective permissions your app is granted when it makes calls to the target resource.
-
-- For delegated permissions, the _effective permissions_ of your app are the least-privileged intersection of the delegated permissions the app has been granted (by consent) and the privileges of the currently signed-in user. Your app can never have more privileges than the signed-in user.
-
- Within organizations, the privileges of the signed-in user can be determined by policy or by membership in one or more administrator roles. To learn which administrator roles can consent to delegated permissions, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
-
- For example, assume your app has been granted the _User.ReadWrite.All_ delegated permission. This permission nominally grants your app permission to read and update the profile of every user in an organization. If the signed-in user is a global administrator, your app can update the profile of every user in the organization. However, if the signed-in user doesn't have an administrator role, your app can update only the profile of the signed-in user. It can't update the profiles of other users in the organization because the user that it has permission to act on behalf of doesn't have those privileges.
-
-- For application permissions, the _effective permissions_ of your app are the full level of privileges implied by the permission. For example, an app that has the _User.ReadWrite.All_ application permission can update the profile of every user in the organization.
-
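The delegated-permission rule can be modeled as a set intersection. This is a deliberately simplified illustration; the privilege labels are hypothetical, and real privilege evaluation is performed by the resource, not by the client:

```python
def effective_delegated_permissions(granted_to_app, user_privileges):
    """For delegated permissions, the effective permissions are the
    least-privileged intersection of what the app was granted (by consent)
    and what the signed-in user can actually do. Illustrative model only."""
    return set(granted_to_app) & set(user_privileges)

# The app was consented User.ReadWrite.All, but a non-admin user can only
# read profiles and write their own, so the app is bounded by the user.
app_grants = {"User.Read", "User.ReadWrite.All"}
non_admin = {"User.Read", "User.ReadWrite.Self"}  # hypothetical privilege labels
effective = effective_delegated_permissions(app_grants, non_admin)
```

For application permissions there is no signed-in user to intersect with, which is why the grant is the full privilege implied by the permission.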
-## OpenID Connect scopes
-
-The Microsoft identity platform implementation of OpenID Connect has a few well-defined scopes that are also hosted on Microsoft Graph: `openid`, `email`, `profile`, and `offline_access`. The `address` and `phone` OpenID Connect scopes aren't supported.
-
-If you request the OpenID Connect scopes and a token, you'll get a token to call the [UserInfo endpoint](userinfo.md).
-
-### openid
-
-If an app signs in by using [OpenID Connect](active-directory-v2-protocols.md), it must request the `openid` scope. The `openid` scope appears on the work account consent page as the **Sign you in** permission.
-
-By using this permission, an app can receive a unique identifier for the user in the form of the `sub` claim. The permission also gives the app access to the UserInfo endpoint. The `openid` scope can be used at the Microsoft identity platform token endpoint to acquire ID tokens. The app can use these tokens for authentication.
-
-### email
-
-The `email` scope can be used with the `openid` scope and any other scopes. It gives the app access to the user's primary email address in the form of the `email` claim.
-
-The `email` claim is included in a token only if an email address is associated with the user account, which isn't always the case. If your app uses the `email` scope, the app needs to be able to handle a case in which no `email` claim exists in the token.
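Handling the possibly missing claim might look like the sketch below; falling back to `preferred_username` is one common choice for display purposes, not a platform requirement:

```python
def primary_email(claims):
    # The email claim is present only when the account actually has an email
    # address; fall back to preferred_username (often a UPN) or None rather
    # than raising a KeyError.
    return claims.get("email") or claims.get("preferred_username")
```

Whatever fallback you choose, avoid using `email` as a stable user identifier; the `sub` claim exists for that purpose.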
-
-### profile
-
-The `profile` scope can be used with the `openid` scope and any other scope. It gives the app access to a large amount of information about the user. The information it can access includes, but isn't limited to, the user's given name, surname, preferred username, and object ID.
-
-For a complete list of the `profile` claims available in the `id_tokens` parameter for a specific user, see the [`id_tokens` reference](id-tokens.md).
-
-### offline_access
-
-The [`offline_access` scope](https://openid.net/specs/openid-connect-core-1_0.html#OfflineAccess) gives your app access to resources on behalf of the user for an extended time. On the consent page, this scope appears as the **Maintain access to data you have given it access to** permission.
-
-When a user approves the `offline_access` scope, your app can receive refresh tokens from the Microsoft identity platform token endpoint. Refresh tokens are long-lived. Your app can get new access tokens as older ones expire.
-
-> [!NOTE]
-> This permission currently appears on all consent pages, even for flows that don't provide a refresh token (such as the [implicit flow](v2-oauth2-implicit-grant-flow.md)). This setup addresses scenarios where a client can begin within the implicit flow and then move to the code flow where a refresh token is expected.
-
-On the Microsoft identity platform (requests made to the v2.0 endpoint), your app must explicitly request the `offline_access` scope to receive refresh tokens. Otherwise, when you redeem an authorization code in the [OAuth 2.0 authorization code flow](active-directory-v2-protocols.md), you'll receive only an access token from the `/token` endpoint.
-
-The access token is valid for a short time. It usually expires in one hour. At that point, your app needs to redirect the user back to the `/authorize` endpoint to get a new authorization code. During this redirect, depending on the type of app, the user might need to enter their credentials again or consent again to permissions.
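The decision the client faces as a token ages can be sketched as follows; the token dictionary shape here is illustrative, not an actual library type:

```python
def next_step(token, now):
    """Decide what a client should do as an access token ages.
    token: {'expires_at': epoch seconds, 'refresh_token': str or None}"""
    if now < token["expires_at"]:
        return "use-access-token"
    if token.get("refresh_token"):   # present only if offline_access was consented
        return "redeem-refresh-token"
    return "redirect-to-authorize"   # user may need to sign in or consent again
```

Libraries such as MSAL implement this logic (plus caching) for you; the sketch just makes the branching explicit.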
-
-For more information about how to get and use refresh tokens, see the [Microsoft identity platform protocol reference](active-directory-v2-protocols.md).
-
-## Consent types
-
-Applications in the Microsoft identity platform rely on consent to gain access to necessary resources or APIs. There are several kinds of consent that your app may need to know about to be successful. If you're defining permissions, you'll also need to understand how your users will gain access to your app or API.
-
-### Static user consent
-
-In the static user consent scenario, the app must specify all the permissions it needs in the app's configuration in the Azure portal. If the user (or administrator, as appropriate) hasn't granted consent for this app, the Microsoft identity platform prompts the user to provide consent at this time.
-
-Static permissions also enable administrators to [consent on behalf of all users](#requesting-consent-for-an-entire-tenant) in the organization.
-
-While the static permissions of the app defined in the Azure portal keep the code nice and simple, they present some possible issues for developers:
-
-- The app needs to request all the permissions it would ever need upon the user's first sign-in. This can lead to a long list of permissions that discourages end users from approving the app's access on initial sign-in.
-
-- The app needs to know all of the resources it would ever access ahead of time. It is difficult to create apps that could access an arbitrary number of resources.
-
-### Incremental and dynamic user consent
-
-With the Microsoft identity platform endpoint, you can ignore the static permissions defined in the app registration information in the Azure portal and request permissions incrementally instead. You can ask for a bare minimum set of permissions upfront and request more over time as the customer uses additional app features. To do so, specify the scopes your app needs at any time by including the new scopes in the `scope` parameter when [requesting an access token](#requesting-individual-user-consent), without the need to pre-define them in the application registration information. If the user hasn't yet consented to new scopes added to the request, they'll be prompted to consent only to the new permissions. Incremental (or dynamic) consent applies only to delegated permissions, not to application permissions.
-
-Allowing an app to request permissions dynamically through the `scope` parameter gives developers full control over the user's experience. You can also front-load your consent experience and ask for all permissions in one initial authorization request. If your app requires a large number of permissions, you can gather those permissions from the user incrementally as they try to use certain features of the app over time.
-
-> [!IMPORTANT]
-> Dynamic consent can be convenient, but presents a big challenge for permissions that require admin consent. The admin consent experience in the **App registrations** and **Enterprise applications** blades in the portal doesn't know about those dynamic permissions at consent time. We recommend that a developer list all the admin privileged permissions that are needed by the app in the portal. This enables tenant admins to consent on behalf of all their users in the portal, once. Users won't need to go through the consent experience for those permissions on sign in. The alternative is to use dynamic consent for those permissions. To grant admin consent, an individual admin signs in to the app, triggers a consent prompt for the appropriate permissions, and selects **consent for my entire org** in the consent dialogue.
-
-### Admin consent
-
-[Admin consent](#using-the-admin-consent-endpoint) is required when your app needs access to certain high-privilege permissions. Admin consent ensures that administrators have some additional controls before authorizing apps or users to access highly privileged data from the organization.
-
-[Admin consent done on behalf of an organization](#requesting-consent-for-an-entire-tenant) is highly recommended if your app has an enterprise audience. Admin consent done on behalf of an organization requires the static permissions to be registered for the app in the portal. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization. The admin can consent to those permissions on behalf of all users in the org, once. The users will not need to go through the consent experience for those permissions when signing in to the app. This is easier for users and reduces the cycles required by the organization admin to set up the application.
-
-## Requesting individual user consent
-
-In an [OpenID Connect or OAuth 2.0](active-directory-v2-protocols.md) authorization request, an app can request the permissions it needs by using the `scope` query parameter. For example, when a user signs in to an app, the app sends a request like the following example. (Line breaks are added for legibility.)
-
-```HTTP
-GET https://login.microsoftonline.com/common/oauth2/v2.0/authorize?
-client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&response_type=code
-&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F
-&response_mode=query
-&scope=
-https%3A%2F%2Fgraph.microsoft.com%2Fcalendars.read%20
-https%3A%2F%2Fgraph.microsoft.com%2Fmail.send
-&state=12345
-```
-
-The `scope` parameter is a space-separated list of delegated permissions that the app is requesting. Each permission is indicated by appending the permission value to the resource's identifier (the application ID URI). In the request example, the app needs permission to read the user's calendar and send mail as the user.
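Building the same request programmatically might look like the following sketch, which uses Python's standard library to percent-encode the space-separated scope list; the client ID and redirect URI are taken from the example above:

```python
from urllib.parse import urlencode

AUTHORIZE = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "response_type": "code",
    "redirect_uri": "http://localhost/myapp/",
    "response_mode": "query",
    # Space-separated delegated permissions; urlencode percent-escapes them.
    "scope": "https://graph.microsoft.com/calendars.read https://graph.microsoft.com/mail.send",
    "state": "12345",
}
url = f"{AUTHORIZE}?{urlencode(params)}"
```

An auth library normally builds this URL for you; the point of the sketch is that `scope` is a single space-separated string, encoded like any other query parameter.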
-
-After the user enters their credentials, the Microsoft identity platform checks for a matching record of *user consent*. If the user hasn't consented to any of the requested permissions in the past, and if the administrator hasn't consented to these permissions on behalf of the entire organization, the Microsoft identity platform asks the user to grant the requested permissions.
-
-At this time, the `offline_access` ("Maintain access to data you have given it access to") permission and `User.Read` ("Sign you in and read your profile") permission are automatically included in the initial consent to an application. These permissions are generally required for proper app functionality. The `offline_access` permission gives the app access to refresh tokens that are critical for native apps and web apps. The `User.Read` permission gives access to the `sub` claim. It allows the client or app to correctly identify the user over time and access rudimentary user information.
--
-When the user approves the permission request, consent is recorded. The user doesn't have to consent again when they later sign in to the application.
-
-## Requesting consent for an entire tenant
-
-When an organization purchases a license or subscription for an application, the organization often wants to proactively set up the application for use by all members of the organization. As part of this process, an administrator can grant consent for the application to act on behalf of any user in the tenant. If the admin grants consent for the entire tenant, the organization's users don't see a consent page for the application.
-
-Admin consent done on behalf of an organization requires the static permissions registered for the app. Set those permissions for apps in the app registration portal if you need an admin to give consent on behalf of the entire organization.
-
-To request consent for delegated permissions for all users in a tenant, your app can use the [admin consent endpoint](#using-the-admin-consent-endpoint).
-
-Additionally, applications must use the admin consent endpoint to request application permissions.
-
-## Admin-restricted permissions
-
-Some high-privilege permissions in Microsoft resources can be set to *admin-restricted*. Here are some examples of these kinds of permissions:
-
-* Read all users' full profiles by using `User.Read.All`
-* Write data to an organization's directory by using `Directory.ReadWrite.All`
-* Read all groups in an organization's directory by using `Group.Read.All`
-
-> [!NOTE]
->In requests to the authorization, token or consent endpoints for the Microsoft Identity platform, if the resource identifier is omitted in the scope parameter, the resource is assumed to be Microsoft Graph. For example, `scope=User.Read` is equivalent to `https://graph.microsoft.com/User.Read`.
-
-Although a consumer user might grant an application access to this kind of data, organizational users can't grant access to the same set of sensitive company data. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions.
-
-If your app requires scopes for admin-restricted permissions, an organization's administrator must consent to those scopes on behalf of the organization's users. To avoid displaying prompts to users that request consent for permissions they can't grant, your app can use the admin consent endpoint. The admin consent endpoint is covered in the next section.
-
-If the application requests high-privilege delegated permissions and an administrator grants these permissions through the admin consent endpoint, consent is granted for all users in the tenant.
-
-If the application requests application permissions and an administrator grants these permissions through the admin consent endpoint, this grant isn't done on behalf of any specific user. Instead, the client application is granted permissions *directly*. These types of permissions are used only by daemon services and other noninteractive applications that run in the background.
-
-## Using the admin consent endpoint
-
-After you use the admin consent endpoint to grant admin consent, you're finished. Users don't need to take any further action. After admin consent is granted, users can get an access token through a typical auth flow. The resulting access token has the consented permissions.
-
-When a Global Administrator uses your application and is directed to the authorize endpoint, the Microsoft identity platform detects the user's role. It asks if the Global Administrator wants to consent on behalf of the entire tenant for the permissions you requested. You could instead use a dedicated admin consent endpoint to proactively request an administrator to grant permission on behalf of the entire tenant. This endpoint is also necessary for requesting application permissions. Application permissions can't be requested by using the authorize endpoint.
-
-If you follow these steps, your app can request permissions for all users in a tenant, including admin-restricted scopes. This operation is high privilege. Use the operation only if necessary for your scenario.
-
-To see a code sample that implements the steps, see the [admin-restricted scopes sample](https://github.com/Azure-Samples/active-directory-dotnet-admin-restricted-scopes-v2) in GitHub.
-
-### Request the permissions in the app registration portal
-
-In the app registration portal, applications can list the permissions they require, including both delegated permissions and application permissions. This setup allows the use of the `.default` scope and the Azure portal's **Grant admin consent** option.
-
-In general, the permissions should be statically defined for a given application. They should be a superset of the permissions that the app will request dynamically or incrementally.
-
-> [!NOTE]
->Application permissions can be requested only through the use of [`.default`](#the-default-scope). So if your app needs application permissions, make sure they're listed in the app registration portal.
-
-To configure the list of statically requested permissions for an application:
-
-1. Go to your application in the <a href="https://go.microsoft.com/fwlink/?linkid=2083908" target="_blank">Azure portal - App registrations</a> quickstart experience.
-1. Select an application, or [create an app](quickstart-register-app.md) if you haven't already.
-1. On the application's **Overview** page, under **Manage**, select **API Permissions** > **Add a permission**.
-1. Select **Microsoft Graph** from the list of available APIs. Then add the permissions that your app requires.
-1. Select **Add Permissions**.
-
-### Recommended: Sign the user in to your app
-
-Typically, when you build an application that uses the admin consent endpoint, the app needs a page or view in which the admin can approve the app's permissions. This page can be:
-
-* Part of the app's sign-up flow.
-* Part of the app's settings.
-* A dedicated "connect" flow.
-
-In many cases, it makes sense for the app to show this "connect" view only after a user has signed in with a work Microsoft account or school Microsoft account.
-
-When you sign the user in to your app, you can identify the organization to which the admin belongs before you ask them to approve the necessary permissions. Although this step isn't strictly necessary, it can help you create a more intuitive experience for your organizational users.
-
-To sign the user in, follow the [Microsoft identity platform protocol tutorials](active-directory-v2-protocols.md).
-
-### Request the permissions from a directory admin
-
-When you're ready to request permissions from your organization's admin, you can redirect the user to the Microsoft identity platform admin consent endpoint.
-
-```HTTP
-// Line breaks are for legibility only.
-GET https://login.microsoftonline.com/{tenant}/v2.0/adminconsent?
-client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&state=12345
-&redirect_uri=http://localhost/myapp/permissions
-&scope=
-https://graph.microsoft.com/calendars.read
-https://graph.microsoft.com/mail.send
-```
--
-| Parameter | Condition | Description |
-|:--|:--|:--|
-| `tenant` | Required | The directory tenant that you want to request permission from. It can be provided in a GUID or friendly name format. Or it can be generically referenced with organizations, as seen in the example. Don't use "common," because personal accounts can't provide admin consent except in the context of a tenant. To ensure the best compatibility with personal accounts that manage tenants, use the tenant ID when possible. |
-| `client_id` | Required | The application (client) ID that the [Azure portal – App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `redirect_uri` | Required |The redirect URI where you want the response to be sent for your app to handle. It must exactly match one of the redirect URIs that you registered in the app registration portal. |
-| `state` | Recommended | A value included in the request that will also be returned in the token response. It can be a string of any content you want. Use the state to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-|`scope` | Required | Defines the set of permissions being requested by the application. Scopes can be either static (using [`.default`](#the-default-scope)) or dynamic. This set can include the OpenID Connect scopes (`openid`, `profile`, `email`). If you need application permissions, you must use `.default` to request the statically configured list of permissions. |
--
-At this point, Azure AD requires a tenant administrator to sign in to complete the request. The administrator is asked to approve all the permissions that you requested in the `scope` parameter. If you used a static (`.default`) value, it will function like the v1.0 admin consent endpoint and request consent for all scopes found in the required permissions for the app.
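For illustration, a request like the one above could be assembled as follows; the helper name `admin_consent_url` is hypothetical:

```python
from urllib.parse import urlencode, quote

def admin_consent_url(tenant, client_id, redirect_uri, scopes, state):
    """Build the v2.0 admin consent URL. Scopes are space-separated; use
    '{resource}/.default' to request the statically configured permissions."""
    query = urlencode({
        "client_id": client_id,
        "state": state,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    })
    return f"https://login.microsoftonline.com/{quote(tenant)}/v2.0/adminconsent?{query}"

url = admin_consent_url(
    tenant="organizations",
    client_id="6731de76-14a6-49ae-97bc-6eba6914391e",
    redirect_uri="http://localhost/myapp/permissions",
    scopes=["https://graph.microsoft.com/.default"],
    state="12345",
)
```

The app redirects the administrator's browser to this URL; it isn't an API call the app makes directly.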
-
-#### Successful response
-
-If the admin approves the permissions for your app, the successful response looks like this:
-
-```HTTP
-GET http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True
-```
-
-| Parameter | Description |
-| | |
-| `tenant` | The directory tenant that granted your application the permissions it requested, in GUID format. |
-| `state` | A value included in the request that also will be returned in the token response. It can be a string of any content you want. The state is used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `admin_consent` | Will be set to `True`. |
-
-#### Error response
-
-If the admin doesn't approve the permissions for your app, the failed response looks like this:
-
-```HTTP
-GET http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request
-```
-
-| Parameter | Description |
-| | |
-| `error` | An error code string that can be used to classify types of errors that occur. It can also be used to react to errors. |
-| `error_description` | A specific error message that can help a developer identify the root cause of an error. |
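A redirect handler might classify the two outcomes like the sketch below (the function name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def parse_admin_consent_redirect(redirect_url):
    """Classify the redirect back from the admin consent endpoint as
    success (admin_consent=True) or failure (error/error_description)."""
    q = {k: v[0] for k, v in parse_qs(urlparse(redirect_url).query).items()}
    if q.get("admin_consent") == "True":
        return {"ok": True, "tenant": q.get("tenant"), "state": q.get("state")}
    return {"ok": False, "error": q.get("error"), "description": q.get("error_description")}

ok = parse_admin_consent_redirect(
    "http://localhost/myapp/permissions?tenant=a8990e1f-ff32-408a-9f8e-78d3b9139b95&state=12345&admin_consent=True")
err = parse_admin_consent_redirect(
    "http://localhost/myapp/permissions?error=permission_denied&error_description=The+admin+canceled+the+request")
```

Your handler should also verify that the returned `state` matches the value you sent, to protect against forged callbacks.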
-
-After you've received a successful response from the admin consent endpoint, your app has gained the permissions it requested. Next, you can request a token for the resource you want.
-
-## Using permissions
-
-After the user consents to permissions for your app, your app can acquire access tokens that represent the app's permission to access a resource in some capacity. An access token can be used only for a single resource. But encoded inside the access token is every permission that your app has been granted for that resource. To acquire an access token, your app can make a request to the Microsoft identity platform token endpoint, like this:
-
-```HTTP
-POST /common/oauth2/v2.0/token HTTP/1.1
-Host: login.microsoftonline.com
-Content-Type: application/x-www-form-urlencoded
-
-grant_type=authorization_code
-&client_id=6731de76-14a6-49ae-97bc-6eba6914391e
-&scope=https%3A%2F%2Foutlook.office.com%2FMail.Read%20https%3A%2F%2Foutlook.office.com%2Fmail.send
-&code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
-&redirect_uri=https%3A%2F%2Flocalhost%2Fmyapp
-&client_secret=zc53fwe80980293klaj9823   // NOTE: Only required for web apps
-```
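If you're assembling the request body yourself, note that the `/token` endpoint expects a form-encoded body rather than JSON. A sketch using the values from the example above:

```python
from urllib.parse import urlencode

# Form-encoded body for the token request; the v2.0 /token endpoint expects
# Content-Type: application/x-www-form-urlencoded.
body = urlencode({
    "grant_type": "authorization_code",
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "scope": "https://outlook.office.com/Mail.Read https://outlook.office.com/mail.send",
    "code": "AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...",
    "redirect_uri": "https://localhost/myapp",
    "client_secret": "zc53fwe80980293klaj9823",  # only required for confidential (web app) clients
})
```

In production, keep the client secret out of source code and prefer an auth library over hand-built token requests.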
-
-You can use the resulting access token in HTTP requests to the resource. It reliably indicates to the resource that your app has the proper permission to do a specific task.
-
-For more information about the OAuth 2.0 protocol and how to get access tokens, see the [Microsoft identity platform endpoint protocol reference](active-directory-v2-protocols.md).
-
-## The .default scope
-
-The `.default` scope is used to refer generically to a resource service (API) in a request, without identifying specific permissions. If consent is necessary, using `.default` signals that consent should be prompted for all required permissions listed in the application registration (for all APIs in the list).
-
-The scope parameter value is constructed by using the identifier URI for the resource and `.default`, separated by a forward slash (`/`). For example, if the resource's identifier URI is `https://contoso.com`, the scope to request is `https://contoso.com/.default`. For cases where you must include a second slash to correctly request the token, see the [section about trailing slashes](#trailing-slash-and-default).
-
-Using `scope={resource-identifier}/.default` is functionally the same as `resource={resource-identifier}` on the v1.0 endpoint (where `{resource-identifier}` is the identifier URI for the API, for example `https://graph.microsoft.com` for Microsoft Graph).
-
-The `.default` scope can be used in any OAuth 2.0 flow and to initiate [admin consent](v2-admin-consent.md). Its use is required in the [On-Behalf-Of flow](v2-oauth2-on-behalf-of-flow.md) and [client credentials flow](v2-oauth2-client-creds-grant-flow.md).
-
-Clients can't combine static (`.default`) consent and dynamic consent in a single request. So `scope=https://graph.microsoft.com/.default Mail.Read` results in an error because it combines scope types.
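To make the rule concrete, here's a minimal Python sketch of a client-side check for this constraint. The helper name is ours for illustration; it isn't how the token service actually validates requests, only a sketch of the rule it enforces:

```python
def validate_scope_param(scope: str) -> None:
    """Reject a scope parameter that mixes `.default` with named scopes."""
    scopes = scope.split()
    if any(s.endswith("/.default") for s in scopes) and len(scopes) > 1:
        raise ValueError("Cannot combine a .default scope with other scopes")

validate_scope_param("https://graph.microsoft.com/.default")  # OK: static consent only
# validate_scope_param("https://graph.microsoft.com/.default Mail.Read")  # raises ValueError
```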
-
-### .default when the user has already given consent
-
-The `.default` scope is functionally identical to the behavior of the `resource`-centric v1.0 endpoint. It carries the consent behavior of the v1.0 endpoint as well. That is, `.default` triggers a consent prompt only if consent has not been granted for any delegated permission between the client and the resource, on behalf of the signed-in user.
-
-If consent does exist, the returned token contains all scopes granted for that resource for the signed-in user. However, if no permission has been granted for the requested resource (or if the `prompt=consent` parameter has been provided), a consent prompt is shown for all required permissions configured on the client application registration, for all APIs in the list.
-
-For example, if the scope `https://graph.microsoft.com/.default` is requested, your application is requesting an access token for the Microsoft Graph API. If at least one delegated permission has been granted for Microsoft Graph on behalf of the signed-in user, the sign-in will continue and all Microsoft Graph delegated permissions which have been granted for that user will be included in the access token. If no permissions have been granted for the requested resource (Microsoft Graph, in this example), then a consent prompt will be presented for all required permissions configured on the application, for all APIs in the list.
-
-#### Example 1: The user, or tenant admin, has granted permissions
-
-In this example, the user or a tenant administrator has granted the `Mail.Read` and `User.Read` Microsoft Graph permissions to the client.
-
-If the client requests `scope=https://graph.microsoft.com/.default`, no consent prompt is shown, regardless of the contents of the client application's registered permissions for Microsoft Graph. The returned token contains the scopes `Mail.Read` and `User.Read`.
-
-#### Example 2: The user hasn't granted permissions between the client and the resource
-
-In this example, the user hasn't granted consent between the client and Microsoft Graph, nor has an administrator. The client has registered for the permissions `User.Read` and `Contacts.Read`. It has also registered for the Azure Key Vault scope `https://vault.azure.net/user_impersonation`.
-
-When the client requests a token for `scope=https://graph.microsoft.com/.default`, the user sees a consent page for the Microsoft Graph `User.Read` and `Contacts.Read` scopes, and for the Azure Key Vault `user_impersonation` scope. The returned token contains only the `User.Read` and `Contacts.Read` scopes, and it can be used only against Microsoft Graph.
-
-#### Example 3: The user has consented, and the client requests more scopes
-
-In this example, the user has already consented to `Mail.Read` for the client. The client has registered for the `Contacts.Read` scope.
-
-The client first performs a sign-in with `scope=https://graph.microsoft.com/.default`. Based on the `scopes` parameter of the response, the application's code detects that only `Mail.Read` has been granted. The client then initiates a second sign-in using `scope=https://graph.microsoft.com/.default`, and this time forces consent using `prompt=consent`. If the user is allowed to consent for all the permissions that the application registered, they will be shown the consent prompt. (If not, they will be shown an error message or the [admin consent request](../manage-apps/configure-admin-consent-workflow.md) form.) Both `Contacts.Read` and `Mail.Read` will be in the consent prompt. If consent is granted and the sign-in continues, the token returned is for Microsoft Graph, and contains `Mail.Read` and `Contacts.Read`.
-
-### Using the .default scope with the client
-
-In some cases, a client can request its own `.default` scope. The following example demonstrates this scenario.
-
-```http
-// Line breaks are for legibility only.
-
-GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize
- ?response_type=token //Code or a hybrid flow is also possible here
- &client_id=9ada6f8a-6d83-41bc-b169-a306c21527a5
- &scope=9ada6f8a-6d83-41bc-b169-a306c21527a5/.default
- &redirect_uri=https%3A%2F%2Flocalhost
- &state=1234
-```
-
-This code example produces a consent page for all registered permissions if the preceding descriptions of consent and `.default` apply to the scenario. Then the code returns an `id_token`, rather than an access token.
-
-This behavior accommodates some legacy clients that are moving from Azure AD Authentication Library (ADAL) to the Microsoft Authentication Library (MSAL). This setup *shouldn't* be used by new clients that target the Microsoft identity platform.
-
-### Client credentials grant flow and .default
-
-Another use of `.default` is to request app roles (also known as application permissions) in a non-interactive application like a daemon app that uses the [client credentials](v2-oauth2-client-creds-grant-flow.md) grant flow to call a web API.
-
-To define app roles (application permissions) for a web API, see [Add app roles in your application](howto-add-app-roles-in-azure-ad-apps.md).
-
-Client credentials requests in your client service *must* include `scope={resource}/.default`. Here, `{resource}` is the web API that your app intends to call, and wishes to obtain an access token for. Issuing a client credentials request by using individual application permissions (roles) is *not* supported. All the app roles (application permissions) that have been granted for that web API are included in the returned access token.
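As a sketch of what such a request body looks like, the following Python snippet builds the form-encoded parameters of a client credentials request. The client ID and secret are placeholders, not real credentials:

```python
from urllib.parse import urlencode

# Placeholder values for illustration only.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": "00000000-0000-0000-0000-000000000000",
    "client_secret": "placeholder-secret",
    # Must be {resource}/.default -- individual app roles can't be
    # requested in a client credentials request.
    "scope": "https://graph.microsoft.com/.default",
})

# POST this body to the token endpoint with
# Content-Type: application/x-www-form-urlencoded
print(body)
```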
-
-To grant access to the app roles you define, including granting admin consent for the application, see [Configure a client application to access a web API](quickstart-configure-app-access-web-apis.md).
-
-### Trailing slash and .default
-
-Some resource URIs have a trailing forward slash, for example, `https://contoso.com/` as opposed to `https://contoso.com`. The trailing slash can cause problems with token validation. Problems occur primarily when a token is requested for Azure Resource Manager (`https://management.azure.com/`). In this case, a trailing slash on the resource URI means the slash must be present when the token is requested. So when you request a token for `https://management.azure.com/` and use `.default`, you must request `https://management.azure.com//.default` (notice the double slash!). In general, if you verify that the token is being issued, and if the token is being rejected by the API that should accept it, consider adding a second forward slash and trying again.
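A small illustrative helper (our own, not part of any SDK) that applies this rule by preserving the resource URI exactly as given, including any trailing slash:

```python
def default_scope(resource_uri: str) -> str:
    """Append '/.default' to a resource URI, keeping the URI intact.

    A trailing slash on the resource URI is preserved, which is what
    produces the double slash for Azure Resource Manager.
    """
    return resource_uri + "/.default"

print(default_scope("https://contoso.com"))            # https://contoso.com/.default
print(default_scope("https://management.azure.com/"))  # https://management.azure.com//.default
```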
-
-## Troubleshooting permissions and consent
-
-For troubleshooting steps, see [Unexpected error when performing consent to an application](../manage-apps/application-sign-in-unexpected-user-consent-error.md).
-
-## Next steps
-
-* [ID tokens in the Microsoft identity platform](id-tokens.md)
-* [Access tokens in the Microsoft identity platform](access-tokens.md)
active-directory Workload Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identities-overview.md
Previously updated : 12/06/2021 Last updated : 11/02/2022
active-directory 10 Secure Local Guest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/10-secure-local-guest.md
+
+ Title: Convert local guests into Azure AD B2B guest accounts
+description: Learn how to convert local guests into Azure AD B2B guest accounts
++++ Last updated : 11/03/2022++++++++
+# Convert local guests into Azure Active Directory B2B guest accounts
+
+Azure Active Directory B2B (Azure AD B2B) allows external users to collaborate using their own identities. However, it isn't uncommon for organizations to issue local usernames and passwords to external users. This approach isn't recommended: the bring-your-own-identity (BYOI) capabilities provided
+by Azure AD B2B offer better security, lower cost, and reduced
+complexity when compared to local account creation. Learn more about
+[securing external access to resources.](https://learn.microsoft.com/azure/active-directory/fundamentals/secure-external-access-resources)
+
+If your organization currently issues local credentials that external users have to manage and would like to migrate to using Azure AD B2B instead, this document provides a guide to make the transition as seamless as possible.
+
+## Identify external-facing applications
+
+Before migrating local accounts to Azure AD B2B, admins should understand what applications and workloads these external users need to access. For example, if external users need access to an application that is hosted on-premises, admins will need to validate that the application is integrated with Azure AD and that a provisioning process is implemented to provision the user from Azure AD to the application.
+The existence and use of on-premises applications could be a reason why local accounts are created in the first place. Learn more about
+[provisioning B2B guests to on-premises
+applications.](https://learn.microsoft.com/azure/active-directory/external-identities/hybrid-cloud-to-on-premises)
+
+All external-facing applications should have single sign-on (SSO) and provisioning integrated with Azure AD for the best end-user experience.
+
+## Identify local guest accounts
+
+Admins will need to identify which accounts should be migrated to Azure AD B2B. External identities in Active Directory should be easily identifiable, which can be done with an attribute-value pair. For example, making ExtensionAttribute15 = `External` for all external users. If these users are being provisioned via Azure AD Connect or Cloud Sync, admins can optionally configure these synced external users
+to have the `UserType` attribute set to `Guest`. If these users are being
+provisioned as cloud-only accounts, admins can directly modify the
+users' attributes. What is most important is being able to identify the
+users who you want to convert to B2B.
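As a hypothetical illustration of this identification step (the attribute name and `External` value follow the example above; they're a convention, not a requirement), filtering a user list by such an attribute-value pair might look like:

```python
# Hypothetical user records; in practice these would come from
# Active Directory or the Microsoft Graph API.
users = [
    {"upn": "v-jeff@contoso.com", "extensionAttribute15": "External"},
    {"upn": "alice@contoso.com", "extensionAttribute15": None},
]

# Select the accounts marked as external for B2B conversion.
external_users = [u for u in users if u["extensionAttribute15"] == "External"]
print([u["upn"] for u in external_users])  # ['v-jeff@contoso.com']
```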
+
+## Map local guest accounts to external identities
+
+Once you've identified which external user accounts you want to
+convert to Azure AD B2B, you need to identify the BYOI identities or external emails for each user. For example, admins will need to identify that the local account (v-Jeff@Contoso.com) is a user whose home identity/email address is Jeff@Fabrikam.com. How to identify the home identities is up to the organization, but some examples include:
+
+- Asking the external user's sponsor to provide the information.
+
+- Asking the external user to provide the information.
+
+- Referring to an internal database if this information is already known and stored by the organization.
+
+Once the mapping of each external local account to the BYOI identity is done, admins will need to add the external identity/email to the user.mail attribute on each local account.
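The mapping step can be sketched in Python as follows. The data shape is hypothetical; the real update to `user.mail` would go through AD tooling or the Microsoft Graph API:

```python
# Mapping of local account UPN -> home identity/email, gathered from
# sponsors, the users themselves, or an internal database.
home_identity = {"v-Jeff@Contoso.com": "Jeff@Fabrikam.com"}

local_accounts = [{"upn": "v-Jeff@Contoso.com", "mail": None}]

# Populate the mail attribute with the mapped external identity.
for account in local_accounts:
    if account["upn"] in home_identity:
        account["mail"] = home_identity[account["upn"]]

print(local_accounts[0]["mail"])  # Jeff@Fabrikam.com
```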
+
+## End user communications
+
+External users should be notified that the migration will be taking place and when it will happen. Ensure you communicate the expectation that external users will stop using their existing password and post-migration will authenticate with their own home/corporate credentials going forward. Communications can include email campaigns, posters, and announcements.
+
+## Migrate local guest accounts to Azure AD B2B
+
+Once the local accounts have their user.mail attributes populated with the external identity/email that they're mapped to, admins can [convert the local accounts to Azure AD B2B by inviting the local accounts.](https://learn.microsoft.com/azure/active-directory/external-identities/invite-internal-users)
+This can be done in the UX or programmatically via PowerShell or the Microsoft Graph API. Once complete, the users will no longer
+authenticate with their local password, but will instead authenticate with their home identity/email that was populated in the user.mail attribute. You've successfully migrated to Azure AD B2B.
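As a sketch of the programmatic path, the request body for a Microsoft Graph invitation bound to an existing internal user can be assembled like this. The object ID and URLs are placeholders, and the linked article above is the authoritative reference for the request shape:

```python
import json

# Placeholder values for illustration.
invitation = {
    "invitedUserEmailAddress": "Jeff@Fabrikam.com",  # the populated user.mail value
    "inviteRedirectUrl": "https://myapps.microsoft.com",
    "sendInvitationMessage": False,
    # Bind the invitation to the existing local account's object ID.
    "invitedUser": {"id": "00000000-0000-0000-0000-000000000000"},
}

# POST this JSON to https://graph.microsoft.com/v1.0/invitations
print(json.dumps(invitation, indent=2))
```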
+
+## Post-migration considerations
+
+If local accounts for external users were being synced from on-premises, admins should take steps to reduce their on-premises footprint and use cloud-native B2B guest accounts moving forward. Some possible actions can include:
+
+- Transition existing local accounts for external users to Azure AD B2B and stop creating local accounts. Post-migration, admins should invite external users natively in Azure AD.
+
+- Randomize the passwords of existing local accounts for external users to ensure they can't authenticate locally to on-premises resources. This increases security by ensuring that authentication and the user lifecycle are tied to the external user's home identity.
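For illustration, one way to generate a high-entropy throwaway password in Python is shown below. The actual reset would be performed with AD tooling; this only sketches producing a random value no one is expected to remember:

```python
import secrets
import string

def random_password(length: int = 32) -> str:
    """Generate a cryptographically strong throwaway password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = random_password()
print(len(pw))  # 32
```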
+
+## Next steps
+
+See the following articles on securing external access to resources. We recommend you take the actions in the listed order.
+
+1. [Determine your desired security posture for external access](1-secure-access-posture.md)
+1. [Discover your current state](2-secure-access-current-state.md)
+1. [Create a governance plan](3-secure-access-plan.md)
+1. [Use groups for security](4-secure-access-groups.md)
+1. [Transition to Azure AD B2B](5-secure-access-b2b.md)
+1. [Secure access with Entitlement Management](6-secure-access-entitlement-managment.md)
+1. [Secure access with Conditional Access policies](7-secure-access-conditional-access.md)
+1. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
+1. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+1. [Secure local guest accounts](10-secure-local-guest.md) (You're here)
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Previously updated : 09/13/2022 Last updated : 11/03/2022
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md) 9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)+
+10. [Secure local guest accounts](10-secure-local-guest.md)
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
To learn how to verify or turn on this feature, see [Sync userPrincipalName upda
We recommend that you roll over the Kerberos decryption key at least every 30 days to align with the way that Active Directory domain members submit password changes. There is no associated device attached to the AZUREADSSO computer account object, so you must perform the rollover manually.
-See FAQ [How do I roll over the Kerberos decryption key of the AZUREADSSO computer account?](how-to-connect-sso.md).
+See FAQ [How do I roll over the Kerberos decryption key of the AZUREADSSO computer account?](how-to-connect-sso-faq.yml#how-can-i-roll-over-the-kerberos-decryption-key-of-the--azureadsso--computer-account-).
### Monitoring and logging
active-directory Admin Consent Workflow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/admin-consent-workflow-overview.md
- Previously updated : 06/10/2022+ Last updated : 11/02/2022
active-directory Disable User Sign In Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md
Title: Disable how a how a user signs in
+ Title: Disable user sign-in for application
description: How to disable an enterprise application so that no users may sign in to it in Azure Active Directory
Last updated 09/06/2022
-#customer intent: As an admin, I want to disable the way a user signs in for an application so that no user can sign in to it in Azure Active Directory.
+#customer intent: As an admin, I want to disable user sign-in for an application so that no user can sign in to it in Azure Active Directory.
# Disable user sign-in for an application There may be situations while configuring or managing an application where you don't want tokens to be issued for an application. Or, you may want to preemptively block an application that you do not want your employees to try to access. To accomplish this, you can disable user sign-in for the application, which will prevent all tokens from being issued for that application.
-In this article, you will learn how to disable how a user signs in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you are looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
+In this article, you will learn how to prevent users from signing in to an application in Azure Active Directory through both the Azure portal and PowerShell. If you are looking for how to block specific users from accessing an application, use [user or group assignment](./assign-user-or-group-access-portal.md).
## Prerequisites
-To disable how a user signs in, you need:
+To disable user sign-in, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
active-directory Restore Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md
Previously updated : 07/28/2022 Last updated : 11/02/2022
To recover your enterprise application with its previous configurations, first d
1. To view the recently deleted enterprise application, run the following command: ```powershell
- Get-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
- ```
+ Get-AzureADMSDeletedDirectoryObject -Id <id>
+ ```
+
+Replace id with the object ID of the service principal that you want to restore.
+
:::zone-end :::zone pivot="ms-powershell"
To recover your enterprise application with its previous configurations, first d
1. To view the recently deleted enterprise applications, run the following command: ```powershell
- Get-MgDirectoryDeletedItem -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Get-MgDirectoryDeletedItem -DirectoryObjectId <id>
```
+Replace id with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-graph"
To get the list of deleted enterprise applications in your tenant, run the follo
```http GET https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal ```
-Record the ID of the enterprise application you want to restore.
+From the list of deleted service principals generated, record the ID of the enterprise application you want to restore.
+
+Alternatively, if you want to get the specific enterprise application that was deleted, fetch the deleted service principal and filter the results by the client's application ID (appId) property using the following syntax:
+
+`https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?$filter=appId eq '{appId}'`. Once you've retrieved the object ID of the deleted service principal, proceed to restore it.
:::zone-end
Record the ID of the enterprise application you want to restore.
```powershell
- Restore-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Restore-AzureADMSDeletedDirectoryObject -Id <id>
```+
+Replace id with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-powershell"
Record the ID of the enterprise application you want to restore.
1. To restore the enterprise application, run the following command: ```powershell
- Restore-MgDirectoryObject -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Restore-MgDirectoryObject -DirectoryObjectId <id>
```+
+Replace id with the object ID of the service principal that you want to restore.
+ :::zone-end :::zone pivot="ms-graph"
Record the ID of the enterprise application you want to restore.
```http POST https://graph.microsoft.com/v1.0/directory/deletedItems/{id}/restore ```+
+Replace id with the object ID of the service principal that you want to restore.
+ :::zone-end ## Permanently delete an enterprise application
Record the ID of the enterprise application you want to restore.
To permanently delete a soft deleted enterprise application, run the following command: ```powershell
-Remove-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b'
+Remove-AzureADMSDeletedDirectoryObject -Id <id>
``` :::zone-end
Remove-AzureADMSDeletedDirectoryObject -Id 'd4142c52-179b-4d31-b5b9-08940873507b
1. To permanently delete the soft deleted enterprise application, run the following command: ```powershell
- Remove-MgDirectoryDeletedItem -DirectoryObjectId 'd4142c52-179b-4d31-b5b9-08940873507b'
+ Remove-MgDirectoryDeletedItem -DirectoryObjectId <id>
``` :::zone-end
active-directory Overview Flagged Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md
Title: What are flagged sign-ins in Azure Active Directory? description: Provides a general overview of flagged sign-ins in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
As an IT admin, when a user failed to sign-in, you want to resolve the issue as
This article gives you an overview of a feature that significantly improves the time it takes to resolve user sign-in problems by making the related problems easy to find. ---
-## What it is
+## What are flagged sign-ins?
Azure AD sign-in events are critical to understanding what went right or wrong with user sign-ins and the authentication configuration in a tenant. However, Azure AD processes over 8 billion authentications a day, which can result in so many sign-in events that admins may find it difficult to locate the ones that matter. In other words, the signal from users who need assistance can get lost in the sheer volume of sign-in events.
-Flagged Sign-ins is a feature intended to increase the signal to noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with and, for admins and help desk workers, make finding the right events faster and more efficient. Flagged Sign-in events contain the same information as other sign-in events contain with one addition: they also indicate that a user flagged the event for review by admins.
+Flagged Sign-ins is a feature intended to increase the signal-to-noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with. Admins and help desk workers also benefit from finding the right events more efficiently. Flagged Sign-in events contain the same information as other sign-in events, with one addition: they also indicate that a user flagged the event for review by admins.
-Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as "Flagged for Review" in the Azure AD Reporting blade for Sign-ins.
+Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as "Flagged for Review" in the Azure AD sign-ins log.
In summary, you can use flagged sign-ins to:
Flagged sign-ins gives you the ability to enable flagging when signing in using
### User: How to flag an error 1. The user receives an error during sign-in.
-2. The user clicks **View details** in the error page.
-3. In **Troubleshooting details**, click **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled.
+2. The user selects **View details** in the error page.
+3. In **Troubleshooting details**, select **Enable Flagging**. The text changes to **Disable Flagging**. Flagging is now enabled.
4. Close the browser window.
-5. Open a new browser window (in the same browser application) and attempt the same sign in that failed.
+5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed.
6. Reproduce the sign-in error that was seen before.
-After enabling flagging, the same browser application and client must be used or the events will not be flagged.
+With flagging enabled, the same browser application and client must be used or the events won't be flagged.
### Admin: Find flagged events in reports
-1. In the Azure AD portal, select **Sign-in logs** in the left-hand pane.
-2. Click **Add Filters**.
-3. In the filter menu titled **Pick a field**, select **Flagged for review**, and click **Apply**.
-4. All events that were flagged by users are shown.
-5. If needed, apply additional filters to refine the event view.
-6. Select the event to review what happened.
+1. In the Azure AD portal, go to **Sign-in logs** > **Add Filters**.
+1. From the **Pick a field** menu, select **Flagged for review** and **Apply**.
+1. All events that were flagged by users are shown.
+1. If needed, apply more filters to refine the event view.
+1. Select the event to review what happened.
### Admin or Developer: Find flagged events using MS Graph
You can find flagged sign-ins with a filtered query using the sign-ins reporting
Show all Flagged Sign-ins: `https://graph.microsoft.com/beta/auditLogs/signIns?&$filter=flaggedforReview eq true`
-Flagged Sign-ins query for specific user by UPN (e.g.: user@contoso.com):
+Flagged Sign-ins query for specific user by UPN (for example: user@contoso.com):
`https://graph.microsoft.com/beta/auditLogs/signIns?&$filter=flaggedforReview eq true and userPrincipalname eq 'user@contoso.com'` Flagged Sign-ins query for specific user and date greater than:
Any user signing in to Azure AD via a web page can flag sign-ins for review. Me
Reviewing flagged sign-in events requires permissions to read the Sign-in Report events in the Azure AD portal. For more information, see [who can access it?](concept-sign-ins.md#who-can-access-it)
-To flag sign-in failures, you don't need additional permissions.
+To flag sign-in failures, you don't need extra permissions.
## What you should know
While the names are similar, **flagged sign-ins** and **risky sign-ins** are dif
## Next steps - [Sign-in logs in Azure Active Directory](concept-sign-ins.md)-- [Sign in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
+- [Sign-in diagnostics for Azure AD scenarios](concept-sign-in-diagnostics-scenarios.md)
active-directory Overview Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md
Title: What is Azure Active Directory monitoring? | Microsoft Docs description: Provides a general overview of Azure Active Directory monitoring. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant.
Currently, you can route the logs to:
## Licensing and prerequisites for Azure AD reporting and monitoring
-You'll need an Azure AD premium license to access the Azure AD sign in logs.
+You'll need an Azure AD premium license to access the Azure AD sign-in logs.
For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
active-directory Overview Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md
Title: What are Azure Active Directory reports? | Microsoft Docs description: Provides a general overview of Azure Active Directory reports. -+ - Previously updated : 08/22/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment.
The [audit logs report](concept-audit-logs.md) provides you with records of syst
#### What Azure AD license do you need to access the audit logs report?
-The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more details, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
+The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison as per [different types of licenses](../fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) can be seen on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad).
### Sign-ins report
To access the sign-ins activity report, your tenant must have an Azure AD Premiu
## Programmatic access
-In addition to the user interface, Azure AD also provides you with [programmatic access](concept-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from a variety of programming languages and tools.
+In addition to the user interface, Azure AD also provides you with [programmatic access](concept-reporting-api.md) to the reports data, through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
## Next steps
active-directory Overview Service Health Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md
Title: What are Service Health notifications in Azure Active Directory? | Microsoft Docs description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+
Most of the built-in admin roles will have access to see these notifications. Fo
## What you should know
-Service Health events allow the addition of alerts and notifications to be applied to subscription events. Currently, this isn't yet supported with tenant events, but will be coming soon.
+Service Health events allow alerts and notifications to be added to subscription events. This feature isn't yet supported with tenant events, but support is coming soon.
active-directory Overview Sign In Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-sign-in-diagnostics.md
Title: What is the sign-in diagnostic for Azure Active Directory? description: Provides a general overview of the sign-in diagnostic in Azure Active Directory. -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want a tool that gives me the right level of insights into the sign-in activities in my system so that I can easily diagnose and solve problems when they occur.
This article gives you an overview of what the diagnostic is and how you can use
In Azure AD, sign-in attempts are controlled by: -- **Who** - The user performing a sign in attempt.
+- **Who** - The user performing a sign-in attempt.
- **How** - How a sign-in attempt was performed. For example, you can configure conditional access policies that enable administrators to configure all aspects of the tenant when they sign in from the corporate network. But the same user might be blocked when they sign into the same account from an untrusted network.
To start and complete the diagnostic process, you need to:
The diagnostic allows two methods to find events to investigate: - Sign-in failures users have [flagged for assistance](overview-flagged-sign-ins.md). -- Search for specific events by the user and additional criteria.
+- Search for specific events by the user and other criteria.
-Flagged sign-ins are automatically presented in a list of up to 100. You can run a diagnostics on an event immediately by clicking it.
+Flagged sign-ins are automatically presented in a list of up to 100. You can run diagnostics on an event immediately by clicking it.
You can search a specific event by selecting the search tab even when flagged sign-ins are present. When searching for specific events, you can filter based on the following options:
You can change the content displayed in the columns based on your preference. Ex
### Take action
-For the selected sign-in event, you get a diagnostic results. Read through the results to identify action that you can take to fix the problem. These results add recommended steps and shed light on relevant information such as the related policies, sign-in details, and supportive documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
+For the selected sign-in event, you get diagnostic results. Read through the results to identify actions that you can take to fix the problem. These results add recommended steps and shed light on relevant information such as the related policies, sign-in details, and supportive documentation. Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket.
![Screenshot showing the diagnostic results.](./media/overview-sign-in-diagnostics/diagnostic-results.png)
For the selected sign-in event, you get a diagnostic results. Read through the r
## How to access it
-To use the diagnostic, you must be signed into the tenant as a global admin or a global reader. If you do not have this level of access, use [Privileged Identity Management, PIM](../privileged-identity-management/pim-resource-roles-activate-your-roles.md), to elevate your access to global admin/reader within the tenant. This will allow you to have temporary access to the diagnostic.
+To use the diagnostic, you must be signed into the tenant as a Global Administrator or a Global Reader.
With the correct access level, you can find the diagnostic in various places:
With the correct access level, you can find the diagnostic in various places:
1. Open **Azure Active Directory** or **Azure AD Conditional Access**.
-2. From the main menu, click **Diagnose & Solve Problems**.
-
-3. Under the **Troubleshooters**, there is a sign-in diagnostic tile.
-
-4. Click **Troubleshoot** button.
-
-
+1. From the main menu, select **Diagnose & Solve Problems**.
+1. From the **Troubleshooters** section, select the **Troubleshoot** button from the sign-in diagnostic tile.
**Option B**: Sign-in Events
With the correct access level, you can find the diagnostic in various places:
2. On the main menu, in the **Monitoring** section, select **Sign-ins**.
-3. From the list of sign-ins, select a sign in with a **Failure** status. You can filter your list by Status to make it easier to find failed sign-ins.
+3. From the list of sign-ins, select a sign-in with a **Failure** status. You can filter your list by Status to make it easier to find failed sign-ins.
-4. The **Activity Details: Sign-ins** tab will open for the selected sign-in. Click on dotted icon to view more menu icons. Select the **Troubleshooting and support** tab.
+4. The **Activity Details: Sign-ins** tab will open for the selected sign-in. Select the dotted icon to view more menu icons. Select the **Troubleshooting and support** tab.
-5. Click the link to **Launch the Sign-in Diagnostic**.
+5. Select the link to **Launch the Sign-in Diagnostic**.
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Title: Plan reports & monitoring deployment - Azure AD description: Describes how to plan and execute implementation of reporting and monitoring. -+ Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an Azure AD administrator, I want to monitor logs and report on access to increase security
Your Azure Active Directory (Azure AD) reporting and monitoring solution depends
### Benefits of Azure AD reporting and monitoring
-Azure AD reporting provides a comprehensive view and logs of Azure AD activity in your environment, including sign in events, audit events, and changes to your directory.
+Azure AD reporting provides a comprehensive view and logs of Azure AD activity in your environment, including sign-in events, audit events, and changes to your directory.
The provided data enables you to:
With Azure AD monitoring, you can route logs to:
### Licensing and prerequisites for Azure AD reporting and monitoring
-You'll need an Azure AD premium license to access the Azure AD sign in logs.
+You'll need an Azure AD premium license to access the Azure AD sign-in logs.
For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
In this project, you'll define the audiences that will consume and monitor repor
### Engage the right stakeholders
-When technology projects fail, they typically do so due to mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md). Also ensure that stakeholder roles in the project are well understood by documenting the stakeholders and their project input and accountabilities.
+When technology projects fail, they typically do so due to mismatched expectations on effect, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md). Also ensure that stakeholder roles in the project are well understood by documenting the stakeholders and their project input and accountabilities.
### Plan communications
Reporting and monitoring are used to meet your business requirements, gain insig
|Area |Description | |-|-|
-|Retention| **Log retention of more than 30 days**. ΓÇÄDue to legal or business requirements it is required to store audit logs and sign in logs of Azure AD longer than 30 days. |
+|Retention| **Log retention of more than 30 days**. Due to legal or business requirements, it's required to store Azure AD audit logs and sign-in logs for longer than 30 days. |
|Analytics| **The logs need to be searchable**. The stored logs need to be searchable with analytic tools. | | Operational Insights| **Insights for various teams**. The need to give access for different users to gain operational insights such as application usage, sign-in errors, self-service usage, trends, etc. | | Security Insights| **Insights for various teams**. The need to give access for different users to gain security insights such as application usage, sign-in errors, self-service usage, trends, etc. |
-| Integration in SIEM systems | **SIEM integration**. ΓÇÄThe need to integrate and stream Azure AD sign in logs and audit logs to existing SIEM systems. |
+| Integration in SIEM systems | **SIEM integration**. The need to integrate and stream Azure AD sign-in logs and audit logs to existing SIEM systems. |
### Choose a monitoring solution architecture
Learn how to [route data to your storage account](./quickstart-azure-monitor-rou
#### Send logs to Azure Monitor logs
-[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) consolidate monitoring data from different sources. It also provides a query language and analytics engine that gives you insights into the operation of your applications and use of resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor, and alert on collected data. Use this method when you don't have an existing SIEM solution that you want to send your data to directly but do want queries and analysis. Once your data is in Azure Monitor logs, you can then send it to event hub and from there to a SIEM if you want to.
+[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) consolidate monitoring data from different sources. It also provides a query language and analytics engine that gives you insights into the operation of your applications and use of resources. By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor, and alert on collected data. Use this method when you don't have an existing SIEM solution that you want to send your data to directly, but do want queries and analysis. Once your data is in Azure Monitor logs, you can then send it to an event hub, and from there to a SIEM if you want to.
Learn how to [send data to Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
-You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign in and audit events.
+You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-in and audit events.
Learn how to [install and use log analytics views for Azure AD activity logs](./howto-install-use-log-analytics-views.md).
Depending on the decisions you have made earlier using the design guidance above
Consider implementing [Privileged Identity Management](../privileged-identity-management/pim-configure.md)
-Consider implementing [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md)
+Consider implementing [Azure role-based access control](../../role-based-access-control/overview.md)
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
Previously updated : 08/26/2022-- Last updated : 11/01/2022++
This section provides you with the steps to get information about your sign-in u
5. In the **Request query address bar**, type `https://graph.microsoft.com/beta/auditLogs/signIns?$top=100&$filter=userDisplayName eq 'Isabella Simonsen'`
-6. Click **Run query**.
+6. Select **Run query**.
Review the outcome of your query.
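Outside Graph Explorer, the same query can be sent from any HTTP client. Below is a minimal Python sketch; the helper names are made up for illustration, a real call needs a valid OAuth access token (not shown being acquired here), and note that the `$filter` value must be URL-encoded:

```python
import urllib.parse
import urllib.request

GRAPH_SIGNINS = "https://graph.microsoft.com/beta/auditLogs/signIns"

def build_signins_url(display_name, top=100):
    """Build the sign-ins query URL with an encoded $filter clause."""
    params = {
        "$top": str(top),
        "$filter": "userDisplayName eq '{}'".format(display_name),
    }
    return GRAPH_SIGNINS + "?" + urllib.parse.urlencode(params)

def fetch_signins(url, access_token):
    """Call Microsoft Graph; requires a valid OAuth bearer token."""
    req = urllib.request.Request(
        url, headers={"Authorization": "Bearer " + access_token})
    with urllib.request.urlopen(req) as resp:  # network call
        return resp.read()

url = build_signins_url("Isabella Simonsen")
# fetch_signins(url, access_token) would return the JSON response body
```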
active-directory Quickstart Analyze Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md
Previously updated : 08/26/2021-- Last updated : 11/01/2022++
The goal of this step is to create a record of a failed sign-in in the Azure AD
This section provides you with the steps to analyze a failed sign-in: -- **Filter sign-ins**: Remove all records that are not relevant to your analysis. For example, set a filter to display only the records of a specific user.-- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also lookup the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
+- **Filter sign-ins**: Remove all records that aren't relevant to your analysis. For example, set a filter to display only the records of a specific user.
+- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error.
**To review the failed sign-in:**
This section provides you with the steps to analyze a failed sign-in:
2. To list only records for Isabella Simonsen:
- a. In the toolbar, click **Add filters**.
+ a. In the toolbar, select **Add filters**.
![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
- b. In the **Pick a field** list, select **User**, and then click **Apply**.
+ b. In the **Pick a field** list, select **User**, and then select **Apply**.
- c. In the **Username** textbox, type **Isabella Simonsen**, and then click **Apply**.
+ c. In the **Username** textbox, type **Isabella Simonsen**, and then select **Apply**.
- d. In the toolbar, click **Refresh**.
+ d. In the toolbar, select **Refresh**.
-3. To analyze the issue, click **Troubleshooting and support**.
+3. To analyze the issue, select **Troubleshooting and support**.
![Add filter](./media/quickstart-analyze-sign-in/troubleshooting-and-support.png)
This section provides you with the steps to analyze a failed sign-in:
![Sign-in error code](./media/quickstart-analyze-sign-in/sign-in-error-code.png)
-5. Paste the error code into the textbox of the [sign-in error lookup tool](https://login.microsoftonline.com/error), and then click **Submit**.
+5. Paste the error code into the textbox of the [sign-in error lookup tool](https://login.microsoftonline.com/error), and then select **Submit**.
Review the outcome of the tool and determine whether it provides you with additional information. ![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png)
-## Additional tests
+## More tests
Now that you know how to find an entry in the sign-in log by name, you should also try to find the record using the following filters:
active-directory Quickstart Azure Monitor Route Logs To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md
Title: Tutorial - Archive directory logs to a storage account | Microsoft Docs description: Learn how to set up Azure Diagnostics to push Azure Active Directory logs to a storage account -+ - Previously updated : 08/26/2022- Last updated : 11/01/2022+ # Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period.
To use this feature, you need:
![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png)
-5. Once in the **Diagnostic setting** pane if you are creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
+5. Once in the **Diagnostic setting** pane, if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting.
6. Under **Destination Details**, select the **Archive to a storage account** check box.
-7. Select the Azure subscription in the **Subscription** drop down menu and storage account in the **Storage account** drop down menu that you want to route the logs to.
+7. Select the Azure subscription in the **Subscription** menu and storage account in the **Storage account** menu that you want to route the logs to.
8. Select all the relevant categories under **Category details**:
active-directory Quickstart Filter Audit Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-filter-audit-log.md
Previously updated : 08/26/2022-- Last updated : 11/01/2022++
This section provides you with the steps to filter your audit log.
2. To list only records for Isabella Simonsen:
- a. In the toolbar, click **Add filters**.
+ a. In the toolbar, select **Add filters**.
![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png)
- b. In the **Pick a field** list, select **Target**, and then click **Apply**
+ b. In the **Pick a field** list, select **Target**, and then select **Apply**.
- c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then click **Apply**.
+ c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then select **Apply**.
-3. Click the filtered item.
+3. Select the filtered item.
![Filtered items](./media/quickstart-filter-audit-log/audit-log-list.png)
active-directory Opentext Fax Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/opentext-fax-tutorial.md
Previously updated : 10/10/2022 Last updated : 10/28/2022
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure OpenText XM Fax and XM SendSecure SSO
-To configure single sign-on on **OpenText XM Fax and XM SendSecure** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [OpenText XM Fax and XM SendSecure support team](mailto:support@opentext.com). They set this setting to have the SAML SSO connection set properly on both sides.
+1. Sign in to your XM Cloud account using a web browser.
+
+1. From the main menu of your Web Portal, select **enterprise_account -> Enterprise Settings**.
+
+1. Go to the **Single Sign-On** section and select **SAML 2.0**.
+
+1. Provide the following required information:
+
+ a. In the **Sign In URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ b. Open the downloaded **Certificate (Base64)** from the Azure portal in Notepad and paste its content into the **X.509 Signing Certificate** textbox.
+
+ c. Select **Save**.
+
+> [!NOTE]
+> Keep the fail-safe URL (`https://login.[domain]/[account]/no-sso`) provided at the bottom of the SSO configuration section. It allows you to sign in with your XM Cloud account credentials if you lock yourself out after SSO activation.
### Create OpenText XM Fax and XM SendSecure test user
advisor Advisor How To Performance Resize High Usage Vm Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-performance-resize-high-usage-vm-recommendations.md
+++
+ Title: Improve the performance of highly used VMs using Azure Advisor
+description: Use Azure Advisor to improve the performance of your Azure virtual machines with consistent high utilization.
+ Last updated : 10/27/2022++
+# Improve the performance of highly used VMs using Azure Advisor
+
+Azure Advisor helps you improve the speed and responsiveness of your business-critical applications. You can get performance recommendations from the **Performance** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Performance** tab.
+
+## Optimize virtual machine (VM) performance by right-sizing highly utilized instances
+
+You can improve the quality of your workload and prevent many performance-related issues (for example, throttling or high latency) by regularly assessing your [performance efficiency](/azure/architecture/framework/scalability/overview). Performance efficiency is defined by the [Azure Well-Architected Framework](/azure/architecture/framework/) as the ability of your workload to adapt to changes in load. Performance efficiency is one of the five pillars of architectural excellence on Azure.
+
+Unless high utilization is by design, we recommend keeping your application's usage well below your virtual machine's size limits so that it operates better and can more easily accommodate changes.
+
+Advisor aggregates various metrics over a minimum of 7 days, identifies virtual machines with consistent high utilization across those metrics, and finds better sizes (SKUs) for improved performance. Finally, Advisor examines capacity signals in Azure to frequently refresh the recommended SKUs, ensuring that they are available for deployment in the region.
+
+### Resize SKU recommendations
+
+Advisor recommends resizing virtual machines when usage is consistently high (above predefined thresholds) relative to the running virtual machine's size limits.
+
+- The recommendation algorithm evaluates **CPU**, **Memory**, **VM Cached IOPS Consumed Percentage**, and **VM Uncached Bandwidth Consumed Percentage** usage metrics
+- The observation period is the past 7 days from the day of the recommendation
+- Metrics are sampled every 30 seconds, aggregated to 1-minute averages, and then further aggregated to 30-minute averages (by averaging the 1-minute values)
+- A SKU upgrade for virtual machines is decided given the following criteria:
+  - For each metric, we create a feature from the P50 (median) of its 30-minute averages aggregated over the observation period. A virtual machine is then identified as a candidate for a resize if:
+    * _Both_ the `CPU` and `Memory` features are >= *90%* of the current SKU's limits
+    * Otherwise, _either_
+      * The `VM Cached IOPS` feature is >= *95%* of the current SKU's limits, and the current SKU's max local disk IOPS is >= its network disk IOPS, _or_
+      * The `VM Uncached Bandwidth` feature is >= *95%* of the current SKU's limits, and the current SKU's max network disk throttle limits are >= its local disk throttle units
+- We ensure the following:
+  - The current workload utilization will be better on the new SKU, given that it has higher limits and better performance guarantees
+ - The new SKU has the same Accelerated Networking and Premium Storage capabilities
+ - The new SKU is supported and ready for deployment in the same region as the running virtual machine
++
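The threshold logic above can be sketched as follows. This is a hypothetical simplification for illustration only; the actual Advisor evaluation runs server-side, and these function names are made up:

```python
from statistics import median

def thirty_min_average(samples_30s):
    """Aggregate 30-second samples to 1-minute averages, then average those
    1-minute values into a single 30-minute average."""
    one_min = [sum(samples_30s[i:i + 2]) / 2
               for i in range(0, len(samples_30s), 2)]
    return sum(one_min) / len(one_min)

def p50_feature(thirty_min_averages, sku_limit):
    """P50 (median) of the 30-minute averages over the observation period,
    expressed as a fraction of the current SKU's limit."""
    return median(thirty_min_averages) / sku_limit

def is_resize_candidate(cpu, memory, cached_iops, uncached_bw,
                        local_iops_ge_network, network_throttle_ge_local):
    """Apply the upgrade criteria. The first four arguments are the P50
    features for each metric, as fractions of the current SKU's limits."""
    if cpu >= 0.90 and memory >= 0.90:                     # both CPU and Memory >= 90%
        return True
    if cached_iops >= 0.95 and local_iops_ge_network:      # cached IOPS path
        return True
    if uncached_bw >= 0.95 and network_throttle_ge_local:  # uncached bandwidth path
        return True
    return False
```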
+In some cases, a recommendation can't be adopted or might not be applicable. Common scenarios include (there may be other cases):
+- The virtual machine is short-lived
+- The current virtual machine has already been provisioned to accommodate upcoming traffic
+- Specific testing is being done using the current SKU, even if it's not utilized efficiently
+- There's a need to keep the virtual machine as-is
+
+In such cases, use the Dismiss or Postpone options associated with the recommendation.
+
+We're constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
+
+## Next steps
+
+To learn more about Advisor recommendations and best practices, see:
+* [Get started with Advisor](advisor-get-started.md)
+* [Introduction to Advisor](advisor-overview.md)
+* [Advisor score](azure-advisor-score.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
+* [Advisor security recommendations](advisor-security-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
+* [The Microsoft Azure Well-Architected Framework](/azure/architecture/framework/)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
Ultra disk is available in the same region as your database workload. Ultra disk
Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](../virtual-machines/disks-enable-ultra-ssd.md?tabs=azure-portal).
+### Upgrade the size of your virtual machines close to resource exhaustion
+
+We analyzed data for the past 7 days and identified virtual machines (VMs) with high utilization across different metrics (that is, CPU, memory, and VM IO). Those VMs may experience performance issues because they're nearing or at their SKU's limits. Consider upgrading their SKU to improve performance.
+
+Learn more about [Virtual machine - Improve the performance of highly used VMs using Azure Advisor](https://aka.ms/aa_resizehighusagevmrec_learnmore)
+ ## Kubernetes ### Unsupported Kubernetes version is detected
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
If you navigated away from the **Deployment is in progress** page, the following
1. In the left pane, select **Outputs**. 1. Using the same copy technique as with the preceding values, save aside the values for the following outputs:
- * **appDeploymentTemplateYamlEncoded**
* **cmdToConnectToCluster**
+
- These values will be used later in this article. Note that several other useful commands are listed in the outputs.
+These values will be used later in this article. Note that several other useful commands are listed in the outputs.
## Create an Azure SQL Database
The directories *java*, *resources*, and *webapp* contain the source code of the
In the *aks* directory, we placed two deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication.yaml* is used to deploy the application image.
-In the *docker* directory, we placed four Dockerfiles. *Dockerfile-local* is used for local debugging, and *Dockerfile* is used to build the image for an AKS deployment. These two files work with Open Liberty. *Dockerfile-wlp-local* and *Dockerfile-wlp* are also used for local debugging and to build the image for an AKS deployment respectively, but instead work with WebSphere Liberty.
- In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
-### Acquire necessary variables from AKS deployment
-
-After the offer is successfully deployed, an AKS cluster will be generated automatically. The AKS cluster is configured to connect to a generated ACR instance. Before we get started with the application, we need to extract the namespace configured for AKS.
-
-1. Run the following command to print the current deployment file, using the `appDeploymentTemplateYamlEncoded` you saved above. The output contains all the variables we need.
-
- ```bash
- echo <appDeploymentTemplateYamlEncoded> | base64 -d
- ```
-
-1. Save aside the `metadata.namespace` from this yaml output for later use in this article.
- ### Build the project
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment.
-
-Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code. This allows the sample source code to be easier to use in a wider variety of contexts.
+Now that you've gathered the necessary properties, you can build the application. The POM file for the project reads many properties from the environment. The reason for this parameterization is to avoid having to hard-code values such as database server names, passwords, and other identifiers into the example source code. This makes the sample source code easier to use in a wider variety of contexts. These variables are also used to populate the `JavaEECafeDB` properties in *server.xml* and in the yaml files located in *src/main/aks*.
```bash cd <path-to-your-repo>/java-app
export REGISTRY_NAME=<Azure_Container_Registry_Name>
export USER_NAME=<Azure_Container_Registry_Username> export PASSWORD=<Azure_Container_Registry_Password> export DB_SERVER_NAME=<Server name>.database.windows.net
-export DB_PORT_NUMBER=1433
export DB_NAME=<Database name> export DB_USER=<Server admin login>@<Server name> export DB_PASSWORD=<Server admin password>
-export NAMESPACE=<metadata.namespace>
mvn clean install ```
-### Test your project locally
+### (Optional) Test your project locally
-Use the `liberty:devc` command to run and test the project locally before deploying to Azure. For more information on `liberty:devc`, see the [Liberty Plugin documentation](https://github.com/OpenLiberty/ci.maven/blob/main/docs/dev.md#devc-container-mode).
-In the sample application, we've prepared *Dockerfile-local* and *Dockerfile-wlp-local* for use with `liberty:devc`.
+Use your local IDE, or the `liberty:run` command, to run and test the project locally before deploying to Azure.
-1. Start your local docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system.
+1. Start your local Docker environment if you haven't done so already. The instructions for doing this vary depending on the host operating system. `liberty:run` also uses the environment variables defined in the previous step.
-1. Start the application in `liberty:devc` mode
+1. Start the application in `liberty:run` mode
```bash cd <path-to-your-repo>/java-app
-
- # If you're running with Open Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-local
-
- # If you're running with WebSphere Liberty
- mvn liberty:devc -Ddb.server.name=${DB_SERVER_NAME} -Ddb.port.number=${DB_PORT_NUMBER} -Ddb.name=${DB_NAME} -Ddb.user=${DB_USER} -Ddb.password=${DB_PASSWORD} -Ddockerfile=target/Dockerfile-wlp-local
+ mvn liberty:run
```-
+
1. Verify the application works as expected. You should see a message similar to `[INFO] [AUDIT] CWWKZ0003I: The application javaee-cafe updated in 1.930 seconds.` in the command output if successful. Go to `http://localhost:9080/` in your browser and verify the application is accessible and all functions are working.
-1. Press `Ctrl+C` to stop `liberty:devc` mode.
+1. Press `Ctrl+C` to stop `liberty:run` mode.
### Build image for AKS deployment
-After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
+After successfully running the app in the Liberty Docker container, you can run the `docker build` command to build the image.
```bash
-cd <path-to-your-repo>/java-app
-
-# Fetch maven artifactId as image name, maven build version as image version
-export IMAGE_NAME=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.artifactId}' --non-recursive exec:exec)
-export IMAGE_VERSION=$(mvn -q -Dexec.executable=echo -Dexec.args='${project.version}' --non-recursive exec:exec)
cd <path-to-your-repo>/java-app/target

# If you are running with Open Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile .
+docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
# If you are running with WebSphere Liberty
-docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
+docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp .
```

### Upload image to ACR
docker build -t ${IMAGE_NAME}:${IMAGE_VERSION} --pull --file=Dockerfile-wlp .
Now, we upload the built image to the ACR created in the offer.

```bash
-docker tag ${IMAGE_NAME}:${IMAGE_VERSION} ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker tag javaee-cafe:v1 ${LOGIN_SERVER}/javaee-cafe:v1
docker login -u ${USER_NAME} -p ${PASSWORD} ${LOGIN_SERVER}
-docker push ${LOGIN_SERVER}/${IMAGE_NAME}:${IMAGE_VERSION}
+docker push ${LOGIN_SERVER}/javaee-cafe:v1
```

### Deploy and test the application
The following steps deploy and test the application.
Wait until all pods are restarted successfully using the following command.

```bash
- kubectl get pods -n $NAMESPACE --watch
+ kubectl get pods --watch
```

You should see output similar to the following to indicate that all the pods are running.
The following steps deploy and test the application.
1. Get endpoint of the deployed service

```bash
- kubectl get service -n $NAMESPACE
+ kubectl get service
```

1. Go to `http://EXTERNAL-IP` to test the application.
+
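As a sketch, the endpoint lookup can also be scripted rather than read from the table output. The service name and IP address below are placeholder assumptions:

```shell
# With a live cluster you could capture the endpoint non-interactively, e.g.:
#   EXTERNAL_IP=$(kubectl get service <service-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# For illustration, use a made-up address:
EXTERNAL_IP=20.30.40.50
echo "http://${EXTERNAL_IP}"
```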
## Clean up resources
az group delete --name <db-resource-group> --yes --no-wait
* [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
* [Open Liberty](https://openliberty.io/)
* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
-* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
+* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/)
aks Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-access.md
Title: Connect to Azure Kubernetes Service (AKS) cluster nodes
description: Learn how to connect to Azure Kubernetes Service (AKS) cluster nodes for troubleshooting and maintenance tasks. Previously updated : 10/20/2022 Last updated : 11/1/2022
# Connect to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access AKS nodes using SSH, including Windows Server nodes. You can also [connect to Windows Server nodes using remote desktop protocol (RDP) connections][aks-windows-rdp]. For security purposes, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you might need to access an AKS node. This access could be for maintenance, log collection, or troubleshooting operations. You can securely authenticate against AKS Linux and Windows nodes using SSH, and you can also [connect to Windows Server nodes using remote desktop protocol (RDP)][aks-windows-rdp]. For security reasons, the AKS nodes aren't exposed to the internet. To connect to the AKS nodes, you use `kubectl debug` or the private IP address.
-This article shows you how to create a connection to an AKS node.
+This article shows you how to create a connection to an AKS node and update the SSH key on an existing AKS cluster.
## Before you begin
When done, `exit` the SSH session, stop any port forwarding, and then `exit` the
kubectl delete pod node-debugger-aks-nodepool1-12345678-vmss000000-bkmmx
```
+## Update SSH key on an existing AKS cluster (preview)
+
+### Prerequisites
+* Before you start, ensure the Azure CLI is installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The aks-preview extension version 0.5.111 or later. To learn how to install an Azure CLI extension, see [How to install extensions][how-to-install-azure-extensions].
+
+> [!NOTE]
+> Updating the SSH key is supported on AKS clusters backed by Azure virtual machine scale sets.
+
+Use the [az aks update][az-aks-update] command to update the SSH key on the cluster. This operation will update the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value <new SSH key value or SSH key file>
+```
+
+Examples:
+In the following example, you specify the new SSH key value for the `--ssh-key-value` argument.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value 'ssh-rsa AAAAB3Nza-xxx'
+```
+
+In the following example, you specify an SSH key file.
+
+```azurecli
+az aks update --name myAKSCluster --resource-group MyResourceGroup --ssh-key-value .ssh/id_rsa.pub
+```
+
+> [!IMPORTANT]
+> During this operation, all virtual machine scale set instances are upgraded and re-imaged to use the new SSH key.
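If you need a fresh key pair to rotate in, one way to generate it locally is with `ssh-keygen`; the file path and comment below are examples, not required values:

```shell
# Generate a new 4096-bit RSA key pair with no passphrase (example path and comment).
ssh-keygen -t rsa -b 4096 -f ./aks_ssh_key -N "" -q -C "aks-ssh-rotation"

# The public half (./aks_ssh_key.pub) is what you pass to `--ssh-key-value`.
head -c 7 ./aks_ssh_key.pub
```

You would then pass `./aks_ssh_key.pub` to `az aks update --ssh-key-value`, as in the key-file example above.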
+## Next steps

If you need more troubleshooting data, you can [view the kubelet logs][view-kubelet-logs] or [view the Kubernetes master node logs][view-master-logs].
If you need more troubleshooting data, you can [view the kubelet logs][view-kube
[aks-windows-rdp]: rdp.md
[ssh-nix]: ../virtual-machines/linux/mac-create-ssh-keys.md
[ssh-windows]: ../virtual-machines/linux/ssh-from-windows.md
-[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
+[ssh-linux-kubectl-debug]: #create-an-interactive-shell-connection-to-a-linux-node
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[how-to-install-azure-extensions]: /cli/azure/azure-cli-extensions-overview#how-to-install-extensions
+
+
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser
description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 8/27/2022 Last updated : 11/01/2022
metadata:
## Clean up
-To remove an Azure AD pod-managed identity from your cluster, remove the sample application and the pod-managed identity from the cluster. Then remove the identity.
+To remove an Azure AD pod-managed identity from your cluster, remove the sample application and the pod-managed identity from the cluster. Then remove the identity and the role assignment of the cluster identity.
```bash
kubectl delete pod demo --namespace $POD_IDENTITY_NAMESPACE
az aks pod-identity delete --name ${POD_IDENTITY_NAME} --namespace ${POD_IDENTIT
az identity delete -g ${IDENTITY_RESOURCE_GROUP} -n ${IDENTITY_NAME}
```
+```azurecli
+az role assignment delete --role "Managed Identity Operator" --assignee "$IDENTITY_CLIENT_ID" --scope "$IDENTITY_RESOURCE_ID"
+```
+## Next steps

For more information on managed identities, see [Managed identities for Azure resources][az-managed-identities].
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 10/03/2022 Last updated : 11/01/2022

# Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
-This article shows you how to enable encryption at rest for your Kubernetes data in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to:
+This article shows you how to enable encryption at rest for your Kubernetes secrets in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to:
* Use a key in Key Vault for etcd encryption.
* Bring your own keys.
For more information on using the KMS plugin, see [Encrypting Secret Data at Res
* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

> [!WARNING]
-> KMS only supports Konnectivity and Vnet Integration.
+> KMS only supports Konnectivity and [API Server Vnet Integration][api-server-vnet-integration].
> You can use `kubectl get po -n kube-system` to verify that a konnectivity-agent-xxx pod is running. If one is, the AKS cluster is using Konnectivity. When using VNet integration, you can run `az aks show -g <resource-group> -n <cluster-name>` to verify that the setting `enableVnetIntegration` is set to **true**.

## Limitations
kubectl get secrets --all-namespaces -o json | kubectl replace -f -
[Enable-KMS-with-private-key-vault]: use-kms-etcd-encryption.md#enable-kms-with-private-key-vault
[changing-associated-key-vault-mode]: use-kms-etcd-encryption.md#update-key-vault-mode
[install-azure-cli]: /cli/azure/install-azure-cli
+[api-server-vnet-integration]: api-server-vnet-integration.md
app-service Configure Vnet Integration Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-vnet-integration-routing.md
az resource update --resource-group <group-name> --name <app-name> --resource-ty
We recommend that you use the site property to enable routing image pull traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_PULL_IMAGE_OVER_VNET` app setting with the value `true` can still be used, and you can enable routing through the virtual network with either setting.
-### Content storage
+### Content share
-Routing content storage over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allow traffic to port 443 and 445.
+Routing content share traffic over virtual network integration can be configured using the Azure CLI. In addition to enabling the feature, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic on ports 443 and 445.
```azurecli-interactive
-az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetContentStorageEnabled [true|false]
+az resource update --resource-group <group-name> --name <app-name> --resource-type "Microsoft.Web/sites" --properties.vnetContentShareEnabled [true|false]
```
-We recommend that you use the site property to enable content storage traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting.
+We recommend that you use the site property to enable content share traffic through the virtual network integration. Using the configuration setting allows you to audit the behavior with Azure Policy. The existing `WEBSITE_CONTENTOVERVNET` app setting with the value `1` can still be used, and you can enable routing through the virtual network with either setting.
## Next steps
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Learn [how to configure application routing](./configure-vnet-integration-routin
When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration.
-##### Content storage
+##### Content share
-Bringing your own storage for content in often used in Functions where [content storage](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
+Bringing your own storage for content is often used in Functions, where [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
-To route content storage traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content storage routing](./configure-vnet-integration-routing.md#content-storage).
+To route content share traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure content share routing](./configure-vnet-integration-routing.md#content-share).
In addition to configuring the routing, you must also ensure that any firewall or Network Security Group configured on traffic from the subnet allows traffic on ports 443 and 445.
application-gateway Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link.md
Previously updated : 05/09/2022 Last updated : 11/02/2022
Today, you can deploy your critical workloads securely behind Application Gateway, gaining the flexibility of Layer 7 load balancing features. Access to the backend workloads is possible in two ways:

- Public IP address - your workloads are accessible over the Internet.
-- Private IP address - your workloads are accessible via a private IP address, but within the same VNet as the Application Gateway.
+- Private IP address - your workloads are accessible privately via your virtual network and connected networks.
Private Link for Application Gateway allows you to connect workloads over a private connection spanning across VNets and subscriptions. When configured, a private endpoint will be placed into a defined virtual network's subnet, providing a private IP address for clients looking to communicate to the gateway. For a list of other PaaS services that support Private Link functionality, see [What is Azure Private Link?](../private-link/private-link-overview.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
| Platform9 | [Platform9 Managed Kubernetes (PMK)](https://platform9.com/managed-kubernetes/) | PMK Version [5.3.0](https://platform9.com/docs/kubernetes/release-notes#platform9-managed-kubernetes-version-53-release-notes); Kubernetes versions: v1.20.5, v1.19.6, v1.18.10 |
| Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 |
-| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
+| Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |

The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
The following Azure built-in roles are required for different aspects of managin
* To onboard machines, you must have the [Azure Connected Machine Onboarding](../../role-based-access-control/built-in-roles.md#azure-connected-machine-onboarding) or [Contributor](../../role-based-access-control/built-in-roles.md#contributor) role for the resource group in which the machines will be managed.
* To read, modify, and delete a machine, you must have the [Azure Connected Machine Resource Administrator](../../role-based-access-control/built-in-roles.md#azure-connected-machine-resource-administrator) role for the resource group.
-* To select a resource group from the drop-down list when using the **Generate script** method, you must have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role which includes **Reader** access).
+* To select a resource group from the drop-down list when using the **Generate script** method, you must also have the [Reader](../../role-based-access-control/built-in-roles.md#reader) role for that resource group (or another role that includes **Reader** access), in addition to the permissions needed to onboard machines listed above.
## Azure subscription and service limits
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
public class Function {
```
-> [!NOTE]
-> The value of AppSetting FUNCTIONS_EXTENSION_VERSION should be ~2 or ~3 for an optimized cold start experience.
## Next steps

For more information about Azure Functions Java development, see the following resources:
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Title: How to create a dataset using a GeoJson package
-description: Learn how to create a dataset using a GeoJson package embedding the module's JavaScript libraries.
+description: Learn how to create a dataset using a GeoJson package.
Previously updated : 10/31/2021 Last updated : 11/01/2021
https://us.atlas.microsoft.com/mapData/operations/{operationId}?api-version=2.0&
1. Copy the value of the `Resource-Location` key in the response header, which is the `resource location URL`. The `resource location URL` contains the unique identifier (`udid`) of the GeoJSON package resource.

### Create a dataset
-<!--
+ A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new [Dataset Create API][Dataset Create 2022-09-01-preview]. The Dataset Create API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
-A dataset is a collection of map features, such as buildings, levels, and rooms. To create a dataset from your GeoJSON, use the new create dataset API. The create dataset API takes the `udid` you got in the previous section and returns the `datasetId` of the new dataset.
> [!IMPORTANT]
> This is different from the [previous version][Dataset Create] in that it doesn't require a `conversionId` from a converted Drawing package.
See [Next steps](#next-steps) for links to articles to help you complete your in
## Add data to an existing dataset
-<!--
Data can be added to an existing dataset by providing the `datasetId` parameter to the [dataset create API][Dataset Create 2022-09-01-preview] along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
-Data can be added to an existing dataset by providing the `datasetId` parameter to the create dataset API along with the unique identifier of the data you wish to add. The unique identifier can be either a `udid` or `conversionId`. This creates a new dataset consisting of the data (facilities) from both the existing dataset and the new data being imported. Once the new dataset has been created successfully, the old dataset can be deleted.
One thing to consider when adding to an existing dataset is how the feature IDs are created. If a dataset is created from a converted drawing package, the feature IDs are generated automatically. When a dataset is created from a GeoJSON package, feature IDs must be provided in the GeoJSON file. When appending to an existing dataset, the original dataset drives the way feature IDs are created. If the original dataset was created using a `udid`, it uses the IDs from the GeoJSON, and will continue to do so with all GeoJSON packages appended to that dataset in the future. If the dataset was created using a `conversionId`, IDs will be internally generated, and will continue to be internally generated with all GeoJSON packages appended to that dataset in the future.
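Since GeoJSON-sourced datasets require you to supply feature IDs, it can help to validate them before import. A minimal shell sketch of such a check, assuming the documented character set of alphanumerics, hyphen, and dot (the helper function name is hypothetical):

```shell
# Hypothetical helper: report whether a candidate feature ID uses only
# the allowed characters (a-z, A-Z, 0-9, hyphen, dot).
is_valid_feature_id() {
  case "$1" in
    "" ) echo invalid ;;                 # empty IDs are rejected
    *[!A-Za-z0-9.-]* ) echo invalid ;;   # any disallowed character
    * ) echo valid ;;
  esac
}

is_valid_feature_id "room-101.a"
is_valid_feature_id "room 101"
```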
https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversio
| conversionId | The ID returned when converting your drawing package. For more information, see [Convert a Drawing package][conversion]. |
| datasetId | The dataset ID returned when creating the original dataset from a GeoJSON package. |
-<!--For more information, see [][].-->
## Geojson zip package requirements

The GeoJSON zip package consists of one or more [RFC 7946][RFC 7946] compliant GeoJSON files, one for each feature class, all in the root directory (subdirectories aren't supported), compressed with standard Zip compression and named using the `.ZIP` extension.
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[Facility Ontology]: creator-facility-ontology.md?pivots=facility-ontology-v2
[RFC 7946]: https://www.rfc-editor.org/rfc/rfc7946.html
[dataset-concept]: creator-indoor-maps.md#datasets
-<!--[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create-->
+[Dataset Create 2022-09-01-preview]: /rest/api/maps/v20220901preview/dataset/create
[Visual Studio]: https://visualstudio.microsoft.com/downloads/
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
In addition to the generally available data collection listed above, Azure Monit
| Azure Monitor feature | Current support | Other extensions installed | More information |
| : | : | : | : |
| Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (Public preview)](data-collection-text-log.md) |
-| Windows client installer | Public preview | None | [Set up Azure Monitor Agent on Windows client devices](azure-monitor-agent-windows-client.md) |
| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) |

In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
View [supported operating systems for Azure Arc Connected Machine agent](../../a
## Next steps

- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Alerts Manage Alerts Previous Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alerts-previous-version.md
1. Select **Done**.
1. You can edit the rule **Description**, and **Severity**. These details are used in all alert actions. Additionally, you can choose to not activate the alert rule on creation by selecting **Enable rule upon creation**.
1. Use the [**Suppress Alerts**](./alerts-unified-log.md#state-and-resolving-alerts) option if you want to suppress rule actions for a specified time after an alert is fired. The rule will still run and create alerts but actions won't be triggered to prevent noise. Mute actions value must be greater than the frequency of alert to be effective.
+1. To make alerts stateful, select **Automatically resolve alerts (preview)**.
![Suppress Alerts for Log Alerts](media/alerts-log/AlertsPreviewSuppress.png)

1. Specify if the alert rule should trigger one or more [**Action Groups**](./action-groups.md#webhook) when alert condition is met.

    > [!NOTE]
    > Refer to the [Azure subscription service limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) for limits on the actions that can be performed.
- > [!NOTE]
- > Log alert rules are currently [stateless and do not resolve](./alerts-unified-log.md#state-and-resolving-alerts).
1. (Optional) Customize actions in log alert rules:
    - **Custom Email Subject**: Overrides the *e-mail subject* of email actions. You can't modify the body of the mail and this field **isn't for email addresses**.
    - **Include custom Json payload**: Overrides the webhook JSON used by Action Groups assuming the action group contains a webhook action. Learn more about [webhook action for Log Alerts](./alerts-log-webhook.md).
azure-monitor Azure Web Apps Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net.md
In order to enable telemetry collection with Application Insights, only the Appl
Upgrading from version 2.8.9 happens automatically, without any extra actions. The new monitoring bits are delivered in the background to the target app service, and on application restart they'll be picked up.
-To check which version of the extension you're running, go to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
+To check which version of the extension you're running, go to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
:::image type="content"source="./media/azure-web-apps/extension-version.png" alt-text="Screenshot of the URL path to check the version of the extension you're running." border="false":::
If the upgrade is done from a version prior to 2.5.1, check that the Application
Below is our step-by-step troubleshooting guide for extension/agent based monitoring for ASP.NET based applications running on Azure App Services.

1. Check that `ApplicationInsightsAgent_EXTENSION_VERSION` app setting is set to a value of "~2".
-2. Browse to `https://scm.yoursitename.azurewebsites.net/ApplicationInsights`.
+2. Browse to `https://yoursitename.scm.azurewebsites.net/ApplicationInsights`.
:::image type="content"source="./media/azure-web-apps/app-insights-sdk-status.png" alt-text="Screenshot of the link above results page."border ="false":::
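The troubleshooting URL in step 2 follows a fixed pattern: the Kudu (`scm`) host is the site name plus `.scm.azurewebsites.net`. A small sketch of composing it (the site name is a placeholder):

```shell
# Compose the Kudu Application Insights status URL for an App Service site.
SITE_NAME=yoursitename   # placeholder site name
echo "https://${SITE_NAME}.scm.azurewebsites.net/ApplicationInsights"
```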
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
# Migrate to workspace-based Application Insights resources
-This article walks you through migrating a classic Application Insights resource to a workspace-based resource. Workspace-based resources support full integration between Application Insights and Log Analytics. Workspace-based resources send Application Insights telemetry to a common Log Analytics workspace. This behavior allows you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location.
-
-Workspace-based resources enable common Azure role-based access control across your resources and eliminate the need for cross-app/workspace queries.
-
-Workspace-based resources are currently available in all commercial regions and Azure US Government.
+This article walks through migrating a classic Application Insights resource to a workspace-based resource.
+
+Workspace-based resources:
+
+> [!div class="checklist"]
+> - Support full integration between Application Insights and [Log Analytics](../logs/log-analytics-overview.md)
+> - Send Application Insights telemetry to a common [Log Analytics workspace](../logs/log-analytics-workspace-overview.md)
+> - Allow you to access [the latest features of Azure Monitor](#new-capabilities) while keeping application, infrastructure, and platform logs in a consolidated location
+> - Enable common [Azure role-based access control](../../role-based-access-control/overview.md) across your resources
+> - Eliminate the need for cross-app/workspace queries
+> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml)
+> - Do not require changing instrumentation keys after migration from a Classic resource
## New capabilities
azure-monitor Separate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/separate-resources.md
Title: How to design your Application Insights deployment - One vs many resources? description: Direct telemetry to different resources for development, test, and production stamps. Previously updated : 05/11/2020 Last updated : 11/01/2022

# How many Application Insights resources should I deploy
-When you are developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version. To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys). To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to set the ikey in code instead of in the configuration file.
+When you're developing the next version of a web application, you don't want to mix up the [Application Insights](../../azure-monitor/app/app-insights-overview.md) telemetry from the new version and the already released version.
+
+To avoid confusion, send the telemetry from different development stages to separate Application Insights resources, with separate instrumentation keys (ikeys).
+
+To make it easier to change the instrumentation key as a version moves from one stage to another, it can be useful to [set the ikey dynamically in code](#dynamic-ikey) instead of in the configuration file.
(If your system is an Azure Cloud Service, there's [another method of setting separate ikeys](../../azure-monitor/app/azure-web-apps-net-core.md).)
When you are developing the next version of a web application, you don't want to
When you set up Application Insights monitoring for your web app, you create an Application Insights *resource* in Microsoft Azure. You open this resource in the Azure portal in order to see and analyze the telemetry collected from your app. The resource is identified by an *instrumentation key* (ikey). When you install the Application Insights package to monitor your app, you configure it with the instrumentation key, so that it knows where to send the telemetry.
-Each Application Insights resource comes with metrics that are available out-of-box. If completely separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
+Each Application Insights resource comes with metrics that are available out-of-box. If separate components report to the same Application Insights resource, these metrics may not make sense to dashboard/alert on.
### When to use a single Application Insights resource -- For application components that are deployed together. Usually developed by a single team, managed by the same set of DevOps/ITOps users.
+- For application components that are deployed together. These applications are usually developed by a single team and managed by the same set of DevOps/ITOps users.
- If it makes sense to aggregate Key Performance Indicators (KPIs) such as response durations, failure rates in dashboard etc., across all of them by default (you can choose to segment by role name in the Metrics Explorer experience).-- If there is no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
+- If there's no need to manage Azure role-based access control (Azure RBAC) differently between the application components.
- If you don't need metrics alert criteria that are different between the components.-- If you do not need to manage continuous exports differently between the components.-- If you do not need to manage billing/quotas differently between the components.-- If it is okay to have an API key have the same access to data from all components. And 10 API keys are sufficient for the needs across all of them.-- If it is okay to have the same smart detection and work item integration settings across all roles.
+- If you don't need to manage continuous exports differently between the components.
+- If you don't need to manage billing/quotas differently between the components.
+- If it's okay for an API key to have the same access to data from all components, and 10 API keys are sufficient for the needs across all of them.
+- If it's okay to have the same smart detection and work item integration settings across all roles.
> [!NOTE] > If you want to consolidate multiple Application Insights resources, you may point your existing application components to a new, consolidated Application Insights resource. The telemetry stored in your old resource won't be transferred to the new resource, so only delete the old resource when you have enough telemetry in the new resource for business continuity.
Each Application Insights resource comes with metrics that are available out-of-
### Other things to keep in mind - You may need to add custom code to ensure that meaningful values are set into the [Cloud_RoleName](./app-map.md?tabs=net#set-or-override-cloud-role-name) attribute. Without meaningful values set for this attribute, *NONE* of the portal experiences will work.-- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets these. For all other types of apps, you will likely need to set this explicitly.-- Live Metrics experience does not support splitting by role name.
+- For Service Fabric applications and classic cloud services, the SDK automatically reads from the Azure Role Environment and sets this value. For all other types of apps, you'll likely need to set it explicitly.
+- Live Metrics experience doesn't support splitting by role name.
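The Cloud_RoleName requirement above can be illustrated with a language-agnostic sketch. This is not the SDK's telemetry-initializer API; the telemetry item is modeled as a plain dict, though the `ai.cloud.role` and `ai.cloud.roleInstance` tag names do follow the Application Insights context-tag convention:

```python
def set_cloud_role(telemetry, role_name, role_instance):
    """Stamp a role identity onto a telemetry item before it is sent.

    Real SDKs do this in a telemetry initializer/processor; here the
    item is just a dict with a "tags" sub-dict.
    """
    tags = telemetry.setdefault("tags", {})
    # setdefault preserves any role name the app already set explicitly.
    tags.setdefault("ai.cloud.role", role_name)
    tags.setdefault("ai.cloud.roleInstance", role_instance)
    return telemetry
```

Without a meaningful role name stamped on every item, the portal cannot split metrics or maps by component.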
## <a name="dynamic-ikey"></a> Dynamic instrumentation key
There are several different methods of setting the Application Version property.
This generates a file called *yourProjectName*.BuildInfo.config. The Publish process renames it to BuildInfo.config.
- The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it is populated with the correct version number.
+ The build label contains a placeholder (AutoGen_...) when you build with Visual Studio. But when built with MSBuild, it's populated with the correct version number.
To allow MSBuild to generate version numbers, set the version like `1.0.*` in AssemblyReference.cs
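For reference, the wildcard expansion can be sketched as follows. This assumes the commonly documented AssemblyVersion `*` rule (build number = days since January 1, 2000; revision = seconds since local midnight divided by two); the function name is hypothetical and the sketch is illustrative, not MSBuild's implementation:

```python
from datetime import datetime, date

def autogen_version(prefix, now):
    """Approximate how "1.0.*" expands to a four-part version number.

    build    = days elapsed since 2000-01-01
    revision = seconds elapsed since local midnight, halved
    """
    build = (now.date() - date(2000, 1, 1)).days
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    revision = (now - midnight).seconds // 2
    return f"{prefix}.{build}.{revision}"
```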
azure-monitor Container Insights Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md
Container insights uses a containerized version of the Log Analytics agent for L
## How to upgrade the Container insights agent
-Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Red Hat OpenShift version 3.x. For a [hybrid Kubernetes cluster](container-insights-hybrid-setup.md) and Azure Red Hat OpenShift version 4.x, the agent is not managed, and you need to manually upgrade the agent.
+Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes.
-If the agent upgrade fails for a cluster hosted on AKS or Azure Red Hat OpenShift version 3.x, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
+If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod).
### Upgrade agent on AKS cluster
Perform the following steps to upgrade the agent on a Kubernetes cluster running
* Self-managed Kubernetes clusters hosted on Azure using AKS Engine.
* Self-managed Kubernetes clusters hosted on Azure Stack or on-premises using AKS Engine.
-* Red Hat OpenShift version 4.x.
If the Log Analytics workspace is in commercial Azure, run the following command:
If the Log Analytics workspace is in Azure US Government, run the following comm
$ helm upgrade --set omsagent.domain=opinsights.azure.us,omsagent.secret.wsid=<your_workspace_id>,omsagent.secret.key=<your_workspace_key>,omsagent.env.clusterName=<your_cluster_name> incubator/azuremonitor-containers ```
-### Upgrade agent on Azure Red Hat OpenShift v4
-
-Perform the following steps to upgrade the agent on a Kubernetes cluster running on Azure Red Hat OpenShift version 4.x.
-
->[!NOTE]
->Azure Red Hat OpenShift version 4.x only supports running in the Azure commercial cloud.
->
-
-```console
-curl -o upgrade-monitoring.sh -L https://aka.ms/upgrade-monitoring-bash-script
-export azureAroV4ClusterResourceId="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.RedHatOpenShift/OpenShiftClusters/<clusterName>"
-bash upgrade-monitoring.sh --resource-id $azureAroV4ClusterResourceId
-```
- ## How to disable environment variable collection on a container Container insights collects environmental variables from the containers running in a pod and presents them in the property pane of the selected container in the **Containers** view. You can control this behavior by disabling collection for a specific container either during deployment of the Kubernetes cluster, or after by setting the environment variable *AZMON_COLLECT_ENV*. This feature is available from agent version ciprod11292018 and higher.
azure-monitor Container Insights Prometheus Monitoring Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-prometheus-monitoring-addon.md
- Title: Send Prometheus metrics to Azure Monitor Logs with Container insights
-description: Configure the Container insights agent to scrape Prometheus metrics from your Kubernetes cluster and send to Log Analytics workspace in Azure Monitor.
-- Previously updated : 09/15/2022---
-# Send Prometheus metrics to Azure Monitor Logs with Container insights
-This article describes how to send Prometheus metrics to a Log Analytics workspace with the Container insights monitoring addon. You can also send metrics to Azure Monitor managed service for Prometheus with the metrics addon, which supports standard Prometheus features such as PromQL and Prometheus alert rules. See [Collect Prometheus metrics with Container insights](container-insights-prometheus.md).
--
-## Prometheus scraping settings
-
-Active scraping of metrics from Prometheus is performed from one of two perspectives:
--- **Cluster-wide**: Defined in the ConfigMap section *[Prometheus data_collection_settings.cluster]*.-- **Node-wide**: Defined in the ConfigMap section *[Prometheus_data_collection_settings.node]*.-
-| Endpoint | Scope | Example |
-|-|-||
-| Pod annotation | Cluster-wide | `prometheus.io/scrape: "true"` <br>`prometheus.io/path: "/mymetrics"` <br>`prometheus.io/port: "8000"` <br>`prometheus.io/scheme: "http"` |
| Kubernetes service | Cluster-wide | `http://my-service-dns.my-namespace:9100/metrics` <br>`https://metrics-server.kube-system.svc.cluster.local/metrics` |
-| URL/endpoint | Per-node and/or cluster-wide | `http://myurl:9101/metrics` |
-
-When a URL is specified, Container insights only scrapes the endpoint. When Kubernetes service is specified, the service name is resolved with the cluster DNS server to get the IP address. Then the resolved service is scraped.
-
-|Scope | Key | Data type | Value | Description |
-||--|--|-|-|
-| Cluster-wide | | | | Specify any one of the following three methods to scrape endpoints for metrics. |
-| | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| | `kubernetes_services` | String | Comma-separated array | An array of Kubernetes services to scrape metrics from kube-state-metrics. Fully qualified domain names must be used here. For example,`kubernetes_services = ["https://metrics-server.kube-system.svc.cluster.local/metrics",http://my-service-dns.my-namespace.svc.cluster.local:9100/metrics]`|
-| | `monitor_kubernetes_pods` | Boolean | true or false | When set to `true` in the cluster-wide settings, the Container insights agent will scrape Kubernetes pods across the entire cluster for the following Prometheus annotations:<br> `prometheus.io/scrape:`<br> `prometheus.io/scheme:`<br> `prometheus.io/path:`<br> `prometheus.io/port:` |
-| | `prometheus.io/scrape` | Boolean | true or false | Enables scraping of the pod, and `monitor_kubernetes_pods` must be set to `true`. |
-| | `prometheus.io/scheme` | String | http or https | Defaults to scraping over HTTP. If necessary, set to `https`. |
-| | `prometheus.io/path` | String | Comma-separated array | The HTTP resource path from which to fetch metrics. If the metrics path isn't `/metrics`, define it with this annotation. |
-| | `prometheus.io/port` | String | 9102 | Specify a port to scrape from. If the port isn't set, it will default to 9102. |
-| | `monitor_kubernetes_pods_namespaces` | String | Comma-separated array | An allowlist of namespaces to scrape metrics from Kubernetes pods.<br> For example, `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]` |
-| Node-wide | `urls` | String | Comma-separated array | HTTP endpoint (either IP address or valid URL path specified). For example: `urls=[$NODE_IP/metrics]`. ($NODE_IP is a specific Container insights parameter and can be used instead of a node IP address. Must be all uppercase.) |
-| Node-wide or cluster-wide | `interval` | String | 60s | The collection interval default is one minute (60 seconds). You can modify the collection for either the *[prometheus_data_collection_settings.node]* and/or *[prometheus_data_collection_settings.cluster]* to time units such as s, m, and h. |
-| Node-wide or cluster-wide | `fieldpass`<br> `fielddrop`| String | Comma-separated array | You can specify certain metrics to be collected or not from the endpoint by setting the allow (`fieldpass`) and disallow (`fielddrop`) listing. You must set the allowlist first. |
-
-## Configure ConfigMaps
-Perform the following steps to configure your ConfigMap configuration file for your cluster. A ConfigMap is a global list, and only one ConfigMap can be applied to the agent. You can't have another ConfigMap overriding the collections.
---
-1. [Download](https://aka.ms/container-azm-ms-agentconfig) the template ConfigMap YAML file and save it as *container-azm-ms-agentconfig.yaml*. If you've already deployed a ConfigMap to your cluster and you want to update it with a newer configuration, you can edit the ConfigMap file you've previously used.
-1. Edit the ConfigMap YAML file with your customizations to scrape Prometheus metrics.
--
- ### [Cluster-wide](#tab/cluster-wide)
-
- To collect Kubernetes services cluster-wide, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
- ```
-
- ### [Specific URL](#tab/url)
-
- To configure scraping of Prometheus metrics from a specific URL across the cluster, configure the ConfigMap file by using the following example:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h.
- fieldpass = ["metric_to_pass1", "metric_to_pass12"] ## specify metrics to pass through
- fielddrop = ["metric_to_drop"] ## specify metrics to drop from collecting
- urls = ["http://myurl:9101/metrics"] ## An array of urls to scrape metrics from
- ```
-
- ### [DaemonSet](#tab/deamonset)
-
- To configure scraping of Prometheus metrics from an agent's DaemonSet for every individual node in the cluster, configure the following example in the ConfigMap:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.node]
- interval = "1m" ## Valid time units are s, m, h.
- urls = ["http://$NODE_IP:9103/metrics"]
- fieldpass = ["metric_to_pass1", "metric_to_pass2"]
- fielddrop = ["metric_to_drop"]
- ```
-
- `$NODE_IP` is a specific Container insights parameter and can be used instead of a node IP address. It must be all uppercase.
-
- ### [Pod annotation](#tab/pod)
-
- To configure scraping of Prometheus metrics by specifying a pod annotation:
-
- 1. In the ConfigMap, specify the following configuration:
-
- ```
- prometheus-data-collection-settings: |-
- # Custom Prometheus metrics data collection settings
- [prometheus_data_collection_settings.cluster]
- interval = "1m" ## Valid time units are s, m, h
- monitor_kubernetes_pods = true
- ```
-
- 1. Specify the following configuration for pod annotations:
-
- ```
- - prometheus.io/scrape:"true" #Enable scraping for this pod
- - prometheus.io/scheme:"http" #If the metrics endpoint is secured then you will need to set this to `https`, if not default 'http'
- - prometheus.io/path:"/mymetrics" #If the metrics path is not /metrics, define it with this annotation.
- - prometheus.io/port:"8000" #If port is not 9102 use this annotation
- ```
-
- If you want to restrict monitoring to specific namespaces for pods that have annotations, for example, only include pods dedicated for production workloads, set `monitor_kubernetes_pods` to `true` in ConfigMap. Then add the namespace filter `monitor_kubernetes_pods_namespaces` to specify the namespaces to scrape from. An example is `monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]`.
-
-2. Run the following kubectl command: `kubectl apply -f <configmap_yaml_file.yaml>`.
-
- Example: `kubectl apply -f container-azm-ms-agentconfig.yaml`.
-
-The configuration change can take a few minutes to finish before taking effect. You must restart all Azure Monitor Agent pods manually. When the restarts are finished, a message appears that's similar to the following and includes the result `configmap "container-azm-ms-agentconfig" created`.
--
-## Verify configuration
-
-To verify the configuration was successfully applied to a cluster, use the following command to review the logs from an agent pod: `kubectl logs ama-logs-fdf58 -n=kube-system`.
--
-If there are configuration errors from the Azure Monitor Agent pods, the output will show errors similar to the following example:
-
-```
-***************Start Config Processing********************
-config::unsupported/missing config schema version - 'v21' , using defaults
-```
-
-Errors related to applying configuration changes are also available for review. The following options are available to perform additional troubleshooting of configuration changes and scraping of Prometheus metrics:
--- From an agent pod logs using the same `kubectl logs` command.--- From Live Data (preview). Live Data (preview) logs show errors similar to the following example:-
- ```
- 2019-07-08T18:55:00Z E! [inputs.prometheus]: Error in plugin: error making HTTP request to http://invalidurl:1010/metrics: Get http://invalidurl:1010/metrics: dial tcp: lookup invalidurl on 10.0.0.10:53: no such host
- ```
--- From the **KubeMonAgentEvents** table in your Log Analytics workspace. Data is sent every hour with *Warning* severity for scrape errors and *Error* severity for configuration errors. If there are no errors, the entry in the table will have data with severity *Info*, which reports no errors. The **Tags** property contains more information about the pod and container ID on which the error occurred and also the first occurrence, last occurrence, and count in the last hour.-- For Azure Red Hat OpenShift v3.x and v4.x, check the Azure Monitor Agent logs by searching the **ContainerLog** table to verify if log collection of openshift-azure-logging is enabled.-
-Errors prevent Azure Monitor Agent from parsing the file, causing it to restart and use the default configuration. After you correct the errors in ConfigMap on clusters other than Azure Red Hat OpenShift v3.x, save the YAML file and apply the updated ConfigMaps by running the command `kubectl apply -f <configmap_yaml_file.yaml>`.
-
-For Azure Red Hat OpenShift v3.x, edit and save the updated ConfigMaps by running the command `oc edit configmaps container-azm-ms-agentconfig -n openshift-azure-logging`.
-
-## Query Prometheus metrics data
-
-To view Prometheus metrics scraped by Azure Monitor and any configuration/scraping errors reported by the agent, review [Query Prometheus metrics data](container-insights-log-query.md#prometheus-metrics).
-
-## View Prometheus metrics in Grafana
-
-Container insights supports viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from Grafana's [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use the template to get started and reference it to help you learn how to query other data from your monitored clusters to visualize in custom Grafana dashboards.
---
-## Next steps
--- [Learn more about scraping Prometheus metrics](container-insights-prometheus.md).-- [Configure your cluster to send data to Azure Monitor managed service for Prometheus](container-insights-prometheus-metrics-addon.md).
azure-monitor Grafana Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/grafana-plugin.md
Use the following steps to set up a Grafana server and build dashboards for metr
## Set up Grafana
-### Set up Azure Managed Grafana (Preview)
+### Set up Azure Managed Grafana
Azure Managed Grafana is optimized for the Azure environment and works seamlessly with Azure Monitor, enabling you to: - Manage user authentication and access control using Azure Active Directory identities
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
na Previously updated : 10/27/2022 Last updated : 11/02/2022 # Create and manage Active Directory connections for Azure NetApp Files
Several features of Azure NetApp Files require that you have an Active Directory
See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ * <a name="encrypted-smb-dc"></a> **Encrypted SMB connections to Domain Controller**
+
+ **Encrypted SMB connections to Domain Controller** specifies whether encryption should be used for communication between an SMB server and domain controller. When enabled, only SMB3 will be used for encrypted domain controller connections.
+
+ This feature is currently in preview. If this is your first time using Encrypted SMB connections to domain controller, you must register it:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEncryptedSMBConnectionsToDC
+ ```
+
+ Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing.
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFEncryptedSMBConnectionsToDC
+ ```
+
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
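As a sketch of the CLI equivalent (using the generic `az feature` commands; verify the feature name and output against your subscription):

```azurecli-interactive
az feature register --namespace Microsoft.NetApp --name ANFEncryptedSMBConnectionsToDC
az feature show --namespace Microsoft.NetApp --name ANFEncryptedSMBConnectionsToDC --query properties.state
```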
+ * <a name="backup-policy-users"></a> **Backup policy users** This option grants additional security privileges to AD DS domain users or groups that require elevated backup privileges to support backup, restore, and migration workflows in Azure NetApp Files. The specified AD DS user accounts or groups will have elevated NTFS permissions at the file or folder level.
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Previously updated : 07/22/2022 Last updated : 11/02/2022
Once you have [created an Active Directory connection](create-active-directory-c
| Allow local NFS users with LDAP | If enabled, this option will manage access for local users and LDAP users. | Yes | This option will allow access to local users. It is not recommended and, if enabled, should only be used for a limited time and later disabled. | If enabled, this option will allow access to local users and LDAP users. If access is needed for only LDAP users, this option must be disabled. | | LDAP over TLS | If enabled, LDAP over TLS will be configured to support secure LDAP communication to active directory. | Yes | None | If LDAP over TLS is enabled and if the server root CA certificate is already present in the database, then LDAP traffic is secured using the CA certificate. If a new certificate is passed in, that certificate will be installed. | | Server root CA Certificate | When LDAP over SSL/TLS is enabled, the LDAP client is required to have base64-encoded Active Directory Certificate Service's self-signed root CA certificate. | Yes | None* | LDAP traffic secured with new certificate only if LDAP over TLS is enabled |
+| Encrypted SMB connections to Domain Controller | This specifies whether encryption should be used for communication between SMB server and domain controller. See [Create Active Directory connections](create-active-directory-connections.md#encrypted-smb-dc) for more details on using this feature. | Yes | SMB, Kerberos, and LDAP enabled volume creation cannot be used if the domain controller does not support SMB3 | Only SMB3 will be used for encrypted domain controller connections. |
| Backup policy users | You can include additional accounts that require elevated privileges to the computer account created for use with Azure NetApp Files. See [Create and manage Active Directory connections](create-active-directory-connections.md#create-an-active-directory-connection) for more information. | Yes | None* | The specified accounts will be allowed to change the NTFS permissions at the file or folder level. | | Administrators | Specify users or groups that will be given administrator privileges on the volume | Yes | None | User account will receive administrator privileges | | Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 10/25/2022 Last updated : 11/02/2022 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## November 2022
+
+* [Encrypted SMB connections to Domain Controller](create-active-directory-connections.md#encrypted-smb-dc) (Preview)
+
+ With the Encrypted SMB connections to Active Directory Domain Controller capability, you can now specify whether encryption should be used for communication between the SMB server and domain controller in Active Directory connections. When enabled, only SMB3 will be used for encrypted domain controller connections.
+ ## October 2022 * [Availability zone volume placement](manage-availability-zone-volume-placement.md) (Preview)
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config description: Describes how to customize configuration values for the Bicep linter Previously updated : 09/30/2022 Last updated : 11/01/2022 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"artifacts-parameters": { "level": "warning" },
+ "decompiler-cleanup": {
+ "level": "warning"
+ },
"max-outputs": { "level": "warning" },
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
Previously updated : 10/26/2022 Last updated : 11/01/2022 - # Use deployment scripts in Bicep
resource runPowerShellInline 'Microsoft.Resources/deploymentScripts@2020-10-01'
storageAccountName: 'myStorageAccount' storageAccountKey: 'myKey' }
- azPowerShellVersion: '6.4' // or azCliVersion: '2.28.0'
+ azPowerShellVersion: '8.3' // or azCliVersion: '2.40.0'
arguments: '-name \\"John Dole\\"' environmentVariables: [ {
Property value details:
The following Bicep file has one resource defined with the `Microsoft.Resources/deploymentScripts` type. The highlighted part is the inline script. The script takes a parameter, and output the parameter value. `DeploymentScriptOutputs` is used for storing outputs. The output line shows how to access the stored values. `Write-Output` is used for debugging purpose. To learn how to access the output file, see [Monitor and troubleshoot deployment scripts](#monitor-and-troubleshoot-deployment-scripts). For the property descriptions, see [Sample Bicep files](#sample-bicep-files).
You can use the [loadTextContent](bicep-functions-files.md#loadtextcontent) func
The following example loads a script from a file and uses it for a deployment script. ## Use external scripts
The supporting files are copied to `azscripts/azscriptinput` at the runtime. Use
The following Bicep file shows how to pass values between two `deploymentScripts` resources: In the first resource, you define a variable called `$DeploymentScriptOutputs`, and use it to store the output values. Use resource symbolic name to access the output values.
Different from the PowerShell deployment script, CLI/bash support doesn't expose
Deployment script outputs must be saved in the `AZ_SCRIPTS_OUTPUT_PATH` location, and the outputs must be a valid JSON string object. The contents of the file must be saved as a key-value pair. For example, an array of strings is stored as `{ "MyResult": [ "foo", "bar"] }`. Storing just the array results, for example `[ "foo", "bar" ]`, is invalid. [jq](https://stedolan.github.io/jq/) is used in the previous sample. It comes with the container images. See [Configure development environment](#configure-development-environment).
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState : Succeeded Identity : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami ScriptKind : AzurePowerShell
-AzPowerShellVersion : 3.0
-StartTime : 6/18/2020 7:46:45 PM
-EndTime : 6/18/2020 7:49:45 PM
-ExpirationDate : 6/19/2020 7:49:45 PM
+AzPowerShellVersion : 8.3
+StartTime : 6/18/2022 7:46:45 PM
+EndTime : 6/18/2022 7:49:45 PM
+ExpirationDate : 6/19/2022 7:49:45 PM
CleanupPreference : OnSuccess StorageAccountId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
The list command output is similar to:
[ { "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "3.0",
+ "azPowerShellVersion": "8.3",
"cleanupPreference": "OnSuccess", "containerSettings": { "containerGroupName": null }, "environmentVariables": null,
- "forceUpdateTag": "20200625T025902Z",
+ "forceUpdateTag": "20220625T025902Z",
"id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput", "identity": { "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF",
The list command output is similar to:
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2020-06-25T03:00:16.796923+00:00",
+ "endTime": "2022-06-25T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2020-06-26T03:00:16.796923+00:00",
- "startTime": "2020-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2022-06-26T03:00:16.796923+00:00",
+ "startTime": "2022-06-25T02:59:07.595140+00:00",
"storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts" }, "storageAccountSettings": null, "supportingScriptUris": null, "systemData": {
- "createdAt": "2020-06-25T02:59:04.750195+00:00",
+ "createdAt": "2022-06-25T02:59:04.750195+00:00",
"createdBy": "someone@contoso.com", "createdByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User" },
The output is similar to:
"systemData": { "createdBy": "someone@contoso.com", "createdByType": "User",
- "createdAt": "2020-06-25T02:59:04.7501955Z",
+ "createdAt": "2022-06-25T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
}, "properties": { "provisioningState": "Succeeded",
- "forceUpdateTag": "20200625T025902Z",
- "azPowerShellVersion": "3.0",
+ "forceUpdateTag": "20220625T025902Z",
+ "azPowerShellVersion": "8.3",
"scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ", "arguments": "-name \\\"John Dole\\\"", "retentionInterval": "P1D",
The output is similar to:
"status": { "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts", "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2020-06-25T02:59:07.5951401Z",
- "endTime": "2020-06-25T03:00:16.7969234Z",
- "expirationTime": "2020-06-26T03:00:16.7969234Z"
+ "startTime": "2022-06-25T02:59:07.5951401Z",
+ "endTime": "2022-06-25T03:00:16.7969234Z",
+ "expirationTime": "2022-06-26T03:00:16.7969234Z"
}, "outputs": { "text": "Hello John Dole"
azure-resource-manager Linter Rule Decompiler Cleanup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-decompiler-cleanup.md
+
+ Title: Linter rule - decompiler cleanup
+description: Linter rule - decompiler cleanup
+ Last updated : 11/01/2022
+# Linter rule - decompiler cleanup
+
+The [Bicep CLI decompile](./bicep-cli.md#decompile) command converts ARM template JSON to a Bicep file. If a variable name, parameter name, or resource symbolic name is ambiguous, the Bicep CLI adds a suffix to the name, for example *accountName_var* or *virtualNetwork_resource*. This rule finds these names in Bicep files.
+
+## Linter rule code
+
+Use the following value in the [Bicep configuration file](bicep-config-linter.md) to customize rule settings:
+
+`decompiler-cleanup`
+
+## Solution
+
+To improve readability, replace these names with more meaningful names.
+
+## Next steps
+
+For more information about the linter, see [Use Bicep linter](./linter.md).
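As an illustration of what `decompiler-cleanup` flags (this sketch is not part of the article above; the names are hypothetical):

```bicep
// Before: the decompiler appended a suffix to disambiguate the name.
var accountName_var = 'mystorageacct'
output account string = accountName_var

// After: rename to something meaningful without the generated suffix.
// var accountName = 'mystorageacct'
// output account string = accountName
```

Renaming is safe as long as every reference to the suffixed symbol is updated in the same file.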
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter
description: Learn how to use Bicep linter.
Previously updated : 9/30/2022
Last updated : 11/01/2022

# Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [adminusername-should-not-be-literal](./linter-rule-admin-username-should-not-be-literal.md) - [artifacts-parameters](./linter-rule-artifacts-parameters.md)
+- [decompiler-cleanup](./linter-rule-decompiler-cleanup.md)
- [max-outputs](./linter-rule-max-outputs.md) - [max-params](./linter-rule-max-parameters.md) - [max-resources](./linter-rule-max-resources.md)
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | - | -- | - | -- | > | diagnosticsettings | No | No | No | > | diagnosticsettingscategories | No | No | No |
-> | privatelinkforazuread | Yes | Yes | No |
-> | tenants | Yes | Yes | No |
+> | privatelinkforazuread | **Yes** | **Yes** | No |
+> | tenants | **Yes** | **Yes** | No |
## Microsoft.Addons
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | actionrules | Yes | Yes | No |
+> | actionrules | **Yes** | **Yes** | No |
> | alerts | No | No | No | > | alertslist | No | No | No | > | alertsmetadata | No | No | No | > | alertssummary | No | No | No | > | alertssummarylist | No | No | No |
-> | smartdetectoralertrules | Yes | Yes | No |
+> | smartdetectoralertrules | **Yes** | **Yes** | No |
> | smartgroups | No | No | No | ## Microsoft.AnalysisServices
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | servers | Yes | Yes | No |
+> | servers | **Yes** | **Yes** | No |
## Microsoft.ApiManagement
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | reportfeedback | No | No | No |
-> | service | Yes | Yes | Yes (using template) <br/><br/> [Move API Management across regions](../../api-management/api-management-howto-migrate.md). |
+> | service | **Yes** | **Yes** | **Yes** (using template) <br/><br/> [Move API Management across regions](../../api-management/api-management-howto-migrate.md). |
## Microsoft.App > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | managedenvironments | Yes | Yes | No |
+> | managedenvironments | **Yes** | **Yes** | No |
## Microsoft.AppConfiguration > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | configurationstores | Yes | Yes | No |
+> | configurationstores | **Yes** | **Yes** | No |
> | configurationstores / eventgridfilters | No | No | No | ## Microsoft.AppPlatform
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | spring | Yes | Yes | No |
+> | spring | **Yes** | **Yes** | No |
## Microsoft.AppService
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | apiapps | No | No | Yes (using template)<br/><br/> [Move an App Service app to another region](../../app-service/manage-move-across-regions.md) |
+> | apiapps | No | No | **Yes** (using template)<br/><br/> [Move an App Service app to another region](../../app-service/manage-move-across-regions.md) |
> | appidentities | No | No | No | > | gateways | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | attestationproviders | Yes | Yes | No |
+> | attestationproviders | **Yes** | **Yes** | No |
## Microsoft.Authorization
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | automationaccounts | Yes | Yes | Yes (using template) <br/><br/> [Using geo-replication](../../automation/automation-managing-data.md#geo-replication-in-azure-automation) |
-> | automationaccounts / configurations | Yes | Yes | No |
-> | automationaccounts / runbooks | Yes | Yes | No |
+> | automationaccounts | **Yes** | **Yes** | **Yes** (using template) <br/><br/> [Using geo-replication](../../automation/automation-managing-data.md#geo-replication-in-azure-automation) |
+> | automationaccounts / configurations | **Yes** | **Yes** | No |
+> | automationaccounts / runbooks | **Yes** | **Yes** | No |
## Microsoft.AVS > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | privateclouds | Yes | Yes | No |
+> | privateclouds | **Yes** | **Yes** | No |
## Microsoft.AzureActiveDirectory > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | b2cdirectories | Yes | Yes | No |
+> | b2cdirectories | **Yes** | **Yes** | No |
> | b2ctenants | No | No | No | ## Microsoft.AzureData
Jump to a resource provider namespace:
> | sqlinstances | No | No | No | > | sqlmanagedinstances | No | No | No | > | sqlserverinstances | No | No | No |
-> | sqlserverregistrations | Yes | Yes | No |
+> | sqlserverregistrations | **Yes** | **Yes** | No |
## Microsoft.AzureStack
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | cloudmanifestfiles | No | No | No |
-> | registrations | Yes | Yes | No |
+> | registrations | **Yes** | **Yes** | No |
## Microsoft.AzureStackHCI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | batchaccounts | Yes | Yes | Batch accounts can't be moved directly from one region to another, but you can use a template to export a template, modify it, and deploy the template to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/account-move.md) |
+> | batchaccounts | **Yes** | **Yes** | Batch accounts can't be moved directly from one region to another, but you can use a template to export a template, modify it, and deploy the template to the new region. <br/><br/> Learn about [moving a Batch account across regions](../../batch/account-move.md) |
## Microsoft.Billing
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | botservices | Yes | Yes | No |
+> | botservices | **Yes** | **Yes** | No |
## Microsoft.Cache
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | redis | Yes | Yes | No |
+> | redis | **Yes** | **Yes** | No |
> | redisenterprise | No | No | No | ## Microsoft.Capacity
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | cdnwebapplicationfirewallmanagedrulesets | No | No | No |
-> | cdnwebapplicationfirewallpolicies | Yes | Yes | No |
+> | cdnwebapplicationfirewallpolicies | **Yes** | **Yes** | No |
> | edgenodes | No | No | No |
-> | profiles | Yes | Yes | No |
-> | profiles / endpoints | Yes | Yes | No |
+> | profiles | **Yes** | **Yes** | No |
+> | profiles / endpoints | **Yes** | **Yes** | No |
## Microsoft.CertificateRegistration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | certificateorders | Yes | Yes | No |
+> | certificateorders | **Yes** | **Yes** | No |
## Microsoft.ClassicCompute
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | capabilities | No | No | No |
-> | domainnames | Yes | No | No |
+> | domainnames | **Yes** | No | No |
> | quotas | No | No | No | > | resourcetypes | No | No | No | > | validatesubscriptionmoveavailability | No | No | No |
-> | virtualmachines | Yes | Yes | No |
+> | virtualmachines | **Yes** | **Yes** | No |
## Microsoft.ClassicInfrastructureMigrate
Jump to a resource provider namespace:
> | osplatformimages | No | No | No | > | publicimages | No | No | No | > | quotas | No | No | No |
-> | storageaccounts | Yes | No | Yes |
+> | storageaccounts | **Yes** | No | **Yes** |
> | vmimages | No | No | No | ## Microsoft.ClassicSubscription
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
-> | Cognitive Search | Yes | Yes | Supported with manual steps.<br/><br/> Learn about [moving your Azure Cognitive Search service to another region](../../search/search-howto-move-across-regions.md) |
+> | accounts | **Yes** | **Yes** | No |
+> | Cognitive Search | **Yes** | **Yes** | Supported with manual steps.<br/><br/> Learn about [moving your Azure Cognitive Search service to another region](../../search/search-howto-move-across-regions.md) |
## Microsoft.Commerce
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | communicationservices | Yes | Yes <br/><br/> Note that resources with attached phone numbers cannot be moved to subscriptions in different data locations, nor subscriptions that do not support having phone numbers. | No |
+> | communicationservices | **Yes** | **Yes** <br/><br/> Note that resources with attached phone numbers cannot be moved to subscriptions in different data locations, nor subscriptions that do not support having phone numbers. | No |
## Microsoft.Compute
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | availabilitysets | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move availability sets. |
+> | availabilitysets | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move availability sets. |
> | diskaccesses | No | No | No | > | diskencryptionsets | No | No | No |
-> | disks | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs and related disks. |
+> | disks | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs and related disks. |
> | galleries | No | No | No | > | galleries / images | No | No | No | > | galleries / images / versions | No | No | No | > | hostgroups | No | No | No | > | hostgroups / hosts | No | No | No |
-> | images | Yes | Yes | No |
-> | proximityplacementgroups | Yes | Yes | No |
+> | images | **Yes** | **Yes** | No |
+> | proximityplacementgroups | **Yes** | **Yes** | No |
> | restorepointcollections | No | No | No | > | restorepointcollections / restorepoints | No | No | No | > | sharedvmextensions | No | No | No | > | sharedvmimages | No | No | No | > | sharedvmimages / versions | No | No | No |
-> | snapshots | Yes - Full <br> No - Incremental | Yes - Full <br> No - Incremental | No - Full <br> No - Incremental |
+> | snapshots | **Yes** - Full <br> No - Incremental | **Yes** - Full <br> No - Incremental | No - Full <br> No - Incremental |
> | sshpublickeys | No | No | No |
-> | virtualmachines | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. |
-> | virtualmachines / extensions | Yes | Yes | No |
-> | virtualmachinescalesets | Yes | Yes | No |
+> | virtualmachines | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move Azure VMs. |
+> | virtualmachines / extensions | **Yes** | **Yes** | No |
+> | virtualmachinescalesets | **Yes** | **Yes** | No |
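For resource types marked **Yes** in the tables above, the move itself can be performed with the Azure CLI `az resource move` command; a minimal sketch (the resource group names and VM name are placeholders):

```shell
# Look up the resource ID of a VM (Microsoft.Compute/virtualMachines supports
# both resource group and subscription moves per the table above).
vmId=$(az vm show --resource-group srcRG --name myVM --query id --output tsv)

# Move it to another resource group in the same subscription.
az resource move --destination-group destRG --ids "$vmId"

# For a cross-subscription move, also pass --destination-subscription-id.
```

The command validates the move before starting it and fails if any resource in the batch is of a type the tables mark as unsupported.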
## Microsoft.Confluent
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | registries | Yes | Yes | No |
-> | registries / agentpools | Yes | Yes | No |
-> | registries / buildtasks | Yes | Yes | No |
-> | registries / replications | Yes | Yes | No |
-> | registries / tasks | Yes | Yes | No |
-> | registries / webhooks | Yes | Yes | No |
+> | registries | **Yes** | **Yes** | No |
+> | registries / agentpools | **Yes** | **Yes** | No |
+> | registries / buildtasks | **Yes** | **Yes** | No |
+> | registries / replications | **Yes** | **Yes** | No |
+> | registries / tasks | **Yes** | **Yes** | No |
+> | registries / webhooks | **Yes** | **Yes** | No |
## Microsoft.ContainerService
Jump to a resource provider namespace:
> | billingaccounts | No | No | No | > | budgets | No | No | No | > | cloudconnectors | No | No | No |
-> | connectors | Yes | Yes | No |
+> | connectors | **Yes** | **Yes** | No |
> | departments | No | No | No | > | dimensions | No | No | No | > | enrollmentaccounts | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | associations | No | No | No |
-> | resourceproviders | Yes | Yes | No |
+> | resourceproviders | **Yes** | **Yes** | No |
## Microsoft.DataBox
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | catalogs | Yes | Yes | No |
+> | catalogs | **Yes** | **Yes** | No |
> | datacatalogs | No | No | No | ## Microsoft.DataConnect
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | datafactories | Yes | Yes | No |
-> | factories | Yes | Yes | No |
+> | datafactories | **Yes** | **Yes** | No |
+> | factories | **Yes** | **Yes** | No |
## Microsoft.DataLake
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DataLakeStore > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DataMigration
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | - |
-> | backupvaults | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [Yes](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No |
+> | backupvaults | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-resource-group) | [**Yes**](../../backup/backup-vault-overview.md#use-azure-portal-to-move-backup-vault-to-a-different-subscription) | No |
## Microsoft.DataShare > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.DBforMariaDB > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recover-from-an-azure-regional-data-center-outage).
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recover-from-an-azure-regional-data-center-outage).
## Microsoft.DBforMySQL > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | flexibleServers | Yes | Yes | No |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../mysql/howto-move-regions-portal.md).
+> | flexibleServers | **Yes** | **Yes** | No |
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../mysql/howto-move-regions-portal.md).
## Microsoft.DBforPostgreSQL > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | flexibleServers | Yes | Yes | No |
+> | flexibleServers | **Yes** | **Yes** | No |
> | servergroups | No | No | No |
-> | servers | Yes | Yes | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).
-> | serversv2 | Yes | Yes | No |
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).
+> | serversv2 | **Yes** | **Yes** | No |
## Microsoft.DeploymentManager > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | artifactsources | Yes | Yes | No |
-> | rollouts | Yes | Yes | No |
-> | servicetopologies | Yes | Yes | No |
-> | servicetopologies / services | Yes | Yes | No |
-> | servicetopologies / services / serviceunits | Yes | Yes | No |
-> | steps | Yes | Yes | No |
+> | artifactsources | **Yes** | **Yes** | No |
+> | rollouts | **Yes** | **Yes** | No |
+> | servicetopologies | **Yes** | **Yes** | No |
+> | servicetopologies / services | **Yes** | **Yes** | No |
+> | servicetopologies / services / serviceunits | **Yes** | **Yes** | No |
+> | steps | **Yes** | **Yes** | No |
## Microsoft.DesktopVirtualization > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | applicationgroups | Yes | Yes | No |
-> | hostpools | Yes | Yes | No |
-> | workspaces | Yes | Yes | No |
+> | applicationgroups | **Yes** | **Yes** | No |
+> | hostpools | **Yes** | **Yes** | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.Devices
Jump to a resource provider namespace:
> | - | -- | - | -- | > | elasticpools | No | No | No. Resource isn't exposed. | > | elasticpools / iothubtenants | No | No | No. Resource isn't exposed. |
-> | iothubs | Yes | Yes | Yes. [Learn more](../../iot-hub/iot-hub-how-to-clone.md) |
-> | provisioningservices | Yes | Yes | No |
+> | iothubs | **Yes** | **Yes** | **Yes**. [Learn more](../../iot-hub/iot-hub-how-to-clone.md) |
+> | provisioningservices | **Yes** | **Yes** | No |
## Microsoft.DevOps > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | pipelines | Yes | Yes | No |
+> | pipelines | **Yes** | **Yes** | No |
> | controllers | **pending** | **pending** | No | ## Microsoft.DevSpaces
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | controllers | Yes | Yes | No |
+> | controllers | **Yes** | **Yes** | No |
> | AKS cluster | **pending** | **pending** | No<br/><br/> [Learn more](/previous-versions/azure/dev-spaces/) about moving to another region. ## Microsoft.DevTestLab
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | labcenters | No | No | No |
-> | labs | Yes | No | No |
-> | labs / environments | Yes | Yes | No |
-> | labs / servicerunners | Yes | Yes | No |
-> | labs / virtualmachines | Yes | No | No |
-> | schedules | Yes | Yes | No |
+> | labs | **Yes** | No | No |
+> | labs / environments | **Yes** | **Yes** | No |
+> | labs / servicerunners | **Yes** | **Yes** | No |
+> | labs / virtualmachines | **Yes** | No | No |
+> | schedules | **Yes** | **Yes** | No |
## Microsoft.DigitalTwins > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | digitaltwinsinstances | No | No | Yes, by recreating resources in new region. [Learn more](../../digital-twins/how-to-move-regions.md) |
+> | digitaltwinsinstances | No | No | **Yes**, by recreating resources in new region. [Learn more](../../digital-twins/how-to-move-regions.md) |
## Microsoft.DocumentDB
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | databaseaccountnames | No | No | No |
-> | databaseaccounts | Yes | Yes | No |
+> | databaseaccounts | **Yes** | **Yes** | No |
## Microsoft.DomainRegistration > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | domains | Yes | Yes | No |
+> | domains | **Yes** | **Yes** | No |
> | generatessorequest | No | No | No | > | topleveldomains | No | No | No | > | validatedomainregistrationinformation | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | services | Yes | Yes | No |
+> | services | **Yes** | **Yes** | No |
## Microsoft.EventGrid > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | domains | Yes | Yes | No |
+> | domains | **Yes** | **Yes** | No |
> | eventsubscriptions | No - can't be moved independently but automatically moved with subscribed resource. | No - can't be moved independently but automatically moved with subscribed resource. | No | > | extensiontopics | No | No | No |
-> | partnernamespaces | Yes | Yes | No |
+> | partnernamespaces | **Yes** | **Yes** | No |
> | partnerregistrations | No | No | No |
-> | partnertopics | Yes | Yes | No |
-> | systemtopics | Yes | Yes | No |
-> | topics | Yes | Yes | No |
+> | partnertopics | **Yes** | **Yes** | No |
+> | systemtopics | **Yes** | **Yes** | No |
+> | topics | **Yes** | **Yes** | No |
> | topictypes | No | No | No | ## Microsoft.EventHub
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
-> | namespaces | Yes | Yes | Yes (with template)<br/><br/> [Move an Event Hub namespace to another region](../../event-hubs/move-across-regions.md) |
+> | clusters | **Yes** | **Yes** | No |
+> | namespaces | **Yes** | **Yes** | **Yes** (with template)<br/><br/> [Move an Event Hub namespace to another region](../../event-hubs/move-across-regions.md) |
> | sku | No | No | No | ## Microsoft.Experimentation
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
## Microsoft.Features
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
## Microsoft.HealthcareApis > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | services | Yes | Yes | No |
+> | services | **Yes** | **Yes** | No |
## Microsoft.HybridCompute > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | machines | Yes | Yes | No |
-> | machines / extensions | Yes | Yes | No |
+> | machines | **Yes** | **Yes** | No |
+> | machines / extensions | **Yes** | **Yes** | No |
## Microsoft.HybridData > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | datamanagers | Yes | Yes | No |
+> | datamanagers | **Yes** | **Yes** | No |
## Microsoft.HybridNetwork
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | jobs | Yes | Yes | No |
+> | jobs | **Yes** | **Yes** | No |
## Microsoft.Insights
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | accounts | Yes | Yes | No. [Learn more](../../azure-monitor/faq.yml#how-do-i-move-an-application-insights-resource-to-a-new-region-). |
-> | actiongroups | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No. [Learn more](../../azure-monitor/faq.yml#how-do-i-move-an-application-insights-resource-to-a-new-region-). |
+> | actiongroups | **Yes** | **Yes** | No |
> | activitylogalerts | No | No | No |
-> | alertrules | Yes | Yes | No |
-> | autoscalesettings | Yes | Yes | No |
+> | alertrules | **Yes** | **Yes** | No |
+> | autoscalesettings | **Yes** | **Yes** | No |
> | baseline | No | No | No |
-> | components | Yes | Yes | No |
+> | components | **Yes** | **Yes** | No |
> | datacollectionrules | No | No | No | > | diagnosticsettings | No | No | No | > | diagnosticsettingscategories | No | No | No |
Jump to a resource provider namespace:
> | notificationgroups | No | No | No | > | privatelinkscopes | No | No | No | > | rollbacktolegacypricingmodel | No | No | No |
-> | scheduledqueryrules | Yes | Yes | No |
+> | scheduledqueryrules | **Yes** | **Yes** | No |
> | topology | No | No | No | > | transactions | No | No | No | > | vminsightsonboardingstatuses | No | No | No |
-> | webtests | Yes | Yes | No |
+> | webtests | **Yes** | **Yes** | No |
> | webtests / gettestresultfile | No | No | No |
-> | workbooks | Yes | Yes | No |
-> | workbooktemplates | Yes | Yes | No |
+> | workbooks | **Yes** | **Yes** | No |
+> | workbooktemplates | **Yes** | **Yes** | No |
## Microsoft.IoTCentral
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | apptemplates | No | No | No |
-> | iotapps | Yes | Yes | No |
+> | iotapps | **Yes** | **Yes** | No |
## Microsoft.IoTHub > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | iothub | Yes | Yes | Yes (clone hub) <br/><br/> [Clone an IoT hub to another region](../../iot-hub/iot-hub-how-to-clone.md) |
+> | iothub | **Yes** | **Yes** | **Yes** (clone hub) <br/><br/> [Clone an IoT hub to another region](../../iot-hub/iot-hub-how-to-clone.md) |
## Microsoft.IoTSpaces > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | graph | Yes | Yes | No |
+> | graph | **Yes** | **Yes** | No |
## Microsoft.KeyVault
Jump to a resource provider namespace:
> | deletedvaults | No | No | No | > | hsmpools | No | No | No | > | managedhsms | No | No | No |
-> | vaults | Yes | Yes | No |
+> | vaults | **Yes** | **Yes** | No |
## Microsoft.Kubernetes
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
## Microsoft.LabServices
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | hostingenvironments | No | No | No |
-> | integrationaccounts | Yes | Yes | No |
-> | integrationserviceenvironments | Yes | No | No |
-> | integrationserviceenvironments / managedapis | Yes | No | No |
+> | integrationaccounts | **Yes** | **Yes** | No |
+> | integrationserviceenvironments | **Yes** | No | No |
+> | integrationserviceenvironments / managedapis | **Yes** | No | No |
> | isolatedenvironments | No | No | No |
-> | workflows | Yes | Yes | No |
+> | workflows | **Yes** | **Yes** | No |
## Microsoft.MachineLearning
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- | > | commitmentplans | No | No | No |
-> | webservices | Yes | No | No |
-> | workspaces | Yes | Yes | No |
+> | webservices | **Yes** | No | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.MachineLearningCompute
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | configurationassignments | No | No | Yes. [Learn more](../../virtual-machines/move-region-maintenance-configuration.md) |
-> | maintenanceconfigurations | Yes | Yes | Yes. [Learn more](../../virtual-machines/move-region-maintenance-configuration-resources.md) |
+> | configurationassignments | No | No | **Yes**. [Learn more](../../virtual-machines/move-region-maintenance-configuration.md) |
+> | maintenanceconfigurations | **Yes** | **Yes** | **Yes**. [Learn more](../../virtual-machines/move-region-maintenance-configuration-resources.md) |
> | updates | No | No | No |
## Microsoft.ManagedIdentity
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | accounts | Yes | Yes | No, Azure Maps is a geospatial service. |
-> | accounts / privateatlases | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No, Azure Maps is a geospatial service. |
+> | accounts / privateatlases | **Yes** | **Yes** | No |
## Microsoft.Marketplace
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | mediaservices | Yes | Yes | No |
-> | mediaservices / liveevents | Yes | Yes | No |
-> | mediaservices / streamingendpoints | Yes | Yes | No |
+> | mediaservices | **Yes** | **Yes** | No |
+> | mediaservices / liveevents | **Yes** | **Yes** | No |
+> | mediaservices / streamingendpoints | **Yes** | **Yes** | No |
## Microsoft.Microservices4Spring
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | - |
> | objectunderstandingaccounts | No | No | No |
-> | remoterenderingaccounts | Yes | Yes | No |
-> | spatialanchorsaccounts | Yes | Yes | No |
+> | remoterenderingaccounts | **Yes** | **Yes** | No |
+> | spatialanchorsaccounts | **Yes** | **Yes** | No |
## Microsoft.NetApp
Jump to a resource provider namespace:
> | - | -- | - | -- |
> | applicationgateways | No | No | No |
> | applicationgatewaywebapplicationfirewallpolicies | No | No | No |
-> | applicationsecuritygroups | Yes | Yes | No |
+> | applicationsecuritygroups | **Yes** | **Yes** | No |
> | azurefirewalls | No | No | No |
> | bastionhosts | No | No | No |
> | bgpservicecommunities | No | No | No |
-> | connections | Yes | Yes | No |
-> | ddoscustompolicies | Yes | Yes | No |
+> | connections | **Yes** | **Yes** | No |
+> | ddoscustompolicies | **Yes** | **Yes** | No |
> | ddosprotectionplans | No | No | No |
-> | dnszones | Yes | Yes | No |
+> | dnszones | **Yes** | **Yes** | No |
> | expressroutecircuits | No | No | No |
> | expressroutegateways | No | No | No |
> | expressrouteserviceproviders | No | No | No |
> | firewallpolicies | No | No | No |
> | frontdoors | No | No | No |
-> | ipallocations | Yes | Yes | No |
+> | ipallocations | **Yes** | **Yes** | No |
> | ipgroups | No | No | No |
-> | loadbalancers | Yes - Basic SKU<br> Yes - Standard SKU | Yes - Basic SKU<br>No - Standard SKU | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move internal and external load balancers. |
-> | localnetworkgateways | Yes | Yes | No |
+> | loadbalancers | **Yes** - Basic SKU<br> **Yes** - Standard SKU | **Yes** - Basic SKU<br>No - Standard SKU | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move internal and external load balancers. |
+> | localnetworkgateways | **Yes** | **Yes** | No |
> | natgateways | No | No | No |
> | networkexperimentprofiles | No | No | No |
-> | networkintentpolicies | Yes | Yes | No |
-> | networkinterfaces | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. |
+> | networkintentpolicies | **Yes** | **Yes** | No |
+> | networkinterfaces | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move NICs. |
> | networkprofiles | No | No | No |
-> | networksecuritygroups | Yes | Yes | Yes <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
+> | networksecuritygroups | **Yes** | **Yes** | **Yes** <br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move network security groups (NSGs). |
> | networkwatchers | No | No | No |
-> | networkwatchers / connectionmonitors | Yes | No | No |
-> | networkwatchers / flowlogs | Yes | No | No |
-> | networkwatchers / pingmeshes | Yes | No | No |
+> | networkwatchers / connectionmonitors | **Yes** | No | No |
+> | networkwatchers / flowlogs | **Yes** | No | No |
+> | networkwatchers / pingmeshes | **Yes** | No | No |
> | p2svpngateways | No | No | No |
-> | privatednszones | Yes | Yes | No |
-> | privatednszones / virtualnetworklinks | Yes | Yes | No |
+> | privatednszones | **Yes** | **Yes** | No |
+> | privatednszones / virtualnetworklinks | **Yes** | **Yes** | No |
> | privatednszonesinternal | No | No | No |
> | privateendpointredirectmaps | No | No | No |
-> | privateendpoints | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | Yes - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No |
+> | privateendpoints | **Yes** - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | **Yes** - for [supported private-link resources](./move-limitations/networking-move-limitations.md#private-endpoints)<br>No - for all other private-link resources | No |
> | privatelinkservices | No | No | No |
-> | publicipaddresses | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | Yes<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
-> | publicipprefixes | Yes | Yes | No |
+> | publicipaddresses | **Yes** | **Yes** - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | **Yes**<br/><br/> Use [Azure Resource Mover](../../resource-mover/tutorial-move-region-virtual-machines.md) to move public IP address configurations (IP addresses are not retained). |
+> | publicipprefixes | **Yes** | **Yes** | No |
> | routefilters | No | No | No |
-> | routetables | Yes | Yes | No |
-> | securitypartnerproviders | Yes | Yes | No |
-> | serviceendpointpolicies | Yes | Yes | No |
+> | routetables | **Yes** | **Yes** | No |
+> | securitypartnerproviders | **Yes** | **Yes** | No |
+> | serviceendpointpolicies | **Yes** | **Yes** | No |
> | trafficmanagergeographichierarchies | No | No | No |
-> | trafficmanagerprofiles | Yes | Yes | No |
+> | trafficmanagerprofiles | **Yes** | **Yes** | No |
> | trafficmanagerprofiles / heatmaps | No | No | No |
> | trafficmanagerusermetricskeys | No | No | No |
> | virtualhubs | No | No | No |
-> | virtualnetworkgateways | Yes | Yes - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No |
-> | virtualnetworks | Yes | Yes | No |
+> | virtualnetworkgateways | **Yes** | **Yes** - see [Networking move guidance](./move-limitations/networking-move-limitations.md) | No |
+> | virtualnetworks | **Yes** | **Yes** | No |
> | virtualnetworktaps | No | No | No |
-> | virtualrouters | Yes | Yes | No |
+> | virtualrouters | **Yes** | **Yes** | No |
> | virtualwans | No | No | No |
> | vpngateways (Virtual WAN) | No | No | No |
> | vpnserverconfigurations | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
-> | namespaces / notificationhubs | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
+> | namespaces / notificationhubs | **Yes** | **Yes** | No |
## Microsoft.ObjectStore
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | osnamespaces | Yes | Yes | No |
+> | osnamespaces | **Yes** | **Yes** | No |
## Microsoft.OffAzure
Jump to a resource provider namespace:
> | deletedworkspaces | No | No | No |
> | linktargets | No | No | No |
> | storageinsightconfigs | No | No | No |
-> | workspaces | Yes | Yes | No |
+> | workspaces | **Yes** | **Yes** | No |
## Microsoft.OperationsManagement
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | managementassociations | No | No | No |
-> | managementconfigurations | Yes | Yes | No |
-> | solutions | Yes | Yes | No |
-> | views | Yes | Yes | No |
+> | managementconfigurations | **Yes** | **Yes** | No |
+> | solutions | **Yes** | **Yes** | No |
+> | views | **Yes** | **Yes** | No |
## Microsoft.Peering
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | consoles | No | No | No |
-> | dashboards | Yes | Yes | No |
+> | dashboards | **Yes** | **Yes** | No |
> | usersettings | No | No | No |
## Microsoft.PowerBI
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | workspacecollections | Yes | Yes | No |
+> | workspacecollections | **Yes** | **Yes** | No |
## Microsoft.PowerBIDedicated
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | capacities | Yes | Yes | No |
+> | capacities | **Yes** | **Yes** | No |
## Microsoft.ProjectBabylon
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | - |
-> | accounts | Yes | Yes | No |
+> | accounts | **Yes** | **Yes** | No |
## Microsoft.ProviderHub
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | replicationeligibilityresults | No | No | No |
-> | vaults | Yes | Yes | No.<br/><br/> Moving Recovery Services vaults for Azure Backup across Azure regions isn't supported.<br/><br/> In Recovery Services vaults for Azure Site Recovery, you can [disable and recreate the vault](../../site-recovery/move-vaults-across-regions.md) in the target region. |
+> | vaults | **Yes** | **Yes** | No.<br/><br/> Moving Recovery Services vaults for Azure Backup across Azure regions isn't supported.<br/><br/> In Recovery Services vaults for Azure Site Recovery, you can [disable and recreate the vault](../../site-recovery/move-vaults-across-regions.md) in the target region. |
## Microsoft.RedHatOpenShift
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
## Microsoft.ResourceGraph
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | queries | Yes | Yes | No |
+> | queries | **Yes** | **Yes** | No |
> | resourcechangedetails | No | No | No |
> | resourcechanges | No | No | No |
> | resources | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | deployments | No | No | No |
-> | deploymentscripts | No | No | Yes<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | deploymentscripts | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | deploymentscripts / logs | No | No | No |
> | links | No | No | No |
> | providers | No | No | No |
Jump to a resource provider namespace:
> | resources | No | No | No |
> | subscriptions | No | No | No |
> | tags | No | No | No |
-> | templatespecs | No | No | Yes<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
+> | templatespecs | No | No | **Yes**<br/><br/>[Move Microsoft.Resources resources to new region](microsoft-resources-move-regions.md) |
> | templatespecs / versions | No | No | No |
> | tenants | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | applications | Yes | No | No |
-> | resources | Yes | Yes | No |
+> | applications | **Yes** | No | No |
+> | resources | **Yes** | **Yes** | No |
> | saasresources | No | No | No |
## Microsoft.Search
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | resourcehealthmetadata | No | No | No |
-> | searchservices | Yes | Yes | No |
+> | searchservices | **Yes** | **Yes** | No |
## Microsoft.Security
Jump to a resource provider namespace:
> | assessmentmetadata | No | No | No |
> | assessments | No | No | No |
> | autodismissalertsrules | No | No | No |
-> | automations | Yes | Yes | No |
+> | automations | **Yes** | **Yes** | No |
> | autoprovisioningsettings | No | No | No |
> | complianceresults | No | No | No |
> | compliances | No | No | No |
Jump to a resource provider namespace:
> | discoveredsecuritysolutions | No | No | No |
> | externalsecuritysolutions | No | No | No |
> | informationprotectionpolicies | No | No | No |
-> | iotsecuritysolutions | Yes | Yes | No |
+> | iotsecuritysolutions | **Yes** | **Yes** | No |
> | iotsecuritysolutions / analyticsmodels | No | No | No |
> | iotsecuritysolutions / analyticsmodels / aggregatedalerts | No | No | No |
> | iotsecuritysolutions / analyticsmodels / aggregatedrecommendations | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | namespaces | Yes | Yes | No |
+> | namespaces | **Yes** | **Yes** | No |
> | premiummessagingregions | No | No | No |
> | sku | No | No | No |
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | applications | No | No | No |
-> | clusters | Yes | Yes | No |
+> | clusters | **Yes** | **Yes** | No |
> | containergroups | No | No | No |
> | containergroupsets | No | No | No |
> | edgeclusters | No | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | applications | Yes | Yes | No |
+> | applications | **Yes** | **Yes** | No |
> | containergroups | No | No | No |
-> | gateways | Yes | Yes | No |
-> | networks | Yes | Yes | No |
-> | secrets | Yes | Yes | No |
-> | volumes | Yes | Yes | No |
+> | gateways | **Yes** | **Yes** | No |
+> | networks | **Yes** | **Yes** | No |
+> | secrets | **Yes** | **Yes** | No |
+> | volumes | **Yes** | **Yes** | No |
## Microsoft.Services
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | signalr | Yes | Yes | No |
+> | signalr | **Yes** | **Yes** | No |
## Microsoft.SoftwarePlan
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | instancepools | No | No | No |
-> | locations | Yes | Yes | No |
-> | managedinstances | No | No | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving managed instances across regions. |
-> | managedinstances / databases | No | No | Yes |
-> | servers | Yes | Yes |Yes |
-> | servers / databases | Yes | Yes | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving databases across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL databases. |
-> | servers / databases / backuplongtermretentionpolicies | Yes | Yes | No |
-> | servers / elasticpools | Yes | Yes | Yes <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving elastic pools across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL elastic pools. |
-> | servers / jobaccounts | Yes | Yes | No |
-> | servers / jobagents | Yes | Yes | No |
+> | locations | **Yes** | **Yes** | No |
+> | managedinstances | No | No | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving managed instances across regions. |
+> | managedinstances / databases | No | No | **Yes** |
+> | servers | **Yes** | **Yes** | **Yes** |
+> | servers / databases | **Yes** | **Yes** | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving databases across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL databases. |
+> | servers / databases / backuplongtermretentionpolicies | **Yes** | **Yes** | No |
+> | servers / elasticpools | **Yes** | **Yes** | **Yes** <br/><br/> [Learn more](/azure/azure-sql/database/move-resources-across-regions) about moving elastic pools across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL elastic pools. |
+> | servers / jobaccounts | **Yes** | **Yes** | No |
+> | servers / jobagents | **Yes** | **Yes** | No |
> | virtualclusters | No | No | No |
## Microsoft.SqlVirtualMachine
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | sqlvirtualmachinegroups | Yes | Yes | No |
-> | sqlvirtualmachines | Yes | Yes | No |
+> | sqlvirtualmachinegroups | **Yes** | **Yes** | No |
+> | sqlvirtualmachines | **Yes** | **Yes** | No |
## Microsoft.Storage
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | storageaccounts | Yes | Yes | Yes<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
+> | storageaccounts | **Yes** | **Yes** | **Yes**<br/><br/> [Move an Azure Storage account to another region](../../storage/common/storage-account-move.md) |
## Microsoft.StorageCache
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | storagesyncservices | Yes | Yes | No |
+> | storagesyncservices | **Yes** | **Yes** | No |
## Microsoft.StorageSyncDev
Jump to a resource provider namespace:
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
> | clusters | No | No | No |
-> | streamingjobs | Yes | Yes | No |
+> | streamingjobs | **Yes** | **Yes** | No |
## Microsoft.StreamAnalyticsExplorer
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | environments | Yes | Yes | No |
-> | environments / eventsources | Yes | Yes | No |
-> | environments / referencedatasets | Yes | Yes | No |
+> | environments | **Yes** | **Yes** | No |
+> | environments / eventsources | **Yes** | **Yes** | No |
+> | environments / referencedatasets | **Yes** | **Yes** | No |
## Microsoft.Token
> [!div class="mx-tableFixed"]
> | Resource type | Resource group | Subscription | Region move |
> | - | -- | - | -- |
-> | stores | Yes | Yes | No |
+> | stores | **Yes** | **Yes** | No |
## Microsoft.VirtualMachineImages
Jump to a resource provider namespace:
> | - | -- | - | -- |
> | availablestacks | No | No | No |
> | billingmeters | No | No | No |
-> | certificates | No | Yes | No |
+> | certificates | No | **Yes** | No |
> | certificates (managed) | No | No | No |
-> | connectiongateways | Yes | Yes | No |
-> | connections | Yes | Yes | No |
-> | customapis | Yes | Yes | No |
+> | connectiongateways | **Yes** | **Yes** | No |
+> | connections | **Yes** | **Yes** | No |
+> | customapis | **Yes** | **Yes** | No |
> | deletedsites | No | No | No |
> | deploymentlocations | No | No | No |
> | georegions | No | No | No |
> | hostingenvironments | No | No | No |
-> | kubeenvironments | Yes | Yes | No |
+> | kubeenvironments | **Yes** | **Yes** | No |
> | publishingusers | No | No | No |
> | recommendations | No | No | No |
> | resourcehealthmetadata | No | No | No |
> | runtimes | No | No | No |
-> | serverfarms | Yes | Yes | No |
+> | serverfarms | **Yes** | **Yes** | No |
> | serverfarms / eventgridfilters | No | No | No |
-> | sites | Yes | Yes | No |
-> | sites / premieraddons | Yes | Yes | No |
-> | sites / slots | Yes | Yes | No |
+> | sites | **Yes** | **Yes** | No |
+> | sites / premieraddons | **Yes** | **Yes** | No |
+> | sites / slots | **Yes** | **Yes** | No |
> | sourcecontrols | No | No | No |
> | staticsites | No | No | No |
azure-resource-manager Deployment Script Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-script-template.md
Previously updated : 10/26/2022 Last updated : 11/01/2022

# Use deployment scripts in ARM templates

Learn how to use deployment scripts in Azure Resource Manager templates (ARM templates). With a new resource type called `Microsoft.Resources/deploymentScripts`, users can execute scripts in template deployments and review execution results. These scripts can be used for performing custom steps such as:
The following JSON is an example. For more information, see the latest [template
    "storageAccountName": "myStorageAccount",
    "storageAccountKey": "myKey"
  },
- "azPowerShellVersion": "6.4", // or "azCliVersion": "2.28.0",
+ "azPowerShellVersion": "8.3", // or "azCliVersion": "2.40.0",
  "arguments": "-name \\\"John Dole\\\"",
  "environmentVariables": [
    {
SubscriptionId : 01234567-89AB-CDEF-0123-456789ABCDEF
ProvisioningState   : Succeeded
Identity            : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/mydentity1008rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myuami
ScriptKind          : AzurePowerShell
-AzPowerShellVersion : 3.0
-StartTime : 6/18/2020 7:46:45 PM
-EndTime : 6/18/2020 7:49:45 PM
-ExpirationDate : 6/19/2020 7:49:45 PM
+AzPowerShellVersion : 8.3
+StartTime : 6/18/2022 7:46:45 PM
+EndTime : 6/18/2022 7:49:45 PM
+ExpirationDate : 6/19/2022 7:49:45 PM
CleanupPreference   : OnSuccess
StorageAccountId    : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.Storage/storageAccounts/ftnlvo6rlrvo2azscripts
ContainerInstanceId : /subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0618rg/providers/Microsoft.ContainerInstance/containerGroups/ftnlvo6rlrvo2azscripts
The list command output is similar to:
[
  {
    "arguments": "-name \\\"John Dole\\\"",
- "azPowerShellVersion": "3.0",
+ "azPowerShellVersion": "8.3",
    "cleanupPreference": "OnSuccess",
    "containerSettings": {
      "containerGroupName": null
    },
    "environmentVariables": null,
- "forceUpdateTag": "20200625T025902Z",
+ "forceUpdateTag": "20220625T025902Z",
    "id": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Resources/deploymentScripts/runPowerShellInlineWithOutput",
    "identity": {
      "tenantId": "01234567-89AB-CDEF-0123-456789ABCDEF",
The list command output is similar to:
    "scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ",
    "status": {
      "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
- "endTime": "2020-06-25T03:00:16.796923+00:00",
+ "endTime": "2022-06-25T03:00:16.796923+00:00",
"error": null,
- "expirationTime": "2020-06-26T03:00:16.796923+00:00",
- "startTime": "2020-06-25T02:59:07.595140+00:00",
+ "expirationTime": "2022-06-26T03:00:16.796923+00:00",
+ "startTime": "2022-06-25T02:59:07.595140+00:00",
      "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts"
    },
    "storageAccountSettings": null,
    "supportingScriptUris": null,
    "systemData": {
- "createdAt": "2020-06-25T02:59:04.750195+00:00",
+ "createdAt": "2022-06-25T02:59:04.750195+00:00",
      "createdBy": "someone@contoso.com",
      "createdByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.750195+00:00",
+ "lastModifiedAt": "2022-06-25T02:59:04.750195+00:00",
      "lastModifiedBy": "someone@contoso.com",
      "lastModifiedByType": "User"
    },
The output is similar to:
  "systemData": {
    "createdBy": "someone@contoso.com",
    "createdByType": "User",
- "createdAt": "2020-06-25T02:59:04.7501955Z",
+ "createdAt": "2022-06-25T02:59:04.7501955Z",
"lastModifiedBy": "someone@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2020-06-25T02:59:04.7501955Z"
+ "lastModifiedAt": "2022-06-25T02:59:04.7501955Z"
  },
  "properties": {
    "provisioningState": "Succeeded",
- "forceUpdateTag": "20200625T025902Z",
- "azPowerShellVersion": "3.0",
+ "forceUpdateTag": "20220625T025902Z",
+ "azPowerShellVersion": "8.3",
    "scriptContent": "\r\n param([string] $name)\r\n $output = \"Hello {0}\" -f $name\r\n Write-Output $output\r\n $DeploymentScriptOutputs = @{}\r\n $DeploymentScriptOutputs['text'] = $output\r\n ",
    "arguments": "-name \\\"John Dole\\\"",
    "retentionInterval": "P1D",
The output is similar to:
    "status": {
      "containerInstanceId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.ContainerInstance/containerGroups/64lxews2qfa5uazscripts",
      "storageAccountId": "/subscriptions/01234567-89AB-CDEF-0123-456789ABCDEF/resourceGroups/myds0624rg/providers/Microsoft.Storage/storageAccounts/64lxews2qfa5uazscripts",
- "startTime": "2020-06-25T02:59:07.5951401Z",
- "endTime": "2020-06-25T03:00:16.7969234Z",
- "expirationTime": "2020-06-26T03:00:16.7969234Z"
+ "startTime": "2022-06-25T02:59:07.5951401Z",
+ "endTime": "2022-06-25T03:00:16.7969234Z",
+ "expirationTime": "2022-06-26T03:00:16.7969234Z"
    },
    "outputs": {
      "text": "Hello John Dole"
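Once provisioning succeeds, the `outputs` object in this response carries whatever the script placed in `$DeploymentScriptOutputs`. A minimal sketch of reading it (Python, with the response above abbreviated to just the relevant fields):

```python
import json

# Abbreviated shape of the deployment script "show" output above
# (subscription IDs and most fields omitted for brevity).
show_output = json.loads("""
{
  "properties": {
    "provisioningState": "Succeeded",
    "outputs": { "text": "Hello John Dole" }
  }
}
""")

props = show_output["properties"]
if props["provisioningState"] == "Succeeded":
    # Values the script assigned to $DeploymentScriptOutputs surface here.
    print(props["outputs"]["text"])  # Hello John Dole
```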
azure-signalr Howto Shared Private Endpoints Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-shared-private-endpoints-key-vault.md
Title: Access Key Vault in private network through Shared Private Endpoints
+ Title: Access Key Vault in a private network through shared private endpoints
-description: How to access key vault in private network through Shared Private Endpoints
+description: Learn how Azure SignalR Service can use shared private endpoints to avoid exposing your key vault on a public network.
Last updated 09/23/2022
-# Access Key Vault in private network through Shared Private Endpoints
+# Access Key Vault in a private network through shared private endpoints
-Azure SignalR Service can access your Key Vault in private network through Shared Private Endpoints. In this way you don't have to expose your Key Vault on public network.
+Azure SignalR Service can access your Azure Key Vault instance in a private network through shared private endpoints. In this way, you don't have to expose your key vault on a public network.
- :::image type="content" alt-text="Diagram showing architecture of shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\shared-private-endpoint-overview.png" :::
-## Shared Private Link Resources Management
+## Management of shared private link resources
-Private endpoints of secured resources that are created through Azure SignalR Service APIs are referred to as *shared private link resources*. This is because you're "sharing" access to a resource, such as an Azure Key Vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside Azure SignalR Service execution environment and aren't directly visible to you.
+Private endpoints of secured resources that are created through Azure SignalR Service APIs are called *shared private link resources*. This is because you're "sharing" access to a resource, such as a key vault, that has been integrated with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/). These private endpoints are created inside an Azure SignalR Service execution environment and aren't directly visible to you.
> [!NOTE]
> The examples in this article are based on the following assumptions:
-> * The resource ID of this Azure SignalR Service is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
-> * The resource ID of Azure Key Vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
+> * The resource ID of the Azure SignalR Service instance is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr_.
+> * The resource ID of the key vault is _/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv_.
-The rest of the examples show how the *contoso-signalr* service can be configured so that its outbound calls to Key Vault go through a private endpoint rather than public network.
+The examples show how the *contoso-signalr* service can be configured so that its outbound calls to the key vault go through a private endpoint rather than a public network.
-### Step 1: Create a shared private link resource to the Key Vault
+## Create a shared private link resource to the key vault
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
1. In the Azure portal, go to your Azure SignalR Service resource.
-1. In the menu pane, select **Networking**. Switch to **Private access** tab.
-1. Click **Add shared private endpoint**.
+1. On the menu pane, select **Networking**. Switch to the **Private access** tab.
+1. Select **Add shared private endpoint**.
- :::image type="content" alt-text="Screenshot of shared private endpoints management." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
+ :::image type="content" alt-text="Screenshot of the button for adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-management.png" :::
1. Fill in a name for the shared private endpoint.
-1. Select the target linked resource either by selecting from your owned resources or by filling a resource ID.
-1. Click **Add**.
+1. Select the target linked resource either by selecting from your owned resources or by filling in a resource ID.
+1. Select **Add**.
:::image type="content" alt-text="Screenshot of adding a shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-add.png" :::
-1. The shared private endpoint resource will be in **Succeeded** provisioning state. The connection state is **Pending** approval at target resource side.
+1. Confirm that the shared private endpoint resource is now in a **Succeeded** provisioning state. The connection state is **Pending** at the target resource side.
:::image type="content" alt-text="Screenshot of an added shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-added.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
You can make the following API call with the [Azure CLI](/cli/azure/) to create a shared private link resource:
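Although the full call isn't shown here, the request is a `PUT` against the `sharedPrivateLinkResources` child resource of your SignalR resource. A minimal request body would look like the following sketch — the `groupId` value `vault` applies to Key Vault targets, and the subscription, resource group, and resource names are placeholder assumptions:

```json
{
  "properties": {
    "groupId": "vault",
    "privateLinkResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.KeyVault/vaults/contoso-kv",
    "requestMessage": "Please approve"
  }
}
```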
The process of creating an outbound private endpoint is a long-running (asynchronous) operation.
You can poll this URI periodically to obtain the status of the operation.
-If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value,
+If you're using the CLI, you can poll for the status by manually querying the `Azure-AsyncOperationHeader` value:
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/operationStatuses/c0786383-8d5f-4554-8d17-f16fcf482fb2?api-version=2021-06-01-preview
```
-Wait until the status changes to "Succeeded" before proceeding to the next steps.
+Wait until the status changes to **Succeeded** before you proceed to the next steps.
--
-### Step 2a: Approve the private endpoint connection for the Key Vault
+## Approve the private endpoint connection for the key vault
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
-1. In the Azure portal, select the **Networking** tab of your Key Vault and navigate to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
+1. In the Azure portal, select the **Networking** tab for your key vault and go to **Private endpoint connections**. After the asynchronous operation has succeeded, there should be a request for a private endpoint connection with the request message from the previous API call.
- :::image type="content" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
+1. Select the private endpoint that Azure SignalR Service created. Then select **Approve**.
-1. Select the private endpoint that Azure SignalR Service created. Click **Approve**.
+ :::image type="content" alt-text="Screenshot of the Azure portal that shows the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approve-private-endpoint.png" :::
+
+1. Make sure that the private endpoint connection appears, as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
- Make sure that the private endpoint connection appears as shown in the following screenshot. It could take one to two minutes for the status to be updated in the portal.
+ :::image type="content" alt-text="Screenshot of the Azure portal that shows an Approved status on the pane for private endpoint connections." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
- :::image type="content" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane." source="media\howto-shared-private-endpoints-key-vault\portal-key-vault-approved-private-endpoint.png" :::
+### [Azure CLI](#tab/azure-cli)
-#### [Azure CLI](#tab/azure-cli)
-
-1. List private endpoint connections.
+1. List private endpoint connections:
```dotnetcli az network private-endpoint-connection list -n <key-vault-resource-name> -g <key-vault-resource-group-name> --type 'Microsoft.KeyVault/vaults'
Wait until the status changes to "Succeeded" before proceeding to the next steps
] ```
-1. Approve the private endpoint connection.
+1. Approve the private endpoint connection:
```dotnetcli az network private-endpoint-connection approve --id <private-endpoint-connection-id>
Wait until the status changes to "Succeeded" before proceeding to the next steps
--
-### Step 2b: Query the status of the shared private link resource
+## Query the status of the shared private link resource
-It takes minutes for the approval to be propagated to Azure SignalR Service. You can check the state using either Azure portal or Azure CLI.
+It can take a few minutes for the approval to propagate to Azure SignalR Service. You can check the state by using either the Azure portal or the Azure CLI.
-#### [Azure portal](#tab/azure-portal)
+### [Azure portal](#tab/azure-portal)
:::image type="content" alt-text="Screenshot of an approved shared private endpoint." source="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" lightbox="media\howto-shared-private-endpoints-key-vault\portal-shared-private-endpoints-approved.png" :::
-#### [Azure CLI](#tab/azure-cli)
+### [Azure CLI](#tab/azure-cli)
```dotnetcli
az rest --method get --uri https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contoso/providers/Microsoft.SignalRService/signalr/contoso-signalr/sharedPrivateLinkResources/func-pe?api-version=2021-06-01-preview
```
-This would return a JSON, where the connection state would show up as "status" under the "properties" section.
+This command returns JSON that shows the connection state as the `status` value in the `properties` section.
```json {
This would return a JSON, where the connection state would show up as "status" under the "properties" section.
```
-If the "Provisioning State" (`properties.provisioningState`) of the resource is `Succeeded` and "Connection State" (`properties.status`) is `Approved`, it means that the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
+If the provisioning state (`properties.provisioningState`) of the resource is `Succeeded` and the connection state (`properties.status`) is `Approved`, the shared private link resource is functional and Azure SignalR Service can communicate over the private endpoint.
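If you're scripting this check, you can save the response and pull out just those two fields. The file name and values below are illustrative, not live output:

```shell
# Illustrative response body in the shape returned by the az rest call above.
cat > /tmp/shared-private-link.json <<'EOF'
{
  "properties": {
    "provisioningState": "Succeeded",
    "status": "Approved"
  }
}
EOF

# Extract the two fields that matter for this check:
grep -E '"(provisioningState|status)"' /tmp/shared-private-link.json
```

With the live call, you can skip the temporary file: every `az` command accepts the global `--query` argument, so appending `--query properties.status` to the `az rest` command returns just the connection state.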
--

At this point, the private endpoint between Azure SignalR Service and Azure Key Vault is established.
-Now you can configure features like custom domain as usual. **You don't have to use a special domain for Key Vault**. DNS resolution is automatically handled by Azure SignalR Service.
+Now you can configure features like custom domain as usual. *You don't have to use a special domain for Key Vault*. Azure SignalR Service automatically handles DNS resolution.
## Next steps

Learn more:

+ [What are private endpoints?](../private-link/private-endpoint-overview.md)
-+ [Configure custom domain](howto-custom-domain.md)
++ [Configure a custom domain](howto-custom-domain.md)
azure-video-indexer Indexing Configuration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/indexing-configuration-guide.md
Below are the indexing type options with details of their insights provided. To
### Audio only

-- **Basic**: Indexes and extract insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, named entities (brands, locations, people), and topics.
+- **Basic**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, and formatting of output captions and subtitles.
- **Standard**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and topics.
- **Advanced**: Indexes and extracts insights by using audio only (ignoring video) and provides the following insights: transcription, translation, formatting of output captions and subtitles, audio effects (preview), emotions, keywords, named entities (brands, locations, people), sentiments, speakers, and articles.
backup Backup Azure Sap Hana Database Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database-troubleshoot.md
Title: Troubleshoot SAP HANA databases backup errors description: Describes how to troubleshoot common errors that might occur when you use Azure Backup to back up SAP HANA databases. Previously updated : 05/16/2022 Last updated : 11/02/2022
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and [
**Possible causes** | This error appears when you've reached the maximum permissible limit for an operation in a span of 24 hours, usually during at-scale operations such as modify policy or auto-protection. Unlike the case of CloudDosAbsoluteLimitReached, there isn't much you can do to resolve this state; the Azure Backup service will retry the operations internally for all the items in question.<br><br> For example, if you have a large number of datasources protected with a policy and you try to modify that policy, it triggers configure-protection jobs for each protected item and can sometimes hit the maximum limit permissible for such operations per day. **Recommended action** | The Azure Backup service will automatically retry this operation after 24 hours.
+### UserErrorInvalidBackint
+
+**Error message** | Found invalid hdbbackint executable.
+ |
+**Possible causes** | 1. The operation to change the Backint path from `/opt/msawb/bin` to `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` failed due to insufficient storage space in the new location. <br><br> 2. The *hdbbackint* utility located at `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` doesn't have executable permissions or correct ownership.
+**Recommended action** | 1. Ensure that there's free space available at `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` or the path where you want to save backups. <br><br> 2. Ensure that the *sapsys* group has appropriate permissions on the `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` file by running the command `chmod 755`.
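The space and permission checks in the recommended actions can be sketched as shell commands. To keep the block safe to run anywhere, it uses a temporary stand-in file; on a real system, substitute `/usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint` and your `<sid>adm` user:

```shell
# Stand-in for /usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint (hypothetical path).
BACKINT="$(mktemp -d)/hdbbackint"
touch "$BACKINT"

# 1. Check free space at the Backint location:
df -h "$(dirname "$BACKINT")"

# 2. Grant rwxr-xr-x so the utility is executable:
chmod 755 "$BACKINT"
ls -l "$BACKINT"   # permissions should now read -rwxr-xr-x

# On a real system, also confirm ownership, for example:
#   chown <sid>adm:sapsys /usr/sap/<sid>/SYS/global/hdb/opt/hdbbackint
```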
## Restore checks

### Single Container Database (SDC) restore
backup Geo Code List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/scripts/geo-code-list.md
Title: Geo-code mapping description: Learn about geo-codes mapped with the respective regions. Previously updated : 11/01/2022 Last updated : 03/07/2022
This sample XML provides you an insight about the geo-codes mapped with the respective regions. Use these geo-codes to create and add custom DNS zones for private endpoint for Recovery Services vault.
-## Fetch mapping details
-
-To fetch the geo-code mapping list, run the following command:
-
-```azurecli-interactive
- az cli list-locations
-```
- ## Mapping details ```xml
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Anomaly-Detector/whats-new.md
We've also added links to some user-generated content. Those items will be marke
### November 2022
-* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to use latest Multivariate Anomaly Detection](quickstarts/client-libraries-multivariate.md).
+* Multivariate Anomaly Detection is now a generally available feature in Anomaly Detector service, with a better user experience and better model performance. Learn more about [how to get started using the latest release of Multivariate Anomaly Detection](how-to/create-resource.md).
### June 2022
cognitive-services Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/autoscale.md
Yes, you can disable the autoscale feature through Azure portal or CLI and retur
Autoscale feature is available for the following
+* [Cognitive Services multi-key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Canomaly-detector%2Clanguage-service%2Ccomputer-vision%2Cwindows)
* [Computer Vision](computer-vision/index.yml) * [Language](language-service/overview.md) (only available for sentiment analysis, key phrase extraction, named entity recognition, and text analytics for health)
+* [Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview)
+* [Content Moderator](/azure/cognitive-services/content-moderator/overview)
+* [Custom Vision (Prediction)](/azure/cognitive-services/custom-vision-service/overview)
+* [Immersive Reader](/azure/applied-ai-services/immersive-reader/overview)
+* [LUIS](/azure/cognitive-services/luis/what-is-luis)
+* [Metrics Advisor](/azure/applied-ai-services/metrics-advisor/overview)
+* [Personalizer](/azure/cognitive-services/personalizer/what-is-personalizer)
+* [QnAMaker](/azure/cognitive-services/qnamaker/overview/overview)
* [Form Recognizer](../applied-ai-services/form-recognizer/overview.md?tabs=v3-0) ### Can I test this feature using a free subscription?
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the quotas and limits t
| Limit Name | Limit Value |
|--|--|
| OpenAI resources per region | 2 |
-| Requests per second per deployment | 5 |
+| Requests per second per deployment | 10 |
| Max fine-tuned model deployments | 2 |
| Ability to deploy same model to multiple deployments | Not allowed |
| Total number of training jobs per resource | 100 |
cognitive-services Concept Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/concept-feature-evaluation.md
- Title: Feature evaluation - Personalizer-
-description: When you run an Evaluation in your Personalizer resource from the Azure portal, Personalizer provides information about what features of context and actions are influencing the model.
--
-ms.
--- Previously updated : 07/29/2019--
-# Feature evaluation
-
-When you run an Evaluation in your Personalizer resource from the [Azure portal](https://portal.azure.com), Personalizer provides information about what features of context and actions are influencing the model.
-
-This is useful in order to:
-
-* Imagine additional features you could use, getting inspiration from what features are more important in the model.
-* See what features aren't important, and potentially remove them or further analyze what may be affecting usage.
-* Provide guidance to editorial or curation teams about new content or products worth bringing into the catalog.
-* Troubleshoot common problems and mistakes that happen when sending features to Personalizer.
-
-The more important features have stronger weights in the model. Because these features have stronger weight, they tend to be present when Personalizer obtains higher rewards.
-
-## Getting feature importance evaluation
-
-To see feature importance results, you must run an evaluation. The evaluation creates human-readable feature labels based on the feature names observed during the evaluation period.
-
-The resulting information about feature importance represents the current Personalizer online model. The evaluation analyzes feature importance of the model saved at the end date of the evaluation period, after undergoing all the training done during the evaluation, with the current online learning policy.
-
-The feature importance results don't represent other policies and models tested or created during the evaluation. The evaluation won't include features sent to Personalizer after the end of the evaluation period.
-
-## How to interpret the feature importance evaluation
-
-Personalizer evaluates features by creating "groups" of features that have similar importance. One group can be said to have overall stronger importance than others, but within the group, ordering of features is alphabetically.
-
-Information about each Feature includes:
-
-* Whether the feature comes from Context or Actions
-* Feature Key and Value
-
-For example, an ice cream shop ordering app may see `Context.Weather:Hot` as a very important feature.
-
-Personalizer displays correlations of features that, when taken into account together, produce higher rewards.
-
-For example, you may see `Context.Weather:Hot` *with* `Action.MenuItem:IceCream` as well as `Context.Weather:Cold` *with* `Action.MenuItem:WarmTea:`.
-
-## Actions you can take based on feature evaluation
-
-### Imagine additional features you could use
-
-Get inspiration from the more important features in the model. For example, if you see "Context.MobileBattery:Low" in a video mobile app, you may think that connection type may also make customers choose to see one video clip over another, then add features about connectivity type and bandwidth into your app.
-
-### See what features aren't important
-
-Potentially remove unimportant features or further analyze what may affect usage. Features may rank low for many reasons. One could be that genuinely the feature doesn't affect user behavior. But it could also mean that the feature isn't apparent to the user.
-
-For example, a video site could see that "Action.VideoResolution=4k" is a low-importance feature, contradicting user research. The cause could be that the application doesn't even mention or show the video resolution, so users wouldn't change their behavior based on it.
-
-### Provide guidance to editorial or curation teams
-
-Provide guidance about new content or products worth bringing into the catalog. Personalizer is designed to be a tool that augments human insight and teams. One way it does this is by providing information to editorial groups on what is it about products, articles or content that drives behavior. For example, the video application scenario may show that there's an important feature called "Action.VideoEntities.Cat:true", prompting the editorial team to bring in more cat videos.
-
-### Troubleshoot common problems and mistakes
-
-Common problems and mistakes can be fixed by changing your application code so it won't send inappropriate or incorrectly formatted features to Personalizer.
-
-Common mistakes when sending features include the following:
-
-* Sending personally identifiable information (PII). PII specific to one individual (such as name, phone number, credit card numbers, IP Addresses) shouldn't be used with Personalizer. If your application needs to track users, use a non-identifying UUID or some other UserID number. In most scenarios this is also problematic.
-* With large numbers of users, it's unlikely that each user's interaction will weigh more than all the population's interaction, so sending user IDs (even if non-PII) will probably add more noise than value to the model.
-* Sending date-time fields as precise timestamps instead of featurized time values. Having features such as Context.TimeStamp.Day=Monday or "Context.TimeStamp.Hour"="13" is more useful. There will be at most 7 or 24 feature values for each. But `"Context.TimeStamp":"1985-04-12T23:20:50.52Z"` is so precise that there will be no way to learn from it because it will never happen again.
-
-## Next steps
-
-Understand [scalability and performance](concepts-scalability-performance.md) with Personalizer.
-
cognitive-services How To Feature Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-feature-evaluation.md
+
+ Title: Personalizer feature evaluations
+
+description: When you run a Feature Evaluation in your Personalizer resource from the Azure portal, Personalizer creates a report containing Feature Scores, a measure of how influential each feature was to the model during the evaluation period.
++
+ms.
+++ Last updated : 09/22/2022++
+# Evaluate feature importances
+
+You can assess how important each feature was to Personalizer's machine learning model by conducting a _feature evaluation_ on your historical log data. Feature evaluations are useful to:
+
+* Understand which features are most or least important to the model.
+* Brainstorm extra features that may be beneficial to learning, by deriving inspiration from what features are currently important in the model.
+* Identify potentially unimportant or non-useful features that should be considered for further analysis or removal.
+* Troubleshoot common problems and errors that may occur when designing features and sending them to Personalizer. For example, using GUIDs, timestamps, or other features that are generally _sparse_ may be problematic. Learn more about [improving features](concepts-features.md).
+
+## What is a feature evaluation?
+
+Feature evaluations are conducted by training and running a copy of your current model configuration on historically collected log data in a specified time period. Features are ignored one at a time to measure the difference in model performance with and without each feature. Because the feature evaluations are performed on historical data, there's no guarantee that these patterns will be observed in future data. However, these insights may still be relevant to future data if your logged data has captured sufficient variability or non-stationary properties of your data. Your current model's performance isn't affected by running a feature evaluation.
+
+A _feature importance_ score is a measure of the relative impact of the feature on the reward over the evaluation period. Feature importance scores are a number between 0 (least important) and 100 (most important) and are shown in the feature evaluation. Since the evaluation is run over a specific time period, the feature importances can change as additional data is sent to Personalizer and as your users, scenarios, and data change over time.
+
+## Creating a feature evaluation
+
+To obtain feature importance scores, you must create a feature evaluation over a period of logged data to generate a report containing the feature importance scores. This report is viewable in the Azure portal. To create a feature evaluation:
+
+1. Go to the [Azure portal](https://portal.azure.com) website
+1. Select your Personalizer resource
+1. Select the _Monitor_ section from the side navigation pane
+1. Select the _Features_ tab
+1. Select "Create report" and a new screen should appear
+1. Choose a name for your report
+1. Choose _start_ and _end_ times for your evaluation period
+1. Select "Create report"
+
+![Screenshot that shows how to create a Feature Evaluation in your Personalizer resource by clicking on "Monitor" blade, the "Feature" tab, then "Create a report".](media/feature-evaluation/create-report.png)
++
+![Screenshot that shows in the creation window and how to fill in the fields for your report including the name, start date, and end date.](media/feature-evaluation/create-report-window.png)
+
+Next, your report name should appear in the reports table below. Creating a feature evaluation is a long running process, where the time to completion depends on the volume of data sent to Personalizer during the evaluation period. While the report is being generated, the _Status_ column will indicate "Running" for your evaluation, and will update to "Succeeded" once completed. Check back periodically to see if your evaluation has finished.
+
+You can run multiple feature evaluations over various periods of time that your Personalizer resource has log data. Make sure that your [data retention period](how-to-settings.md#data-retention) is set sufficiently long to enable you to perform evaluations over older data.
+
+## Interpreting feature importance scores
+
+### Features with a high importance score
+
+Features with higher importance scores were more influential to the model during the evaluation period as compared to the other features. Important features can provide inspiration for designing additional features to be included in the model. For example, if you see the context features "IsWeekend" or "IsWeekday" have high importance for grocery shopping, it may be the case that holidays or long-weekends may also be important factors, so you may want to consider adding features that capture this information.
+
+### Features with a low importance score
+
+Features with low importance scores are good candidates for further analysis. Not all low-scoring features are necessarily _bad_ or not useful, as low scores can occur for one of several reasons. The list below can help you get started with analyzing why your features may have low scores:
+
+* The feature was rarely observed in the data during the evaluation period.
+ <!-- * Check The _Feature occurrences_ in your feature evaluation. If it's low in comparison to other features, this may indicate that feature was not present often enough for the model to determine if it's valuable or not. -->
+ * If the number of occurrences of this feature is low in comparison to other features, this may indicate that the feature wasn't present often enough for the model to determine if it's valuable or not.
+* The feature values didn't have a lot of diversity or variation.
+ <!-- * Check The _Number of unique values_ in your feature evaluation. If it's lower than you would expect, this may indicate that the feature did not vary much during the evaluation period and won't provide significant insight. -->
+ * If the number of unique values for this feature is lower than you would expect, this may indicate that the feature didn't vary much during the evaluation period and won't provide significant insight.
+
+* The feature values were too noisy (random), or too distinct, and provided little value.
+ <!-- * Check the _Number of unique values_ in your feature evaluation. If it's higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period. -->
+ * Check the _Number of unique values_ in your feature evaluation. If the number of unique values for this feature is higher than you expected, or high in comparison to other features, this may indicate that the feature was too noisy during the evaluation period.
+* There's a data or formatting issue.
+ * Check to make sure the features are formatted and sent to Personalizer in the way you expect.
+* The feature may not be valuable to model learning and performance if the feature score is low and the reasons above do not apply.
+ * Consider removing the feature as it's not helping your model maximize the average reward.
+
+Removing features with low importance scores can help speed up model training by reducing the amount of data needed to learn. It can also potentially improve the performance of the model. However, this isn't guaranteed and further analysis may be needed. [Learn more about designing context and action features.](concepts-features.md)
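One quick way to investigate the low-score reasons above is to count occurrences and unique values per feature in your own event data. The CSV below is a hypothetical export format, only meant to illustrate the check — Personalizer's actual log format differs:

```shell
# Hypothetical event export with one feature per column.
cat > /tmp/events.csv <<'EOF'
weather,daypart
Hot,morning
Hot,midday
Cold,midday
Hot,morning
EOF

# Occurrences (rows) and unique values of the first feature column:
tail -n +2 /tmp/events.csv | cut -d, -f1 | wc -l        # occurrences
tail -n +2 /tmp/events.csv | cut -d, -f1 | sort -u      # unique values
```

A feature with many occurrences but very few unique values (or the reverse, nearly all values unique) matches the low-diversity and too-sparse cases described above.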
+
+### Common issues and steps to improve features
+
+- **Sending features with high cardinality.** Features with high cardinality are those that have many distinct values that are not likely to repeat over many events. For example, personal information specific to one individual (such as name, phone number, credit card number, IP address) shouldn't be used with Personalizer.
+
+- **Sending user IDs.** With large numbers of users, it's unlikely that this information is relevant to Personalizer learning to maximize the average reward score. Sending user IDs (even if not personal information) will likely add more noise to the model and isn't recommended.
+
+- **Features are too sparse. Values are distinct and rarely occur more than a few times.** Precise timestamps down to the second are very sparse. Timestamps can be made more dense (and therefore, more effective) by grouping times into "morning", "midday", or "afternoon", for example.
+
Location information also typically benefits from creating broader classifications. For example, latitude-longitude coordinates such as Lat: 47.67402° N, Long: 122.12154° W are too precise and force the model to learn latitude and longitude as distinct dimensions. When you're trying to personalize based on location information, it helps to group location information in larger sectors. An easy way to do that is to choose an appropriate rounding precision for the lat-long numbers, and combine latitude and longitude into "areas" by making them one string. For example, a good way to represent Lat: 47.67402° N, Long: 122.12154° W in regions approximately a few kilometers wide would be "location":"47.7, 122.1".
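Both densifying tricks — bucketing timestamps into dayparts and rounding lat-long coordinates into a single area string — can be sketched in a few lines. The bucket boundaries and the one-decimal precision are illustrative choices, not Personalizer requirements:

```shell
# Bucket an hour-of-day into a coarse daypart feature:
hour=13
if [ "$hour" -lt 12 ]; then part="morning"
elif [ "$hour" -lt 17 ]; then part="midday"
else part="afternoon"
fi
echo "daypart=$part"
# → daypart=midday

# Round latitude/longitude to one decimal and join them into one area string:
LC_ALL=C printf 'location=%.1f,%.1f\n' 47.67402 122.12154
# → location=47.7,122.1
```

In a real application you'd compute these in your app's own language before sending the features with your Rank request.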
+
+- **Expand feature sets with extrapolated information**
+You can also get more features by thinking of unexplored attributes that can be derived from information you already have. For example, in a fictitious movie list personalization, is it possible that a weekend vs weekday elicits different behavior from users? Time could be expanded to have a "weekend" or "weekday" attribute. Do national cultural holidays drive attention to certain movie types? For example, a "Halloween" attribute is useful in places where it's relevant. Is it possible that rainy weather has significant impact on the choice of a movie for many people? With time and place, a weather service could provide that information and you can add it as an extra feature.
++
+## Next steps
+
+[Analyze policy performances with an offline evaluation](how-to-offline-evaluation.md) with Personalizer.
+
cognitive-services How To Offline Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/how-to-offline-evaluation.md
Last updated 02/20/2020
# Analyze your learning loop with an offline evaluation
-Learn how to complete an offline evaluation and understand the results.
+Learn how to create an offline evaluation and interpret the results.
-Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior, learn what features are contributing most to personalization, and discover new machine learning values automatically.
+Offline Evaluations allow you to measure how effective Personalizer is compared to your application's default behavior over a period of logged (historical) data, and assess how well other model configuration settings may perform for your model.
+
+When you create an offline evaluation, the _Optimization discovery_ option will run offline evaluations over a variety of learning policy values to find one that may improve the performance of your model. You can also provide additional policies to assess in the offline evaluation.
Read about [Offline Evaluations](concepts-offline-evaluation.md) to learn more. ## Prerequisites
-* A configured Personalizer loop
-* The Personalizer loop must have a representative amount of data - as a ballpark we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files you can compare and test in the same evaluation.
+* A configured Personalizer resource
+* The Personalizer resource must have a representative amount of logged data - as a ballpark figure, we recommend at least 50,000 events in its logs for meaningful evaluation results. Optionally, you may also have previously exported _learning policy_ files that you wish to test and compare in this evaluation.
## Run an offline evaluation 1. In the [Azure portal](https://azure.microsoft.com/free/cognitive-services), locate your Personalizer resource. 1. In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**. ![In the Azure portal, go to the **Evaluations** section and select **Create Evaluation**.](./media/offline-evaluation/create-new-offline-evaluation.png)
-1. Configure the following values:
+1. Fill out the options in the _Create an evaluation_ window:
* An evaluation name. * Start and end date - these are dates that specify the range of data to use in the evaluation. This data must be present in the logs, as specified in the [Data Retention](how-to-settings.md) value.
- * Optimization Discovery set to **yes**.
+ * Set _Optimization discovery_ to **yes**, if you wish Personalizer to attempt to find more optimal learning policies.
+ * Add learning settings - upload a learning policy file if you wish to evaluate a custom or previously exported policy.
> [!div class="mx-imgBorder"] > ![Choose offline evaluation settings](./media/offline-evaluation/create-an-evaluation-form.png)
-1. Start the Evaluation by selecting **Ok**.
+1. Start the Evaluation by selecting **Start evaluation**.
## Review the evaluation results Evaluations can take a long time to run, depending on the amount of data to process, number of learning policies to compare, and whether an optimization was requested.
-Once completed, you can select the evaluation from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
+1. Once the evaluation completes, select it from the list of evaluations, then select **Compare the score of your application with other potential learning settings**. Select this feature when you want to see how your current learning policy performs compared to a new policy.
-1. Review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
+1. Next, review the performance of the [learning policies](concepts-offline-evaluation.md#discovering-the-optimized-learning-policy).
> [!div class="mx-imgBorder"] > [![Review evaluation results](./media/offline-evaluation/evaluation-results.png)](./media/offline-evaluation/evaluation-results.png#lightbox)
-1. Select **Apply** to apply the policy that improves the model best for your data.
+You'll see various learning policies on the chart, along with their estimated average reward, confidence intervals, and options to download or apply a specific policy.
+- "Online" - Personalizer's current policy
+- "Baseline1" - Your application's baseline policy
+- "BaselineRand" - A policy of taking actions at random
+- "Inter-len#" or "Hyper#" - Policies created by Optimization discovery.
+
+Select **Apply** to apply the policy that best improves the model for your data.
+ ## Next steps
cognitive-services Responsible Data And Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/responsible-data-and-privacy.md
Personalizer processes the following types of data:
To understand more about what information you typically use with Personalizer, see [Features are information about Actions and Context](concepts-features.md).
-[!TIP] You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores. To make informed decisions about what information to use with Personalizer, see the [Personalizer responsible use guidelines](responsible-use-cases.md).
-
+> [!TIP] > You decide which features to use, how to aggregate them, and where the information comes from when you call the Personalizer Rank API in your application. You also determine how to create reward scores.
## How does Personalizer process data?
Personalizer processes data as follows:
4. After the rank and reward information for events is correlated, it's removed from transient caches and placed in more permanent storage. It remains in permanent storage until the number of days specified in the Data Retention setting has gone by, at which time the information is deleted. If you choose not to specify a number of days in the Data Retention setting, this data will be saved as long as the Personalizer Azure Resource is not deleted or until you choose to Clear Data via the UI or APIs. You can change the Data Retention setting at any time. 5. Personalizer continuously trains internal Personalizer AI models specific to this Personalizer loop by using the data in the permanent storage and machine learning configuration parameters in [Learning settings](concept-active-learning.md). 6. Personalizer creates [offline evaluations](concepts-offline-evaluation.md) either automatically or on demand.
-Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](concept-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
+Offline evaluations contain a report of rewards obtained by Personalizer models during a past time period. An offline evaluation embeds the models active at the time of their creation, and the learning settings used to create them, as well as a historical aggregate of average reward per event for that time window. Evaluations also include [feature importance](how-to-feature-evaluation.md), which is a list of features observed in the time period, and their relative importance in the model.
### Independence of Personalizer loops
communication-services Browser Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/browser-support.md
A `CallClient` instance is required for this operation. When you have a `CallCli
```javascript const callClient = new CallClient(options);
-const environmentInfo = await callClient.getEnvironmentInfo();
+const environmentInfo = await callClient.feature(Features.DebugInfo).getEnvironmentInfo();
``` The `getEnvironmentInfo` method asynchronously returns an object of type `EnvironmentInfo`.
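As a hedged illustration of acting on that result, the sketch below interprets an `EnvironmentInfo`-like object. The boolean property names (`isSupportedEnvironment`, `isSupportedBrowser`, `isSupportedBrowserVersion`) are assumptions based on the calling SDK's documented `EnvironmentInfo` shape; verify them against your SDK version before relying on them:

```javascript
// Sketch: classify the environment returned by getEnvironmentInfo() so the
// app can decide whether to show an "unsupported browser" banner.
// Property names are assumed from the EnvironmentInfo shape; confirm against
// your installed @azure/communication-calling version.
function describeSupport(environmentInfo) {
  if (environmentInfo.isSupportedEnvironment) {
    return "supported";
  }
  if (!environmentInfo.isSupportedBrowser) {
    return "unsupported-browser";
  }
  if (!environmentInfo.isSupportedBrowserVersion) {
    return "unsupported-browser-version";
  }
  // Browser and version are fine, so the platform itself is the blocker.
  return "unsupported-platform";
}
```

In a real app you might call this with the awaited result of `getEnvironmentInfo()` and surface a warning for any value other than `"supported"`.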
cosmos-db Quickstart Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-nodejs.md
Create a doc with the *product* properties for the `adventureworks` database:
:::code language="javascript" source="~/samples-cosmosdb-mongodb-javascript/001-quickstart/index.js" id="new_doc":::
-Create an doc in the collect by calling [``Collection.UpdateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
+Create a doc in the collection by calling [``Collection.updateOne``](https://mongodb.github.io/node-mongodb-native/4.5/classes/Collection.html#updateOne). In this example, we chose to *upsert* instead of *create* a new doc in case you run this sample code more than once.
### Get a doc
Troubleshooting:
## Run the code
-This app creates a API for MongoDB database and collection and creates a doc and then reads the exact same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
+This app creates an API for MongoDB database and collection, creates a doc, and then reads the same doc back. Finally, the example issues a query that should only return that single doc. With each step, the example outputs information to the console about the steps it has performed.
To run the app, use a terminal to navigate to the application directory and run the application.
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
Previously updated : 10/18/2022 Last updated : 11/01/2022 # Change data capture in Azure Data Factory and Azure Synapse Analytics
When you perform data integration and ETL processes in the cloud, your jobs can
### Native change data capture in mapping data flow
-The changed data including inserted, updated and deleted rows can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you will see the changes happened on the source database to be automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data.
+The changed data including inserted, updated and deleted rows can be automatically detected and extracted by ADF mapping data flow from the source databases. No timestamp or ID columns are required to identify the changes since it uses the native change data capture technology in the databases. By simply chaining a source transform and a sink transform reference to a database dataset in a mapping data flow, you will see the changes that happened on the source database automatically applied to the target database, so that you can easily synchronize data between two tables. You can also add any transformations in between for any business logic to process the delta data. When defining your sink data destination, you can set insert, update, upsert, and delete operations in your sink without the need of an Alter Row transformation because ADF is able to automatically detect the row markers.
**Supported connectors** - [SAP CDC](connector-sap-change-data-capture.md)
data-factory Data Flow Alter Row https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-alter-row.md
Previously updated : 08/03/2022 Last updated : 11/01/2022 # Alter row transformation in mapping data flow
Alter Row transformations only operate on database, REST, or Azure Cosmos DB sin
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4vJYc]
+> [!NOTE]
+> An Alter Row transformation is not needed for Change Data Capture data flows that use native CDC sources like SQL Server or SAP. In those instances, ADF will automatically detect the row marker so Alter Row policies are unnecessary.
+ ## Specify a default row policy Create an Alter Row transformation and specify a row policy with a condition of `true()`. Each row that doesn't match any of the previously defined expressions will be marked for the specified row policy. By default, each row that doesn't match any conditional expression will be marked for `Insert`.
data-factory Data Flow Sink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-sink.md
Previously updated : 10/26/2022 Last updated : 11/01/2022 # Sink transformation in mapping data flow
For example, if I specify a single key column of `column1` in a cache sink calle
**Write to activity output** The cached sink can optionally write your output data to the input of the next pipeline activity. This will allow you to quickly and easily pass data out of your data flow activity without needing to persist the data in a data store.
+## Update method
+
+For database sink types, the Settings tab will include an "Update method" property. The default is insert but also includes checkbox options for update, upsert, and delete. To utilize those additional options, you will need to add an [Alter Row transformation](data-flow-alter-row.md) before the sink. The Alter Row will allow you to define the conditions for each of the database actions. If your source is a native CDC-enabled source, then you can set the update methods without an Alter Row as ADF is already aware of the row markers for insert, update, upsert, and delete.
+ ## Field mapping Similar to a select transformation, on the **Mapping** tab of the sink, you can decide which incoming columns will get written. By default, all input columns, including drifted columns, are mapped. This behavior is known as *automapping*.
data-factory Pricing Examples Transform Mapping Data Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/pricing-examples-transform-mapping-data-flows.md
To accomplish the scenario, you need to create a pipeline with the following ite
| **Operations** | **Types and Units** | | | | | Run Pipeline | 2 Activity runs **per execution** (1 for trigger run, 1 for activity runs) = 480 activity runs, rounded up since the calculator only allows increments of 1000. |
-| Data Flow Assumptions: General purpose 16 vCore hours **per execution** = 10 min + 10 min TTL | 20 min \ 60 min |
+| Data Flow Assumptions: General purpose 16 vCore hours **per execution** = 10 min | 10 min \ 60 min |
## Pricing calculator example
-**Total scenario pricing for 30 days: $350.76**
+**Total scenario pricing for 30 days: $175.88**
:::image type="content" source="media/pricing-concepts/scenario-4a-pricing-calculator.png" alt-text="Screenshot of the orchestration section of the pricing calculator configured to transform data in a blob store with mapping data flows." lightbox="media/pricing-concepts/scenario-4a-pricing-calculator.png":::
To accomplish the scenario, you need to create a pipeline with the following ite
- [Pricing example: Run SSIS packages on Azure-SSIS integration runtime](pricing-examples-ssis-on-azure-ssis-integration-runtime.md) - [Pricing example: Using mapping data flow debug for a normal workday](pricing-examples-mapping-data-flow-debug-workday.md) - [Pricing example: Data integration in Azure Data Factory Managed VNET](pricing-examples-data-integration-managed-vnet.md)-- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
+- [Pricing example: Get delta data from SAP ECC via SAP CDC in mapping data flows](pricing-examples-get-delta-data-from-sap-ecc.md)
data-factory Quickstart Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-get-started.md
# Quickstart: Get started with Azure Data Factory
-> [!div class="op_single_selector" title1="Select the version of Data Factory service you are using:"]
-> * [Version 1](v1/data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
-> * [Current version](quickstart-create-data-factory-rest-api.md)
- [!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)] Welcome to Azure Data Factory! This getting started article will let you create your first data factory and pipeline within 5 minutes. The ARM template below will create and configure everything you need to try it out. Then you only need to navigate to your demo data factory and make one more click to trigger the pipeline, which moves some sample data from one Azure blob storage to another.
All of the resources referenced above will be created in the new resource group,
1. In the resource group, you will see the new data factory, Azure blob storage account, and managed identity that were created by the deployment. :::image type="content" source="media/quickstart-get-started/resource-group-contents.png" alt-text="A screenshot of the contents of the resource group created for the demo.":::
-1. Select the data factory in the resource group to view it. Then select the **Open Azure Data Factory Studio** button to continue.
- :::image type="content" source="media/quickstart-get-started/open-data-factory-studio.png" alt-text="A screenshot of the Azure portal on the newly created data factory page, highlighting the location of the Open Azure Data Factory Studio button.":::
+1. Select the data factory in the resource group to view it. Then select the **Launch Studio** button to continue.
+ :::image type="content" source="media/quickstart-get-started/launch-adf-studio.png" alt-text="A screenshot of the Azure portal on the newly created data factory page, highlighting the location of the Open Azure Data Factory Studio button.":::
1. Select on the **Author** tab <img src="media/quickstart-get-started/author-button.png" alt="Author tab"/> and then the **Pipeline** created by the template. Then check the source data by selecting **Open**.
All of the resources referenced above will be created in the new resource group,
1. In this quickstart, the pipeline has only one activity type: Copy. Click on the pipeline name and you can see the details of the copy activity's run results.
- :::image type="content" source="media/quickstart-get-started/copy-activity-run-results.png" alt-text="Screenshot of the run results of a copy activity in the data factorying monitoring tab.":::
+ :::image type="content" source="media/quickstart-get-started/copy-activity-run-results.png" alt-text="Screenshot of the run results of a copy activity in the data factory monitoring tab.":::
1. Click on details, and the detailed copy process is displayed. From the results, data read and written size are the same, and 1 file was read and written, which also proves all the data has been successfully copied to the destination.
data-factory Source Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/source-control.md
Previously updated : 03/01/2022 Last updated : 10/26/2022 # Source control in Azure Data Factory
For more info about connecting Azure Repos to your organization's Active Directo
Visual authoring with GitHub integration supports source control and collaboration for work on your data factory pipelines. You can associate a data factory with a GitHub account repository for source control, collaboration, versioning. A single GitHub account can have multiple repositories, but a GitHub repository can be associated with only one data factory. If you don't have a GitHub account or repository, follow [these instructions](https://github.com/join) to create your resources.
-The GitHub integration with Data Factory supports both public GitHub (that is, [https://github.com](https://github.com)) and GitHub Enterprise. You can use both public and private GitHub repositories with Data Factory as long you have read and write permission to the repository in GitHub. ADFΓÇÖs GitHub enterprise server integration only works with [officially supported versions of GitHub enterprise server.](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases)
+The GitHub integration with Data Factory supports public GitHub (that is, [https://github.com](https://github.com)), GitHub Enterprise Cloud, and GitHub Enterprise Server. You can use both public and private GitHub repositories with Data Factory as long as you have read and write permission to the repository in GitHub. ADF's GitHub Enterprise Server integration only works with [officially supported versions of GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.1/admin/all-releases).
> [!NOTE] > If you are using Microsoft Edge, GitHub Enterprise version less than 2.1.4 does not work with it. GitHub officially supports >=3.0 and these all should be fine for ADF. As GitHub changes its minimum version, ADF supported versions will also change. ### GitHub settings ++ :::image type="content" source="media/author-visually/github-integration-image2.png" alt-text="GitHub repository settings"::: The configuration pane shows the following GitHub repository settings:
The configuration pane shows the following GitHub repository settings:
| **Setting** | **Description** | **Value** | |: |: |: | | **Repository Type** | The type of the Azure Repos code repository. | GitHub |
-| **Use GitHub Enterprise** | Checkbox to select GitHub Enterprise | unselected (default) |
-| **GitHub Enterprise URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise** is selected | `<your GitHub enterprise url>` |
-| **GitHub account** | Your GitHub account name. This name can be found from https:\//github.com/{account name}/{repository name}. Navigating to this page prompts you to enter GitHub OAuth credentials to your GitHub account. | `<your GitHub account name>` |
-| **Repository Name** | Your GitHub code repository name. GitHub accounts contain Git repositories to manage your source code. You can create a new repository or use an existing repository that's already in your account. | `<your repository name>` |
-| **Collaboration branch** | Your GitHub collaboration branch that is used for publishing. By default, it's main. Change this setting in case you want to publish resources from another branch. | `<your collaboration branch>` |
+| **Use GitHub Enterprise Server** | Checkbox to select GitHub Enterprise Server.| unselected (default) |
+| **GitHub Enterprise Server URL** | The GitHub Enterprise root URL (must be HTTPS for local GitHub Enterprise server). For example: `https://github.mydomain.com`. Required only if **Use GitHub Enterprise Server** is selected | `<your GitHub Enterprise Server URL>` |
+| **GitHub repository owner** | GitHub organization or account that owns the repository. This name can be found from https:\//github.com/{owner}/{repository name}. Navigating to this page prompts you to enter GitHub OAuth credentials to your GitHub organization or account. If you select **Use GitHub Enterprise Server**, a dialog box will appear prompting you to enter your access token. | `<your GitHub repository owner name>` |
+| **Repository Name** | Your GitHub code repository name. GitHub accounts contain Git repositories to manage your source code. You can create a new repository or use an existing repository that's already in your account. Specify your GitHub code repository name when you select **Select repository**. | `<your repository name>` |
+|**Git repository link**| Your GitHub code repository link. Specify your GitHub code repository link when you select **Use repository link**. |`<your repository link>`|
+| **Collaboration branch** | Your GitHub collaboration branch that is used for publishing. By default, it's main. Change this setting in case you want to publish resources from another branch. You can also create a new collaboration branch here. | `<your collaboration branch>` |
+| **Publish branch** |The branch in your repository where publishing related ARM templates are stored and updated.| `<your publish branch name>`|
| **Root folder** | Your root folder in your GitHub collaboration branch. |`<your root folder name>` |
-| **Import existing Data Factory resources to repository** | Specifies whether to import existing data factory resources from the UX authoring canvas into a GitHub repository. Select the box to import your data factory resources into the associated Git repository in JSON format. This action exports each resource individually (that is, the linked services and datasets are exported into separate JSONs). When this box isn't selected, the existing resources aren't imported. | Selected (default) |
-| **Branch to import resource into** | Specifies into which branch the data factory resources (pipelines, datasets, linked services etc.) are imported. You can import resources into one of the following branches: a. Collaboration b. Create new c. Use Existing | |
+| **Import existing resources to repository** | Specifies whether to import existing data factory resources from the UX authoring canvas into a GitHub repository. Select the box to import your data factory resources into the associated Git repository in JSON format. This action exports each resource individually (that is, the linked services and datasets are exported into separate JSONs). When this box isn't selected, the existing resources aren't imported. | Selected (default) |
+| **Import resource into this branch** | Specifies into which branch the data factory resources (pipelines, datasets, linked services etc.) are imported. | |
### GitHub organizations Connecting to a GitHub organization requires the organization to grant permission to Azure Data Factory. A user with ADMIN permissions on the organization must perform the below steps to allow data factory to connect.
-#### Connecting to GitHub for the first time in Azure Data Factory
+#### Connecting to public GitHub or GitHub Enterprise Cloud for the first time in Azure Data Factory
-If you're connecting to GitHub from Azure Data Factory for the first time, follow these steps to connect to a GitHub organization.
+If you're connecting to public GitHub or GitHub Enterprise Cloud from Azure Data Factory for the first time, follow these steps to connect to a GitHub organization.
1. In the Git configuration pane, enter the organization name in the *GitHub Account* field. A prompt to login into GitHub will appear. 1. Login using your user credentials.
If you're connecting to GitHub from Azure Data Factory for the first time, follo
Once you follow these steps, your factory will be able to connect to both public and private repositories within your organization. If you are unable to connect, try clearing the browser cache and retrying.
-#### Already connected to GitHub using a personal account
+#### Already connected to public GitHub or GitHub Enterprise Cloud using a personal account
-If you have already connected to GitHub and only granted permission to access a personal account, follow the below steps to grant permissions to an organization.
+If you have already connected to public GitHub or GitHub Enterprise Cloud and only granted permission to access a personal account, follow the below steps to grant permissions to an organization.
1. Go to GitHub and open **Settings**.
If you have already connected to GitHub and only granted permission to access a
Once you follow these steps, your factory will be able to connect to both public and private repositories within your organization.
+#### Connecting to GitHub Enterprise Server
+
+If you connect to GitHub Enterprise Server, you need to use a personal access token for authentication. Learn how to create a personal access token in [Creating a personal access token](https://docs.github.com/en/enterprise-server@3.6/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token).
+
+> [!Note]
+> GitHub Enterprise Server is in your self-hosted private environment, so you need to have full control of the firewall, network policies, and VPN when you use this authentication. For more information, see [About GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.6/admin/overview/about-github-enterprise-server#about-github-enterprise-server).
+++ ### Known GitHub limitations - You can store script and data files in a GitHub repository. However, you have to upload the files manually to Azure Storage. A Data Factory pipeline does not automatically upload script or data files stored in a GitHub repository to Azure Storage.
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
If you enabled the integration, but still don't see the extension running on you
### What are the licensing requirements for Microsoft Defender for Endpoint?
-Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**. Alternatively, you can [purchase licenses for Defender for Endpoint](https://www.microsoft.com/en-us/security/business/get-started/contact-us) for servers separately.
+Licenses for Defender for Endpoint for servers are included with **Microsoft Defender for Servers**.
### Do I need to buy a separate anti-malware solution to protect my machines? No. With MDE integration in Defender for Servers, you'll also get malware protection on your machines.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
To have full visibility to Microsoft Defender for Servers security content, ensu
- Additional extensions should be enabled on the Arc-connected machines. - Microsoft Defender for Endpoint - VA solution (TVM/ Qualys)
- - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
+ - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA). Ensure the selected workspace has security solution installed.
- The LA agent is currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent.
 The LA agent and AMA are currently configured at the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent and AMA.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
Title: 'Quickstart: Connect your GitHub repositories to Microsoft Defender for Cloud' description: Learn how to connect your GitHub repositories to Defender for Cloud. Previously updated : 09/20/2022 Last updated : 11/02/2022
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
:::image type="content" source="media/quickstart-onboard-github/select-github.png" alt-text="Screenshot that shows you where to select, to select GitHub." lightbox="media/quickstart-onboard-github/select-github.png":::
-1. Enter a name, select your subscription, resource group, and region.
+1. Enter a name (limit of 20 characters), select your subscription, resource group, and region.
> [!NOTE] > The subscription will be the location where Defender for DevOps will create and store the GitHub connection.
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
1. Select **Next: Authorize connection**.
-1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect
+1. Select **Authorize** to grant your Azure subscription access to your GitHub repositories. Sign in, if necessary, with an account that has permissions to the repositories you want to protect.
> [!NOTE] > The authorization will auto-login using the session from your browser tab. After you select Authorize, if you do not see the GitHub organizations you expect to see, check whether you are logged in to MDC in one browser tab and logged in to GitHub in another browser tab.
+ > After authorization, if you wait too long to install the DevOps application, the session will time out and you will receive an error message.
1. Select **Install**.
defender-for-cloud Recommendations Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/recommendations-reference.md
description: This article lists Microsoft Defender for Cloud's security recommen
Previously updated : 08/24/2022 Last updated : 11/02/2022
shown in your environment depend on the resources you're protecting and your cus
configuration. Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
-the Microsoft cloud security benchmark is the Microsoft-authored, Azure-specific set of guidelines for security
+The Microsoft cloud security benchmark is the Microsoft-authored set of guidelines for security
and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
You can now monitor your cloud security compliance posture per cloud in a single
Microsoft cloud security benchmark is automatically assigned to your Azure subscriptions and AWS accounts when you onboard Defender for Cloud.
-Learn more about the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+Learn more about the [Microsoft cloud security benchmark](concept-regulatory-compliance.md).
### Attack path analysis and contextual security capabilities in Defender for Cloud (Preview)
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Title: Understanding security policies, initiatives, and recommendations in Micr
description: Learn about security policies, initiatives, and recommendations in Microsoft Defender for Cloud. Previously updated : 06/06/2022 Last updated : 11/02/2022 # What are security policies, initiatives, and recommendations?
A security initiative defines the desired configuration of your workloads and he
Like security policies, Defender for Cloud initiatives are also created in Azure Policy. You can use [Azure Policy](../governance/policy/overview.md) to manage your policies, build initiatives, and assign initiatives to multiple subscriptions or for entire management groups.
-The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Microsoft cloud security benchmark. This benchmark is the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
+The default initiative automatically assigned to every subscription in Microsoft Defender for Cloud is Microsoft cloud security benchmark. This benchmark is the Microsoft-authored set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud-centric security. Learn more about [Microsoft cloud security benchmark](/security/benchmark/azure/introduction).
Defender for Cloud offers the following options for working with security initiatives and policies:
defender-for-cloud Tutorial Security Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-security-policy.md
Title: Working with security policies
description: Learn how to work with security policies in Microsoft Defender for Cloud. Previously updated : 01/25/2022 Last updated : 10/31/2022 # Manage security policies
To view your security policies in Defender for Cloud:
1. To view and edit the default initiative, select it and proceed as described below.
- :::image type="content" source="./media/security-center-policies/policy-screen.png" alt-text="Effective policy screen.":::
- This **Security policy** screen reflects the action taken by the policies assigned on the subscription or management group you selected. * Use the links at the top to open a policy **assignment** that applies on the subscription or management group. These links let you access the assignment and edit or disable the policy. For example, if you see that a particular policy assignment is effectively denying endpoint protection, use the link to edit or disable the policy.
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Title: Alert types and descriptions
-description: Review Defender for IoT Alert descriptions.
Previously updated : 12/13/2021-
+ Title: OT monitoring alert types and descriptions
+description: Learn more about the alerts that are triggered for traffic on OT networks.
Last updated : 11/01/2022+
-# Alert types and descriptions
+# OT monitoring alert types and descriptions
This article provides information on the alert types, descriptions, and severities that the Defender for IoT engines may generate. Use this information to help map alerts into playbooks, define Forwarding rules, Exclusion rules, and custom alerts, and define the appropriate rules within a SIEM. Alerts appear in the **Alerts** window, where you can manage the alert event.

### Alert news
-New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the Support page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
+New alerts may be added and existing alerts may be updated or disabled. Certain disabled alerts can be re-enabled from the **Support** page of the sensor console. Alerts that can be re-enabled are marked with an asterisk (*) in the tables below.
-You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
+You may have configured newly disabled alerts in your Forwarding rules. If so, you may need to update related Defender for IoT Exclusion rules, or update SIEM rules and playbooks where relevant.
See [What's new in Microsoft Defender for IoT?](release-notes.md#whats-new-in-microsoft-defender-for-iot) for detailed information about changes made to alerts.
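The engine, severity, and category properties described below are the natural keys for SIEM routing rules. As a minimal sketch of that idea in Python (the field names, queue names, and routing logic here are illustrative assumptions, not a Defender for IoT API or schema):

```python
# Hypothetical illustration: route OT alerts to SIEM queues by engine and
# severity. The dictionary keys and queue names are assumptions for this
# sketch, not an actual Defender for IoT alert schema.

SEVERITY_ORDER = {"Warning": 0, "Minor": 1, "Major": 2, "Critical": 3}

def route_alert(alert: dict, min_severity: str = "Major") -> str:
    """Return a queue name for an alert, or 'ignore' if below the threshold."""
    if SEVERITY_ORDER[alert["severity"]] < SEVERITY_ORDER[min_severity]:
        return "ignore"
    # Malware and anomaly detections go to the high-priority queue.
    if alert["engine"] in ("Malware", "Anomaly"):
        return "soc-priority"
    return "soc-review"

alerts = [
    {"title": "Internet Access Detected", "engine": "Policy Violation",
     "severity": "Major", "category": "Internet Access"},
    {"title": "New Port Discovery", "engine": "Policy Violation",
     "severity": "Warning", "category": "Discovery"},
]

for a in alerts:
    print(a["title"], "->", route_alert(a))
```

In a real deployment this decision would live in the sensor's Forwarding rules or in the SIEM itself; the point is only that engine and severity are the fields worth keying on.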
| Alert type | Description |
|-|-|
-| Policy violation alerts | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
-| Protocol violation alerts | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
-| Operational alerts | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
-| Malware alerts | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
-| Anomaly alerts | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
+| **Policy violation alerts** | Triggered when the Policy Violation engine detects a deviation from traffic previously learned. For example: <br /> - A new device is detected. <br /> - A new configuration is detected on a device. <br /> - A device not defined as a programming device carries out a programming change. <br /> - A firmware version changed. |
+| **Protocol violation alerts** | Triggered when the Protocol Violation engine detects packet structures or field values that don't comply with the protocol specification. |
+| **Operational alerts** | Triggered when the Operational engine detects network operational incidents or a device malfunctioning. For example, a network device was stopped through a Stop PLC command, or an interface on a sensor stopped monitoring traffic. |
+| **Malware alerts** | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
+| **Anomaly alerts** | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
+## Supported alert categories
+
+Each alert has one of the following categories:
+
+ :::column span="":::
+ - Abnormal Communication Behavior
+ - Abnormal HTTP Communication Behavior
+ - Authentication
+ - Backup
+ - Bandwidth Anomalies
+ - Buffer overflow
+ - Command Failures
+ - Configuration changes
+ - Custom Alerts
+ - Discovery
+ - Firmware change
+ - Illegal commands
+ :::column-end:::
+ :::column span="":::
+ - Internet Access
+ - Operation Failures
+ - Operational issues
+ - Programming
+ - Remote access
+ - Restart/Stop Commands
+ - Scan
+ - Sensor traffic
+ - Suspicion of malicious activity
+ - Suspicion of Malware
+ - Unauthorized Communication Behavior
+ - Unresponsive
+ :::column-end:::
## Policy engine alerts
Policy engine alerts describe detected deviations from learned baseline behavior
| Title | Description | Severity | Category |
|--|--|--|--|
-| Beckhoff Software Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Database Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 login failures in 5 minutes | Major | Authentication |
-| Emerson ROC Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| External address within the network communicated with Internet | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| Field Device Discovered Unexpectedly | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
-| Firmware Change Detected | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Foxboro I/A Unauthorized Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| FTP Login Failed | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
-| Function Code Raised Unauthorized Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
-| GOOSE Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| Honeywell Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| * Illegal HTTP Communication | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| Internet Access Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
-| Mitsubishi Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Modbus Address Range Violation | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
-| Modbus Firmware Version Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| New Activity Detected - CIP Class | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP Class Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP PCCC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - CIP Symbol | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - EtherNet/IP I/O Connection | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - EtherNet/IP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - GSM Message Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - LonTalk Command Codes | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Port Discovery | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
-| New Activity Detected - LonTalk Network Variable | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Ovation Data Request | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Read/Write Command (AMS Index Group) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| New Activity Detected - Read/Write Command (AMS Index Offset) | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
-| New Activity Detected - Unauthorized DeltaV Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Unauthorized DeltaV ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Unauthorized RPC Message Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using AMS Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Siemens SICAM Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Suitelink Protocol command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Suitelink Protocol sessions | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Activity Detected - Using Yokogawa VNetIP Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| New Asset Detected | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
-| New LLDP Device Configuration | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
-| Omron FINS Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| S7 Plus PLC Firmware Changed | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
-| Sampled Values Message Type Settings | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
-| Suspicion of Illegal Integrity Scan | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
-| Toshiba Computer Link Unauthorized Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
-| Unauthorized ABB Totalflow File Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized ABB Totalflow Register Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Access to Siemens S7 Data Block | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
-| Unauthorized Access to Siemens S7 Plus Object | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Access to Wonderware Tag | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
-| Unauthorized BACNet Object Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized BACNet Route | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Database Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| Unauthorized Database Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized Emerson ROC Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP File Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP Protocol Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized GE SRTP System Memory Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized HTTP Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * Unauthorized HTTP SOAP Action | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
-| * Unauthorized HTTP User Agent | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
-| Unauthorized Internet Connectivity Detected | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
-| Unauthorized Mitsubishi MELSEC Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized MMS Program Access | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
-| Unauthorized MMS Service | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Multicast/Broadcast Connection | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
-| Unauthorized Name Query | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized OPC UA Activity | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized OPC UA Request/Response | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Operation was detected by a User Defined Rule | Traffic was detected between two devices. This activity is unauthorized based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
-| Unauthorized PLC Configuration Read | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
-| Unauthorized PLC Configuration Write | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
-| Unauthorized PLC Program Upload | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
-| Unauthorized PLC Programming | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
-| Unauthorized Profinet Frame Type | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized SAIA S-Bus Command | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Execution of Control Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Execution of User Defined Function | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Plus Block Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized Siemens S7 Plus Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unauthorized SMB Login | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
-| Unauthorized SNMP Operation | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
-| Unauthorized SSH Access | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
-| Unauthorized Windows Process | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| Unauthorized Windows Service | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
-| Unauthorized Operation was detected by a User Defined Rule | New traffic parameters were detected. This parameter combination violates a user defined rule | Major |
-| Unpermitted Modbus Schneider Electric Extension | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of ASDU Types | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of DNP3 Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
-| Unpermitted Usage of Internal Indication (IIN) | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
-| Unpermitted Usage of Modbus Function Code | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Beckhoff Software Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Database Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. <br><br> Threshold: 2 sign-in failures in 5 minutes | Major | Authentication |
+| **Emerson ROC Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **External address within the network communicated with Internet** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| **Field Device Discovered Unexpectedly** | A new source device was detected on the network but hasn't been authorized. | Major | Discovery |
+| **Firmware Change Detected** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Foxboro I/A Unauthorized Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **FTP Login Failed** | A failed sign-in attempt was detected from a source device to a destination server. This alert might be the result of human error, but could also indicate a malicious attempt to compromise the server or data on it. | Major | Authentication |
+| **Function Code Raised Unauthorized Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Command Failures |
+| **GOOSE Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| **Honeywell Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| * **Illegal HTTP Communication** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| **Internet Access Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Major | Internet Access |
+| **Mitsubishi Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Modbus Address Range Violation** | A primary device requested access to a new secondary memory address. | Major | Unauthorized Communication Behavior |
+| **Modbus Firmware Version Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **New Activity Detected - CIP Class** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP Class Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP PCCC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - CIP Symbol** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - EtherNet/IP I/O Connection** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - EtherNet/IP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - GSM Message Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - LonTalk Command Codes** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - LonTalk Network Variable** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Ovation Data Request** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Read/Write Command (AMS Index Group)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| **New Activity Detected - Read/Write Command (AMS Index Offset)** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Configuration Changes |
+| **New Activity Detected - Unauthorized DeltaV Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Unauthorized DeltaV ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Unauthorized RPC Message Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using AMS Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Siemens SICAM Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Suitelink Protocol command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Suitelink Protocol sessions** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Activity Detected - Using Yokogawa VNetIP Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **New Asset Detected** | A new source device was detected on the network but hasn't been authorized. <br><br>This alert applies to devices discovered in OT subnets. New devices discovered in IT subnets don't trigger an alert.| Major | Discovery |
+| **New LLDP Device Configuration** | A new source device was detected on the network but hasn't been authorized. | Major | Configuration Changes |
+| **New Port Discovery** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Warning | Discovery |
+| **Omron FINS Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **S7 Plus PLC Firmware Changed** | Firmware was updated on a source device. This may be authorized activity, for example a planned maintenance procedure. | Major | Firmware Change |
+| **Sampled Values Message Type Settings** | Message (identified by protocol ID) settings were changed on a source device. | Warning | Unauthorized Communication Behavior |
+| **Suspicion of Illegal Integrity Scan** | A scan was detected on a DNP3 source device (outstation). This scan wasn't authorized as learned traffic on your network. | Major | Scan |
+| **Toshiba Computer Link Unauthorized Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Minor | Unauthorized Communication Behavior |
+| **Unauthorized ABB Totalflow File Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized ABB Totalflow Register Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Access to Siemens S7 Data Block** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Warning | Unauthorized Communication Behavior |
+| **Unauthorized Access to Siemens S7 Plus Object** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Access to Wonderware Tag** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Unauthorized Communication Behavior |
+| **Unauthorized BACNet Object Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized BACNet Route** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Database Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| **Unauthorized Database Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized Emerson ROC Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP File Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP Protocol Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized GE SRTP System Memory Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized HTTP Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * **Unauthorized HTTP SOAP Action** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal HTTP Communication Behavior |
+| * **Unauthorized HTTP User Agent** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal HTTP Communication Behavior |
+| **Unauthorized Internet Connectivity Detected** | A source device defined as part of your network is communicating with Internet addresses. The source isn't authorized to communicate with Internet addresses. | Critical | Internet Access |
+| **Unauthorized Mitsubishi MELSEC Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized MMS Program Access** | A source device attempted to access a resource on another device. An access attempt to this resource between these two devices hasn't been authorized as learned traffic on your network. | Major | Programming |
+| **Unauthorized MMS Service** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Multicast/Broadcast Connection** | A Multicast/Broadcast connection was detected between a source device and other devices. Multicast/Broadcast communication isn't authorized. | Critical | Abnormal Communication Behavior |
+| **Unauthorized Name Query** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized OPC UA Activity** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized OPC UA Request/Response** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Operation was detected by a User Defined Rule** | Traffic was detected between two devices. This activity is unauthorized, based on a Custom Alert Rule defined by a user. | Major | Custom Alerts |
+| **Unauthorized PLC Configuration Read** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Warning | Configuration Changes |
+| **Unauthorized PLC Configuration Write** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Configuration Changes |
+| **Unauthorized PLC Program Upload** | The source device sent a command to read/write the program of a destination controller. This activity wasn't previously seen. | Major | Programming |
+| **Unauthorized PLC Programming** | The source device isn't defined as a programming device but performed a read/write operation on a destination controller. Programming changes should only be performed by programming devices. A programming application may have been installed on this device. | Critical | Programming |
+| **Unauthorized Profinet Frame Type** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized SAIA S-Bus Command** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Execution of Control Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Execution of User Defined Function** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Plus Block Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized Siemens S7 Plus Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unauthorized SMB Login** | A sign-in attempt between a source client and destination server was detected. Communication between these devices hasn't been authorized as learned traffic on your network. | Major | Authentication |
+| **Unauthorized SNMP Operation** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Abnormal Communication Behavior |
+| **Unauthorized SSH Access** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Remote Access |
+| **Unauthorized Windows Process** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
+| **Unauthorized Windows Service** | An unauthorized application was detected on a source device. The application hasn't been authorized as a learned application on your network. | Major | Abnormal Communication Behavior |
+| **Unauthorized Operation was detected by a User Defined Rule** | New traffic parameters were detected. This parameter combination violates a user-defined rule. | Major | Custom Alerts |
+| **Unpermitted Modbus Schneider Electric Extension** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of ASDU Types** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of DNP3 Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
+| **Unpermitted Usage of Internal Indication (IIN)** | A DNP3 source device (outstation) reported an internal indication (IIN) that hasn't been authorized as learned traffic on your network. | Major | Illegal Commands |
+| **Unpermitted Usage of Modbus Function Code** | New traffic parameters were detected. This parameter combination hasn't been authorized as learned traffic on your network. The following combination is unauthorized. | Major | Unauthorized Communication Behavior |
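The policy engine rows above share one pattern: parameter combinations observed during a learning period become the authorized baseline, and any combination seen afterwards that isn't in that baseline raises an "unauthorized" alert. The following sketch illustrates that allowlist model; all names and fields are hypothetical, not Defender for IoT internals.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen so instances are hashable and can live in a set
class TrafficParams:
    source: str
    destination: str
    protocol: str
    function_code: int

class PolicyEngine:
    """Toy allowlist detector: learn combinations, then flag anything new."""
    def __init__(self):
        self.baseline = set()
        self.learning = True

    def observe(self, params):
        if self.learning:
            self.baseline.add(params)  # learned traffic becomes the baseline
            return None
        if params not in self.baseline:
            return f"Unauthorized: {params.protocol} function {params.function_code}"
        return None  # previously learned combination, no alert

engine = PolicyEngine()
engine.observe(TrafficParams("10.0.0.5", "10.0.0.9", "Modbus", 3))
engine.learning = False  # learning period ends
alert = engine.observe(TrafficParams("10.0.0.5", "10.0.0.9", "Modbus", 16))
```

Under this model, the learned read (function code 3) stays silent after learning ends, while the never-seen write (function code 16) produces an alert, mirroring rows such as "Unpermitted Usage of Modbus Function Code".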
## Anomaly engine alerts
Anomaly engine alerts describe detected anomalies in network activity.
| Title | Description | Severity | Category |
|--|--|--|--|
-| Abnormal Exception Pattern in Slave | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
-| * Abnormal HTTP Header Length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| * Abnormal Number of Parameters in HTTP Header | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
-| Abnormal Periodic Behavior In Communication Channel | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
-| Abnormal Termination of Applications | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
-| Abnormal Traffic Bandwidth | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| Abnormal Traffic Bandwidth Between Devices | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
-| Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
-| ARP Address Scan Detected | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
-| ARP Spoofing | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-| Excessive Login Attempts | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 login attempts in 1 minute | Critical | Authentication |
-| Excessive Number of Sessions | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
-| Excessive Restart Rate of an Outstation | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/ Stop Commands |
-| Excessive SMB login attempts | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 login attempts in 10 minutes | Critical | Authentication |
-| ICMP Flooding | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
-|* Illegal HTTP Header Content | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
-| Inactive Communication Channel | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
-| Long Duration Address Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
-| Password Guessing Attempt Detected | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
-| PLC Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
-| Port Scan Detected | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
-| Unexpected message length | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior |
-| Unexpected Traffic for Standard Port | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
+| **Abnormal Exception Pattern in Slave** | An excessive number of errors were detected on a source device. This alert may be the result of an operational issue. <br><br> Threshold: 20 exceptions in 1 hour | Minor | Abnormal Communication Behavior |
+| * **Abnormal HTTP Header Length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| * **Abnormal Number of Parameters in HTTP Header** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. | Critical | Abnormal HTTP Communication Behavior |
+| **Abnormal Periodic Behavior In Communication Channel** | A change in the frequency of communication between the source and destination devices was detected. | Minor | Abnormal Communication Behavior |
+| **Abnormal Termination of Applications** | An excessive number of stop commands were detected on a source device. This alert may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 20 stop commands in 3 hours | Major | Abnormal Communication Behavior |
+| **Abnormal Traffic Bandwidth** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| **Abnormal Traffic Bandwidth Between Devices** | Abnormal bandwidth was detected on a channel. Bandwidth appears to be lower/higher than previously detected. For details, work with the Total Bandwidth widget. | Warning | Bandwidth Anomalies |
+| **Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 2 minutes | Critical | Scan |
+| **ARP Address Scan Detected** | A source device was detected scanning network devices using Address Resolution Protocol (ARP). This device address hasn't been authorized as a valid ARP scanning address. <br><br> Threshold: 40 scans in 6 minutes | Critical | Scan |
+| **ARP Spoofing** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
+| **Excessive Login Attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This alert may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 20 sign-in attempts in 1 minute | Critical | Authentication |
+| **Excessive Number of Sessions** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 50 sessions in 1 minute | Critical | Abnormal Communication Behavior |
+| **Excessive Restart Rate of an Outstation** | An excessive number of restart commands were detected on a source device. These alerts may be the result of an operational issue or an attempt to manipulate the device. <br><br> Threshold: 10 restarts in 1 hour | Major | Restart/Stop Commands |
+| **Excessive SMB login attempts** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 10 sign-in attempts in 10 minutes | Critical | Authentication |
+| **ICMP Flooding** | An abnormal quantity of packets was detected in the network. This alert could indicate an attack, for example, an ARP spoofing or ICMP flooding attack. <br><br> Threshold: 60 packets in 1 minute | Warning | Abnormal Communication Behavior |
+| * **Illegal HTTP Header Content** | The source device initiated an invalid request. | Critical | Abnormal HTTP Communication Behavior |
+| **Inactive Communication Channel** | A communication channel between two devices was inactive during a period in which activity is usually observed. This might indicate that the program generating this traffic was changed, or the program might be unavailable. It's recommended to review the configuration of the installed program and verify that it's configured properly. <br><br> Threshold: 1 minute | Warning | Unresponsive |
+| **Long Duration Address Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 50 connections to the same B class subnet in 10 minutes | Critical | Scan |
+| **Password Guessing Attempt Detected** | A source device was seen performing excessive sign-in attempts to a destination server. This may indicate a brute force attack. The server may be compromised by a malicious actor. <br><br> Threshold: 100 attempts in 1 minute | Critical | Authentication |
+| **PLC Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 10 scans in 2 minutes | Critical | Scan |
+| **Port Scan Detected** | A source device was detected scanning network devices. This device hasn't been authorized as a network scanning device. <br><br> Threshold: 25 scans in 2 minutes | Critical | Scan |
+| **Unexpected message length** | The source device sent an abnormal message. This alert may indicate an attempt to attack the destination device. <br><br> Threshold: text length - 32768 | Critical | Abnormal Communication Behavior |
+| **Unexpected Traffic for Standard Port** | Traffic was detected on a device using a port reserved for another protocol. | Major | Abnormal Communication Behavior |
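Several of the rate-based alerts above (for example, **Excessive SMB login attempts** or **Port Scan Detected**) are defined by a count threshold over a time window. As a rough illustration only, not the engine's actual implementation, a sliding-window counter can model how such thresholds fire; the class name and parameters here are invented for this sketch:

```python
from collections import deque


class RateThreshold:
    """Sliding-window counter: fires once the event count within
    `window_seconds` reaches `limit` (e.g. 10 sign-ins in 10 minutes)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # timestamps of recent events

    def record(self, ts):
        """Record one event at time `ts`; return True if the threshold is hit."""
        self.events.append(ts)
        # Drop events that have fallen out of the window.
        while self.events and ts - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.limit


# "Excessive SMB login attempts": 10 sign-in attempts in 10 minutes.
smb_logins = RateThreshold(limit=10, window_seconds=600)
alerts = [smb_logins.record(t) for t in range(0, 100, 10)]  # the 10th attempt trips the alert
```

The same structure covers any of the `N events in M minutes` thresholds in the table; only `limit` and `window_seconds` change per alert type.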
## Protocol violation engine alerts
Protocol engine alerts describe detected deviations in the packet structure, or
| Title | Description | Severity | Category |
|--|--|--|--|
-| Excessive Malformed Packets In a Single Session | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
-| Firmware Update | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
-| Function Code Not Supported by Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Illegal BACNet message | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal Connection Attempt on Port 0 | A source device attempted to connect to destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
-| Illegal DNP3 Operation | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal MODBUS Operation (Exception Raised by Master) | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal MODBUS Operation (Function Code Zero) | The source device initiated an invalid request. | Major | Illegal Commands |
-| Illegal Protocol Version | The source device initiated an invalid request. | Major | Illegal Commands |
-| Incorrect Parameter Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Initiation of an Obsolete Function Code (Initialize Data) | The source device initiated an invalid request. | Minor | Illegal Commands |
-| Initiation of an Obsolete Function Code (Save Config) | The source device initiated an invalid request. | Minor | Illegal Commands |
-| Master Requested an Application Layer Confirmation | The source device initiated an invalid request. | Warning | Illegal Commands |
-| Modbus Exception | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
-| Slave Device Received Illegal ASDU Type | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Command Cause of Transmission | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Common Address | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Data Address Parameter | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Data Value Parameter | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Function Code | The destination device received an invalid request. | Major | Illegal Commands |
-| Slave Device Received Illegal Information Object Address | The destination device received an invalid request. | Major | Illegal Commands |
-| Unknown Object Sent to Outstation | The destination device received an invalid request. | Major | Illegal Commands |
-| Usage of a Reserved Function Code | The source device initiated an invalid request. | Major | Illegal Commands |
-| Usage of Improper Formatting by Outstation | The source device initiated an invalid request. | Warning | Illegal Commands |
-| Usage of Reserved Status Flags (IIN) | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
+| **Excessive Malformed Packets In a Single Session** | An abnormal number of malformed packets sent from the source device to the destination device. This alert might indicate erroneous communications, or an attempt to manipulate the targeted device. <br><br> Threshold: 2 malformed packets in 10 minutes | Major | Illegal Commands |
+| **Firmware Update** | A source device sent a command to update firmware on a destination device. Verify that recent programming, configuration and firmware upgrades made to the destination device are valid. | Warning | Firmware Change |
+| **Function Code Not Supported by Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Illegal BACNet message** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal Connection Attempt on Port 0** | A source device attempted to connect to a destination device on port number zero (0). For TCP, port 0 is reserved and can't be used. For UDP, the port is optional and a value of 0 means no port. There's usually no service on a system that listens on port 0. This event may indicate an attempt to attack the destination device, or indicate that an application was programmed incorrectly. | Minor | Illegal Commands |
+| **Illegal DNP3 Operation** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal MODBUS Operation (Exception Raised by Master)** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal MODBUS Operation (Function Code Zero)** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Illegal Protocol Version** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Incorrect Parameter Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Initiation of an Obsolete Function Code (Initialize Data)** | The source device initiated an invalid request. | Minor | Illegal Commands |
+| **Initiation of an Obsolete Function Code (Save Config)** | The source device initiated an invalid request. | Minor | Illegal Commands |
+| **Master Requested an Application Layer Confirmation** | The source device initiated an invalid request. | Warning | Illegal Commands |
+| **Modbus Exception** | A source device (secondary) returned an exception to a destination device (primary). | Major | Illegal Commands |
+| **Slave Device Received Illegal ASDU Type** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Command Cause of Transmission** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Common Address** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Data Address Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Data Value Parameter** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Function Code** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Slave Device Received Illegal Information Object Address** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Unknown Object Sent to Outstation** | The destination device received an invalid request. | Major | Illegal Commands |
+| **Usage of a Reserved Function Code** | The source device initiated an invalid request. | Major | Illegal Commands |
+| **Usage of Improper Formatting by Outstation** | The source device initiated an invalid request. | Warning | Illegal Commands |
+| **Usage of Reserved Status Flags (IIN)** | A DNP3 source device (outstation) used the reserved Internal Indicator 2.6. It's recommended to check the device's configuration. | Warning | Illegal Commands |
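The Modbus rows above (**Illegal MODBUS Operation (Function Code Zero)**, **Modbus Exception**, **Usage of a Reserved Function Code**) all hinge on the request's function code. The following sketch shows roughly how a function code might map to these alert titles; the set of allowed codes is an assumed, incomplete list of common public codes, and this is not the detection engine's actual logic:

```python
# Common public Modbus function codes (assumed subset; vendor ranges omitted).
KNOWN_FUNCTION_CODES = {1, 2, 3, 4, 5, 6, 15, 16, 22, 23, 43}


def classify_modbus_request(function_code):
    """Return an alert title for an invalid request, or None if it looks legal."""
    if function_code == 0:
        # Function code zero is never a valid Modbus request.
        return "Illegal MODBUS Operation (Function Code Zero)"
    if function_code >= 0x80:
        # The high bit marks an exception response, not a request.
        return "Modbus Exception"
    if function_code not in KNOWN_FUNCTION_CODES:
        return "Usage of a Reserved Function Code"
    return None
```

For example, a Read Holding Registers request (function code 3) passes, while code 0 or an exception-range code (0x80 and above) maps to an alert title.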
## Malware engine alerts
Malware engine alerts describe detected malicious network activity.
| Title | Description | Severity | Category |
|--|--|--|--|
-| Connection Attempt to Known Malicious IP | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Invalid SMB Message (DoublePulsar Backdoor Implant) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Malicious Domain Name Request | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Malware Test File Detected - EICAR AV Success | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly. Demonstrate what happens when a virus is found, and check internal procedures and reactions when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
-| Suspicion of Conficker Malware | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| Suspicion of Denial Of Service Attack | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial Of Service (DOS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 syn attempts in 1 minute | Critical | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity (BlackEnergy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (DarkComet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Duqu) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Flame) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Havex) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Karagany) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (LightsOut) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Name Queries) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
-| Suspicion of Malicious Activity (Poison Ivy) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Regin) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (Stuxnet) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Malicious Activity (WannaCry) | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
-| Suspicion of NotPetya Malware - Illegal SMB Parameters Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of NotPetya Malware - Illegal SMB Transaction Detected | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
-| Suspicion of Remote Code Execution with PsExec | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicion of Remote Windows Service Management | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicious Executable File Detected on Endpoint | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
-| Suspicious Traffic Detected | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team | Critical | Suspicion of Malicious Activity |
-| Backup Activity with Antivirus Signatures | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup |
+| **Connection Attempt to Known Malicious IP** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Invalid SMB Message (DoublePulsar Backdoor Implant)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Malicious Domain Name Request** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Malware Test File Detected - EICAR AV Success** | An EICAR AV test file was detected in traffic between two devices (over any transport - TCP or UDP). The file isn't malware. It's used to confirm that the antivirus software is installed correctly, to demonstrate what happens when a virus is found, and to check internal procedures and responses when a virus is found. Antivirus software should detect EICAR as if it were a real virus. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Conficker Malware** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| **Suspicion of Denial Of Service Attack** | A source device attempted to initiate an excessive number of new connections to a destination device. This may indicate a Denial of Service (DoS) attack against the destination device, and might interrupt device functionality, affect performance and service availability, or cause unrecoverable errors. <br><br> Threshold: 3000 attempts in 1 minute | Critical | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity (BlackEnergy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (DarkComet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Duqu)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Flame)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Havex)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Karagany)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (LightsOut)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Name Queries)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. <br><br> Threshold: 25 name queries in 1 minute | Major | Suspicion of Malicious Activity |
+| **Suspicion of Malicious Activity (Poison Ivy)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Regin)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (Stuxnet)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Malicious Activity (WannaCry)** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malware |
+| **Suspicion of NotPetya Malware - Illegal SMB Parameters Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of NotPetya Malware - Illegal SMB Transaction Detected** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Critical | Suspicion of Malware |
+| **Suspicion of Remote Code Execution with PsExec** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicion of Remote Windows Service Management** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicious Executable File Detected on Endpoint** | Suspicious network activity was detected. This activity may be associated with an attack exploiting a method used by known malware. | Major | Suspicion of Malicious Activity |
+| **Suspicious Traffic Detected** | Suspicious network activity was detected. This activity may be associated with an attack that triggered known 'Indicators of Compromise' (IOCs). Alert metadata should be reviewed by the security team. | Critical | Suspicion of Malicious Activity |
+| **Backup Activity with Antivirus Signatures** | Traffic detected between the source device and the destination backup server triggered this alert. The traffic includes backup of antivirus software that might contain malware signatures. This is most likely legitimate backup activity. | Warning | Backup |
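Alerts such as **Connection Attempt to Known Malicious IP** and **Malicious Domain Name Request** are driven by matching traffic against indicator-of-compromise (IOC) feeds. A minimal, hypothetical lookup might look like the following; the feed contents and function name are placeholders (the IPs are documentation-range addresses), not the product's data or API:

```python
# Hypothetical indicator feeds; a real deployment would load threat-intel data.
MALICIOUS_IPS = {"203.0.113.7"}          # TEST-NET-3 address used as a stand-in
MALICIOUS_DOMAINS = {"bad.example.com"}  # placeholder domain indicator


def check_connection(dst_ip, queried_domain=None):
    """Map a connection (and optional DNS query) to an alert title, or None."""
    if dst_ip in MALICIOUS_IPS:
        return "Connection Attempt to Known Malicious IP"
    if queried_domain and queried_domain.lower() in MALICIOUS_DOMAINS:
        return "Malicious Domain Name Request"
    return None
```

In practice the security team reviews the alert metadata (source, destination, matched indicator) rather than acting on the match alone, as the table's IOC-based rows note.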
## Operational engine alerts
Operational engine alerts describe detected operational incidents or malfunctioning devices.
| Title | Description | Severity | Category |
|--|--|--|--|
-| An S7 Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| BACNet Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Bad MMS Device State | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, partially operational, or not operational at all. | Major | Operational Issues |
-| Change of Device Configuration | A configuration change was detected on a source device. | Minor | Configuration Changes |
-| Continuous Event Buffer Overflow at Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
-| Controller Reset | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
-| Controller Stop | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| Device Failed to Receive a Dynamic IP Address | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident | Major | Command Failures |
-| Device is Suspected to be Disconnected (Unresponsive) | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
-| EtherNet/IP CIP Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| EtherNet/IP Encapsulation Protocol Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Event Buffer Overflow in Outstation | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
-| Expected Backup Operation Did Not Occur | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
-| GE SRTP Command Failure | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| GE SRTP Stop PLC Command was Sent | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
-| GOOSE Control Block Requires Further Configuration | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
-| GOOSE Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| Honeywell Controller Unexpected Status | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
-|* HTTP Client Error | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
-| Illegal IP Address | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
-| Master-Slave Authentication Error | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
-| MMS Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| No Traffic Detected on Sensor Interface | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
-| OPC UA Server Raised an Event That Requires User's Attention | An OPC UA server sent an event notification to a client. This type of event requires user attention | Major | Operational Issues |
-| OPC UA Service Request Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Outstation Restarted | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
-| Outstation Restarts Frequently | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
-| Outstation's Configuration Changed | A configuration change was detected on a source device. | Major | Configuration Changes |
-| Outstation's Corrupted Configuration Detected | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
-| Profinet DCP Command Failed | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Profinet Device Factory Reset | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
-| * RPC Operation Failed | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
-| Sampled Values Message Dataset Configuration was Changed | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
-| Slave Device Unrecoverable Failure | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
-| Suspicion of Hardware Problems in Outstation | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
-| Suspicion of Unresponsive MODBUS Device | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
-| Traffic Detected on Sensor Interface | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
+| **An S7 Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **BACNet Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Bad MMS Device State** | An MMS Virtual Manufacturing Device (VMD) sent a status message. The message indicates that the server may not be configured correctly, may be only partially operational, or may not be operational at all. | Major | Operational Issues |
+| **Change of Device Configuration** | A configuration change was detected on a source device. | Minor | Configuration Changes |
+| **Continuous Event Buffer Overflow at Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. <br><br> Threshold: 3 occurrences in 10 minutes | Major | Buffer Overflow |
+| **Controller Reset** | A source device sent a reset command to a destination controller. The controller stopped operating temporarily and started again automatically. | Warning | Restart/ Stop Commands |
+| **Controller Stop** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **Device Failed to Receive a Dynamic IP Address** | The source device is configured to receive a dynamic IP address from a DHCP server but didn't receive an address. This indicates a configuration error on the device, or an operational error in the DHCP server. It's recommended to notify the network administrator of the incident. | Major | Command Failures |
+| **Device is Suspected to be Disconnected (Unresponsive)** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: 8 attempts in 5 minutes | Major | Unresponsive |
+| **EtherNet/IP CIP Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **EtherNet/IP Encapsulation Protocol Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Event Buffer Overflow in Outstation** | A buffer overflow event was detected on a source device. The event may cause data corruption, program crashes, or execution of malicious code. | Major | Buffer Overflow |
+| **Expected Backup Operation Did Not Occur** | Expected backup/file transfer activity didn't occur between two devices. This alert may indicate errors in the backup / file transfer process. <br><br> Threshold: 100 seconds | Major | Backup |
+| **GE SRTP Command Failure** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **GE SRTP Stop PLC Command was Sent** | The source device sent a stop command to a destination controller. The controller will stop operating until a start command is sent. | Warning | Restart/ Stop Commands |
+| **GOOSE Control Block Requires Further Configuration** | A source device sent a GOOSE message indicating that the device needs commissioning. This means that the GOOSE control block requires further configuration and GOOSE messages are partially or completely non-operational. | Major | Configuration Changes |
+| **GOOSE Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| **Honeywell Controller Unexpected Status** | A Honeywell Controller sent an unexpected diagnostic message indicating a status change. | Warning | Operational Issues |
+| * **HTTP Client Error** | The source device initiated an invalid request. | Warning | Abnormal HTTP Communication Behavior |
+| **Illegal IP Address** | System detected traffic between a source device and an IP address that is an invalid address. This may indicate wrong configuration or an attempt to generate illegal traffic. | Minor | Abnormal Communication Behavior |
+| **Master-Slave Authentication Error** | The authentication process between a DNP3 source device (primary) and a destination device (outstation) failed. | Minor | Authentication |
+| **MMS Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **No Traffic Detected on Sensor Interface** | A sensor stopped detecting network traffic on a network interface. | Critical | Sensor Traffic |
+| **OPC UA Server Raised an Event That Requires User's Attention** | An OPC UA server sent an event notification to a client. This type of event requires user attention. | Major | Operational Issues |
+| **OPC UA Service Request Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Outstation Restarted** | A cold restart was detected on a source device. This means the device was physically turned off and back on again. | Warning | Restart/ Stop Commands |
+| **Outstation Restarts Frequently** | An excessive number of cold restarts were detected on a source device. This means the device was physically turned off and back on again an excessive number of times. <br><br> Threshold: 2 restarts in 10 minutes | Minor | Restart/ Stop Commands |
+| **Outstation's Configuration Changed** | A configuration change was detected on a source device. | Major | Configuration Changes |
+| **Outstation's Corrupted Configuration Detected** | This DNP3 source device (outstation) reported a corrupted configuration. | Major | Configuration Changes |
+| **Profinet DCP Command Failed** | A server returned an error code. This indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Profinet Device Factory Reset** | A source device sent a factory reset command to a Profinet destination device. The reset command clears Profinet device configurations and stops its operation. | Warning | Restart/ Stop Commands |
+| * **RPC Operation Failed** | A server returned an error code. This alert indicates a server error or an invalid request by a client. | Major | Command Failures |
+| **Sampled Values Message Dataset Configuration was Changed** | A message (identified by protocol ID) dataset was changed on a source device. This means the device will report a different dataset for this message. | Warning | Configuration Changes |
+| **Slave Device Unrecoverable Failure** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Command Failures |
+| **Suspicion of Hardware Problems in Outstation** | An unrecoverable condition error was detected on a source device. This kind of error usually indicates a hardware failure or failure to perform a specific command. | Major | Operational Issues |
+| **Suspicion of Unresponsive MODBUS Device** | A source device didn't respond to a command sent to it. It may have been disconnected when the command was sent. <br><br> Threshold: Minimum of 1 valid response for a minimum of 3 requests within 5 minutes | Minor | Unresponsive |
+| **Traffic Detected on Sensor Interface** | A sensor resumed detecting network traffic on a network interface. | Warning | Sensor Traffic |
\* The alert is disabled by default, but can be enabled. To enable the alert, navigate to the Support page, find the alert, and select **Enable**. You need administrative-level permissions to access the Support page.
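Several of the alerts above fire only when a threshold is crossed within a time window, such as 3 occurrences in 10 minutes. The following sketch is illustrative only (it is not Defender for IoT's implementation, and the class name is invented); it models such a threshold as a sliding window over event timestamps:

```python
from collections import deque

class SlidingWindowThreshold:
    """Illustrative sliding-window counter: trigger when `limit` events
    occur within `window_seconds` (e.g. 3 occurrences in 10 minutes)."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.events = deque()  # timestamps of recent events

    def record(self, timestamp):
        """Record an event; return True if the threshold is crossed."""
        self.events.append(timestamp)
        # Drop events that have fallen out of the window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.limit

# Example: 3 occurrences in 10 minutes (600 seconds).
checker = SlidingWindowThreshold(limit=3, window_seconds=600)
print(checker.record(0))    # False: 1 event in window
print(checker.record(200))  # False: 2 events in window
print(checker.record(700))  # False: event at t=0 expired, 2 in window
print(checker.record(750))  # True: 3 events within 600 s
```

A real detection engine would typically also suppress repeat alerts once triggered; that logic is omitted here.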
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Enter the following parameters:
| Syslog CEF output format | Description |
|--|--|
-| Date and time | Date and time that the syslog server machine received the information. |
| Priority | User.Alert |
-| Hostname | Sensor IP address |
+| Date and time | Date and time that the syslog server machine received the information. (Added by Syslog server) |
+| Hostname | Sensor hostname (Added by Syslog server) |
| Message | CEF:0 <br />Microsoft Defender for IoT/CyberX <br />Sensor name <br />Sensor version <br />Microsoft Defender for IoT Alert <br />Alert title <br />Integer indication of severity. 1=**Warning**, 4=**Minor**, 8=**Major**, or 10=**Critical**.<br />msg= The message of the alert. <br />protocol= The protocol of the alert. <br />severity= **Warning**, **Minor**, **Major**, or **Critical**. <br />type= **Protocol Violation**, **Policy Violation**, **Malware**, **Anomaly**, or **Operational**. <br />UUID= UUID of the alert <br /> start= The time that the alert was detected. <br />Might vary from the time of the syslog server machine, and depends on the time-zone configuration of the forwarding rule. <br />src_ip= IP address of the source device. <br />src_mac= MAC address of the source device. (Optional) <br />dst_ip= IP address of the destination device.<br />dst_mac= MAC address of the destination device. (Optional)<br />cat= The alert group associated with the alert. |

| Syslog LEEF output format | Description |
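As a rough illustration of the CEF message layout described above, the sketch below assembles a CEF alert line from the pipe-delimited header fields and the `key=value` extensions. The function name and sample values (sensor name, version, IP addresses) are invented, and real output must escape special characters such as `|` and `=` per the CEF specification:

```python
def build_cef_alert(sensor_name, sensor_version, title, severity_int, extensions):
    """Assemble a CEF alert line from the header fields and extensions
    described in the table above (illustrative; no escaping applied)."""
    header = "|".join([
        "CEF:0",
        "Microsoft Defender for IoT/CyberX",
        sensor_name,
        sensor_version,
        "Microsoft Defender for IoT Alert",
        title,
        str(severity_int),  # 1=Warning, 4=Minor, 8=Major, 10=Critical
    ])
    extension = " ".join(f"{key}={value}" for key, value in extensions.items())
    return f"{header}|{extension}"

# Invented sample values for illustration.
line = build_cef_alert(
    "sensor-01", "22.x", "Controller Stop", 1,
    {"msg": "Stop command sent to controller", "protocol": "Modbus",
     "severity": "Warning", "type": "Operational",
     "src_ip": "10.0.0.4", "dst_ip": "10.0.0.9",
     "cat": "Restart/ Stop Commands"},
)
print(line)
```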
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
The following alert details are displayed by default in the grid:
| **Source device** | The IP address, MAC, or device name. |
| **Tactics** | The MITRE ATT&CK stage. |
-**To view additional information:**
+### View more alert details
1. Select **Edit columns** from the Alerts page.
1. In the Edit Columns dialog box, select **Add Column** and choose an item to add. The following items are available:
For example, filter alerts by **Category**:
:::image type="content" source="media/how-to-view-manage-cloud-alerts/category-filter.png" alt-text="Screenshot of the Category filter option in Alerts page in the Azure portal.":::
-Supported categories include:
-
- :::column span="":::
- - Abnormal Communication Behavior
- - Abnormal HTTP Communication Behavior
- - Authentication
- - Backup
- - Bandwidth Anomalies
- - Buffer overflow
- - Command Failures
- - Configuration changes
- - Custom Alerts
- - Discovery
- - Firmware change
- - Illegal commands
- :::column-end:::
- :::column span="":::
- - Internet Access
- - Operation Failures
- - Operational issues
- - Programming
- - Remote access
- - Restart/Stop Commands
- - Scan
- - Sensor traffic
- - Suspicion of malicious activity
- - Suspicion of Malware
- - Unauthorized Communication Behavior
- - Unresponsive
- :::column-end:::
- ### Group alerts displayed

Use the **Group by** menu at the top right to collapse the grid into subsections according to specific parameters.
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
| 10.5.3 | 10/2021 | 07/2022 |
| 10.5.2 | 10/2021 | 07/2022 |
+## October 2022
+
+|Service area |Updates |
+|||
+|**OT networks** | [Enhanced OT monitoring alert reference](#enhanced-ot-monitoring-alert-reference) |
+
+### Enhanced OT monitoring alert reference
+
+Our alert reference article now includes the following details for each alert:
+
+- **Alert category**, helpful when you want to investigate alerts that are aggregated by a specific activity or configure SIEM rules to generate incidents based on specific activities
+
+- **Alert threshold**, for relevant alerts. Thresholds indicate the specific point at which an alert is triggered. Modify alert thresholds as needed from the sensor's **Support** page.
+
+For more information, see [OT monitoring alert types and descriptions](alert-engine-messages.md), specifically [Supported alert categories](alert-engine-messages.md#supported-alert-categories).
+ ## September 2022

|Service area |Updates |
Unicode characters are now supported when working with sensor certificate passph
## Next steps
-[Getting started with Defender for IoT](getting-started.md)
+[Getting started with Defender for IoT](getting-started.md)
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
Service methods return strongly typed objects wherever possible. However, becaus
API metrics such as requests, latency, and failure rate can be viewed in the [Azure portal](https://portal.azure.com/).
-From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's menu to bring up the **Metrics** page.
--
-From here, you can view the metrics for your instance and create custom views.
+For information about viewing and managing metrics with Azure Monitor, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). For a full list of API metrics available for Azure Digital Twins, see [Azure Digital Twins API request metrics](how-to-monitor.md#api-request-metrics).
## Next steps
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
az resource create --id <Azure-Digital-Twins-instance-Azure-resource-ID>/endpoin
When an endpoint can't deliver an event within a certain time period or after trying to deliver the event a certain number of times, it can send the undelivered event to a storage account. This process is known as *dead-lettering*.
-You can set up the necessary storage resources using the [Azure portal](https://portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt). However, to create an endpoint with dead-lettering enabled, you'll need use the [Azure Digital Twins CLI](/cli/azure/dt) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
+You can set up the necessary storage resources using the [Azure portal](https://portal.azure.com/#home) or the [Azure Digital Twins CLI](/cli/azure/dt). However, to create an endpoint with dead-lettering enabled, you'll need to use the [Azure Digital Twins CLI](/cli/azure/dt) or [control plane APIs](concepts-apis-sdks.md#overview-control-plane-apis).
To learn more about dead-lettering, see [Endpoints and event routes](concepts-route-events.md#dead-letter-events). For instructions on how to set up an endpoint with dead-lettering, continue through the rest of this section.
When you implement or update a filter, the change may take a few minutes to be r
Routing metrics such as count, latency, and failure rate can be viewed in the [Azure portal](https://portal.azure.com/).
-From the portal homepage, search for your Azure Digital Twins instance to pull up its details. Select the **Metrics** option from the Azure Digital Twins instance's navigation menu on the left to bring up the **Metrics** page.
--
-From here, you can view the metrics for your instance and create custom views.
-
-For more on viewing Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
+For information about viewing and managing metrics with Azure Monitor, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). For a full list of routing metrics available for Azure Digital Twins, see [Azure Digital Twins routing metrics](how-to-monitor.md#routing-metrics).
## Next steps

Read about the different types of event messages you can receive:
-* [Event notifications](concepts-event-notifications.md)
+* [Event notifications](concepts-event-notifications.md)
digital-twins How To Monitor Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-alerts.md
-
-# Mandatory fields.
Title: Monitor with alerts-
-description: Learn how to troubleshoot Azure Digital Twins by setting up alerts based on service metrics.
-- Previously updated : 03/10/2022----
-# Monitor Azure Digital Twins with alerts
-
-In this article, you'll learn how to set up *alerts* in the [Azure portal](https://portal.azure.com). These alerts will notify you when configurable conditions you've defined based on the metrics of your Azure Digital Twins instance are met, allowing you to take important actions.
-
-Azure Digital Twins collects [metrics](how-to-monitor-metrics.md) for your service instance that give information about the state of your resources. You can use these metrics to assess the overall health of Azure Digital Twins service and the resources connected to it.
-
-Alerts proactively notify you when important conditions are found in your metrics data. They allow you to identify and address issues before the users of your system notice them. You can read more about alerts in [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-
-## Turn on alerts
-
-Here's how to enable alerts for your Azure Digital Twins instance:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
-
-2. Select **Alerts** from the menu, then **+ New alert rule**.
-
- :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot of the Azure portal showing the button to create a new alert rule in the Alerts section of an Azure Digital Twin instance." lightbox="media/how-to-monitor-alerts/alerts-pre.png":::
-
-3. On the **Create alert rule** page that follows, you can follow the prompts to define conditions, actions to be triggered, and alert details.
- * **Scope** details should fill automatically with the details for your instance
- * You'll define **Condition** and **Action group** details to customize alert triggers and responses. For more information about this process, see the [Select conditions](#select-conditions) section later in this article.
- * In the **Alert rule details** section, enter a name and optional description for your rule.
- - You can select the **Enable alert rule upon creation** checkbox if you want the alert to become active as soon as it's created.
- - You can select the **Automatically resolve alerts** checkbox if you want to resolve the alert when the condition isn't met anymore.
- - This section is also where you select a **subscription**, **resource group**, and **Severity** level.
-
-4. Select the **Create alert rule** button to create your alert rule.
-
- :::image type="content" source="media/how-to-monitor-alerts/create-alert-rule.png" alt-text="Screenshot of the Azure portal showing the Create Alert Rule page with sections for scope, condition, action group, and alert rule details." lightbox="media/how-to-monitor-alerts/create-alert-rule.png":::
-
-For a guided walkthrough of filling out these fields, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). Below are some examples of what the steps will look like for Azure Digital Twins.
-
-## Select conditions
-
-Here's an excerpt from the **Select condition** process illustrating what types of alert signals are available for Azure Digital Twins. On this page you can filter the type of signal, and select the signal that you want from a list.
--
-After selecting a signal, you'll be asked to configure the logic of the alert. You can filter on a dimension, set a threshold value for your alert, and set the frequency of checks for the condition. Here's an example of setting up an alert for when the average Routing Failure Rate metric goes above 5%.
--
-## Verify success
-
-After setting up alerts, they'll show up back on the **Alerts** page for your instance.
-
-
-## Next steps
-
-* For more information about alerts with Azure Monitor, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md).
-* For information about the Azure Digital Twins metrics, see [Monitor with metrics](how-to-monitor-metrics.md).
-* To see how to enable diagnostics logging for your Azure Digital Twins metrics, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor-metrics.md
-
-# Mandatory fields.
Title: Monitor with metrics-
-description: Learn how to view Azure Digital Twins metrics in Azure Monitor to troubleshoot and oversee your instance.
-- Previously updated : 03/10/2022---
-# Optional fields. Don't forget to remove # if you need a field.
-#
-#
-#
--
-# Monitor Azure Digital Twins with metrics
-
-The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of the Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what is going on with your Azure Digital Twins and help analyze the root causes of issues without needing to contact Azure support.
-
-Metrics are enabled by default. You can view Azure Digital Twins metrics from the [Azure portal](https://portal.azure.com).
-
-## View the metrics
-
-1. Create an Azure Digital Twins instance. You can find instructions on how to set up an Azure Digital Twins instance in [Set up an instance and authentication](how-to-set-up-instance-portal.md).
-
-2. Find your Azure Digital Twins instance in the [Azure portal](https://portal.azure.com) (you can open the page for it by typing its name into the portal search bar).
-
- From the instance's menu, select **Metrics**.
-
- :::image type="content" source="media/how-to-monitor-metrics/azure-digital-twins-metrics.png" alt-text="Screenshot showing the metrics page for Azure Digital Twins in the Azure portal.":::
-
- This page displays the metrics for your Azure Digital Twins instance. You can also create custom views of your metrics by selecting the ones you want to see from the list.
-
-3. You can choose to send your metrics data to an Event Hubs endpoint or an Azure Storage account by selecting **Diagnostics settings** from the menu, then **Add diagnostic setting**.
-
- :::image type="content" source="media/how-to-monitor-diagnostics/diagnostic-settings.png" alt-text="Screenshot showing the diagnostic settings page and button to add in the Azure portal.":::
-
- For more information about this process, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
-
-4. You can choose to set up alerts for your metrics data by selecting **Alerts** from the menu, then **+ New alert rule**.
- :::image type="content" source="media/how-to-monitor-alerts/alerts-pre.png" alt-text="Screenshot showing the Alerts page and button to add in the Azure portal.":::
-
- For more information about this process, see [Monitor with alerts](how-to-monitor-alerts.md).
-
-## List of metrics
-
-Azure Digital Twins provides several metrics to give you an overview of the health of your instance and its associated resources. You can also combine information from multiple metrics to paint a bigger picture of the state of your instance.
-
-The following tables describe the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
-
-#### Metrics for tracking service limits
-
-You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
-
-To set up tracking, use the [alerts](how-to-monitor-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
-| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
-
-#### API request metrics
-
-Metrics having to do with API requests:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
-| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
-| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
-
-#### Billing metrics
-
-Metrics having to do with billing:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
-| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
-| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
-
-For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
-
-#### Ingress metrics
-
-Metrics having to do with data ingress:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
-| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
-| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
-
-#### Routing metrics
-
-Metrics having to do with routing:
-
-| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
-| | | | | | |
-| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-
-## Dimensions
-
-Dimensions help identify more details about the metrics. Some of the routing metrics provide information per endpoint. The table below lists possible values for these dimensions.
-
-| Dimension | Values |
-| | |
-| Authentication | OAuth |
-| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
-| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
-| Protocol | HTTPS |
-| Result | Success, <br>Failure |
-| Status Code | 200, 404, 500, and so on. |
-| Status Code Class | 2xx, 4xx, 5xx, and so on. |
-| Status Text | Internal Server Error, Not Found, and so on. |
-
-## Next steps
-
-To learn more about managing recorded metrics for Azure Digital Twins, see [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
digital-twins How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor.md
+
+# Mandatory fields.
+ Title: Monitor your instance
+
+description: Monitor Azure Digital Twins instances with metrics, alerts, and diagnostics.
++ Last updated : 10/31/2022+++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Monitor Azure Digital Twins with metrics, alerts, and diagnostics
+
+Azure Digital Twins integrates with [Azure Monitor](../azure-monitor/overview.md) to provide metrics and diagnostic information that you can use to monitor your Azure Digital Twins resources. **Metrics** are enabled by default, and give you information about the state of Azure Digital Twins resources in your Azure subscription. **Alerts** can proactively notify you when certain conditions are found in your metrics data. You can also collect **diagnostic logs** for your service instance to monitor its performance, access, and other data.
+
+These monitoring features can help you assess the overall health of the Azure Digital Twins service and the resources connected to it. You can use them to understand what is happening in your Azure Digital Twins instance, and analyze root causes of issues without needing to contact Azure support.
+
+They can be accessed from the [Azure portal](https://portal.azure.com), grouped under the **Monitoring** heading for the Azure Digital Twins resource.
++
+## Metrics and alerts
+
+For general information about viewing Azure resource **metrics**, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) in the Azure Monitor documentation. For general information about configuring **alerts** for Azure metrics, see [Create a new alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric).
+
+The rest of this section describes the metrics tracked by each Azure Digital Twins instance, and how each metric relates to the overall status of your instance.
+
+### Metrics for tracking service limits
+
+You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
+
+To set up tracking, use the [alerts](../azure-monitor/alerts/alerts-overview.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
+| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
+
+### API request metrics
+
+Metrics having to do with API requests:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text |
+| ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
+| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
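The failure-rate metrics above are percentages of internal error (500) responses among all requests. As an illustrative sketch of that calculation (the service computes the metric itself; this helper function is invented):

```python
def failure_rate_percent(status_codes):
    """Percentage of requests that returned an internal error (500),
    mirroring how the failure-rate metrics above are described."""
    if not status_codes:
        return 0.0
    failures = sum(1 for code in status_codes if code == 500)
    return 100.0 * failures / len(status_codes)

# One 500 response out of four requests is a 25% failure rate.
print(failure_rate_percent([200, 200, 500, 404]))  # 25.0
```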
+
+### Billing metrics
+
+Metrics having to do with billing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
+
+For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
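The 1 KB increment rule for `BillingMessagesProcessed` can be sketched as a small helper. This is illustrative only; `billed_messages` is a hypothetical name, not part of any Azure SDK or billing API.

```python
import math

# Minimal sketch of the billing rule above: payloads are counted in
# 1 KB increments, and even an empty payload counts as one message.
def billed_messages(payload_bytes: int, increment: int = 1024) -> int:
    return max(1, math.ceil(payload_bytes / increment))

# A 1.5 KB response body is billed as 2 operations:
print(billed_messages(1536))  # 2
```

Under this rule, a 2.5 KB message would similarly count as three messages.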
+
+### Ingress metrics
+
+Metrics having to do with data ingress:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| --- | --- | --- | --- | --- | --- |
+| IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result |
+| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
+| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+
+### Routing metrics
+
+Metrics having to do with routing:
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| --- | --- | --- | --- | --- | --- |
+| MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hubs, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+
+### Metric dimensions
+
+Dimensions help identify more details about the metrics. Some of the routing metrics provide information per endpoint. The table below lists possible values for these dimensions.
+
+| Dimension | Values |
+| --- | --- |
+| Authentication | OAuth |
+| Operation (for API Requests) | Microsoft.DigitalTwins/digitaltwins/delete, <br>Microsoft.DigitalTwins/digitaltwins/write, <br>Microsoft.DigitalTwins/digitaltwins/read, <br>Microsoft.DigitalTwins/eventroutes/read, <br>Microsoft.DigitalTwins/eventroutes/write, <br>Microsoft.DigitalTwins/eventroutes/delete, <br>Microsoft.DigitalTwins/models/read, <br>Microsoft.DigitalTwins/models/write, <br>Microsoft.DigitalTwins/models/delete, <br>Microsoft.DigitalTwins/query/action |
+| Endpoint Type | Event Grid, <br>Event Hubs, <br>Service Bus |
+| Protocol | HTTPS |
+| Result | Success, <br>Failure |
+| Status Code | 200, 404, 500, and so on. |
+| Status Code Class | 2xx, 4xx, 5xx, and so on. |
+| Status Text | Internal Server Error, Not Found, and so on. |
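As an illustration of how the **Status Code Class** dimension groups values, a code maps to its class by its first digit. This is a hypothetical helper for clarity, not something the service exposes.

```python
# 200 -> "2xx", 404 -> "4xx", 500 -> "5xx": the grouping used by the
# Status Code Class dimension above.
def status_code_class(code: int) -> str:
    return f"{code // 100}xx"

print(status_code_class(500))  # 5xx
```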
+
+## Diagnostics logs
+
+For general information about Azure **diagnostics settings**, including how to enable them, see [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md). For information about querying diagnostic logs using **Log Analytics**, see [Overview of Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-overview.md).
+
+The rest of this section describes the diagnostic log categories that Azure Digital Twins can collect, and their schemas.
+
+### Log categories
+
+Here are more details about the categories of logs that Azure Digital Twins collects.
+
+| Log category | Description |
+| --- | --- |
+| ADTModelsOperation | Log all API calls related to Models |
+| ADTQueryOperation | Log all API calls related to Queries |
+| ADTEventRoutesOperation | Log all API calls related to Event Routes and egress of events from Azure Digital Twins to an endpoint service like Event Grid, Event Hubs, and Service Bus |
+| ADTDigitalTwinsOperation | Log all API calls related to individual twins |
+
+Each log category consists of operations of write, read, delete, and action. These categories map to REST API calls as follows:
+
+| Event type | REST API operations |
+| --- | --- |
+| Write | PUT and PATCH |
+| Read | GET |
+| Delete | DELETE |
+| Action | POST |
+
+Here's a comprehensive list of the operations and corresponding [Azure Digital Twins REST API calls](/rest/api/azure-digitaltwins/) that are logged in each category.
+
+>[!NOTE]
+> Each log category contains several operations/REST API calls. In the table below, each log category maps to all operations/REST API calls underneath it until the next log category is listed.
+
+| Log category | Operation | REST API calls and other events |
+| --- | --- | --- |
+| ADTModelsOperation | Microsoft.DigitalTwins/models/write | Digital Twin Models Update API |
+| | Microsoft.DigitalTwins/models/read | Digital Twin Models Get By ID and List APIs |
+| | Microsoft.DigitalTwins/models/delete | Digital Twin Models Delete API |
+| | Microsoft.DigitalTwins/models/action | Digital Twin Models Add API |
+| ADTQueryOperation | Microsoft.DigitalTwins/query/action | Query Twins API |
+| ADTEventRoutesOperation | Microsoft.DigitalTwins/eventroutes/write | Event Routes Add API |
+| | Microsoft.DigitalTwins/eventroutes/read | Event Routes Get By ID and List APIs |
+| | Microsoft.DigitalTwins/eventroutes/delete | Event Routes Delete API |
+| | Microsoft.DigitalTwins/eventroutes/action | Failure while attempting to publish events to an endpoint service (not an API call) |
+| ADTDigitalTwinsOperation | Microsoft.DigitalTwins/digitaltwins/write | Digital Twins Add, Add Relationship, Update, Update Component |
+| | Microsoft.DigitalTwins/digitaltwins/read | Digital Twins Get By ID, Get Component, Get Relationship by ID, List Incoming Relationships, List Relationships |
+| | Microsoft.DigitalTwins/digitaltwins/delete | Digital Twins Delete, Delete Relationship |
+| | Microsoft.DigitalTwins/digitaltwins/action | Digital Twins Send Component Telemetry, Send Telemetry |
+
+### Log schemas
+
+Each log category has a schema that defines how events in that category are reported. Each individual log entry is stored as text and formatted as a JSON blob. The fields in the log and example JSON bodies are provided for each log type below.
+
+`ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation` use a consistent API log schema. `ADTEventRoutesOperation` extends the schema to contain an `endpointName` field in properties.
+
+#### API log schemas
+
+This log schema is consistent for `ADTDigitalTwinsOperation`, `ADTModelsOperation`, and `ADTQueryOperation`. The same schema is also used for `ADTEventRoutesOperation`, except for the `Microsoft.DigitalTwins/eventroutes/action` operation name (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+The schema contains information pertinent to API calls to an Azure Digital Twins instance.
+
+Here are the field and property descriptions for API logs.
+
+| Field name | Data type | Description |
+| --- | --- | --- |
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `OperationVersion` | String | The API Version used during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultType` | String | Outcome of the event |
+| `ResultSignature` | String | Http status code for the event |
+| `ResultDescription` | String | Additional details about the event |
+| `DurationMs` | String | How long it took to perform the event in milliseconds |
+| `CallerIpAddress` | String | A masked source IP address for the event |
+| `CorrelationId` | Guid | Unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `RequestUri` | Uri | The endpoint used during the event |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+
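The four trace fields correspond to the parts of a W3C `traceparent` header. Assuming version `00`, the header can be reconstructed from a log entry's values; this is an illustrative sketch, not an output of the service.

```python
# Rebuild a W3C Trace Context traceparent header (version 00) from the
# TraceId, SpanId, and TraceFlags fields described in the table above.
trace = {
    "traceId": "95ff77cfb300b04f80d83e64d13831e7",
    "spanId": "b630da57026dd046",
    "traceFlags": "01",
}

traceparent = f"00-{trace['traceId']}-{trace['spanId']}-{trace['traceFlags']}"
print(traceparent)  # 00-95ff77cfb300b04f80d83e64d13831e7-b630da57026dd046-01
```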
+Below are example JSON bodies for these types of logs.
+
+##### ADTDigitalTwinsOperation
+
+```json
+{
+ "time": "2020-03-14T21:11:14.9918922Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/digitaltwins/write",
+ "operationVersion": "2020-10-31",
+ "category": "DigitalTwinOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": 8,
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "2f6a8e64-94aa-492a-bc31-16b9f0b16ab3",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/digitaltwins/factory-58d81613-2e54-4faa-a930-d980e6e2a884?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
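Because each entry is a JSON blob, the schema fields are easy to inspect programmatically. The following sketch parses a trimmed copy of the example above; the field names follow the API log schema table.

```python
import json

# Trimmed copy of the example ADTDigitalTwinsOperation entry above.
entry = json.loads("""
{
  "operationName": "Microsoft.DigitalTwins/digitaltwins/write",
  "resultType": "Success",
  "resultSignature": "200",
  "durationMs": 8
}
""")

print(entry["operationName"], entry["durationMs"])  # Microsoft.DigitalTwins/digitaltwins/write 8
```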
+
+##### ADTModelsOperation
+
+```json
+{
+ "time": "2020-10-29T21:12:24.2337302Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/models/write",
+ "operationVersion": "2020-10-31",
+ "category": "ModelsOperation",
+ "resultType": "Success",
+ "resultSignature": "201",
+ "resultDescription": "",
+ "durationMs": "80",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "9dcb71ea-bb6f-46f2-ab70-78b80db76882",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/Models?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+##### ADTQueryOperation
+
+```json
+{
+ "time": "2020-12-04T21:11:44.1690031Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/query/action",
+ "operationVersion": "2020-10-31",
+ "category": "QueryOperation",
+ "resultType": "Success",
+ "resultSignature": "200",
+ "resultDescription": "",
+ "durationMs": "314",
+ "callerIpAddress": "13.68.244.*",
+ "correlationId": "1ee2b6e9-3af4-4873-8c7c-1a698b9ac334",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/query?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
+
+##### ADTEventRoutesOperation
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that isn't of `Microsoft.DigitalTwins/eventroutes/action` type (for more information about that schema, see the next section, [Egress log schemas](#egress-log-schemas)).
+
+```json
+ {
+ "time": "2020-10-30T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/write",
+ "operationVersion": "2020-10-31",
+ "category": "EventRoutesOperation",
+ "resultType": "Success",
+ "resultSignature": "204",
+ "resultDescription": "",
+ "durationMs": 42,
+ "callerIpAddress": "212.100.32.*",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "https://myinstancename.api.scus.digitaltwins.azure.net/EventRoutes/egressRouteForEventHub?api-version=2020-10-31",
+ "properties": {},
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+ }
+```
+
+#### Egress log schemas
+
+The following table describes the schema for `ADTEventRoutesOperation` logs specific to the `Microsoft.DigitalTwins/eventroutes/action` operation name. These logs contain details related to exceptions and the API operations around egress endpoints connected to an Azure Digital Twins instance.
+
+|Field name | Data type | Description |
+| --- | --- | --- |
+| `Time` | DateTime | The date and time that this event occurred, in UTC |
+| `ResourceId` | String | The Azure Resource Manager Resource ID for the resource where the event took place |
+| `OperationName` | String | The type of action being performed during the event |
+| `Category` | String | The type of resource being emitted |
+| `ResultDescription` | String | Additional details about the event |
+| `CorrelationId` | Guid | Customer provided unique identifier for the event |
+| `ApplicationId` | Guid | Application ID used in bearer authorization |
+| `Level` | Int | The logging severity of the event |
+| `Location` | String | The region where the event took place |
+| `TraceId` | String | `TraceId`, as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of the whole trace used to uniquely identify a distributed trace across systems. |
+| `SpanId` | String | `SpanId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). The ID of this request in the trace. |
+| `ParentId` | String | `ParentId` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). A request without a parent ID is the root of the trace. |
+| `TraceFlags` | String | `TraceFlags` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Controls tracing flags such as sampling, trace level, and so on. |
+| `TraceState` | String | `TraceState` as part of [W3C's Trace Context](https://www.w3.org/TR/trace-context/). Additional vendor-specific trace identification information to span across different distributed tracing systems. |
+| `EndpointName` | String | The name of the egress endpoint created in Azure Digital Twins |
+
+Here's an example JSON body for an `ADTEventRoutesOperation` that is of `Microsoft.DigitalTwins/eventroutes/action` type.
+
+```json
+{
+ "time": "2020-11-05T22:18:38.0708705Z",
+ "resourceId": "/SUBSCRIPTIONS/BBED119E-28B8-454D-B25E-C990C9430C8F/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.DIGITALTWINS/DIGITALTWINSINSTANCES/MYINSTANCENAME",
+ "operationName": "Microsoft.DigitalTwins/eventroutes/action",
+ "operationVersion": "",
+ "category": "EventRoutesOperation",
+ "resultType": "",
+ "resultSignature": "",
+ "resultDescription": "Unable to send EventHub message to [myPath] for event Id [f6f45831-55d0-408b-8366-058e81ca6089].",
+ "durationMs": -1,
+ "callerIpAddress": "",
+ "correlationId": "7f73ab45-14c0-491f-a834-0827dbbf7f8e",
+ "identity": {
+ "claims": {
+ "appId": "872cd9fa-d31f-45e0-9eab-6e460a02d1f1"
+ }
+ },
+ "level": "4",
+ "location": "southcentralus",
+ "uri": "",
+ "properties": {
+ "endpointName": "myEventHub"
+ },
+ "traceContext": {
+ "traceId": "95ff77cfb300b04f80d83e64d13831e7",
+ "spanId": "b630da57026dd046",
+ "parentId": "9f0de6dadae85945",
+ "traceFlags": "01",
+ "tracestate": "k1=v1,k2=v2"
+ }
+}
+```
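The `endpointName` property is what distinguishes egress entries, which makes it straightforward to group routing failures by endpoint. The sketch below parses a trimmed copy of the example above.

```python
import json

# Trimmed copy of the egress failure entry above; endpointName lives
# under "properties" in eventroutes/action logs.
egress = json.loads("""
{
  "operationName": "Microsoft.DigitalTwins/eventroutes/action",
  "properties": { "endpointName": "myEventHub" }
}
""")

print(egress["properties"]["endpointName"])  # myEventHub
```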
+
+## Next steps
+
+Read more about Azure Monitor and its capabilities in the [Azure Monitor documentation](../azure-monitor/overview.md).
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
When a limit is reached, any requests beyond it are throttled by the service, wh
To manage the throttling, here are some recommendations for working with limits. * Use retry logic. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) implement retry logic for failed requests, so if you're working with a provided SDK, this functionality is already built-in. Otherwise, consider implementing retry logic in your own application. The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying.
-* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](how-to-monitor-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Monitor with alerts](how-to-monitor-alerts.md). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
+* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](../azure-monitor/essentials/data-platform-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Create a new alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
* Deploy at scale across multiple instances. Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances. >[!NOTE]
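The retry recommendation above hinges on honoring the `Retry-After` header in throttled responses. A minimal sketch of that decision, assuming the header carries a delay in seconds (`Retry-After` can also be an HTTP date in general; handling that case is omitted here, and the function name is hypothetical):

```python
# Decide how long to wait before retrying a throttled request, honoring
# a Retry-After header when one is present and numeric.
def wait_before_retry(headers: dict, default_seconds: float = 1.0) -> float:
    value = headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        return default_seconds

print(wait_before_retry({"Retry-After": "5"}))  # 5.0
```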
digital-twins Troubleshoot Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-performance.md
If you're experiencing delays or other performance issues when working with Azur
## Isolate the source of the delay
-Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For instructions on how to view Azure Monitor metrics for an Azure Digital Twins instance, see [Monitor with metrics](how-to-monitor-metrics.md).
+Determine whether the delay is coming from Azure Digital Twins or another service in your solution. To investigate this delay, you can use the **API Latency** metric in [Azure Monitor](../azure-monitor/essentials/quick-monitor-azure-resource.md) through the Azure portal. For more about Azure Monitor metrics for Azure Digital Twins, see [Azure Digital Twins metrics and alerts](how-to-monitor.md#metrics-and-alerts).
## Check regions
If your solution uses Azure Digital Twins in combination with other Azure servic
## Check logs
-Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Monitor with diagnostic logs](how-to-monitor-diagnostics.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
+Azure Digital Twins can collect logs for your service instance to help monitor its performance, among other data. Logs can be sent to [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) or your custom storage mechanism. To enable logging in your instance, use the instructions in [Diagnostic settings in Azure Monitor](../azure-monitor/essentials/diagnostic-settings.md). You can analyze the timestamps on the logs to measure latencies, evaluate if they're consistent, and understand their source.
## Check API frequency
If you're still experiencing performance issues after troubleshooting with the s
Follow these steps:
-1. Gather [metrics](how-to-monitor-metrics.md) and [logs](how-to-monitor-diagnostics.md) for your instance.
+1. Gather [metrics](how-to-monitor.md#metrics-and-alerts) and [logs](how-to-monitor.md#diagnostics-logs) for your instance.
2. Navigate to [Azure Help + support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal. Use the prompts to provide details of your issue, see recommended solutions, share your metrics/log files, and submit any other information that the support team can use to help investigate your issue. For more information on creating support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md). ## Next steps
-Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting:
-* [Monitor with metrics](how-to-monitor-metrics.md)
-* [Monitor with diagnostics logs](how-to-monitor-diagnostics.md).
-* [Monitor with alerts](how-to-monitor-alerts.md)
-* [Monitor resource health](how-to-monitor-resource-health.md)
+Read about other ways to monitor your Azure Digital Twins instance to help with troubleshooting in [Monitor your Azure Digital Twins instance](how-to-monitor.md).
digital-twins Troubleshoot Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/troubleshoot-resource-health.md
+
+# Mandatory fields.
+ Title: Troubleshoot resource health
+
+description: Learn how to use Azure Resource Health to check the health of your Azure Digital Twins instance.
++ Last updated : 11/1/2022++++
+# Optional fields. Don't forget to remove # if you need a field.
+#
+#
+#
++
+# Troubleshoot Azure Digital Twins resource health
+
+[Azure Service Health](../service-health/index.yml) is a suite of experiences that can help you diagnose and get support for service problems that affect your Azure resources. It contains resource health, service health, and status information, and reports on both current and past health information.
+
+## Use Azure Resource Health
+
+[Azure Resource Health](../service-health/resource-health-overview.md) can help you monitor whether your Azure Digital Twins instance is up and running. You can also use it to learn whether a regional outage is impacting the health of your instance.
+
+To check the health of your instance, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Digital Twins instance. You can find it by typing its name into the portal search bar.
+
+2. From your instance's menu, select **Resource health** under **Support + troubleshooting**. This will take you to the page for viewing resource health history.
+
+ :::image type="content" source="media/troubleshoot-resource-health/resource-health.png" alt-text="Screenshot showing the 'Resource health' page. There is a 'Health history' section showing a daily report from the last nine days.":::
+
+In the image above, this instance is showing as **Available**, and has been for the past nine days. To learn more about the Available status and the other status types that may appear, see [Resource Health overview](../service-health/resource-health-overview.md).
+
+You can also learn more about the different checks that go into resource health for different types of Azure resources in [Resource types and health checks in Azure resource health](../service-health/resource-health-checks-resource-types.md).
+
+## Use Azure Service Health
+
+[Azure Service Health](../service-health/service-health-overview.md) can help you check the health of the entire Azure Digital Twins service in a certain region, and be aware of events like ongoing service issues and upcoming planned maintenance.
+
+To check service health, sign in to the [Azure portal](https://portal.azure.com) and navigate to the **Service Health** service. You can find it by typing "service health" into the portal search bar.
+
+You can then filter service issues by subscription, region, and service.
+
+For more information on using Azure Service Health, see [Service Health overview](../service-health/service-health-overview.md).
+
+## Use Azure status
+
+The [Azure status](../service-health/azure-status-overview.md) page provides a global view of the health of Azure services and regions. While Azure Service Health and Azure Resource Health are personalized to your specific resource, Azure status has a larger scope and can be useful to understand incidents with wide-ranging impact.
+
+To check Azure status, navigate to the [Azure status](https://azure.status.microsoft/status/) page. The page displays a table of Azure services along with health indicators per region. You can view Azure Digital Twins by searching for its table entry on the page.
+
+For more information on using the Azure status page, see [Azure status overview](../service-health/azure-status-overview.md).
+
+## Next steps
+
+Read about other ways to monitor your Azure Digital Twins instance in [Monitor your Azure Digital Twins instance](how-to-monitor.md).
education-hub Set Up Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/azure-dev-tools-teaching/set-up-access.md
Last updated 06/30/2020
-# Setting up access for Azure Dev tools
+# Setting up access for Azure Dev Tools for Teaching
There are two ways to access your subscription so that you can deploy software to your students and outfit your labs: 1. By downloading software and keys from the Visual Studio Subscription Portal.
event-grid Compare Messaging Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/compare-messaging-services.md
Title: Compare Azure messaging services description: Describes the three Azure messaging services - Azure Event Grid, Event Hubs, and Service Bus. Recommends which service to use for different scenarios. Previously updated : 04/26/2022 Last updated : 11/01/2022 # Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
An event is a lightweight notification of a condition or a state change. The pub
Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn't have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself. Discrete events are ideal for serverless solutions that need to scale.
-A series of events report a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
+A series of events reports a condition and is analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
### Message A message is raw data produced by a service to be consumed or stored elsewhere. The message contains the data that triggered the message pipeline. The publisher of the message has an expectation about how the consumer handles the message. A contract exists between the two sides. For example, the publisher sends a message with the raw data, and expects the consumer to create a file from that data and send a response when the work is done.
It has the following characteristics:
- Serverless - At least once delivery of an event
-Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever that is deployed, on-prem or on the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
+Event Grid is offered in two editions: **Azure Event Grid**, a fully managed PaaS service on Azure, and **Event Grid on Kubernetes with Azure Arc**, which lets you use Event Grid on your Kubernetes cluster wherever that is deployed, on-premises or on the cloud. For more information, see [Azure Event Grid overview](overview.md) and [Event Grid on Kubernetes with Azure Arc overview](./kubernetes/overview.md).
## Azure Event Hubs Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. It facilitates the capture, retention, and replay of telemetry and event stream data. The data can come from many concurrent sources. Event Hubs allows telemetry and event data to be made available to various stream-processing infrastructures and analytics services. It's available either as data streams or bundled event batches. This service provides a single solution that enables rapid data retrieval for real-time processing, and repeated replay of stored raw data. It can capture the streaming data into a file for processing and analysis.
It has the following characteristics:
For more information, see [Event Hubs overview](../event-hubs/event-hubs-about.md). ## Azure Service Bus
-Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that cannot be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.
+Service Bus is a fully managed enterprise message broker with message queues and publish-subscribe topics. The service is intended for enterprise applications that require transactions, ordering, duplicate detection, and instantaneous consistency. Service Bus enables cloud-native applications to provide reliable state transition management for business processes. When handling high-value messages that can't be lost or duplicated, use Azure Service Bus. This service also facilitates highly secure communication across hybrid cloud solutions and can connect existing on-premises systems to cloud solutions.
Service Bus is a brokered messaging system. It stores messages in a "broker" (for example, a queue) until the consuming party is ready to receive the messages. It has the following characteristics:
For more information, see [Service Bus overview](../service-bus-messaging/servic
## Use the services together

In some cases, you use the services side by side to fulfill distinct roles. For example, an e-commerce site can use Service Bus to process the order, Event Hubs to capture site telemetry, and Event Grid to respond to events, such as an item being shipped.
-In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to a data warehouse, see [Stream big data into a data warehouse](event-grid-event-hubs-integration.md). The following image shows the workflow for streaming the data.
+In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to Azure Synapse Analytics, see [Stream big data into Azure Synapse Analytics](event-grid-event-hubs-integration.md). The following image shows the workflow for streaming the data.
## Next steps

See the following articles:
event-grid Subscribe To Sap Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-sap-events.md
Last updated 10/25/2022
# Subscribe to events published by SAP
-This article describes steps to subscribe to events published by an SAP S/4HANA system.
+This article describes steps to subscribe to events published by an SAP S/4HANA system.
+
+> [!NOTE]
+> See the [New SAP events on Azure Event Grid](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/new-sap-events-on-azure-event-grid/ba-p/3663372) blog post for an announcement of this feature.
## High-level steps
SAP's BETA program started in October 2022 and will last a couple of months. The
If you have any questions, you can contact us at <a href="mailto:ask-grid-and-ms-sap@microsoft.com">ask-grid-and-ms-sap@microsoft.com</a>.

## Next steps
-See [subscribe to partner events](subscribe-to-partner-events.md).
+See [subscribe to partner events](subscribe-to-partner-events.md).
event-hubs Event Hubs Capture Enable Through Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-capture-enable-through-portal.md
Title: Event Hubs - Capture streaming events using Azure portal description: This article describes how to enable capturing of events streaming through Azure Event Hubs by using the Azure portal. Previously updated : 10/27/2021 Last updated : 10/27/2022
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
Title: 'Quickstart: Data streaming with Azure Event Hubs using the Kafka protocol' description: 'Quickstart: This article provides information on how to stream into Azure Event Hubs using the Kafka protocol and APIs.' Previously updated : 09/26/2022 Last updated : 11/02/2022
When you create an Event Hubs namespace, the Kafka endpoint for the namespace is
## Send and receive messages with Kafka in Event Hubs
+### [Connection string](#tab/connection-string)
+
+1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
+
+1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
+
+ ```properties
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the producer code and stream events into Event Hubs:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestProducer"
+ ```
+
+1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
+
+1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
+
+ ```properties
+ bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+ security.protocol=SASL_SSL
+ sasl.mechanism=PLAIN
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+ ```
+
+ > [!IMPORTANT]
+ > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+1. Run the consumer code and process events from event hub using your Kafka clients:
+
+ ```shell
+ mvn clean package
+ mvn exec:java -Dexec.mainClass="TestConsumer"
+ ```
+
+If your Event Hubs Kafka cluster has events, you'll now start receiving them from the consumer.
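The producer and consumer configuration files above share the same shape, so you can generate them from a connection string. Here's a minimal Bash sketch; the `NAMESPACE` and `EH_CONN` values are hypothetical placeholders, not real credentials:

```shell
# Hypothetical placeholder values -- substitute your own namespace and
# connection string before using this.
NAMESPACE="mynamespace"
EH_CONN="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX"

# Write a Kafka client config in the format shown above. "$ConnectionString"
# is a literal username expected by Event Hubs, so its dollar sign is escaped.
cat > producer.config <<EOF
bootstrap.servers=${NAMESPACE}.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="\$ConnectionString" password="${EH_CONN}";
EOF

echo "wrote producer.config"
```

The same four lines, written to *consumer.config*, configure the consumer.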
+ ### [Passwordless (Recommended)](#tab/passwordless) 1. Enable a system-assigned managed identity for the virtual machine. For more information about configuring managed identity on a VM, see [Configure managed identities for Azure resources on a VM using the Azure portal](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity). Managed identities for Azure resources provide Azure services with an automatically managed identity in Azure Active Directory. You can use this identity to authenticate to any service that supports Azure AD authentication, without having credentials in your code.
Azure Event Hubs supports using Azure Active Directory (Azure AD) to authorize r
:::image type="content" source="./media/event-hubs-quickstart-kafka-enabled-event-hubs/producer-consumer-output.png" alt-text="Screenshot showing the Producer and Consumer app windows showing the events.":::
-### [Connection string](#tab/connection-string)
-
-1. Clone the [Azure Event Hubs for Kafka repository](https://github.com/Azure/azure-event-hubs-for-kafka).
-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/producer*.
-
-1. Update the configuration details for the producer in *src/main/resources/producer.config* as follows:
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
-1. Run the producer code and stream events into Event Hubs:
-
- ```shell
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestProducer"
- ```
-
-1. Navigate to *azure-event-hubs-for-kafka/quickstart/java/consumer*.
-
-1. Update the configuration details for the consumer in *src/main/resources/consumer.config* as follows:
-
- ```xml
- bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
- security.protocol=SASL_SSL
- sasl.mechanism=PLAIN
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
- ```
-
- > [!IMPORTANT]
- > Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
-
-1. Run the consumer code and process events from event hub using your Kafka clients:
-
- ```java
- mvn clean package
- mvn exec:java -Dexec.mainClass="TestConsumer"
- ```
-
-If your Event Hubs Kafka cluster has events, you will now start receiving them from the consumer.
- ## Next steps
governance Get Compliance Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/get-compliance-data.md
Title: Get policy compliance data description: Azure Policy evaluations and effects determine compliance. Learn how to get the compliance details of your Azure resources. Previously updated : 10/26/2022 Last updated : 11/02/2022
either **Compliant**, **Non-compliant**, or **Exempt**. If either **name** or **
property in the definition, then all included and non-exempt resources are considered applicable and are evaluated.
-The compliance percentage is determined by dividing **Compliant** and **Exempt** resources by _total
+The compliance percentage is determined by dividing the number of **Compliant**, **Exempt**, and **Unknown** resources by _total
resources_. _Total resources_ is defined as the sum of the **Compliant**, **Non-compliant**, **Exempt**, and **Conflicting** resources. The overall compliance numbers are the sum of distinct
-resources that are **Compliant** or **Exempt** divided by the sum of all distinct resources. In the
+resources that are **Compliant**, **Exempt**, or **Unknown** divided by the sum of all distinct resources. In the
image below, there are 20 distinct resources that are applicable and only one is **Non-compliant**. The overall resource compliance is 95% (19 out of 20).
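As a worked example, the arithmetic above can be sketched as follows. The counts are hypothetical, chosen to match the 95% figure described in the text, and they assume **Unknown** resources count toward the total as well:

```shell
# Hypothetical resource counts: 20 applicable resources, 1 non-compliant.
compliant=17; exempt=2; unknown=0; noncompliant=1; conflicting=0

total=$((compliant + noncompliant + exempt + conflicting + unknown))
percent=$(( (compliant + exempt + unknown) * 100 / total ))

echo "${percent}% compliant (${total} resources)"   # 95% compliant (20 resources)
```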
hpc-cache Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/customer-keys.md
Title: Use customer-manged keys to encrypt data in Azure HPC Cache
+ Title: Use customer-managed keys to encrypt data in Azure HPC Cache
description: How to use Azure Key Vault with Azure HPC Cache to control encryption key access instead of using the default Microsoft-managed encryption keys-+ Previously updated : 07/15/2021- Last updated : 11/02/2022+ # Use customer-managed encryption keys for Azure HPC Cache
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
Title: Quickstart - Provision an X.509 certificate simulated device to Microsoft
description: Learn how to provision a simulated device that authenticates with an X.509 certificate in the Azure IoT Hub Device Provisioning Service Previously updated : 05/31/2022 Last updated : 11/01/2022
In this section, you'll prepare a development environment that's used to build t
3. Copy the tag name for the latest release of the Azure IoT C SDK.
-4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. (replace `<release-tag>` with the tag you copied in the previous step).
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step.
```cmd git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git
In this section, you'll prepare a development environment that's used to build t
``` >[!TIP]
- >If `cmake` does not find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
+ >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
7. When the build succeeds, the last few output lines look similar to the following output:
In this section, you'll prepare a development environment that's used to build t
::: zone pivot="programming-language-csharp"
-1. In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-csharp.git
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
+```
::: zone-end ::: zone pivot="programming-language-nodejs"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-node.git
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-node.git
+```
::: zone-end ::: zone pivot="programming-language-python"
-1. In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
- ```cmd
- git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
- ```
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+```
::: zone-end
Keep the Git Bash prompt open. You'll need it later in this quickstart.
::: zone pivot="programming-language-csharp"
-The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (`certificate.pfx`). You'll still need the PEM formatted public key certificate file (`device-cert.pem`) that you just created to create an individual enrollment entry later in this quickstart.
1. To generate the PKCS#12 formatted file expected by the sample, enter the following command:
You won't need the Git Bash prompt for the rest of this quickstart. However, you
::: zone pivot="programming-language-nodejs"
-6. Copy the device certificate and private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
+6. The sample code requires a private key that isn't encrypted. Run the following command to create an unencrypted private key:
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ winpty openssl rsa -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl rsa -in device-key.pem -out unencrypted-device-key.pem
+ ```
+
+
+
+7. When asked to **Enter pass phrase for device-key.pem:**, use the same pass phrase you did previously, `1234`.
+
+8. Copy the device certificate and unencrypted private key to the project directory for the X.509 device provisioning sample. The path given is relative to the location where you downloaded the SDK.
```bash cp device-cert.pem ./azure-iot-sdk-node/provisioning/device/samples
- cp device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
+ cp unencrypted-device-key.pem ./azure-iot-sdk-node/provisioning/device/samples
``` You won't need the Git Bash prompt for the rest of this quickstart. However, you may want to keep it open to check your certificate if you have problems in later steps.
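To confirm the conversion worked, note that an encrypted PEM key carries an `ENCRYPTED` marker in its header (for example, `Proc-Type: 4,ENCRYPTED`), while the converted key doesn't. A small sketch of that check; the key below is a placeholder, not a real key:

```shell
# Placeholder PEM body standing in for unencrypted-device-key.pem.
printf -- '-----BEGIN RSA PRIVATE KEY-----\nMIIE...placeholder...\n-----END RSA PRIVATE KEY-----\n' > demo-key.pem

# An encrypted key would contain a line such as "Proc-Type: 4,ENCRYPTED".
if grep -q 'ENCRYPTED' demo-key.pem; then
  echo "key is still encrypted"
else
  echo "key is unencrypted"
fi
```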
In this section, you'll use your Windows command prompt.
1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
-1. Copy the **ID Scope** and **Global device endpoint** values.
+1. Copy the **ID Scope** value.
- :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the ID scope and global device endpoint on Azure portal.":::
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK. ```cmd
- cd ./azure-iot-sdk-node/provisioning/device/samples
+ cd .\azure-iot-sdk-node\provisioning\device\samples
npm install ```
-1. Edit the **register_x509.js** file and make the following changes:
+ The sample uses five environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this quickstart, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `PROVISIONING_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `CERTIFICATE_FILE` | The path to your device certificate file. |
+ | `KEY_FILE` | The path to your device private key file. |
+
+1. Add environment variables for the global device endpoint and ID scope. Replace `<id-scope>` with the value you copied in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. If you followed the steps in this quickstart to generate a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=my-x509-device
+ ```
- * Replace `provisioning host` with the **Global Device Endpoint** noted in **Step 1** above.
- * Replace `id scope` with the **ID Scope** noted in **Step 1** above.
- * Replace `registration id` with the **Registration ID** noted in the previous section.
- * Replace `cert filename` and `key filename` with the files you generated previously, *device-cert.pem* and *device-key.pem*.
+1. Set the environment variables for the device certificate and (unencrypted) device private key files.
-1. Save the file.
+ ```cmd
+ set CERTIFICATE_FILE=.\device-cert.pem
+ set KEY_FILE=.\unencrypted-device-key.pem
+ ```
1. Run the sample and verify that the device was provisioned successfully.
In this section, you'll use your Windows command prompt.
node register_x509.js ```
->[!TIP]
->The [Azure IoT Hub Node.js Device SDK](https://github.com/Azure/azure-iot-sdk-node) provides an easy way to simulate a device. For more information, see [Device concepts](./concepts-service.md).
+ You should see output similar to the following:
+
+ ```output
+ registration succeeded
+ assigned hub=contoso-hub-2.azure-devices.net
+ deviceId=my-x509-device
+ Client connected
+ send status: MessageEnqueued
+ ```
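As an aside, the five environment variables the sample reads can be sanity-checked with a short pre-flight script before running it. This is a Bash-flavored sketch (the quickstart itself uses the Windows command prompt), and the values below are hypothetical:

```shell
# Fail fast if any named environment variable is unset or empty.
check_vars() {
  for v in "$@"; do
    [ -n "$(eval echo "\$$v")" ] || { echo "missing: $v"; return 1; }
  done
  echo "all provisioning variables set"
}

# Demo with hypothetical values.
export PROVISIONING_HOST=global.azure-devices-provisioning.net
export PROVISIONING_IDSCOPE=0ne00000000
export PROVISIONING_REGISTRATION_ID=my-x509-device
export CERTIFICATE_FILE=./device-cert.pem
export KEY_FILE=./unencrypted-device-key.pem

check_vars PROVISIONING_HOST PROVISIONING_IDSCOPE PROVISIONING_REGISTRATION_ID CERTIFICATE_FILE KEY_FILE
```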
::: zone-end
In this section, you'll use your Windows command prompt.
| Variable name | Description | | :- | :- |
- | `PROVISIONING_HOST` | This value is the global endpoint used for connecting to your DPS resource |
- | `PROVISIONING_IDSCOPE` | This value is the ID Scope for your DPS resource |
- | `DPS_X509_REGISTRATION_ID` | This value is the ID for your device. It must also match the subject name on the device certificate |
- | `X509_CERT_FILE` | Your device certificate filename |
- | `X509_KEY_FILE` | The private key filename for your device certificate |
+ | `PROVISIONING_HOST` | The global endpoint used for connecting to your DPS instance. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `DPS_X509_REGISTRATION_ID` | The registration ID for your device. It must also match the subject name on the device certificate. |
+ | `X509_CERT_FILE` | The path to your device certificate file. |
+ | `X509_KEY_FILE` | The path to your device certificate private key file. |
| `PASS_PHRASE` | The pass phrase you used to encrypt the certificate and private key file (`1234`). | 1. Add the environment variables for the global device endpoint and ID Scope.
In this section, you'll use your Windows command prompt.
set PROVISIONING_IDSCOPE=<ID scope for your DPS resource> ```
-1. The registration ID for the IoT device must match subject name on its device certificate. If you generated a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
-
-1. Set the environment variable for the registration ID as follows:
+1. Set the environment variable for the registration ID. The registration ID for the IoT device must match the subject name on its device certificate. If you followed the steps in this quickstart to generate a self-signed test certificate, `my-x509-device` is both the subject name and the registration ID for the device.
```cmd set DPS_X509_REGISTRATION_ID=my-x509-device
In this section, you'll use your Windows command prompt.
set PASS_PHRASE=1234 ```
-1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
-
-1. Save your changes.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())` and save your changes.
1. Run the sample. The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
If you plan to continue working on and exploring the device client sample, don't
## Next steps
-To learn how to enroll your X.509 device programmatically:
+To learn how to provision multiple X.509 devices using an enrollment group:
> [!div class="nextstepaction"]
-> [Azure quickstart - Enroll X.509 devices to Azure IoT Hub Device Provisioning Service](quick-enroll-device-x509.md)
+> [Tutorial: Provision multiple X.509 devices using an enrollment group](tutorial-custom-hsm-enrollment-group-x509.md)
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a custom Hardware Security Module (HSM) and a DPS enrollment group
-description: This tutorial shows how to use X.509 certificates to provision multiple devices through an enrollment group in your Azure IoT Hub Device Provisioning Service (DPS) instance. The devices are simulated using the C device SDK and a custom Hardware Security Module (HSM).
+ Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a DPS enrollment group
+description: This tutorial shows how to use X.509 certificates to provision multiple devices through an enrollment group in your Azure IoT Hub Device Provisioning Service (DPS) instance.
Previously updated : 07/12/2022 Last updated : 11/01/2022
-#Customer intent: As a new IoT developer, I want provision groups of devices using X.509 certificate chains and the C SDK.
+zone_pivot_groups: iot-dps-set1
+#Customer intent: As a new IoT developer, I want to provision groups of devices using X.509 certificate chains and the Azure IoT device SDK.
# Tutorial: Provision multiple X.509 devices using enrollment groups
The Azure IoT Hub Device Provisioning Service supports three forms of authentica
* [Trusted platform module (TPM)](concepts-tpm-attestation.md)
* [Symmetric keys](./concepts-symmetric-key-attestation.md)

This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it is strongly recommended to help protect sensitive information like your device certificate's private key.+
+A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it is strongly recommended to help protect sensitive information like your device certificate's private key.
In this tutorial, you'll complete the following objectives:
In this tutorial, you'll complete the following objectives:
> * Create a certificate chain of trust to organize a set of devices using X.509 certificates. > * Complete proof of possession with a signing certificate used with the certificate chain. > * Create a new group enrollment that uses the certificate chain.
-> * Set up the development environment for provisioning a device using code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
-> * Provision a device using the certificate chain with the custom Hardware Security Module (HSM) sample in the SDK.
+> * Set up the development environment.
+> * Provision a device with the certificate chain, using sample code from the Azure IoT device SDK.
## Prerequisites
In this tutorial, you'll complete the following objectives:
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md). + The following prerequisites are for a Windows development environment used to simulate the devices. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation. * Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015, Visual Studio 2017, and Visual Studio 19 are also supported.
The following prerequisites are for a Windows development environment used to si
>[!IMPORTANT] >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this tutorial. Make sure to use the latest version of CMake. ++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-csharp/blob/main/doc/devbox_setup.md) in the SDK documentation.
+
+* Install [.NET SDK 6.0](https://dotnet.microsoft.com/download) or later on your Windows-based machine. You can use the following command to check your version.
+
+ ```cmd
+ dotnet --info
+ ```
+++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/blob/main/doc/node-devbox-setup.md) in the SDK documentation.
+
+* Install [Node.js v4.0 or above](https://nodejs.org) on your machine.
+++
+The following prerequisites are for a Windows development environment.
+
+* [Python 3.6 or later](https://www.python.org/downloads/) on your machine.
+++
+The following prerequisites are for a Windows development environment. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-jav) in the SDK documentation.
+
+* Install the [Java SE Development Kit 8](/azure/developer/java/fundamentals/java-support-on-azure) or later on your machine.
+
+* Download and install [Maven](https://maven.apache.org/install.html).
++ * Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository. * Make sure [OpenSSL](https://www.openssl.org/) is installed on your machine. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`.
The following prerequisites are for a Windows development environment used to si
The steps in this tutorial assume that you're using a Windows machine and the OpenSSL installation that is installed as part of Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
-## Prepare the Azure IoT C SDK development environment
+## Prepare your development environment
+ In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by devices provisioning with DPS.
In this section, you'll prepare a development environment used to build the [Azu
-- Build files have been written to: C:/azure-iot-sdk-c/cmake ``` ++
+In your Windows command prompt, clone the [Azure IoT SDK for C#](https://github.com/Azure/azure-iot-sdk-csharp) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-csharp.git
+```
+++
+In your Windows command prompt, clone the [Azure IoT Samples for Node.js](https://github.com/Azure/azure-iot-sdk-node.git) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-node.git
+```
+++
+In your Windows command prompt, clone the [Azure IoT Samples for Python](https://github.com/Azure/azure-iot-sdk-python.git) GitHub repository using the following command:
+
+```cmd
+git clone https://github.com/Azure/azure-iot-sdk-python.git --recursive
+```
+++
+1. In your Windows command prompt, clone the [Azure IoT Samples for Java](https://github.com/Azure/azure-iot-sdk-java.git) GitHub repository using the following command:
+
+ ```cmd
+ git clone https://github.com/Azure/azure-iot-sdk-java.git --recursive
+ ```
+
+2. Go to the root `azure-iot-sdk-java` directory and build the project to download all needed packages.
+
+ ```cmd
+ cd azure-iot-sdk-java
+ mvn install -DskipTests=true
+ ```
++

## Create an X.509 certificate chain

In this section, you'll generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates have the following hierarchy.
In this section, you create two device certificates and their full chain certifi
cat ./certs/device-01.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/device-01-full-chain.cert.pem
```
-1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this text as the certificate chain with in the custom HSM device code later in this tutorial for `device-01`.
+1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this certificate chain later in this tutorial to provision `device-01`.
The full chain text has the following format:
In this section, you create two device certificates and their full chain certifi
-----END CERTIFICATE-----
```
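The `cat` concatenation step can also be sketched in Python, which may be easier to adapt on Windows outside Git Bash. This is an illustrative stand-in (the helper name is invented, file paths follow the tutorial's layout) that joins the certificates in the required leaf-to-intermediate-to-root order and sanity-checks the result:

```python
from pathlib import Path

def build_full_chain(device_cert: str, intermediate_cert: str, root_cert: str) -> str:
    """Concatenate PEM certificates in leaf-to-root order, like the `cat` command above."""
    parts = [Path(p).read_text() for p in (device_cert, intermediate_cert, root_cert)]
    chain = "".join(parts)
    # A full chain file for this tutorial should contain exactly three certificates.
    assert chain.count("-----BEGIN CERTIFICATE-----") == 3, "expected three certificates"
    return chain
```

For example, writing the return value of `build_full_chain("./certs/device-01.cert.pem", "./certs/azure-iot-test-only.intermediate.cert.pem", "./certs/azure-iot-test-only.root.ca.cert.pem")` to *./certs/device-01-full-chain.cert.pem* would be equivalent to the `cat` command.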
-1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your GitBash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
+1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your Git Bash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
# [Windows](#tab/windows)
Your signing certificates are now trusted on the Windows-based device and the fu
1. In the **Add Enrollment Group** panel, enter the following information, then select **Save**.
- | Field | Value |
- | :-- | :-- |
- | **Group name** | For this tutorial, enter **custom-hsm-x509-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
- | **Attestation Type** | Select **Certificate** |
- | **IoT Edge device** | Select **False** |
- | **Certificate Type** | Select **Intermediate Certificate** |
- | **Primary certificate .pem or .cer file** | Navigate to the intermediate certificate that you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified. DPS trusts that root once it's verified. DPS can verify that the intermediate provided with this enrollment group is truly signed by the trusted root. DPS will trust each intermediate truly signed by that root certificate, and therefore be able to verify and trust leaf certificates signed by the intermediate. |
+ * **Group name**: For this tutorial, enter **custom-hsm-x509-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+ * **Attestation Type**: Select **Certificate**.
+ * **IoT Edge device**: Select **False**.
+ * **Certificate Type**: Select **Intermediate Certificate**.
+ * **Primary certificate .pem or .cer file**: Navigate to the intermediate certificate that you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*) and upload it.
+
+ Your intermediate certificate is signed by the root certificate that you already uploaded and verified. Because DPS trusts that root certificate, it will trust any intermediate certificate that is either directly signed by the root, or whose signing chain contains the root. DPS will permit any device to register through the enrollment group whose certificate signing chain contains the intermediate certificate and the verified (root) certificate.
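The group-name rules above can be captured in a short validation sketch. This is an illustrative helper under the constraints stated in the **Group name** bullet, not part of any Azure SDK:

```python
import re

# Alphanumerics plus '-', '.', '_', ':'; 1 to 128 characters total;
# the last character must be alphanumeric or a dash.
_GROUP_NAME = re.compile(r"^[A-Za-z0-9._:-]{0,127}[A-Za-z0-9-]$")

def is_valid_group_name(name: str) -> bool:
    """Check a candidate enrollment group name against the documented rules."""
    return bool(_GROUP_NAME.fullmatch(name))
```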
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png" alt-text="Screenshot that shows adding an enrollment group in the portal.":::
-## Configure the provisioning device code
+## Prepare and run the device provisioning code
In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it will be assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section.
+
+In this section, you'll use your Git Bash prompt and the Visual Studio IDE.
+
+### Configure the provisioning device code
+
+In this section, you update the sample code with your Device Provisioning Service instance information.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service instance and note the **ID Scope** value.
+
+   :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot that shows the ID scope on the DPS overview pane.":::
In this section, you update the sample code with your Device Provisioning Servic
7. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
-## Configure the custom HSM stub code
+### Configure the custom HSM stub code
The specifics of interacting with actual secure hardware-based storage vary depending on the device hardware. The certificate chains used by the simulated devices in this tutorial are hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods used in this sample would then be implemented to read the secrets from that hardware-based storage. While HSM hardware isn't required, it's recommended for protecting sensitive information like the certificate's private key. If the sample called an actual HSM, the private key wouldn't be present in the source code. Having the key in the source code exposes it to anyone who can view the code. The key is included in this tutorial only to assist with learning.
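To illustrate the stub pattern described above, here's a hypothetical Python analogue (class and method names invented for illustration; the actual stub in this tutorial is C code in *custom_hsm_example.c*). The secrets are hardcoded the same way the C stub hardcodes them; a real implementation would fetch them from HSM hardware instead:

```python
class CustomHsmStub:
    """Simulates HSM-backed storage by returning hardcoded secrets.

    In production, these methods would call into real HSM hardware
    rather than return values embedded in source code.
    """

    # Hardcoded for learning purposes only -- never ship private keys in source.
    _CERTIFICATE_CHAIN = (
        "-----BEGIN CERTIFICATE-----\n...leaf, intermediate, root...\n-----END CERTIFICATE-----\n"
    )
    _PRIVATE_KEY = "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
    _COMMON_NAME = "device-01"

    def get_certificate_chain(self) -> str:
        return self._CERTIFICATE_CHAIN

    def get_private_key(self) -> str:
        return self._PRIVATE_KEY

    def get_common_name(self) -> str:
        # DPS uses the leaf certificate's subject common name as the registration ID.
        return self._COMMON_NAME
```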
-To update the custom HSM stub code to simulate the identity of the device with ID `device-01`, perform the following steps:
+To update the custom HSM stub code to simulate the identity of the device with ID `device-01`:
1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
To update the custom HSM stub code to simulate the identity of the device with I
Press enter key to exit:
```
++
+The C# sample code is set up to use X.509 certificates that are stored in a password-protected PKCS#12 formatted file (.pfx). The full chain certificates you created previously are in the PEM format. To convert the full chain certificates to PKCS#12 format, enter the following commands in your Git Bash prompt from the directory where you previously ran the OpenSSL commands.
+
+* device-01
+
+ ```bash
+ openssl pkcs12 -inkey ./private/device-01.key.pem -in ./certs/device-01-full-chain.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/device-01-full-chain.cert.pfx
+ ```
+
+* device-02
+
+ ```bash
+ openssl pkcs12 -inkey ./private/device-02.key.pem -in ./certs/device-02-full-chain.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/device-02-full-chain.cert.pfx
+ ```
+
+In the rest of this section, you'll use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+2. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope on Azure portal.":::
+
+3. In your Windows command prompt, change to the X509Sample directory. This directory is located at *.\azure-iot-sdk-csharp\provisioning\device\samples\Getting Started\X509Sample* relative to the directory where you cloned the samples on your computer.
+
+4. Enter the following command to build and run the X.509 device provisioning sample. Replace `<id-scope>` with the ID Scope that you copied in step 2, and replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+    dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-01-full-chain.cert.pfx -p 1234
+ ```
+
+ The device connects to DPS and is assigned to an IoT hub. Then, the device sends a telemetry message to the IoT hub. You should see output similar to the following:
+
+ ```output
+ Loading the certificate...
+ Found certificate: 3E5AA3C234B2032251F0135E810D75D38D2AA477 CN=Azure IoT Hub CA Cert Test Only; PrivateKey: False
+ Found certificate: 81FE182C08D18941CDEEB33F53F8553BA2081E60 CN=Azure IoT Hub Intermediate Cert Test Only; PrivateKey: False
+ Found certificate: 5BA1DB226D50EBB7A6A6071CED4143892855AE43 CN=device-01; PrivateKey: True
+ Using certificate 5BA1DB226D50EBB7A6A6071CED4143892855AE43 CN=device-01
+ Initializing the device provisioning client...
+ Initialized for registration Id device-01.
+ Registering with the device provisioning service...
+ Registration status: Assigned.
+ Device device-01 registered to contoso-hub-2.azure-devices.net.
+ Creating X509 authentication for IoT Hub...
+ Testing the provisioned device with IoT Hub...
+ Sending a telemetry message...
+ Finished.
+ ```
+
+    >[!NOTE]
+    > If you don't specify a certificate and password on the command line, the certificate file defaults to *./certificate.pfx* and you'll be prompted for your password.
+    >
+    > Additional parameters can be passed to change the TransportType (-t) and the GlobalDeviceEndpoint (-g). For a full list of parameters, type `dotnet run -- --help`.
+
+5. To register your second device, re-run the sample using its full chain certificate.
+
+ ```cmd
+    dotnet run -- -s <id-scope> -c <your-certificate-folder>\certs\device-02-full-chain.cert.pfx -p 1234
+ ```
+++
+In the following steps, use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope** value.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, go to the sample directory, and install the packages needed by the sample. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-node\provisioning\device\samples
+ npm install
+ ```
+
+1. In the *provisioning\device\samples* folder, open *register_x509.js* and review the code.
+
+ The sample defaults to MQTT as the transport protocol. If you want to use a different protocol, comment out the following line and uncomment the line for the appropriate protocol.
+
+ ```javascript
+ var ProvisioningTransport = require('azure-iot-provisioning-device-mqtt').Mqtt;
+ ```
+
+ The sample uses five environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this tutorial, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `PROVISIONING_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `CERTIFICATE_FILE` | The path to your device full chain certificate file. |
+ | `KEY_FILE` | The path to your device certificate private key file. |
+
+ The `ProvisioningDeviceClient.register()` method attempts to register your device.
+
+1. Add environment variables for the global device endpoint and ID scope. Replace `<id-scope>` with the value you copied in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<id-scope>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. For this tutorial, *device-01* is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set PROVISIONING_REGISTRATION_ID=device-01
+ ```
+
+1. Set the environment variables for the device full chain certificate and device private key files you generated previously. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+ set CERTIFICATE_FILE=<your-certificate-folder>\certs\device-01-full-chain.cert.pem
+ set KEY_FILE=<your-certificate-folder>\private\device-01.key.pem
+ ```
+
+1. Run the sample and verify that the device was provisioned successfully.
+
+ ```cmd
+ node register_x509.js
+ ```
+
+ You should see output similar to the following:
+
+ ```output
+ registration succeeded
+ assigned hub=contoso-hub-2.azure-devices.net
+ deviceId=device-01
+ Client connected
+ send status: MessageEnqueued
+ ```
+
+1. Update the environment variables for your second device (`device-02`) according to the table below and run the sample again.
+
+ | Environment Variable | Value |
+ | :- | : |
+ | PROVISIONING_REGISTRATION_ID | `device-02` |
+ | CERTIFICATE_FILE | *\<your-certificate-folder\>\certs\device-02-full-chain.cert.pem* |
+ | KEY_FILE | *\<your-certificate-folder\>\private\device-02.key.pem* |
+++
+In the following steps, use your Windows command prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope**.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-python\samples\async-hub-scenarios
+ ```
+
+ This sample uses six environment variables to authenticate and provision an IoT device using DPS. These environment variables are:
+
+ | Variable name | Description |
+ | :- | :- |
+ | `PROVISIONING_HOST` | The endpoint to use for connecting to your DPS instance. For this tutorial, use the global endpoint, `global.azure-devices-provisioning.net`. |
+ | `PROVISIONING_IDSCOPE` | The ID Scope for your DPS instance. |
+ | `DPS_X509_REGISTRATION_ID` | The registration ID for your device. It must match the subject common name in the device certificate. |
+ | `X509_CERT_FILE` | The path to your device full chain certificate file. |
+ | `X509_KEY_FILE` | The path to your device certificate private key file. |
+ | `PASS_PHRASE` | The pass phrase used to encrypt the private key file (if used). Not needed for this tutorial. |
+
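As a sketch of how the sample's configuration might be read, the following hypothetical helper (not part of the Azure IoT SDK) collects the environment variables from the table above and fails fast when a required one is missing:

```python
import os

# Required variables from the table above; PASS_PHRASE is optional.
REQUIRED = [
    "PROVISIONING_HOST",
    "PROVISIONING_IDSCOPE",
    "DPS_X509_REGISTRATION_ID",
    "X509_CERT_FILE",
    "X509_KEY_FILE",
]

def load_dps_config() -> dict:
    """Read the sample's DPS settings from the environment, raising on gaps."""
    missing = [name for name in REQUIRED if not os.environ.get(name)]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    config = {name: os.environ[name] for name in REQUIRED}
    # Only needed if the private key file is encrypted; not used in this tutorial.
    config["PASS_PHRASE"] = os.environ.get("PASS_PHRASE")
    return config
```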
+1. Add the environment variables for the global device endpoint and ID Scope. You copied the ID scope for your instance in step 2.
+
+ ```cmd
+ set PROVISIONING_HOST=global.azure-devices-provisioning.net
+ set PROVISIONING_IDSCOPE=<ID scope for your DPS resource>
+ ```
+
+1. Set the environment variable for the device registration ID. The registration ID for the IoT device must match the subject common name on its device certificate. For this tutorial, *device-01* is both the subject name and the registration ID for the device.
+
+ ```cmd
+ set DPS_X509_REGISTRATION_ID=device-01
+ ```
+
+1. Set the environment variables for the device full chain certificate and device private key files you generated previously. Replace `<your-certificate-folder>` with the path to the folder where you ran your OpenSSL commands.
+
+ ```cmd
+ set X509_CERT_FILE=<your-certificate-folder>\certs\device-01-full-chain.cert.pem
+ set X509_KEY_FILE=<your-certificate-folder>\private\device-01.key.pem
+ ```
+
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
+
+1. Run the sample. The sample connects to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample sends some test messages to the IoT hub.
+
+ ```cmd
+ python provision_x509.py
+ ```
+
+ You should see output similar to the following:
+
+ ```output
+ The complete registration result is
+ device-01
+ contoso-hub-2.azure-devices.net
+ initialAssignment
+ null
+ Will send telemetry from the provisioned device
+ sending message #1
+ sending message #2
+ sending message #3
+ sending message #4
+ sending message #5
+ sending message #6
+ sending message #7
+ sending message #8
+ sending message #9
+ sending message #10
+ done sending message #1
+ done sending message #2
+ done sending message #3
+ done sending message #4
+ done sending message #5
+ done sending message #6
+ done sending message #7
+ done sending message #8
+ done sending message #9
+ done sending message #10
+ ```
+
+1. Update the environment variables for your second device (`device-02`) according to the table below and run the sample again.
+
+ | Environment Variable | Value |
+ | :- | : |
+ | DPS_X509_REGISTRATION_ID | `device-02` |
+ | X509_CERT_FILE | *\<your-certificate-folder\>\certs\device-02-full-chain.cert.pem* |
+ | X509_KEY_FILE | *\<your-certificate-folder\>\private\device-02.key.pem* |
+++
+In the following steps, you'll use both your Windows command prompt and your Git Bash prompt.
+
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service.
+
+1. Copy the **ID Scope**.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot of the ID scope in the Azure portal.":::
+
+1. In your Windows command prompt, navigate to the sample project folder. The path shown is relative to the location where you cloned the SDK.
+
+ ```cmd
+ cd .\azure-iot-sdk-java\provisioning\provisioning-samples\provisioning-X509-sample
+ ```
+
+1. Enter the provisioning service and X.509 identity information in the sample code. This information is used during provisioning to attest the simulated device prior to device registration.
+
+    1. Open the file `.\src\main\java\samples\com\microsoft\azure\sdk\iot\ProvisioningX509Sample.java` in your favorite editor.
+
+ 1. Update the following values. For `idScope`, use the **ID Scope** that you copied previously. For global endpoint, use the **Global device endpoint**. This endpoint is the same for every DPS instance, `global.azure-devices-provisioning.net`.
+
+ ```java
+ private static final String idScope = "[Your ID scope here]";
+ private static final String globalEndpoint = "[Your Provisioning Service Global Endpoint here]";
+ ```
+
+ 1. The sample defaults to using HTTPS as the transport protocol. If you want to change the protocol, comment out the following line and uncomment the line for the protocol you want to use.
+
+ ```java
+ private static final ProvisioningDeviceClientTransportProtocol PROVISIONING_DEVICE_CLIENT_TRANSPORT_PROTOCOL = ProvisioningDeviceClientTransportProtocol.HTTPS;
+ ```
+
+ 1. Update the value of the `leafPublicPem` constant string with the value of your device certificate, *device-01.cert.pem*.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String leafPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+    "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+    ...
+    "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+    "-----END CERTIFICATE-----";
+ ```
+
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/device-01.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Update the string value of the `leafPrivateKey` constant with the unencrypted private key for your device certificate, *unencrypted-device-key.pem*.
+
+ The syntax of the private key text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String leafPrivateKey = "-----BEGIN PRIVATE KEY-----\n" +
+    "MIIJJwIBAAKCAgEAtjvKQjIhp0EE1PoADL1rfF/W6v4vlAzOSifKSQsaPeebqg8U\n" +
+    ...
+    "X7fi9OZ26QpnkS5QjjPTYI/wwn0J9YAwNfKSlNeXTJDfJ+KpjXBcvaLxeBQbQhij\n" +
+    "-----END PRIVATE KEY-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `leafPrivateKey` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./private/device-01.key.pem
+ ```
+
+ Copy and paste the output private key text for the constant value.
+
+ 1. Add a `rootPublicPem` constant string with the value of your root CA certificate, *azure-iot-test-only.root.ca.cert.pem*. You can add it just after the `leafPrivateKey` constant.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String rootPublicPem = "-----BEGIN CERTIFICATE-----\n" +
+    "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+    ...
+    "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+    "-----END CERTIFICATE-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `rootPublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.root.ca.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Add an `intermediatePublicPem` constant string with the value of your intermediate CA certificate, *azure-iot-test-only.intermediate.cert.pem*. You can add it just after the previous constant.
+
+ The syntax of certificate text must follow the pattern below with no extra spaces or characters.
+
+ ```java
+    private static final String intermediatePublicPem = "-----BEGIN CERTIFICATE-----\n" +
+    "MIIFOjCCAyKgAwIBAgIJAPzMa6s7mj7+MA0GCSqGSIb3DQEBCwUAMCoxKDAmBgNV\n" +
+    ...
+    "MDMwWhcNMjAxMTIyMjEzMDMwWjAqMSgwJgYDVQQDDB9BenVyZSBJb1QgSHViIENB\n" +
+    "-----END CERTIFICATE-----";
+ ```
+
+ To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `intermediatePublicPem` string constant value and write it to the output.
+
+ ```Bash
+ sed 's/^/"/;$ !s/$/\\n" +/;$ s/$/"/' ./certs/azure-iot-test-only.intermediate.cert.pem
+ ```
+
+ Copy and paste the output certificate text for the constant value.
+
+ 1. Find the following lines in the `main` method.
+
+ ```java
+ // For group enrollment uncomment this line
+ //signerCertificatePemList.add("<Your Signer/intermediate Certificate Here>");
+ ```
+
+ Add these two lines directly beneath them to add your intermediate and root CA certificates to the signing chain. Your signing chain should include the whole certificate chain up to and including a certificate that you've verified with DPS.
+
+ ```java
+ signerCertificatePemList.add(intermediatePublicPem);
+ signerCertificatePemList.add(rootPublicPem);
+ ```
+
+ > [!NOTE]
+ > The order that the signing certificates are added is important. The sample will fail if it's changed.
+
+ 1. Save your changes.
+
+1. Build the sample, and then go to the `target` folder.
+
+ ```cmd
+ mvn clean install
+ cd target
+ ```
+
+1. The build outputs a .jar file in the `target` folder with the following file name format: `provisioning-x509-sample-{version}-with-deps.jar`; for example: `provisioning-x509-sample-1.8.1-with-deps.jar`. Execute the .jar file. You may need to replace the version in the command below.
+
+ ```cmd
+ java -jar ./provisioning-x509-sample-1.8.1-with-deps.jar
+ ```
+
+ The sample will connect to DPS, which will provision the device to an IoT hub. After the device is provisioned, the sample will send some test messages to the IoT hub.
+
+ ```output
+ Starting...
+ Beginning setup.
+ WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
+ 2022-10-21 10:41:20,476 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Initialized a ProvisioningDeviceClient instance using SDK version 2.0.2
+ 2022-10-21 10:41:20,479 DEBUG (main) [com.microsoft.azure.sdk.iot.provisioning.device.ProvisioningDeviceClient] - Starting provisioning thread...
+ Waiting for Provisioning Service to register
+ 2022-10-21 10:41:20,482 INFO (global.azure-devices-provisioning.net-4f8279ac-CxnPendingConnectionId-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Opening the connection to device provisioning service...
+ 2022-10-21 10:41:20,652 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Connection to device provisioning service opened successfully, sending initial device registration message
+ 2022-10-21 10:41:20,680 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-RegisterTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.RegisterTask] - Authenticating with device provisioning service using x509 certificates
+ 2022-10-21 10:41:21,603 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Waiting for device provisioning service to provision this device...
+ 2022-10-21 10:41:21,605 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Current provisioning status: ASSIGNING
+ 2022-10-21 10:41:24,868 INFO (global.azure-devices-provisioning.net-4f8279ac-Cxn4f8279ac-azure-iot-sdk-ProvisioningTask) [com.microsoft.azure.sdk.iot.provisioning.device.internal.task.ProvisioningTask] - Device provisioning service assigned the device successfully
+ IotHUb Uri : contoso-hub-2.azure-devices.net
+ Device ID : device-01
+ 2022-10-21 10:41:30,514 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-21 10:41:30,526 INFO (main) [com.microsoft.azure.sdk.iot.device.transport.ExponentialBackoffWithJitter] - NOTE: A new instance of ExponentialBackoffWithJitter has been created with the following properties. Retry Count: 2147483647, Min Backoff Interval: 100, Max Backoff Interval: 10000, Max Time Between Retries: 100, Fast Retry Enabled: true
+ 2022-10-21 10:41:30,533 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Initialized a DeviceClient instance using SDK version 2.1.2
+ 2022-10-21 10:41:30,590 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - Opening MQTT connection...
+ 2022-10-21 10:41:30,625 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT CONNECT packet...
+ 2022-10-21 10:41:31,452 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT CONNECT packet was acknowledged
+ 2022-10-21 10:41:31,453 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sending MQTT SUBSCRIBE packet for topic devices/device-01/messages/devicebound/#
+ 2022-10-21 10:41:31,523 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.Mqtt] - Sent MQTT SUBSCRIBE packet for topic devices/device-01/messages/devicebound/# was acknowledged
+ 2022-10-21 10:41:31,525 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.mqtt.MqttIotHubConnection] - MQTT connection opened successfully
+ 2022-10-21 10:41:31,528 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - The connection to the IoT Hub has been established
+ 2022-10-21 10:41:31,531 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Updating transport status to new status CONNECTED with reason CONNECTION_OK
+ 2022-10-21 10:41:31,532 DEBUG (main) [com.microsoft.azure.sdk.iot.device.DeviceIO] - Starting worker threads
+ 2022-10-21 10:41:31,535 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking connection status callbacks with new status details
+ 2022-10-21 10:41:31,536 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Client connection opened successfully
+ 2022-10-21 10:41:31,537 INFO (main) [com.microsoft.azure.sdk.iot.device.DeviceClient] - Device client opened successfully
+ Sending message from device to IoT Hub...
+ 2022-10-21 10:41:31,539 DEBUG (main) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Message was queued to be sent later ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ Press any key to exit...
+ 2022-10-21 10:41:31,540 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Sending message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ 2022-10-21 10:41:31,844 DEBUG (MQTT Call: device-01) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - IotHub message was acknowledged. Checking if there is record of sending this message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] )
+ 2022-10-21 10:41:31,846 DEBUG (contoso-hub-2.azure-devices.net-device-01-d7c67552-Cxn0bd73809-420e-46fe-91ee-942520b775db-azure-iot-sdk-IotHubSendTask) [com.microsoft.azure.sdk.iot.device.transport.IotHubTransport] - Invoking the callback function for sent message, IoT Hub responded to message ( Message details: Correlation Id [0d143280-dbc7-405f-a61e-fcc7a1d80b87] Message Id [4d8d39c8-5a38-4299-8f07-3ae02cdc3218] ) with status OK
+ Message sent!
+ ```
+
+1. Update the constants for your second device (`device-02`) according to the table below, rebuild, and run the sample again.
+
+ | Constant | File to use |
+ | :- | : |
+ | `leafPublicPem` | *./certs/device-02.cert.pem* |
+ | `leafPrivateKey` | *./private/device-02.key.pem* |
+## Confirm your device provisioning registration
+
+Examine the registration records of the enrollment group to see the registration details for your devices:
When you're finished testing and exploring this device client sample, use the following steps to delete the resources created in this tutorial.
1. Close the device client sample output window on your machine.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service instance. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and select **Delete** at the top of the pane.
+### Delete your enrollment group
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+1. Select your DPS instance.
+
+1. In the **Settings** menu, select **Manage enrollments**.
+
+1. Select the **Enrollment Groups** tab.
+
+1. Select the enrollment group you used for this tutorial, *custom-hsm-x509-devices*.
+
+1. On the **Enrollment Group Details** page, select the **Registration Records** tab. Then select the check box next to the **Device Id** column header to select all of the registration records for the enrollment group. Select **Delete Registrations** at the top of the page to delete the registration records.
+
+ > [!IMPORTANT]
+ > Deleting an enrollment group doesn't delete the registration records associated with it. These orphaned records will count against the [registrations quota](about-iot-dps.md#quotas-and-limits) for the DPS instance. For this reason, it's a best practice to delete all registration records associated with an enrollment group before you delete the enrollment group itself.
+
+1. Go back to the **Manage Enrollments** page and make sure the **Enrollment Groups** tab is selected.
+
+1. Select the check box next to the *GROUP NAME* of the enrollment group you used for this tutorial, *custom-hsm-x509-devices*.
+
+1. At the top of the page, select **Delete**.
+
+### Delete registered CA certificates from DPS
+
+1. Select **Certificates** from the left-hand menu of your DPS instance. For each certificate you uploaded and verified in this tutorial, select the certificate, select **Delete**, and then confirm your choice to remove it.
+
+### Delete device registration(s) from IoT Hub
+
+1. From the left-hand menu in the Azure portal, select **All resources**.
+
+2. Select your IoT hub.
+
+3. In the **Explorers** menu, select **IoT devices**.
-1. Select **Certificates** in DPS. For each certificate you uploaded and verified in this tutorial, select the certificate and select **Delete** to remove it.
+4. Select the check box next to the *DEVICE ID* of the devices you registered in this tutorial. For example, *device-01* and *device-02*.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. Open **IoT devices** for your hub. Select the check box next to the *DEVICE ID* of the device that you registered in this tutorial. Select **Delete** at the top of the pane.
+5. At the top of the page, select **Delete**.
## Next steps
-In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices across multiple IoT hubs, see:
+In this tutorial, you provisioned X.509 devices using an enrollment group to your IoT hub. To learn how to provision IoT devices to multiple hubs continue to the next tutorial.
> [!div class="nextstepaction"]
> [How to use allocation policies](how-to-use-allocation-policies.md)
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
This section describes how to start and verify the Device Update agent as a module.
1. Open a Terminal window, and enter the command below.

   ```shell
-   sudo systemctl restart adu-agent
+   sudo systemctl restart deviceupdate-agent
   ```

1. You can check the status of the agent using the command below. If you see any issues, refer to this [troubleshooting guide](troubleshoot-device-update.md).

   ```shell
-   sudo systemctl status adu-agent
+   sudo systemctl status deviceupdate-agent
   ```

   You should see status OK.
key-vault Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication.md
A security principal is an object that represents a user, group, service, or application.
* A **group** security principal identifies a set of users created in Azure Active Directory. Any roles or permissions assigned to the group are granted to all of the users within the group.
-* A **service principal** is a type of security principal that identifies an application or service, which is to say, a piece of code rather than a user or group. A service principal's object ID is known as its **client ID** and acts like its username. The service principal's **client secret** acts like its password.
+* A **service principal** is a type of security principal that identifies an application or service, which is to say, a piece of code rather than a user or group. A service principal's object ID acts like its username; the service principal's **client secret** acts like its password.
For applications, there are two ways to obtain a service principal:
key-vault Byok Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/byok-specification.md
Use the **az keyvault key create** command to create a KEK with key operations set to import:
az keyvault key create --kty RSA-HSM --size 4096 --name KEKforBYOK --ops import --vault-name ContosoKeyVaultHSM
```
+> [!NOTE]
+> Services support different KEK lengths; Azure SQL, for instance, only supports key lengths of [2048 or 3072 bits](/azure/azure-sql/database/transparent-data-encryption-byok-overview#requirements-for-configuring-customer-managed-tde). Consult the documentation for your service for specifics.
+### Step 2: Retrieve the public key of the KEK
+
+Download the public key portion of the KEK and store it into a PEM file.
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
Azure Load Testing supports the following client metrics:
|-|-|-|-|-|
|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Response time or elapsed time, in milliseconds. Learn more about [elapsed time in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) | Latency, in milliseconds. Learn more about [latency in the Apache JMeter documentation](https://jmeter.apache.org/usermanual/glossary.html). |
-|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) | Percentage of failed requests. |
+|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) | Percentage of failed requests. |
|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) | Number of requests per second. |
|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) | Total number of requests. |
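As an illustration of the `aggregate(metric) condition threshold` shape these criteria take, here's a hypothetical Python sketch of an evaluator. The function and sample data are invented for this example; Azure Load Testing evaluates criteria in the service, not client-side:

```python
import re
import statistics

# Hypothetical evaluator illustrating the "aggregate(metric) condition threshold"
# shape of a test criterion, e.g. "avg(response_time_ms) > 300".
def criterion_violated(criterion: str, samples: dict) -> bool:
    agg, metric, op, threshold = re.match(
        r"(\w+)\((\w+)\)\s*([<>])\s*([\d.]+)", criterion).groups()
    value = {"avg": statistics.mean, "min": min, "max": max}[agg](samples[metric])
    return value > float(threshold) if op == ">" else value < float(threshold)

stats = {"response_time_ms": [120, 340, 200]}  # made-up sample data
print(criterion_violated("avg(response_time_ms) > 300", stats))  # avg is 220 -> False
print(criterion_violated("max(response_time_ms) > 300", stats))  # max is 340 -> True
```

When a criterion evaluates to true, the test run is marked failed; the sketch mirrors that by returning `True` only when the threshold is crossed.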
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit |
|-|-|-|
-| Concurrent engine instances | 5-100 <sup>1</sup> | 5000 |
-| Engine instances per test run | 1-45 <sup>1</sup> | 5000 |
+| Concurrent engine instances | 5-100 <sup>1</sup> | 1000 |
+| Engine instances per test run | 1-45 <sup>1</sup> | 45 |
<sup>1</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
The following limits apply on a per-region, per-subscription basis.
| Resource | Default limit | Maximum limit |
|-|-|-|
-| Concurrent test runs | 5-25 <sup>2</sup> | 5000 |
+| Concurrent test runs | 5-25 <sup>2</sup> | 1000 |
| Test duration | 3 hours |

<sup>2</sup> To request an increase beyond this limit, contact Azure Support. Default limits vary by offer category type.
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create workflows with single-tenant Azure Logic Apps (Standard) in the Azure portal
-description: Create automated workflows to integrate apps, data, services, and systems with single-tenant Azure Logic Apps (Standard) in the Azure portal.
+ Title: Create Standard workflows in single-tenant Azure Logic Apps with the Azure portal
+description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using the Azure portal.
ms.suite: integration Previously updated : 10/26/2022 Last updated : 11/01/2022
-#Customer intent: As a developer, I want to create an automated integration workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
+# Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in the Azure portal
As you progress, you'll complete these high-level tasks:
* To deploy your **Logic App (Standard)** resource to an [App Service Environment v3 (ASEv3)](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
+* Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you nor have a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v4 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
+## Best practices and recommendations
+
+For optimal designer responsiveness and performance, review and follow these guidelines:
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Title: Create workflows with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
-description: Create automated workflows to integrate apps, data, services, and systems with single-tenant Azure Logic Apps (Standard) in Visual Studio Code.
+ Title: Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code
+description: Create Standard logic app workflows that run in single-tenant Azure Logic Apps to automate integration tasks across apps, data, services, and systems using Visual Studio Code.
ms.suite: integration Previously updated : 09/06/2022 Last updated : 11/01/2022 +
+# Customer intent: As a logic apps developer, I want to create a Standard logic app workflow that runs in single-tenant Azure Logic Apps using Visual Studio Code.
-# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in Visual Studio Code
+# Create a Standard logic app workflow for single-tenant Azure Logic Apps using Visual Studio Code
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This article shows how to create an example automated integration workflow that runs in the *single-tenant* Azure Logic Apps environment by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. When you use this extension, you create a Standard logic app resource and workflow that provides the following capabilities:
+This how-to guide shows how to create an example integration workflow that runs in single-tenant Azure Logic Apps by using Visual Studio Code with the **Azure Logic Apps (Standard)** extension. Before you create this workflow, you'll create a Standard logic app resource, which provides the following capabilities:
* Your logic app can include multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless). * Workflows in the same logic app and tenant run in the same process as the Azure Logic Apps runtime, so they share the same resources and provide better performance.
-* You can locally create, run, and test workflows in the Visual Studio Code development environment. You can deploy your logic app locally, to Azure, which includes the single-tenant Azure Logic Apps environment or App Service Environment v3 (ASEv3) - Windows plans only, and on-premises using containers, due to the Azure Logic Apps containerized runtime.
+* You can locally create, run, and test workflows using the Visual Studio Code development environment.
-For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+ When you're ready, you can deploy your logic app to Azure where your workflow can run in the single-tenant Azure Logic Apps environment or in an App Service Environment v3 (ASEv3 - Windows plans only). You can also deploy and run your workflow anywhere that Kubernetes can run, including Azure, Azure Kubernetes Service, on premises, or even other cloud providers, due to the Azure Logic Apps containerized runtime. For more information about single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md#resource-environment-differences).
While the example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the built-in Request trigger and follows with an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
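To sketch what calling such a Request trigger endpoint looks like, here's a minimal Python example. The callback URL is a placeholder (copy the real one from your workflow's Request trigger), and the actual send is left commented out:

```python
import json
import urllib.request

# Placeholder URL -- copy the real callback URL from your workflow's Request trigger.
callback_url = "https://example.invalid/workflow-callback"
payload = {"city": "Redmond"}  # sample body; the workflow's action can reference these outputs

req = urllib.request.Request(
    callback_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a real callback URL to fire the workflow
print(req.get_header("Content-type"))  # application/json
```

Any HTTPS-capable caller works the same way; the trigger fires on the inbound request, and the Office 365 Outlook action then sends email with the selected trigger outputs.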
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
- * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. Don't install the 4.x version, which isn't supported and won't work.
+ * [Azure Functions Core Tools - 3.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/3.0.4585) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
- These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
+ * If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install.
- > [!IMPORTANT]
- > If you have an installation that's earlier than these versions, uninstall that version first,
- > or make sure that the PATH environment variable points at the version that you download and install.
+ * Azure Functions v3 support ends in late 2022. Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you nor have
+ a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v3 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
* [Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
logic-apps Logic Apps Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-pricing.md
Title: Usage metering, billing, and pricing
-description: Learn how usage metering, billing, and pricing models work in Azure Logic Apps.
+description: Learn how usage metering, billing, and pricing work in Azure Logic Apps.
ms.suite: integration Previously updated : 08/20/2022 Last updated : 11/02/2022
+# As a logic apps developer, I want to learn and understand how usage metering, billing, and pricing work in Azure Logic Apps.
-# Usage metering, billing, and pricing models for Azure Logic Apps
+# Usage metering, billing, and pricing for Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
The following table summarizes how the Consumption model handles metering and billing:
### Trigger and action operations in the Consumption model
-Except for the initial number of free built-in operation executions, per Azure subscription, that a workflow can run, the Consumption model meters and bills an operation based on *each execution*, whether or not the overall workflow successfully runs, finishes, or is even instantiated. An operation usually makes a single execution [unless the operation has retry attempts enabled](#other-operation-behavior). In turn, an execution usually makes a single call [unless the operation supports and enables chunking or pagination to get large amounts of data](logic-apps-handle-large-messages.md). If chunking or pagination is enabled, an operation execution might have to make multiple calls. The Consumption model meters and bills an operation *per execution, not per call*.
+Except for the initial number of free built-in operation executions, per Azure subscription, that a workflow can run, the Consumption model meters and bills an operation based on *each execution*, whether or not the overall workflow successfully runs, finishes, or is even instantiated. An operation usually makes a single execution [unless the operation has retry attempts enabled](#other-operation-behavior). In turn, an execution usually makes a single call [unless the operation supports and enables chunking or pagination to get large amounts of data](logic-apps-handle-large-messages.md). If chunking or pagination is enabled, an operation execution might have to make multiple calls.
-For example, suppose a workflow starts with a polling trigger that gets records by regularly making outbound calls to an endpoint. The outbound call is metered and billed as a single execution, whether or not the trigger fires or is skipped, such as when a trigger checks an endpoint but doesn't find any data or events. The trigger state controls whether or not the workflow instance is created and run. Now, suppose the operation also supports and has enabled chunking or pagination. If the operation has to make 10 calls to finish getting all the data, the operation is still metered and billed as a *single execution*, despite making multiple calls.
+The Consumption model meters and bills an operation *per execution, not per call*. For example, suppose a workflow starts with a polling trigger that gets records by regularly making outbound calls to an endpoint. The outbound call is metered and billed as a single execution, whether or not the trigger fires or is skipped, such as when a trigger checks an endpoint but doesn't find any data or events. The trigger state controls whether or not the workflow instance is created and run. Now, suppose the operation also supports and has enabled chunking or pagination. If the operation has to make 10 calls to finish getting all the data, the operation is still metered and billed as a *single execution*, despite making multiple calls.
+
+> [!NOTE]
+>
+> By default, triggers that return an array have a **Split On** setting that's already enabled.
+> This setting results in a trigger event, which you can review in the trigger history, and a
+> workflow instance *for each* array item. All the workflow instances run in parallel so that
+> the array items are processed at the same time. Billing applies to all trigger events whether
+> the trigger state is **Succeeded** or **Skipped**. Triggers are still billable even in scenarios
+> where the triggers don't instantiate and start the workflow, but the trigger state is **Succeeded**,
+> **Failed**, or **Skipped**.
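The per-execution (not per-call) rule above can be illustrated with a toy calculation. This is an invented helper, not an official billing formula:

```python
# Toy model: the Consumption model bills each operation execution once,
# regardless of how many calls chunking or pagination produced.
def billed_operation_executions(calls_per_execution):
    """Each list entry is the number of calls one execution made."""
    total_calls = sum(calls_per_execution)
    billed = len(calls_per_execution)
    return total_calls, billed

# One polling-trigger execution that paginated through 10 calls, plus one
# action execution that made a single call:
calls, billed = billed_operation_executions([10, 1])
print(calls, billed)  # 11 calls made, but only 2 executions billed
```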
The following table summarizes how the Consumption model handles metering and billing for these operation types when used with a logic app and workflow in multi-tenant Azure Logic Apps:
machine-learning How To Authenticate Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-authenticate-batch-endpoint.md
In this case, we want to execute a batch endpoint using a service principal alre
# [REST](#tab/rest)
-You can use the REST API of Azure Machine Learning to start a batch endpoints job using the user's credential. Follow these steps:
+1. Create a secret to use for authentication as explained at [Option 2: Create a new application secret](../../active-directory/develop/howto-create-service-principal-portal.md#option-2-create-a-new-application-secret).
1. Use the login service from Azure to get an authorization token. Authorization tokens are issued to a particular scope. The resource type for Azure Machine Learning is `https://ml.azure.com`. The request would look as follows:
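   As a sketch of such a token request (the tenant and client values are placeholders; the Azure AD v1.0 token endpoint shown accepts a `resource` parameter):

   ```python
   from urllib.parse import urlencode

   # Placeholders -- substitute your tenant and service principal values.
   tenant_id = "<TENANT_ID>"
   token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
   body = urlencode({
       "grant_type": "client_credentials",
       "client_id": "<CLIENT_ID>",
       "client_secret": "<CLIENT_SECRET>",
       "resource": "https://ml.azure.com",  # scope the token to Azure Machine Learning
   })
   print(token_url)
   print(body)
   # POST `body` (form-encoded) to `token_url`; the JSON response carries `access_token`.
   ```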
machine-learning How To Troubleshoot Batch Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/batch-inference/how-to-troubleshoot-batch-endpoints.md
The following section contains common problems and solutions you may see during
### No module named 'azureml'
+__Message logged__: `No module named 'azureml'`.
+__Reason__: Azure Machine Learning Batch Deployments require the package `azureml-core` to be installed.
+
+__Solution__: Add `azureml-core` to your conda dependencies file.
__Message logged__: There is no succeeded mini batch item returned from run(). P
__Reason__: The batch endpoint failed to provide data in the expected format to the `run()` method. This may be due to corrupted files being read or incompatibility of the input data with the signature of the model (MLflow).

__Solution__: To understand what may be happening, go to __Outputs + Logs__ and open the file at `logs > user > stdout > 10.0.0.X > process000.stdout.txt`. Look for error entries like `Error processing input file`. You should find details there about why the input file can't be correctly read.
+### Audiences in JWT are not allowed
+
+__Context__: When invoking a batch endpoint using its REST APIs.
+
+__Reason__: The access token used to invoke the REST API for the endpoint/deployment was issued for a different audience/service. Azure Active Directory tokens are issued for specific actions.
+
+__Solution__: When generating an authentication token to be used with the Batch Endpoint REST API, ensure the `resource` parameter is set to `https://ml.azure.com`. Notice that this resource is different from the one you use to manage the endpoint through the REST API. All Azure resources (including batch endpoints) use the resource `https://management.azure.com` for management operations. Ensure you use the right resource URI in each case. If you want to use the management API and the job invocation API at the same time, you'll need two tokens. For details, see [Authentication on batch endpoints (REST)](how-to-authenticate-batch-endpoint.md?tabs=rest).
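To check which audience a token was issued for, you can base64url-decode the JWT's payload segment (no signature verification needed for inspection). The following sketch uses a toy token built inline for illustration:

```python
import base64
import json

def jwt_audience(token: str) -> str:
    """Return the `aud` claim of a JWT without verifying its signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding stripped by JWT encoding
    return json.loads(base64.urlsafe_b64decode(payload))["aud"]

# A toy header.payload.signature token, built only to demonstrate the check:
toy_payload = base64.urlsafe_b64encode(
    json.dumps({"aud": "https://ml.azure.com"}).encode()).decode().rstrip("=")
toy_token = "eyJhbGciOiJub25lIn0." + toy_payload + "."
print(jwt_audience(toy_token))  # https://ml.azure.com
```

If the printed audience is `https://management.azure.com` instead, the token was requested for the management API and won't be accepted for job invocation.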
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Specify the storage output location to any datastore and path. By default, batch
## Next steps

-- [How to deploy online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)
-- [How to deploy batch endpoints with the Azure CLI](batch-inference/how-to-use-batch-endpoint.md)
+- [How to deploy online endpoints with the Azure CLI and Python SDK](how-to-deploy-managed-online-endpoints.md)
+- [How to deploy batch endpoints with the Azure CLI and Python SDK](batch-inference/how-to-use-batch-endpoint.md)
- [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md)
- [Deploy models with REST](how-to-deploy-with-rest.md)
- [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Title: MLflow and Azure Machine Learning
description: Learn about how Azure Machine Learning uses MLflow to log metrics and artifacts from machine learning models, and to deploy your machine learning models to an endpoint. --++ Last updated 08/15/2022
machine-learning How To Secure Kubernetes Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-kubernetes-online-endpoint.md
description: Learn about how to use TLS/SSL to configure secure Kubernetes online endpoints.
-+ Last updated 10/10/2022
marketplace Plans Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plans-pricing.md
Plans are not supported for the following offer types:
- Consulting service
- Dynamics 365 Business Central
- Dynamics 365 Operations Apps
-- Power BI app
-- Power BI Visual

## Plan information
This table provides pricing information that's specific to various offer types.
| IoT Edge module | <ul><li>[Plan an IoT Edge module offer](marketplace-iot-edge.md#licensing-options)</li></ul> |
| Managed service | <ul><li>[Plan a Managed Service offer](plan-managed-service-offer.md#plans-and-pricing)</li><li>[Create plans for a Managed Service offer](create-managed-service-offer-plans.md#define-pricing-and-availability)</li></ul> |
| Power BI app | <ul><li>[Plan a Power BI App offer](marketplace-power-bi.md#licensing-options)</li></ul> |
+| Power BI visual | <ul><li>[Create a Power BI visual offer](power-bi-visual-offer-setup.md#setup-details)</li></ul> |
| Software as a Service (SaaS) | <ul><li>[SaaS pricing models](plan-saas-offer.md#saas-pricing-models)</li><li>[SaaS billing](plan-saas-offer.md#saas-billing)</li><li>[Create plans for a SaaS offer](create-new-saas-offer-plans.md#define-a-pricing-model)</li></ul> |
mysql How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-upgrade.md
Last updated 10/12/2022
>[!Note]
> This article contains references to the term slave, a term that Microsoft no longer uses. When the term is removed from the software, we will remove it from this article.
-This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL Flexible server.
-This feature will enable customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 with a select of button without any data movement or the need of any application connection string changes.
+This article describes how you can upgrade your MySQL major version in-place in Azure Database for MySQL - Flexible server.
+This feature enables customers to perform in-place upgrades of their MySQL 5.7 servers to MySQL 8.0 without any data movement or the need to make any application connection string changes.
>[!Important]
-> - Major version upgrade for Azure database for MySQL Flexible Server is available in public preview.
-> - Major version upgrade is currently not available for Burstable SKU 5.7 servers.
-> - Duration of downtime will vary based on the size of your database instance and the number of tables on the database.
-> - Upgrading major MySQL version is irreversible. Your deployment might fail if validation identifies the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make necessary configuration changes on the server and try upgrade again
+> - Major version upgrade for Azure Database for MySQL - Flexible Server is available in public preview.
+> - Major version upgrade is currently unavailable for version 5.7 servers based on the Burstable SKU.
+> - Duration of downtime varies based on the size of the database instance and the number of tables it contains.
+> - Upgrading the major MySQL version is irreversible. Your deployment might fail if validation identifies that the server is configured with any features that are [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) or [deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations). You can make necessary configuration changes on the server and try upgrade again.
## Prerequisites

- Read Replicas with MySQL version 5.7 should be upgraded before the primary server for replication to be compatible between different MySQL versions; read more on [Replication Compatibility between MySQL versions](https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication-compatibility.html).
- Before you upgrade your production servers, we strongly recommend that you test your application compatibility and verify your database compatibility with features [removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals)/[deprecated](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-deprecations) in the new MySQL version.
-- Trigger [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform major version upgrade on your production server, which can be used to [rollback to version 5.7](./how-to-restore-server-portal.md) from the full on-demand backup taken.
+- Trigger [on-demand backup](./how-to-trigger-on-demand-backup.md) before you perform major version upgrade on your production server, which can be used to [rollback to version 5.7](./how-to-restore-server-portal.md) from the full on-demand backup taken.
-## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure portal
+## Perform a Planned major version upgrade from MySQL 5.7 to MySQL 8.0 using the Azure portal
-1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server using the Azure portal, perform the following steps.
+
+1. In the [Azure portal](https://portal.azure.com/), select your existing Azure Database for MySQL 5.7 server.
   >[!Important]
   > We recommend performing the upgrade first on a restored copy of the server rather than upgrading production directly. See [how to perform point-in-time restore](./how-to-restore-server-portal.md).
-2. From the overview page, select the Upgrade button in the toolbar
+2. On the **Overview** page, in the toolbar, select **Upgrade**.
   >[!Important]
   > Before upgrading, see the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
This feature will enable customers to perform in-place upgrades of their MySQL 5
:::image type="content" source="./media/how-to-upgrade/1-how-to-upgrade.png" alt-text="Screenshot showing Azure Database for MySQL Upgrade.":::
-3. In the Upgrade sidebar, verify Major Upgrade version to upgrade i.e 8.0.
+3. In the **Upgrade** sidebar, in the **MySQL version to upgrade** text box, verify the major MySQL version to upgrade to (8.0).
:::image type="content" source="./media/how-to-upgrade/2-how-to-upgrade.png" alt-text="Screenshot showing Upgrade.":::
-4. For Primary Server, select on confirmation checkbox, to confirm that all your replica servers are upgraded before primary server. Once confirmed that all your replicas are upgraded, Upgrade button will be enabled. For your read-replicas and standalone servers, Upgrade button will be enabled by default.
-
- :::image type="content" source="./media/how-to-upgrade/3-how-to-upgrade.png" alt-text="Screenshot showing confirmation.":::
+ Before you can upgrade your primary server, you first need to have upgraded any associated read replica servers. Until this is completed, **Upgrade** will be disabled.
-5. Once Upgrade button is enabled, you can select on Upgrade button to proceed with deployment.
+4. On the primary server, select the confirmation message to verify that all replica servers have been upgraded, and then select **Upgrade**.
:::image type="content" source="./media/how-to-upgrade/4-how-to-upgrade.png" alt-text="Screenshot showing upgrade.":::
-## Perform Planned Major version upgrade from MySQL 5.7 to MySQL 8.0 using Azure CLI
+ On read replica and standalone servers, **Upgrade** is enabled by default.
+
+## Perform a Planned major version upgrade from MySQL 5.7 to MySQL 8.0 using the Azure CLI
-Follow these steps to perform major version upgrade for your Azure Database of MySQL 5.7 server using Azure CLI.
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server using the Azure CLI, perform the following steps.
-1. Install [Azure CLI](/cli/azure/install-azure-cli) for Windows or use [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
+1. Install the [Azure CLI](/cli/azure/install-azure-cli) for Windows or use the [Azure CLI](../../cloud-shell/overview.md) in Azure Cloud Shell to run the upgrade commands.
   This upgrade requires version 2.40.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. Run `az version` to find the version and dependent libraries that are installed. To upgrade to the latest version, run `az upgrade`.
-2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command.
+2. After you sign in, run the [az mysql server upgrade](/cli/azure/mysql/server#az-mysql-server-upgrade) command.
   ```azurecli
   az mysql server upgrade --name testsvr --resource-group testgroup --subscription MySubscription --version 8.0
   ```
-3. Under confirmation prompt, type “y” for confirming or “n” to stop the upgrade process and enter.
+3. Under the confirmation prompt, type **y** to confirm or **n** to stop the upgrade process, and then press Enter.
+
+## Perform a major version upgrade from MySQL 5.7 to MySQL 8.0 on a read replica server using the Azure portal
+
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server to MySQL 8.0 on a read replica using the Azure portal, perform the following steps.
-## Perform major version upgrade from MySQL 5.7 to MySQL 8.0 on read replica using Azure portal
+1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
-1. In the Azure portal, select your existing Azure Database for MySQL 5.7 read replica server.
+2. On the **Overview** page, in the toolbar, select **Upgrade**.
-2. From the Overview page, select the Upgrade button in the toolbar.
   >[!Important]
   > Before upgrading, see the list of [features removed](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals) in MySQL 8.0.
   > Verify deprecated [sql_mode](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_sql_mode) values and remove/deselect them from your current Flexible Server 5.7 using the Server Parameters blade in the Azure portal to avoid deployment failure.
-3. In the Upgrade section, select Upgrade button to upgrade Azure database for MySQL 5.7 read replica server to 8.0 server.
+3. In the **Upgrade** section, select **Upgrade** to upgrade an Azure Database for MySQL 5.7 read replica server to MySQL 8.0.
-4. A notification will confirm that upgrade is successful.
+ A notification appears to confirm that the upgrade is successful.
-5. From the Overview page, confirm that your Azure database for MySQL read replica server version is 8.0.
+4. On the **Overview** page, confirm that your Azure Database for MySQL read replica server is running version 8.0.
-6. Now go to your primary server and perform major version upgrade on it.
+5. Now, go to your primary server and perform a major version upgrade on it.
## Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas
+To perform a major version upgrade of an Azure Database for MySQL 5.7 server to MySQL 8.0 with minimal downtime using read replica servers, perform the following steps.
+
1. In the Azure portal, select your existing Azure Database for MySQL 5.7 server.
2. Create a [read replica](./how-to-read-replicas-portal.md) from your primary server.
-3. Upgrade your [read replica to version](#perform-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-azure-cli) 8.0.
+3. [Upgrade](#perform-a-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-the-azure-cli) your read replica to version 8.0.
+
+4. After you confirm that the replica server is running version 8.0, stop your application from connecting to your primary server.
-4. Once you confirm that the replica server is running on version 8.0, stop your application from connecting to your primary server.
+5. Check replication status to ensure that the replica has caught up with the primary so that all data is in sync and that no new operations are being performed on the primary.
-5. Check replication status, and make sure replica is all caught up with primary, so all the data is in sync and ensure there are no new operations performed in primary.
-Confirm with the show slave status command on the replica server to view the replication status.
+6. Run the `SHOW SLAVE STATUS` command on the replica server to view the replication status.
   ```sql
   SHOW SLAVE STATUS\G
   ```
- If the state of Slave_IO_Running and Slave_SQL_Running are "yes" and the value of Seconds_Behind_Master is "0", replication is working well. Seconds_Behind_Master indicates how late the replica is. If the value isn't "0", it means that the replica is processing updates. Once you confirm Seconds_Behind_Master is "0" it's safe to stop replication.
+ If the state of Slave_IO_Running and Slave_SQL_Running is **yes** and the value of Seconds_Behind_Master is **0**, replication is working well. Seconds_Behind_Master indicates how late the replica is. If the value isn't **0**, then the replica is still processing updates. After you confirm that the value of Seconds_Behind_Master is **0**, it's safe to stop replication.
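The caught-up check above can be captured in a small helper; a minimal sketch (the function name and the parsed-status dictionary are illustrative, with field names taken from the `SHOW SLAVE STATUS` output described above):

```python
def safe_to_stop_replication(status: dict) -> bool:
    """Return True when the replica has fully caught up with the primary.

    `status` is assumed to be a parsed row from `SHOW SLAVE STATUS\\G`,
    for example:
    {"Slave_IO_Running": "Yes", "Slave_SQL_Running": "Yes",
     "Seconds_Behind_Master": 0}
    """
    return (
        status.get("Slave_IO_Running") == "Yes"
        and status.get("Slave_SQL_Running") == "Yes"
        and status.get("Seconds_Behind_Master") == 0
    )
```

Only when both replication threads are running and the lag is exactly zero is it safe to stop replication and promote the replica.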
-6. Promote your read replica to primary by stopping replication.
+7. Promote your read replica to primary by stopping replication.
-7. Set Server Parameter read_only to 0 that is, OFF to start writing on promoted primary.
+8. Set the server parameter `read_only` to **0** (OFF) to enable writes on the promoted primary.
- Point your application to the new primary (former replica) which is running server 8.0. Each server has a unique connection string. Update your application to point to the (former) replica instead of the source.
+9. Point your application to the new primary (former replica), which is running MySQL 8.0. Each server has a unique connection string; update your application to point to the (former) replica instead of the source.
>[!Note]
-> This scenario will have downtime during steps 4, 5 and 6 only.
+> This scenario incurs downtime only during steps 4 through 7.
## Frequently asked questions -- Will this cause downtime of the server and if so, how long?
+- **Will this cause downtime of the server and if so, how long?**
+ To have minimal downtime during upgrades, follow the steps in [Perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas). The server will be unavailable during the upgrade process, so we recommend you perform this operation during your planned maintenance window. The estimated downtime depends on the database size, storage size provisioned (IOPS provisioned), and the number of tables on the database. The upgrade time is directly proportional to the number of tables on the server. To estimate the downtime for your server environment, we recommend first performing the upgrade on a restored copy of the server. -- When will this upgrade feature be GA?
- The GA of this feature will be planned by December 2022. However, the feature is production ready and fully supported by Azure so you should run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server so you can estimate the downtime during upgrade, and perform application compatibility test before you run it on production.
+- **When will this upgrade feature be GA?**
+
+ GA of this feature is planned by December 2022. However, the feature is production ready and fully supported by Azure, so you can run it with confidence in your environment. As a recommended best practice, we strongly suggest you run and test it first on a restored copy of the server so you can estimate the downtime during the upgrade, and perform application compatibility testing before you run it on production.
+
+- **What happens to my backups after upgrade?**
-- What happens to my backups after upgrade?

All backups (automated/on-demand) taken before the major version upgrade, when used for restoration, will always restore to a server with the older version (5.7). All backups (automated/on-demand) taken after the major version upgrade will restore to a server with the upgraded version (8.0). It's highly recommended to take an on-demand backup before you perform the major version upgrade for an easy rollback.

## Next steps

-- Learn more on [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
+- Learn more about [how to configure scheduled maintenance](./how-to-maintenance-portal.md) for your Azure Database for MySQL flexible server.
- Learn about what's new in [MySQL version 8.0](https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html).
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
cd ~/drl/data/pub/gsfcdata/aqua/modis/
## Next steps
+To easily deploy downstream components necessary to receive and process spaceborne earth observation data using Azure Orbital Ground Station, see:
+- [Azure Orbital Integration](https://github.com/Azure/azure-orbital-integration)
+ For an end-to-end implementation that involves extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics, see: - [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
A comprehensive list of Azure Native ISV Services features is given below.
### Integrations -- Log and metrics: Use Microsoft Azure Monitor for collecting telemetry across all Azure environments.
+- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service in just a few steps. You can configure auto-discovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create additional infrastructure or write custom code.
- VNet injection: Provides private data plane access to Azure Native ISV services from customers’ virtual networks. - Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|Azure Event Grid| All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Event Grid.](../event-grid/network-security.md) | |Azure Service Bus | All public regions<br/>All Government regions | Supported with premium tier of Azure Service Bus. [Select for tiers](../service-bus-messaging/service-bus-premium-messaging.md) | GA <br/> [Learn how to create a private endpoint for Azure Service Bus.](../service-bus-messaging/private-link-service.md) | | Azure API Management | All public regions<br/> All Government regions | | Preview <br/> [Connect privately to API Management using a private endpoint.](../event-grid/network-security.md) |
+| Azure Logic Apps | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Logic Apps.](/azure/logic-apps/secure-single-tenant-workflow-virtual-network-private-endpoint) |
### Internet of Things (IoT)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Microsoft PowerBI (Microsoft.PowerBI/privateLinkServicesForPowerBI) | privatelink.analysis.windows.net </br> privatelink.pbidedicated.windows.net </br> privatelink.tip1.powerquery.microsoft.com | analysis.windows.net </br> pbidedicated.windows.net </br> tip1.powerquery.microsoft.com | | Azure Bot Service (Microsoft.BotService/botServices) / Bot | privatelink.directline.botframework.com | directline.botframework.com </br> europe.directline.botframework.com | | Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com |
+| Azure Data Health Data Services (Microsoft.HealthcareApis/workspaces) / healthcareworkspace | workspace.privatelink.azurehealthcareapis.com </br> fhir.privatelink.azurehealthcareapis.com </br> dicom.privatelink.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
+>[!Note]
+>In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for region codes:
+>- [All public clouds](https://download.microsoft.com/download/1/2/6/126a410b-0e06-45ed-b2df-84f353034fa1/AzureRegionCodesList.docx)
+ ### Government | Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
For Azure services, use the recommended zone names as described in the following
| Azure Cache for Redis (Microsoft.Cache/Redis) / redisCache | privatelink.redis.cache.usgovcloudapi.net | redis.cache.usgovcloudapi.net | | Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.us | azurehdinsight.us |
+>[!Note]
+>In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for region codes:
+>- [US Gov](../azure-government/documentation-government-developer-guide.md)
+ ### China | Private link resource type / Subresource |Private DNS zone name | Public DNS zone forwarders |
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
The following information lists the known limitations to the use of private endp
| No more than 50 members in an Application Security Group. | Fifty is the number of IP Configurations that can be tied to each respective ASG that's coupled to the NSG on the private endpoint subnet. Connection failures may occur with more than 50 members. | | Destination port ranges supported up to a factor of 250K. | Destination port ranges are supported as a multiplication of SourceAddressPrefixes, DestinationAddressPrefixes, and DestinationPortRanges. </br></br> Example inbound rule: </br> 1 source * 1 destination * 4K portRanges = 4K Valid </br> 10 sources * 10 destinations * 10 portRanges = 1K Valid </br> 50 sources * 50 destinations * 50 portRanges = 125K Valid </br> 50 sources * 50 destinations * 100 portRanges = 250K Valid </br> 100 sources * 100 destinations * 100 portRanges = 1M Invalid, NSG has too many sources/destinations/ports. | | Source port filtering is interpreted as * | Source port filtering isn't actively used as valid scenario of traffic filtering for traffic destined to a private endpoint. |
-| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> UK North </br> UK South 2 </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
+| Feature unavailable in select regions. | Currently unavailable in the following regions: </br> West India </br> Australia Central 2 </br> South Africa West </br> Brazil Southeast |
| Dual port NSG rules are unsupported. | If multiple port ranges are used with NSG rules, only the first port range is honored for allow rules and deny rules. Rules with multiple port ranges are defaulted to *deny all* instead of to denying specific ports. </br><br>For more information, see the UDR rule example in the next table. | The following table shows an example of a dual port NSG rule:
purview Create A Custom Classification And Classification Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-a-custom-classification-and-classification-rule.md
Title: Create a custom classification and classification rule description: Learn how to create custom classifications to define data types in your data estate that are unique to your organization in Microsoft Purview.--++
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Title: Create and manage credentials for scans description: Learn about the steps to create and manage credentials in Microsoft Purview.--++
purview Manage Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-data-sources.md
Title: How to manage multicloud data sources description: Learn how to register new data sources, manage collections of data sources, and view sources in Microsoft Purview.--++
purview Register Scan Azure Files Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-files-storage-source.md
Title: Connect to and manage Azure Files description: This guide describes how to connect to Azure Files in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Files source.--++
purview Register Scan Azure Multiple Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-multiple-sources.md
Title: Connect to and manage multiple Azure sources description: This guide describes how to connect to multiple Azure sources in Microsoft Purview at once, and use Microsoft Purview's features to scan and manage your sources.--++
purview Register Scan Azure Synapse Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-synapse-analytics.md
Title: 'Connect to and manage dedicated SQL pools (formerly SQL DW)' description: This guide describes how to connect to dedicated SQL pools (formerly SQL DW) in Microsoft Purview, and use Microsoft Purview's features to scan and manage your dedicated SQL pools source.--++
purview Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-on-premises-sql-server.md
Title: Connect to and manage on-premises SQL server instances description: This guide describes how to connect to on-premises SQL server instances in Microsoft Purview, and use Microsoft Purview's features to scan and manage your on-premises SQL server source.--++
purview Register Scan Synapse Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-synapse-workspace.md
Title: Connect to and manage Azure Synapse Analytics workspaces description: This guide describes how to connect to Azure Synapse Analytics workspaces in Microsoft Purview, and use Microsoft Purview's features to scan and manage your Azure Synapse Analytics workspace source.--++
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md
Title: Troubleshoot your connections in Microsoft Purview description: This article explains the steps to troubleshoot your connections in Microsoft Purview.--++
purview Tutorial Register Scan On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-register-scan-on-premises-sql-server.md
Title: 'Tutorial: Register and scan an on-premises SQL Server' description: This tutorial describes how to register an on-prem SQL Server to Microsoft Purview, and scan the server using a self-hosted IR. --++
role-based-access-control Resource Provider Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/resource-provider-operations.md
Previously updated : 10/12/2022 Last updated : 10/31/2022
Azure service: [Azure NetApp Files](../azure-netapp-files/index.yml)
> | Microsoft.NetApp/locations/read | Reads a location wide operation. | > | Microsoft.NetApp/locations/checknameavailability/action | Check if resource name is available | > | Microsoft.NetApp/locations/checkfilepathavailability/action | Check if file path is available |
-> | Microsoft.NetApp/locations/checkinventory/action | Checks ReservedCapacity inventory. |
> | Microsoft.NetApp/locations/operationresults/read | Reads an operation result resource. | > | Microsoft.NetApp/locations/quotaLimits/read | Reads a Quotalimit resource type. | > | Microsoft.NetApp/locations/RegionInfo/read | Reads a regionInfo resource. |
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
The data source definition specifies the data to index, credentials, and policie
api-key: [admin key] {
- "name" : "hotel-mysql-ds"
+ "name" : "hotel-mysql-ds",
"description" : "[Description of MySQL data source]", "type" : "mysql", "credentials" : {
search Search Pagination Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-pagination-page-layout.md
Title: How to work with search results
-description: Structure and sort search results, get a document count, and add content navigation to search results in Azure Cognitive Search.
+description: Define search result composition, get a document count, sort results, and add content navigation to search results in Azure Cognitive Search.
- Previously updated : 07/22/2022+ Last updated : 11/02/2022 # How to work with search results in Azure Cognitive Search
Parameters on the query determine:
+ Selection of fields within results + Count of matches found in the index for the query + Number of results in the response (up to 50, by default)
-+ Sort order of results
++ Sort order + Highlighting of terms within a result, matching on either the whole or partial term in the body ## Result composition
Occasionally, the substance and not the structure of results are unexpected. For
## Counting matches
-The count parameter returns the number of documents in the index that are considered a match for the query. To return the count, add **`$count=true`** to the query request. There is no maximum value imposed by the search service. Depending on your query and the content of your documents, the count could be as high as every document in the index.
+The count parameter returns the number of documents in the index that are considered a match for the query. To return the count, add **`$count=true`** to the query request. There's no maximum value imposed by the search service. Depending on your query and the content of your documents, the count could be as high as every document in the index.
Count is accurate when the index is stable. If the system is actively adding, updating, or deleting documents, the count will be approximate, excluding any documents that aren't fully indexed.
To control the paging of all documents returned in a result set, add `$top` and
+ Return the second set, skipping the first 15 to get the next 15: `$top=15&$skip=15`. Repeat for the third set of 15: `$top=15&$skip=30`
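The `$skip` values above follow a simple arithmetic pattern; a minimal sketch of the calculation (the helper name is illustrative):

```python
def paging_params(page: int, page_size: int = 15) -> str:
    """Build the $top/$skip query fragment for a 1-based page number.

    Page n skips the (n - 1) pages that came before it.
    """
    skip = page_size * (page - 1)
    return f"$top={page_size}&$skip={skip}"

# paging_params(2) -> "$top=15&$skip=15"
# paging_params(3) -> "$top=15&$skip=30"
```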
-The results of paginated queries aren't guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there is no caching or snapshot of results, such as those found in a general purpose database).
+The results of paginated queries aren't guaranteed to be stable if the underlying index is changing. Paging changes the value of `$skip` for each page, but each query is independent and operates on the current view of the data as it exists in the index at query time (in other words, there's no caching or snapshot of results, such as those found in a general purpose database).
Following is an example of how you might get duplicates. Assume an index with four documents:
Notice that document 2 is fetched twice. This is because the new document 5 has
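The duplicate-fetch behavior described above can be simulated locally; a minimal sketch assuming a toy index ordered by an ascending rating (document IDs and ratings are illustrative):

```python
def page(index, top, skip):
    """Simulate $top/$skip paging over an index ordered by rating (ascending)."""
    ordered = sorted(index, key=lambda d: d["rating"])
    return [d["id"] for d in ordered[skip:skip + top]]

index = [
    {"id": 1, "rating": 1},
    {"id": 2, "rating": 3},
    {"id": 3, "rating": 4},
    {"id": 4, "rating": 5},
]

first = page(index, top=2, skip=0)    # fetches documents 1 and 2
# A new document arrives between page requests, with a smaller rating than document 2.
index.append({"id": 5, "rating": 2})
second = page(index, top=2, skip=2)   # fetches documents 2 and 3: document 2 appears twice
```

Because each request re-sorts the current state of the index, the insertion shifts document 2 from the first page into the second, so it is returned twice.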
## Ordering results
-In a full text search query, results can be ranked by a search score, a semantic re-ranker score (if using [semantic search](semantic-search-overview.md)), or by an **`$orderby`** expression in the query request.
+In a full text search query, results can be ranked by a search score, a semantic reranker score (if using [semantic search](semantic-search-overview.md)), or by an **`$orderby`** expression in the query request that specifies an explicit sort order.
-A @search.score equal to 1.00 indicates an un-scored or un-ranked result set, where the 1.0 score is uniform across all results. Un-scored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over un-scored results, an **`$orderby`** expression will help you achieve that objective.
+Sorting methodologies aren't designed to be used together. For example, if you're sorting with **`$orderby`** for primary sorting, you can't apply a secondary sort based on search score (because the search score will be uniform).
+
+### Ordering by search score
For full text search queries, results are automatically ranked by a search score, calculated based on term frequency and proximity in a document (derived from [TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)), with higher scores going to documents having more or stronger matches on a search term.
+The "@search.score" range is 0 up to (but not including) 1.00. A "@search.score" equal to 1.00 indicates an unscored or unranked result set, where the 1.0 score is uniform across all results. Unscored results occur when the query form is fuzzy search, wildcard or regex queries, or an empty search (`search=*`). If you need to impose a ranking structure over unscored results, an **`$orderby`** expression will help you achieve that objective.
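A quick way to recognize an unscored result set, per the description above, is to check whether every "@search.score" is exactly 1.0; a minimal sketch (the helper name is illustrative):

```python
def is_unscored(results) -> bool:
    """True when every @search.score is the uniform 1.0, i.e. the set is unranked."""
    return all(r["@search.score"] == 1.0 for r in results)
```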
+ Search scores convey general sense of relevance, reflecting the strength of match relative to other documents in the same result set. But scores aren't always consistent from one query to the next, so as you work with queries, you might notice small discrepancies in how search documents are ordered. There are several explanations for why this might occur. | Cause | Description |
Search scores convey general sense of relevance, reflecting the strength of matc
| Multiple replicas | For services using multiple replicas, queries are issued against each replica in parallel. The index statistics used to calculate a search score are calculated on a per-replica basis, with results merged and ordered in the query response. Replicas are mostly mirrors of each other, but statistics can differ due to small differences in state. For example, one replica might have deleted documents contributing to their statistics, which were merged out of other replicas. Typically, differences in per-replica statistics are more noticeable in smaller indexes. For more information about this condition, see [Concepts: search units, replicas, partitions, shards](search-capacity-planning.md#concepts-search-units-replicas-partitions-shards) in the capacity planning documentation. | | Identical scores | If multiple documents have the same score, any one of them might appear first. |
-### How to get consistent ordering
+### Ordering by the semantic reranker
+
+If you're using [semantic search](semantic-search-overview.md), the "@search.rerankerScore" determines the sort order of your results.
+
+The "@search.rerankerScore" range is 1 to 4.00, where a higher score indicates a stronger semantic match.
+
+### Ordering with $orderby
If consistent ordering is an application requirement, you can explicitly define an [**`$orderby`** expression](query-odata-filter-orderby-syntax.md) on a field. Only fields that are indexed as "sortable" can be used to order results. Fields commonly used in an **`$orderby`** include rating, date, and location. Filtering by location requires that the filter expression calls the [**`geo.distance()` function**](search-query-odata-geo-spatial-functions.md?#order-by-examples), in addition to the field name.
-Another approach that promotes order consistency is using a [custom scoring profile](index-add-scoring-profiles.md). Scoring profiles give you more control over the ranking of items in search results, with the ability to boost matches found in specific fields. The additional scoring logic can help override minor differences among replicas because the search scores for each document are farther apart. We recommend the [ranking algorithm](index-ranking-similarity.md) for this approach.
+Numeric fields (Edm.Double, Edm.Int32, Edm.Int64) are sorted in numeric order (for example, 1, 2, 10, 11, 20).
+
+String fields (Edm.String, Edm.ComplexType subfields) are sorted in either [ASCII sort order](https://en.wikipedia.org/wiki/ASCII#Printable_characters) or [Unicode sort order](https://en.wikipedia.org/wiki/List_of_Unicode_characters), depending on the language. You can't sort collections of any type.
+
++ Numeric content in string fields is sorted alphabetically (1, 10, 11, 2, 20).
+
++ Upper case strings are sorted ahead of lower case (APPLE, Apple, BANANA, Banana, apple, banana). You can assign a [text normalizer](search-normalizers.md) to preprocess the text before sorting to change this behavior. Using the lowercase tokenizer on a field will have no effect on sorting behavior because Cognitive Search sorts on a non-analyzed copy of the field.
+
++ Strings that lead with diacritics appear last (Äpfel, Öffnen, Üben).
+
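You don't need a search service to see these orderings. This is a local illustration only, not a call to Cognitive Search: Python's code-point comparison of strings matches ASCII order for the examples above.

```python
# Local illustration of ASCII (code-point) sort order for strings.
# Python's sorted() compares strings by Unicode code point, which
# matches ASCII ordering for these examples.

numeric_strings = ["1", "2", "10", "11", "20"]
print(sorted(numeric_strings))
# → ['1', '10', '11', '2', '20'] (alphabetical, not numeric)

mixed_case = ["apple", "APPLE", "Banana", "banana", "Apple", "BANANA"]
print(sorted(mixed_case))
# → ['APPLE', 'Apple', 'BANANA', 'Banana', 'apple', 'banana']
```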
+### Use a scoring profile to influence relevance
+
+Another approach that promotes order consistency is using a [custom scoring profile](index-add-scoring-profiles.md). Scoring profiles give you more control over the ranking of items in search results, with the ability to boost matches found in specific fields. The extra scoring logic can help override minor differences among replicas because the search scores for each document are farther apart. We recommend the [ranking algorithm](index-ranking-similarity.md) for this approach.
## Hit highlighting

Hit highlighting refers to text formatting (such as bold or yellow highlights) applied to matching terms in a result, making it easy to spot the match. Highlighting is useful for longer content fields, such as a description field, where the match isn't immediately obvious.
-Notice that highlighting is applied to individual terms. There is no highlight capability for the contents of an entire field. If you want highlighting over a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
+Notice that highlighting is applied to individual terms. There's no highlight capability for the contents of an entire field. If you want to highlight over a phrase, you'll have to provide the matching terms (or phrase) in a quote-enclosed query string. This technique is described further on in this section.
Hit highlighting instructions are provided on the [query request](/rest/api/searchservice/search-documents). Queries that trigger query expansion in the engine, such as fuzzy and wildcard search, have limited support for hit highlighting.
In a keyword search, each term is scanned for independently. A query for "divine
### Keyword search highlighting
-Within a highlighted field, formatting is applied to whole terms. For example, on a match against "The Divine Secrets of the Ya-Ya Sisterhood", formatting is applied to each term separately, even though they are consecutive.
+Within a highlighted field, formatting is applied to whole terms. For example, on a match against "The Divine Secrets of the Ya-Ya Sisterhood", formatting is applied to each term separately, even though they're consecutive.
```json
"@odata.count": 39,
search Search Query Odata Orderby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-orderby.md
Previously updated : 07/18/2022
Last updated : 11/02/2022

# OData $orderby syntax in Azure Cognitive Search
-In Azure Cognitive Search, the **$orderby** parameter specifies custom sort order for search results. This article describes the OData syntax of **$orderby** and provides examples.
+In Azure Cognitive Search, the **$orderby** parameter specifies a custom sort order for search results. This article describes the OData syntax of **$orderby** and provides examples.
-Field path construction and constants are described in the [OData language overview in Azure Cognitive Search](query-odata-filter-orderby-syntax.md). For more information about sorting and search results composition, see [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md).
+Field path construction and constants are described in the [OData language overview in Azure Cognitive Search](query-odata-filter-orderby-syntax.md). For more information about sorting behaviors, see [Ordering results](search-pagination-page-layout.md#ordering-results).
## Syntax
Sort hotels in descending order by search.score and rating, and then in ascendin
$orderby=search.score() desc,Rating desc,geo.distance(Location, geography'POINT(-122.131577 47.678581)') asc
```
-## Next steps
+## See also
- [How to work with search results in Azure Cognitive Search](search-pagination-page-layout.md)
- [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md)
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
Previously updated : 03/16/2022
Last updated : 11/02/2022

# Return a semantic answer in Azure Cognitive Search
All prerequisites that apply to [semantic queries](semantic-how-to-query-request
+ Query strings entered by the user must be recognizable as a question (what, where, when, how).
-+ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned.
++ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer will be returned.

## What is a semantic answer?
Answers are returned as an independent, top-level object in the query response p
<a name="query-params"></a>
-## Formulate a query rest for "answers"
+## Formulate a REST query for "answers"
-The approach for listing fields in priority order has changed recently, with "semanticConfiguration" replacing "searchFields". If you're currently using searchFields, update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
+The approach for listing fields in priority order has changed, with "semanticConfiguration" replacing "searchFields". If you're currently using "searchFields", update your code to the 2021-04-30-Preview API version and use "semanticConfiguration" instead.
### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)

To return a semantic answer, the query must have the semantic "queryType", "queryLanguage", "semanticConfiguration", and the "answers" parameters. Specifying these parameters doesn't guarantee an answer, but the request must include them for answer processing to occur.
-The "semanticConfiguration" parameter is crucial to returning a high-quality answer.
+The "semanticConfiguration" parameter is required. It's defined in a search index, and then referenced in a query, as shown below.
```json
{
The "semanticConfiguration" parameter is crucial to returning a high-quality ans
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#searchfields) for details.
++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.

+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
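For instance, a request body fragment asking for up to three answers might look like this (the configuration name "my-semantic-config" is a placeholder):

```json
{
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "semanticConfiguration": "my-semantic-config",
  "answers": "extractive|count-3"
}
```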
The "searchFields" parameter is crucial to returning a high-quality answer, both
+ "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#searchfields) for details.
++ "searchFields" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Set searchFields](semantic-how-to-query-request.md#2buse-searchfields-for-field-prioritization) for details.

+ For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
Within @search.answers:
+ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general you will see the same documents in the top positions within each array.
-Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Create a semantic query](semantic-how-to-query-request.md) for details.
+Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Configure semantic ranking](semantic-how-to-query-request.md) for details.
## Tips for producing high-quality answers
For best results, return semantic answers on a document corpus having the follow
+ [Semantic search overview](semantic-search-overview.md)
+ [Semantic ranking algorithm](semantic-ranking.md)
+ [Similarity ranking algorithm](index-ranking-similarity.md)
-+ [Create a semantic query](semantic-how-to-query-request.md)
++ [Configure semantic ranking](semantic-how-to-query-request.md)
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Title: Create a semantic query
+ Title: Configure semantic search
-description: Set a semantic query type to attach the deep learning models to query processing, inferring intent and context as part of search rank and relevance.
+description: Set a semantic query type to attach the deep learning models of semantic search.
- Previously updated : 12/17/2021
+ Last updated : 11/01/2022
-# Create a query that invokes semantic ranking and returns semantic captions
+# Configure semantic ranking and return captions in search results
> [!IMPORTANT]
-> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and beta SDKs. These features are billable. For more information about, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
+> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through Azure portal, preview REST APIs, and beta SDKs. This feature is billable. See [Availability and pricing](semantic-search-overview.md#availability-and-pricing).
-Semantic search is a premium feature in Azure Cognitive Search that invokes a semantic ranking algorithm over a result set and returns semantic captions (and optionally [semantic answers](semantic-answers.md)), with highlights over the most relevant terms and phrases. Both captions and answers are returned in query requests formulated using the "semantic" query type.
+In this article, you'll learn how to invoke a semantic ranking algorithm over a result set, promoting the most semantically relevant results to the top of the stack. You can also get semantic captions, with highlights over the most relevant terms and phrases, and [semantic answers](semantic-answers.md).
-Captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what part of your content has the characteristics of a caption or answer, but it does not compose new sentences or phrases. For this reason, content that includes explanations or definitions work best for semantic search.
+There are two main activities to perform:
+
++ Add a semantic configuration to an index
++ Add parameters to a query request

## Prerequisites
-+ A Cognitive Search service at a Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), located in one of these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe. If you have an existing S1 or greater service in one of these regions, you can enable semantic search on your service without having to create a new one.
++ A search service on Standard tier (S1, S2, S3) or Storage Optimized tier (L1, L2), in these regions: Australia East, East US, East US 2, North Central US, South Central US, West US, West US 2, North Europe, UK South, West Europe.
-+ [Semantic search enabled on your search service](semantic-search-overview.md#enable-semantic-search).
+ If you have an existing S1 or greater service in one of these regions, you can enable semantic search without having to create a new service.
-+ An existing search index with content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
++ Semantic search [enabled on your search service](semantic-search-overview.md#enable-semantic-search).
-+ A search client for sending queries and updating indexes.
++ An existing search index with rich content in a [supported query language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive.
- The search client must support preview REST APIs on the query request. You can use [Postman](search-get-started-rest.md), another web client, or code that makes REST calls to the preview APIs. [Search explorer](search-explorer.md) in Azure portal can be used to submit a semantic query. You can also use [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5).
++ Review the [Semantic search overview](semantic-search-overview.md) if you need an introduction to the feature.
-+ A [query request](/rest/api/searchservice/preview-api/search-documents) must include `queryType=semantic` and other parameters described in this article.
+> [!NOTE]
+> Captions and answers are extracted verbatim from text in the search document. The semantic subsystem determines what part of your content has the characteristics of a caption or answer, but it doesn't compose new sentences or phrases. For this reason, content that includes explanations or definitions work best for semantic search.
-## What's a semantic query type?
+## 1 - Choose a client
-In Cognitive Search, a query is a parameterized request that determines query processing and the shape of the response. A *semantic query* has [parameters](#query-using-rest) that invoke the semantic reranking model that can assess the context and meaning of matching results, promote more relevant matches to the top, and return semantic answers and captions.
+You'll need a search client that supports preview APIs on the query request. Here are some options:
-The approach for listing fields in priority order has changed recently, with semanticConfiguration replacing searchFields. If you are currently using searchFields, please update your code to the 2021-04-30-Preview API version and use semanticConfiguration instead.
++ [Search explorer](search-explorer.md) in Azure portal, recommended for initial exploration.
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
++ [Postman Desktop App](https://www.postman.com/downloads/) using the [2021-04-30-Preview REST APIs](/rest/api/searchservice/preview-api/). See this [Quickstart](search-get-started-rest.md) for help with setting up your requests.
-The following request is representative of a minimal semantic query (without answers).
++ [Azure.Search.Documents 11.4.0-beta.5](https://www.nuget.org/packages/Azure.Search.Documents/11.4.0-beta.5) in the Azure SDK for .NET Preview.
-```http
-POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2021-04-30-Preview     
-{   
- "search": " Where was Alan Turing born?",   
- "queryType": "semantic",
- "semanticConfiguration": "my-semantic-config",
- "queryLanguage": "en-us"
-}
-```
++ [Azure.Search.Documents 11.3.0b6](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-search-documents/11.3.0b6/azure.search.documents.aio.html) in the Azure SDK for Python.
-### [**searchFields**](#tab/searchFields)
+## 2 - Create a semantic configuration
-The following request is representative of a minimal semantic query (without answers).
-
-```http
-POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview     
-{   
- "search": " Where was Alan Turing born?",   
- "queryType": "semantic",
- "searchFields": "title,url,body",
- "queryLanguage": "en-us"
-}
-```
---
-As with all queries in Cognitive Search, the request targets the documents collection of a single index. Furthermore, a semantic query undergoes the same sequence of parsing, analysis, scanning, and scoring as a non-semantic query.
+> [!IMPORTANT]
+> A semantic configuration is required for the 2021-04-30-Preview REST APIs, Search explorer, and some versions of the beta SDKs. If you're using the 2020-06-30-preview REST API, skip this step and use the ["searchFields" approach for field prioritization](#2buse-searchfields-for-field-prioritization) instead.
-The difference lies in relevance and scoring. As defined in this preview release, a semantic query is one whose *results* are reranked using a semantic language model, providing a way to surface the matches deemed most relevant by the semantic ranker, rather than the scores assigned by the default similarity ranking algorithm.
+A *semantic configuration* specifies how fields are used in semantic ranking. It gives the underlying models hints about which index fields are most important for semantic ranking, captions, highlights, and answers.
-Only the top 50 matches from the initial results can be semantically ranked, and all results include captions in the response. Optionally, you can specify an **`answer`** parameter on the request to extract a potential answer. For more information, see [Semantic answers](semantic-answers.md).
+You'll add a semantic configuration to your [index definition](/rest/api/searchservice/preview-api/create-or-update-index). The tabbed sections below provide instructions for the REST APIs, Azure portal, and the .NET SDK Preview.
-## Create a semantic configuration
+You can add or update a semantic configuration at any time without rebuilding your index. When you issue a query, you'll add a parameter (one per query) that specifies which semantic configuration to use for the query.
-> [!NOTE]
-> Semantic configurations are a new addition to the 2021-04-30-Preview API and are now required for semantic queries. If using 2020-06-30-Preview, **searchFields** is used instead of **semanticConfiguration**. We recommend upgrading to 2021-04-30-Preview and using **semanticConfiguration** for best results.
+1. Review the properties you'll need to specify. A semantic configuration has a name and at least one each of the following properties:
-In order to get the best results from semantic search, it's important to give the underlying models hints about which fields in your index are most important for semantic ranking, captions, highlights, and answers. To provide that information, you'll need to create a semantic configuration.
+ + **Title field** - A title field should be a concise description of the document, ideally a string that is under 25 words. This field could be the title of the document, name of the product, or item in your search index. If you don't have a title in your search index, leave this field blank.
+ + **Content fields** - Content fields should contain text in natural language form. Common examples of content are the body of a document, the description of a product, or other free-form text.
+ + **Keyword fields** - Keyword fields should be a list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item.
-A semantic configuration contains properties to list three different types of fields, which map back to the inputs the underlying models for semantic search expect:
+ You can only specify one title field but you can specify as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields may get truncated.
-+ **Title field** - A title field should be a concise description of the document, ideally a string that is under 25 words. This could be the title of the document, name of the product, or item in your search index. If you don't have a title in your search index, leave this field blank.
-+ **Content fields** - Content fields should contain text in natural language form. Common examples of content are the text of a document, the description of a product, or other free-form text.
-+ **Keyword fields** - Keyword fields should be a list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item.
+1. For the above properties, determine which fields to assign.
-You can only specify a single title field as part of your semantic configuration but you can specify as many content and keyword fields as you like. However, it's important that you list the content and keyword fields in priority order because lower priority fields may get truncated. Fields listed first will be given higher priority.
+ A field must be a [supported data type](/rest/api/searchservice/supported-data-types) and it should contain strings. If you happen to include an invalid field, there's no error, but those fields won't be used in semantic ranking.
-You're only required to specify one field between `titleField`, `prioritizedContentFields`, and `prioritizedKeywordsFields`, but it's best to add the fields to your semantic configuration if they exist in your search index.
+ | Data type | Example from hotels-sample-index |
+ |--|-|
+ | Edm.String | HotelName, Category, Description |
+ | Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
+ | Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-Similar to [scoring profiles](index-add-scoring-profiles.md), semantic configurations are a part of your [index definition](/rest/api/searchservice/preview-api/create-or-update-index) and can be updated at any time without rebuilding your index. When you issue a query, you'll add the `semanticConfiguration` that specifies which semantic configuration to use for the query.
+ > [!NOTE]
+ > Subfields of Collection(Edm.ComplexType) fields aren't currently supported by semantic search and won't be used for semantic ranking, captions, or answers.
### [**Azure portal**](#tab/portal)
Similar to [scoring profiles](index-add-scoring-profiles.md), semantic configura
### [**REST API**](#tab/rest)
- ```json
-"semantic": {
- "configurations": [
- {
- "name": "my-semantic-config",
- "prioritizedFields": {
- "titleField": {
- "fieldName": "hotelName"
- },
- "prioritizedContentFields": [
- {
- "fieldName": "description"
- },
- {
- "fieldName": "description_fr"
- }
- ],
- "prioritizedKeywordsFields": [
- {
- "fieldName": "tags"
- },
- {
- "fieldName": "category"
+1. Formulate a [Create or Update Index](/rest/api/searchservice/preview-api/create-or-update-index?branch=main) request.
+
+1. Add a semantic configuration to the index definition, perhaps after `scoringProfiles` or `suggesters`.
+
+ ```json
+ "semantic": {
+ "configurations": [
+ {
+ "name": "my-semantic-config",
+ "prioritizedFields": {
+ "titleField": {
+ "fieldName": "hotelName"
+ },
+ "prioritizedContentFields": [
+ {
+ "fieldName": "description"
+ },
+ {
+ "fieldName": "description_fr"
+ }
+ ],
+ "prioritizedKeywordsFields": [
+ {
+ "fieldName": "tags"
+ },
+ {
+ "fieldName": "category"
+ }
+ ]
}
- ]
- }
+ }
+ ]
}
- ]
- }
-```
+ ```
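A query then references the configuration by name. A minimal sketch (the service, index, and configuration names are placeholders):

```http
POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2021-04-30-Preview
{
  "search": "where was Alan Turing born?",
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config",
  "queryLanguage": "en-us"
}
```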
### [**.NET SDK**](#tab/sdk)
+Use the [SemanticConfiguration class](/dotnet/api/azure.search.documents.indexes.models.semanticconfiguration?view=azure-dotnet-preview&preserve-view=true) in the Azure SDK for .NET Preview.
+
```c#
var definition = new SearchIndex(indexName, searchFields);
adminClient.CreateOrUpdateIndex(definition);
-To see an example of creating a semantic configuration and using it to issue a semantic query, check out the
+> [!TIP]
+> To see an example of creating a semantic configuration and using it to issue a semantic query, check out the
[semantic search Postman sample](https://github.com/Azure-Samples/azure-search-postman-samples/tree/master/semantic-search).
-### Allowed data types
+## 2b - Use searchFields for field prioritization
-When selecting fields for your semantic configuration, choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types). If you happen to include an invalid field, there is no error, but those fields won't be used in semantic ranking.
+This step is only for solutions using the 2020-06-30-Preview REST API or a beta SDK that doesn't support semantic configurations. Instead of setting field prioritization in the index through a semantic configuration, you'll set the priority at query time, using the "searchFields" parameter of a query.
-| Data type | Example from hotels-sample-index |
-|--|-|
-| Edm.String | HotelName, Category, Description |
-| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
-| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
-
-> [!NOTE]
-> Subfields of Collection(Edm.ComplexType) fields are not currently supported by semantic search and won't be used for semantic ranking, captions, or answers.
-
-## Query in Azure portal
-
-[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To create a semantic query in the portal, follow the steps below:
-
-1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
-
-1. Click **Search explorer** at the top of the overview page.
-
-1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-
-1. In Search explorer, set query options that enable semantic queries, semantic configurations, and spell correction. You can also paste the required query parameters into the query string.
--
-## Query using REST
-
-Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) to formulate the request programmatically. A response includes captions and highlighting automatically. If you want spelling correction or answers in the response, add **`speller`** or **`answers`** to the request.
-
-The following example uses the [hotels-sample-index](search-get-started-portal.md) to create a semantic query request with spell check, semantic answers, and captions:
-
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-
-```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview     
-{
- "search": "newer hotel near the water with a great restaurant",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "semanticConfiguration": "my-semantic-config",
- "speller": "lexicon",
- "answers": "extractive|count-3",
- "captions": "extractive|highlight-true",
- "highlightPreTag": "<strong>",
- "highlightPostTag": "</strong>",
- "select": "HotelId,HotelName,Description,Category",
- "count": true
-}
-```
-
-The following table summarizes the parameters used in a semantic query. For a list of all parameters in a request, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents)
-
-| Parameter | Type | Description |
-|--|-|-|
-| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
-| queryLanguage | String | Required for semantic queries. The lexicon you specify applies equally to semantic ranking, captions, answers, and spell check. For more information, see [supported languages (REST API reference)](/rest/api/searchservice/preview-api/search-documents#queryLanguage). |
-| semanticConfiguration | String | Required for semantic queries. The name of your [semantic configuration](#create-a-semantic-configuration). </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Create a semantic configuration](#create-a-semantic-configuration). |
-| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
-| answers |String | Optional parameters that specify whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of ten. The default is one. This example shows a count of three answers: `extractive|count-3`. For more information, see [Return semantic answers](semantic-answers.md).|
-| captions |String | Optional parameters that specify whether semantic captions are included in the result. Currently, only "extractive" is implemented. Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`. For more information, see [Return semantic answers](semantic-answers.md).|
-
-### [**searchFields**](#tab/searchFields)
+Using "searchFields" for field prioritization was an early implementation detail that won't be supported once semantic search exits public preview. We encourage you to use semantic configurations if your application requirements allow it.
```http
-POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview     
-{
- "search": "newer hotel near the water with a great restaurant",
- "queryType": "semantic",
- "queryLanguage": "en-us",
- "searchFields": "HotelName,Category,Description",
- "speller": "lexicon",
- "answers": "extractive|count-3",
- "highlightPreTag": "<strong>",
- "highlightPostTag": "</strong>",
- "select": "HotelId,HotelName,Description,Category",
- "count": true
+POST https://[service name].search.windows.net/indexes/[index name]/docs/search?api-version=2020-06-30-Preview
+{
+ "search": "Where was Alan Turing born?",
+ "queryType": "semantic",
+ "searchFields": "title,url,body",
+ "queryLanguage": "en-us"
}
```
-The following table summarizes the parameters used in a semantic query. For a list of all parameters in a request, see [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents)
-
-| Parameter | Type | Description |
-|--|-|-|
-| queryType | String | Valid values include simple, full, and semantic. A value of "semantic" is required for semantic queries. |
-| queryLanguage | String | Required for semantic queries. The lexicon you specify applies equally to semantic ranking, captions, answers, and spell check. For more information, see [supported languages (REST API reference)](/rest/api/searchservice/preview-api/search-documents#queryLanguage). |
-| searchFields | String | A comma-delimited list of searchable fields. Specifies the fields over which semantic ranking occurs, from which captions and answers are extracted. </br></br>In contrast with simple and full query types, the order in which fields are listed determines precedence. For more usage instructions, see [Step 2: Set searchFields](#searchfields). |
-| speller | String | Optional parameter, not specific to semantic queries, that corrects misspelled terms before they reach the search engine. For more information, see [Add spell correction to queries](speller-how-to-add.md). |
-| answers |String | Optional parameters that specify whether semantic answers are included in the result. Currently, only "extractive" is implemented. Answers can be configured to return a maximum of ten. The default is one. This example shows a count of three answers: `extractive|count-3`. For more information, see [Return semantic answers](semantic-answers.md).|
----
-### Formulate the request
-
-This section steps through query formulation.
-
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
-
-#### Step 1: Set queryType and queryLanguage
-
-Add the following parameters to the rest. Both parameters are required.
-
-```json
-"queryType": "semantic",
-"queryLanguage": "en-us",
-```
-
-The queryLanguage must be a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage) and it must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then queryLanguage should also be French language variant.
+Field order is critical because the semantic ranker limits the amount of content it can process while still delivering a reasonable response time. Content from fields at the start of the list are more likely to be included; content from the end could be truncated if the maximum limit is reached. For more information, see [Pre-processing during semantic ranking](semantic-ranking.md#pre-processing).
-In a query request, if you are also using [spell correction](speller-how-to-add.md), the queryLanguage you set applies equally to speller, answers, and captions. There is no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you are using that feature, you must set queryLanguage to one from that list.
++ If you're specifying just one field, choose a descriptive field where the answer to semantic queries might be found, such as the main content of a document.
-While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
++ For two or more fields in searchFields:
-<a name="searchfields"></a>
+ + The first field should always be concise (such as a title or name), ideally a string that is under 25 words.
-#### Step 2: Set semanticConfiguration
+ + If the index has a URL field that is human readable such as `www.domain.com/name-of-the-document-and-other-details` (rather than machine focused, such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there's no concise title field).
-Add a semanticConfiguration to the request. A semantic configuration is required and important for getting the best results from semantic search.
+ + Follow the above fields with other descriptive fields, where the answer to semantic queries may be found, such as the main content of a document.
-```json
-"semanticConfiguration": "my-semantic-config",
-```
+When setting "searchFields", choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types):
-The [semantic configuration](#create-a-semantic-configuration) is used to tell semantic search's models which fields are most important for reranking search results based on semantic similarity.
+| Data type | Example from hotels-sample-index |
+|--|-|
+| Edm.String | HotelName, Category, Description |
+| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
+| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
+If you happen to include an invalid field, there's no error, but those fields won't be used in semantic ranking.
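Putting the ordering guidance above together, a sketch of a "searchFields" list for the hotels-sample-index might place the concise HotelName field first and the content-rich Description field last (the exact ordering for your own index depends on which fields are title-like and which are content-rich):

```json
"searchFields": "HotelName,Category,Description",
```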
-#### Step 3: Remove or bracket query features that bypass relevance scoring
+## 3 - Avoid features that bypass relevance scoring
-Several query capabilities in Cognitive Search do not undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you will not get relevance scores or semantic ranking on your results:
+Several query capabilities in Cognitive Search don't undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you won't get relevance scores or semantic ranking on your results:
+ Filters, fuzzy search queries, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content. Search scores for all of the above query forms are a uniform 1.0, and won't provide meaningful input for semantic ranking.

+ Sorting (orderBy clauses) on specific fields will also override search scores and the semantic score. Given that the semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned.
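To illustrate the sorting restriction, a request like the following sketch (field names are borrowed from the hotels-sample-index, and Rating is assumed to be sortable) combines "queryType": "semantic" with an "orderby" clause, and would therefore return HTTP 400 instead of semantically ranked results:

```json
{
  "search": "newer hotel near the water",
  "queryType": "semantic",
  "queryLanguage": "en-us",
  "orderby": "Rating desc"
}
```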
-#### Step 4: Add answers and captions
-
-Optionally, add "answers" and "captions" if you want to include additional processing that provides an answer and captions. For details about this parameter, see [How to specify semantic answers](semantic-answers.md).
-
-```json
-"answers": "extractive|count-3",
-"captions": "extractive|highlight-true",
-```
-
-Answers (and captions) are extracted from passages found in fields listed in the semantic configuration. This is why you want to include content-rich fields in the prioritizedContentFields of a semantic configuration, so that you can get the best answers and captions in a response. Answers are not guaranteed on every request. To get an answer, the query must look like a question and the content must include text that looks like an answer.
+## 4 - Set up the query
-#### Step 5: Add other parameters
+Your next step is adding parameters to the query request. To be successful, your query should be full text search (using the "search" parameter to pass in a string), and the index should contain text fields with rich semantic content.
-Set any other parameters that you want in the request. Parameters such as [speller](speller-how-to-add.md), [select](search-query-odata-select.md), and count improve the quality of the request and readability of the response.
+### [**Azure portal**](#tab/portal-query)
-```json
-"speller": "lexicon",
-"select": "HotelId,HotelName,Description,Category",
-"count": true,
-"highlightPreTag": "<mark>",
-"highlightPostTag": "</mark>",
-```
+[Search explorer](search-explorer.md) has been updated to include options for semantic queries. To configure semantic ranking in the portal, follow the steps below:
-Highlight styling is applied to captions in the response. You can use the default style, or optionally customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
-
-### [**searchFields**](#tab/searchFields)
-
-#### Step 1: Set queryType and queryLanguage
-
-Add the following parameters to the rest. Both parameters are required.
-
-```json
-"queryType": "semantic",
-"queryLanguage": "en-us",
-```
-
-The queryLanguage must be a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage) and it must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then queryLanguage should also be French language variant.
-
-In a query request, if you are also using [spell correction](speller-how-to-add.md), the queryLanguage you set applies equally to speller, answers, and captions. There is no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you are using that feature, you must set queryLanguage to one from that list.
-
-While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
+1. Open the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search).
-<a name="searchfields"></a>
+1. Select **Search explorer** at the top of the overview page.
-#### Step 2: Set searchFields
+1. Choose an index that has content in a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-Add searchFields to the request. It's optional but strongly recommended.
+1. In Search explorer, set query options that enable semantic queries, semantic configurations, and spell correction. You can also paste the required query parameters into the query string.
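As a hedged example, the pasted query parameters might look like the following query string (the configuration name `my-semantic-config` is illustrative):

```http
queryType=semantic&queryLanguage=en-us&semanticConfiguration=my-semantic-config&search=newer hotel near the water
```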
-```json
-"searchFields": "HotelName,Category,Description",
-```
-The searchFields parameter is used to identify passages to be evaluated for "semantic similarity" to the query. For the preview, we do not recommend leaving searchFields blank as the model requires a hint as to which fields are the most important to process.
+### [**REST API**](#tab/rest-query)
-In contrast with other parameters, searchFields is not new. You might already be using searchFields in existing code for simple or full Lucene queries. If so, revisit how the parameter is used so that you can check for field order when switching to a semantic query type.
+Use the [Search Documents (REST preview)](/rest/api/searchservice/preview-api/search-documents) API to formulate the request.
-##### Allowed data types
+A response automatically includes an "@search.rerankerScore". If you want captions, spelling correction, or answers in the response, add "captions", "speller", or "answers" to the request.
-When setting searchFields, choose only fields of the following [supported data types](/rest/api/searchservice/supported-data-types). If you happen to include an invalid field, there is no error, but those fields won't be used in semantic ranking.
+The following example uses the [hotels-sample-index](search-get-started-portal.md) to demonstrate semantic ranking with spell check, semantic answers, and captions.
-| Data type | Example from hotels-sample-index |
-|--|-|
-| Edm.String | HotelName, Category, Description |
-| Edm.ComplexType | Address.StreetNumber, Address.City, Address.StateProvince, Address.PostalCode |
-| Collection(Edm.String) | Tags (a comma-delimited list of strings) |
+1. Paste the following request into a web client as a template. Replace the service name and index name with valid values.
-##### Order of fields in searchFields
+ ```http
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2021-04-30-Preview
+ {
+ "queryType": "semantic",
+ "queryLanguage": "en-us",
+ "search": "newer hotel near the water with a great restaurant",
+ "semanticConfiguration": "my-semantic-config",
+ "searchFields": "",
+ "speller": "lexicon",
+ "answers": "extractive|count-3",
+ "captions": "extractive|highlight-true",
+ "highlightPreTag": "<strong>",
+ "highlightPostTag": "</strong>",
+ "select": "HotelId,HotelName,Description,Category",
+ "count": true
+ }
+ ```
-Field order is critical because the semantic ranker limits the amount of content it can process while still delivering a reasonable response time. Content from fields at the start of the list are more likely to be included; content from the end could be truncated if the maximum limit is reached. For more information, see [Pre-processing during semantic ranking](semantic-ranking.md#pre-processing).
+1. Set "queryType" to "semantic".
-+ If you're specifying just one field, choose a descriptive field where the answer to semantic queries might be found, such as the main content of a document.
+ In other queries, the "queryType" is used to specify the query parser. In semantic search, it's set to "semantic". For the "search" field, you can specify queries that conform to the [simple syntax](query-simple-syntax.md).
-+ For two or more fields in searchFields:
+1. Set "queryLanguage" to a [supported language](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
- + The first field should always be concise (such as a title or name), ideally a string that is under 25 words.
+ The "queryLanguage" must be consistent with any [language analyzers](index-add-language-analyzers.md) assigned to field definitions in the index schema. For example, if you indexed French strings using a French language analyzer (such as "fr.microsoft" or "fr.lucene"), then "queryLanguage" should also be a French language variant.
- + If the index has a URL field that is human readable such as `www.domain.com/name-of-the-document-and-other-details`, (rather than machine focused, such as `www.domain.com/?id=23463&param=eis`), place it second in the list (or first if there is no concise title field).
+ In a query request, if you're also using [spell correction](speller-how-to-add.md), the "queryLanguage" you set applies equally to speller, answers, and captions. There's no override for individual parts. Spell check supports [fewer languages](speller-how-to-add.md#supported-languages), so if you're using that feature, you must set queryLanguage to one from that list.
- + Follow the above fields with other descriptive fields, where the answer to semantic queries may be found, such as the main content of a document.
+ While content in a search index can be composed in multiple languages, the query input is most likely in one. The search engine doesn't check for compatibility of queryLanguage, language analyzer, and the language in which content is composed, so be sure to scope queries accordingly to avoid producing incorrect results.
-#### Step 3: Remove or bracket query features that bypass relevance scoring
+1. Set "search" to a full text search query based on the [simple syntax](query-simple-syntax.md). Semantic search is an extension of full text search, so while this parameter isn't required, you won't get an expected outcome if it's null.
-Several query capabilities in Cognitive Search do not undergo relevance scoring, and some bypass the full text search engine altogether. If your query logic includes the following features, you will not get graduated relevance scores that feed into the semantic re-ranking of results:
+1. Set "semanticConfiguration" to a [predefined semantic configuration](#2---create-a-semantic-configuration) that's embedded in your index, assuming your client supports it. For some clients and API versions, "semanticConfiguration" is required and important for getting the best results from semantic search.
-+ Empty search (`search=*`), wildcard search, fuzzy search, and regular expressions iterate over untokenized text, scanning for verbatim matches in the content, returning an un-scored result set. An un-scored result set assigns a uniform 1.0 on each match, and won't provide meaningful input for semantic ranking. Up to 50 documents will still be passed to the re-ranker, but the document selection is arbitrary.
+1. Set "searchFields" to a prioritized list of searchable string fields. If you didn't use a semantic configuration, this field provides important hints to the underlying models as to which fields are the most important. If you do have a semantic configuration, setting this parameter is still useful because it scopes the query to high-value fields.
-+ Sorting (orderBy clauses) on specific fields will also override search scores and semantic score. Given that semantic score is used to order results, including explicit sort logic will cause an HTTP 400 error to be returned.
+ In contrast with other parameters, searchFields isn't new. You might already be using "searchFields" in existing code for simple or full Lucene queries. If so, revisit how the parameter is used so that you can check for field order when switching to a semantic query type.
-#### Step 4: Add answers
+1. Set "speller" to correct misspelled terms before they reach the search engine. This parameter is optional and not specific to semantic queries. For more information, see [Add spell correction to queries](speller-how-to-add.md).
-Optionally, add "answers" if you want to include additional processing that provides an answer. For details about this parameter, see [How to specify semantic answers](semantic-answers.md).
+1. Set "answers" to specify whether [semantic answers](semantic-answers.md) are included in the result. Currently, the only valid value for this parameter is "extractive". Answers can be configured to return a maximum of 10. The default is one. This example shows a count of three answers: `extractive|count-3`.
-```json
-"answers": "extractive|count-3",
-```
+ Answers are extracted from passages found in fields listed in the semantic configuration. This behavior is why you want to include content-rich fields in the prioritizedContentFields of a semantic configuration, so that you can get the best answers and captions in a response. Answers aren't guaranteed on every request. To get an answer, the query must look like a question and the content must include text that looks like an answer.
-Answers (and captions) are extracted from passages found in fields listed in searchFields. This is why you want to include content-rich fields in searchFields, so that you can get the best answers in a response. Answers are not guaranteed on every request. The query must look like a question, and the content must include text that looks like an answer.
+1. Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2b---use-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't.
-#### Step 5: Add other parameters
+ Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`.
-Set any other parameters that you want in the request. Parameters such as [speller](speller-how-to-add.md), [select](search-query-odata-select.md), and count improve the quality of the request and readability of the response.
+1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions.
-```json
-"speller": "lexicon",
-"select": "HotelId,HotelName,Description,Category",
-"count": true,
-"highlightPreTag": "<mark>",
-"highlightPostTag": "</mark>",
-```
+ Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
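For example, the yellow-background formatting mentioned above could be approximated with inline-styled span tags, as in this illustrative fragment:

```json
"highlightPreTag": "<span style=\"background-color: yellow\">",
"highlightPostTag": "</span>",
```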
-Highlight styling is applied to captions in the response. You can use the default style, or optionally customize the highlight style applied to captions. Captions apply highlight formatting over key passages in the document that summarize the response. The default is `<em>`. If you want to specify the type of formatting (for example, yellow background), you can set the highlightPreTag and highlightPostTag.
+1. Set ["select"](search-query-odata-select.md) to specify which fields are returned in the response, and "count" to return the number of matches in the index. These parameters improve the quality of the request and readability of the response.
-
+1. Send the request to execute the query and return results.
-## Query using Azure SDKs
+### [**.NET SDK**](#tab/dotnet-query)
-Beta versions of the Azure SDKs include support for semantic search. Because the SDKs are beta versions, there is no documentation or samples, but you can refer to the REST API section above for insights on how the APIs should work.
+Beta versions of the Azure SDKs include support for semantic search. Because the SDKs are beta versions, there's no documentation or samples, but you can refer to the REST API section above for insights on how the APIs should work.
-### [**Semantic Configuration (recommended)**](#tab/semanticConfiguration)
+The following beta versions support semantic configuration:
| Azure SDK | Package |
|--|--|
| JavaScript | [azure/search-documents 11.3.0-beta.5](https://www.npmjs.com/package/@azure/search-documents/v/11.3.0-beta.5) |
| Python | [azure-search-documents 11.3.0b6](https://pypi.org/project/azure-search-documents/11.3.0b6/) |
-### [**searchFields**](#tab/searchFields)
+These beta versions use "searchFields" for field prioritization:
| Azure SDK | Package |
|--|--|
-## Evaluate the response
+## 5 - Evaluate the response
-As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. It includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
+Only the top 50 matches from the initial results can be semantically ranked. As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. A response includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
-In a semantic query, the response has additional elements: a new semantically ranked relevance score, captions in plain text and with highlights, and optionally an answer.
+In semantic search, the response has more elements: a new semantically ranked relevance score, an optional caption in plain text and with highlights, and an optional [answer](semantic-answers.md). If your results don't include these extra elements, then your query might be misconfigured. As a first step towards troubleshooting the problem, check the semantic configuration to ensure it's specified in both the index definition and query.
-In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This is useful when individual fields are too dense for the search results page.
+In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This approach is useful when individual fields are too dense for the search results page.
-The response for the above example query returns the following match as the top pick. Captions are returned automatically, with plain text and highlighted versions. Answers are omitted from the example because one could not be determined for this particular query and corpus.
+The response for the above example query returns the following match as the top pick. Captions are returned because the "captions" property is set, with plain text and highlighted versions. Answers are omitted from the example because one couldn't be determined for this particular query and corpus.
```json
"@odata.count": 35,
search Semantic Ranking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md
Before scoring for relevance, content must be reduced to a manageable number of
Whatever the document count, whether one or 50, the initial result set establishes the first iteration of the document corpus for semantic ranking.
-1. Next, across the corpus, the contents of each field in the [semantic configuration](semantic-how-to-query-request.md#create-a-semantic-configuration) are extracted and combined into a long string.
+1. Next, across the corpus, the contents of each field in the [semantic configuration](semantic-how-to-query-request.md#2---create-a-semantic-configuration) are extracted and combined into a long string.
1. After string consolidation, any strings that are excessively long are trimmed to ensure the overall length meets the input requirements of the summarization step.
A [semantic answer](semantic-answers.md) will also be returned if you specified
## Next steps
-Semantic ranking is offered on Standard tiers, in specific regions. For more information about availability and sign up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the ranking and response structures of semantic search. To get started, [Create a semantic query](semantic-how-to-query-request.md).
+Semantic ranking is offered on Standard tiers, in specific regions. For more information about availability and sign-up, see [Availability and pricing](semantic-search-overview.md#availability-and-pricing). A new query type enables the ranking and response structures of semantic search. To get started, [Configure semantic ranking](semantic-how-to-query-request.md).
Alternatively, review the following articles about default ranking. Semantic ranking depends on the similarity ranker to return the initial results. Knowing about query execution and ranking will give you a broad understanding of how the entire process works.
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
Semantic search is a collection of features that improve the quality of search r
||-|
| [Semantic re-ranking](semantic-ranking.md) | Uses the context or semantic meaning of a query to compute a new relevance score over existing results. |
| [Semantic captions and highlights](semantic-how-to-query-request.md) | Extracts sentences and phrases from a document that best summarize the content, with highlights over key passages for easy scanning. Captions that summarize a result are useful when individual content fields are too dense for the results page. Highlighted text elevates the most relevant terms and phrases so that users can quickly determine why a match was considered relevant. |
-| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document have text with the characteristics of an answer. |
+| [Semantic answers](semantic-answers.md) | An optional and additional substructure returned from a semantic query. It provides a direct answer to a query that looks like a question. It requires that a document has text with the characteristics of an answer. |
## How semantic ranking works
To re-enable semantic search, rerun the above request, setting "semanticSearch"
## Next steps
-[Enable semantic search](#enable-semantic-search) for your search service and follow the documentation on how to [create a semantic query](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
+[Enable semantic search](#enable-semantic-search) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content.
search Speller How To Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/speller-how-to-add.md
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/
## Spell correction with semantic search
-This query, with typos in every term except one, undergoes spelling corrections to return relevant results. To learn more, see [Create a semantic query](semantic-how-to-query-request.md).
+This query, with typos in every term except one, undergoes spelling corrections to return relevant results. To learn more, see [Configure semantic ranking](semantic-how-to-query-request.md).
```http
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/search?api-version=2020-06-30-Preview
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Previously updated : 08/30/2022 Last updated : 11/01/2022 ms.devlang: csharp
ms.devlang: csharp
# 2 - Create and load Search Index with .NET

Continue to build your Search-enabled website by:
-* Creating a Search resource with the VS Code extension
-* Creating a new index and importing data with .NET using the sample script and Azure SDK [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/).
+* Create a Search resource with the VS Code extension
+* Create a new index
+* Import data with .NET using the sample script and Azure SDK [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/).
## Create an Azure Search resource
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer bar, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer bar, right-click on your Search resource and select Copy Admin Key.":::
-1. Keep this admin key, you will need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
+1. Keep this admin key. You'll need it in [a later section](#prepare-the-bulk-import-script-for-search).
## Prepare the bulk import script for Search
The script uses the Azure SDK for Cognitive Search:
* [NuGet package Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents/)
* [Reference Documentation](/dotnet/api/overview/azure/search)
-1. In Visual Studio Code, open the `Program.cs` file in the subdirectory, `search-website/bulk-insert`, replace the following variables with your own values to authenticate with the Azure Search SDK:
+1. In Visual Studio Code, open the `Program.cs` file in the subdirectory, `search-website-functions-v4/bulk-insert`, and replace the following variables with your own values to authenticate with the Azure Search SDK:
* YOUR-SEARCH-RESOURCE-NAME
* YOUR-SEARCH-ADMIN-KEY
- :::code language="csharp" source="~/azure-search-dotnet-samples/search-website/bulk-insert/Program.cs" highlight="16-19" :::
+ :::code language="csharp" source="~/azure-search-dotnet-samples/search-website-functions-v4/bulk-insert/Program.cs" highlight="16-19, 21-23, 32, 49" :::
-1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website/bulk-insert`, then run the following command to install the dependencies.
+1. Open an integrated terminal in Visual Studio Code for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, then run the following command to install the dependencies.
```bash
dotnet restore
```
The script uses the Azure SDK for Cognitive Search:
## Run the bulk import script for Search
-1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-insert`, to run the following bash command to run the `Program.cs` script:
+1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, to run the following bash command to run the `Program.cs` script:
```bash
dotnet run
```
The script uses the Azure SDK for Cognitive Search:
## Review the new Search Index
-Once the upload completes, the Search Index is ready to use. Review your new Index.
-
-1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
-
-1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
-
-## Copy your Search resource name
-
-Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
-
-> [!CAUTION]
-> While you may be tempted to use your Search admin key in the Azure Function, that isn't following the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
## Rollback bulk import file changes
-Use the following git command in the VS Code integrated terminal at the `bulk-insert` directory, to rollback the changes. They are not needed to continue the tutorial and you shouldn't save or push these secrets to your repo.
-```git
-git checkout .
-```
+## Copy your Search resource name
## Next steps
search Tutorial Csharp Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-deploy-static-web-app.md
Previously updated : 08/30/2022 Last updated : 11/01/2022 ms.devlang: csharp
# 3 - Deploy the search-enabled .NET website
-Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
-
-The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
-
-## Create a Static Web App in Visual Studio Code
-
-1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
-
-1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
-
-1. If you see a pop-up window in VS Code asking which branch you want to deploy from, select the default branch, usually **master** or **main**.
-
- This setting means only changes you commit to that branch are deployed to your static web app.
-
-1. If you see a pop-up window asking you to commit your changes, do not do this. The secrets from the bulk import step should not be committed to the repository.
-
- To rollback the changes, in VS Code select the Source Control icon in the Activity bar, then select each changed file in the Changes list and select the **Discard changes** icon.
-
-1. Follow the prompts to provide the following information:
-
- |Prompt|Enter|
- |--|--|
- |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name such as, `joansmith-azure-search-dotnet-samples`. |
- |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
- |Select a SKU| Select the free SKU for this tutorial.|
- |Choose build preset to configure default project structure.|Select **Custom**|
- |Select the location of your application code|`search-website`<br><br>This is the path, from the root of the repository, to your Azure Static web app. |
- |Select the location of your Azure Function code|`search-website/api`<br><br>This is the path, from the root of the repository, to your Azure Function app. |
- |Enter the path of your build output...|`build`<br><br>This is the path, from your Azure Static web app, to your generated files.|
- |Select a location for new resources.|Select a region close to you.|
-
-1. The resource is created, select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo.
-
- The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
-
- Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
-
-## Get Cognitive Search query key in Visual Studio Code
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
-
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
-
-1. Keep this query key, you will need to use it in the next section. The query key is able to query your Index.
-
-## Add configuration settings in Azure portal
-
-The Azure Function app won't return Search data until the Search secrets are in settings.
-
-1. Select **Azure** from the Activity Bar.
-1. Right-click on your Static web app resource then select **Open in Portal**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Right-click on your JavaScript Static web app resource then select Open in Portal.":::
-
-1. Select **Configuration** then select **+ Add**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Configuration then select Add for your JavaScript app.":::
-
-1. Add each of the following settings:
-
- |Setting|Your Search resource value|
- |--|--|
- |SearchApiKey|Your Search query key|
- |SearchServiceName|Your Search resource name|
- |SearchIndexName|`good-books`|
- |SearchFacets|`authors*,language_code`|
-
- Azure Cognitive Search requires different syntax for filtering collections than it does for strings. Add a `*` after a field name to denote that the field is of type `Collection(Edm.String)`. This allows the Azure Function to add filters correctly to queries.
-
-1. Select **Save** to save the settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Save to save the settings for your JavaScript app..":::
-
-1. Return to VS Code.
-1. Refresh your Static web app to see the Static web app's application settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/visual-studio-code-extension-fresh-resource.png" alt-text="Refresh your JavaScript Static web app to see the Static web app's application settings.":::
-
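The collection-filter convention noted in the configuration steps (a `*` suffix on a `SearchFacets` field name marks it as `Collection(Edm.String)`) determines which OData filter syntax applies. A small Python helper illustrates the two filter shapes; this is a sketch, not code from the sample:

```python
def build_filter(field: str, value: str) -> str:
    """Build an OData filter expression for Azure Cognitive Search.

    A trailing '*' on the field name marks a Collection(Edm.String) field,
    which requires any() syntax instead of a plain equality comparison.
    (Illustrative helper only; not part of the sample repository.)
    """
    value = value.replace("'", "''")  # escape single quotes for OData
    if field.endswith("*"):
        name = field.rstrip("*")
        return f"{name}/any(t: t eq '{value}')"
    return f"{field} eq '{value}'"

# The tutorial's facet setting "authors*,language_code" yields:
print(build_filter("authors*", "Jane Austen"))  # authors/any(t: t eq 'Jane Austen')
print(build_filter("language_code", "en"))      # language_code eq 'en'
```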
-## Use search in your Static web app
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Static web apps` area and find the Static web app you created for this tutorial.
-1. Right-click the Static Web App name and select **Browse site**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
-
-1. Select **Open** in the pop-up dialog.
-1. In the website search bar, enter a search query such as `code`, _slowly_ so the suggest feature suggests book titles. Select a suggestion or continue entering your own query. Press enter when you've completed your search query.
-1. Review the results then select one of the books to see more details.
-
-## Clean up resources
-
-To clean up the resources created in this tutorial, delete the resource group.
-
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
-1. Right-click the resource group name then select **Delete**.
- This deletes both the Search and Static web app resources.
-1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
- ## Next steps
search Tutorial Csharp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md
Previously updated : 08/30/2022 Last updated : 11/01/2022 ms.devlang: csharp
ms.devlang: csharp
This tutorial builds a website to search through a catalog of books, then deploys the website to an Azure Static Web App. The application is available:
-* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website)
+* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)
## What does the sample do?
-
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the search index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all the details, stored in the search index, of the book.
--
-The search experience includes:
-
-* Search – provides search functionality for the application.
-* Suggest – provides suggestions as the user is typing in the search bar.
-* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
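The search, suggest, and document-lookup operations described here map to REST calls against the search index. The following Python sketch shows only the request shapes; the service name is a placeholder, while the `good-books` index, the `sg` suggester, and the preview API version come from elsewhere in these tutorials. The sample itself uses the Azure SDKs rather than raw REST calls.

```python
# Placeholder: replace with your Search resource name before use.
SERVICE = "YOUR-SEARCH-RESOURCE-NAME"
INDEX = "good-books"                 # index name used by the tutorial
API_VERSION = "2020-06-30-Preview"   # version shown in the REST examples
DOCS = f"https://{SERVICE}.search.windows.net/indexes/{INDEX}/docs"

# Search: full-text query across the indexed books
search_url = f"{DOCS}/search?api-version={API_VERSION}"
search_body = {"search": "code", "top": 10}

# Suggest: as-you-type completions via the 'sg' suggester defined in the schema
suggest_url = f"{DOCS}/suggest?api-version={API_VERSION}"
suggest_body = {"search": "cod", "suggesterName": "sg", "top": 5}

# Document lookup: fetch a single document by its key (hypothetical key '42')
lookup_url = f"{DOCS}('42')?api-version={API_VERSION}"
```

Each URL would be sent with an `api-key` header holding a query key (not the admin key).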
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4) includes the following:
|App|Purpose|GitHub<br>Repository<br>Location|
|--|--|--|
-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/src)|
-|Server|Azure .NET Function app (business layer) - calls the Azure Cognitive Search API using .NET SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/api)|
-|Bulk insert|.NET file to create the index and add documents to it.|[/search-website/bulk-insert](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website/bulk-insert)|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/client)|
+|Server|Azure .NET Function app (business layer) - calls the Azure Cognitive Search API using .NET SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/api)|
+|Bulk insert|.NET file to create the index and add documents to it.|[/search-website-functions-v4/bulk-insert](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4/bulk-insert)|
## Set up your development environment
Install the following for your local development environment.
-- [.NET 5](https://dotnet.microsoft.com/download/dotnet/5.0)
+- [.NET 6](https://dotnet.microsoft.com/download/dotnet/6.0)
- [Git](https://git-scm.com/downloads)
- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
- [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
- [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps)
- Optional:
Forking the sample repository is critical to be able to deploy the Static Web Ap
Complete the fork process in your web browser with your GitHub account. This tutorial uses your fork as part of the deployment to an Azure Static Web App.
-1. At a bash terminal, download the sample application to your local computer.
+1. At a Bash terminal, download your forked sample application to your local computer.
Replace `YOUR-GITHUB-ALIAS` with your GitHub alias.
Forking the sample repository is critical to be able to deploy the Static Web Ap
git clone https://github.com/YOUR-GITHUB-ALIAS/azure-search-dotnet-samples ```
-1. In Visual Studio Code, open your local folder of the cloned repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
+1. At the same Bash terminal, go into your forked repository for this website search example:
-## Create a resource group for your Azure resources
+ ```bash
+ cd azure-search-dotnet-samples
+ ```
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In Resources, select Add (**+**), and then select **Create Resource Group**.
+1. Use the Visual Studio Code command `code .` to open your forked repository. The remaining tasks are accomplished from Visual Studio Code, unless specified.
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
-1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
-1. Select a location close to you.
-1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
+ ```bash
+ code .
+ ```
+
+## Create a resource group for your Azure resources
- Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
## Next steps
search Tutorial Csharp Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-search-query-integration.md
Previously updated : 04/23/2021 Last updated : 11/01/2022 ms.devlang: csharp
# 4 - .NET Search integration cheat sheet
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know.
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your web app, this article explains what you need to know.
The application is available:
-* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website)
+* [Sample](https://github.com/azure-samples/azure-search-dotnet-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)
## Azure SDK Azure.Search.Documents
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
## Configure secrets in a local.settings.json file
-1. Create a new file named `local.settings.json` at `./api/` and copy the following JSON object into the file.
-
- ```json
- {
- "IsEncrypted": false,
- "Values": {
- "AzureWebJobsStorage": "",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "SearchApiKey": "YOUR_SEARCH_QUERY_KEY",
- "SearchServiceName": "YOUR_SEARCH_RESOURCE_NAME",
- "SearchIndexName": "good-books"
- }
- }
- ```
-
-1. Change the following for your own Search resource values:
- * YOUR_SEARCH_RESOURCE_NAME
- * YOUR_SEARCH_QUERY_KEY
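Before running the Function app locally, it can help to confirm that the placeholder values in `local.settings.json` were actually replaced. This is a hypothetical Python check; the setting names and placeholder prefixes come from the file shown here, but the helper itself is not part of the sample.

```python
import json

# Settings the Function app reads; names taken from local.settings.json above.
REQUIRED = ["SearchApiKey", "SearchServiceName", "SearchIndexName"]

def unresolved_settings(text: str) -> list:
    """Return required settings that are missing or still placeholder values."""
    values = json.loads(text).get("Values", {})
    return [k for k in REQUIRED
            if not values.get(k) or values[k].startswith("YOUR_")]

sample = """{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "SearchApiKey": "YOUR_SEARCH_QUERY_KEY",
    "SearchServiceName": "YOUR_SEARCH_RESOURCE_NAME",
    "SearchIndexName": "good-books"
  }
}"""

print(unresolved_settings(sample))  # ['SearchApiKey', 'SearchServiceName']
```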
## Azure Function: Search the catalog
-The `Search` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Search.cs) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The `Search` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Search.cs) takes a search term and searches across the documents in the Search Index, returning a list of matches.
The Azure Function pulls in the Search configuration information, and fulfills the query.
## Client: Search from the catalog
Call the Azure Function in the React client with the following code.
## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Suggest.cs) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
-The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/bulk-insert/BookSearchIndex.cs) used during bulk upload.
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/bulk-insert/BookSearchIndex.cs) used during bulk upload.
## Client: Suggestions from the catalog
-The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `\client\src\components\SearchBar\SearchBar.js` as part of component initialization:
## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website/api/Lookup.cs) takes a ID and returns the document object from the Search Index.
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/search-website-functions-v4/api/Lookup.cs) takes an ID and returns the document object from the Search Index.
## Client: Get specific document
-This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization:
+This function API is called in the React app at `\client\src\pages\Details\Detail.js` as part of component initialization:
## C# models to support function app
The following models are used to support the functions in this app.
## Next steps
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio Code showing Azure explorer, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio Code showing Azure explorer, right-click on your Search resource and select Copy Admin Key.":::
1. Keep this admin key, you'll need to use it in [a later section](#prepare-the-bulk-import-script-for-search).
The script uses the Azure SDK for Cognitive Search:
* YOUR-SEARCH-RESOURCE-NAME
* YOUR-SEARCH-ADMIN-KEY
- :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" highlight="16,17" :::
+ :::code language="javascript" source="~/azure-search-javascript-samples/search-website-functions-v4/bulk-insert/bulk_insert_books.js" highlight="14,16,17,27-38,83,92,119" :::
1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-insert`, and run the following command to install the dependencies.
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Python Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-create-load-index.md
Previously updated : 11/17/2021 Last updated : 11/02/2022 ms.devlang: python
ms.devlang: python
# 2 - Create and load Search Index with Python
Continue to build your Search-enabled website by:
-* Creating a Search resource with the VS Code extension
-* Creating a new index and importing data with Python using the sample script and Azure SDK [azure-search-documents](https://pypi.org/project/azure-search-documents/).
+* Create a Search resource with the VS Code extension
+* Create a new index
+* Import data with Python using the [sample script](https://github.com/Azure-Samples/azure-search-python-samples/blob/main/search-website-functions-v4/bulk-upload/bulk-upload.py) and Azure SDK [azure-search-documents](https://pypi.org/project/azure-search-documents/).
## Create an Azure Cognitive Search resource
Create a new Search resource with the [Azure Cognitive Search](https://marketpla
1. In the Side bar, **right-click on your Azure subscription** under the `Azure: Cognitive Search` area and select **Create new search service**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="In the Side bar, right-click on your Azure subscription under the **Azure: Cognitive Search** area and select **Create new search service**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-create-search-resource.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer, right-click on your Azure subscription under the Azure: Cognitive Search area and select Create new search service.":::
1. Follow the prompts to provide the following information:
Get your Search resource admin key with the Visual Studio Code extension.
1. In Visual Studio Code, in the Side bar, right-click on your Search resource and select **Copy Admin Key**.
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="In the Side bar, right-click on your Search resource and select **Copy Admin Key**.":::
+ :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-admin-key.png" alt-text="Screenshot of Visual Studio Code showing the Azure explorer, right-click on your Search resource and select Copy Admin Key.":::
-1. Keep this admin key, you will need to use it to create objects in [a later section](#prepare-the-bulk-import-script-for-search).
+1. Keep this admin key. You'll need it to create objects in [a later section](#prepare-the-bulk-import-script-for-search).
## Prepare the bulk import script for Search
The script uses the Azure SDK for Cognitive Search:
* [PYPI package azure-search-documents](https://pypi.org/project/azure-search-documents/)
* [Reference Documentation](/python/api/azure-search-documents)
-1. In Visual Studio Code, open the `bulk_upload.py` file in the subdirectory, `search-website/bulk-upload`, replace the following variables with your own values to authenticate with the Azure Search SDK:
+1. In Visual Studio Code, open the `bulk_upload.py` file in the subdirectory, `search-website-functions-v4/bulk-upload`, and replace the following variables with your own values to authenticate with the Azure Search SDK:
* YOUR-SEARCH-SERVICE-NAME
* YOUR-SEARCH-SERVICE-ADMIN-API-KEY
- :::code language="python" source="~/azure-search-python-samples/search-website/bulk-upload/bulk-upload.py" highlight="20,21,69,83,135" :::
+ :::code language="python" source="~/azure-search-python-samples/search-website-functions-v4/bulk-upload/bulk-upload.py" highlight="20-22,46-48,53-54,75-80,83,69,83,135,142" :::
-1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-upload`, and run the following command to install the dependencies.
+1. Open an integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, and run the following command to install the dependencies.
# [macOS/Linux](#tab/linux-install)
The script uses the Azure SDK for Cognitive Search:
## Run the bulk import script for Search
-1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website/bulk-upload`, to run the following bash command to run the `bulk_upload.py` script:
+1. Continue using the integrated terminal in Visual Studio for the project directory's subdirectory, `search-website-functions-v4/bulk-upload`, to run the following bash command to run the `bulk_upload.py` script:
# [macOS/Linux](#tab/linux-run)
The script uses the Azure SDK for Cognitive Search:
1. As the code runs, the console displays progress.
-1. When the upload is complete, the last statement printed to the console is "Done. Press any key to close the terminal.".
+1. When the upload is complete, the last statement printed to the console is "Done! Upload complete".
## Review the new Search Index
-Once the upload completes, the search index is ready to use. Review your new index.
-1. In Visual Studio Code, open the Azure Cognitive Search extension and select your Search resource.
+## Rollback bulk import file changes
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-resource.png" alt-text="In Visual Studio Code, open the Azure Cognitive Search extension and open your Search resource.":::
-
-1. Expand Indexes, then Documents, then `good-books`, then select a doc to see all the document-specific data.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" lightbox="media/tutorial-javascript-create-load-index/visual-studio-code-search-extension-view-docs.png" alt-text="Expand Indexes, then `good-books`, then select a doc.":::
## Copy your Search resource name
-Note your **Search resource name**. You will need this to connect the Azure Function app to your Search resource.
-
-> [!CAUTION]
-> While you may be tempted to use your Search admin key in the Azure Function, that isn't following the principle of least privilege. The Azure Function will use the query key to conform to least privilege.
## Next steps
search Tutorial Python Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-deploy-static-web-app.md
Previously updated : 08/30/2022 Last updated : 11/02/2022 ms.devlang: python

# 3 - Deploy the search-enabled Python website
-Deploy the search-enabled website as an Azure Static web app. This deployment includes both the React app and the Function app.
-
-The Static Web app pulls the information and files for deployment from GitHub using your fork of the samples repository.
-
-## Create a Static Web App in Visual Studio Code
-
-1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
-
-1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
-
-1. Follow the 8 prompts to provide the following information:
-
- |Prompt|Enter|
- |--|--|
- |Enter the name for the new Static Web App.|Create a unique name for your resource. For example, you can prepend your name to the repository name such as, `joansmith-azure-search-javascript-samples`. |
- |Select a resource group for new resources.|Use the resource group you created for this tutorial.|
- |Select a SKU| Select the free SKU for this tutorial.|
- |Choose build preset to configure default project structure.|Select **Custom**|
- |Select the location of your application code|`search-website`<br><br>This is the path, from the root of the repository, to your Azure Static web app. |
- |Select the location of your Azure Function code|`search-website/api`<br><br>This is the path, from the root of the repository, to your Azure Function app. |
- |Enter the path of your build output...|`build`<br><br>This is the path, from your Azure Static web app, to your generated files.|
- |Select a location for new resources.|Select a region close to you.|
-
-1. The resource is created, select **Open Actions in GitHub** from the Notifications. This opens a browser window pointed to your forked repo.
-
- The list of actions indicates your web app, both client and functions, were successfully pushed to your Azure Static Web App.
-
- Wait until the build and deployment complete before continuing. This may take a minute or two to finish.
-
-## Get Cognitive Search query key in VS Code
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.
-
- :::image type="content" source="./media/tutorial-javascript-create-load-index/visual-studio-code-copy-query-key.png" alt-text="In the Side bar, select your Azure subscription under the **Azure: Cognitive Search** area, then right-click on your Search resource and select **Copy Query Key**.":::
-
-1. Keep this query key, you will need to use it in the next section. The query key is able to query your index.
-
-## Add configuration settings in Azure portal
-
-The Azure Function app won't return search data until the search secrets are in settings.
-
-1. Select **Azure** from the Activity Bar.
-1. Right-click on your Static web app resource then select **Open in Portal**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/open-static-web-app-in-azure-portal.png" alt-text="Right-click on your Python Static web app resource then select Open in Portal.":::
-
-1. Select **Configuration** then select **+ Add**.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/add-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Configuration then select Add for your Python app.":::
-
-1. Add each of the following settings:
-
- |Setting|Your Search resource value|
- |--|--|
- |SearchApiKey|Your search query key|
- |SearchServiceName|Your search resource name|
- |SearchIndexName|`good-books`|
- |SearchFacets|`authors*,language_code`|
-
- Azure Cognitive Search requires different syntax for filtering collections than it does for strings. For the authors* facet, adding a * after a field name denotes that the field is of type Collection(Edm.String). This allows the Azure Function to add filters correctly to queries.
-
-1. Select **Save** to save the settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/save-new-application-setting-to-static-web-app-in-portal.png" alt-text="Select Save to save the settings.":::
-
-1. Return to VS Code.
-1. Refresh your static web app to see the static web app's application settings.
-
- :::image type="content" source="media/tutorial-javascript-static-web-app/visual-studio-code-extension-fresh-resource.png" alt-text="Refresh your Static web app to see the Static web app's application settings.":::
-
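The `SearchFacets` convention described above (a trailing `*` marking a `Collection(Edm.String)` field) determines how the Azure Function builds its OData filters: collection fields need the `any()` lambda syntax, while plain string fields use a direct `eq` comparison. Here is a hedged Python sketch of that distinction; the helper names are illustrative, not the sample's actual code:

```python
def parse_facets(setting):
    """Split a SearchFacets setting like 'authors*,language_code' into
    (field_name, is_collection) pairs; '*' marks Collection(Edm.String)."""
    facets = []
    for raw in setting.split(","):
        name = raw.strip()
        facets.append((name.rstrip("*"), name.endswith("*")))
    return facets

def facet_filter(name, value, is_collection):
    """Build the OData filter clause Azure Cognitive Search expects:
    collection fields need any(), plain string fields use eq."""
    if is_collection:
        return f"{name}/any(t: t eq '{value}')"
    return f"{name} eq '{value}'"

print(parse_facets("authors*,language_code"))
print(facet_filter("authors", "Jane Austen", True))
```

Without the `*` marker the function would emit `authors eq '...'`, which the service rejects for a collection field — which is why the setting encodes the field type.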
-## Use search in your Static web app
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side Bar, **right-click on your Azure subscription** under the `Static web apps` area and find the static web app you created for this tutorial.
-1. Right-click your static web app name and select **Browse site**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-browse-static-web-app.png" alt-text="Right-click the Static Web App name and select **Browse site**.":::
-
-1. Select **Open** in the pop-up dialog.
-1. In the website search bar, enter a search query such as `code`, _slowly_ so the suggest feature suggests book titles. Select a suggestion or continue entering your own query. Press enter when you've completed your search query.
-1. Review the results then select one of the books to see more details.
-
-## Clean up resources
-
-To clean up the resources created in this tutorial, delete the resource group.
-
-1. In VS Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and find the resource group you created for this tutorial.
-1. Right-click the resource group name then select **Delete**.
- This deletes both the Search and Static web app resources.
-1. If you no longer want the GitHub fork of the sample, remember to delete that on GitHub. Go to your fork's **Settings** then delete the fork.
- ## Next steps
search Tutorial Python Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-overview.md
Previously updated : 08/30/2022 Last updated : 11/02/2022 ms.devlang: python
This tutorial builds a website to search through a catalog of books, then deploys the website to an Azure Static Web App. The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)

## What does the sample do?
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses your search index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all the details, stored in the search index, of the book.
--
-The search experience includes:
-
-* Search – provides search functionality for the application.
-* Suggest – provides suggestions as the user is typing in the search bar.
-* Document Lookup – looks up a document by ID to retrieve all of its contents for the details page.
## How is the sample organized?
-The [sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website) includes the following:
+The [sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4) includes the following:
|App|Purpose|GitHub<br>Repository<br>Location|
|--|--|--|
-|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website/src](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/src)|
-|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using Python SDK |[/search-website/api](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/src)|
-|Bulk insert|Python file to create the index and add documents to it.|[/search-website/bulk-upload](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website/bulk-upload)|
+|Client|React app (presentation layer) to display books, with search. It calls the Azure Function app. |[/search-website-functions-v4/client](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/client)|
+|Server|Azure Function app (business layer) - calls the Azure Cognitive Search API using Python SDK |[/search-website-functions-v4/api](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/api)|
+|Bulk insert|Python file to create the index and add documents to it.|[/search-website-functions-v4/bulk-upload](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4/bulk-upload)|
## Set up your development environment
Install the following for your local development environment.
- [Python 3.9](https://www.python.org/downloads/)
- [Git](https://git-scm.com/downloads)
- [Visual Studio Code](https://code.visualstudio.com/) and the following extensions
- - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
- - [Azure Cognitive Search 0.2.0+](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+ - [Azure Cognitive Search](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurecognitivesearch)
+ - [Azure Static Web App](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestaticwebapps)
- Optional:
  - This tutorial doesn't run the Azure Function API locally but if you intend to run it locally, you need to install [azure-functions-core-tools](../azure-functions/functions-run-local.md?tabs=linux%2ccsharp%2cbash).
Forking the sample repository is critical to be able to deploy the static web ap
## Create a resource group for your Azure resources
-1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In Resources, select Add (**+**), and then select **Create Resource Group**.
-
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
-1. Enter a resource group name, such as `cognitive-search-website-tutorial`.
-1. Select a location close to you.
-1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
-
- Creating a resource group gives you a logical unit to manage the resources, including deleting them when you are finished using them.
## Next steps
search Tutorial Python Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-search-query-integration.md
Previously updated : 11/17/2021 Last updated : 11/02/2022 ms.devlang: python

# 4 - Python Search integration cheat sheet
-In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you are looking for a cheat sheet on how to integrate search into your Python app, this article explains what you need to know.
+In the previous lessons, you added search to a Static Web App. This lesson highlights the essential steps that establish integration. If you're looking for a cheat sheet on how to integrate search into your Python app, this article explains what you need to know.
The application is available:
-* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website)
+* [Sample](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/search-website-functions-v4)
* [Demo website - aka.ms/azs-good-books](https://aka.ms/azs-good-books)

## Azure SDK azure-search-documents
The Function app authenticates through the SDK to the cloud-based Cognitive Sear
The Azure Function app settings environment variables are pulled in from a file, `__init__.py`, shared between the three API functions.

## Azure Function: Search the catalog
-The Search [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Search/__init__.py) takes a search term and searches across the documents in the Search Index, returning a list of matches.
+The Search [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Search/__init__.py) takes a search term and searches across the documents in the Search Index, returning a list of matches.
-Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Search/function.json) bindings.
+Routing for the Search API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Search/function.json) bindings.
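Since the app settings reach the Function as environment variables, the shared code can assemble the search endpoint and credentials along these lines. This is a sketch under the assumption that the setting names match the configuration table; the helper itself is hypothetical, not the sample's `__init__.py`:

```python
import os

def load_search_config():
    """Read the Search settings the Function app exposes as env vars."""
    service = os.environ["SearchServiceName"]
    return {
        "endpoint": f"https://{service}.search.windows.net",
        "index_name": os.environ["SearchIndexName"],
        "api_key": os.environ["SearchApiKey"],
    }

# Simulate the app settings locally for demonstration.
os.environ.update({
    "SearchServiceName": "my-demo-search",
    "SearchIndexName": "good-books",
    "SearchApiKey": "example-query-key",
})
cfg = load_search_config()
print(cfg["endpoint"])  # https://my-demo-search.search.windows.net
```

Because the query key (not the admin key) is stored in `SearchApiKey`, the config gives the Function read-only access, in line with least privilege.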
The Azure Function pulls in the search configuration information, and fulfills the query.

## Client: Search from the catalog

Call the Azure Function in the React client with the following code.

## Azure Function: Suggestions from the catalog
-The `Suggest` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Suggest/__init__.py) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
+The `Suggest` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Suggest/__init__.py) takes a search term while a user is typing and suggests search terms such as book titles and authors across the documents in the search index, returning a small list of matches.
-The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/bulk-upload/good-books-index.json) used during bulk upload.
+The search suggester, `sg`, is defined in the [schema file](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/bulk-upload/good-books-index.json) used during bulk upload.
-Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Suggest/function.json) bindings.
+Routing for the Suggest API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Suggest/function.json) bindings.
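In REST terms, a call against the `sg` suggester boils down to a small JSON body. The following is a hypothetical sketch of how the Suggest function might shape it; the helper name and the default `top` value are assumptions, not the sample's actual values:

```python
def build_suggest_request(term, suggester_name="sg", top=5):
    """Shape the body for the Cognitive Search 'suggest' REST operation:
    the partial term typed so far, which suggester to use, and how many
    suggestions to return."""
    return {
        "search": term,
        "suggesterName": suggester_name,
        "top": top,
    }

body = build_suggest_request("huck")
print(body)
```

The suggester name must match one defined in the index schema, which is why `sg` appears in both the schema file and the query.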
## Client: Suggestions from the catalog
-The Suggest function API is called in the React app at `\src\components\SearchBar\SearchBar.js` as part of component initialization:
+The Suggest function API is called in the React app at `client\src\components\SearchBar\SearchBar.js` as part of component initialization:
## Azure Function: Get specific document
-The `Lookup` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Lookup/__init__.py) takes a ID and returns the document object from the Search Index.
+The `Lookup` [API](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Lookup/__init__.py) takes an ID and returns the document object from the Search Index.
-Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website/api/Lookup/function.json) bindings.
+Routing for the Lookup API is contained in the [function.json](https://github.com/Azure-Samples/azure-search-python-samples/blob/master/search-website-functions-v4/api/Lookup/function.json) bindings.
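The Lookup call maps to a simple document-by-key GET in the Cognitive Search REST API. Here is a sketch of the URL it resolves to; the helper and the hard-coded api-version are illustrative assumptions:

```python
from urllib.parse import quote

def lookup_url(service, index, doc_key, api_version="2021-04-30-Preview"):
    """Build the GET URL for fetching a single document by its key."""
    return (
        f"https://{service}.search.windows.net/indexes/{index}"
        f"/docs/{quote(doc_key)}?api-version={api_version}"
    )

print(lookup_url("my-demo-search", "good-books", "42"))
```

Quoting the key matters when document IDs contain characters that are not URL-safe.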
## Client: Get specific document
-This function API is called in the React app at `\src\pages\Details\Detail.js` as part of component initialization:
+This function API is called in the React app at `client\src\pages\Details\Detail.js` as part of component initialization:
## Next steps
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Learn about the latest updates to Azure Cognitive Search functionality, docs, an
| Month | Feature | Description |
|-|-|-|
-| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries. Public preview in the portal and preview REST APIs.|
+| December | [Enhanced configuration for semantic search](semantic-how-to-query-request.md#create-a-semantic-configuration) | This configuration is a new addition to the 2021-04-30-Preview API, and is now required for semantic queries in both the preview REST APIs and the Azure portal.|
| November | [Azure Files indexer (preview)](./search-file-storage-integration.md) | Public preview in the portal and preview REST APIs.|
| July | [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview) | Public preview announcement. |
| July | [Role-based access control for data plane (preview)](search-security-rbac.md) | Public preview announcement. |
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
Your machine must meet the following requirements:
- Your Linux machine must have a minimum of **4 CPU cores and 8 GB RAM**.

> [!NOTE]
- > - A single log forwarder machine using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
+ > - A single log forwarder machine with the above hardware configuration and using the **rsyslog** daemon has a supported capacity of **up to 8500 events per second (EPS)** collected.
- **Operating system**
sentinel Create Nrt Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md
Title: Work with near-real-time (NRT) detection analytics rules in Microsoft Sen
description: This article explains how to view and create near-real-time (NRT) detection analytics rules in Microsoft Sentinel.
Previously updated : 11/09/2021 Last updated : 11/02/2022

# Work with near-real-time (NRT) detection analytics rules in Microsoft Sentinel
You create NRT rules the same way you create regular [scheduled-query analytics
1. From the Microsoft Sentinel navigation menu, select **Analytics**.
-1. Select **Create** from the button bar, then **NRT query rule** from the drop-down list.
+1. Select **Create** from the button bar, then **NRT query rule (preview)** from the drop-down list.
- :::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Create a new NRT rule.":::
+ :::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Screenshot shows how to create a new NRT rule." lightbox="media/create-nrt-rules/create-nrt-rule.png":::
1. Follow the instructions of the [**analytics rule wizard**](detect-threats-custom.md). The configuration of NRT rules is in most ways the same as that of scheduled analytics rules.
- - You can refer to [**watchlists**](watchlists.md) and [**threat intelligence feeds**](understand-threat-intelligence.md) in your query logic.
+ - You can refer to [**watchlists**](watchlists.md) in your query logic.
- You can use all of the alert enrichment methods: [**entity mapping**](map-data-fields-to-entities.md), [**custom details**](surface-custom-details-in-alerts.md), and [**alert details**](customize-alert-details.md).
You create NRT rules the same way you create regular [scheduled-query analytics
In this document, you learned how to create near-real-time (NRT) analytics rules in Microsoft Sentinel.

-- Learn more about about [near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md).
+- Learn more about [near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md).
- Explore other [analytics rule types](detect-threats-built-in.md).
sentinel Near Real Time Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md
Title: Detect threats quickly with near-real-time (NRT) analytics rules in Micro
description: This article explains how the new near-real-time (NRT) analytics rules can help you detect threats quickly in Microsoft Sentinel.
Previously updated : 11/09/2021 Last updated : 11/02/2022

# Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel
The following limitations currently govern the use of NRT rules:
1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect:
- 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists and to threat intelligence feeds.
+ 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists.
1. You cannot use unions or joins.
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
- Title: Archive for What's new in Microsoft Sentinel
-description: A description of what's new and changed in Microsoft Sentinel from six months ago and earlier.
-
-Previously updated : 08/31/2022
-
-# Archive for What's new in Microsoft Sentinel
-
-The primary [What's new in Sentinel](whats-new.md) release notes page contains updates for the last six months, while this page contains older items.
-
-For information about earlier features delivered, see our [Tech Community blogs](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/bg-p/MicrosoftSentinelBlog/label-name/What's%20New).
-
-Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
--
-> [!TIP]
-> Our threat hunting teams across Microsoft contribute queries, playbooks, workbooks, and notebooks to the [Microsoft Sentinel Community](https://github.com/Azure/Azure-Sentinel), including specific [hunting queries](https://github.com/Azure/Azure-Sentinel) that your teams can adapt and use.
->
-> You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
--
-## December 2021
-
-- [Apache Log4j Vulnerability Detection solution](#apache-log4j-vulnerability-detection-solution-public-preview)
-- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)
-- [Continuous Threat Monitoring for GitHub solution](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)
-
-### Apache Log4j Vulnerability Detection solution
-
-Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the Log4j 2 vulnerable component.
-
-The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
-
-For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
-
-### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
-
-The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-deploy.md) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
-
-For more information, see [Tutorial: Investigate Microsoft Defender for IoT devices with Microsoft Sentinel](iot-advanced-threat-monitoring.md).
--
-### Ingest GitHub logs into your Microsoft Sentinel workspace (Public preview)
-
-Use the new [Continuous Threat Monitoring for GitHub](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftcorporation1622712991604.sentinel4github?tab=Overview) solution and [data connector](data-connectors-reference.md#github-preview) to ingest your GitHub logs into your Microsoft Sentinel workspace.
-
-The **Continuous Threat Monitoring for GitHub** solution includes a data connector, relevant analytics rules, and a workbook that you can use to visualize your log data.
-
-For example, view the number of users that were added or removed from GitHub repositories, how many repositories were created, forked, or cloned, in the selected time frame.
-
-> [!NOTE]
-> The **Continuous Threat Monitoring for GitHub** solution is supported for GitHub enterprise licenses only.
->
-
-For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md) and [instructions](data-connectors-reference.md#github-preview) for installing the GitHub data connector.
-
-### Apache Log4j Vulnerability Detection solution (Public preview)
-
-Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the Log4j 2 vulnerable component.
-
-The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
-
-For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
-
-## November 2021
-- [Incident advanced search now available in GA](#incident-advanced-search-now-available-in-ga)
-- [Amazon Web Services S3 connector now available (Public preview)](#amazon-web-services-s3-connector-now-available-public-preview)
-- [Windows Forwarded Events connector now available (Public preview)](#windows-forwarded-events-connector-now-available-public-preview)
-- [Near-real-time (NRT) threat detection rules now available (Public preview)](#near-real-time-nrt-threat-detection-rules-now-available-public-preview)
-- [Fusion engine now detects emerging and unknown threats (Public preview)](#fusion-engine-now-detects-emerging-and-unknown-threats-public-preview)
-- [Fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)
-- [Free trial updates](#free-trial-updates)
-- [Content hub and new solutions (Public preview)](#content-hub-and-new-solutions-public-preview)
-- [Continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)
-- [Enriched threat intelligence with Geolocation and WhoIs data (Public preview)](#enriched-threat-intelligence-with-geolocation-and-whois-data-public-preview)
-- [Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)](#use-notebooks-with-azure-synapse-analytics-in-microsoft-sentinel-public-preview)
-- [Enhanced Notebooks area in Microsoft Sentinel](#enhanced-notebooks-area-in-microsoft-sentinel)
-- [Microsoft Sentinel renaming](#microsoft-sentinel-renaming)
-- [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-microsoft-sentinel)
-
-### Incident advanced search now available in GA
-
-Searching for incidents using the advanced search functionality is now generally available.
-
-The advanced incident search provides the ability to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Amazon Web Services S3 connector now available (Public preview)
-
-You can now connect Microsoft Sentinel to your Amazon Web Services (AWS) S3 storage bucket, in order to ingest logs from a variety of AWS services.
-
-For now, you can use this connection to ingest VPC Flow Logs and GuardDuty findings, as well as AWS CloudTrail.
-
-For more information, see [Connect Microsoft Sentinel to S3 Buckets to get Amazon Web Services (AWS) data](connect-aws.md).
-
-### Windows Forwarded Events connector now available (Public preview)
-
-You can now stream event logs from Windows Servers connected to your Microsoft Sentinel workspace using Windows Event Collection / Windows Event Forwarding (WEC / WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
-
-- **Scalability:** If you've enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.
-
-- **Speed:** The AMA can send data at an improved rate of 5 K EPS, allowing for faster data refresh.
-
-- **Efficiency:** The AMA allows you to design complex Data Collection Rules (DCR) to filter the logs at their source, choosing the exact events to stream to your workspace. DCRs help lower your network traffic and your ingestion costs by leaving out undesired events.
-
-- **Coverage:** WEC / WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.
-
-We recommend using this connector with the [Microsoft Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
-
-Learn more about the [Windows Forwarded Events connector](data-connectors-reference.md#windows-forwarded-events-preview).
-
-### Near-real-time (NRT) threat detection rules now available (Public preview)
-
-When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection - closer to that of an on-premises SIEM - and the ability to shorten response times in specific scenarios.
-
Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.
-
-Learn more about [NRT rules](near-real-time-rules.md) and [how to use them](create-nrt-rules.md).
-
-### Fusion engine now detects emerging and unknown threats (Public preview)
-
-In addition to detecting attacks based on [predefined scenarios](fusion-scenario-reference.md), Microsoft Sentinel's ML-powered Fusion engine can help you find the emerging and unknown threats in your environment by applying extended ML analysis and by correlating a broader scope of anomalous signals, while keeping the alert fatigue low.
-
-The Fusion engine's ML algorithms constantly learn from existing attacks and apply analysis based on how security analysts think. It can therefore discover previously undetected threats from millions of anomalous behaviors across the kill-chain throughout your environment, which helps you stay one step ahead of the attackers.
-
-Learn more about [Fusion for emerging threats](fusion.md#fusion-for-emerging-threats).
-
-Also, the [Fusion analytics rule is now more configurable](configure-fusion-rules.md), reflecting its increased functionality.
-
-### Get fine-tuning recommendations for your analytics rules (Public preview)
-
-Fine-tuning threat detection rules in your SIEM can be a difficult, delicate, and continuous process of balancing between maximizing your threat detection coverage and minimizing false positive rates. Microsoft Sentinel simplifies and streamlines this process by using machine learning to analyze billions of signals from your data sources as well as your responses to incidents over time, deducing patterns and providing you with actionable recommendations and insights that can significantly lower your tuning overhead and allow you to focus on detecting and responding to actual threats.
-
-[Tuning recommendations and insights](detection-tuning.md) are now built in to your analytics rules.
-
-### Free trial updates
-
-Microsoft Sentinel's free trial continues to support new or existing Log Analytics workspaces at no additional cost for the first 31 days.
-
-We're evolving our free trial experience to include the following updates:
- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
- Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.
- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no additional cost. Existing workspaces include any workspaces created more than three days ago.
- Only the Microsoft Sentinel charges are waived during the 31-day trial period.
-
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
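To illustrate the accounting, here's a minimal sketch of how the trial allowance works for a new workspace, assuming the 10 GB/day free limit described above. The per-GB price is a hypothetical placeholder, not the actual rate; always check the pricing page.

```python
# Hedged sketch of free-trial accounting for a *new* workspace.
# FREE_GB_PER_DAY and TRIAL_DAYS come from the article; PRICE_PER_GB is hypothetical.
FREE_GB_PER_DAY = 10
TRIAL_DAYS = 31
PRICE_PER_GB = 2.50  # placeholder; see the Microsoft Sentinel pricing page

def billable_gb(daily_ingest_gb):
    """Sum ingestion beyond the daily free allowance across the trial window."""
    return sum(max(day - FREE_GB_PER_DAY, 0) for day in daily_ingest_gb[:TRIAL_DAYS])

def estimated_overage_cost(daily_ingest_gb):
    """Apply the (hypothetical) per-GB rate to the billable overage."""
    return billable_gb(daily_ingest_gb) * PRICE_PER_GB
```

For example, ingesting 12 GB every day of the trial leaves 2 GB/day, or 62 GB total, billable.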
-
-> [!TIP]
> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
->
-
-For more information, see [Plan and manage costs for Microsoft Sentinel](billing.md).
-
-### Content hub and new solutions (Public preview)
-
-Microsoft Sentinel now provides a **Content hub**, a centralized location to find and deploy Microsoft Sentinel out-of-the-box (built-in) content and solutions to your Microsoft Sentinel workspace. Find the content you need by filtering for content type, support models, categories and more, or use the powerful text search.
-
-Under **Content management**, select **Content hub**. Select a solution to view more details on the right, and then click **Install** to install it in your workspace.
-The following list includes highlights of new, out-of-the-box solutions added to the Content hub:
-
- :::column span="":::
- - Microsoft Sentinel Training Lab
- - Cisco ASA
- - Cisco Duo Security
- - Cisco Meraki
- - Cisco StealthWatch
- - Digital Guardian
    - Dynamics 365
- - GCP Cloud DNS
- :::column-end:::
- :::column span="":::
- - GCP CloudMonitor
- - GCP Identity and Access Management
- - FalconForce
- - FireEye NX
- - Flare Systems Firework
- - Forescout
- - Fortinet Fortigate
    - Imperva Cloud WAF
- :::column-end:::
- :::column span="":::
- - Insider Risk Management (IRM)
- - IronNet CyberSecurity Iron Defense
- - Lookout
- - McAfee Network Security Platform
- - Microsoft MITRE ATT&CK Solution for Cloud
- - Palo Alto PAN-OS
- :::column-end:::
- :::column span="":::
- - Rapid7 Nexpose / Insight VM
- - ReversingLabs
- - RSA SecurID
- - Semperis
- - Tenable Nessus Scanner
- - Vectra Stream
- - Zero Trust
- :::column-end:::
-
-For more information, see:
- [Learn about Microsoft Sentinel solutions](sentinel-solutions.md)
- [Discover and deploy Microsoft Sentinel solutions](sentinel-solutions-deploy.md)
- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
-### Enable continuous deployment from your content repositories (Public preview)
-
-The new Microsoft Sentinel **Repositories** page provides the ability to manage and deploy your custom content from GitHub or Azure DevOps repositories, as an alternative to managing them in the Azure portal. This capability introduces a more streamlined and automated approach for managing and deploying content across Microsoft Sentinel workspaces.
-
-If you store your custom content in an external repository in order to maintain it outside of Microsoft Sentinel, now you can connect that repository to your Microsoft Sentinel workspace. Content you add, create, or edit in your repository is automatically deployed to your Microsoft Sentinel workspaces, and will be visible from the various Microsoft Sentinel galleries, such as the **Analytics**, **Hunting**, or **Workbooks** pages.
-
-For more information, see [Deploy custom content from your repository](ci-cd.md).
-
-### Enriched threat intelligence with Geolocation and WhoIs data (Public preview)
-
-Now, any threat intelligence data that you bring in to Microsoft Sentinel via data connectors and logic app playbooks, or create in Microsoft Sentinel, is automatically enriched with GeoLocation and WhoIs information.
-
-GeoLocation and WhoIs data can provide more context for investigations where the selected indicator of compromise (IOC) is found.
-
-For example, use GeoLocation data to find details like *Organization* or *Country* for the indicator, and WhoIs data to find data like *Registrar* and *Record creation* data.
-
-You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of compromise that you've imported into Microsoft Sentinel. Details for the indicator are shown on the right, including any Geolocation and WhoIs data available.
-
-> [!TIP]
-> The Geolocation and WhoIs information come from the Microsoft Threat Intelligence service, which you can also access via API. For more information, see [Enrich entities with geolocation data via API](geolocation-data-api.md).
->
-
-For more information, see:
- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
- [Understand threat intelligence integrations](threat-intelligence-integration.md)
- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
- [Connect threat intelligence platforms](connect-threat-intelligence-tip.md)
-### Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)
-
-Microsoft Sentinel now integrates Jupyter notebooks with Azure Synapse for large-scale security analytics scenarios.
-
-Until now, Jupyter notebooks in Microsoft Sentinel have been integrated with Azure Machine Learning. This functionality supports users who want to incorporate notebooks, popular open-source machine learning toolkits, and libraries such as TensorFlow, as well as their own custom models, into security workflows.
-
-The new Azure Synapse integration provides extra analytic horsepower, such as:
- **Security big data analytics**, using a cost-optimized, fully managed Azure Synapse Apache Spark compute pool.

- **Cost-effective Data Lake access** to build analytics on historical data via Azure Data Lake Storage Gen2, a set of capabilities dedicated to big data analytics, built on top of Azure Blob Storage.

- **Flexibility to integrate data sources** into security operation workflows from multiple sources and formats.

- **PySpark, a Python-based API** for using the Spark framework in combination with Python, reducing the need to learn a new programming language if you're already familiar with Python.
-To support this integration, we added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
-
-For more information, see [Integrate notebooks with Azure Synapse](notebooks-with-synapse.md).
-
-### Enhanced Notebooks area in Microsoft Sentinel
-
-The **Notebooks** area in Microsoft Sentinel also now has an **Overview** tab, where you can find basic information about notebooks, and a new **Notebook types** column in the **Templates** tab to indicate the type of each notebook displayed. For example, notebooks might have types of **Getting started**, **Configuration**, **Hunting**, and now **Synapse**.
-
-For more information, see [Use Jupyter notebooks to hunt for security threats](notebooks.md).
-
-### Microsoft Sentinel renaming
-
Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see updates rolling out in the portal, documentation, and other resources in parallel.
-
-Earlier entries in this article and the older [Archive for What's new in Sentinel](whats-new-archive.md) continue to use the name *Azure* Sentinel, as that was the service name when those features were new.
-
-For more information, see our [blog on recent security enhancements](https://aka.ms/secblg11).
-
-### Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel
-
-The new **Microsoft Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
-
-Once deployed, any access or operation with the honeytoken keys and secrets generate incidents that you can investigate in Microsoft Sentinel.
-
-Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
-
-The **Microsoft Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
-
-For more information, see [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
-
-## October 2021
- [Windows Security Events connector using Azure Monitor Agent now in GA](#windows-security-events-connector-using-azure-monitor-agent-now-in-ga)
- [Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)](#defender-for-office-365-events-now-available-in-the-microsoft-365-defender-connector-public-preview)
- [Playbook templates and gallery now available (Public preview)](#playbook-templates-and-gallery-now-available-public-preview)
- [Template versioning for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
- [DHCP normalization schema (Public preview)](#dhcp-normalization-schema-public-preview)
-### Windows Security Events connector using Azure Monitor Agent now in GA
-
-The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available. For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA).
-
-### Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)
-
-In addition to those from Microsoft Defender for Endpoint, you can now ingest raw [advanced hunting events](/microsoft-365/security/defender/advanced-hunting-overview) from [Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/overview) through the [Microsoft 365 Defender connector](connect-microsoft-365-defender.md). [Learn more](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
-
-### Playbook templates and gallery now available (Public preview)
-
-A playbook template is a pre-built, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
-
-Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of an [Microsoft Sentinel solution](sentinel-solutions.md), or in the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
-
-For more information, see [Create and customize playbooks from built-in templates](use-playbook-templates.md).
-
-### Manage template versions for your scheduled analytics rules (Public preview)
-
-When you create analytics rules from [built-in Microsoft Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
-
-However, rules created from templates ***do*** remember which templates they came from, which allows you two advantages:
- If you made changes to a rule when creating it from a template (or at any time after that), you can always revert the rule back to its original version (as a copy of the template).

- If a template is updated, you'll be notified and you can choose to update your rules to the new version of their templates, or leave them as they are.
-[Learn how to manage these tasks](manage-analytics-rule-templates.md), and what to keep in mind. These procedures apply to any [Scheduled](detect-threats-built-in.md#scheduled) analytics rules created from templates.
-
-### DHCP normalization schema (Public preview)
-
-The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which is used to describe events reported by a DHCP server and is used by Microsoft Sentinel to enable source-agnostic analytics.
-
Events described in the DHCP normalization schema include serving requests for DHCP IP address leases from client systems and updating a DNS server with the granted leases.
-
-For more information, see:
- [Microsoft Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
- [Normalization and the Microsoft Sentinel Information Model (ASIM)](normalization.md)
-## September 2021
- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
-### Data connector health enhancements (Public preview)
-
Microsoft Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Microsoft Sentinel health feature](monitor-sentinel-health.md) in your Microsoft Sentinel workspace, when the first success or failure health event is generated.
-
-For more information, see [Monitor the health of your data connectors with this Microsoft Sentinel workbook](monitor-data-connector-health.md).
-
-> [!NOTE]
-> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
->
-### New in docs: scaling data connector documentation
-
-As we continue to add more and more built-in data connectors for Microsoft Sentinel, we reorganized our data connector documentation to reflect this scaling.
-
-For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
-
-Check the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
-
-For more information, see:
- **Conceptual information**: [Connect data sources](connect-data-sources.md)

- **Generic how-to articles**:
- - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
- - [Connect your data source to the Microsoft Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
- - [Get CEF-formatted logs from your device or appliance into Microsoft Sentinel](connect-common-event-format.md)
- - [Collect data from Linux-based sources using Syslog](connect-syslog.md)
- - [Collect data in custom log formats to Microsoft Sentinel with the Log Analytics agent](connect-custom-logs.md)
- - [Use Azure Functions to connect your data source to Microsoft Sentinel](connect-azure-functions-template.md)
- - [Resources for creating Microsoft Sentinel custom connectors](create-custom-connector.md)
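The Data Collector API linked above authenticates requests with an HMAC-SHA256 `SharedKey` authorization header. As a hedged sketch (the workspace ID and key below are placeholders, and the authoritative details are in the Data Collector API reference), the header can be built like this:

```python
import base64
import hashlib
import hmac

def build_auth_header(workspace_id, shared_key, content_length, rfc1123_date):
    """Build the SharedKey Authorization header for the Log Analytics
    Data Collector API (HTTP POST to /api/logs)."""
    # Canonical string signed by the API: method, length, content type,
    # x-ms-date header, and the resource path.
    string_to_sign = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    # The workspace shared key is base64; decode it before signing.
    decoded_key = base64.b64decode(shared_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"
```

The header accompanies a POST to `https://<workspace-id>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01`, together with `Log-Type` and `x-ms-date` headers naming the custom log table and the signed timestamp.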
-
-### Azure Storage account connector changes
-
Because of changes in the Azure Storage account resource configuration itself, the connector also needs to be reconfigured.
-The storage account (parent) resource has within it other (child) resources for each type of storage: files, tables, queues, and blobs.
-
-When configuring diagnostics for a storage account, you must select and configure, in turn:
- The parent account resource, exporting the **Transaction** metric.
- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).
-You'll only see the storage types that you actually have defined resources for.
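As an illustrative summary of that split (this is not a literal ARM payload, and the log category names are assumptions to verify for each storage type), the parent account exports the **Transaction** metric while a child resource such as the blob service exports its logs and metrics:

```json
{
  "parentAccount": {
    "metrics": [
      { "category": "Transaction", "enabled": true }
    ]
  },
  "blobService": {
    "logs": [
      { "category": "StorageRead", "enabled": true },
      { "category": "StorageWrite", "enabled": true },
      { "category": "StorageDelete", "enabled": true }
    ],
    "metrics": [
      { "category": "Transaction", "enabled": true }
    ]
  }
}
```

Repeat the child-resource configuration for each storage type (files, tables, queues, blobs) you have defined.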
-## August 2021
- [Advanced incident search (Public preview)](#advanced-incident-search-public-preview)
- [Fusion detection for Ransomware (Public preview)](#fusion-detection-for-ransomware-public-preview)
- [Watchlist templates for UEBA data](#watchlist-templates-for-ueba-data-public-preview)
- [File event normalization schema (Public preview)](#file-event-normalization-schema-public-preview)
- [New in docs: Best practice guidance](#new-in-docs-best-practice-guidance)
-### Advanced incident search (Public preview)
-
-By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Microsoft Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Fusion detection for Ransomware (Public preview)
-
-Microsoft Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
-
Incidents are generated for alerts that are possibly associated with ransomware activities, when they occur during a specific time frame, and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host or device and to evade detection.
-
-Supported data connectors include:
- [Azure Defender (Azure Security Center)](connect-defender-for-cloud.md)
- [Microsoft Defender for Endpoint](./data-connectors-reference.md#microsoft-defender-for-endpoint)
- [Microsoft Defender for Identity](./data-connectors-reference.md#microsoft-defender-for-identity)
- [Microsoft Cloud App Security](./data-connectors-reference.md#microsoft-defender-for-cloud-apps)
- [Microsoft Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
-For more information, see [Multiple alerts possibly related to Ransomware activity detected](fusion.md#fusion-for-ransomware).
-
-### Watchlist templates for UEBA data (Public preview)
-
-Microsoft Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
-
After UEBA watchlists are populated with data, you can correlate that data with analytics rules, view it in the entity pages and investigation graphs as insights, use it for custom purposes such as tracking VIP or sensitive users, and more.
-
-Watchlist templates currently include:
- **VIP Users**. A list of user accounts of employees that have high impact value in the organization.
- **Terminated Employees**. A list of user accounts of employees that have been, or are about to be, terminated.
- **Service Accounts**. A list of service accounts and their owners.
- **Identity Correlation**. A list of related user accounts that belong to the same person.
- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
-For more info