Updates from: 06/08/2023 01:26:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
active-directory Concept Authentication Authenticator App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md
Previously updated : 11/16/2022 Last updated : 06/06/2023
In some rare instances where the relevant Google or Apple service responsible fo
The Authenticator app can be used as a software token to generate an OATH verification code. After entering your username and password, you enter the code provided by the Authenticator app into the sign-in interface. The verification code provides a second form of authentication.
-Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Authenticator app, configured for use at any time.
-
-> [!WARNING]
-> To ensure the highest level of security for self-service password reset when only one method is required for reset, a verification code is the only option available to users.
->
-> When two methods are required, users can reset using either a notification or verification code in addition to any other enabled methods.
+> [!NOTE]
+> OATH verification codes generated by Authenticator aren't supported for certificate-based authentication.
+Users may have a combination of up to five OATH hardware tokens or authenticator applications, such as the Authenticator app, configured for use at any time.
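The OATH verification code described above is a time-based one-time password (TOTP, RFC 6238). The following is a minimal stdlib sketch of the algorithm for illustration only; it is not Microsoft's implementation, and the function name is hypothetical:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, timestamp=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int(time.time() if timestamp is None else timestamp) // step
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because both the app and the server derive the code from a shared secret and the current time, no network connection is needed at sign-in.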
## FIPS 140 compliant for Azure AD authentication
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
If a user doesn't have the minimum number of required methods registered when th
#### Mobile app and SSPR
-When using a mobile app as a method for password reset, like the Microsoft Authenticator app, the following considerations apply:
+When using a mobile app as a method for password reset, like the Microsoft Authenticator app, the following considerations apply if an organization has not [migrated to the centralized Authentication methods policy](how-to-authentication-methods-manage.md):
* When administrators require one method be used to reset a password, verification code is the only option available.
* When administrators require two methods be used to reset a password, users are able to use notification **OR** verification code in addition to any other enabled methods.
When using a mobile app as a method for password reset, like the Microsoft Authe
| :: | :: | :: |
| Mobile app features available | Code | Code or Notification |
-Users don't have the option to register their mobile app when registering for self-service password reset from [https://aka.ms/ssprsetup](https://aka.ms/ssprsetup). Users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup), or in the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
+Users can register their mobile app at [https://aka.ms/mfasetup](https://aka.ms/mfasetup), or in the combined security info registration at [https://aka.ms/setupsecurityinfo](https://aka.ms/setupsecurityinfo).
> [!IMPORTANT]
-> The Authenticator app can't be selected as the only authentication method when only one method is required. Similarly, the Authenticator app and only one additional method cannot be selected when requiring two methods.
+> The Authenticator app can't be selected as the only authentication method when only one method is required. Similarly, the Authenticator app and only one additional method can't be selected when requiring two methods.
> > When configuring SSPR policies that include the Authenticator app as a method, at least one additional method should be selected when one method is required, and at least two additional methods should be selected when two methods are required.
->
-> This requirement is because the current SSPR registration experience doesn't include the option to register the authenticator app. The option to register the authenticator app is included with the new [combined registration experience](./concept-registration-mfa-sspr-combined.md).
->
-> Allowing policies that only use the Authenticator app (when one method is required), or the Authenticator app and only one additional method (when two methods are required), could lead to users being blocked from registering for SSPR until they're configured to use the new combined registration experience.
### Change authentication methods
active-directory Fido2 Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md
# Browser support of FIDO2 passwordless authentication
-Azure Active Directory allows [FIDO2 security keys](./concept-authentication-passwordless.md#fido2-security-keys) to be used as a passwordless device. The availability of FIDO2 authentication for Microsoft accounts was [announced in 2018](https://techcommunity.microsoft.com/t5/identity-standards-blog/all-about-fido2-ctap2-and-webauthn/ba-p/288910), and it became [generally available](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/passwordless-authentication-is-now-generally-available/ba-p/1994700) in March 2021. The following diagram shows which browsers and operating system combinations support passwordless authentication using FIDO2 authentication keys with Azure Active Directory.
+Azure Active Directory allows [FIDO2 security keys](./concept-authentication-passwordless.md#fido2-security-keys) to be used as a passwordless device. The availability of FIDO2 authentication for Microsoft accounts was [announced in 2018](https://techcommunity.microsoft.com/t5/identity-standards-blog/all-about-fido2-ctap2-and-webauthn/ba-p/288910), and it became [generally available](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/passwordless-authentication-is-now-generally-available/ba-p/1994700) in March 2021. The following diagram shows which browsers and operating system combinations support passwordless authentication using FIDO2 authentication keys with Azure Active Directory. Azure AD currently supports only hardware FIDO2 keys and does not support passkeys for any platform.
## Supported browsers
active-directory How To Configure Aws Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-configure-aws-iam.md
+
+ Title: Configure AWS IAM Identity Center as an identity provider
+description: How to configure AWS IAM Identity Center as an identity provider.
+ Last updated : 06/07/2023
+# Configure AWS IAM Identity Center as an identity provider
+
+If you're an Amazon Web Services (AWS) customer who uses the AWS IAM Identity Center, you can configure the Identity Center as an identity provider in Permissions Management. Configuring your AWS IAM Identity Center information allows you to receive more accurate data for your identities in Permissions Management.
+
+> [!NOTE]
+> Configuring AWS IAM Identity Center as an identity provider is an optional step. By configuring identity provider information, Permissions Management can read user and role access configured at AWS IAM Identity Center. Admins can see the augmented view of assigned permissions to the identities. You can return to these steps to configure an IdP at any time.
+
+## How to configure AWS IAM Identity Center as an identity provider
+
+1. If the **Data Collectors** dashboard isn't displayed when Permissions Management launches, select **Settings** (gear icon), and then select the **Data Collectors** subtab.
++
+2. On the **Data Collectors** dashboard, select **AWS**, and then select **Create Configuration**. If a Data Collector already exists in your AWS account and you want to add AWS IAM integration, do the following:
+ - Select the Data Collector for which you want to configure AWS IAM.
+ - Click on the ellipsis next to the **Authorization Systems Status**.
+ - Select **Integrate Identity Provider**.
+
+3. On the **Integrate Identity provider (IdP)** page, select the box for **AWS IAM Identity Center**.
+
+4. Fill in the following fields:
+ - The **AWS IAM Identity Center Region**. Specify the region where AWS IAM Identity Center is installed. All data configured in the IAM Identity Center is stored in the Region where the IAM Identity Center is installed.
+ - Your **AWS Management Account ID**
+ - Your **AWS Management Account Role**
+
+5. Select **Launch Management Account Template**. The template opens in a new window.
+6. If the Management Account stack was created with the CloudFormation Template as part of the previous onboarding steps, update the stack by setting ``EnableSSO`` to ``true``. A new stack is created when the Management Account Template is run.
+
+The template execution attaches the AWS managed policy ``AWSSSOReadOnly`` and the newly created custom policy ``SSOPolicy`` to the AWS IAM role that allows Microsoft Entra Permissions Management to collect organizational information. The following details are requested in the template. All fields are pre-populated, and you can edit the data as needed:
+- **Stack name** - This is the name of the AWS stack for creating the required AWS resources for Permissions Management to collect organizational information. The default value is ``mciem-org-<tenant-id>``.
+
+- **CFT Parameters**
+ - **OIDC Provider Role Name** - Name of the IAM Role OIDC Provider that can assume the role. The default value is the OIDC account role (as entered in Permissions Management).
+
+ - **Org Account Role Name** - Name of the IAM Role. The default value is pre-populated with the Management account role name (as entered in Microsoft Entra PM).
+
+ - **EnableSSO** - Enables AWS SSO. The default value is ``true`` when the template is launched from the Configure Identity Provider (IdP) page; otherwise, the default is ``false``.
+
+ - **OIDC Provider Account ID** - The Account ID where the OIDC Provider is created. The default value is the OIDC Provider Account ID (as entered in Permissions Management).
+
+ - **Tenant ID** - ID of the tenant where the application is created. The default value is ``tenant-id`` (the configured tenant).
+7. Click **Next** to review and confirm the information you've entered.
+
+8. Click **Verify Now & Save**.
++
+## Next steps
+
+- For information on how to attach and detach permissions AWS identities, see [Attach and detach policies for AWS identities](how-to-attach-detach-permissions.md).
active-directory Security Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-tokens.md
Title: Security tokens
+ Title: Tokens and claims overview
description: Learn about the basics of security tokens in the Microsoft identity platform.
Previously updated : 01/06/2023 Last updated : 06/02/2023 #Customer intent: As an application developer, I want to understand the basic concepts of security tokens in the Microsoft identity platform.
-# Security tokens
+# Tokens and claims overview
-A centralized identity provider is especially useful for apps that have worldwide users who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as [access tokens](developer-glossary.md#access-token), [refresh tokens](developer-glossary.md#refresh-token), and [ID tokens](developer-glossary.md#id-token). Security tokens allow a [client application](developer-glossary.md#client-application) to access protected resources on a [resource server](developer-glossary.md#resource-server).
+A centralized identity provider is especially useful for apps that have worldwide users who don't necessarily sign in from the enterprise's network. The Microsoft identity platform authenticates users and provides security tokens, such as access tokens, refresh tokens, and ID tokens. Security tokens allow a client application to access protected resources on a resource server.
-**Access token**: An access token is a security token issued by an [authorization server](developer-glossary.md#authorization-server) as part of an [OAuth 2.0](active-directory-v2-protocols.md) flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Access tokens are validated by resources to grant access to a client app. To learn more about how the Microsoft identity platform issues access tokens, see [Access tokens](access-tokens.md).
+ - **Access token** - An access token is a security token issued by an authorization server as part of an OAuth 2.0 flow. It contains information about the user and the resource for which the token is intended. The information can be used to access web APIs and other protected resources. Resources validate access tokens to grant access to a client application. For more information, see [Access tokens in the Microsoft identity platform](access-tokens.md).
+- **Refresh token** - Because access tokens are valid for only a short period of time, authorization servers sometimes issue a refresh token at the same time the access token is issued. The client application can then exchange this refresh token for a new access token when needed. For more information, see [Refresh tokens in the Microsoft identity platform](refresh-tokens.md).
+- **ID token** - ID tokens are sent to the client application as part of an OpenID Connect flow. They can be sent alongside or instead of an access token. ID tokens are used by the client to authenticate the user. To learn more about how the Microsoft identity platform issues ID tokens, see [ID tokens in the Microsoft identity platform](id-tokens.md).
-**Refresh token**: Because access tokens are valid for only a short period of time, authorization servers will sometimes issue a refresh token at the same time the access token is issued. The client application can then exchange this refresh token for a new access token when needed. To learn more about how the Microsoft identity platform uses refresh tokens to revoke permissions, see [Refresh tokens](refresh-tokens.md).
+Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [SAML token reference](reference-saml-tokens.md).
-**ID token**: ID tokens are sent to the client application as part of an [OpenID Connect](v2-protocols-oidc.md) flow. They can be sent alongside or instead of an access token. ID tokens are used by the client to authenticate the user. To learn more about how the Microsoft identity platform issues ID tokens, see [ID tokens](id-tokens.md).
+## Validate tokens
-Many enterprise applications use SAML to authenticate users. For information on SAML assertions, see [Azure Active Directory SAML token reference](reference-saml-tokens.md).
+It's up to the application for which the token was generated, the web app that signed in the user, or the web API being called to validate the token. The authorization server signs the token with a private key. The authorization server publishes the corresponding public key. To validate a token, the app verifies the signature by using the authorization server public key to validate that the signature was created using the private key.
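The sign-then-verify pattern described above can be sketched with a symmetric key. Note that this is an illustration only: Azure AD actually signs tokens with asymmetric RS256 keys and publishes the public keys for apps to fetch, while the sketch below uses stdlib HMAC-SHA256 (a shared key) so it stays self-contained, and the function names are hypothetical:

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    """Base64url-encode bytes without the padding that JWTs strip."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims, key):
    """Issue a compact JWT signed with HMAC-SHA256 (the authorization server's role)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signature = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{signature}"

def verify_jwt(token, key):
    """Recompute the signature over header.payload and compare (the app's role)."""
    header, payload, signature = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(signature, expected)
```

Any tampering with the header or payload changes the recomputed signature, so verification fails.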
-## Validate security tokens
+Tokens are valid for only a limited amount of time, so the authorization server frequently provides a pair of tokens. An access token is provided, which accesses the application or protected resource. A refresh token is provided, which is used to refresh the access token when the access token is close to expiring.
-It's up to the app for which the token was generated, the web app that signed in the user, or the web API being called to validate the token. The token is signed by the authorization server with a private key. The authorization server publishes the corresponding public key. To validate a token, the app verifies the signature by using the authorization server public key to validate that the signature was created using the private key.
-
-Tokens are valid for only a limited amount of time, so the authorization server frequently provides a pair of tokens;
-
-* An access token, which accesses the application or protected resource.
-* A refresh token, which is used to refresh the access token when the access token is close to expiring.
-
-Access tokens are passed to a web API as the bearer token in the `Authorization` header. An app can provide a refresh token to the authorization server. If the user access to the app wasn't revoked, it will get back a new access token and a new refresh token. This is how the scenario of someone leaving the enterprise is handled. When the authorization server receives the refresh token, it won't issue another valid access token if the user is no longer authorized.
+Access tokens are passed to a web API as the bearer token in the `Authorization` header. An app can provide a refresh token to the authorization server. If the user access to the app wasn't revoked, it receives a new access token and a new refresh token. When the authorization server receives the refresh token, it issues another access token only if the user is still authorized.
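A minimal sketch of the bearer pattern described above; the function names are illustrative, and the token value would come from whichever flow your app uses:

```python
import json
import urllib.request

def bearer_headers(access_token):
    """Build the Authorization header that carries the access token."""
    return {"Authorization": f"Bearer {access_token}"}

def call_api(url, access_token):
    """Call a protected web API, presenting the access token as a bearer credential."""
    request = urllib.request.Request(url, headers=bearer_headers(access_token))
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```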
## JSON Web Tokens and claims The Microsoft identity platform implements security tokens as JSON Web Tokens (JWTs) that contain *claims*. Since JWTs are used as security tokens, this form of authentication is sometimes called *JWT authentication*.
-A [claim](developer-glossary.md#claim) provides assertions about one entity, such as a client application or [resource owner](developer-glossary.md#resource-owner), to another entity, such as a resource server. A claim might also be referred to as a JWT claim or a JSON Web Token claim.
+A claim provides assertions about one entity, such as a client application or resource owner, to another entity, such as a resource server. A claim might also be referred to as a JWT claim or a JSON Web Token claim.
-Claims are name or value pairs that relay facts about the token subject. For example, a claim might contain facts about the security principal that was authenticated by the authorization server. The claims present in a specific token depend on many things, such as the type of token, the type of credential used to authenticate the subject, and the application configuration.
+Claims are name or value pairs that relay facts about the token subject. For example, a claim might contain facts about the security principal that the authorization server authenticated. The claims present in a specific token depend on many things, such as the type of token, the type of credential used to authenticate the subject, and the application configuration.
-Applications can use claims for various tasks, such as to:
+Applications can use claims for tasks such as the following:
-* Validate the token.
-* Identify the token subject's [tenant](developer-glossary.md#tenant).
-* Display user information.
-* Determine the subject's authorization.
+* Validate the token
+* Identify the token subject's tenant
+* Display user information
+* Determine the subject's authorization
-A claim consists of key-value pairs that provide information such as the:
+A claim consists of key-value pairs that provide the following types of information:
-* Security Token Server that generated the token.
-* Date when the token was generated.
-* Subject (like the user, but not daemons).
-* Audience, which is the app for which the token was generated.
-* App (the client) that asked for the token. For web apps, this app might be the same as the audience.
+* Security token server that generated the token
+* Date when the token was generated
+* Subject (like the user, but not daemons)
+* Audience, which is the app for which the token was generated
+* App (the client) that asked for the token
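The claims above live in the JWT's base64url-encoded payload segment, so they can be read without any key. A stdlib sketch for inspection only; decoding claims this way is never a substitute for validating the signature:

```python
import base64
import json

def decode_claims(token):
    """Base64url-decode a JWT's payload segment to inspect its claims.

    Inspection only: this does not validate the token's signature.
    """
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the padding JWTs strip
    return json.loads(base64.urlsafe_b64decode(payload))
```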
-To learn more about how the Microsoft identity platform implements tokens and claim information, see [Access tokens](access-tokens.md) and [ID tokens](id-tokens.md).
+## Authorization flows and authentication codes
-## How each flow emits tokens and codes
+Depending on how your client is built, it can use one or several of the authentication flows supported by the Microsoft identity platform. The supported flows can produce various tokens and authorization codes and require different tokens to make them work. The following table provides an overview.
-Depending on how your client is built, it can use one (or several) of the authentication flows supported by the Microsoft identity platform. These flows can produce various tokens (ID tokens, refresh tokens, access tokens) and authorization codes. They require different tokens to make them work. This table provides an overview.
+| Flow | Requires | ID token | Access token | Refresh token | Authorization code |
+||-|-|--||--|
+| [Authorization code flow](v2-oauth2-auth-code-flow.md) | | x | x | x | x |
+| [Implicit flow](v2-oauth2-implicit-grant-flow.md) | | x | x | | |
+| [Hybrid OIDC flow](v2-protocols-oidc.md#protocol-diagram-access-token-acquisition)| | x | | | x |
+| [Refresh token redemption](v2-oauth2-auth-code-flow.md#refresh-the-access-token) | Refresh token | x | x | x | |
+| [On-behalf-of flow](v2-oauth2-on-behalf-of-flow.md) | Access token | x | x| x | |
+| [Client credentials](v2-oauth2-client-creds-grant-flow.md) | | | x (App only) | | |
-|Flow | Requires | ID token | Access token | Refresh token | Authorization code |
-|--|-|-|--||--|
-|[Authorization code flow](v2-oauth2-auth-code-flow.md) | | x | x | x | x|
-|[Implicit flow](v2-oauth2-implicit-grant-flow.md) | | x | x | | |
-|[Hybrid OIDC flow](v2-protocols-oidc.md#protocol-diagram-access-token-acquisition)| | x | | | x |
-|[Refresh token redemption](v2-oauth2-auth-code-flow.md#refresh-the-access-token) | Refresh token | x | x | x| |
-|[On-behalf-of flow](v2-oauth2-on-behalf-of-flow.md) | Access token| x| x| x| |
-|[Client credentials](v2-oauth2-client-creds-grant-flow.md) | | | x (App only)| | |
+Tokens issued using the implicit flow have a length limitation because they're passed back to the browser using the URL, where `response_mode` is `query` or `fragment`. Some browsers have a limit on the size of the URL that can be put in the browser bar and fail when it's too long. As a result, these tokens don't have `groups` or `wids` claims.
-Tokens issued via the implicit mode have a length limitation because they're passed back to the browser via the URL, where `response_mode` is `query` or `fragment`. Some browsers have a limit on the size of the URL that can be put in the browser bar and fail when it's too long. As a result, these tokens don't have `groups` or `wids` claims.
+## See also
-## Next steps
+* [OAuth 2.0](active-directory-v2-protocols.md)
+* [OpenID Connect](v2-protocols-oidc.md)
-For more information about authentication and authorization in the Microsoft identity platform, see the following articles:
+## Next steps
* To learn about the basic concepts of authentication and authorization, see [Authentication vs. authorization](authentication-vs-authorization.md).
-* To learn about registering your application for integration, see [Application model](application-model.md).
-* To learn about the sign-in flow of web, desktop, and mobile apps, see [App sign-in flow](app-sign-in-flow.md).
active-directory How To Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md
Now that all the required code snippets have been added, the application can be
npm start
```
-1. Open a new private browser, and enter the application URI into the browser, `https://localhost:3000/`.
+1. Open a new private browser, and enter the application URI into the browser, `http://localhost:3000/`.
1. Select **No account? Create one**, which starts the sign-up flow.
1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
1. After entering a one-time passcode from the customer tenant, enter a new password and more account details to complete the sign-up flow.
Now that all the required code snippets have been added, the application can be
- [Enable self-service password reset](./how-to-enable-password-reset-customers.md)
- [Customize the default branding](how-to-customize-branding-customers.md)
- [Configure sign-in with Google](how-to-google-federation-customers.md)
-- [Sign in users in your own ASP.NET web application by using an Azure AD for customers tenant](how-to-web-app-dotnet-sign-in-prepare-app.md)
+- [Sign in users in your own ASP.NET web application by using an Azure AD for customers tenant](how-to-web-app-dotnet-sign-in-prepare-app.md)
active-directory How To Single Page Application React Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-app.md
Identity related **npm** packages must be installed in the project to enable use
export const msalConfig = {
    auth: {
        clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply.
- authority: 'https://login.microsoftonline.com/Enter_the_Tenant_Id_Here', // Defaults to "https://login.microsoftonline.com/common"
+ authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace the placeholder with your tenant subdomain
        redirectUri: '/', // Points to window.location.origin. You must register this URI on Azure Portal/App Registration.
        postLogoutRedirectUri: '/', // Indicates the page to navigate after logout.
        navigateToLoginRequestUrl: false, // If "true", will navigate back to the original request location before processing the auth code response.
Identity related **npm** packages must be installed in the project to enable use
* An optional silentRequest object can be used to achieve silent SSO * between applications by providing a "login_hint" property. */
- export const silentRequest = {
- scopes: ["openid", "profile"],
- loginHint: "example@domain.net"
- };
+ // export const silentRequest = {
+ // scopes: ["openid", "profile"],
+ // loginHint: "example@domain.net"
+ // };
```
1. Replace the following values with the values from the Azure portal.
   - Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the overview page of the registered application.
- - The *Tenant ID* is the identifier of the tenant where the application is registered. Replace the `_Enter_the_Tenant_Info_Here` with the **Directory (tenant) ID** value that was recorded earlier from the overview page of the registered application.
+ - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen.onmicrosoft.com*, the value you should enter is *caseyjensen*.
## Modify index.js to include the authentication provider
active-directory Secure Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-best-practices.md
Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Ser
* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
-* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+* [Azure resource management fundamentals](secure-resource-management.md)
-* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+* [Resource isolation in a single tenant](secure-single-tenant.md)
-* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+* [Resource isolation with multiple tenants](secure-multiple-tenants.md)
active-directory Secure Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-fundamentals.md
Azure AD also provides information on the actions that are being performed withi
* [Introduction to delegated administration and isolated environments](secure-introduction.md)
-* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+* [Azure resource management fundamentals](secure-resource-management.md)
-* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+* [Resource isolation in a single tenant](secure-single-tenant.md)
-* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+* [Resource isolation with multiple tenants](secure-multiple-tenants.md)
* [Best practices](secure-best-practices.md)
active-directory Secure Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-introduction.md
Incorporating zero-trust principles into your Azure AD design strategy can help
* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
-* [Azure resource management fundamentals](secure-with-azure-ad-resource-management.md)
+* [Azure resource management fundamentals](secure-resource-management.md)
-* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
+* [Resource isolation in a single tenant](secure-single-tenant.md)
-* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+* [Resource isolation with multiple tenants](secure-multiple-tenants.md)
* [Best practices](secure-best-practices.md)
active-directory Secure Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-multiple-tenants.md
+
+ Title: Resource isolation with multiple tenants to secure with Azure Active Directory
+description: Introduction to resource isolation with multiple tenants in Azure Active Directory.
+ Last updated : 7/5/2022
+# Resource isolation with multiple tenants
+
+There are specific scenarios when delegating administration in a single tenant boundary doesn't meet your needs. This section describes requirements that may drive you to create a multi-tenant architecture. Multi-tenant organizations might span two or more Azure AD tenants, which can result in unique cross-tenant collaboration and management requirements. Multi-tenant architectures increase management overhead and complexity and should be used with caution. We recommend using a single tenant if your needs can be met with that architecture. For more detailed information, see [Multi-tenant user management](multi-tenant-user-management-introduction.md).
+
+A separate tenant creates a new boundary, and therefore decoupled management of Azure AD directory roles, directory objects, conditional access policies, Azure resource groups, Azure management groups, and other controls as described in previous sections.
+
+A separate tenant is useful for an organization's IT department to validate tenant-wide changes in Microsoft services such as Intune, Azure AD Connect, or a hybrid authentication configuration, while protecting an organization's users and resources. This includes testing service configurations that might have tenant-wide effects and can't be scoped to a subset of users in the production tenant.
+
+Deploying a non-production environment in a separate tenant might be necessary during development of custom applications that can change data of production user objects with MS Graph or similar APIs (for example, applications that are granted Directory.ReadWrite.All, or similar wide scope).
+
+>[!Note]
+>Azure AD Connect supports synchronization to multiple tenants, which might be useful when deploying a non-production environment in a separate tenant. For more information, see [Azure AD Connect: Supported topologies](../hybrid/plan-connect-topologies.md).
+
+## Outcomes
+
+In addition to the outcomes achieved with a single tenant architecture as described previously, organizations can fully decouple the resource and tenant interactions:
+
+### Resource separation
+
+* **Visibility** - Resources in a separate tenant can't be discovered or enumerated by users and administrators in other tenants. Similarly, usage reports and audit logs are contained within the new tenant boundary. This separation of visibility allows organizations to manage resources needed for confidential projects.
+
+* **Object footprint** - Applications that write to Azure AD and/or other Microsoft Online services through Microsoft Graph or other management interfaces can operate in a separate object space. This enables development teams to perform tests during the software development lifecycle without affecting other tenants.
+
+* **Quotas** - Consumption of tenant-wide [Azure Quotas and Limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) is separated from that of the other tenants.
+
+### Configuration separation
+
+A new tenant provides a separate set of tenant-wide settings that can accommodate resources and trusting applications that have requirements that need different configurations at the tenant level. Additionally, a new tenant provides a new set of Microsoft Online services such as Office 365.
+
+### Administrative separation
+
+A new tenant boundary involves a separate set of Azure AD directory roles, which enables you to configure different sets of administrators.
+
+## Common usage
+
+The following diagram illustrates a common usage for resource isolation in multiple tenants: a pre-production or "sandbox" environment that requires more separation than can be achieved with delegated administration in a single tenant.
+
+ ![Diagram that shows common usage scenario.](media/secure-multiple-tenants/multiple-tenant-common-scenario.png)
+
+Contoso is an organization that augmented their corporate tenant architecture with a pre-production tenant called ContosoSandbox.com. The sandbox tenant is used to support ongoing development of enterprise solutions that write to Azure AD and Microsoft 365 using Microsoft Graph. These solutions are deployed in the corporate tenant.
+
+The sandbox tenant is brought online to prevent applications under development from impacting production systems, either directly or indirectly, for example by consuming tenant resources, affecting quotas, or causing throttling.
+
+Developers require access to the sandbox tenant during the development lifecycle, ideally with self-service access requiring additional permissions that are restricted in the production environment. Examples of these additional permissions might include creating, deleting, and updating user accounts, registering applications, provisioning and deprovisioning Azure resources, and changes to policies or overall configuration of the environment.
+
+In this example, Contoso uses [Azure AD B2B Collaboration](../external-identities/what-is-b2b.md) to provision users from the corporate tenant so they can manage and access resources and applications in the sandbox tenant without managing multiple credentials. This capability is primarily oriented to cross-organization collaboration scenarios. However, enterprises with multiple tenants like Contoso can use this capability to avoid additional credential lifecycle administration and user experience complexities.
+
+Use [External Identities cross-tenant access](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) settings to manage how you collaborate with other Azure AD organizations through B2B collaboration. These settings determine both the level of inbound access users in external Azure AD organizations have to your resources, and the level of outbound access your users have to external organizations. They also let you trust multifactor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations. For details and planning considerations, see [Cross-tenant access in Azure AD External Identities](../external-identities/cross-tenant-access-overview.md).
+
+Another approach is to use the capabilities of Azure AD Connect to sync the same on-premises Active Directory credentials to multiple tenants, keeping the same password but differentiating on the user's UPN domain.
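To make the UPN-differentiation idea concrete, here's a minimal sketch; the helper function and domain names are hypothetical illustrations, not an Azure AD Connect API:

```python
def upn_for_tenant(on_prem_upn: str, tenant_domain: str) -> str:
    """Map one on-premises identity to a per-tenant UPN by swapping the domain suffix."""
    local_part, _, _ = on_prem_upn.partition("@")
    return f"{local_part}@{tenant_domain}"

# The same user, same synced password, differentiated only by the
# UPN domain of each tenant (hypothetical domains).
print(upn_for_tenant("jdoe@contoso.com", "contoso.com"))         # corporate tenant
print(upn_for_tenant("jdoe@contoso.com", "contososandbox.com"))  # sandbox tenant
```

The user signs in with a different UPN per tenant, but keeps a single password to remember.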
+
+## Multi-tenant resource isolation
+
+With a new tenant, you have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions are managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Microsoft Intune. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+
+This will allow users to continue to use their corporate credentials, while achieving the benefits of separation.
+
+Azure AD B2B collaboration in sandbox tenants should be configured to allow only identities from the corporate environment to be onboarded, using Azure AD B2B [allow/deny lists](../external-identities/allow-deny-list.md). For tenants that you do want to allow for B2B collaboration, consider using External Identities cross-tenant access settings for cross-tenant multifactor authentication and device trust.
+
+>[!IMPORTANT]
+>Multi-tenant architectures with external identity access enabled provide only resource isolation; they don't enable identity isolation. Resource isolation using Azure AD B2B collaboration and Azure Lighthouse doesn't mitigate risks related to identities.
+
+If the sandbox environment shares identities with the corporate environment, the following scenarios are applicable to the sandbox tenant:
+
+* A malicious actor that compromises a user, a device, or hybrid infrastructure in the corporate tenant, and is invited into the sandbox tenant, might gain access to the sandbox tenant's apps and resources.
+
+* An operational error (for example, user account deletion or credential revocation) in the corporate tenant might affect the access of an invited user into the sandbox tenant.
+
+You must perform a risk analysis, and potentially consider identity isolation through multiple tenants for business-critical resources that require a highly defensive approach. Azure AD Privileged Identity Management can help mitigate some of the risks by imposing extra security for accessing business-critical tenants and resources.
+
+### Directory objects
+
+The tenant you use to isolate resources may contain the same types of objects, Azure resources, and trusting applications as your primary tenant. You may need to provision the following object types:
+
+**Users and groups**: Identities needed by solution engineering teams, such as:
+
+* Sandbox environment administrators.
+
+* Technical owners of applications.
+
+* Line-of-business application developers.
+
+* Test end-user accounts.
+
+These identities might be provisioned for:
+
+* Employees who come with their corporate account through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md).
+
+* Employees who need local accounts for administration, emergency administrative access, or other technical reasons.
+
+Customers who have or require non-production Active Directory on-premises can also synchronize their on-premises identities to the sandbox tenant if needed by the underlying resources and applications.
+
+**Devices**: The non-production tenant contains a reduced number of devices: only those needed in the solution engineering cycle:
+
+* Administration workstations
+
+* Non-production computers and mobile devices needed for development, testing, and documentation
+
+### Applications
+
+Azure AD integrated applications: Application objects and service principals for:
+
+* Test instances of the applications that are deployed in production (for example, applications that write to Azure AD and Microsoft online services).
+
+* Infrastructure services to manage and maintain the non-production tenant, potentially a subset of the solutions available in the corporate tenant.
+
+Microsoft Online services:
+
+* Typically, the team that owns the Microsoft Online Services in production should be the one owning the non-production instance of those services.
+
+* Administrators of non-production test environments shouldn't be provisioning Microsoft Online Services unless those services are specifically being tested. This avoids inappropriate use of Microsoft services, for example setting up production SharePoint sites in a test environment.
+
+* Similarly, provisioning of Microsoft Online services that can be initiated by end users (also known as ad-hoc subscriptions) should be locked down. For more information, see [What is self-service sign-up for Azure Active Directory?](../enterprise-users/directory-self-service-signup.md).
+
+* Generally, all non-essential license features should be disabled for the tenant using group-based licensing. This should be done by the same team that manages licenses in the production tenant, to avoid misconfiguration by developers who might not know the effect of enabling licensed features.
+
+### Azure resources
+
+Any Azure resources needed by trusting applications may also be deployed, for example databases, virtual machines, containers, and Azure functions. For your sandbox environment, you must weigh the cost savings of using less-expensive SKUs for products and services against the reduced security features they provide.
+
+The RBAC model for access control should still be employed in a non-production environment in case changes are replicated to production after tests have concluded. Failure to do so allows security flaws in the non-production environment to propagate to your production tenant.
+
+## Resource and identity isolation with multiple tenants
+
+### Isolation outcomes
+
+There are limited situations where resource isolation can't meet your requirements. You can isolate both resources and identities in a multi-tenant architecture by disabling all cross-tenant collaboration capabilities and effectively building a separate identity boundary. This approach is a defense against operational errors and compromise of user identities, devices, or hybrid infrastructure in corporate tenants.
+
+### Isolation common usage
+
+A separate identity boundary is typically used for business-critical applications and resources such as customer-facing services. In this scenario, Fabrikam has decided to create a separate tenant for their customer-facing SaaS product to avoid the risk of employee identity compromise affecting their SaaS customers. The following diagram illustrates this architecture:
+
+The FabrikamSaaS tenant contains the environments used for applications that are offered to customers as part of Fabrikam's business model.
+
+### Isolation of directory objects
+
+The directory objects in FabrikamSaas are as follows:
+
+Users and groups: Identities needed by solution IT teams, customer support staff, or other necessary personnel are created within the SaaS tenant. To preserve isolation, only local accounts are used, and Azure AD B2B collaboration isn't enabled.
+
+Azure AD B2C directory objects: If the tenant environments are accessed by customers, it may contain an Azure AD B2C tenant and its associated identity objects. Subscriptions that hold these directories are good candidates for an isolated consumer-facing environment.
+
+Devices: This tenant contains a reduced number of devices; only those that are needed to run customer-facing solutions:
+
+* Secure administration workstations.
+
+* Support personnel workstations (this can include engineers who are "on call" as described above).
+
+### Isolation of applications
+
+**Azure AD integrated applications**: Application objects and service principals for:
+
+* Production applications (for example, multi-tenant application definitions).
+
+* Infrastructure services to manage and maintain the customer-facing environment.
+
+**Azure Resources**: Hosts the IaaS, PaaS and SaaS resources of the customer-facing production instances.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-resource-management.md)
+
+* [Resource isolation in a single tenant](secure-single-tenant.md)
+
+* [Best practices](secure-best-practices.md)
active-directory Secure Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-resource-management.md
+
+ Title: Resource management fundamentals in Azure Active Directory
+description: Introduction to resource management in Azure Active Directory.
+ Last updated : 3/23/2023
+# Azure resource management fundamentals
+
+It's important to understand the structure and terms that are specific to Azure resources. The following image shows an example of the four levels of scope that are provided by Azure:
+
+![Diagram that shows Azure resource management model.](media/secure-resource-management/resource-management-terminology.png)
+
+## Terminology
+
+The following are some of the terms you should be familiar with:
+
+**Resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources.
+
+**Resource group** - A container that holds related resources for an Azure solution such as a collection of virtual machines, associated VNets, and load balancers that require management by specific teams. The [resource group](../../azure-resource-manager/management/overview.md) includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. Resource groups can also be used to help with life-cycle management by deleting all resources that have the same lifespan at one time. This approach also provides security benefit by leaving no fragments that might be exploited.
+
+**Subscription** - From an organizational hierarchy perspective, a subscription is a billing and management container of resources and resource groups. An Azure subscription has a trust relationship with Azure AD. A subscription trusts Azure AD to authenticate users, services, and devices.
+
+>[!Note]
+>A subscription may trust only one Azure AD tenant. However, each tenant may trust multiple subscriptions and subscriptions can be moved between tenants.
+
+**Management group** - [Azure management groups](../../governance/management-groups/overview.md) provide a hierarchical method of applying policies and compliance at different scopes above subscriptions. It can be at the tenant root management group (highest scope) or at lower levels in the hierarchy. You organize subscriptions into containers called "management groups" and apply your governance conditions to the management groups. All subscriptions within a management group automatically inherit the conditions applied to the management group. Note, policy definitions can be applied to a management group or subscription.
+
+**Resource provider** - A service that supplies Azure resources. For example, a common [resource provider](../../azure-resource-manager/management/resource-providers-and-types.md) is Microsoft.Compute, which supplies the virtual machine resource. Microsoft.Storage is another common resource provider.
+
+**Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, tenant, or management group. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../../azure-resource-manager/templates/overview.md). Additionally, the [Bicep language](../../azure-resource-manager/bicep/overview.md) can be used instead of JSON.
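As a rough sketch of what such a template contains, the snippet below assembles a minimal (empty) deployment template as a Python dict. The schema URL matches the documented ARM deployment template schema; the structure is illustrative rather than exhaustive:

```python
import json

# Minimal ARM template skeleton: schema, content version, parameters, and resources.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "resources": [],  # resource definitions (VMs, storage accounts, etc.) go here
}

print(json.dumps(template, indent=2))
```

Because the template is declarative JSON, the same file can be deployed repeatedly and always converges on the same set of resources.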
+
+## Azure Resource Management Model
+
+Each Azure subscription is associated with controls used by [Azure Resource Manager](../../azure-resource-manager/management/overview.md) (ARM). Resource Manager is the deployment and management service for Azure. It has a trust relationship with Azure AD for identity management for organizations, and with Microsoft accounts (MSA) for individuals. Resource Manager provides a management layer that enables you to create, update, and delete resources in your Azure subscription. You use management features like access control, locks, and tags to secure and organize your resources after deployment.
+
+>[!NOTE]
+>Prior to ARM, there was another deployment model named Azure Service Manager (ASM) or "classic". To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md). Managing environments with the ASM model is out of scope of this content.
+
+Azure Resource Manager is the front-end service that hosts the REST APIs used by PowerShell, the Azure portal, or other clients to manage resources. When a client makes a request to manage a specific resource, Resource Manager proxies the request to the resource provider to complete the request. For example, if a client makes a request to manage a virtual machine resource, Resource Manager proxies the request to the Microsoft.Compute resource provider. Resource Manager requires the client to specify an identifier for both the subscription and the resource group to manage the virtual machine resource.
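The subscription, resource group, and provider segments combine into a single fully qualified resource ID. A sketch of that ID format (the GUID, group, and VM name below are hypothetical):

```python
def resource_id(subscription_id: str, resource_group: str,
                provider: str, resource_type: str, name: str) -> str:
    """Build the fully qualified ARM resource ID a client supplies to Resource Manager."""
    return (f"/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/{provider}/{resource_type}/{name}")

# Resource Manager routes a request for this ID to the Microsoft.Compute provider.
print(resource_id("00000000-0000-0000-0000-000000000000",
                  "rg-prod", "Microsoft.Compute", "virtualMachines", "vm01"))
```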
+
+Before any resource management request can be executed by Resource Manager, a set of controls is checked.
+
+* **Valid user check** - The user requesting to manage the resource must have an account in the Azure AD tenant associated with the subscription of the managed resource.
+
+* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](../../role-based-access-control/overview.md). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
+
+* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
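The order of those checks can be sketched as follows; the data structures are invented for illustration and don't reflect Resource Manager's internals:

```python
def authorize(user, action, resource, tenant_users, rbac_grants, denied_by_policy):
    """Sketch of the control checks applied before a management request executes."""
    if user not in tenant_users:                       # valid user check
        return "denied: no account in trusted tenant"
    if (user, action, resource) not in rbac_grants:    # user permission (RBAC) check
        return "denied: no role assignment grants this action"
    if (action, resource) in denied_by_policy:         # Azure Policy check
        return "denied: blocked by policy"
    return "allowed"

tenant_users = {"alice"}
rbac_grants = {("alice", "start", "vm01")}
print(authorize("alice", "start", "vm01", tenant_users, rbac_grants, set()))
print(authorize("bob", "start", "vm01", tenant_users, rbac_grants, set()))
```

Note that a request must pass all three gates: a valid account and an RBAC grant are still not enough if a policy explicitly denies the operation.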
+
+The following diagram summarizes the resource model we just described.
+
+![Diagram that shows Azure resource management with ARM and Azure AD.](media/secure-resource-management/resource-model.png)
+
+**Azure Lighthouse** - [Azure Lighthouse](../../lighthouse/overview.md) enables resource management across tenants. Organizations can delegate roles at the subscription or resource group level to identities in another tenant.
+
+Subscriptions that enable [delegated resource management](../../lighthouse/concepts/azure-delegated-resource-management.md) with Azure Lighthouse have attributes that indicate the tenant IDs that can manage subscriptions or resource groups, and a mapping between built-in RBAC roles in the resource tenant and identities in the service provider tenant. At runtime, Azure Resource Manager consumes these attributes to authorize tokens coming from the service provider tenant.
+
+It's worth noting that Azure Lighthouse itself is modeled as an Azure resource provider, which means that aspects of the delegation across a tenant can be targeted through Azure Policies.
+
+**Microsoft 365 Lighthouse** - [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide&preserve-view=true) is an admin portal that helps Managed Service Providers (MSPs) secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+
+## Azure resource management with Azure AD
+
+Now that you have a better understanding of the resource management model in Azure, let's briefly examine some of the capabilities of Azure AD that can provide identity and access management for Azure resources.
+
+### Billing
+
+Billing is important to resource management because some billing roles interact with or can manage resources. Billing works differently depending on the type of agreement that you have with Microsoft.
+
+#### Azure Enterprise Agreements
+
+Azure Enterprise Agreement (Azure EA) customers are onboarded to the Azure EA Portal upon execution of their commercial contract with Microsoft. Upon onboarding, an identity is associated to a "root" Enterprise Administrator billing role. The portal provides a hierarchy of management functions:
+
+* Departments help you segment costs into logical groupings and enable you to set a budget or quota at the department level.
+
+* Accounts are used to further segment departments. You can use accounts to manage subscriptions and to access reports.
+
+The EA portal can authorize Microsoft Accounts (MSA) or Azure AD accounts (identified in the portal as "Work or School Accounts"). Identities with the role of "Account Owner" in the EA portal can create Azure subscriptions.
+
+#### Enterprise billing and Azure AD tenants
+
+When an Account Owner creates an Azure subscription within an enterprise agreement, the identity and access management of the subscription is configured as follows:
+
+* The Azure subscription is associated with the same Azure AD tenant of the Account Owner.
+
+* The account owner who created the subscription will be assigned the Service Administrator and Account Administrator roles. (The Azure EA Portal assigns Azure Service Manager (ASM) or "classic" roles to manage subscriptions. To learn more, see [Azure Resource Manager vs. classic deployment](../../azure-resource-manager/management/deployment-models.md).)
+
+An enterprise agreement can be configured to support multiple tenants by setting the authentication type of "Work or school account cross-tenant" in the Azure EA Portal. Given the above, organizations can set multiple accounts for each tenant, and multiple subscriptions for each account, as shown in the diagram below.
+
+![Diagram that shows Enterprise Agreement billing structure.](media/secure-resource-management/billing-tenant-relationship.png)
+
+It's important to note that the default configuration described above grants the Azure EA Account Owner privileges to manage the resources in any subscriptions they created. For subscriptions holding production workloads, consider decoupling billing and resource management by changing the service administrator of the subscription right after creation.
+
+ To further decouple and prevent the account owner from regaining service administrator access to the subscription, the subscription's tenant can be [changed](../fundamentals/active-directory-how-subscriptions-associated-directory.md) after creation. If the account owner doesn't have a user object in the Azure AD tenant the subscription is moved to, they can't regain the service owner role.
+
+To learn more, visit [Azure roles, Azure AD roles, and classic subscription administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
+
+### Microsoft Customer Agreement
+
+Customers enrolled with a [Microsoft Customer Agreement](../../cost-management-billing/understand/mca-overview.md) (MCA) have a different billing management system with its own roles.
+
+A [billing account](../../cost-management-billing/manage/understand-mca-roles.md) for the Microsoft Customer Agreement contains one or more [billing profiles](../../cost-management-billing/manage/understand-mca-roles.md) that allow managing invoices and payment methods. Each billing profile contains one or more [invoice sections](../../cost-management-billing/manage/understand-mca-roles.md) to organize costs on the billing profile's invoice.
+
+In a Microsoft Customer Agreement, billing roles come from a single Azure AD tenant. To provision subscriptions for multiple tenants, the subscriptions must be initially created in the same Azure AD Tenant as the MCA, and then changed. In the diagram below, the subscriptions for the Corporate IT pre-production environment were moved to the ContosoSandbox tenant after creation.
+
+ ![Diagram that shows MCA billing structure.](media/secure-resource-management/microsoft-customer-agreement.png)
+
+## RBAC and role assignments in Azure
+
+In the Azure AD Fundamentals section, you learned Azure RBAC is the authorization system that provides fine-grained access management to Azure resources, and includes many [built-in roles](../../role-based-access-control/built-in-roles.md). You can create [custom roles](../../role-based-access-control/custom-roles.md), and assign roles at different scopes. Permissions are enforced by assigning RBAC roles to objects requesting access to Azure resources.
+
+Azure AD roles operate on concepts like [Azure role-based access control](../../role-based-access-control/overview.md). The [difference between these two role-based access control systems](../../role-based-access-control/rbac-and-directory-admin-roles.md) is that Azure RBAC uses Azure Resource Management to control access to Azure resources such as virtual machines or storage, and Azure AD roles control access to Azure AD, applications, and Microsoft services such as Office 365.
+
+Both Azure AD roles and Azure RBAC roles integrate with Azure AD Privileged Identity Management to enable just-in-time activation policies such as approval workflow and MFA.
+
+## ABAC and role assignments in Azure
+
+[Attribute-based access control (ABAC)](../../role-based-access-control/conditions-overview.md) is an authorization system that defines access based on attributes associated with security principals, resources, and environment. With ABAC, you can grant a security principal access to a resource based on attributes. Azure ABAC refers to the implementation of ABAC for Azure.
+
+Azure ABAC builds on Azure RBAC by adding role assignment conditions based on attributes in the context of specific actions. A role assignment condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control. A condition filters down permissions granted as a part of the role definition and role assignment. For example, you can add a condition that requires an object to have a specific tag to read the object. You can't explicitly deny access to specific resources using conditions.
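A condition can only narrow what the role assignment already grants. The sketch below models that filtering behavior with an invented tag check; it isn't the actual Azure ABAC condition syntax:

```python
def rbac_allows(role_actions, action):
    """The base RBAC check: is the action in the assigned role's permission set?"""
    return action in role_actions

def abac_condition(resource_tags, required_tag):
    # The condition filters the RBAC grant further; it can only narrow it,
    # never deny access that RBAC hasn't already granted.
    return required_tag in resource_tags

def allowed(role_actions, action, resource_tags, required_tag):
    return rbac_allows(role_actions, action) and abac_condition(resource_tags, required_tag)

# Hypothetical: read is granted by the role, but only on objects tagged "project=cascade".
print(allowed({"read"}, "read", {"project=cascade"}, "project=cascade"))  # True
print(allowed({"read"}, "read", set(), "project=cascade"))                # False
```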
+
+## Conditional Access
+
+Azure AD [Conditional Access](../../role-based-access-control/conditional-access-azure-management.md) (CA) can be used to manage access to Azure management endpoints. CA policies can be applied to the Microsoft Azure Management cloud app to protect the Azure resource management endpoints such as:
+
+* Azure Resource Manager Provider (services)
+
+* Azure Resource Manager APIs
+
+* Azure PowerShell
+
+* Azure CLI
+
+* Azure portal
+
+![Diagram that shows the Conditional Access policy.](media/secure-resource-management/conditional-access.jpeg)
+
+For example, an administrator may configure a Conditional Access policy, which allows a user to sign into the Azure portal only from approved locations, and also requires either multifactor authentication (MFA) or a hybrid Azure AD domain-joined device.
+
+## Azure Managed Identities
+
+A common challenge when building cloud applications is how to manage the credentials in your code for authenticating to cloud services. Keeping the credentials secure is an important task. Ideally, the credentials never appear on developer workstations and aren't checked into source control. [Managed identities for Azure resources](../managed-identities-azure-resources/overview.md) provide Azure services with an automatically managed identity in Azure AD. You can use the identity to authenticate to any service that supports Azure AD authentication without any credentials in your code.
+
+There are two types of managed identities:
+
+* A system-assigned managed identity is enabled directly on an Azure resource. When the resource is enabled, Azure creates an identity for the resource in the associated subscription's trusted Azure AD tenant. After the identity is created, the credentials are provisioned onto the resource. The lifecycle of a system-assigned identity is directly tied to the Azure resource. If the resource is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.
+
+* A user-assigned managed identity is created as a standalone Azure resource. Azure creates an identity in the Azure AD tenant that's trusted by the subscription with which the resource is associated. After the identity is created, the identity can be assigned to one or more Azure resources. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure resources to which it's assigned.
+
+Internally, managed identities are service principals of a special type that can only be used by specific Azure resources. When the managed identity is deleted, the corresponding service principal is automatically removed. Note that authorization of Graph API permissions can only be done via PowerShell, so not all features of managed identities are accessible through the portal UI.
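On an Azure VM, code typically obtains a managed identity token from the Instance Metadata Service (IMDS). The sketch below only constructs the request; it isn't sent, because the IMDS endpoint is reachable only from inside an Azure VM:

```python
import urllib.parse
import urllib.request

def imds_token_request(resource: str) -> urllib.request.Request:
    """Build (but don't send) the IMDS request a VM uses to get a managed identity token."""
    query = urllib.parse.urlencode({
        "api-version": "2018-02-01",
        "resource": resource,  # the audience the token is for, e.g. ARM
    })
    url = f"http://169.254.169.254/metadata/identity/oauth2/token?{query}"
    # IMDS requires the Metadata header to guard against SSRF-style requests.
    return urllib.request.Request(url, headers={"Metadata": "true"})

req = imds_token_request("https://management.azure.com/")
print(req.full_url)
```

The application never handles a stored secret: the token comes from the platform, scoped to the resource it names.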
+
+## Azure Active Directory Domain Services
+
+Azure Active Directory Domain Services (Azure AD DS) provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. Supported servers are moved from an on-premises AD DS forest and joined to an Azure AD DS managed domain and continue to use legacy protocols for authentication (for example, Kerberos authentication).
+
+## Azure AD B2C directories and Azure
+
+An Azure AD B2C tenant is linked to an Azure subscription for billing and communication purposes. Azure AD B2C tenants have a self-contained role structure in the directory, which is independent from the Azure RBAC privileged roles of the Azure subscription.
+
+When the Azure AD B2C tenant is initially provisioned, the user creating the B2C tenant must have contributor or owner permissions in the subscription. Upon creation, that user becomes the first Azure AD B2C tenant global administrator and they can later create other accounts and assign them to directory roles.
+
+It's important to note that the owners and contributors of the linked Azure AD subscription can remove the link between the subscription and the directory, which will affect the ongoing billing of the Azure AD B2C usage.
+
+## Identity considerations for IaaS solutions in Azure
+
+This scenario covers identity isolation requirements that organizations have for Infrastructure-as-a-Service (IaaS) workloads.
+
+There are three key options regarding isolation management of IaaS workloads:
+
+* Virtual machines joined to stand-alone Active Directory Domain Services (AD DS)
+
+* Azure Active Directory Domain Services (Azure AD DS) joined virtual machines
+
+* Sign-in to virtual machines in Azure using Azure AD authentication
+
+A key concept to address with the first two options is that there are two identity realms that are involved in these scenarios.
+
+* When you sign in to an Azure Windows Server VM via remote desktop protocol (RDP), you're generally logging on to the server using your domain credentials, which performs a Kerberos authentication against an on-premises AD DS domain controller or Azure AD DS. Alternatively, if the server isn't domain-joined then a local account can be used to sign in to the virtual machines.
+
+* When you sign in to the Azure portal to create or manage a VM, you're authenticating against Azure AD (potentially using the same credentials if you've synchronized the correct accounts), and this could result in an authentication against your domain controllers if you're using Active Directory Federation Services (AD FS) or Pass-through Authentication.
+
+### Virtual machines joined to standalone Active Directory Domain Services
+
+AD DS is the Windows Server based directory service that organizations have largely adopted for on-premises identity services. AD DS can be deployed when a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest.
+
+![Diagram that shows AD DS virtual machine management](media/secure-resource-management/vm-to-standalone-domain-controller.jpeg)
+
+The following considerations need to be made in this scenario:
+
+**AD DS domain controllers** - A minimum of two AD DS domain controllers must be deployed to ensure that authentication services are highly available and performant. For more information, see [AD DS Design and Planning](/windows-server/identity/ad-ds/plan/ad-ds-design-and-planning).
+
+**AD DS Design and Planning** - A new AD DS forest must be created with the following services configured correctly:
+
+* **AD DS Domain Name Services (DNS)** - AD DS DNS must be configured for the relevant zones within AD DS to ensure that name resolution operates correctly for servers and applications.
+
+* **AD DS Sites and Services** - These services must be configured to ensure that applications have low latency and performant access to domain controllers. The relevant virtual networks, subnets, and data center locations that servers are located in should be configured in sites and services.
+
+* **AD DS FSMOs** - The Flexible Single Master Operation (FSMO) roles that are required should be reviewed and assigned to the appropriate AD DS domain controllers.
+
+* **AD DS Domain Join** - All servers (excluding "jumpboxes") that require AD DS for authentication, configuration and management need to be joined to the isolated forest.
+
+* **AD DS Group Policy (GPO)** - AD DS GPOs must be configured to ensure that the configuration meets the security requirements, and that the configuration is standardized across the forest and domain-joined machines.
+
+* **AD DS Organizational Units (OU)** - AD DS OUs must be defined to ensure grouping of AD DS resources into logical management and configuration silos for purposes of administration and application of configuration.
+
+* **Role-based access control** - RBAC must be defined for administration and access to resources joined to this forest. This includes:
+
+ * **AD DS Groups** - Groups must be created to apply appropriate permissions for users to AD DS resources.
+
+ * **Administration accounts** - As mentioned at the start of this section there are two administration accounts required to manage this solution.
+
+ * An AD DS administration account with the least privileged access required to perform the administration required in AD DS and domain-joined servers.
+
+ * An Azure AD administration account for Azure portal access to connect, manage, and configure virtual machines, VNets, network security groups and other required Azure resources.
+
+ * **AD DS user accounts** - Relevant user accounts need to be provisioned and added to correct groups to allow user access to applications hosted by this solution.
+
+**Virtual networks (VNets)** - Configuration guidance
+
+* **AD DS domain controller IP addresses** - The domain controllers shouldn't be configured with static IP addresses within the operating system. The IP addresses should be reserved on the Azure VNet to ensure that they always stay the same, and the domain controllers should be configured to use DHCP.
+
+* **VNet DNS Server** - DNS servers must be configured on VNets that are part of this isolated solution to point to the domain controllers. This is required to ensure that applications and servers can resolve the required AD DS services or other services joined to the AD DS forest.
+
+* **Network security groups (NSGs)** - The domain controllers should be located on their own VNet or subnet with NSGs defined to only allow access to domain controllers from required servers (for example, domain-joined machines or jumpboxes). Jumpboxes should be added to an application security group (ASG) to simplify NSG creation and administration.
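To make the NSG guidance above concrete, the sketch below models how NSG evaluation behaves: rules are checked in priority order (lower number first), the first match decides, and an implicit deny covers everything else. The rule set, source labels, and single-port matching are simplifications invented for this illustration; they aren't Azure APIs or real rule syntax.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    priority: int   # lower number = evaluated first
    source: str     # simplified label for an ASG, service tag, or CIDR
    port: int       # 0 acts as a wildcard in this toy model
    allow: bool

def evaluate(rules, source, port):
    """Return the verdict of the first matching rule, else implicit deny."""
    for r in sorted(rules, key=lambda r: r.priority):
        if r.source in (source, "*") and r.port in (port, 0):
            return r.allow
    return False  # mirrors the implicit DenyAllInbound at the bottom

# Hypothetical inbound rules for a domain controller subnet:
dc_subnet_rules = [
    Rule(100, "jumpbox-asg", 3389, True),    # RDP only from jumpboxes
    Rule(200, "domain-joined", 445, True),   # SMB from domain-joined machines
]

print(evaluate(dc_subnet_rules, "jumpbox-asg", 3389))  # True
print(evaluate(dc_subnet_rules, "internet", 3389))     # False: implicit deny
```

Grouping jumpboxes into an ASG, as the text suggests, keeps the rule table short: one rule covers every jumpbox instead of one rule per IP address.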
+
+**Challenges**: The list below highlights key challenges with using this option for identity isolation:
+
+* An additional AD DS Forest to administer, manage and monitor resulting in more work for the IT team to perform.
+
+* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO) or System Center Configuration Manager (SCCM) to manage these servers.
+
+* Additional credentials for users to remember and use to access resources.
+
+>[!IMPORTANT]
+>For this isolated model, it is assumed that there is no connectivity to or from the domain controllers from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which the AD DS domain controllers can be managed and administered.
+
+### Azure Active Directory Domain Services joined virtual machines
+
+When a requirement exists to deploy IaaS workloads to Azure that require identity isolation from AD DS administrators and users in another forest, then an Azure AD Domain Services (Azure AD DS) managed domain can be deployed. Azure AD DS is a service that provides a managed domain to facilitate authentication for Azure workloads using legacy protocols. This provides an isolated domain without the technical complexities of building and managing your own AD DS. The following considerations need to be made.
+
+![Diagram that shows Azure AD DS virtual machine management.](media/secure-resource-management/vm-to-domain-services.png)
+
+**Azure AD DS managed domain** - Only one Azure AD DS managed domain can be deployed per Azure AD tenant and this is bound to a single VNet. It's recommended that this VNet forms the "hub" for Azure AD DS authentication. From this hub, "spokes" can be created and linked to allow legacy authentication for servers and applications. The spokes are additional VNets on which Azure AD DS joined servers are located and are linked to the hub using Azure network gateways or VNet peering.
+
+**Managed domain location** - A location must be set when deploying an Azure AD DS managed domain. The location is a physical region (data center) where the managed domain is deployed. It's recommended you:
+
+* Consider a location that is geographically close to the servers and applications that require Azure AD DS services.
+
+* Consider regions that provide Availability Zones capabilities for high availability requirements. For more information, see [Regions and Availability Zones in Azure](../../reliability/availability-zones-service-support.md).
+
+**Object provisioning** - Azure AD DS synchronizes identities from the Azure AD tenant that is associated with the subscription that Azure AD DS is deployed into. It's also worth noting that if the associated Azure AD tenant has synchronization set up with Azure AD Connect (user forest scenario), then the life cycle of these identities can also be reflected in Azure AD DS. This service has two modes that can be used for provisioning user and group objects from Azure AD.
+
+* **All**: All users and groups are synchronized from Azure AD into Azure AD DS.
+
+* **Scoped**: Only users who are in scope of one or more selected groups are synchronized from Azure AD into Azure AD DS.
+
+When you first deploy Azure AD DS, an automatic one-way synchronization is configured to replicate the objects from Azure AD. This one-way synchronization continues to run in the background to keep the Azure AD DS managed domain up to date with any changes from Azure AD. No synchronization occurs from Azure AD DS back to Azure AD. For more information, see [How objects and credentials are synchronized in an Azure AD Domain Services managed domain](../../active-directory-domain-services/synchronization.md).
+
+It's worth noting that if you need to change the type of synchronization from All to Scoped (or vice versa), then the Azure AD DS managed domain needs to be deleted, recreated, and reconfigured. As a good practice, organizations should also consider scoped provisioning to limit the synchronized identities to only those that need access to Azure AD DS resources.
+
+**Group Policy Objects (GPO)** - To configure GPO in an Azure AD DS managed domain you must use Group Policy Management tools on a server that has been domain joined to the Azure AD DS managed domain. For more information, see [Administer Group Policy in an Azure AD Domain Services managed domain](../../active-directory-domain-services/manage-group-policy.md).
+
+**Secure LDAP** - Azure AD DS provides a secure LDAP service that can be used by applications that require it. This setting is disabled by default. To enable secure LDAP, a certificate needs to be uploaded, and the NSG that secures the VNet that Azure AD DS is deployed onto must allow port 636 connectivity to the Azure AD DS managed domain. For more information, see [Configure secure LDAP for an Azure Active Directory Domain Services managed domain](../../active-directory-domain-services/tutorial-configure-ldaps.md).
+
+**Administration** - To perform administration duties on Azure AD DS (for example, domain join machines or edit GPO), the account used for this task needs to be part of the Azure AD DC Administrators group. Accounts that are members of this group can't directly sign in to domain controllers to perform management tasks. Instead, you create a management VM that is joined to the Azure AD DS managed domain, and then install your regular AD DS management tools. For more information, see [Management concepts for user accounts, passwords, and administration in Azure Active Directory Domain Services](../../active-directory-domain-services/administration-concepts.md).
+
+**Password hashes** - For authentication with Azure AD DS to work, password hashes for all users need to be in a format that is suitable for NT LAN Manager (NTLM) and Kerberos authentication. To ensure authentication with Azure AD DS works as expected, the following prerequisites need to be performed.
+
+* **Users synchronized with Azure AD Connect (from AD DS)** - The legacy password hashes need to be synchronized from on-premises AD DS to Azure AD.
+
+* **Users created in Azure AD** - Need to reset their password for the correct hashes to be generated for usage with Azure AD DS. For more information, see [Enable synchronization of password hashes](../../active-directory-domain-services/tutorial-configure-password-hash-sync.md).
+
+**Network** - Azure AD DS is deployed on to an Azure VNet so considerations need to be made to ensure that servers and applications are secured and can access the managed domain correctly. For more information, see [Virtual network design considerations and configuration options for Azure AD Domain Services](../../active-directory-domain-services/network-considerations.md).
+
+* **Azure AD DS must be deployed in its own subnet** - Don't use an existing subnet or a gateway subnet.
+
+* **Network security group (NSG)** - An NSG is created during the deployment of an Azure AD DS managed domain. It contains the required rules for correct service communication. Don't create or use an existing network security group with your own custom rules.
+
+* **Azure AD DS requires 3-5 IP addresses** - Make sure that your subnet IP address range can provide this number of addresses. Restricting the available IP addresses can prevent Azure AD DS from maintaining two domain controllers.
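When sizing that subnet, keep in mind that Azure reserves five IP addresses in every subnet (the network address, three addresses for Azure services such as the default gateway and DNS, and the broadcast address). A quick sanity check with the standard library, where the CIDR ranges are hypothetical examples:

```python
import ipaddress

AZURE_RESERVED_PER_SUBNET = 5  # addresses Azure keeps in every subnet

def usable_addresses(cidr: str) -> int:
    """Number of addresses Azure leaves assignable in a subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_PER_SUBNET

def fits_aad_ds(cidr: str, required: int = 5) -> bool:
    """True if the subnet can supply the 3-5 addresses Azure AD DS needs."""
    return usable_addresses(cidr) >= required

print(usable_addresses("10.0.1.0/28"))  # 11
print(fits_aad_ds("10.0.1.0/29"))       # False: only 3 assignable addresses
```

In practice this means a /28 or larger subnet comfortably meets the requirement, while a /29 is too tight.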
+
+* **VNet DNS Server** - As previously discussed about the "hub and spoke" model, it's important to have DNS configured correctly on the VNets to ensure that servers joined to the Azure AD DS managed domain have the correct DNS settings to resolve the Azure AD DS managed domain. Each VNet has a DNS server entry that is passed to servers as they obtain an IP address and these DNS entries need to be the IP addresses of the Azure AD DS managed domain. For more information, see [Update DNS settings for the Azure virtual network](../../active-directory-domain-services/tutorial-create-instance.md).
+
+**Challenges** - The following list highlights key challenges with using this option for identity isolation.
+
+* Some Azure AD DS configuration can only be administered from an Azure AD DS joined server.
+
+* Only one Azure AD DS managed domain can be deployed per Azure AD tenant. As we describe in this section the hub and spoke model is recommended to provide Azure AD DS authentication to services on other VNets.
+
+* Further infrastructure may be required for management of patching and software deployments. Organizations should consider deploying Azure Update Management, Group Policy (GPO), or System Center Configuration Manager (SCCM) to manage these servers.
+
+For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the Azure AD DS managed domain from the customer's corporate network and that there are no trusts configured with other forests. A jumpbox or management server should be created to allow a point from which the Azure AD DS can be managed and administered.
+
+### Sign into virtual machines in Azure using Azure Active Directory authentication
+
+When a requirement exists to deploy IaaS workloads to Azure that require identity isolation, then the final option is to use Azure AD for logon to servers in this scenario. This provides the ability to make Azure AD the identity realm for authentication purposes and identity isolation can be achieved by provisioning the servers into the relevant subscription, which is linked to the required Azure AD tenant. The following considerations need to be made.
+
+![Diagram that shows Azure AD authentication to Azure VMs.](media/secure-resource-management/sign-into-vm.png)
+
+**Supported operating systems**: Signing into virtual machines in Azure using Azure AD authentication is currently supported in Windows and Linux. For more specifics on supported operating systems, refer to the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md).
+
+**Credentials**: One of the key benefits of signing into virtual machines in Azure using Azure AD authentication is the ability to use the same federated or managed Azure AD credentials that you normally use for access to Azure AD services for sign-in to the virtual machine.
+
+>[!NOTE]
+>The Azure AD tenant that is used for sign-in in this scenario is the Azure AD tenant that is associated with the subscription that the virtual machine has been provisioned into. This Azure AD tenant can be one that has identities synchronized from on-premises AD DS. Organizations should make an informed choice that aligns with their isolation principles when choosing which subscription and Azure AD tenant they wish to use for sign-in to these servers.
+
+**Network Requirements**: These virtual machines need to access Azure AD for authentication, so you must ensure that the virtual machines' network configuration permits outbound access to Azure AD endpoints on 443. See the documentation for [Windows](../devices/howto-vm-sign-in-azure-ad-windows.md) and [Linux](../devices/howto-vm-sign-in-azure-ad-linux.md) for more information.
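One way to verify that outbound 443 is open from a VM is a plain TCP connect. The helper below is a generic sketch, not an official tool; `login.microsoftonline.com` is the Azure AD sign-in host, and any other endpoint names you test against would come from the linked documentation.

```python
import socket

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, refused, or timed out
        return False

# From the VM, check the Azure AD sign-in endpoint on 443:
# print(can_reach("login.microsoftonline.com"))
```

A TCP connect only proves the NSG and route allow the traffic; it doesn't validate TLS inspection or proxy behavior, which can also break Azure AD sign-in.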
+
+**Role-based Access Control (RBAC)**: Two RBAC roles are available to provide the appropriate level of access to these virtual machines. These RBAC roles can be configured via the Azure portal or via the Azure Cloud Shell Experience. For more information, see [Configure role assignments for the VM](../devices/howto-vm-sign-in-azure-ad-windows.md).
+
+* **Virtual machine administrator logon**: Users with this role assigned to them can log into an Azure virtual machine with administrator privileges.
+
+* **Virtual machine user logon**: Users with this role assigned to them can log into an Azure virtual machine with regular user privileges.
+
+**Conditional Access**: A key benefit of using Azure AD for signing in to Azure virtual machines is the ability to enforce Conditional Access as part of the sign-in process. This gives organizations the ability to require conditions to be met before allowing access to the virtual machine, and to use multifactor authentication to provide strong authentication. For more information, see [Using Conditional Access](../devices/howto-vm-sign-in-azure-ad-windows.md).
+
+>[!NOTE]
+>Remote connection to virtual machines joined to Azure AD is only allowed from Windows 10, Windows 11, and Cloud PCs that are Azure AD joined or hybrid Azure AD joined to the same directory as the virtual machine.
+
+**Challenges**: The list below highlights key challenges with using this option for identity isolation.
+
+* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](../../automation/update-management/overview.md) to manage patching and updates of these servers.
+
+* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms such as Windows Integrated Authentication across these servers or services. If this is a requirement for the organization, then it's recommended that you explore the standalone Active Directory Domain Services or Azure Active Directory Domain Services scenarios described in this section.
+
+For this isolated model, it's assumed that there's no connectivity to the VNet that hosts the virtual machines from the customer's corporate network. A jumpbox or management server should be created to allow a point from which these servers can be managed and administered.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Resource isolation in a single tenant](secure-single-tenant.md)
+
+* [Resource isolation with multiple tenants](secure-multiple-tenants.md)
+
+* [Best practices](secure-best-practices.md)
active-directory Secure Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-single-tenant.md
+
+ Title: Resource isolation in a single tenant to secure with Azure Active Directory
+description: Introduction to resource isolation in a single tenant in Azure Active Directory.
+++++++ Last updated : 7/5/2022++++++
+# Resource isolation in a single tenant
+
+Many separation scenarios can be achieved within a single tenant. If possible, we recommend that you delegate administration to separate environments within a single tenant to provide the best productivity and collaboration experience for your organization.
+
+## Outcomes
+
+**Resource separation** - With Azure AD directory roles, security groups, conditional access policies, Azure resource groups, Azure management groups, administrative units (AUs), and other controls, you can restrict resource access to specific users, groups, and service principals. Resources can be managed by separate administrators, and have separate users, permissions, and access requirements.
+
+If a set of resources requires unique tenant-wide settings, or there's minimal risk tolerance for unauthorized access by tenant members, or critical impact could be caused by configuration changes, you must achieve isolation in multiple tenants.
+
+**Configuration separation** - In some cases, resources such as applications have dependencies on tenant-wide configurations like authentication methods or [named locations](../conditional-access/location-condition.md#named-locations). You should consider these dependencies when isolating resources. Global administrators can configure the resource settings and tenant-wide settings that affect resources.
+
+If a set of resources requires unique tenant-wide settings, or the tenant's settings must be administered by a different entity, you must achieve isolation with multiple tenants.
+
+**Administrative separation** - With Azure AD delegated administration, you can segregate the administration of resources such as applications and APIs, users and groups, resource groups, and conditional access policies.
+
+Global administrators can discover and obtain full access to any trusting resources. You can set up auditing and alerts to know when an authenticated administrator changes a resource.
+
+You can also use administrative units (AU) in Azure AD to provide some level of administrative separation. Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support.
+
+![Diagram that shows administrative units.](media/secure-single-tenant/administrative-units.png)
+
+Administrative units can be used to separate [user, group, and device objects](../roles/administrative-units.md). Assignments of those units can be managed by [dynamic membership rules](../roles/admin-units-members-dynamic.md).
+
+By using Privileged Identity Management (PIM), you can define who in your organization is the best person to approve requests for highly privileged roles. For example, you can require approval for admins who need Global Administrator access to make tenant-wide changes.
+
+>[!NOTE]
+>Using PIM requires an Azure AD Premium P2 license per user.
+
+If you must ensure that global administrators are unable to manage a specific resource, you must isolate that resource in a separate tenant with separate global administrators. This can be especially important for backups; see the [multi-user authorization guidance](../../backup/multi-user-authorization.md) for examples.
+
+## Common usage
+
+One of the most common uses for multiple environments in a single tenant is to segregate production from nonproduction resources. Within a single tenant, development teams and application owners can create and manage a separate environment with test apps, test users and groups, and test policies for those objects; similarly, they can create nonproduction instances of Azure resources and trusted apps.
+
+The following diagram illustrates the nonproduction environments and the production environment.
+
+![Diagram that shows Azure AD tenant boundary.](media/secure-single-tenant/tenant-boundary.png)
+
+In this diagram, there are nonproduction Azure resources and nonproduction instances of Azure AD integrated applications with equivalent nonproduction directory objects. In this example, the nonproduction resources in the directory are used for testing purposes.
+
+>[!NOTE]
+>You cannot have more than one Microsoft 365 environment in a single Azure AD tenant. However, you can have multiple Dynamics 365 environments in a single Azure AD tenant.
+
+Another scenario for isolation within a single tenant could be separation between locations or subsidiaries, or implementation of tiered administration (according to the "[Enterprise Access Model](/security/compass/privileged-access-access-model)").
+
+Azure RBAC role assignments allow scoped administration of Azure resources. Similarly, Azure AD allows granular management of Azure AD trusting applications through multiple capabilities such as conditional access, user and group filtering, administrative unit assignments and application assignments.
+
+If you must ensure full isolation (including staging of organization-level configuration) of Microsoft 365 services, you need to choose [isolation with multiple tenants](../../backup/multi-user-authorization.md).
+
+## Scoped management in a single tenant
+
+### Scoped management for Azure resources
+
+Azure RBAC allows you to design an administration model with granular scopes and surface area. Consider the management hierarchy in the following example:
+
+>[!NOTE]
+>There are multiple ways to define the management hierarchy based on an organization's individual requirements, constraints, and goals. For more information, consult the Cloud Adoption Framework guidance on how to [Organize Azure Resources](/azure/cloud-adoption-framework/ready/azure-setup-guide/organize-resources).
+
+![Diagram that shows resource isolation in a single tenant.](media/secure-single-tenant/resource-hierarchy.png)
+
+* **Management group** - You can assign roles to specific management groups so that they don't impact any other management groups. In the scenario above, the HR team can define an Azure Policy to audit the regions where resources are deployed across all HR subscriptions.
+
+* **Subscription** - You can assign roles to a specific subscription so that they don't impact any other subscriptions. In the example above, the HR team can assign the Reader role for the Benefits subscription, without granting read access to any other HR subscription, or to a subscription from any other team.
+
+* **Resource group** - You can assign roles to specific resource groups so that they don't impact any other resource groups. In the example above, the Benefits engineering team can assign the Contributor role to the test lead so they can manage the test DB and the test web app, or to add more resources.
+
+* **Individual resources** - You can assign roles to specific resources so that they don't impact any other resources. In the example above, the Benefits engineering team can assign a data analyst the Cosmos DB Account Reader role just for the test instance of the Azure Cosmos DB database, without interfering with the test web app or any production resource.
+
+For more information, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md).
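The inheritance down this hierarchy is path-based: a role assignment made at a scope applies to that scope and everything beneath it, because Azure resource IDs nest under their parent scopes. The sketch below illustrates the idea with hypothetical resource IDs; it's a simplified model, not the Azure authorization engine.

```python
def applies(assignment_scope: str, resource_id: str) -> bool:
    """A role assignment applies to a resource when the resource's ID
    sits at or below the assignment scope in the hierarchy."""
    a = assignment_scope.rstrip("/").lower()
    r = resource_id.rstrip("/").lower()
    return r == a or r.startswith(a + "/")

# Hypothetical IDs mirroring the Benefits example above:
sub = "/subscriptions/1111"
rg = sub + "/resourceGroups/benefits-test"
db = rg + "/providers/Microsoft.DocumentDB/databaseAccounts/test-db"

print(applies(sub, db))  # True: a subscription-level role reaches the database
print(applies(rg, sub))  # False: a resource-group role doesn't flow upward
```

This is why an assignment placed high in the hierarchy can grant more access lower down than intended, a risk the next paragraph discusses.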
+
+This is a hierarchical structure, so the higher up in the hierarchy, the more scope, visibility, and impact there is on lower levels. Top-level scopes affect all Azure resources in the Azure AD tenant boundary. This also means that permissions can be applied at multiple levels. The risk this introduces is that assigning roles higher up the hierarchy could provide more access lower down the scope than intended. [Microsoft Entra Permissions Management](https://www.microsoft.com/security/business/identity-access/microsoft-entra-permissions-management) (formerly CloudKnox) is a Microsoft product that provides visibility and remediation to help reduce this risk. A few details are as follows:
+
+* The root management group defines Azure Policies and RBAC role assignments that will be applied to all subscriptions and resources.
+
+* Global Administrators can [elevate access](https://aka.ms/AzureADSecuredAzure/12a) to all subscriptions and management groups.
+
+Both top-level scopes should be strictly monitored. It's important to plan for other dimensions of resource isolation such as networking. For general guidance on Azure networking, see [Azure best practices for network security](../../security/fundamentals/network-best-practices.md). Infrastructure as a Service (IaaS) workloads have special scenarios where both identity and resource isolation need to be part of the overall design and strategy.
+
+Consider isolating sensitive or test resources according to the [Azure landing zone conceptual architecture](/azure/cloud-adoption-framework/ready/landing-zone/). For example, the Identity subscription should be assigned to a separate management group, and all subscriptions for development purposes could be separated into a "Sandbox" management group. More details can be found in the [Enterprise-Scale documentation](/azure/cloud-adoption-framework/ready/enterprise-scale/faq). Separation for testing purposes within a single tenant is also considered in the [management group hierarchy of the reference architecture](/azure/cloud-adoption-framework/ready/enterprise-scale/testing-approach).
+
+### Scoped management for Azure AD trusting applications
+
+The pattern to scope management of Azure AD trusting applications is outlined in the following section.
+
+Azure AD supports configuring multiple instances of custom and SaaS apps, but not most Microsoft services, against the same directory with [independent user assignments](../manage-apps/assign-user-or-group-access-portal.md). The above example contains both a production and a test version of the travel app. You can deploy preproduction versions against the corporate tenant to achieve app-specific configuration and policy separation that enables workload owners to perform testing with their corporate credentials. Nonproduction directory objects such as test users and test groups are associated to the nonproduction application with separate [ownership](https://aka.ms/AzureADSecuredAzure/14a) of those objects.
+
+There are tenant-wide aspects that affect all trusting applications in the Azure AD tenant boundary including:
+
+* Global Administrators can manage all tenant-wide settings.
+
+* Other [directory roles](https://aka.ms/AzureADSecuredAzure/14b) such as User Administrator and Conditional Access Administrator can manage tenant-wide configuration within the scope of the role.
+
+Configuration settings such as allowed authentication methods, hybrid configurations, B2B collaboration domain allow-listing, and named locations are tenant-wide.
+
+>[!Note]
+>Microsoft Graph API permissions and consent permissions cannot be scoped to a group or to members of administrative units. Those permissions are assigned at the directory level; only resource-specific consent allows scoping at the resource level (currently limited to [Microsoft Teams Chat permissions](/microsoftteams/platform/graph-api/rsc/resource-specific-consent)).
+
+>[!IMPORTANT]
+>The lifecycle of Microsoft SaaS services such as Office 365, Microsoft Dynamics, and Microsoft Exchange are bound to the Azure AD tenant. As a result, multiple instances of these services necessarily require multiple Azure AD tenants. Check the documentation for individual services to learn more about specific management scoping capabilities.
+
+## Next steps
+
+* [Introduction to delegated administration and isolated environments](secure-introduction.md)
+
+* [Azure AD fundamentals](secure-with-azure-ad-fundamentals.md)
+
+* [Azure resource management fundamentals](secure-resource-management.md)
+
+* [Resource isolation with multiple tenants](secure-multiple-tenants.md)
+
+* [Best practices](secure-best-practices.md)
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
The following software-defined perimeter (SDP) solutions providers connect with
* **Perimeter 81**
  * [Tutorial: Azure AD SSO integration with Perimeter 81](../saas-apps/perimeter-81-tutorial.md)
* **Silverfort Authentication Platform**
- * [Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](./silverfort-azure-ad-integration.md)
+ * [Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](./silverfort-integration.md)
* **Strata Maverics Identity Orchestrator** * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) * **Zscaler Private Access**
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
The following partners offer solutions to support [Conditional Access policies p
|F5, Inc.|[Integrate F5 BIG-IP with Azure AD](f5-integration.md)</br>[Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-passwordless-vpn.md)|
|Progress Software Corporation, Progress Kemp|[Tutorial: Azure AD SSO integration with Kemp LoadMaster Azure AD integration](../saas-apps/kemp-tutorial.md)|
|Perimeter 81 Ltd.|[Tutorial: Azure AD SSO integration with Perimeter 81](../saas-apps/perimeter-81-tutorial.md)|
-|Silverfort|[Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](silverfort-azure-ad-integration.md)|
+|Silverfort|[Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](silverfort-integration.md)|
|Strata Identity, Inc.|[Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)|

#### Partners with pre-built solutions and integration documentation
active-directory Silverfort Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/silverfort-integration.md
+
+ Title: Secure hybrid access with Azure AD and Silverfort
+description: In this tutorial, learn how to integrate Silverfort with Azure AD for secure hybrid access
+Last updated: 12/14/2022
+# Tutorial: Configure Secure Hybrid Access with Azure Active Directory and Silverfort
+
+[Silverfort](https://www.silverfort.com/) uses agent-less and proxy-less technology to connect your assets on-premises and in the cloud to Azure Active Directory (Azure AD). This solution enables organizations to apply identity protection, visibility, and user experience across environments in Azure AD. It enables universal risk-based monitoring and assessment of authentication activity for on-premises and cloud environments, and helps to prevent threats.
+
+In this tutorial, learn how to integrate your on-premises Silverfort implementation with Azure AD.
+
+Learn more: [Hybrid Azure AD joined devices](../devices/concept-azure-ad-join-hybrid.md).
+
+Silverfort connects assets with Azure AD. These bridged assets appear as regular applications in Azure AD and can be protected with [Conditional Access](../conditional-access/overview.md), single sign-on (SSO), multi-factor authentication (MFA), auditing, and more. Use Silverfort to connect assets including:
+
+- Legacy and homegrown applications
+- Remote desktop and Secure Shell (SSH)
+- Command-line tools and other admin access
+- File shares and databases
+- Infrastructure and industrial systems
+
+Silverfort integrates your corporate assets and third-party Identity and Access Management (IAM) platforms, such as Active Directory, Active Directory Federation Services (AD FS), and Remote Authentication Dial-In User Service (RADIUS), with Azure AD, including in hybrid and multicloud environments.
+
+Use this tutorial to configure and test the Silverfort Azure AD bridge in your Azure AD tenant to communicate with your Silverfort implementation. After configuration, you can create Silverfort authentication policies that bridge authentication requests from identity sources to Azure AD for SSO. After an application is bridged, you can manage it in Azure AD.
+
+## Silverfort with Azure AD authentication architecture
+
+The following diagram shows the authentication architecture orchestrated by Silverfort, in a hybrid environment.
+
+![image shows the architecture diagram](./media/silverfort-integration/silverfort-architecture-diagram.png)
+
+### User flow
+
+1. The user sends an authentication request to the original identity provider (IdP) through protocols such as Kerberos, SAML, NTLM, OIDC, and LDAP(S).
+2. The response is routed as-is to Silverfort for validation to check the authentication state.
+3. Silverfort provides visibility, discovery, and a bridge to Azure AD.
+4. If the application is bridged, the authentication decision passes to Azure AD. Azure AD evaluates Conditional Access policies and validates authentication.
+5. The authentication state response goes as-is from Silverfort to the IdP.
+6. The IdP grants or denies access to the resource.
+7. The user is notified whether the access request is granted or denied.
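The routing described in the steps above can be sketched as a small decision function. This is an illustrative model only, not Silverfort's actual API or implementation; the function and parameter names are hypothetical:

```python
# Illustrative model of the bridged-authentication decision described above.
# This is a simplified sketch for understanding the flow, not Silverfort's
# actual API.

def route_authentication(app: str, bridged_apps: set[str],
                         idp_decision: bool, azure_ad_decision: bool) -> bool:
    """Return whether access is granted for an authentication request.

    If the app is bridged, Azure AD (including Conditional Access)
    makes the final decision; otherwise the original IdP's decision
    passes through as-is.
    """
    if app in bridged_apps:
        return azure_ad_decision  # decision passes to Azure AD
    return idp_decision           # IdP response returned as-is

# A bridged legacy app is governed by Azure AD policy, while an
# unbridged app still relies on the original IdP alone.
print(route_authentication("legacy-hr", {"legacy-hr"}, True, False))   # False
print(route_authentication("file-share", {"legacy-hr"}, True, False))  # True
```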
+
+## Prerequisites
+
+You need Silverfort deployed in your tenant or infrastructure to perform this tutorial. To deploy Silverfort in your tenant or infrastructure, go to [Silverfort](https://www.silverfort.com/) to install the Silverfort desktop app on your workstations.
+
+Set up Silverfort Azure AD Adapter in your Azure AD tenant:
+
+- An Azure account with an active subscription
+ - You can create an [Azure free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+- One of the following roles in your Azure account:
+ - Global Administrator
+ - Cloud Application Administrator
+ - Application Administrator
+ - Service Principal Owner
+- The Silverfort Azure AD Adapter application in the Azure AD gallery is pre-configured to support SSO. From the gallery, add the Silverfort Azure AD Adapter to your tenant as an Enterprise application.
+
+## Configure Silverfort and create a policy
+
+1. From a browser, sign in to the Silverfort admin console.
+2. In the main menu, navigate to **Settings** and then scroll to **Azure AD Bridge Connector** in the General section.
+3. Confirm your tenant ID, and then select **Authorize**.
+4. Select **Save Changes**.
+5. On the **Permissions requested** dialog, select **Accept**.
+
+ ![image shows azure ad bridge connector](./media/silverfort-integration/bridge-connector.png)
+
+ ![image shows registration confirmation](./media/silverfort-integration/grant-permission.png)
+
+6. A Registration Completed message appears in a new tab. Close this tab.
+
+ ![image shows registration completed](./media/silverfort-integration/registration-completed.png)
+
+7. On the **Settings** page, select **Save Changes**.
+
+ ![image shows the azure ad adapter](./media/silverfort-integration/silverfort-adapter.png)
+
+8. Sign in to your Azure AD console. In the left pane, select **Enterprise applications**. The **Silverfort Azure AD Adapter** application appears as registered.
+
+ ![image shows enterprise application](./media/silverfort-integration/enterprise-application.png)
+
+9. In the Silverfort admin console, navigate to the **Policies** page and select **Create Policy**. The **New Policy** dialog appears.
+10. Enter a **Policy Name**, which is the application name to be created in Azure AD. For example, if you're adding multiple servers or applications for this policy, name it to reflect the resources covered by the policy. In this example, we create a policy for the SL-APP1 server.
+
+ ![image shows define policy](./media/silverfort-integration/define-policy.png)
+
+11. Select the **Auth Type**, and **Protocol**.
+
+12. In the **Users and Groups** field, select the **edit** icon to configure the users affected by the policy. Authentication for these users is bridged to Azure AD.
+
+ ![image shows user and groups](./media/silverfort-integration/user-groups.png)
+
+13. Search for and select users, groups, or organizational units (OUs).
+
+ ![image shows search users](./media/silverfort-integration/search-users.png)
+
+14. Selected users appear in the **SELECTED** box.
+
+ ![image shows selected user](./media/silverfort-integration/select-user.png)
+
+15. Select the **Source** to which the policy applies. In this example, **All Devices** is selected.
+
+ ![image shows source](./media/silverfort-integration/source.png)
+
+16. Set the **Destination** to SL-App1. Optional: You can select the **edit** button to change or add more resources, or groups of resources.
+
+ ![image shows destination](./media/silverfort-integration/destination.png)
+
+17. For Action, select **AZURE AD BRIDGE**.
+
+ ![image shows save azure ad bridge](./media/silverfort-integration/save-bridge.png)
+
+18. Select **Save**. You're prompted to turn on the policy.
+
+ ![image shows change status](./media/silverfort-integration/change-status.png)
+
+19. In the Azure AD Bridge section, the policy appears on the Policies page.
+
+ ![image shows add policy](./media/silverfort-integration/add-policy.png)
+
+20. Return to the Azure AD console, and navigate to **Enterprise applications**. The new Silverfort application appears. You can include this application in Conditional Access policies.
+
+Learn more: [Tutorial: Secure user sign-in events with Azure AD Multi-Factor Authentication](../authentication/tutorial-enable-azure-mfa.md?bc=/azure/active-directory/conditional-access/breadcrumb/toc.json&toc=/azure/active-directory/conditional-access/toc.json%23create-a-conditional-access-policy).
+
+## Next steps
+
+- [Silverfort Azure AD adapter](https://azuremarketplace.microsoft.com/marketplace/apps/aad.silverfortazureadadapter?tab=overview)
+- [Silverfort resources](https://www.silverfort.com/resources/)
+- [Silverfort, company contact](https://www.silverfort.com/company/contact/)
active-directory Asana Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/asana-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering|Required by Asana|
|---|---|---|---|
- |userName|String|&check;|&check;|
- |active|Boolean|||
- |name.formatted|String|||
- |preferredLanguage|String|||
- |title|String|||
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|||
+ |userName|String|&check;|&check;
+ |active|Boolean||
+ |name.formatted|String||
+ |title|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
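As an illustration of the mappings above, a provisioned user serialized as a SCIM 2.0 payload might look like the following sketch. The attribute names follow the table; the values are made up, and the Azure AD provisioning service generates the real request:

```python
# Illustrative SCIM 2.0 user payload built from the attribute mappings above.
# Values are made-up examples; the enterprise extension carries department
# and manager.
import json

ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "userName": "b.simon@contoso.com",   # required and filterable
    "active": True,
    "name": {"formatted": "B. Simon"},
    "title": "Program Manager",
    ENTERPRISE: {
        "department": "Engineering",
        "manager": {"value": "a.jones@contoso.com"},
    },
}

print(json.dumps(user, indent=2))
```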
1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Asana**.
This section guides you through the steps to configure the Azure AD provisioning
|Attribute|Type|Supported for filtering|Required by Asana|
|---|---|---|---|
|displayName|String|&check;|&check;
- |members|Reference|||
+ |members|Reference||
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
## Change log
-* 11/06/2021 - Dropped support for **externalId, name.givenName and name.familyName**. Added support for **preferredLanguage , title and urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department**. And enabled **Group Provisioning**.
-* 11/06/2021 - Dropped support for **externalId, name.givenName and name.familyName**. Added support for **preferredLanguage , title and urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department**. And enabled **Group Provisioning**.
+* 11/06/2021 - Dropped support for **externalId, name.givenName, and name.familyName**. Added support for **preferredLanguage, title, and urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department**. Enabled **Group Provisioning**.
+* 05/23/2023 - Dropped support for **preferredLanguage**. Added support for **urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager**.
## More resources
active-directory Circus Street Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/circus-street-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Circus Street
+description: Learn how to configure single sign-on between Azure Active Directory and Circus Street.
+Last updated: 06/06/2023
+# Azure Active Directory SSO integration with Circus Street
+
+In this article, you'll learn how to integrate Circus Street with Azure Active Directory (Azure AD). Circus Street is a global leader in providing digital training, including e-commerce, data analytics, and digital marketing, to organizations through its proprietary platform.
+
+When you integrate Circus Street with Azure AD, you can:
+
+* Use Azure AD to control who has access to Circus Street.
+* Enable your users to be automatically signed-in to Circus Street with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Circus Street in a test environment. Circus Street supports **SP** and **IDP** initiated single sign-on.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Circus Street, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Circus Street single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Circus Street application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Circus Street from the Azure AD gallery
+
+Add Circus Street from the Azure AD application gallery to configure single sign-on with Circus Street. For more information on how to add an application from the gallery, see [Quickstart: Add an application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Circus Street** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. In the **Basic SAML Configuration** section, you don't need to perform any steps because the app is already preintegrated with Azure.
+
+1. If you wish to configure the application in **SP** initiated mode, then perform the following step:
+
+ In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://<CustomerSubDomainName>.circusstreet.com`
+
+ > [!NOTE]
+ > This value is not real. Update this value with the actual Sign on URL. Contact [Circus Street support team](mailto:support@circusstreet.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Circus Street application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Circus Street application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | first name | user.givenname |
+ | last name | user.surname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Circus Street** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
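The claim mapping table above can be sketched as a simple lookup from Azure AD user properties to the SAML attribute names Circus Street expects. This is an illustrative model only; the user record and helper function are hypothetical:

```python
# Sketch of the claim mapping above: each SAML attribute name is filled
# from the corresponding Azure AD user property. The user record here is
# a made-up example.
CLAIM_MAP = {
    "email": "mail",
    "first name": "givenname",
    "last name": "surname",
}

def build_claims(user: dict) -> dict:
    """Return the SAML attribute set Circus Street expects."""
    return {claim: user[attr] for claim, attr in CLAIM_MAP.items()}

claims = build_claims({
    "mail": "b.simon@contoso.com",
    "givenname": "B.",
    "surname": "Simon",
})
print(claims["email"])  # b.simon@contoso.com
```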
+
+## Configure Circus Street SSO
+
+To configure single sign-on on the **Circus Street** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Circus Street support team](mailto:support@circusstreet.com). They configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create Circus Street test user
+
+Contact the [Circus Street support team](mailto:support@circusstreet.com) to add the users to the Circus Street platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the Circus Street sign-on URL, where you can initiate the login flow.
+
+* Go to the Circus Street sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the Circus Street instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Circus Street tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you should be automatically signed in to the Circus Street instance for which you set up SSO. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Circus Street, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Humbol Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/humbol-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both Humbol and Azure A
The scenario outlined in this tutorial assumes that you already have the following prerequisites:
-* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* An active contract with Humbol Inc. that includes SCIM API usage.
* A user account in Humbol with Admin permissions.

## Step 1. Plan your provisioning deployment
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Humbol to support provisioning with Azure AD

Contact Humbol support to configure Humbol to support provisioning with Azure AD.
+1. As a Humbol admin, sign in to your [Humbol](https://my.humbol.app/login) organization.
+1. Go to the organization's API [settings page](https://my.humbol.app/settings#apis).
+   1. On this page, find the organization's SCIM API URL and copy it.
+   1. Create a SCIM API token and copy its value.
+   > [!NOTE]
+   > The token value isn't saved anywhere on the Humbol service, so if you lose it, create a new one and remove the old one.
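Once you've copied the SCIM API URL and token, you can optionally sanity-check them before configuring Azure AD. The following sketch uses Python's standard library; the base URL and token are placeholders for the values you copied:

```python
# Sketch: sanity-check the copied SCIM base URL and token with a single
# authenticated request. BASE_URL and TOKEN are placeholders, not real
# values.
import urllib.request

BASE_URL = "https://example.humbol.app/scim/v2"   # placeholder
TOKEN = "paste-your-scim-api-token-here"          # placeholder

req = urllib.request.Request(
    f"{BASE_URL}/Users?count=1",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/scim+json",
    },
)

# Uncomment to send the request; an HTTP 200 means the URL and token work.
# with urllib.request.urlopen(req) as resp:
#     print(resp.status)
print(req.full_url)
```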
+## Step 3. Add Humbol from the Azure AD application gallery
+
+Add Humbol from the Azure AD application gallery to start managing provisioning to Humbol. If you have previously set up Humbol for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
This section guides you through the steps to configure the Azure AD provisioning
|preferredLanguage|String||
|name.givenName|String||&check;
|name.familyName|String||&check;
- |addresses[type eq "work"].locality|String||&check;
- |addresses[type eq "work"].region|String||&check;
- |addresses[type eq "work"].country|String||&check;
- |roles[primary eq "True"].value|String||&check;
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||&check;
- |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||&check;
-
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||
+ |addresses[type eq "work"].country|String||
+ |roles[primary eq "True"].value|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+
+ > [!NOTE]
+ > * If you include `roles[primary eq "True"].value`, every user must have precisely one role.
+ > * Another option is to remove the role attribute mapping and manage Humbol user roles inside the Humbol application.
+
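To see what the `roles[primary eq "True"].value` expression selects, and why exactly one primary role is required when that mapping is included, consider this illustrative sketch (the role names and helper function are made-up examples, not part of the Humbol or Azure AD APIs):

```python
# Sketch of what `roles[primary eq "True"].value` selects from a SCIM
# roles list. If zero or multiple primary roles exist, the mapping is
# ambiguous, which is why each user needs precisely one primary role.

def primary_role(roles: list[dict]) -> str:
    """Return the value of the single primary role; raise otherwise."""
    primaries = [r["value"] for r in roles
                 if r.get("primary") in (True, "True")]
    if len(primaries) != 1:
        raise ValueError(
            f"expected exactly one primary role, got {len(primaries)}")
    return primaries[0]

print(primary_role([
    {"value": "Coach", "primary": "True"},
    {"value": "Member"},
]))  # Coach
```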
1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
1. To enable the Azure AD provisioning service for Humbol, change the **Provisioning Status** to **On** in the **Settings** section.
active-directory Krisp Technologies Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/krisp-technologies-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Krisp Technologies
+description: Learn how to configure single sign-on between Azure Active Directory and Krisp Technologies.
+Last updated: 06/06/2023
+# Azure Active Directory SSO integration with Krisp Technologies
+
+In this article, you'll learn how to integrate Krisp Technologies with Azure Active Directory (Azure AD). Krisp's Voice Productivity AI improves voice communication by removing background noise, clarifying accents, and providing call transcripts. When you integrate Krisp Technologies with Azure AD, you can:
+
+* Control in Azure AD who has access to Krisp Technologies.
+* Enable your users to be automatically signed-in to Krisp Technologies with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Krisp Technologies in a test environment. Krisp Technologies supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Krisp Technologies, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Krisp Technologies single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Krisp Technologies application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Krisp Technologies from the Azure AD gallery
+
+Add Krisp Technologies from the Azure AD application gallery to configure single sign-on with Krisp Technologies. For more information on how to add an application from the gallery, see [Quickstart: Add an application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Krisp Technologies** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a value using the following pattern:
+ `<TEAM_SLUG_ID>`
+
+ b. In the **Reply URL** textbox, type a URL using the following pattern:
+ `https://api.krisp.ai/v2/auth/sso/saml/<ID>`
+
+ c. In the **Sign on URL** textbox, type a URL using the following pattern:
+ `https://account.krisp.ai/sso/<ID>`
+
+ > [!Note]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Krisp Technologies support team](mailto:support@krisp.ai) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. The Krisp Technologies application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Krisp Technologies application expects a few more attributes to be passed back in the SAML response; these are shown below. These attributes are also prepopulated, but you can review them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate")
+
+1. On the **Set up Krisp Technologies** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
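As a recap of the **Basic SAML Configuration** patterns above, the three values differ only in their tenant-specific identifiers. The sketch below shows how they relate; both identifiers are placeholders for the real values from the Krisp Technologies support team:

```python
# Sketch of the Basic SAML Configuration value patterns above.
# TEAM_SLUG_ID and ID are placeholders, not real values.
TEAM_SLUG_ID = "contoso-team"   # placeholder Identifier
ID = "abc123"                   # placeholder tenant-specific ID

identifier = TEAM_SLUG_ID
reply_url = f"https://api.krisp.ai/v2/auth/sso/saml/{ID}"
sign_on_url = f"https://account.krisp.ai/sso/{ID}"

print(reply_url)    # https://api.krisp.ai/v2/auth/sso/saml/abc123
print(sign_on_url)  # https://account.krisp.ai/sso/abc123
```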
+
+## Configure Krisp Technologies SSO
+
+To configure single sign-on on the **Krisp Technologies** side, you need to send the downloaded **Certificate (Raw)** and the appropriate copied URLs from the Azure portal to the [Krisp Technologies support team](mailto:support@krisp.ai). They configure this setting so the SAML SSO connection is set properly on both sides.
+
+### Create Krisp Technologies test user
+
+In this section, a user called B.Simon is created in Krisp Technologies. Krisp Technologies supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Krisp Technologies, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Krisp Technologies sign-on URL, where you can initiate the login flow.
+
+* Go to the Krisp Technologies sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Krisp Technologies tile in My Apps, you're redirected to the Krisp Technologies sign-on URL. For more information, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md).
+
+## Next steps
+
+Once you configure Krisp Technologies, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Openforms Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openforms-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure OpenForms for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to OpenForms.
++
+writer: twimmers
+
+ms.assetid: 8a9a7be2-9e2f-4da5-9619-1382b6e17a4a
+Last updated: 05/30/2023
+# Tutorial: Configure OpenForms for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both OpenForms and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [OpenForms](https://granicus.com/solution/govservice/openforms) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in OpenForms.
+> * Remove users in OpenForms when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and OpenForms.
+> * Provision groups and group memberships in OpenForms.
+> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to OpenForms (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* A user account in OpenForms with Admin permissions.
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and OpenForms](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure OpenForms to support provisioning with Azure AD
+Contact OpenForms support to configure OpenForms to support provisioning with Azure AD.
+
+## Step 3. Add OpenForms from the Azure AD application gallery
+
+Add OpenForms from the Azure AD application gallery to start managing provisioning to OpenForms. If you have previously set up OpenForms for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing out the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to OpenForms
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in OpenForms based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for OpenForms in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **OpenForms**.
+
+ ![Screenshot of the OpenForms link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your OpenForms Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to OpenForms. If the connection fails, ensure your OpenForms account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to OpenForms**.
+
+1. Review the user attributes that are synchronized from Azure AD to OpenForms in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in OpenForms for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the OpenForms API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by OpenForms|
+ |||||
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |externalId|String||&check;
+
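As an illustration of the mapping above, a SCIM user resource carrying these attributes might look like the following sketch. All values are hypothetical placeholders, not real OpenForms data:

```shell
# Write a hypothetical SCIM user payload illustrating the mapped attributes.
# Every value below is a placeholder, shown for illustration only.
cat <<'EOF' > /tmp/scim-user.json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "b.simon@contoso.com",
  "active": true,
  "emails": [{"type": "work", "value": "b.simon@contoso.com"}],
  "name": {"givenName": "B", "familyName": "Simon"},
  "externalId": "b.simon"
}
EOF
# userName is the matching attribute, so it must uniquely identify the user.
grep -c '"userName"' /tmp/scim-user.json
```

Because **userName** is the matching property, the provisioning service uses it to correlate Azure AD users with existing OpenForms accounts during update operations.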
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to OpenForms**.
+
+1. Review the group attributes that are synchronized from Azure AD to OpenForms in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in OpenForms for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by OpenForms|
+ |||||
+ |displayName|String|&check;|&check;
+ |externalId|String||&check;
+ |members|Reference||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for OpenForms, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to OpenForms by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Scilife Azure Ad Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/scilife-azure-ad-sso-tutorial.md
+
+ Title: Azure Active Directory SSO integration with Scilife Azure AD SSO
+description: Learn how to configure single sign-on between Azure Active Directory and Scilife Azure AD SSO.
+Last updated : 06/06/2023
+# Azure Active Directory SSO integration with Scilife Azure AD SSO
+
+In this article, you'll learn how to integrate Scilife Azure AD SSO with Azure Active Directory (Azure AD). This integration keeps SSO configuration simple and hassle free, because most of the setup happens automatically with minimal effort. When you integrate Scilife Azure AD SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to Scilife Azure AD SSO.
+* Enable your users to be automatically signed-in to Scilife Azure AD SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+You'll configure and test Azure AD single sign-on for Scilife Azure AD SSO in a test environment. Scilife Azure AD SSO supports **SP** initiated single sign-on and **Just In Time** user provisioning.
+
+## Prerequisites
+
+To integrate Azure Active Directory with Scilife Azure AD SSO, you need:
+
+* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* A Scilife Azure AD SSO single sign-on (SSO) enabled subscription.
+
+## Add application and assign a test user
+
+Before you begin the process of configuring single sign-on, you need to add the Scilife Azure AD SSO application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration.
+
+### Add Scilife Azure AD SSO from the Azure AD gallery
+
+Add Scilife Azure AD SSO from the Azure AD application gallery to configure single sign-on with Scilife Azure AD SSO. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md).
+
+### Create and assign Azure AD test user
+
+Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards](/microsoft-365/admin/misc/azure-ad-setup-guides).
+
+## Configure Azure AD SSO
+
+Complete the following steps to enable Azure AD single sign-on in the Azure portal.
+
+1. In the Azure portal, on the **Scilife Azure AD SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** textbox, type a URL using one of the following patterns:
+
+ | **Identifier** |
+ |--|
+ | `https://ldap-Environment.scilife.io/simplesaml/module.php/saml/sp/metadata.php/<CustomerUrlPrefix>-<Environment>-sp` |
+ | `https://ldap.scilife.io/simplesaml/module.php/saml/sp/metadata.php/<CustomerUrlPrefix>-sp` |
+
+ b. In the **Reply URL** textbox, type a URL using one of the following patterns:
+
+ | **Reply URL** |
+ ||
+ | `https://<CustomerUrlPrefix>.scilife.io/<languageCode>/login` |
+ | `https://ldap.scilife.io/simplesaml/module.php/saml/sp/metadata.php/<CustomerUrlPrefix>-sp` |
+ | `https://ldap.scilife.io/simplesaml/module.php/saml/sp/saml2-acs.php/<CustomerUrlPrefix>-sp` |
+ | `https://<CustomerUrlPrefix>-<Environment>.scilife.io/<languageCode>/login` |
+ | `https://ldap-<Environment>.scilife.io/simplesaml/module.php/saml/sp/metadata.php/<CustomerUrlPrefix>-<Environment>-sp` |
+ | `https://ldap-<Environment>.scilife.io/simplesaml/module.php/saml/sp/saml2-acs.php/<CustomerUrlPrefix>-<Environment>-sp` |
+
+ c. In the **Sign on URL** textbox, type a URL using one of the following patterns:
+
+ | **Sign on URL** |
+ |--|
+ | `https://<CustomerUrlPrefix>.scilife.io/<languageCode>/login` |
+ | `https://<CustomerUrlPrefix>-<Environment>.scilife.io/<languageCode>/login` |
+
+    > [!NOTE]
+    > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Scilife Azure AD SSO support team](mailto:support@scilife.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
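To sanity-check the values you receive from the Scilife support team against the patterns above, you can expand the placeholders yourself. A minimal sketch using hypothetical values (`contoso`, `staging`, `en`) and the environment-specific patterns:

```shell
# Hypothetical placeholder values -- replace with the ones from Scilife support.
CustomerUrlPrefix="contoso"
Environment="staging"
languageCode="en"

# Environment-specific patterns from the Basic SAML Configuration tables above.
identifier="https://ldap-${Environment}.scilife.io/simplesaml/module.php/saml/sp/metadata.php/${CustomerUrlPrefix}-${Environment}-sp"
sign_on_url="https://${CustomerUrlPrefix}-${Environment}.scilife.io/${languageCode}/login"

echo "Identifier:  ${identifier}"
echo "Sign on URL: ${sign_on_url}"
```

If your tenant uses the single-environment patterns instead, drop the `-${Environment}` segments accordingly.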
+
+1. The Scilife Azure AD SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of attributes configuration.](common/default-attributes.png "Image")
+
+1. In addition to the above, the Scilife Azure AD SSO application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review and adjust them per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | email | user.mail |
+ | firstname | user.givenname |
+ | lastname | user.surname |
+ | ldap_user_id | user.userprincipalname |
+ | mobile | user.mobilephone |
+
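For reference, a SAML response carrying the claims above might include an attribute statement along these lines. This is only a sketch with placeholder values, not a capture from a real Scilife response:

```xml
<!-- Sketch of a SAML 2.0 AttributeStatement carrying the claims above.
     All values are illustrative placeholders. -->
<AttributeStatement xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
  <Attribute Name="email">
    <AttributeValue>b.simon@contoso.com</AttributeValue>
  </Attribute>
  <Attribute Name="firstname">
    <AttributeValue>B</AttributeValue>
  </Attribute>
  <Attribute Name="lastname">
    <AttributeValue>Simon</AttributeValue>
  </Attribute>
  <Attribute Name="ldap_user_id">
    <AttributeValue>b.simon@contoso.com</AttributeValue>
  </Attribute>
  <Attribute Name="mobile">
    <AttributeValue>+1 555 0100</AttributeValue>
  </Attribute>
</AttributeStatement>
```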
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Scilife Azure AD SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata")
+
+## Configure Scilife Azure AD SSO
+
+1. Log in to your Scilife Azure AD SSO company site as an administrator.
+
+1. Go to **Manage** > **Active Directory Settings** and perform the following steps:
+
+ ![Screenshot shows the Scilife Azure administration portal.](media/scilife-azure-ad-sso-tutorial/manage.png "Admin")
+
+ 1. Enable **Configure Active Directory**.
+
+ 1. Select **AD Azure** type from the drop-down.
+
+    1. Download the **Federation Metadata XML file** from the Azure portal, and upload it under **Upload MetadataXML** by clicking **Choose file**.
+
+ 1. Click **Parse Metadata**.
+
+1. Enter the **Tenant ID**, **Application ID**, and **Client ID** in the following fields.
+
+ ![Screenshot shows the Scilife Azure tenant ID.](media/scilife-azure-ad-sso-tutorial/tenant.png "App")
+
+1. Copy the **AD TRUST URL** value and paste it into the **Identifier (Entity ID)** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Copy the **AD CONSUMER SERVICE URL** value and paste it into the **Reply URL (Assertion Consumer Service URL)** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ ![Screenshot shows the Scilife Azure portal URLs.](media/scilife-azure-ad-sso-tutorial/portal.png "Azure Configuration")
+
+1. Click **Save Configuration**.
+
+### Create Scilife Azure AD SSO test user
+
+In this section, a user called B.Simon is created in Scilife Azure AD SSO. Scilife Azure AD SSO supports just-in-time user provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in Scilife Azure AD SSO, a new one is commonly created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Scilife Azure AD SSO Sign-on URL where you can initiate the login flow.
+
+* Go to Scilife Azure AD SSO Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Scilife Azure AD SSO tile in the My Apps, this will redirect to Scilife Azure AD SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Additional resources
+
+* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md)
+
+## Next steps
+
+Once you configure Scilife Azure AD SSO, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Templafy Openid Connect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-openid-connect-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+ > [!NOTE]
+ > Schema Discovery feature is enabled for this application.
+
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Templafy**.

    ![Templafy OpenID Connect Group Mappings](media/templafy-openid-connect-provisioning-tutorial/group-mapping.png)
This section guides you through the steps to configure the Azure AD provisioning
|members|Reference|
|externalId|String|
+ > [!NOTE]
+ > Schema Discovery feature is enabled for this application.
+
12. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).

13. To enable the Azure AD provisioning service for Templafy OpenID Connect, change the **Provisioning Status** to **On** in the **Settings** section.
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change log
+
+* 05/04/2023 - Added support for **Schema Discovery**.
+
## Additional resources

* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Templafy Saml 2 Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/templafy-saml-2-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Templafy SAML2 in the **Attribute Mappings** section. The attributes selected as **Matching** properties are used to match the user accounts in Templafy SAML2 for update operations. Select the **Save** button to commit any changes.
+
|Attribute|Type|Supported for filtering|
||||
|userName|String|&check;|
This section guides you through the steps to configure the Azure AD provisioning
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|
+ > [!NOTE]
+ > Schema Discovery feature is enabled for this application.
+
10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Templafy**.

    ![Templafy SAML2 Group Mappings](media/templafy-saml-2-provisioning-tutorial/group-mapping.png)
This section guides you through the steps to configure the Azure AD provisioning
|members|Reference|
|externalId|String|
+ > [!NOTE]
+ > Schema Discovery feature is enabled for this application.
12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Change log
+
+* 05/04/2023 - Added support for **Schema Discovery**.
+
## Additional resources

* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Zero Networks Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zero-networks-tutorial.md
Previously updated : 11/21/2022 Last updated : 06/07/2023
In this tutorial, you configure Azure AD SSO for the Zero Networks Admin Portal
To configure the integration of Zero Networks into Azure AD, you need to add Zero Networks from the gallery to your list of managed SaaS apps.
-1. Sign in to the Azure portal using a work Microsoft account.
+1. Sign in to the Azure portal using a Microsoft work or school account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Zero Networks** in the search box.
-1. Select **Zero Networks** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. Select **Zero Networks** from results panel and select **Create** to add the app. Wait a few seconds while the app is added to your tenant.
Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
To configure the integration of Zero Networks into Azure AD, you need to add Zer
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Zero Networks** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, go back to **Azure Active Directory** and click **Enterprise Applications**. Select the **Zero Networks** application, and in the **Manage** section, select **Single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, perform the following step.
- a. In the **Sign on URL** text box, type the URL:
- `https://portal.zeronetworks.com/#/login`
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://portal.zeronetworks.com/api/v1/sso/azure/metadata`
+
+ b. In the **Reply URL (Assertion Consumer Service URL)** text box, type the URL:
+ `https://portal.zeronetworks.com/api/v1/sso/azure/acs`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://portal.zeronetworks.com/#/login`
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+1. On the **Set up single sign-on with SAML** page, in the **SAML Certificates** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
![The Certificate download link](common/certificatebase64.png)
Follow these steps to enable Azure AD SSO in the Azure portal.
![Screenshot shows settings of SSO configuration.](./media/zero-networks-tutorial/settings.png "Account")
- 1. Copy **Identifier(Entity ID)** value, paste this value into the **Identifier** text box in the **Basic SAML Configuration** section in the Azure portal.
-
- 1. Copy **Reply URL (Assertion Consumer Service URL)** value, paste this value into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
1. In the **Login URL** textbox, paste the **Login URL** value which you have copied from the Azure portal.
1. In the **Logout URL** textbox, paste the **Logout URL** value which you have copied from the Azure portal.
In this section, you test your Azure AD single sign-on configuration with follow
* Go to Zero Networks Sign-on URL directly and initiate the login flow from there.
-* You can use Microsoft My Apps. When you click the Zero Networks tile in the My Apps, this will redirect to Zero Networks Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+* You can use Microsoft My Apps. When you click the Zero Networks tile in the My Apps, this will redirect to Zero Networks Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
active-directory Pci Requirement 6 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/pci-requirement-6.md
|-|-|
|**6.5.1** Changes to all system components in the production environment are made according to established procedures that include: </br> Reason for, and description of, the change. </br> Documentation of security impact. </br> Documented change approval by authorized parties. </br> Testing to verify that the change doesn't adversely impact system security. </br> For bespoke and custom software changes, all updates are tested for compliance with Requirement 6.2.4 before being deployed into production. </br> Procedures to address failures and return to a secure state.|Include changes to Azure AD configuration in the change control process.|
|**6.5.2** Upon completion of a significant change, all applicable PCI-DSS requirements are confirmed to be in place on all new or changed systems and networks, and documentation is updated as applicable.|Not applicable to Azure AD.|
-|**6.5.3** Preproduction environments are separated from production environments and the separation is enforced with access controls.|Approaches to separate preproduction and production environments, based on organizational requirements. [Resource isolation in a single tenant](../fundamentals/secure-with-azure-ad-single-tenant.md) </br> [Resource isolation with multiple tenants](../fundamentals/secure-with-azure-ad-multiple-tenants.md)|
+|**6.5.3** Preproduction environments are separated from production environments and the separation is enforced with access controls.|Approaches to separate preproduction and production environments, based on organizational requirements. [Resource isolation in a single tenant](../fundamentals/secure-single-tenant.md) </br> [Resource isolation with multiple tenants](../fundamentals/secure-multiple-tenants.md)|
|**6.5.4** Roles and functions are separated between production and preproduction environments to provide accountability such that only reviewed and approved changes are deployed.|Learn about privileged roles and dedicated preproduction tenants. [Best practices for Azure AD roles](../roles/best-practices.md)|
|**6.5.5** Live PANs aren't used in preproduction environments, except where those environments are included in the CDE and protected in accordance with all applicable PCI-DSS requirements.|Not applicable to Azure AD.|
|**6.5.6** Test data and test accounts are removed from system components before the system goes into production.|Not applicable to Azure AD.|
aks Edge Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/edge-zones.md
The following example is an Azure Resource Manager template (ARM template) that
* **Linux Admin Username**: Enter a username to connect using SSH, such as azureuser.
-* **SSH RSA Public Key**: Copy and paste the public part of your SSH key pair (by default, the contents of ~/.ssh/id_rsa.pub).
+* **SSH RSA Public Key**: Copy and paste the public part of your SSH key pair (by default, the contents of the `~/.ssh/id_rsa.pub` file).
If you're unfamiliar with ARM templates, see the tutorial on [deploying a local ARM template][arm-template-deploy].
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
aks Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/static-ip.md
This article shows you how to create a static public IP address and assign it to
2. Create a file named `load-balancer-service.yaml` and copy in the contents of the following YAML file, providing your own public IP address created in the previous step and the node resource group name.
+ > [!IMPORTANT]
+    > Adding the `loadBalancerIP` property to the load balancer YAML manifest is being deprecated, following [upstream Kubernetes](https://github.com/kubernetes/kubernetes/pull/107235). While current usage remains the same and existing services are expected to work without modification, we **highly recommend setting service annotations** instead. To set service annotations, you can use `service.beta.kubernetes.io/azure-load-balancer-ipv4` for an IPv4 address and `service.beta.kubernetes.io/azure-load-balancer-ipv6` for an IPv6 address.
+
    ```yaml
    apiVersion: v1
    kind: Service
This article shows you how to create a static public IP address and assign it to
3. Use the `kubectl apply` command to create the service and deployment.
-```console
-kubectl apply -f load-balancer-service.yaml
-```
+ ```console
+ kubectl apply -f load-balancer-service.yaml
+ ```
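If you prefer the annotation-based form recommended in the important note above, the service manifest can be sketched as follows. The IP address, resource group name, and selector are placeholders for the values from your own deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-load-balancer
  annotations:
    # Placeholder static IP -- use the public IP address created in step 1.
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 203.0.113.10
    # Placeholder resource group -- the group that contains the static IP
    # (typically the node resource group).
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myNodeResourceGroup
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
```

This avoids the deprecated `loadBalancerIP` field while producing the same result.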
## Apply a DNS label to the service
aks Use Pod Sandboxing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-sandboxing.md
Title: Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)
description: Learn about and deploy Pod Sandboxing (preview), also referred to as Kernel Isolation, on an Azure Kubernetes Service (AKS) cluster. Previously updated : 03/07/2023 Last updated : 06/07/2023

# Pod Sandboxing (preview) with Azure Kubernetes Service (AKS)
The following are constraints with this preview of Pod Sandboxing (preview):
* [Microsoft Defender for Containers][defender-for-containers] doesn't support assessing Kata runtime pods.
-* [Container Insights][container-insights] doesn't support monitoring of Kata runtime pods in the preview release.
- * [Kata][kata-network-limitations] host-network isn't supported. * AKS does not support [Container Storage Interface drivers][csi-storage-driver] and [Secrets Store CSI driver][csi-secret-store-driver] in this preview release.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
You can run an unsupported version of Ruby by building your own container image
## Set Ruby version
-Run the following command in the [Cloud Shell](https://shell.azure.com) to set the Ruby version to 2.3:
+Run the following command in the [Cloud Shell](https://shell.azure.com) to set the Ruby version to 2.7:
```azurecli-interactive
-az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "RUBY|2.3"
+az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "RUBY:2.7"
```

> [!NOTE]
> If you see errors similar to the following during deployment time:
>
> ```
-> Your Ruby version is 2.3.3, but your Gemfile specified 2.3.1
+> Your Ruby version is 2.7.3, but your Gemfile specified 2.3.1
> ```
>
> or
az webapp config set --resource-group <resource-group-name> --name <app-name> --
> rbenv: version `2.3.1' is not installed
> ```
>
-> It means that the Ruby version configured in your project is different than the version that's installed in the container you're running (`2.3.3` in the example above). In the example above, check both *Gemfile* and *.ruby-version* and verify that the Ruby version is not set, or is set to the version that's installed in the container you're running (`2.3.3` in the example above).
+> It means that the Ruby version configured in your project is different than the version that's installed in the container you're running (`2.7.3` in the example above). In the example above, check both *Gemfile* and *.ruby-version* and verify that the Ruby version is not set, or is set to the version that's installed in the container you're running (`2.7.3` in the example above).
## Access environment variables
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
If you don't see your application picking up the data from your App Configuratio
kubectl get configmap configmap-created-by-appconfig-provider -n appconfig-demo
```
-If the ConfigMap is not created properly, run the following command to get the data retrieval status.
+If the ConfigMap is not created, run the following command to get the data retrieval status.
```console
kubectl get AzureAppConfigurationProvider appconfigurationprovider-sample -n appconfig-demo -o yaml
```
helm uninstall azureappconfiguration.kubernetesprovider --namespace azappconfig-
[!INCLUDE[Azure App Configuration cleanup](../../includes/azure-app-configuration-cleanup.md)]
-## Summary
+## Next steps
In this quickstart, you:

* Created an application running in Azure Kubernetes Service (AKS).
* Connected your AKS cluster to your App Configuration store using the App Configuration Kubernetes Provider.
* Created a ConfigMap with data from your App Configuration store.
-* Ran the application with configuration from your App Configuration store without changing your application code.
+* Ran the application with configuration from your App Configuration store without changing your application code.
+
+To learn more about the Azure App Configuration Kubernetes Provider, see [Azure App Configuration Kubernetes Provider reference](./reference-kubernetes-provider.md).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |
|--|--|--|--|--|
-| TKG 2.1.0 | 1.24.9 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)
-| TKG-1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)
-| TKGm v1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)|
+| TKGm 2.2 | 1.25.7 | 1.19.0_2023-05-09 | 16.0.937.6223 | 14.5 (Ubuntu 20.04)
+| TKGm 2.1.0 | 1.24.9 | 1.15.0_2023-01-10 | 16.0.816.19223 | 14.5 (Ubuntu 20.04)
+| TKGm 1.6.0 | 1.23.8 | 1.11.0_2022-09-13 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)
+| TKGm 1.5.3 | 1.22.8 | 1.9.0_2022-07-12 | 16.0.312.4243 | 12.3 (Ubuntu 12.3-1)|
### Wind River
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023 #
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Provider name | Distribution name | Version |
| -- | -- | -- |
| RedHat | [OpenShift Container Platform](https://www.openshift.com/products/container-platform) | [4.9.43](https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html), [4.10.23](https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html), 4.11.0-rc.6 |
-| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKG 2.1.0; upstream K8s v1.24.9_vmware.1 <br> TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5_vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
+| VMware | [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) | TKGm 2.2; upstream K8s v1.25.7+vmware.2 <br> TKG 2.1.0; upstream K8s v1.24.9+vmware.1 <br> TKGm 1.6.0; upstream K8s v1.23.8+vmware.2 <br>TKGm 1.5.3; upstream K8s v1.22.8+vmware.1 <br>TKGm 1.4.0; upstream K8s v1.21.2+vmware.1 <br>TKGm 1.3.1; upstream K8s v1.20.5+vmware.2 <br>TKGm 1.2.1; upstream K8s v1.19.3+vmware.1 |
| Canonical | [Charmed Kubernetes](https://ubuntu.com/kubernetes) | [1.24](https://ubuntu.com/kubernetes/docs/1.24/components) |
| SUSE Rancher | [Rancher Kubernetes Engine](https://rancher.com/products/rke/) | RKE CLI version: [v1.3.13](https://github.com/rancher/rke/releases/tag/v1.3.13); Kubernetes versions: 1.24.2, 1.23.8 |
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 |
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Installing the Connected Machine agent for Window applies the following system-w
### Linux agent installation details
-The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configurs the agent.
+The preferred package format for the distribution (`.rpm` or `.deb`) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/) provides the Connected Machine agent for Linux. The shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent) installs and configures the agent.
Installing, upgrading, and removing the Connected Machine agent isn't required after server restart.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-linux Tutorial Azure Linux Add Nodepool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-add-nodepool.md
Title: Azure Linux Container Host for AKS tutorial - Add an Azure Linux node pool to your existing AKS cluster
-description: In this Azure Linux Container Host for AKS tutorial, you will learn how to add an Azure Linux node pool to your existing cluster.
+description: In this Azure Linux Container Host for AKS tutorial, you learn how to add an Azure Linux node pool to your existing cluster.
Previously updated : 04/18/2023 Last updated : 06/06/2023 # Tutorial: Add an Azure Linux node pool to your existing AKS cluster In AKS, nodes with the same configurations are grouped together into node pools. Each pool contains the VMs that run your applications. In the previous tutorial, you created an Azure Linux Container Host cluster with a single node pool. To meet the varying compute or storage requirements of your applications, you can create additional user node pools.
-In this tutorial, part two of five, you will learn how to:
+In this tutorial, part two of five, you learn how to:
> [!div class="checklist"]
+>
> * Add an Azure Linux node pool. > * Check the status of your node pools.
-In later tutorials, you'll learn how to migrate nodes to Azure Linux and enable telemetry to monitor your clusters.
+In later tutorials, you learn how to migrate nodes to Azure Linux and enable telemetry to monitor your clusters.
## Prerequisites -- In the previous tutorial, you created and deployed an Azure Linux Container Host cluster. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).--- You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* In the previous tutorial, you created and deployed an Azure Linux Container Host cluster. If you haven't done these steps and would like to follow along, start with [Tutorial 1: Create a cluster with the Azure Linux Container Host for AKS](./tutorial-azure-linux-create-cluster.md).
+* You need the latest version of Azure CLI. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## 1 - Add an Azure Linux node pool
-To add an Azure Linux node pool into your existing cluster, use the `az aks nodepool add` command and specify `--os-sku AzureLinux`. The following example creates a node pool named *myAzureLinuxNodepool* that runs 3 nodes in the *testAzureLinuxCluster* cluster in the *testAzureLinuxResourceGroup* resource group:
+To add an Azure Linux node pool into your existing cluster, use the `az aks nodepool add` command and specify `--os-sku AzureLinux`. The following example creates a node pool named *ALnodepool* that runs three nodes in the *testAzureLinuxCluster* cluster in the *testAzureLinuxResourceGroup* resource group:
```azurecli-interactive
az aks nodepool add \
    --resource-group testAzureLinuxResourceGroup \
    --cluster-name testAzureLinuxCluster \
- --name myAzureLinuxNodepool \
- --node-count 3
+ --name ALnodepool \
+ --node-count 3 \
    --os-sku AzureLinux
```

> [!NOTE]
-> The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools the length must be between 1 and 12 characters.
+> The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools the length must be between one and 12 characters.
## 2 - Check the node pool status
az aks nodepool list --resource-group testAzureLinuxResourceGroup --cluster-name
## Next steps
-In this tutorial, you added an Azure Linux node pool to your existing cluster. You learned how to:
+In this tutorial, you added an Azure Linux node pool to your existing cluster. You learned how to:
> [!div class="checklist"]
+>
> * Add an Azure Linux node pool. > * Check the status of your node pools.
-In the next tutorial, you'll learn how to migrate existing nodes to Azure Linux.
+In the next tutorial, you learn how to migrate existing nodes to Azure Linux.
> [!div class="nextstepaction"]
-> [Migrating to Azure Linux](./tutorial-azure-linux-migration.md)
+> [Migrating to Azure Linux](./tutorial-azure-linux-migration.md)
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
Title: Add a heat map layer to a map | Microsoft Azure Maps
-description: Learn how to create a heat map. See how to use the Azure Maps Web SDK to add a heat map layer to a map. Find out how to customize heat map layers.
-- Previously updated : 10/06/2021
+description: Learn how to create a heat map and customize heat map layers using the Azure Maps Web SDK.
++ Last updated : 06/06/2023 - # Add a heat map layer to a map
map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
}), 'labels'); ```
-Here's the complete running code sample of the preceding code.
+The [Simple Heat Map Layer] sample demonstrates how to create a simple heat map from a data set of point features.
-<br/>
+<!--
<iframe height='500' scrolling='no' title='Simple Heat Map Layer' src='//codepen.io/azuremaps/embed/gQqdQB/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/gQqdQB/'>Simple Heat Map Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Customize the heat map layer
The previous example customized the heat map by setting the radius and opacity o
However, if you use an expression, the weight of each data point can be based on the properties of each data point. For example, suppose each data point represents an earthquake. The magnitude value has been an important metric for each earthquake data point. Earthquakes happen all the time, but most have a low magnitude, and aren't noticed. Use the magnitude value in an expression to assign the weight to each data point. By using the magnitude value to assign the weight, you get a better representation of the significance of earthquakes within the heat map. - `source` and `source-layer`: Enable you to update the data source.
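As a sketch of the weighting idea above (assuming each point feature carries a hypothetical `magnitude` property), the heat map layer options could look like:

```json
{
    "weight": ["get", "magnitude"],
    "radius": 20,
    "opacity": 0.8
}
```

Here `["get", "magnitude"]` is a data-driven style expression that reads the `magnitude` property of each point and uses it as that point's weight, so stronger earthquakes contribute more heat than weak ones.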
-Here's a tool to test out the different heat map layer options.
+The [Heat Map Layer Options] sample shows how the different options of the heat map layer affect rendering.
-<br/>
+<!--
<iframe height='700' scrolling='no' title='Heat Map Layer Options' src='//codepen.io/azuremaps/embed/WYPaXr/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/WYPaXr/'>Heat Map Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Consistent zoomable heat map
Use a `zoom` expression to scale the radius for each zoom level, such that each
Scaling the radius so that it doubles with each zoom level creates a heat map that looks consistent on all zoom levels. To apply this scaling, use `zoom` with a base 2 `exponential interpolation` expression, with the pixel radius set for the minimum zoom level and a scaled radius for the maximum zoom level calculated as `2 * Math.pow(2, minZoom - maxZoom)` as shown in the following sample. Zoom the map to see how the heat map scales with the zoom level.
-<br/>
+The [Consistent zoomable Heat Map] sample shows how to create a heat map where the radius of each data point covers the same physical area on the ground, creating a more consistent user experience when zooming the map. The heat map in this sample scales consistently between zoom levels 10 and 22. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Doubling the radius with each zoom level creates a heat map that looks consistent across all zoom levels.
+
+<!--
<iframe height="500" scrolling="no" title="Consistent zoomable heat map" src="//codepen.io/azuremaps/embed/OGyMZr/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/OGyMZr/'>Consistent zoomable heat map</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
-The `zoom` expression can only be used in `step` and `interpolate` expressions. The following expression can be used to approximate a radius in meters. This expression uses a placeholder `radiusMeters` which you should replace with your desired radius. This expression calculates the approximate pixel radius for a zoom level at the equator for zoom levels 0 and 24, and uses an `exponential interpolation` expression to scale between these values the same way the tiling system in the map works.
+The `zoom` expression can only be used in `step` and `interpolate` expressions. The following expression can be used to approximate a radius in meters. This expression uses a placeholder `radiusMeters`, which you should replace with your desired radius. This expression calculates the approximate pixel radius for a zoom level at the equator for zoom levels 0 and 24, and uses an `exponential interpolation` expression to scale between these values the same way the tiling system in the map works.
```json
[
For more code examples to add to your maps, see the following articles:
> [!div class="nextstepaction"] > [Use data-driven style expressions](data-driven-style-expressions-web-sdk.md)+
+[Simple Heat Map Layer]: https://samples.azuremaps.com/?search=heat%20map%20layer&sample=simple-heat-map-layer
+[Heat Map Layer Options]: https://samples.azuremaps.com/?search=heat%20map%20layer&sample=heat-map-layer-options
+[Consistent zoomable Heat Map]: https://samples.azuremaps.com/?search=zoom&sample=consistent-zoomable-heat-map
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
Title: Add an Image layer to a map | Microsoft Azure Maps description: Learn how to add images to a map. See how to use the Azure Maps Web SDK to customize image layers and overlay images on fixed sets of coordinates.-- Previously updated : 07/29/2019-++ Last updated : 06/06/2023+ - # Add an image layer to a map
This article shows you how to overlay an image to a fixed set of coordinates. He
The image layer supports the following image formats: -- JPEG-- PNG-- BMP-- GIF (no animations)
+* JPEG
+* PNG
+* BMP
+* GIF (no animations)
## Add an image layer
map.layers.add(new atlas.layer.ImageLayer({
})); ```
-Here's the complete running code sample of the preceding code.
+For a fully functional sample that shows how to overlay an image of a map of Newark, New Jersey from 1922 as an Image layer, see [Simple Image Layer] in the [Azure Maps Samples].
-<br/>
+<!--
<iframe height='500' scrolling='no' title='Simple Image Layer' src='//codepen.io/azuremaps/embed/eQodRo/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/eQodRo/'>Simple Image Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
## Import a KML file as ground overlay
This sample demonstrates how to add KML ground overlay information as an image l
The code uses the static `getCoordinatesFromEdges` function from the [ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer) class. It calculates the four corners of the image using the north, south, east, west, and rotation information of the KML ground overlay.
-<br/>
+For a fully functional sample that shows how to use a KML Ground Overlay as Image Layer, see [KML Ground Overlay as Image Layer] in the [Azure Maps Samples].
+
+<!--
<iframe height='500' scrolling='no' title='KML Ground Overlay as Image Layer' src='//codepen.io/azuremaps/embed/EOJgpj/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/EOJgpj/'>KML Ground Overlay as Image Layer</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
> [!TIP] > Use the `getPixels` and `getPositions` functions of the image layer class to convert between geographic coordinates of the positioned image layer and the local image pixel coordinates. ## Customize an image layer
-The image layer has many styling options. Here's a tool to try them out.
+The image layer has many styling options. For a fully functional sample that shows how the different options of the image layer affect rendering, see [Image Layer Options] in the [Azure Maps Samples].
-<br/>
+<!--
<iframe height='700' scrolling='no' title='Image Layer Options' src='//codepen.io/azuremaps/embed/RqOGzx/?height=700&theme-id=0&default-tab=result' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/RqOGzx/'>Image Layer Options</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
+-->
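To make the positioning concrete, a minimal sketch of image layer options follows (the image URL and coordinate values are hypothetical placeholders):

```json
{
    "url": "myOverlayImage.png",
    "coordinates": [
        [-74.22655, 40.773941],
        [-74.12544, 40.773941],
        [-74.12544, 40.712216],
        [-74.22655, 40.712216]
    ],
    "opacity": 0.8
}
```

The `coordinates` array lists [longitude, latitude] pairs for the four corners of the image, typically in clockwise order starting from the top-left corner.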
## Next steps
See the following articles for more code samples to add to your maps:
> [!div class="nextstepaction"] > [Add a tile layer](./map-add-tile-layer.md)+
+[Simple Image Layer]: https://samples.azuremaps.com/?search=image%20layer&sample=simple-image-layer
+[Azure Maps Samples]: https://samples.azuremaps.com
+[KML Ground Overlay as Image Layer]: https://samples.azuremaps.com/?search=KML&sample=kml-ground-overlay-as-image-layer
+[Image Layer Options]: https://samples.azuremaps.com/?search=image%20layer&sample=image-layer-options
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Windows Server 2008 R2 SP1 | X | X | X |
| Windows Server 2008 R2 | | | X |
| Windows Server 2008 SP2 | | X | |
-| Windows 11 Client Enterprise<br>(including multi-session) and Pro | X<sup>2</sup>, <sup>3</sup> | | |
+| Windows 11 Client and Pro | X<sup>2</sup>, <sup>3</sup> | | |
+| Windows 11 Enterprise<br>(including multi-session) | X<sup>1</sup> | | |
| Windows 10 1803 (RS4) and higher | X<sup>2</sup> | | |
| Windows 10 Enterprise<br>(including multi-session) and Pro<br>(Server scenarios only<sup>1</sup>) | X | X | X |
| Windows 8 Enterprise and Pro<br>(Server scenarios only<sup>1</sup>) | | X | |
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
This article shows you how to create an alert rule. To learn more about alerts,
You create an alert rule by combining: - The resources to be monitored.
+ - The signal or data from the resource.
- Conditions. You then define these elements for the resulting alert actions by using:
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-new-alert-rule.png" alt-text="Screenshot that shows steps to create a new alert rule.":::
+### Select a scope for the alert rule
+ 1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. > [!NOTE]
Alerts triggered by these alert rules contain a payload that uses the [common al
1. Select **Apply**. 1. Select **Next: Condition** at the bottom of the page.+
+### Set the conditions for the alert rule
+ 1. On the **Condition** tab, when you select the **Signal name** field, the most commonly used signals are displayed in the drop-down list. Select one of these popular signals, or select **See all signals** if you want to choose a different signal for the condition. +
+ :::image type="content" source="media/alerts-create-new-alert-rule/alerts-popular-signals.png" alt-text="Screenshot that shows popular signals when creating an alert rule.":::
+ 1. (Optional) If you chose to **See all signals** in the previous step, use the **Select a signal** pane to search for the signal name or filter the list of signals. Filter by:
+    - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+    - **Signal source**: The service sending the signal. The list is prepopulated based on the type of alert rule you selected.
Alerts triggered by these alert rules contain a payload that uses the [common al
Select the **Signal name** and **Apply**. 1. Follow the steps in the tab that corresponds to the type of alert you're creating.
- ### [Metric alert](#tab/metric)
+ #### [Metric alert](#tab/metric)
1. Preview the results of the selected metric signal in the **Preview** section. Select values for the following fields.
Alerts triggered by these alert rules contain a payload that uses the [common al
|Ignore data before|Use this setting to select the date from which to start using the metric historical data for calculating the dynamic thresholds. For example, if a resource was running in testing mode and is moved to production, you may want to disregard the metric behavior while the resource was in testing.| 1. Select **Done**.
- ### [Log alert](#tab/log)
+ #### [Log alert](#tab/log)
> [!NOTE] > If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience).
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule.":::
- 1. (Optional) If you are querying an ADX cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example:
+ 1. (Optional) If you're querying an ADX cluster, Log Analytics can't automatically identify the column with the event timestamp, so we recommend that you add a time range filter to the query. For example:
```azurecli adx(cluster).table | where MyTS >= ago(5m) and MyTS <= now()
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot that shows a preview of a new alert rule.":::
- ### [Activity log alert](#tab/activity-log)
+ #### [Activity log alert](#tab/activity-log)
1. On the **Conditions** pane, select the **Chart period**. 1. The **Preview** chart shows you the results of your selection.
Alerts triggered by these alert rules contain a payload that uses the [common al
|Status|Select the status levels for the alert.| |Event initiated by|Select the user or service principal that initiated the event.|
- ### [Resource Health alert](#tab/resource-health)
+ #### [Resource Health alert](#tab/resource-health)
On the **Conditions** pane, select values for each of these fields:
Alerts triggered by these alert rules contain a payload that uses the [common al
|Previous resource status|Select the previous resource status. Values are **Available**, **Degraded**, **Unavailable**, and **Unknown**.| |Reason type|Select the causes of the Resource Health events. Values are **Platform Initiated**, **Unknown**, and **User Initiated**.|
- ### [Service Health alert](#tab/service-health)
+ #### [Service Health alert](#tab/service-health)
On the **Conditions** pane, select values for each of these fields:
Alerts triggered by these alert rules contain a payload that uses the [common al
From this point on, you can select the **Review + create** button at any time.
+### Set the actions for the alert rule
+ 1. On the **Actions** tab, select or create the required [action groups](./action-groups.md). > [!NOTE]
Alerts triggered by these alert rules contain a payload that uses the [common al
**Example 1**
- This example creates an "Additional Details" tag with data refarding the "window start time" and "window end time".
+ This example creates an "Additional Details" tag with data regarding the "window start time" and "window end time".
- **Name:** "Additional Details"
- **Value:** "Evaluation windowStartTime: \${data.context.condition.windowStartTime}. windowEndTime: \${data.context.condition.windowEndTime}"
Alerts triggered by these alert rules contain a payload that uses the [common al
**Example 2**
- This example add the data regarding the reason of resolving or firing the alert.
+ This example adds the data regarding the reason of resolving or firing the alert.
- **Name:** "Alert \${data.essentials.monitorCondition} reason"
- **Value:** "\${data.context.condition.allOf[0].metricName} \${data.context.condition.allOf[0].operator} \${data.context.condition.allOf[0].threshold} \${data.essentials.monitorCondition}. The value is \${data.context.condition.allOf[0].metricValue}"
Alerts triggered by these alert rules contain a payload that uses the [common al
> [!NOTE]
> The [common schema](alerts-common-schema.md) overwrites custom configurations. Therefore, you can't use both custom properties and the common schema for log alerts.
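As a purely illustrative aside, the `\${data...}` placeholders in the examples above resolve dotted paths against the alert payload. The sketch below shows that substitution in Python; the `render_template` helper and the sample payload are invented for illustration (array indexes such as `allOf[0]` aren't handled here), and this isn't part of any Azure Monitor API.

```python
import re

def render_template(template: str, payload: dict) -> str:
    """Replace ${dotted.path} placeholders with values looked up in a
    nested dict (illustrative only; array indexes aren't supported)."""
    def resolve(match: re.Match) -> str:
        value = payload
        for part in match.group(1).split("."):
            value = value[part]
        return str(value)
    return re.sub(r"\$\{([^}]+)\}", resolve, template)

# Invented stand-in for an alert payload.
payload = {"data": {"essentials": {"monitorCondition": "Fired"}}}
print(render_template("Alert ${data.essentials.monitorCondition} reason", payload))
# Alert Fired reason
```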
+### Set the details for the alert rule
+ 1. On the **Details** tab, define the **Project details**.
    - Select the **Subscription**.
    - Select the **Resource group**.
Alerts triggered by these alert rules contain a payload that uses the [common al
> We're continually adding more regions for regional data processing.
1. Define the **Alert rule details**.
- ### [Metric alert](#tab/metric)
+ #### [Metric alert](#tab/metric)
1. Select the **Severity**.
1. Enter values for the **Alert rule name** and the **Alert rule description**.
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-metric-rule-details-tab.png" alt-text="Screenshot that shows the Details tab when creating a new alert rule.":::
- ### [Log alert](#tab/log)
+ #### [Log alert](#tab/log)
1. Select the **Severity**.
1. Enter values for the **Alert rule name** and the **Alert rule description**.
Alerts triggered by these alert rules contain a payload that uses the [common al
|Mute actions |Select to set a period of time to wait before alert actions are triggered again. If you select this checkbox, the **Mute actions for** field appears to select the amount of time to wait after an alert is fired before triggering actions again.|
|Check workspace linked storage|Select if logs workspace linked storage for alerts is configured. If no linked storage is configured, the rule isn't created.|
- ### [Activity log alert](#tab/activity-log)
+ #### [Activity log alert](#tab/activity-log)
1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. Select the **Region**.
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
- ### [Resource Health alert](#tab/resource-health)
+ #### [Resource Health alert](#tab/resource-health)
1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. Select the **Region**.
1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it.

:::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
- ### [Service Health alert](#tab/service-health)
+ #### [Service Health alert](#tab/service-health)
1. Enter values for the **Alert rule name** and the **Alert rule description**.
1. Select the **Region**.
Alerts triggered by these alert rules contain a payload that uses the [common al
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
+### Finish creating the alert rule
+ 1. On the **Tags** tab, set any required tags on the alert rule resource.

:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot that shows the Tags tab when creating a new alert rule.":::
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
- ### [Metric alert](#tab/metric)
+ #### [Metric alert](#tab/metric)
To create a metric alert rule, use the `az monitor metrics alert create` command. You can see detailed documentation on the metric alert rule create command in the `az monitor metrics alert create` section of the [CLI reference documentation for metric alerts](/cli/azure/monitor/metrics/alert).
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
```azurecli
az monitor metrics alert create -n {nameofthealert} -g {ResourceGroup} --scopes {VirtualMachineResourceID} --condition "avg Percentage CPU > 90" --description {descriptionofthealert}
```
- ### [Log alert](#tab/log)
+ #### [Log alert](#tab/log)
To create a log alert rule that monitors the count of system event errors:
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
> [!NOTE] > Azure CLI support is only available for the `scheduledQueryRules` API version `2021-08-01` and later. Previous API versions can use the Azure Resource Manager CLI with templates as described in the following sections. If you use the legacy [Log Analytics Alert API](./api-alerts.md), you must switch to use the CLI. [Learn more about switching](./alerts-log-api-switch.md).
- ### [Activity log alert](#tab/activity-log)
+ #### [Activity log alert](#tab/activity-log)
To create a new activity log alert rule, use the following commands:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
You can find detailed documentation on the activity log alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
- ### [Resource Health alert](#tab/resource-health)
+ #### [Resource Health alert](#tab/resource-health)
To create a new activity log alert rule with the `Resource Health` category, use the following commands:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
You can create a new alert rule using the [Azure CLI](/cli/azure/get-started-wit
You can find detailed documentation on the alert rule create command in the `az monitor activity-log alert create` section of the [CLI reference documentation for activity log alerts](/cli/azure/monitor/activity-log/alert).
- ### [Service Health alert](#tab/service-health)
+ #### [Service Health alert](#tab/service-health)
To create a new activity log alert rule with the `Service Health` category, use the following commands:
- [az monitor activity-log alert create](/cli/azure/monitor/activity-log/alert#az-monitor-activity-log-alert-create): Create a new activity log alert rule resource.
ARM templates for activity log alerts contain additional properties for the cond
|resourceGroup |Name of the resource group for the affected resource in the activity log event. |
|resourceProvider |For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). For a list that maps resource providers to Azure services, see [Resource providers for Azure services](../../azure-resource-manager/management/resource-providers-and-types.md). |
|status |String describing the status of the operation in the activity event. Possible values are `Started`, `In Progress`, `Succeeded`, `Failed`, `Active`, or `Resolved`. |
-|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a sub-status. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
+|subStatus |Usually, this field is the HTTP status code of the corresponding REST call. This field can also include other strings describing a substatus. Examples of HTTP status codes include `OK` (HTTP Status Code: 200), `No Content` (HTTP Status Code: 204), and `Service Unavailable` (HTTP Status Code: 503), among many others. |
|resourceType |The type of the resource affected by the event. An example is `Microsoft.Resources/deployments`. |

For more information about the activity log fields, see [Azure activity log event schema](../essentials/activity-log-schema.md).
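As an aside, the HTTP status code and reason phrase pairs cited in the `subStatus` row are the standard ones, which can be cross-checked with Python's standard library. This snippet is purely illustrative and not part of the alert API:

```python
from http import HTTPStatus

# http.HTTPStatus carries the standard code/phrase pairs mentioned above.
for code in (200, 204, 503):
    print(code, HTTPStatus(code).phrase)
# 200 OK
# 204 No Content
# 503 Service Unavailable
```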
azure-monitor Alerts Log Api Switch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-api-switch.md
Title: Upgrade legacy rules management to the current Azure Monitor Log Alerts API description: Learn how to switch to the log alerts management to ScheduledQueryRules API-- Last updated 2/23/2022
azure-monitor Java Get Started Supplemental https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-get-started-supplemental.md
For more information, see [Using Azure Monitor Application Insights with Spring
## Java Application servers
+The following sections show how to set the Application Insights Java agent path for different application servers. You can find the configuration options [here](./java-standalone-config.md).
+ ### Tomcat 8 (Linux)

#### Tomcat installed via apt-get or yum
azure-monitor Opentelemetry Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md
const appInsights = new ApplicationInsightsClient(config);
Set the Cloud Role Name and the Cloud Role Instance via [Resource](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/sdk.md#resource-sdk) attributes. Cloud Role Name uses `service.namespace` and `service.name` attributes, although it falls back to `service.name` if `service.namespace` isn't set. Cloud Role Instance uses the `service.instance.id` attribute value. For information on standard attributes for resources, see [Resource Semantic Conventions](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/resource/semantic_conventions/README.md).
-Set Resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` and/or `OTEL_SERVICE_NAME` environment variables. `OTEL_RESOURCE_ATTRIBUTES` takes series of comma-separated key-value pairs. For example, to set the Cloud Role Name to "my-namespace" and set Cloud Role Instance to "my-instance", you can set `OTEL_RESOURCE_ATTRIBUTES` as such:
+Set Resource attributes using the `OTEL_RESOURCE_ATTRIBUTES` and/or `OTEL_SERVICE_NAME` environment variables. `OTEL_RESOURCE_ATTRIBUTES` takes a series of comma-separated key-value pairs. For example, to set the Cloud Role Name to "my-namespace.my-helloworld-service" and set Cloud Role Instance to "my-instance", you can set `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` as such:
```
export OTEL_RESOURCE_ATTRIBUTES="service.namespace=my-namespace,service.instance.id=my-instance"
+export OTEL_SERVICE_NAME="my-helloworld-service"
```
-If you don't set Cloud Role Name via the "service.namespace" Resource Attribute, you can alternatively set the Cloud Role Name via the `OTEL_SERVICE_NAME` environment variable:
+If you don't set the `service.namespace` Resource attribute, you can alternatively set the Cloud Role Name with only the `OTEL_SERVICE_NAME` environment variable or the `service.name` Resource attribute. For example, to set the Cloud Role Name to "my-helloworld-service" and set Cloud Role Instance to "my-instance", you can set `OTEL_RESOURCE_ATTRIBUTES` and `OTEL_SERVICE_NAME` as such:
```
export OTEL_RESOURCE_ATTRIBUTES="service.instance.id=my-instance"
-export OTEL_SERVICE_NAME="my-namespace"
+export OTEL_SERVICE_NAME="my-helloworld-service"
```
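The fallback rule described above can be sketched in a few lines of Python. Both helpers below are hypothetical, written only to illustrate the documented behavior (`OTEL_RESOURCE_ATTRIBUTES` parsing and the `service.namespace`/`service.name` fallback); they aren't part of any OpenTelemetry SDK.

```python
def parse_resource_attributes(raw: str) -> dict:
    """Parse the comma-separated key=value pairs accepted by
    OTEL_RESOURCE_ATTRIBUTES (hypothetical helper for illustration)."""
    return dict(pair.split("=", 1) for pair in raw.split(",") if pair)

def cloud_role_name(attrs: dict) -> str:
    """Combine service.namespace and service.name when the namespace is
    set; otherwise fall back to service.name alone."""
    name = attrs.get("service.name", "")
    namespace = attrs.get("service.namespace")
    return f"{namespace}.{name}" if namespace else name

attrs = parse_resource_attributes(
    "service.namespace=my-namespace,service.instance.id=my-instance"
)
attrs["service.name"] = "my-helloworld-service"  # value of OTEL_SERVICE_NAME
print(cloud_role_name(attrs))        # my-namespace.my-helloworld-service
print(attrs["service.instance.id"])  # my-instance
```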
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Title: Azure Monitor best practices - Analysis and visualizations description: Guidance and recommendations for customizing visualizations beyond standard analysis features in Azure Monitor. -- Previously updated : 05/31/2023++ Last updated : 06/07/2023
-# Analyzing and visualize data
+# Analyze and visualize monitoring data
This article describes built-in features for visualizing and analyzing collected data in Azure Monitor. Visualizations like charts and graphs can help you analyze your monitoring data to drill down on issues and identify patterns. You can create custom visualizations to meet the requirements of different users in your organization.
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize
[Azure Managed Grafana](../managed-grafan) to get started. +
+The [out-of-the-box Grafana Azure alerts dashboard](https://grafana.com/grafana/dashboards/15128-azure-alert-consumption/) allows you to view and consume Azure Monitor alerts for Azure Monitor, your Azure datasources, and Azure Monitor managed service for Prometheus.
+ - For more information on defining Azure Monitor alerts, see [Create a new alert rule](alerts/alerts-create-new-alert-rule.md).
+ - For Azure Monitor managed service for Prometheus, define your alerts using [Prometheus alert rules](alerts/prometheus-alerts.md) that are created as part of a [Prometheus rule group](essentials/prometheus-rule-groups.md), applied on the Azure Monitor workspace.
+ ![Screenshot that shows Grafana visualizations.](media/visualizations/grafana.png)

### Power BI
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize
## Choose the right visualization tool
-|Visualization tool|Benefits|Common use cases|Good fit for|
-|:|:|:|:|
-|[Azure Workbooks](./visualize/workbooks-overview.md)|- Native dashboarding platform in Azure.<br>- Designed for collaborating and troubleshooting.<br>- Out-of-the-box templates and reports.<br>- Fully customizable. |- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.<br>- Share a report with other users in your organization.<br>- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery. | |
-|[Azure dashboards](../azure-portal/azure-portal-dashboards.md)|- Native dashboarding platform in Azure.<br>- Supports at scale deployments.<br>- Supports RBAC.<br>- No added cost|- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.<br>- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md). |Azure/Arc exclusive environments|
-|[Azure Managed Grafana](../managed-grafan)|- Multi-platform, multicloud single pane of glass visualizations.<br>- Out-of-the-box plugins from most monitoring tools and platforms.<br>- Dashboard templates with focus on operations.<br>- Supports portability, multi-tenancy, and flexible RBAC.<br>- Azure managed Grafana provides seamless integration with Azure. |- Combine time-series and event data in a single visualization panel.<br>- Create a dynamic dashboard based on user selection of dynamic variables.<br>- Create a dashboard from a community-created and community-supported template.<br>- Create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises. |- Cloud Native CNCF monitoring.<br>- Best with Prometheus.<br>- Multicloud environments.<br>- Combining with 3rd party monitoring tools.|
-|[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) |- Helps design business centric KPI dashboards for long term trends.<br>- Supports BI analytics with extensive slicing and dicing. <br>- Create rich visualizations.<br>- Benefit from extensive interactivity, including zoom-in and cross-filtering.<br>- Share easily throughout your organization.<br>- Integrate data from multiple data sources.<br>- Experience better performance with results cached in a cube. |Dashboarding for long term trends.|
+|Visualization tool|Benefits|Recommended uses|
+|---|---|---|
+|[Azure Workbooks](./visualize/workbooks-overview.md)|Native Azure dashboarding platform |Use as a tool for engineering and technical teams to visualize and investigate scenarios. |
+| |Autorefresh |Use as a reporting tool for app developers, cloud engineers, and other technical personnel|
+| |Out-of-the-box and public GitHub templates and reports | |
+| |Parameters allow dynamic real time updates | |
+| |Can provide high-level summaries that allow you to select any item for more in-depth data using the selected value in the query| |
+| |Can query more sources than other visualizations| |
+| |Fully customizable | |
+| |Designed for collaborating and troubleshooting | |
+|[Azure dashboards](../azure-portal/azure-portal-dashboards.md)|Native Azure dashboarding platform |For Azure/Arc exclusive environments |
+| |No added cost | |
+| |Supports at scale deployments | |
+| |Can combine a metrics graph and the results of a log query with operational data for related services | |
+| |Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md) | |
+|[Azure Managed Grafana](../managed-grafan)|Multi-platform, multicloud single pane of glass visualizations |For users without Azure access |
+| |Seamless integration with Azure |Use for external visualization experiences, especially for RAG type dashboards in SOC and NOC environments |
+| |Can combine time-series and event data in a single visualization panel |Cloud Native CNCF monitoring |
+| |Can create dynamic dashboards based on user selection of dynamic variables |Multicloud environments |
+| |Prometheus support|Overall Statuses, Up/Down, and high level trend reports for management or executive level users |
+| |Integrates with third party monitoring tools|Use to show status of environments, apps, security, and network for continuous display in Network Operations Center (NOC) dashboards |
+| |Out-of-the-box plugins from most monitoring tools and platforms | |
+| |Dashboard templates with focus on operations | |
+| |Can create a dashboard from a community-created and community-supported template | |
+| |Can create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises | |
+|[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/)|Rich visualizations |Use for external visualizations aimed at management and executive levels |
+| |Supports BI analytics with extensive slicing and dicing |Use to help design business centric KPI dashboards for long term trends |
+| |Integrate data from multiple data sources| |
+| |Results cached in a cube for better performance| |
+| |Extensive interactivity, including zoom-in and cross-filtering| |
+| |Share easily throughout your organization| |
+ ## Other options

Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions might have an extra cost.
azure-monitor Container Insights Enable Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md
After you've enabled monitoring, it might take about 15 minutes before you can v
You'll either download template and parameter files or create your own depending on the authentication mode you're using.
-To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
+To enable [managed identity authentication](container-insights-onboard.md#authentication):
1. Download the template in the [GitHub content file](https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file) and save it as **existingClusterOnboarding.json**.
To enable [managed identity authentication (preview)](container-insights-onboard
- `workspaceResourceId`: Use the resource ID of your Log Analytics workspace.
- `resourceTagValues`: Match the existing tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be *MSCI-\<clusterName\>-\<clusterRegion\>*, and this resource is created in the AKS cluster's resource group. If this is the first time onboarding, you can set arbitrary tag values.
-To enable [managed identity authentication (preview)](container-insights-onboard.md#authentication):
+To enable [managed identity authentication](container-insights-onboard.md#authentication):
1. Save the following JSON as **existingClusterOnboarding.json**.
Use the following procedure if you're not using managed identity authentication.
## Limitations
-- Enabling managed identity authentication (preview) isn't currently supported by using Terraform or Azure Policy.
-- When you enable managed identity authentication (preview), a data collection rule is created with the name *MSCI-\<cluster-region\>-<\cluster-name\>*. Currently, this name can't be modified.
+- When you enable managed identity authentication, a data collection rule is created with the name *MSCI-\<cluster-region\>-<\cluster-name\>*. Currently, this name can't be modified.
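The fixed DCR name format described above can be illustrated with a one-line helper. The function name is invented for illustration; the name itself is generated by the service and can't be changed.

```python
def container_insights_dcr_name(cluster_region: str, cluster_name: str) -> str:
    """Build the data collection rule name in the MSCI-<region>-<name>
    format described above (hypothetical helper for illustration)."""
    return f"MSCI-{cluster_region}-{cluster_name}"

print(container_insights_dcr_name("eastus2", "my-aks-cluster"))
# MSCI-eastus2-my-aks-cluster
```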
## Next steps
azure-monitor Container Insights Enable Arc Enabled Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-arc-enabled-clusters.md
This option uses the following defaults:
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers
```
-To use [managed identity authentication (preview)](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following:
+To use [managed identity authentication](container-insights-onboard.md#authentication), add the `configuration-settings` parameter as in the following:
```azurecli
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type connectedClusters --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
```
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
4. You can now choose the [Log Analytics workspace](../logs/quick-create-workspace.md) to send your metrics and logs data to.
-5. To use managed identity authentication, select the *Use managed identity (preview)* checkbox.
+5. To use managed identity authentication, select the *Use managed identity* checkbox.
6. Select the 'Configure' button to deploy the Azure Monitor Container Insights cluster extension.
az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-n
4. Choose the Log Analytics workspace.
-5. To use managed identity authentication, select the *Use managed identity (preview)* checkbox.
+5. To use managed identity authentication, select the *Use managed identity* checkbox.
6. Select the 'Configure' button to continue.
az k8s-extension show --name azuremonitor-containers --cluster-name <cluster-nam
-## Migrate to managed identity authentication (preview)
-Use the flowing guidance to migrate an existing extension instance to managed identity authentication (preview).
+## Migrate to managed identity authentication
+Use the following guidance to migrate an existing extension instance to managed identity authentication.
## [CLI](#tab/migrate-cli)

First, retrieve the Log Analytics workspace configured for the Container insights extension.
azure-monitor Container Insights Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-onboard.md
If you have a Kubernetes cluster with Windows nodes, review and configure the ne
## Authentication
-Container insights now supports authentication by using managed identity (in preview). This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
+Container insights defaults to managed identity authentication. This secure and simplified authentication model has a monitoring agent that uses the cluster's managed identity to send data to Azure Monitor. It replaces the existing legacy certificate-based local authentication and removes the requirement of adding a *Monitoring Metrics Publisher* role to the cluster.
> [!NOTE]
> Container insights preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available." They're excluded from the service-level agreements and limited warranty. Container insights previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see [Frequently asked questions about Azure Kubernetes Service](../../aks/faq.md).
azure-monitor Access Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/access-api.md
Title: API access and authentication description: Learn how to authenticate and access the Azure Monitor Log Analytics API.-- Last updated 11/28/2022++ # Access the Azure Monitor Log Analytics API
azure-monitor Azure Resource Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/azure-resource-queries.md
Title: Querying logs for Azure resources description: In Log Analytics, queries typically execute in the context of a workspace. A workspace may contain data for many resources, making it difficult to isolate data for a particular resource.-- Last updated 12/07/2021++ # Querying logs for Azure resources
azure-monitor Batch Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/batch-queries.md
Title: Batch queries description: The Azure Monitor Log Analytics API supports batching.-- Last updated 11/22/2021 ++ # Batch queries
azure-monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cache.md
Title: Caching description: To improve performance, responses can be served from a cache. By default, responses are stored for 2 minutes.-- Last updated 08/06/2022++ # Caching
azure-monitor Cross Workspace Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/cross-workspace-queries.md
Title: Cross workspace queries description: The API supports the ability to query across multiple workspaces.-- Last updated 08/06/2022++ # Cross workspace queries
azure-monitor Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/errors.md
Title: Azure Monitor Log Analytics API errors description: This section contains a non-exhaustive list of known common errors that can occur in the Azure Monitor Log Analytics API, their causes, and possible solutions.-- Last updated 11/29/2021++ # Azure Monitor Log Analytics API errors
azure-monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/overview.md
Title: Overview description: This article describes the REST API that makes the data collected by Azure Log Analytics easily available.-- Last updated 02/28/2023++ # Azure Monitor Log Analytics API overview
azure-monitor Prefer Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/prefer-options.md
Title: Prefer options description: The API supports setting some request options using the Prefer header. This section describes how to set each preference and their values.-- Last updated 11/29/2021++ # Prefer options
azure-monitor Register App For Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/register-app-for-token.md
Title: Register an App to request authorization tokens and work with APIs description: How to register an app and assign a role so it can access request a token and work with APIs-- Last updated 01/04/2023++
The following CLI example assigns the `Reader` role to the service principal for
```azurecli az role assignment create --assignee 0a123b56-c987-1234-abcd-1a2b3c4d5e6f --role Reader --scope '\/subscriptions/a1234bcd-5849-4a5d-a2eb-5267eae1bbc7/resourceGroups/rg-001' ```
-For more information on creating a service principal using Azure CLI, see [Create an Azure service principal with the Azure CLI](https://learn.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli)
+For more information on creating a service principal using Azure CLI, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
### [PowerShell](#tab/powershell)

The following sample script demonstrates creating an Azure Active Directory service principal via PowerShell. For a more detailed walkthrough, see [using Azure PowerShell to create a service principal to access resources](../../../active-directory/develop/howto-authenticate-service-principal-powershell.md).
azure-monitor Request Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/request-format.md
Title: Request format description: The Azure Monitor Log Analytics API request format.-- Last updated 11/22/2021++ # Azure Monitor Log Analytics API request format
azure-monitor Response Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/response-format.md
Title: Azure Monitor Log Analytics API response format description: The Azure Monitor Log Analytics API response is JSON that contains an array of table objects.-- Last updated 11/21/2021++ # Azure Monitor Log Analytics API response format
azure-monitor Timeouts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/api/timeouts.md
Title: Timeouts of query executions description: Query execution times can vary widely based on the complexity of the query, the amount of data being analyzed, and the load on the system and workspace at the time of the query.-- Last updated 11/28/2021++ # Timeouts
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-resource-manager Conditional Resource Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/conditional-resource-deployment.md
param vmName string
param location string param logAnalytics string = ''
-resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2023-03-01'' = if (!empty(logAnalytics)) {
+resource vmName_omsOnboarding 'Microsoft.Compute/virtualMachines/extensions@2023-03-01' = if (!empty(logAnalytics)) {
name: '${vmName}/omsOnboarding' location: location properties: {
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-resource-manager Key Vault Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/key-vault-access.md
Title: Use Azure Key Vault when deploying Managed Applications
description: Shows how to access secrets in Azure Key Vault when deploying Managed Applications. Previously updated : 04/14/2023 Last updated : 06/06/2023 # Access Key Vault secret when deploying Azure Managed Applications
This article describes how to configure the Key Vault to work with Managed Appli
## Add service as contributor
-Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. For detailed steps, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+Assign the **Contributor** role to the **Appliance Resource Provider** user at the key vault scope. The **Contributor** role is a _privileged administrator role_ for the role assignment. For detailed steps, go to [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
The **Appliance Resource Provider** is a service principal in your Azure Active Directory's tenant. From the Azure portal, you can verify if it's registered by going to **Azure Active Directory** > **Enterprise applications** and change the search filter to **Microsoft Applications**. Search for _Appliance Resource Provider_. If it's not found, [register](../troubleshooting/error-register-resource-provider.md) the `Microsoft.Solutions` resource provider.
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
backup Backup Vault Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-vault-overview.md
Follow these steps:
1. Sign in to [Azure portal](https://portal.azure.com/).
-2. [Create a new Backup vault](backup-vault-overview.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)**, and choose **Enable**.
+1. [Create a new Backup vault](backup-vault-overview.md#create-backup-vault) or choose an existing Backup vault, and then enable Cross Region Restore by going to **Properties** > **Cross Region Restore (Preview)**, and choose **Enable**.
:::image type="content" source="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png" alt-text="Screenshot shows how to enable Cross Region Restore for PostgreSQL database." lightbox="./media/backup-vault-overview/enable-cross-region-restore-for-postgresql-database.png":::
-3. Go to the Backup vault's **Overview** pane, and then [configure a backup for PostgreSQL database](backup-azure-database-postgresql.md).
+1. Go to the Backup vault's **Overview** pane, and then [configure a backup for PostgreSQL database](backup-azure-database-postgresql.md).
- Once the backup is complete in the primary region, it can take up to *12 hours* for the recovery point in the primary region to get replicated to the secondary region.
+1. Once the backup is complete in the primary region, it can take up to *12 hours* for the recovery point in the primary region to get replicated to the secondary region.
-4. To check the availability of recovery point in the secondary region, go to the **Backup center** > **Backup Instances** > **Filter to Azure Database for PostgreSQL servers**, filter **Instance Region** as *Secondary Region*, and then select the required Backup Instance.
+ To check the availability of recovery point in the secondary region, go to the **Backup center** > **Backup Instances** > **Filter to Azure Database for PostgreSQL servers**, filter **Instance Region** as *Secondary Region*, and then select the required Backup Instance.
:::image type="content" source="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png" alt-text="Screenshot shows how to check availability for the recovery points in the secondary region." lightbox="./media/backup-vault-overview/check-availability-of-recovery-point-in-secondary-region.png":::
- The recovery points available in the secondary region are now listed.
+1. The recovery points available in the secondary region are now listed.
-5. Choose **Restore to secondary region**.
+ Choose **Restore to secondary region**.
:::image type="content" source="./media/backup-vault-overview/initiate-restore-to-secondary-region.png" alt-text="Screenshot shows how to initiate restores to the secondary region." lightbox="./media/backup-vault-overview/initiate-restore-to-secondary-region.png":::
Follow these steps:
:::image type="content" source="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png" alt-text="Screenshot shows how to trigger restores from the respective backup instance." lightbox="./media/backup-vault-overview/trigger-restores-from-respective-backup-instance.png":::
-6. Select **Restore to secondary region** to review the target region selected, and then select the appropriate recovery point and restore parameters.
+1. Select **Restore to secondary region** to review the target region selected, and then select the appropriate recovery point and restore parameters.
-7. Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Jobs workload type** to *Azure Database for PostgreSQL servers* and **Instance Region** to *Secondary Region*.
+1. Once the restore starts, you can monitor the completion of the restore operation under **Backup Jobs** of the Backup vault by filtering **Jobs workload type** to *Azure Database for PostgreSQL servers* and **Instance Region** to *Secondary Region*.
:::image type="content" source="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png" alt-text="Screenshot shows how to monitor the postgresql restore to the secondary region." lightbox="./media/backup-vault-overview/monitor-postgresql-restore-to-secondary-region.png":::
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 05/25/2023 Last updated : 06/01/2023
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
description: Learn how to deploy Azure Bastion using PowerShell.
Previously updated : 06/05/2023 Last updated : 06/06/2023 # Customer intent: As someone with a networking background, I want to deploy Bastion and connect to a VM.
In this article, you create a virtual network (if you don't already have one), d
* [Azure CLI](create-host-cli.md) * [Quickstart - deploy with default settings](quickstart-host-portal.md)
-## Prerequisites
+## Before beginning
-The following prerequisites are required.
+Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
-### Azure subscription
-Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial).
+### PowerShell
-### Azure PowerShell
-### <a name="values"></a>Example values
+### Example values
You can use the following example values when creating this configuration, or you can substitute your own.
This section helps you create a virtual network, subnets, and deploy Azure Basti
> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] >
-1. Create an Azure resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup). A resource group is a logical container into which Azure resources are deployed and managed. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
-
- ```azurepowershell-interactive
- New-AzResourceGroup -Name TestRG1 -Location EastUS
- ```
-
-1. Create a virtual network.
+1. Create a resource group, a virtual network, and a front end subnet to which you'll deploy the VMs that you'll connect to via Bastion. If you're running PowerShell locally, open your PowerShell console with elevated privileges and connect to Azure using the `Connect-AzAccount` command.
```azurepowershell-interactive
- $virtualNetwork = New-AzVirtualNetwork `
- -ResourceGroupName TestRG1 `
- -Location EastUS `
- -Name VNet1 `
- -AddressPrefix 10.1.0.0/16
+ New-AzResourceGroup -Name TestRG1 -Location EastUS
+ $frontendSubnet = New-AzVirtualNetworkSubnetConfig -Name FrontEnd `
+ -AddressPrefix "10.1.0.0/24"
+ $virtualNetwork = New-AzVirtualNetwork `
+ -Name TestVNet1 -ResourceGroupName TestRG1 `
+ -Location EastUS -AddressPrefix "10.1.0.0/16" `
+ -Subnet $frontendSubnet
+ $virtualNetwork | Set-AzVirtualNetwork
```
-1. Set the configuration for the virtual network.
-
- ```azurepowershell-interactive
- $virtualNetwork | Set-AzVirtualNetwork
- ```
-
-1. Configure and set a subnet for your virtual network. This will be the subnet to which you'll deploy a VM. The variable used for *-VirtualNetwork* was set in the previous steps.
-
- ```azurepowershell-interactive
- $subnetConfig = Add-AzVirtualNetworkSubnetConfig `
- -Name 'FrontEnd' `
- -AddressPrefix 10.1.0.0/24 `
- -VirtualNetwork $virtualNetwork
- ```
-
- ```azurepowershell-interactive
- $virtualNetwork | Set-AzVirtualNetwork
- ```
-
-1. Configure and set the Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources. You must create the Azure Bastion subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. The example below also helps you add an Azure Bastion subnet to an existing VNet.
+1. Configure and set the Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources. You must create this subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. The example in the following section helps you add an Azure Bastion subnet to an existing VNet.
[!INCLUDE [Important about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
- Declare the variable.
-
- ```azurepowershell-interactive
- $virtualNetwork = Get-AzVirtualNetwork -Name "VNet1" `
- -ResourceGroupName "TestRG1"
- ```
-
- Add the configuration.
+ Set the variable.
```azurepowershell-interactive
- Add-AzVirtualNetworkSubnetConfig -Name "AzureBastionSubnet" `
- -VirtualNetwork $virtualNetwork -AddressPrefix "10.1.1.0/26" `
+ $vnet = Get-AzVirtualNetwork -Name "TestVNet1" -ResourceGroupName "TestRG1"
```
- Set the configuration.
+ Add the subnet.
```azurepowershell-interactive
- $virtualNetwork | Set-AzVirtualNetwork
+ Add-AzVirtualNetworkSubnetConfig `
+ -Name "AzureBastionSubnet" -VirtualNetwork $vnet `
+ -AddressPrefix "10.1.1.0/26" | Set-AzVirtualNetwork
```
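The address layout used in the preceding steps can be sanity-checked with Python's standard `ipaddress` module. This is illustrative only and not part of the deployment; it just confirms that both subnets fit inside the VNet's address space without overlapping:

```python
import ipaddress

# Address ranges from the example above
vnet = ipaddress.ip_network("10.1.0.0/16")        # TestVNet1
frontend = ipaddress.ip_network("10.1.0.0/24")    # FrontEnd subnet
bastion = ipaddress.ip_network("10.1.1.0/26")     # AzureBastionSubnet

# Both subnets must fall within the VNet and must not overlap each other
assert frontend.subnet_of(vnet) and bastion.subnet_of(vnet)
assert not frontend.overlaps(bastion)

print(bastion.num_addresses)  # 64 addresses in a /26
```

The same check is useful before adding any new subnet to an existing VNet, because `Add-AzVirtualNetworkSubnetConfig` fails if the prefix overlaps an existing subnet.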
-1. Create a public IP address for Azure Bastion. The public IP is the public IP address the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
+1. Create a public IP address for Azure Bastion. The public IP is the public IP address of the Bastion resource on which RDP/SSH will be accessed (over port 443). The public IP address must be in the same region as the Bastion resource you're creating.
```azurepowershell-interactive
- $publicip = New-AzPublicIpAddress -ResourceGroupName "TestRG1" -name "VNet1-ip" -location "EastUS" -AllocationMethod Static -Sku Standard
+ $publicip = New-AzPublicIpAddress -ResourceGroupName "TestRG1" `
+ -name "VNet1-ip" -location "EastUS" `
+ -AllocationMethod Static -Sku Standard
```
-1. Create a new Azure Bastion resource in the AzureBastionSubnet using the [New-AzBastion](/powershell/module/az.network/new-azbastion) command. The following example uses the **Standard SKU**. The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus). If you want to deploy using the Basic SKU, change the -Sku value to "Basic".
+1. Create a new Azure Bastion resource in the AzureBastionSubnet using the [New-AzBastion](/powershell/module/az.network/new-azbastion) command. The following example uses the **Basic SKU**. However, you can also deploy Bastion using the Standard SKU by changing the -Sku value to "Standard". The Standard SKU lets you configure more Bastion features and connect to VMs using more connection types. For more information, see [Bastion SKUs](configuration-settings.md#skus).
```azurepowershell-interactive
- New-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion" `
- -PublicIpAddressRgName "TestRG1" -PublicIpAddressName "VNet1-ip" `
- -VirtualNetworkRgName "TestRG1" -VirtualNetworkName "VNet1" `
- -Sku "Standard"
+ New-AzBastion -ResourceGroupName "TestRG1" -Name "VNet1-bastion" `
+ -PublicIpAddressRgName "TestRG1" -PublicIpAddressName "VNet1-ip" `
+ -VirtualNetworkRgName "TestRG1" -VirtualNetworkName "TestVNet1" `
+ -Sku "Basic"
``` 1. It takes about 10 minutes for the Bastion resources to deploy. You can create a VM in the next section while Bastion deploys to your virtual network. ## <a name="create-vm"></a>Create a VM
-You can create a VM using the [Quickstart: Create a VM using PowerShell](../virtual-machines/windows/quick-create-powershell.md) or [Quickstart: Create a VM using the portal](../virtual-machines/windows/quick-create-portal.md) articles. Be sure you deploy the VM to the virtual network to which you deployed Bastion. The VM you create in this section isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion.
+You can create a VM using the [Quickstart: Create a VM using PowerShell](../virtual-machines/windows/quick-create-powershell.md) or [Quickstart: Create a VM using the portal](../virtual-machines/windows/quick-create-portal.md) articles. Be sure you deploy the VM to the same virtual network to which you deployed Bastion. The VM you create in this section isn't a part of the Bastion configuration and doesn't become a bastion host. You connect to this VM later in this tutorial via Bastion.
The following roles are required for your resources.
## <a name="connect"></a>Connect to a VM
-You can use the [Connection steps](#steps) in the section below to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
+You can use the [Connection steps](#steps) in the following section to connect to your VM. You can also use any of the following articles to connect to a VM. Some connection types require the Bastion [Standard SKU](configuration-settings.md#skus).
[!INCLUDE [Links to Connect to VM articles](../../includes/bastion-vm-connect-article-list.md)]
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
cognitive-services Call Read Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-read-api.md
The **Read** call takes images and documents as its input. They have the followi
* The file size of images must be less than 500 MB (4 MB for the free tier) and dimensions at least 50 x 50 pixels and at most 10000 x 10000 pixels. PDF files do not have a size limit. * The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 image. This corresponds to about 8 font point text at 150 DPI.
+>[!NOTE]
+> You don't need to crop an image for text lines. Send the whole image to the Read API, and it will recognize all the text.
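As an illustration of the input limits listed above, the following sketch validates an image's dimensions and file size before submission. The `is_valid_read_input` helper is hypothetical and not part of any Read API client library:

```python
def is_valid_read_input(width_px, height_px, size_bytes, free_tier=False):
    """Check an image against the Read API input limits described above."""
    # File size must be less than 500 MB (4 MB on the free tier)
    max_size = 4 * 1024**2 if free_tier else 500 * 1024**2
    if size_bytes >= max_size:
        return False
    # Dimensions must be at least 50 x 50 and at most 10000 x 10000 pixels
    if not (50 <= width_px <= 10000 and 50 <= height_px <= 10000):
        return False
    return True

print(is_valid_read_input(1024, 768, 2 * 1024**2))             # True
print(is_valid_read_input(40, 768, 2 * 1024**2))               # False: too narrow
print(is_valid_read_input(1024, 768, 10 * 1024**2, True))      # False: exceeds free tier
```

Checking these limits client-side avoids a round trip that the service would reject anyway.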
+ ## Determine how to process the data (optional) ### Specify the OCR model
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
Previously updated : 06/13/2022 Last updated : 06/05/2023 zone_pivot_groups: programming-languages-speech-sdk # Use pronunciation assessment
-In this article, you'll learn how to evaluate pronunciation with the Speech to text capability through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you'll apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
+In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.
> [!NOTE]
-> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts below, but you must select another programming language for implementation details.
+> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
You can get pronunciation assessment scores for:
- Phonemes in [SAPI](/previous-versions/windows/desktop/ee431828(v=vs.85)#american-english-phoneme-table) or [IPA](https://en.wikipedia.org/wiki/IPA) format > [!NOTE]
-> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale.
->
-> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
->
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
-
+> The syllable group, phoneme name, and spoken phoneme of pronunciation assessment are currently only available for the en-US locale. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
## Configuration parameters
-This table lists some of the key configuration parameters for pronunciation assessment.
-
-| Parameter | Description |
-|--|-|
-| `ReferenceText` | The text that the pronunciation will be evaluated against. |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
-| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels above or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text. Default: `Phoneme`.|
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table.|
-| `ScenarioId` | A GUID indicating a customized point system. |
+> [!NOTE]
+> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details.
You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
do {
::: zone-end
+This table lists some of the key configuration parameters for pronunciation assessment.
+
+| Parameter | Description |
+|--|-|
+| `ReferenceText` | The text that the pronunciation is evaluated against. |
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
+| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels greater than or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text. Default: `Phoneme`.|
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table.|
+| `ScenarioId` | A GUID indicating a customized point system. |
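The documented defaults can be summarized in a plain-Python sketch. This `PronunciationAssessmentSettings` class is illustrative only; it mirrors the parameters in the table, not the Speech SDK's actual `PronunciationAssessmentConfig` type:

```python
from dataclasses import dataclass

@dataclass
class PronunciationAssessmentSettings:
    # Illustrative mirror of the documented parameters, not the SDK class.
    reference_text: str
    grading_system: str = "FivePoint"   # or "HundredMark" for a 0-100 score
    granularity: str = "Phoneme"        # "Phoneme" | "Syllable" | "Word" | "FullText"
    enable_miscue: bool = False         # True allows Omission/Insertion error types

# Only the reference text is required; the rest fall back to the documented defaults
cfg = PronunciationAssessmentSettings(reference_text="good morning")
print(cfg.grading_system, cfg.granularity, cfg.enable_miscue)
```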
+ ## Syllable groups Pronunciation assessment can provide syllable-level assessment results. Grouping in syllables is more legible and aligned with speaking habits, as a word is typically pronounced syllable by syllable rather than phoneme by phoneme.
To request syllable-level results along with phonemes, set the granularity [conf
## Phoneme alphabet format
-For `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
+For the `en-US` locale, the phoneme name is provided together with the score, to help identify which phonemes were pronounced accurately or inaccurately. For other locales, you can only get the phoneme score.
The following table compares example SAPI phonemes with the corresponding IPA phonemes.
|luck|l ah k|l ʌ k| |photosynthesis|f ow t ax s ih n th ax s ih s|f oʊ t ə s ɪ n θ ə s ɪ s|
-To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes will be in SAPI format by default.
+To request IPA phonemes, set the phoneme alphabet to `"IPA"`. If you don't specify the alphabet, the phonemes are in SAPI format by default.
::: zone pivot="programming-language-csharp"
pronunciationAssessmentConfig?.nbestPhonemeCount = 5
## Get pronunciation assessment results
+In the `SpeechRecognizer`, you can specify the language that you're learning or practicing to improve your pronunciation. The default locale is `en-US` if not otherwise specified.
+
+> [!TIP]
+> If you aren't sure which locale to set when a language has multiple locales (such as Spanish), try each locale (such as `es-ES` and `es-MX`) separately. Evaluate the results to determine which locale scores higher for your specific scenario.
+ When speech is recognized, you can request the pronunciation assessment results as SDK objects or a JSON string. ::: zone pivot="programming-language-csharp"
using (var speechRecognizer = new SpeechRecognizer(
var pronunciationAssessmentResultJson = speechRecognitionResult.Properties.GetProperty(PropertyId.SpeechServiceResponse_JsonResult); } ```
-
+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/speech_recognition_samples.cs#LL1086C13-L1086C98).
+ ::: zone-end ::: zone pivot="programming-language-cpp"
auto pronunciationAssessmentResult =
// The pronunciation assessment result as a JSON string auto pronunciationAssessmentResultJson = speechRecognitionResult->Properties.GetProperty(PropertyId::SpeechServiceResponse_JsonResult); ```+
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp#L624).
::: zone-end
speechRecognizer.recognizeOnceAsync((speechRecognitionResult: SpeechSDK.SpeechRe
var pronunciationAssessmentResultJson = speechRecognitionResult.properties.getProperty(SpeechSDK.PropertyId.SpeechServiceResponse_JsonResult); }, {});
+```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/js/node/pronunciationAssessmentContinue.js#LL37C4-L37C52).
-```
-
::: zone-end ::: zone pivot="programming-language-python"
pronunciation_assessment_result = speechsdk.PronunciationAssessmentResult(speech
pronunciation_assessment_result_json = speech_recognition_result.properties.get(speechsdk.PropertyId.SpeechServiceResponse_JsonResult) ```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#LL937C1-L937C1).
+ ::: zone-end
SPXPronunciationAssessmentResult* pronunciationAssessmentResult = [[SPXPronuncia
NSString* pronunciationAssessmentResultJson = [speechRecognitionResult.properties getPropertyByName:SPXSpeechServiceResponseJsonResult]; ```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L862).
+ ::: zone-end ::: zone pivot="programming-language-swift"
let pronunciationAssessmentResult = SPXPronunciationAssessmentResult(speechRecog
let pronunciationAssessmentResultJson = speechRecognitionResult!.properties?.getPropertyBy(SPXPropertyId.speechServiceResponseJsonResult) ```
+To learn how to specify the learning language for pronunciation assessment in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L224).
+ ::: zone-end ::: zone pivot="programming-language-go"
This table lists some of the key pronunciation assessment results.
Pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know: - The phoneme [alphabet](#phoneme-alphabet-format) is IPA. - The [syllables](#syllable-groups) are returned alongside phonemes for the same word. -- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l").
+- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
- There are five `NBestPhonemes` corresponding to the number of [spoken phonemes](#spoken-phoneme) requested. - Within `Phonemes`, the most likely [spoken phoneme](#spoken-phoneme) was `"ə"` instead of the expected phoneme `"ɛ"`. The expected phoneme `"ɛ"` only received a confidence score of 47. Other potential matches received confidence scores of 52, 17, and 2.
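Because `Offset` and `Duration` are measured in 100-nanosecond units, converting either value to seconds is a division by 10,000,000. A quick illustration using the syllable offset mentioned above:

```python
TICKS_PER_SECOND = 10_000_000  # Offset and Duration use 100-nanosecond ticks

def ticks_to_seconds(ticks):
    """Convert a 100-ns tick count from the assessment JSON to seconds."""
    return ticks / TICKS_PER_SECOND

# Starting offset of the second syllable ("loʊ") from the example JSON
print(ticks_to_seconds(11_700_000))  # 1.17 seconds into the audio stream
```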
For how to use Pronunciation Assessment in streaming mode in your own applicatio
::: zone pivot="programming-language-python"
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_sample.py#L915).
::: zone-end
For how to use Pronunciation Assessment in streaming mode in your own applicatio
::: zone pivot="programming-language-objectivec"
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/objective-c/ios/speech-samples/speech-samples/ViewController.m#L831).
+ ::: zone-end ::: zone pivot="programming-language-swift"
+For how to use Pronunciation Assessment in streaming mode in your own application, see [sample code](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/swift/ios/speech-samples/speech-samples/ViewController.swift#L191).
+ ::: zone-end ::: zone pivot="programming-language-go"
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
To improve Speech to text recognition accuracy, customization is available for s
# [Text to speech](#tab/tts)
-The tables in this section summarizes the locales and voices supported for Text to speech. Please see the table footnotes for more details.
+The table in this section summarizes the locales and voices supported for Text to speech. See the table footnotes for more details.
Additional remarks for Text to speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below.
Each prebuilt neural voice model is available at 24kHz and high-fidelity 48kHz.
Please note that the following neural voices are retired. -- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. Starting from October 30, 2021, all requests with previous versions will not succeed.
+- The English (United Kingdom) voice `en-GB-MiaNeural` retired on October 30, 2021. All service requests to `en-GB-MiaNeural` will be redirected to `en-GB-SoniaNeural` automatically as of October 30, 2021. If you're using container Neural TTS, [download](speech-container-ntts.md#get-the-container-image-with-docker-pull) and deploy the latest version. All requests with previous versions won't succeed starting from October 30, 2021.
- The `en-US-JessaNeural` voice is retired and replaced by `en-US-AriaNeural`. If you were using "Jessa" before, convert to "Aria." ### Custom Neural Voice
With the cross-lingual feature (preview), you can transfer your custom neural vo
# [Pronunciation assessment](#tab/pronunciation-assessment)
-The table in this section summarizes the locales supported for Pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service).
+The table in this section summarizes the locales supported for Pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). You should specify the language that you're learning or practicing to improve pronunciation. The default language is set as `en-US`. If you know your target learning language, set the locale accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario.
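As a rough sketch of the accent-comparison idea: assuming a hypothetical `assess_pronunciation(locale)` helper that wraps the Speech SDK call for one accent model and returns an overall score, the final selection reduces to picking the locale with the highest score:

```python
def pick_best_locale(scores: dict) -> str:
    # Choose the accent model whose assessment scored highest.
    return max(scores, key=scores.get)

# Hypothetical scores from running the same audio against two accent models.
scores = {"es-ES": 84.0, "es-MX": 91.5}
print(pick_best_locale(scores))  # es-MX
```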
[!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)]
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md
Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to
## Recent highlights
-* Speech SDK 1.28.0 was released in May 2023.
+* Speech SDK 1.29.0 was released in June 2023.
* Speech to text and text to speech container versions were updated in March 2023. * Some Speech Studio [scenarios](speech-studio-overview.md#speech-studio-scenarios) are available to try without an Azure subscription. * Custom Speech to text container disconnected mode was released in January 2023.
cognitive-services Create Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/document-translation/how-to-guides/create-use-managed-identities.md
# Managed identities for Document Translation
-Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to data compared to SAS URLs.
+Managed identities for Azure resources are service principals that create an Azure Active Directory (Azure AD) identity and specific permissions for Azure managed resources. Managed identities are a safer way to grant access to storage data and replace the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
:::image type="content" source="../media/managed-identity-rbac-flow.png" alt-text="Screenshot of managed identity flow (RBAC).":::
-* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications. Using managed identities replaces the requirement for you to include shared access signature tokens (SAS) with your [source and target URLs](#post-request-body).
+* You can use managed identities to grant access to any resource that supports Azure AD authentication, including your own applications.
* To grant access to an Azure resource, assign an Azure role to a managed identity using [Azure role-based access control (`Azure RBAC`)](../../../../role-based-access-control/overview.md).
cognitive-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-virtual-networks.md
Virtual networks (VNETs) are supported in [regions where Cognitive Services are
> [!NOTE] > If you're using Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you to use the service through the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags:-
+>
> * **AzureActiveDirectory** > * **AzureFrontDoor.Frontend** > * **AzureResourceManager**
cognitive-services Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/business-continuity-disaster-recovery.md
keywords:
# Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service
-Azure OpenAI is available in two regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
+Azure OpenAI is available in multiple regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region.
It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications.
Follow these steps to configure your client to monitor errors:
## BCDR requires custom code
-The recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. This does however, require custom development of this functionality on the client side of your application.
+The recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. This does, however, require custom development of this functionality on the client side of your application.
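A minimal sketch of such client-side failover, with hypothetical endpoint names and a simulated call standing in for a real SDK or REST request:

```python
# Two Azure OpenAI resources in different regions (hypothetical names).
ENDPOINTS = [
    "https://my-openai-eastus.openai.azure.com",      # primary
    "https://my-openai-westeurope.openai.azure.com",  # secondary
]

def with_failover(call, endpoints):
    # Try each regional endpoint in order; re-raise only if all fail.
    last_error = None
    for endpoint in endpoints:
        try:
            return call(endpoint)
        except ConnectionError as exc:  # treated here as a regional outage signal
            last_error = exc
    raise last_error

# Simulated call: the primary region is down, the secondary answers.
def call_endpoint(endpoint):
    if "eastus" in endpoint:
        raise ConnectionError("region unavailable")
    return f"ok from {endpoint}"

print(with_failover(call_endpoint, ENDPOINTS))
```

A production client would also monitor error rates per region and fail back once the primary recovers.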
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
TOKEN=$(curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=
Now use the access token to authenticate to key vault and read a secret. Be sure to substitute the name of your key vault in the URL (*https:\//mykeyvault.vault.azure.net/...*): ```bash
-curl https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=2016-10-01 -H "Authorization: Bearer $TOKEN"
+curl https://mykeyvault.vault.azure.net/secrets/SampleSecret/?api-version=7.4 -H "Authorization: Bearer $TOKEN"
``` The response looks similar to the following, showing the secret. In your code, you would parse this output to obtain the secret. Then, use the secret in a subsequent operation to access another Azure resource.
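A minimal parsing sketch in Python, using a hypothetical response body in the shape returned by the secrets API:

```python
import json

# Hypothetical response body shaped like a Key Vault secrets API reply.
response_body = '{"value": "s3cr3t-value", "id": "https://mykeyvault.vault.azure.net/secrets/SampleSecret/abc123"}'

# The secret itself is in the "value" field.
secret = json.loads(response_body)["value"]
print(secret)  # s3cr3t-value
```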
container-instances Container Instances Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-overview.md
Azure Container Instances also supports executing a command in a running contain
### Hypervisor-level security
-Historically, containers have offered application dependency isolation and resource governance but have not been considered sufficiently hardened for hostile multi-tenant usage. Azure Container Instances guarantees your application is as isolated in a container as it would be in a VM.
+Historically, containers have offered application dependency isolation and resource governance but haven't been considered sufficiently hardened for hostile multi-tenant usage. Azure Container Instances guarantees your application is as isolated in a container as it would be in a VM.
### Customer data
To retrieve and persist state with Azure Container Instances, we offer direct [m
## Linux and Windows containers
-Azure Container Instances can schedule both Windows and Linux containers with the same API. Simply specify the OS type when you create your [container groups](container-instances-container-groups.md).
+Azure Container Instances can schedule both Windows and Linux containers with the same API. You can specify your OS type preference when you create your [container groups](container-instances-container-groups.md).
Some features are currently restricted to Linux containers:
Azure Container Instances supports scheduling of [multi-container groups](contai
Azure Container Instances enables [deployment of container instances into an Azure virtual network](container-instances-vnet.md). When deployed into a subnet within your virtual network, container instances can communicate securely with other resources in the virtual network, including those that are on premises (through [VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../expressroute/expressroute-introduction.md)). ## Confidential container deployment
-Confidential containers on ACI enables you to run containers in a trusted execution environment (TEE) that provides hardware-based confidentiality and integrity protections for your container workloads. Confidential containers on ACI can protect data-in-use and encrypts data being processed in memory. Confidential Containers on ACI is supported as a SKU that you can select when deploying your workload. For more information, see [confidential container groups](./container-instances-confidential-overview.md).
+
+Confidential containers on ACI enable you to run containers in a trusted execution environment (TEE) that provides hardware-based confidentiality and integrity protections for your container workloads. Confidential containers on ACI can protect data-in-use and encrypts data being processed in memory. Confidential containers on ACI are supported as a SKU that you can select when deploying your workload. For more information, see [confidential container groups](./container-instances-confidential-overview.md).
## Considerations
There are default limits that require quota increases. Not all quota increases m
If your container group stops working, we suggest trying to restart your container, checking your application code, or your local network configuration before opening a [support request][azure-support].
-Container Images cannot be larger than 15 GB, any images above this size may cause unexpected behavior: [How large can my container image be?](./container-instances-faq.yml)
+Container images can't be larger than 15 GB; any images above this size may cause unexpected behavior: [How large can my container image be?](./container-instances-faq.yml)
Some Windows Server base images are no longer compatible with Azure Container Instances: [What Windows base OS images are supported?](./container-instances-faq.yml)
There are ports that are reserved for service functionality. We advise you not t
Your container groups may restart due to platform maintenance events. These maintenance events are done to ensure the continuous improvement of the underlying infrastructure: [Container had an isolated restart without explicit user input](./container-instances-faq.yml)
-ACI does not allow [privileged container operations](./container-instances-faq.yml). We advise you to not depend on using the root directory for your scenario
+ACI doesn't allow [privileged container operations](./container-instances-faq.yml). We advise you not to depend on using the root directory for your scenario.
## Next steps
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
> The maximum resources in a region are different depending on your deployment. For example, a region may have a different maximum CPU and memory size in an Azure virtual network deployment than for a general deployment. That same region may also have a different set of maximum values for a deployment with GPU resources. Verify your deployment type before checking the below tables for the maximum values in your region. > [!NOTE]
-> Some regions don't support availability zones (denoted by a 'N/A' in the table below), and some regions have availability zones, but ACI doesn't currently leverage the capability (denoted by an 'N' in the table below). For more information, see [Azure regions with availability zones][az-region-support].
+> Some regions don't support availability zones (denoted by a 'N/A' in the table), and some regions have availability zones, but ACI doesn't currently leverage the capability (denoted by an 'N' in the table). For more information, see [Azure regions with availability zones][az-region-support].
-| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU (preview) |
-| -- | :: | :: | :-: | :--: | :-: | :-: | :-: | :-: |
+| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU (preview) |
+| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | | Australia Southeast | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | Brazil South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
-| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
+| Canada Central | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
| Canada East | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | Central India | 4 | 16 | 4 | 16 | 50 | V100 | N | N |
-| Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
+| Central US | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
| East Asia | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | East US | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y | | East US 2 | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
The following regions and maximum resources are available to container groups wi
| Sweden South | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | Switzerland North | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | Switzerland West | 4 | 16 | N/A | N/A | 50 | N/A | N | N |
-| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
-| UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
+| UAE North | 4 | 16 | 4 | 16 | 50 | N/A | N | N |
+| UK South | 4 | 16 | 4 | 16 | 50 | N/A | Y | N |
| UK West | 4 | 16 | 4 | 16 | 50 | N/A | N | N | | West Central US| 4 | 16 | 4 | 16 | 50 | N/A | N | N | | West Europe | 4 | 16 | 4 | 16 | 50 | K80, P100, V100 | Y | Y |
The following regions and maximum resources are available to container groups wi
## Next steps
-Let the team know if you'd like to see additional regions or increased resource availability at [aka.ms/aci/feedback](https://aka.ms/aci/feedback).
+Let the team know if you'd like to see more regions or increased resource availability at [aka.ms/aci/feedback](https://aka.ms/aci/feedback).
For information on troubleshooting container instance deployment, see [Troubleshoot deployment issues with Azure Container Instances](container-instances-troubleshooting.md).
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
cosmos-db Partial Document Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partial-document-update.md
Partial document update feature improves this experience significantly. The clie
- **Performance improvements**: Avoids extra CPU cycles on the client side, reduces end-to-end latency and network bandwidth. - **Multi-region writes**: Supports automatic and transparent conflict resolution with partial updates on discrete paths within the same document.
-> [!NOTE]
-> _Partial document update_ operation is based on the [RFC spec](https://www.rfc-editor.org/rfc/rfc6902#appendix-A.14). To escape a ~ character you need to add 0 or a 1 to the end.
+> [!NOTE]
+> The *Partial document update* operation is based on the [JSON Patch RFC](https://www.rfc-editor.org/rfc/rfc6902#appendix-A.14). Property names in paths need to escape the `~` and `/` characters as `~0` and `~1`, respectively.
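A small sketch of the escaping rule (per RFC 6901, the JSON Pointer syntax that JSON Patch paths follow):

```python
def escape_segment(segment: str) -> str:
    # RFC 6901: escape '~' first (as ~0), then '/' (as ~1).
    return segment.replace("~", "~0").replace("/", "~1")

# A property literally named "a/b~c" becomes this patch path:
print("/" + escape_segment("a/b~c"))  # /a~1b~0c
```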
An example target JSON document:
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
cosmos-db Concepts Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-cluster.md
Previously updated : 01/30/2023 Last updated : 06/05/2023 # Clusters in Azure Cosmos DB for PostgreSQL
assigned to a zone. (Only [certain
regions](https://azure.microsoft.com/global-infrastructure/geographies/#geographies) support availability zones.)
-If high availability is enabled for the cluster, and a node [fails
+Azure Cosmos DB for PostgreSQL allows you to set a preferred availability zone for a cluster. Typically, you do this to put cluster nodes in the same availability zone as the application and the rest of the application stack components.
+
+If [high availability](./concepts-high-availability.md) is enabled for the cluster, and a node [fails
over](concepts-high-availability.md) to a standby, you may see that its availability zone differs from the other nodes. In this case, the nodes are moved back into the same availability zone together during the next [maintenance
window](concepts-maintenance.md).
## Next steps * Learn to [provision a cluster](quickstart-create-portal.md)
+* Learn about [high availability fundamentals](./concepts-high-availability.md)
cosmos-db Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-high-availability.md
Previously updated : 07/15/2022 Last updated : 06/05/2023 # High availability in Azure Cosmos DB for PostgreSQL
happens within a few minutes, and promoted nodes always have fresh data through
PostgreSQL synchronous streaming replication. All primary nodes in a cluster are provisioned into one availability zone
-for better latency between the nodes. The standby nodes are provisioned into
-another zone. The Azure portal
+for better latency between the nodes. The preferred availability zone allows you to put all cluster nodes in the same availability zone where the application is deployed. This proximity could improve performance further by decreasing app-database latency. The standby nodes are provisioned into
+another availability zone. The Azure portal
[displays](concepts-cluster.md#node-availability-zone) the availability
-zone of each node in a cluster.
+zone of each primary node in a cluster.
Even without HA enabled, each node has its own locally redundant storage (LRS) with three synchronous replicas maintained by Azure
on primary nodes, and fails over to standby nodes with zero data loss.
To take advantage of HA on the coordinator node, database applications need to detect and retry dropped connections and failed transactions. The newly
-promoted coordinator will be accessible with the same connection string.
+promoted coordinator is accessible with the same connection string.
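A minimal retry sketch, with a simulated `connect` standing in for a real database driver call; the key point is that the same connection string keeps working once the standby is promoted:

```python
import time

def connect_with_retry(connect, attempts=5, delay=0.01):
    # Retry dropped connections while a failover is in progress.
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Simulated failover: the first two attempts fail, then the promoted
# coordinator accepts connections on the unchanged connection string.
state = {"calls": 0}
def connect():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("server closed the connection unexpectedly")
    return "connected"

print(connect_with_retry(connect))  # connected
```

Real applications should apply the same pattern to failed transactions, not just to the initial connection.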
## High availability states
for clusters in the Azure portal.
* **Healthy**: HA is enabled and the node is fully replicated to its standby. * **Failover in progress**: A failure was detected on the primary node and
- a failover to standby was initiated. This state will transition into
+ a failover to standby was initiated. This state transitions into
**Creating standby** once failover to the standby node is completed, and the standby becomes the new primary. * **Creating standby**: The previous standby was promoted to primary, and a new standby is being created for it. When the new secondary is ready, this
- state will transition into **Replication in progress**.
+ state transitions into **Replication in progress**.
* **Replication in progress**: The new standby node is provisioned and data synchronization is in progress. Once all data is replicated to the new
- standby, synchronous replication will be enabled between the primary and
- standby nodes, and the nodes' state will transition back to **Healthy**.
+ standby, synchronous replication is enabled between the primary and
+ standby nodes, and the nodes' state transitions back to **Healthy**.
* **No**: HA isn't enabled on this node. ## Next steps -- Learn how to [enable high
- availability](howto-high-availability.md) in a cluster
+- Learn how to [enable high availability](howto-high-availability.md) in a cluster
cosmos-db Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-high-availability.md
Previously updated : 01/30/2023 Last updated : 06/05/2023 # Configure high availability in Azure Cosmos DB for PostgreSQL
Last updated 01/30/2023
Azure Cosmos DB for PostgreSQL provides high availability (HA) to avoid database downtime. With HA enabled, every node in a cluster
-will get a standby. If the original node becomes unhealthy, its standby will be
+gets a standby. If the original node becomes unhealthy, its standby is
promoted to replace it. > [!IMPORTANT] > Because HA doubles the number of servers in the group, it will also double > the cost.
-Enabling HA is possible during cluster creation, or afterward in the
-**Compute + storage** tab for your cluster in the Azure portal. The user
-interface looks similar in either case. Drag the slider for **High
-availability** from NO to YES:
+You can enable HA during cluster creation on the **Scale** page. Once the cluster is provisioned, select the **Enable high availability (HA)** checkbox in the **High availability** tab for your cluster in the Azure portal.
:::image type="content" source="media/howto-high-availability/01-ha-slider.png" alt-text="ha slider"::: Click the **Save** button to apply your selection. Enabling HA can take some
-time as the cluster provisions standbys and streams data to them.
+time as the cluster provisions standby nodes and streams data to them.
-The **Overview** tab for the cluster will list all nodes and their
-standbys, along with a **High availability** column indicating whether HA is
-successfully enabled for each node.
+The **Overview** tab for the cluster lists all nodes along with a **High availability** column indicating whether HA is successfully enabled for each node, and an **Availability zone** column that shows the actual availability zone for each primary cluster node.
:::image type="content" source="media/howto-high-availability/02-ha-column.png" alt-text="the ha column in cluster overview":::
cosmos-db Howto Scale Grow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-scale-grow.md
Title: Scale cluster - Azure Cosmos DB for PostgreSQL
-description: Adjust cluster memory, disk, and CPU resources to deal with increased load.
+ Title: Configure cluster - Azure Cosmos DB for PostgreSQL
+description: Adjust cluster CPU/memory and disk resources to deal with increased load or enable HA for improved availability.
Previously updated : 02/20/2023 Last updated : 06/05/2023 # Scale a cluster in Azure Cosmos DB for PostgreSQL
queries.
You can increase the capabilities of existing nodes. Adjusting compute capacity up and down can be useful for performance experiments, and short- or long-term changes to traffic demands.
-To change the vCores for all worker nodes, on the **Scale** screen, select a new value under **Compute per node**. To adjust the coordinator node's vCores, expand **Coordinator** and select a new value under **Coordinator compute**.
+To change the vCores for all worker nodes, on the **Scale** screen, select a new value under **Compute per node**. To adjust the coordinator's vCores, expand **Coordinator** and select a new value under **Coordinator compute**.
> [!NOTE] > You can scale compute on [cluster read replicas](concepts-read-replicas.md) independent of their primary cluster's compute.
To change the storage amount for all worker nodes, on the **Scale** screen, sele
> [!NOTE] > Once you increase storage and save, you can't decrease the amount of storage.
+## Choose preferred availability zone
+
+You can choose a preferred [availability zone](./concepts-cluster.md#node-availability-zone) for nodes if your cluster is in an Azure region that supports availability zones. If you select a preferred availability zone during cluster provisioning, Azure Cosmos DB for PostgreSQL provisions all cluster nodes into the selected availability zone. If you select or change the preferred availability zone after provisioning, all cluster nodes are moved to the new preferred availability zone during the next [scheduled maintenance](./concepts-maintenance.md).
+
+To select a preferred availability zone for all cluster nodes, on the **Scale** screen, specify a zone in the **Preferred availability zone** list. To let the Azure Cosmos DB for PostgreSQL service select an availability zone for the cluster, choose 'No preference'.
+ ## Next steps - Learn more about cluster [performance options](resources-compute.md).
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 05/22/2023 Last updated : 06/05/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### June 2023
+* General availability: Preferred availability zone (AZ) selection is now enabled in [all Azure Cosmos DB for PostgreSQL regions](./resources-regions.md) that support AZs.
+ * Learn about [cluster node availability zones](./concepts-cluster.md#node-availability-zone) and [how to set preferred availability zone](./howto-scale-grow.md#choose-preferred-availability-zone).
* General availability: The new domain name and FQDN format for cluster nodes. The change applies to newly provisioned clusters only. * See [details](./concepts-node-domain-name.md).
Updates that change cluster internals, such as installing a [new minor PostgreSQ
* General availability: Clusters are now always provisioned with the latest Citus version supported for selected PostgreSQL version. * See [this page](./reference-extensions.md#citus-extension) for the latest supported Citus versions. * See [this page](./concepts-upgrade.md) for information on PostgreSQL and Citus version in-place upgrade.
-* General availability: PgBouncer 1.19.0 is now available in all supported regions.
### April 2023
Updates that change cluster internals, such as installing a [new minor PostgreSQ
### March 2023
-* General availability: Clusters compute [start / stop functionality](./concepts-compute-start-stop.md) is now supported across all configurations.
+* General availability: Cluster compute [start / stop functionality](./concepts-compute-start-stop.md) is now supported across all configurations.
### February 2023
cosmos-db Restore Account Continuous Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/restore-account-continuous-backup.md
az cosmosdb restore \
```
-If `**--enable-public-network**` is not set, restored account is accessible from public network. Please ensure to pass `` False` to the `**--enable-public-network** `` option to prevent public network access for restored account.
+If `--enable-public-network` is not set, restored account is accessible from public network. Please ensure to pass `False` to the `--enable-public-network` option to prevent public network access for restored account.
> [!NOTE] > For restoring with public network access disabled, you'll need to install version 0.23.0 of the cosmosdb-preview CLI extension by executing `az extension update --name cosmosdb-preview`. You also need version 2.17.1 of the CLI.
cosmos-db Vercel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vercel-integration.md
+
+ Title: Vercel integration with Azure Cosmos DB
+description: Integrate web applications using the Vercel platform with Azure Cosmos DB for NOSQL or MongoDB as a data source.
+++++ Last updated : 05/28/2023++
+# Vercel Integration with Azure Cosmos DB
++
+Vercel offers a user-friendly and robust platform for web application development and deployment. This integration improves productivity, as developers can now easily create Vercel applications with a backend database already configured, turning their ideas into working applications faster.
+
+## Getting started with integrating Azure Cosmos DB with Vercel
+
+This documentation is designed for developers seeking to effectively combine the robust capabilities of Azure Cosmos DB - a globally distributed, multi-model database service - with Vercel's high-performance deployment and hosting solution.
+
+This integration enables developers to apply the benefits of a versatile and high-performance NoSQL database, while capitalizing on Vercel's serverless architecture and development platform.
+
+There are two ways to integrate Azure Cosmos DB:
+
+- [Via Vercel Integrations Marketplace](https://vercel.com/integrations/azurecosmosdb)
+- Via Command Line
+
+## Integrate Cosmos DB with Vercel via Integration Marketplace
+
+Use this guide if you have already identified the Vercel project(s) or want to integrate an existing Vercel project with Azure Cosmos DB.
+
+## Prerequisites
+
+- Vercel Account with Vercel Project - [Learn how to create a new Vercel Project](https://vercel.com/docs/concepts/projects/overview#creating-a-project)
+
+- Azure Cosmos DB - [Quickstart: Create an Azure Cosmos DB account](../cosmos-db/nosql/quickstart-portal.md)
+
+- Some basic knowledge of Next.js, React, and TypeScript
+
+## Steps for Integrating Azure Cosmos DB with Vercel
+
+1. Select Vercel projects for the integration with Azure Cosmos DB. After you have the prerequisites ready, visit the Cosmos DB [integrations page on the Vercel marketplace](https://vercel.com/integrations/azurecosmosdb) and select **Add Integration**.
+
+ :::image type="content" source="./media/integrations/vercel/add-integration.png" alt-text="Screenshot shows the Azure Cosmos DB integration page on Vercel's marketplace." lightbox="./media/integrations/vercel/add-integration.png":::
+
+2. Choose **All projects** or **Specific projects** for the integration. In this guide, we proceed by choosing specific projects. Select **Continue**.
+
+ :::image type="content" source="./media/integrations/vercel/continue.png" alt-text="Screenshot shows to select vercel projects." lightbox="./media/integrations/vercel/continue.png":::
+
+3. The next screen shows the required permissions for the integration. Select **Add Integration**.
+
+ :::image type="content" source="./media/integrations/vercel/permissions.png" alt-text="Screenshot shows the permissions required for the integration." lightbox="./media/integrations/vercel/permissions.png":::
+
+4. Log in to Azure using your credentials to select the existing Azure Cosmos DB account for the integration.
+
+ :::image type="content" source="./media/integrations/vercel/sign-in.png" alt-text="Screenshot shows to login to Azure account." lightbox="./media/integrations/vercel/sign-in.png":::
+
+5. Choose a directory, a subscription, and the Azure Cosmos DB account.
+
+6. Verify the Vercel projects.
+
+ :::image type="content" source="./media/integrations/vercel/projects.png" alt-text="Screenshot shows to verify the vercel projects for the integration." lightbox="./media/integrations/vercel/projects.png":::
+
+7. Select **Integrate**.
+
+ :::image type="content" source="./media/integrations/vercel/integrate.png" alt-text="Screenshot shows to confirm the integration." lightbox="./media/integrations/vercel/integrate.png":::
+
+## Integrate Cosmos DB with Vercel via npm & Command Line
+
+1. Execute create-next-app with npm, yarn, or pnpm to bootstrap the example:
+
+ ```bash
+ npx create-next-app --example with-azure-cosmos with-azure-cosmos-app
+
+ yarn create next-app --example with-azure-cosmos with-azure-cosmos-app
+
+ pnpm create next-app --example with-azure-cosmos with-azure-cosmos-app
+ ```
+
+2. Modify pages/index.tsx to add your code.
+
+ Make changes to pages/index.tsx according to your needs. You could check out the code at **lib/cosmosdb.ts** to see how the `@azure/cosmos` JavaScript client is initialized.
+
+3. Push the changes to a GitHub repository.
+
+### Set up environment variables
+
+- COSMOSDB_CONNECTION_STRING - Your Azure Cosmos DB connection string. You can find it in the **Keys** section of your account in the Azure portal.
+
+- COSMOSDB_DATABASE_NAME - Name of the database you plan to use. This should already exist in the Cosmos DB account.
+
+- COSMOSDB_CONTAINER_NAME - Name of the container you plan to use. This should already exist in the previous database.
+
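As an illustration, a small startup helper (hypothetical, not part of the starter template) can fail fast when any of these three variables is missing and split the connection string into the endpoint and key that the `@azure/cosmos` client expects. The function name and shapes below are assumptions for the sketch:

```typescript
interface CosmosEnv {
  endpoint: string;
  key: string;
  databaseName: string;
  containerName: string;
}

// Validates the three documented variables and parses the connection
// string format "AccountEndpoint=...;AccountKey=...;".
function parseCosmosEnv(env: Record<string, string | undefined>): CosmosEnv {
  const conn = env.COSMOSDB_CONNECTION_STRING;
  const databaseName = env.COSMOSDB_DATABASE_NAME;
  const containerName = env.COSMOSDB_CONTAINER_NAME;
  if (!conn || !databaseName || !containerName) {
    throw new Error(
      "Missing one of COSMOSDB_CONNECTION_STRING, COSMOSDB_DATABASE_NAME, COSMOSDB_CONTAINER_NAME"
    );
  }
  // Split on ";" and take only the first "=" in each part, because the
  // AccountKey value itself usually ends with "==".
  const parts = new Map(
    conn.split(";").filter(Boolean).map((p) => {
      const i = p.indexOf("=");
      return [p.slice(0, i), p.slice(i + 1)] as [string, string];
    })
  );
  const endpoint = parts.get("AccountEndpoint");
  const key = parts.get("AccountKey");
  if (!endpoint || !key) throw new Error("Malformed COSMOSDB_CONNECTION_STRING");
  return { endpoint, key, databaseName, containerName };
}
```

In a Next.js app you would typically call such a helper with `process.env` once, in a module like **lib/cosmosdb.ts**, so a misconfigured deployment fails with a clear message instead of at first query.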
+## Integrate Cosmos DB with Vercel using the marketplace template
+
+You could use this [template](https://vercel.com/new/clone?demo-title=CosmosDB%20Starter&demo-description=Starter%20app%20built%20on%20Next.js%20and%20CosmosDB.&demo-url=https://cosmosdb-starter-test.vercel.app/&project-name=CosmosDB%20Starter&repository-name=cosmosdb-starter&repository-url=https%3A%2F%2Fgithub.com%2Fv1k1%2Fcosmosdb-starter&from=templates&integration-ids=oac_mPA9PZCLjkhQGhlA5zntNs0L&env=COSMOSDB_CONNECTION_STRING%2C%E2%80%A2%09COSMOSDB_CONTAINER_NAME) to deploy a starter web app on Vercel with Azure Cosmos DB integration.
+
+1. Choose the GitHub repository where you want to clone the starter repo.
+ :::image type="content" source="./media/integrations/vercel/create-git-repository.png" alt-text="Screenshot to create the repository." lightbox="./media/integrations/vercel/create-git-repository.png":::
+
+2. Select the integration to set up the Cosmos DB connection keys. These steps are described in detail in the previous section.
+
+ :::image type="content" source="./media/integrations/vercel/add-integrations.png" alt-text="Screenshot shows the required permissions." lightbox="./media/integrations/vercel/add-integrations.png":::
+
+3. Set the environment variables for the database name and container name, and finally select **Deploy**.
+
+ :::image type="content" source="./media/integrations/vercel/configure-project.png" alt-text="Screenshot shows the required variables to establish the connection with Azure Cosmos DB." lightbox="./media/integrations/vercel/configure-project.png":::
+
+4. Upon successful completion, the completion page contains a link to the deployed app. You can also go to the Vercel project's dashboard to get the link to your app.
data-factory Data Flow Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-flow-troubleshoot-guide.md
Previously updated : 03/28/2023 Last updated : 06/01/2023 # Troubleshoot mapping data flows in Azure Data Factory
This section lists common error codes and messages reported by mapping data flow
- **Cause**: The user/password is missing. - **Recommendation**: Make sure that you have the right credential settings in the related PostgreSQL linked service.
+### Error code: DF-SAPODATA-InvalidRunMode
+
+- **Message**: Failed to execute dataflow with invalid run mode.
+- **Cause**: Possible causes are:
+ 1. Only the read mode `fullLoad` can be specified when `enableCdc` is false.
+ 1. Only the run mode `incrementalLoad` or `fullAndIncrementalLoad` can be specified when `enableCdc` is true.
+ 1. Only `fullLoad`, `incrementalLoad` or `fullAndIncrementalLoad` can be specified.
+- **Recommendation**: Reconfigure the activity and run again. If the issue persists, contact Microsoft support for further assistance.
+
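The three rules above can be sketched as a small validation check. This is illustrative only — the `enableCdc` flag and run-mode strings come from the error description, not from the actual Data Factory implementation:

```typescript
type RunMode = "fullLoad" | "incrementalLoad" | "fullAndIncrementalLoad";

// Mirrors the three documented rules for DF-SAPODATA-InvalidRunMode.
function isValidRunMode(enableCdc: boolean, runMode: string): boolean {
  const known: RunMode[] = ["fullLoad", "incrementalLoad", "fullAndIncrementalLoad"];
  // Rule 3: only the three known run modes are accepted at all.
  if (!known.includes(runMode as RunMode)) return false;
  // Rule 1: with CDC disabled, only fullLoad is valid.
  if (!enableCdc) return runMode === "fullLoad";
  // Rule 2: with CDC enabled, only the incremental modes are valid.
  return runMode === "incrementalLoad" || runMode === "fullAndIncrementalLoad";
}
```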
+### Error code: DF-SAPODATA-StageLinkedServiceMissed
+
+- **Message**: Failed to execute dataflow when Staging Linked Service is not existed in DSL. Please reconfigure the activity and run again. If issue persists, please contact Microsoft support for further assistance.
+- **Cause**: The staging linked service doesn't exist in DSL.
+- **Recommendation**: Reconfigure the activity and run again. If the issue persists, contact Microsoft support for further assistance.
+
+### Error code: DF-SAPODATA-StageContainerMissed
+
+- **Message**: Container or file system is required for staging storage.
+- **Cause**: No container or file system is specified for the staging storage.
+- **Recommendation**: Specify the container and file system for your staging storage.
+
+### Error code: DF-SAPODATA-StageFolderPathMissed
+
+- **Message**: Folder path is required for staging storage.
+- **Cause**: No folder path is specified for the staging storage.
+- **Recommendation**: Specify the folder path for the staging storage.
+
+### Error code: DF-SAPODATA-ODataServiceOrEntityMissed
+
+- **Message**: Both SAP servicePath and entityName are required in import-schema, preview-data and read data operation.
+- **Cause**: **Service path** and **Entity name** can't be null when importing schema, previewing data or reading data.
+- **Recommendation**: Specify the **Service path** and **Entity name** when importing schema, previewing data or reading data.
+
+### Error code: DF-SAPODATA-TimeoutInvalid
+
+- **Message**: Timeout is invalid, it should be no more than 7 days.
+- **Cause**: The timeout can't exceed 7 days.
+- **Recommendation**: Specify a valid timeout.
+
+### Error code: DF-SAPODATA-ODataServiceMissed
+
+- **Message**: SAP servicePath is required when browsing entity name.
+- **Cause**: The **Service path** can't be null when browsing the entity name.
+- **Recommendation**: Specify the **Service path**.
+
+### Error code: DF-SAPODATA-SystemError
+
+- **Message**: System Error: Failed to get deltaToken from SAP. Please contact Microsoft support for further assistance.
+- **Cause**: Failed to get the delta token from SAP.
+- **Recommendation**: Contact Microsoft support for further assistance.
+
+### Error code: DF-SAPODATA-StageAuthInvalid
+
+- **Message**: Invalid client secret provided
+- **Cause**: The service principal credential of the staging storage is incorrect.
+- **Recommendation**: Test connection in your staging storage linked service, and confirm that the authentication settings in your staging storage are correct.
+
+### Error code: DF-SAPODATA-NotReached
+
+- **Causes and recommendations**: Failed to create an OData connection to the request URL. Different causes may lead to this issue. Check the following table for possible causes and related recommendations.
+
+ | Cause analysis | Recommendation |
+  | :-- | :-- |
+ | Your SAP server is shut down. | Check if your SAP server is started. |
+ | Self-hosted integration runtime proxy issue. | Check your self-hosted integration runtime proxy. |
+  | Incorrect input parameters (for example, a wrong SAP server name or password) | Check your input parameters: SAP server name and password. |
+
+### Error code: DF-SAPODATA-NoneODPService
+
+- **Message**: Current odata service doesn't support extracting ODP data, please enable ODP for the service
+- **Cause**: The current OData service doesn't support extracting ODP data.
+- **Recommendation**: Enable ODP for the service.
+ ### Error code: DF-SAPODP-AuthInvalid - **Message**: SapOdp Name or Password incorrect
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
The Microsoft Security DevOps uses the following Open Source tools:
| Name | Language | License | |--|--|--|
-| [AntiMalware](https://www.microsoft.com/windows/comprehensive-security) | AntiMalware protection in Windows from Windows Defender, that scans source code and breaks the build if malware has been found | Not Open Source |
+| [AntiMalware](https://www.microsoft.com/windows/comprehensive-security) | AntiMalware protection in Windows from Microsoft Defender for Endpoint, which scans for malware and breaks the build if malware is found. This tool scans by default on the windows-latest agent. | Not Open Source |
| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/master/LICENSE) | | [BinSkim](https://github.com/Microsoft/binskim) | Binary--Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [Credscan](detect-exposed-secrets.md) | Credential Scanner (also known as CredScan) is a tool developed and maintained by Microsoft to identify credential leaks such as those in source code and configuration files <br> common types: default passwords, SQL connection strings, Certificates with private keys | Not Open Source |
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Last updated 02/02/2023
# Create rich, interactive reports of Defender for Cloud data
-[Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
+[Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports within the Azure portal. They allow you to tap into multiple data sources from across Azure, and combine them into unified interactive experiences.
Workbooks provide a rich set of capabilities for visualizing your Azure data. For detailed examples of each visualization type, see the [visualizations examples and documentation](../azure-monitor/visualize/workbooks-text-visualizations.md).
To move workbooks that you've built in other Azure services into your Microsoft
1. From the toolbar, select **Edit**.
- :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Editing an Azure Monitor workbook.":::
+ :::image type="content" source="media/custom-dashboards-azure-workbooks/editing-workbooks.png" alt-text="Editing a workbook.":::
1. From the toolbar, select **</>** to enter the Advanced Editor.
You'll find your saved workbook in the **Recently modified workbooks** category.
## Next steps
-This article described Defender for Cloud's integrated Azure Monitor Workbooks page with built-in reports and the option to build your own custom, interactive reports.
+This article described Defender for Cloud's integrated Azure Workbooks page with built-in reports and the option to build your own custom, interactive reports.
+
+- Learn more about [Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md)
-- Learn more about [Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md) - The built-in workbooks pull their data from Defender for Cloud's recommendations. Learn about the many security recommendations in [Security recommendations - a reference guide](recommendations-reference.md)++
defender-for-cloud Defender For Storage Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md
The malware scanning is regional, the scanned content stays within the same regi
The Malware Scanning service requires access to your data to scan it for malware. During service enablement, a new Data Scanner resource called **StorageDataScanner** is created in your Azure subscription. This resource is granted a **Storage Blob Data Owner** role assignment to access and change your data for Malware Scanning and Sensitive Data Discovery.
+### Private Endpoint is supported out-of-the-box
+Malware Scanning in Defender for Storage is supported in storage accounts that use private endpoints while maintaining data privacy.
+🔗[Private endpoints](../private-link/private-endpoint-overview.md) provide secure connectivity to your Azure storage services, eliminating public internet exposure, and are considered a best practice.
+ ## Providing scan results Malware Scanning scan results are available through four methods. After setup, you'll see scan results as **blob index tags** for every uploaded and scanned file in the storage account, and as **Microsoft Defender for Cloud security alerts** when a file is identified as malicious.
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
| Name | Language | License | |--|--|--|
-| [AntiMalware](https://www.microsoft.com/windows/comprehensive-security) | AntiMalware protection in Windows from Windows Defender, that scans source code and breaks the run if malware has been found | Not Open Source |
+| [AntiMalware](https://www.microsoft.com/windows/comprehensive-security) | AntiMalware protection in Windows from Microsoft Defender for Endpoint, which scans for malware and breaks the build if malware is found. This tool scans by default on the windows-latest agent. | Not Open Source |
| [Bandit](https://github.com/PyCQA/bandit) | Python | [Apache License 2.0](https://github.com/PyCQA/bandit/blob/master/LICENSE) | | [BinSkim](https://github.com/Microsoft/binskim) | Binary--Windows, ELF | [MIT License](https://github.com/microsoft/binskim/blob/main/LICENSE) | | [ESlint](https://github.com/eslint/eslint) | JavaScript | [MIT License](https://github.com/eslint/eslint/blob/main/LICENSE) |
Security DevOps uses the following Open Source tools:
- Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window.
+- Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository.
+ ## Configure the Microsoft Security DevOps GitHub action **To setup GitHub action**:
defender-for-cloud Incidents Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents-reference.md
Title: Reference table for all incidents in Microsoft Defender for Cloud description: This article lists the incidents visible in Microsoft Defender for Cloud Previously updated : 06/01/2023 Last updated : 06/07/2023 # Incidents - a reference guide
Last updated 06/01/2023
This article lists the incidents you might get from Microsoft Defender for Cloud and any Microsoft Defender plans you've enabled. The incidents shown in your environment depend on the resources and services you're protecting, and your customized configuration.
-A [security incident](alerts-overview.md#what-are-security-incidents) is a correlation of alerts with an attack story that share an entity. For example, Resource, IP Address, User or share a [kill chain](alerts-reference.md#intentions) patterns.
+A [security incident](alerts-overview.md#what-are-security-incidents) is a correlation of alerts with an attack story that share an entity (for example, a resource, IP address, or user) or a [kill chain](alerts-reference.md#intentions) pattern.
You can select an incident to view all of the alerts that are related to the incident and get more information.
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 05/03/2023 Last updated : 06/07/2023 # Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes - Deprecated functionality
+## December 2022
+
+Updates in December include:
+
+- [Announcing express configuration for vulnerability assessment in Defender for SQL](#announcing-express-configuration-for-vulnerability-assessment-in-defender-for-sql)
+
+### Announcing express configuration for vulnerability assessment in Defender for SQL
+
+The express configuration for vulnerability assessment in Microsoft Defender for SQL provides security teams with a streamlined configuration experience on Azure SQL Databases and Dedicated SQL Pools outside of Synapse Workspaces.
+
+With the express configuration experience for vulnerability assessments, security teams can:
+
+- Complete the vulnerability assessment configuration in the security configuration of the SQL resource, without any additional settings or dependencies on customer-managed storage accounts.
+- Immediately add scan results to baselines so that the status of the finding changes from **Unhealthy** to **Healthy** without rescanning a database.
+- Add multiple rules to baselines at once and use the latest scan results.
+- Enable vulnerability assessment for all Azure SQL Servers when you turn on Microsoft Defender for databases at the subscription-level.
+
+Learn more about [Defender for SQL vulnerability assessment](sql-azure-vulnerability-assessment-overview.md).
+ ## November 2022 Updates in November include:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/06/2023 Last updated : 06/07/2023 # What's new in Microsoft Defender for Cloud?
Updates in June include:
|Date |Update | |||
-|June 6 | [Additional scopes added to existing Azure DevOps Connectors](#additional-scopes-added-to-existing-azure-devops-connectors) |
+| June 7 | [Express configuration for vulnerability assessments in Defender for SQL is now Generally Available](#express-configuration-for-vulnerability-assessments-in-defender-for-sql-is-now-generally-available) |
+|June 6 | [More scopes added to existing Azure DevOps Connectors](#more-scopes-added-to-existing-azure-devops-connectors) |
|June 5 | [Onboarding directly (without Azure Arc) to Defender for Servers is now Generally Available](#onboarding-directly-without-azure-arc-to-defender-for-servers-is-now-generally-available) | |June 4 | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) |
-### Additional scopes added to existing Azure DevOps Connectors
+### Express configuration for vulnerability assessments in Defender for SQL is now Generally Available
-June 6
+June 7, 2023
-Defender for DevOps added the following additional scopes to the Azure DevOps (ADO) application:
+Express configuration for vulnerability assessments in Defender for SQL is now Generally Available. Express configuration provides a streamlined onboarding experience for SQL vulnerability assessments by using a one-click configuration (or an API call). There are no extra settings or dependencies on managed storage accounts.
+
+Check out this [blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-sql-vulnerability-assessment-updates/ba-p/3837732) to learn more about express configuration.
+
+You can learn the differences between [express and classic configuration](sql-azure-vulnerability-assessment-overview.md#what-are-the-express-and-classic-configurations).
+
+### More scopes added to existing Azure DevOps Connectors
+
+June 6, 2023
+
+Defender for DevOps added the following extra scopes to the Azure DevOps (ADO) application:
- **Advance Security management**: `vso.advsec_manage`, which is needed to allow you to enable, disable, and manage GitHub Advanced Security for ADO.
June 5, 2023
Previously, Azure Arc was required to onboard non-Azure servers to Defender for Servers. However, with the latest release you can also onboard your on-premises servers to Defender for Servers using only the Microsoft Defender for Endpoint agent.
-This new method simplifies the onboarding process for customers focused on core endpoint protection and allows you to take advantage of Defender for ServersΓÇÖ consumption-based billing for both cloud and non-cloud assets. The direct onboarding option via Defender for Endpoint is available now, with billing for onboarded machines starting on July 1.
+This new method simplifies the onboarding process for customers focused on core endpoint protection and allows you to take advantage of Defender for Servers' consumption-based billing for both cloud and noncloud assets. The direct onboarding option via Defender for Endpoint is available now, with billing for onboarded machines starting on July 1.
For more information, see [Connect your non-Azure machines to Microsoft Defender for Cloud with Defender for Endpoint](onboard-machines-with-defender-for-endpoint.md).
Agentless scanning for VMs now supports processing of instances with encrypted d
This extended support increases coverage and visibility over your cloud estate without impacting your running workloads. Support for encrypted disks maintains the same zero impact method on running instances. - For new customers enabling agentless scanning in AWS - encrypted disks coverage is built in and supported by default.-- For existing customers that already have an AWS connector with agentless scanning enabled, you'll need to reapply the CloudFormation stack to your onboarded AWS accounts to update and add the new permissions that are required to process encrypted disks. The updated CloudFormation template includes new assignments that allow Defender for Cloud to process encrypted disks.
+- For existing customers that already have an AWS connector with agentless scanning enabled, you need to reapply the CloudFormation stack to your onboarded AWS accounts to update and add the new permissions that are required to process encrypted disks. The updated CloudFormation template includes new assignments that allow Defender for Cloud to process encrypted disks.
You can learn more about the [permissions used to scan AWS instances](concept-agentless-data-collection.md#which-permissions-are-used-by-agentless-scanning).
Learn how to [Find vulnerabilities and collect software inventory with agentless
### Defender for DevOps Pull Request annotations in Azure DevOps repositories now includes Infrastructure as Code misconfigurations
-Defender for DevOps has expanded its Pull Request (PR) annotation coverage in Azure DevOps to include Infrastructure as Code (IaC) misconfigurations that are detected in ARM and Bicep templates.
+Defender for DevOps has expanded its Pull Request (PR) annotation coverage in Azure DevOps to include Infrastructure as Code (IaC) misconfigurations that are detected in Azure Resource Manager and Bicep templates.
Developers can now see annotations for IaC misconfigurations directly in their PRs. Developers can also remediate critical security issues before the infrastructure is provisioned into cloud workloads. To simplify remediation, developers are provided with a severity level, misconfiguration description, and remediation instructions within each annotation.
Microsoft Defender for DevOps has expanded its preview and is now available in t
Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md).
-### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault has been deprecated
+### The built-in policy [Preview]: Private endpoint should be configured for Key Vault has been deprecated
The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) has been deprecated and has been replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P
|--|--|--| | Diagnostic logs in Virtual Machine Scale Sets should be enabled | Enable logs and retain them for up to a year, enabling you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised. | Low |
-## December 2022
-
-Updates in December include:
-
-- [Announcing express configuration for vulnerability assessment in Defender for SQL](#announcing-express-configuration-for-vulnerability-assessment-in-defender-for-sql)
-
-### Announcing express configuration for vulnerability assessment in Defender for SQL
-
-The express configuration for vulnerability assessment in Microsoft Defender for SQL provides security teams with a streamlined configuration experience on Azure SQL Databases and Dedicated SQL Pools outside of Synapse Workspaces.
-
-With the express configuration experience for vulnerability assessments, security teams can:
-
-- Complete the vulnerability assessment configuration in the security configuration of the SQL resource, without any other settings or dependencies on customer-managed storage accounts.
-- Immediately add scan results to baselines so that the status of the finding changes from **Unhealthy** to **Healthy** without rescanning a database.
-- Add multiple rules to baselines at once and use the latest scan results.
-- Enable vulnerability assessment for all Azure SQL Servers when you turn on Microsoft Defender for databases at the subscription-level.
-
-Learn more about [Defender for SQL vulnerability assessment](sql-azure-vulnerability-assessment-overview.md).
- ## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).+
defender-for-iot Ot Deploy Path https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/ot-deploy-path.md
When baseline learning ends, the OT monitoring deployment process is complete, a
> [Turn off learning mode manually](../how-to-manage-individual-sensors.md#turn-off-learning-mode-manually) if you feel that the current alerts in Defender for IoT reflect your network traffic accurately, and learning mode hasn't already ended automatically. >
+## Connect Defender for IoT data to your SIEM
+
+Once Defender for IoT is deployed, stream security alerts and manage OT/IoT incidents by integrating Defender for IoT with your security information and event management (SIEM) platform and your existing SOC workflows and tools.
+Integrate Defender for IoT alerts with your organizational SIEM by [integrating with Microsoft Sentinel](../iot-advanced-threat-monitoring.md) and using the out-of-the-box Microsoft Defender for IoT solution, or by [creating forwarding rules](../how-to-forward-alert-information-to-partners.md) to other SIEM systems.
+Defender for IoT integrates out-of-the-box with Microsoft Sentinel, as well as with [a broad range of SIEM systems](../integrate-overview.md) such as Splunk, IBM QRadar, LogRhythm, and Fortinet.
+
+For more information, see:
+
+- [OT threat monitoring in enterprise SOCs](../concept-sentinel-integration.md)
+- [Tutorial: Connect Microsoft Defender for IoT with Microsoft Sentinel](../iot-solution.md)
+- [Connect on-premises OT network sensors to Microsoft Sentinel](../integrations/on-premises-sentinel.md)
+- [Integrations with Microsoft and partner services](../integrate-overview.md)
+- [Stream Defender for IoT cloud alerts to a partner SIEM](../integrations/send-cloud-data-to-partners.md)
+
+After integrating Defender for IoT alerts with a SIEM, we recommend the following next steps to operationalize OT/IoT alerts and fully integrate them with your existing SOC workflows and tools:
+
+- Identify and define relevant IoT/OT security threats and SOC incidents you would like to monitor based on your specific OT needs and environment.
+
+- Create detection rules and severity levels in the SIEM so that only relevant incidents are triggered, reducing unnecessary noise. For example, you might define PLC code changes performed from unauthorized devices, or outside of work hours, as a high-severity incident due to the high fidelity of this specific alert.
+
+ In Microsoft Sentinel, the Microsoft Defender for IoT solution includes [a set of out-of-the-box detection rules](../iot-advanced-threat-monitoring.md#detect-threats-out-of-the-box-with-defender-for-iot-data), which are built specifically for Defender for IoT data, and help you fine-tune the incidents created in Sentinel.
+
+- Define the appropriate workflow for mitigation, and create automated investigation playbooks for each use case. In Microsoft Sentinel, the Microsoft Defender for IoT solution includes [out-of-the-box playbooks for automated response to Defender for IoT alerts](../iot-advanced-threat-monitoring.md#automate-response-to-defender-for-iot-alerts).
+ ## Next steps Now that you understand the OT monitoring system deployment steps, you're ready to get started!
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
To understand whether a feature is supported in your sensor version, check the r
This version includes bug fixes for stability improvements. -- [New endpoint to send OT sensor logs to Defender for IoT](whats-new.md#new-endpoint-to-send-ot-sensor-logs-to-defender-for-iot)
+- [Improved monitoring and support for OT sensor logs](whats-new.md#improved-monitoring-and-support-for-ot-sensor-logs)
### 22.3.8
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
For more information, see:
|Service area |Updates | |||
-| **OT networks** | **Sensor version 22.3.9**: <br>- [New endpoint to send OT sensor logs to Defender for IoT](#new-endpoint-to-send-ot-sensor-logs-to-defender-for-iot) <br><br> **Sensor versions 22.3.x and higher**: <br>- [Configure Active Directory and NTP settings in the Azure portal](#configure-active-directory-and-ntp-settings-in-the-azure-portal) |
+| **OT networks** | **Sensor version 22.3.9**: <br>- [Improved monitoring and support for OT sensor logs](#improved-monitoring-and-support-for-ot-sensor-logs) <br><br> **Sensor versions 22.3.x and higher**: <br>- [Configure Active Directory and NTP settings in the Azure portal](#configure-active-directory-and-ntp-settings-in-the-azure-portal) |
-### New endpoint to send OT sensor logs to Defender for IoT
+### Improved monitoring and support for OT sensor logs
-In version 22.3.9, we've added a new endpoint to our list of required endpoints to allow OT sensors to send more log data to our support teams. The additional data helps us troubleshoot customer issues, providing faster response times and more targeted solutions and recommendations.
+In version 22.3.9, we've added a new capability to collect logs from the OT sensor through a new endpoint. The additional data helps us troubleshoot customer issues, providing faster response times and more targeted solutions and recommendations. The new endpoint has been added to our list of required endpoints that connect your OT sensors to Azure.
After updating your OT sensors, download the latest list of endpoints and ensure that your sensors can access all endpoints listed.
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
firewall Firewall Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-diagnostics.md
Previously updated : 11/15/2022 Last updated : 06/07/2023 #Customer intent: As an administrator, I want monitor Azure Firewall logs and metrics so that I can track firewall activity.
-# Monitor Azure Firewall logs and metrics
+# Monitor Azure Firewall logs (legacy) and metrics
+
+> [!TIP]
+> For an improved method to work with firewall logs, see [Azure Structured Firewall Logs](firewall-structured-logs.md).
You can monitor Azure Firewall using firewall logs. You can also use activity logs to audit operations on Azure Firewall resources. Using metrics, you can view performance counters in the portal.
You can access some of these logs through the portal. Logs can be sent to [Azure
Before starting, you should read [Azure Firewall logs and metrics](logs-and-metrics.md) for an overview of the diagnostics logs and metrics available for Azure Firewall.
-Additionally, for an improved method to work with firewall logs, see [Azure Structured Firewall Logs (preview)](firewall-structured-logs.md).
- ## Enable diagnostic logging through the Azure portal
-It can take a few minutes for the data to appear in your logs after you complete this procedure to turn on diagnostic logging. If you don't see anything at first, check again in a few more minutes.
+It can take a few minutes for the data to appear in your logs after you complete this procedure to turn on diagnostic logging. If you don't see anything at first, check again in a few more minutes.
1. In the Azure portal, open your firewall resource group and select the firewall. 2. Under **Monitoring**, select **Diagnostic settings**.
- For Azure Firewall, three service-specific logs are available:
+ For Azure Firewall, three service-specific legacy logs are available:
- * AzureFirewallApplicationRule
- * AzureFirewallNetworkRule
- * AzureFirewallDnsProxy
+ * Azure Firewall Application Rule (Legacy Azure Diagnostics)
+ * Azure Firewall Network Rule (Legacy Azure Diagnostics)
+ * Azure Firewall Dns Proxy (Legacy Azure Diagnostics)
3. Select **Add diagnostic setting**. The **Diagnostics settings** page provides the settings for the diagnostic logs.
-5. In this example, Azure Monitor logs stores the logs, so type **Firewall log analytics** for the name.
-6. Under **Log**, select **AzureFirewallApplicationRule**, **AzureFirewallNetworkRule**, and **AzureFirewallDnsProxy** to collect the logs.
+5. Type a name for the diagnostic setting.
+6. Under **Logs**, select **Azure Firewall Application Rule (Legacy Azure Diagnostics)**, **Azure Firewall Network Rule (Legacy Azure Diagnostics)**, and **Azure Firewall Dns Proxy (Legacy Azure Diagnostics)** to collect the logs.
7. Select **Send to Log Analytics** to configure your workspace. 8. Select your subscription.
-9. Select **Save**.
+1. For the **Destination table**, select **Azure diagnostics**.
+1. Select **Save**.
+
+ :::image type="content" source=".\media\firewall-diagnostics\diagnostic-setting-legacy.png" alt-text="Screenshot of Firewall Diagnostic setting.":::
- :::image type="content" source=".\media\tutorial-diagnostics\firewall-diagnostic-settings.png" alt-text="Screenshot of Firewall Diagnostic setting.":::
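The same legacy diagnostic setting can also be scripted. A minimal Azure CLI sketch, assuming hypothetical resource names and placeholder IDs; the three categories match the legacy log checkboxes listed above:

```shell
# Hypothetical names and placeholder IDs; assumes you're already signed in (az login).
# The three legacy categories correspond to the Azure Firewall diagnostic logs above.
az monitor diagnostic-settings create \
  --name "Firewall-log-analytics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/azureFirewalls/<firewall-name>" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[
    {"category": "AzureFirewallApplicationRule", "enabled": true},
    {"category": "AzureFirewallNetworkRule", "enabled": true},
    {"category": "AzureFirewallDnsProxy", "enabled": true}
  ]'
```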
## Enable diagnostic logging by using PowerShell Activity logging is automatically enabled for every Resource Manager resource. Diagnostic logging must be enabled to start collecting the data available through those logs.
frontdoor Front Door Tutorial Rules Engine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-tutorial-rules-engine.md
Title: 'Tutorial: Configure Rules Engine'
+ Title: 'Tutorial: Configure rules engine'
-description: This article provides a tutorial on how to configure Rules Engine in both the Azure portal and CLI.
+description: This article provides a tutorial on how to configure the rules engine in both the Azure portal and Azure CLI.
Previously updated : 09/09/2020 Last updated : 06/06/2023 - # Customer intent: As an IT admin, I want to learn about Front Door and how to configure Rules Engine feature via the Azure portal or Azure CLI.
-# Tutorial: Configure your Rules Engine
+# Tutorial: Configure your rules engine
-This tutorial shows how to create a Rules Engine configuration and your first rule in both Azure portal and CLI.
+This tutorial shows how to create a rules engine configuration and your first rule in both the Azure portal and Azure CLI.
In this tutorial, you learn how to: > [!div class="checklist"]
In this tutorial, you learn how to:
## Prerequisites
-* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md).
+* Before you can complete the steps in this tutorial, you must first create a Front Door. For more information, see [Create a Front Door (classic)](quickstart-create-front-door.md).
-## Configure Rules Engine in Azure portal
-1. Within your Front door resource, go to **Settings** and select **Rule Engine configuration**. Click **Add**, give your configuration a name, and start creating your first Rules Engine configuration.
+## Configure Rules engine in Azure portal
- ![Front Door settings menu](./media/front-door-rules-engine/rules-engine-tutorial-1.png)
+1. Within your Front Door (classic) resource, select **Rule Engine configuration** under *Settings* in the left menu pane. Select **+ Add**, give your configuration a name, and start creating your first Rules Engine configuration.
-1. Click **Add Rule** to create your first rule. Then, by clicking **Add condition** or **Add action** you can define your rule.
+ :::image type="content" source="./media/front-door-rules-engine/rules-engine-tutorial-1.png" alt-text="Screenshot of the rules engine configuration from the Front Door overview page.":::
++
+1. Enter a name for your first rule. Then select **+ Add condition** or **+ Add action** to define your rule.
 > [!NOTE]
 > - To delete a condition or action from a rule, use the trash can on the right-hand side of the specific condition or action.
 > - To create a rule that applies to all incoming traffic, do not specify any conditions.
 > - To stop evaluating rules once the first match condition is met, check **Stop evaluating remaining rule**. If this is checked and all of the match conditions of a particular rule are met, then the remaining rules in the configuration will not be executed.
- > - All paths in Rules Engine are case sensitive.
+ > - All paths in the rules engine configuration are case sensitive.
> - Header names should adhere to [RFC 7230](https://datatracker.ietf.org/doc/html/rfc7230#section-3.2.6).
- ![Rules Engine configuration](./media/front-door-rules-engine/rules-engine-tutorial-4.png)
+ :::image type="content" source="./media/front-door-rules-engine/rules-engine-tutorial-4.png" alt-text="Screenshot of the rules engine configuration page with a single rule.":::
1. Determine the priority of the rules within your configuration by using the Move up, Move down, and Move to top buttons. The priority is in ascending order, meaning the rule first listed is the most important rule.
In this tutorial, you learn how to:
> :::image type="content" source="./media/front-door-rules-engine/version-output.png" alt-text="Screenshot of custom header version output.":::
-1. Once you have created one or more rules, press **Save**. This action creates your Rules Engine configuration.
-
-1. Once you have created one or more configurations, associate a Rules Engine configuration with a Route Rule. While a single configuration can be applied to many route rules, a Route rule may only contain one Rules Engine configuration. To make the association, go to your **Front Door designer** > **Route rules**. Select the Route rule you'd like to add the Rules engine configuration to, go to **Route details** > **Rules engine configuration**, and select the configuration you'd like to associate.
+1. Once you have created one or more rules, select **Save**. This action creates your rules engine configuration.
- ![Configure to a routing rule](./media/front-door-rules-engine/rules-engine-tutorial-5.png)
+1. Once you have created a rules engine configuration, you can associate it with a routing rule. A single configuration can be applied to multiple routing rules, but a routing rule can only have one rules engine configuration. To associate the configuration, go to the **Front Door designer** and select a **Route**. Then select the **Rules engine configuration** to associate with the routing rule.
+ :::image type="content" source="./media/front-door-rules-engine/rules-engine-tutorial-5.png" alt-text="Screenshot of rules engine configuration associate from the routing rule page.":::
## Configure Rules Engine in Azure CLI
-1. If you haven't already, install [Azure CLI](/cli/azure/install-azure-cli). Add ΓÇ£front-doorΓÇ¥ extension:- az extension add --name front-door. Then, login and switch to your subscription az account set --subscription <name_or_Id>.
+1. Install [Azure CLI](/cli/azure/install-azure-cli) and add the front-door extension: `az extension add --name front-door`. Then, sign in and switch to your subscription: `az account set --subscription <name_or_Id>`.
1. Start by creating a Rules Engine - this example shows one rule with one header-based action and one match condition.
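   A minimal sketch of such a rule, assuming hypothetical resource and rule names and using the front-door extension's rules-engine commands; this example adds one response-header action and one match condition:

```shell
# Hypothetical resource names; the command comes from the front-door CLI extension.
az network front-door rules-engine rule create \
  --resource-group myRG \
  --front-door-name myFrontDoor \
  --rules-engine-name myRulesEngine \
  --name addSecurityHeader \
  --priority 1 \
  --action-type ResponseHeader \
  --header-action Overwrite \
  --header-name X-Content-Type-Options \
  --header-value nosniff \
  --match-variable RequestFilenameExtension \
  --operator Contains \
  --match-values txt
```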
In this tutorial, you learn how to:
az network front-door routing-rule update -g {rg} -f {front_door} -n {routing_rule_name} --remove rulesEngine # case sensitive word 'rulesEngine' ```
-For more information, a full list of AFD Rules Engine commands can be found [here](/cli/azure/network/front-door/rules-engine).
+For more information, see the full list of [Azure Front Door (classic) rules engine commands](/cli/azure/network/front-door/rules-engine).
## Clean up resources
-In the preceding steps, you configured and associated Rules Engine configuration to your routing rules. If you no longer want the Rules Engine configuration associated to your Front Door, you can remove the configuration by performing the following steps:
+In the preceding steps, you configured and associated a rules engine configuration with your routing rules. If you no longer want the rules engine configuration associated with your Front Door (classic), you can remove it by performing the following steps:
-1. Disassociate any routing rules from the Rule Engine configuration by clicking the three dots next to Rule Engine name.
+1. Disassociate any routing rules from the rules engine configuration by selecting the three dots next to the rule engine name, and then selecting **Associate routing rule**.
- :::image type="content" source="./media/front-door-rules-engine/front-door-rule-engine-routing-association.png" alt-text="Associate routing rules":::
+ :::image type="content" source="./media/front-door-rules-engine/front-door-rule-engine-routing-association.png" alt-text="Screenshot of the associate routing rules from the menu.":::
-1. Uncheck all routing rules this Rule Engine configuration is associated to and click save.
+1. Uncheck all routing rules this rules engine configuration is associated with, and then select **Save**.
:::image type="content" source="./media/front-door-rules-engine/front-door-routing-rule-association.png" alt-text="Routing rule association":::
In the preceding steps, you configured and associated Rules Engine configuration
In this tutorial, you learned how to:
-* Create a Rule Engine configuration
-* Associate configuration to your Front Door routing rules.
+* Create a rules engine configuration
+* Associate a configuration with a routing rule.
-To learn how to add security headers with Rule Engine, continue to the next tutorial.
+To learn how to add security headers with the rules engine, continue to the next tutorial.
> [!div class="nextstepaction"] > [Security headers with Rules Engine](front-door-security-headers.md)
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 02/21/2023 Last updated : 06/01/2023
The name on each built-in links to the initiative definition source on the
**category** property in **metadata**. To jump to a specific **category**, use the menu on the right side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature.
+## Automanage
++ ## ChangeTrackingAndInventory [!INCLUDE [azure-policy-reference-policysets-changetrackingandinventory](../../../../includes/policy/reference/bycat/policysets-changetrackingandinventory.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policysets-kubernetes](../../../../includes/policy/reference/bycat/policysets-kubernetes.md)]
+## Managed Identity
++ ## Monitoring [!INCLUDE [azure-policy-reference-policysets-monitoring](../../../../includes/policy/reference/bycat/policysets-monitoring.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policysets-security-center](../../../../includes/policy/reference/bycat/policysets-security-center.md)]
+## Tags
++ ## Trusted Launch [!INCLUDE [azure-policy-reference-policysets-trusted-launch](../../../../includes/policy/reference/bycat/policysets-trusted-launch.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 02/21/2023 Last updated : 06/01/2023
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-container-instance](../../../../includes/policy/reference/bycat/policies-container-instance.md)]
+## Container Instances
++ ## Container Registry [!INCLUDE [azure-policy-reference-policies-container-registry](../../../../includes/policy/reference/bycat/policies-container-registry.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-data-lake](../../../../includes/policy/reference/bycat/policies-data-lake.md)]
+## Databricks
++
+## Desktop Virtualization
++ ## Event Grid [!INCLUDE [azure-policy-reference-policies-event-grid](../../../../includes/policy/reference/bycat/policies-event-grid.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-managed-application](../../../../includes/policy/reference/bycat/policies-managed-application.md)]
+## Managed Grafana
++ ## Managed Identity [!INCLUDE [azure-policy-reference-policies-managed-identity](../../../../includes/policy/reference/bycat/policies-managed-identity.md)]
side of the page. Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browse
[!INCLUDE [azure-policy-reference-policies-sql](../../../../includes/policy/reference/bycat/policies-sql.md)]
+## SQL Server
++ ## Storage [!INCLUDE [azure-policy-reference-policies-storage](../../../../includes/policy/reference/bycat/policies-storage.md)]
hdinsight Apache Ambari Troubleshoot Directory Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-directory-alerts.md
Title: Apache Ambari directory alerts in Azure HDInsight
description: Discussion and analysis of possible reasons and solutions for Apache Ambari directory alerts in HDInsight. Previously updated : 05/09/2022 Last updated : 06/07/2023 # Scenario: Apache Ambari directory alerts in Azure HDInsight
hdinsight Apache Ambari Troubleshoot Stale Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-stale-alerts.md
Title: Apache Ambari stale alerts in Azure HDInsight
description: Discussion and analysis of possible reasons and solutions for Apache Ambari stale alerts in HDInsight. Previously updated : 05/10/2022 Last updated : 06/07/2023 # Scenario: Apache Ambari stale alerts in Azure HDInsight
There are various reasons why a health check might not run at its defined interv
* The cluster is busy executing many jobs or services during a period of heavy load.
-* A small number of hosts in the cluster are hosting many components and so are required to run many alerts. If the number of components is large, alert jobs might miss their scheduled intervals.
+A few hosts in the cluster host many components and so are required to run many alerts. If the number of components is large, alert jobs might miss their scheduled intervals.
## Resolution
hdinsight Hdinsight Config For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-config-for-vscode.md
description: Introduce the configuration of Azure HDInsight extension.
Last updated 08/30/2022-+ # Azure HDInsight configuration settings reference
hdinsight Hdinsight Create Non Interactive Authentication Dotnet Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-create-non-interactive-authentication-dotnet-applications.md
Title: Non-interactive authentication .NET application - Azure HDInsight
description: Learn how to create non-interactive authentication Microsoft .NET applications in Azure HDInsight. -+ Last updated 12/23/2022- # Create a non-interactive authentication .NET HDInsight application
hdinsight Hdinsight Go Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-go-sdk-overview.md
Title: Azure HDInsight SDK for Go
description: Reference material for using Azure HDInsight SDK for Go and Apache Hadoop clusters -+ ms.devlang: golang Last updated 06/23/2022
hdinsight Hdinsight Hadoop Customize Cluster Bootstrap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-customize-cluster-bootstrap.md
Title: Customize Azure HDInsight cluster configurations using bootstrap
description: Learn how to customize HDInsight cluster configuration programmatically using .NET, PowerShell, and Resource Manager templates. -+ Last updated 11/17/2022
hdinsight Hdinsight Hadoop Development Using Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-development-using-azure-resource-manager.md
Title: Migrate to Azure Resource Manager tools for HDInsight description: How to migrate to Azure Resource Manager development tools for HDInsight clusters -+ Last updated 12/23/2022
hdinsight Hdinsight Hadoop Migrate Dotnet To Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-migrate-dotnet-to-linux.md
Title: Use .NET with Hadoop MapReduce on Linux-based HDInsight - Azure
description: Learn how to use .NET applications for streaming MapReduce on Linux-based HDInsight. -+ Last updated 08/05/2022- # Migrate .NET solutions for Windows-based HDInsight to Linux-based HDInsight
hdinsight Hdinsight Hadoop Windows Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-windows-tools.md
Title: Use a Windows PC with Hadoop on HDInsight - Azure
description: Work from a Windows PC in Hadoop on HDInsight. Manage and query clusters with PowerShell, Visual Studio, and Linux tools. Develop big data solutions with .NET. -+ Last updated 08/05/2022
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
To subscribe, click the ΓÇ£watchΓÇ¥ button in the banner and watch out for [HDIn
## Release date: May 08, 2023
-This release applies to HDInsight 4.x and 5.x HDInsight release will be available to all regions over several days. This release is applicable for image number **2304202354**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+This release applies to HDInsight 4.x and 5.x. The HDInsight release will be available to all regions over several days. This release is applicable for image number **2304280205**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It may take up to 10 business days for a new release or a new version to be available in all regions.
hdinsight Hdinsight Sdk Dotnet Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-dotnet-samples.md
Title: 'Azure HDInsight: .NET samples' description: Find C# .NET examples on GitHub for common tasks using the HDInsight SDK for .NET. + Last updated 08/30/2022
hdinsight Hdinsight Sdk Java Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sdk-java-samples.md
Title: 'Azure HDInsight: Java samples' description: Find Java examples on GitHub for common tasks using the HDInsight SDK for Java.-+ Last updated 05/30/2022
hdinsight Interactive Query Troubleshoot Tez Hangs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-hangs.md
Title: Apache Tez application hangs in Azure HDInsight
description: Apache Tez application hangs in Azure HDInsight Previously updated : 05/11/2022 Last updated : 06/07/2023 # Scenario: Apache Tez application hangs in Azure HDInsight
hdinsight Interactive Query Troubleshoot Tez View Slow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-tez-view-slow.md
Title: Apache Ambari Tez View loads slowly in Azure HDInsight
description: Apache Ambari Tez View may load slowly or may not load at all in Azure HDInsight Previously updated : 05/26/2022 Last updated : 06/07/2023 # Scenario: Apache Ambari Tez View loads slowly in Azure HDInsight
hdinsight Apache Kafka Producer Consumer Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-producer-consumer-api.md
Title: 'Tutorial: Apache Kafka Producer & Consumer APIs - Azure HDInsight' description: Learn how to use the Apache Kafka Producer and Consumer APIs with Kafka on HDInsight. In this tutorial, you learn how to use these APIs with Kafka on HDInsight from a Java application. -+ Last updated 04/24/2023 #Customer intent: As a developer, I need to create an application that uses the Kafka consumer/producer API with Kafka on HDInsight
hdinsight Kafka Mirrormaker 2 0 Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/kafka-mirrormaker-2-0-guide.md
description: How to use Kafka MirrorMaker 2.0 in data migration/replication and
Previously updated : 05/20/2022 Last updated : 06/07/2023 # How to use Kafka MirrorMaker 2.0 in data migration, replication and the use-cases
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
Title: Apache Kafka REST proxy - Azure HDInsight
description: Learn how to do Apache Kafka operations using a Kafka REST proxy on Azure HDInsight. -+ Last updated 02/17/2023
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
hdinsight Selective Logging Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/selective-logging-analysis.md
Title: Use selective logging with a script action in Azure HDInsight clusters
description: Learn how to use the selective logging feature with a script action to monitor logs. -+ Last updated 07/31/2022
hdinsight Set Up Pyspark Interactive Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/set-up-pyspark-interactive-environment.md
description: Learn how to use the Azure HDInsight Tools for Visual Studio Code t
keywords: VScode,Azure HDInsight Tools,Hive,Python,PySpark,Spark,HDInsight,Hadoop,LLAP,Interactive Hive,Interactive Query -+ Last updated 04/24/2023
hdinsight Apache Spark Custom Library Website Log Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-custom-library-website-log-analysis.md
Title: Analyze website logs with Python libraries in Spark - Azure description: This notebook demonstrates how to analyze log data using a custom library with Spark on Azure HDInsight. -+ Previously updated : 05/09/2022 Last updated : 06/07/2023 # Analyze website logs using a custom Python library with Apache Spark cluster on HDInsight
hdinsight Apache Spark Intellij Tool Failure Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-intellij-tool-failure-debug.md
Title: 'Debug Spark job with IntelliJ Azure Toolkit (preview) - HDInsight'
description: Guidance using HDInsight Tools in Azure Toolkit for IntelliJ to debug applications keywords: debug remotely intellij, remote debugging intellij, ssh, intellij, hdinsight, debug intellij, debugging -+ Last updated 06/23/2022
hdinsight Apache Spark Jupyter Notebook Install Locally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-jupyter-notebook-install-locally.md
Title: Install Jupyter locally and connect to Spark in Azure HDInsight
description: Learn how to install Jupyter Notebook locally on your computer and connect it to an Apache Spark cluster. -+ Last updated 05/06/2022
hdinsight Apache Spark Manage Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-manage-dependencies.md
description: This article provides an introduction of how to manage Spark depend
-+ Last updated 10/18/2022 #Customer intent: As a developer for Apache Spark and Apache Spark in Azure HDInsight, I want to learn how to manage my Spark application dependencies and install packages on my HDInsight cluster.
hdinsight Apache Spark Microsoft Cognitive Toolkit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-microsoft-cognitive-toolkit.md
Title: Microsoft Cognitive Toolkit with Apache Spark - Azure HDInsight
description: Learn how a trained Microsoft Cognitive Toolkit deep learning model can be applied to a dataset using the Spark Python API in an Azure HDInsight Spark cluster. -+ Last updated 12/23/2022
hdinsight Apache Spark Troubleshoot Outofmemory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-troubleshoot-outofmemory.md
Title: OutOfMemoryError exceptions for Apache Spark in Azure HDInsight description: Various OutOfMemoryError exceptions for Apache Spark cluster in Azure HDInsight + Last updated 05/24/2023
hdinsight Safely Manage Jar Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/safely-manage-jar-dependency.md
Title: Safely manage jar dependencies - Azure HDInsight description: This article discusses best practices for managing Java Archive (JAR) dependencies for HDInsight applications.-+ Last updated 05/13/2022
hdinsight Troubleshoot Sqoop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/troubleshoot-sqoop.md
Title: Sqoop import/export command fails for some users in ESP clusters - Azure HDInsight description: 'Apache Sqoop import/export command fails with "Import Failed: java.io.IOException: The ownership on the staging directory /user/yourusername/.staging is not as expected" error for some users in Azure HDInsight ESP cluster' + Last updated 04/26/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
healthcare-apis Dicomweb Standard Apis Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicomweb-standard-apis-python.md
description: This tutorial describes how to use DICOMweb Standard APIs with Pyth
+ Last updated 02/15/2022
healthcare-apis Deploy Choose Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md
description: Learn about the different methods for deploying the MedTech service
-+ Last updated 04/28/2023
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Title: Get started with the MedTech service - Azure Health Data Services
-description: This article describes the basic steps for deploying the MedTech service.
+description: Learn the basic steps for deploying the MedTech service.
Previously updated : 05/23/2023 Last updated : 06/06/2023
> [!NOTE]
> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-This article and diagram outlines the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you analyze the MedTech service deployment options and determine which deployment method is best for you.
+This article and diagram outline the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you to assess the [MedTech service deployment methods](deploy-choose-method.md) and determine which deployment method is best for you.
-As a prerequisite, you need an Azure subscription and have been granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, and REST API scripts.
-
+As a prerequisite, you need an Azure subscription and the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts.
> [!TIP]
-> See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simply and automate the deployment of the MedTech service.
+> See the MedTech service article, [Choose a deployment method for the MedTech service](deploy-choose-method.md), for a description of the different deployment methods that can help to simplify and automate the deployment of the MedTech service.
+ ## Deploy resources
After you obtain the required subscription prerequisites, the first step is to d
* Azure resource group.
* Azure Event Hubs namespace and event hub.
-* Azure Health Data services workspace.
+* Azure Health Data Services workspace.
* Azure Health Data Services FHIR service.

Once the prerequisite resources are available, deploy:
Deploy a [resource group](../../azure-resource-manager/management/manage-resourc
Deploy an Event Hubs namespace into the resource group. Event Hubs namespaces are logical containers for event hubs. Once the namespace is deployed, you can deploy an event hub, which the MedTech service reads device messages from. For information about deploying Event Hubs namespaces and event hubs, see [Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md).
-### Deploy a workspace
+### Deploy an Azure Health Data Services workspace
- Deploy a [workspace](../workspace-overview.md). After you create a workspace using the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed from the workspace.
+ Deploy an [Azure Health Data Services workspace](../workspace-overview.md). After you create an Azure Health Data Services workspace using the [Azure portal](../healthcare-apis-quickstart.md), a FHIR service and MedTech service can be deployed from the Azure Health Data Services workspace.
### Deploy a FHIR service
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
Previously updated : 04/28/2023 Last updated : 06/07/2023
In this article, learn how to enable diagnostic settings for the MedTech service to:
-> [!div class="checklist"]
-> * Create a diagnostic setting to export logs and metrics for audit, analysis, or troubleshooting of the MedTech service.
-> * Use the Azure Log Analytics workspace to view the MedTech service logs.
-> * Access the MedTech service pre-defined Azure Log Analytics queries.
+* Create a diagnostic setting to export logs and metrics for audit, analysis, or troubleshooting of the MedTech service.
+* Use the Azure Log Analytics workspace to view the MedTech service logs.
+* Access the MedTech service pre-defined Azure Log Analytics queries.
## Create a diagnostic setting for the MedTech service
If you choose to include your Log Analytics workspace as a destination option fo
> [!TIP]
> To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
>
-> To learn how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot MedTech service errors](troubleshoot-errors.md).
+> For assistance troubleshooting MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
## Accessing the MedTech service pre-defined Azure Log Analytics queries
The MedTech service comes with pre-defined queries that can be used anytime in y
> [!TIP]
> To learn how to use the Log Analytics workspace, see [Azure Log Analytics workspace](../../azure-monitor/logs/log-analytics-workspace-overview.md).
>
-> To learn how to troubleshoot the MedTech service error messages and conditions, see [Troubleshoot MedTech service errors](troubleshoot-errors.md).
+> For assistance troubleshooting MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
## Next steps
healthcare-apis How To Use Calculatedcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-templates.md
Previously updated : 06/01/2023 Last updated : 06/07/2023
The resulting normalized message will look like this after the normalization sta
```

> [!TIP]
-> For assistance fixing common MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md).
+> For assistance troubleshooting MedTech service deployment errors, see [Troubleshoot MedTech service deployment errors](troubleshoot-errors-deployment.md).
>
-> For assistance fixing MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
+> For assistance troubleshooting MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
## Next steps
hpc-cache Custom Flush Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/custom-flush-script.md
Title: Use a python library to customize file write-back
description: Advanced file write-back with Azure HPC Cache + Last updated 07/07/2022
internet-analyzer Internet Analyzer Embed Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-embed-client.md
+ Last updated 10/16/2019
-# Customer intent: As someone interested in creating an Internet Analyzer resource, I want to learn how to install the JavaScript client, which is necessary to run tests.
-
+# Customer intent: As someone interested in creating an Internet Analyzer resource, I want to learn how to install the JavaScript client, which is necessary to run tests.
# Embed the Internet Analyzer client
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
+The telemetry, properties, and commands that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md).
iot-central Concepts Device Implementation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-implementation.md
Title: Device implementation
description: This article introduces the key concepts and best practices for implementing a device that connects to your IoT Central application. Previously updated : 02/13/2023 Last updated : 06/06/2023
A device model is defined by using the [DTDL V2](https://github.com/Azure/opendi
- The properties the device can receive from IoT Central. Optionally, you can mark a property as writable. For example, IoT Central sends a target temperature as a double to a device.
- The commands a device responds to. The definition includes the name of the command, and the names and data types of any parameters. For example, a device responds to a reboot command that specifies how many seconds to wait before rebooting.
+> [!NOTE]
+> IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
+
A DTDL model can be a _no-component_ or a _multi-component_ model:

- No-component model: A simple model doesn't use embedded or cascaded components. All the telemetry, properties, and commands are defined in a single _root component_. For an example, see the [Thermostat](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/samples/Thermostat.json) model.
A device should follow the IoT Plug and Play conventions when it exchanges data
> [!NOTE]
> Currently, IoT Central does not fully support the DTDL **Array** and **Geospatial** data types.
-To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
- To learn more about the IoT Plug and Play conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
+To learn more about the format of the JSON messages that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
+
### Device SDKs

Use one of the [Azure IoT device SDKs](../../iot-hub/iot-hub-devguide-sdks.md#azure-iot-hub-device-sdks) to implement the behavior of your device. The code should:
Communication protocols that a device can use to connect to IoT Central include
If your device can't use any of the supported protocols, use Azure IoT Edge to do protocol conversion. IoT Edge supports other intelligence-on-the-edge scenarios to offload processing from the Azure IoT Central application.
+## Telemetry timestamps
+
+By default, IoT Central uses the message enqueued time when it displays telemetry on dashboards and charts. Message enqueued time is set internally when IoT Central receives the message from the device.
+
+A device can set the `iothub-creation-time-utc` property when it creates a message to send to IoT Central. If this property is present, IoT Central uses it when it displays telemetry on dashboards and charts.
+
+You can export both the enqueued time and the `iothub-creation-time-utc` property when you export telemetry from your IoT Central application.
+
+To learn more about message properties, see [System Properties of device-to-cloud IoT Hub messages](../../iot-hub/iot-hub-devguide-messages-construct.md#system-properties-of-d2c-iot-hub-messages).
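To make the timestamp behavior concrete, here's a minimal Python sketch of a device stamping `iothub-creation-time-utc` on an outgoing message. The `build_message` helper and the plain-dict property bag are hypothetical stand-ins for an SDK message type; only the property name itself comes from the text above.

```python
from datetime import datetime, timezone
import json

def build_message(payload: dict) -> dict:
    """Hypothetical stand-in for an SDK message: a body plus a property bag.

    Stamping iothub-creation-time-utc lets IoT Central chart the telemetry
    at the time the reading was taken, not the time it was enqueued.
    """
    # ISO 8601 UTC timestamp, e.g. 2023-06-07T12:34:56.789Z
    created = datetime.now(timezone.utc).isoformat(timespec="milliseconds")
    return {
        "body": json.dumps(payload),
        "properties": {
            "iothub-creation-time-utc": created.replace("+00:00", "Z"),
        },
    }

msg = build_message({"temperature": 21.5})
print(msg["properties"]["iothub-creation-time-utc"])
```

If the property is omitted, IoT Central simply falls back to the enqueued time, so stamping it only matters when devices buffer readings before sending.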
+
## Best practices

These recommendations show how to implement devices to take advantage of the [built-in high availability, disaster recovery, and automatic scaling](concepts-faq-scalability-availability.md) in IoT Central.
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
Title: What are device templates in Azure IoT Central
description: Device templates let you specify the behavior of the devices connected to your application. They also define a UI for the device in IoT Central. Previously updated : 06/03/2022 Last updated : 06/05/2023
A device template in Azure IoT Central is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. For example, the device template defines the telemetry that a device sends so that IoT Central can create visualizations that use the correct units and data types.
-A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
A device template includes the following sections:
IoT Central can automatically assign a device to a device template when the devi
1. If the device template is already published in the IoT Central application, the device is assigned to the device template.
1. If the device template isn't already published in the IoT Central application, IoT Central looks for the device model in the [public model repository](https://github.com/Azure/iot-plugandplay-models). If IoT Central finds the model, it uses it to generate a basic device template.
-1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassigned**. An operator can either create a device template for the device and then migrate the unassigned device to the new device template, or [autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
+1. If IoT Central doesn't find the model in the public model repository, the device is marked as **Unassigned**. An operator can:
+
+ - Create a device template for the device and then migrate the unassigned device to the new device template.
+ - [Autogenerate a device template](howto-set-up-template.md#autogenerate-a-device-template) based on the data the device sends.
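The assignment flow above can be sketched as a small decision function. This is an illustrative model only; `published_templates` and `public_models` are hypothetical lookups, not IoT Central APIs.

```python
# A schematic model of the template-assignment flow described above.

def assign_device(model_id, published_templates, public_models):
    # 1. The model ID matches a template already published in the application.
    if model_id in published_templates:
        return f"assigned:{published_templates[model_id]}"
    # 2. Otherwise, look for the model in the public model repository and
    #    generate a basic device template from it.
    if model_id in public_models:
        return f"generated:{model_id}"
    # 3. Otherwise the device is marked as unassigned until an operator acts.
    return "unassigned"

print(assign_device("dtmi:com:example:Thermostat;2",
                    {"dtmi:com:example:Thermostat;2": "Thermostat"}, {}))
```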
The following screenshot shows you how to view the model ID of a device template in IoT Central. In a device template, select a component, and then select **Edit identity**:
To learn more about the DPS payload, see the sample code used in the [Tutorial:
## Device models
-A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can import a JSON file that defines a complete device model or individual interface into a device template, or use the web UI in IoT Central to create or edit a device model.
-
-To learn more about editing a device model, see [Edit an existing device template](howto-edit-device-template.md)
-
-A solution developer can also export a JSON file from the device template that contains a complete device model or individual interface. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
+A device model defines how a device interacts with your IoT Central application. The device developer must make sure that the device implements the behaviors defined in the device model so that IoT Central can monitor and manage the device. A device model is made up of one or more _interfaces_, and each interface can define a collection of _telemetry_ types, _device properties_, and _commands_. A solution developer can:
-The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. To learn more, see [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
+- Import a JSON file that defines a complete device model or individual interface into a device template.
+- Use the web UI in IoT Central to create or edit a device model.
-A typical IoT device is made up of:
+> [!NOTE]
+> IoT Central accepts any valid JSON payload from a device, but it can only use the data for visualizations if it matches a definition in the device model. To export data that doesn't match a definition, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
-- Custom parts, which are the things that make your device unique.
-- Standard parts, which are things that are common to all devices.
+To learn more about editing a device model, see [Edit an existing device template](howto-edit-device-template.md).
-These parts are called _interfaces_ in a device model. Interfaces define the details of each part your device implements. Interfaces are reusable across device models. In DTDL, a component refers to another interface, which may be defined in a separate DTDL file or in a separate section of the file.
+A solution developer can also export a JSON file from the device template that contains a complete device model or individual interface. A device developer can use this JSON document to understand how the device should communicate with the IoT Central application.
-The following example shows the outline of device model for a [temperature controller device](https://github.com/Azure/iot-plugandplay-models/blob/main/dtmi/com/example/temperaturecontroller-2.json). The root component includes definitions for `workingSet`, `serialNumber`, and `reboot`. The device model also includes two `thermostat` components and a `deviceInformation` component. The contents of the three components have been removed for the sake of brevity:
+The JSON file that defines the device model uses the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central expects the JSON file to contain the device model with the interfaces defined inline, rather than in separate files. Models created in IoT Central have the context `dtmi:iotcentral:context;2` defined to indicate that the model was created in IoT Central:
```json
-[
- {
- "@context": [
- "dtmi:iotcentral:context;2",
- "dtmi:dtdl:context;2"
- ],
- "@id": "dtmi:com:example:TemperatureController;2",
- "@type": "Interface",
- "contents": [
- {
- "@type": [
- "Telemetry",
- "DataSize"
- ],
- "description": {
- "en": "Current working set of the device memory in KiB."
- },
- "displayName": {
- "en": "Working Set"
- },
- "name": "workingSet",
- "schema": "double",
- "unit": "kibibit"
- },
- {
- "@type": "Property",
- "displayName": {
- "en": "Serial Number"
- },
- "name": "serialNumber",
- "schema": "string",
- "writable": false
- },
- {
- "@type": "Command",
- "commandType": "synchronous",
- "description": {
- "en": "Reboots the device after waiting the number of seconds specified."
- },
- "displayName": {
- "en": "Reboot"
- },
- "name": "reboot",
- "request": {
- "@type": "CommandPayload",
- "description": {
- "en": "Number of seconds to wait before rebooting the device."
- },
- "displayName": {
- "en": "Delay"
- },
- "name": "delay",
- "schema": "integer"
- }
- },
- {
- "@type": "Component",
- "displayName": {
- "en": "thermostat1"
- },
- "name": "thermostat1",
- "schema": "dtmi:com:example:Thermostat;2"
- },
- {
- "@type": "Component",
- "displayName": {
- "en": "thermostat2"
- },
- "name": "thermostat2",
- "schema": "dtmi:com:example:Thermostat;2"
- },
- {
- "@type": "Component",
- "displayName": {
- "en": "DeviceInfo"
- },
- "name": "deviceInformation",
- "schema": "dtmi:azure:DeviceManagement:DeviceInformation;1"
- }
- ],
- "displayName": {
- "en": "Temperature Controller"
- }
- },
- {
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:Thermostat;2",
- "@type": "Interface",
- "displayName": "Thermostat",
- "description": "Reports current temperature and provides desired temperature control.",
- "contents": [
- ...
- ]
- },
- {
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:azure:DeviceManagement:DeviceInformation;1",
- "@type": "Interface",
- "displayName": "Device Information",
- "contents": [
- ...
- ]
- }
+"@context": [
+ "dtmi:iotcentral:context;2",
+ "dtmi:dtdl:context;2"
]
```
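As an illustration, a short Python check can confirm that a model declares the IoT Central context alongside the base DTDL v2 context. The `created_in_iot_central` helper is hypothetical; the two context identifiers come from the snippet above.

```python
import json

# Hypothetical helper: check whether a DTDL interface declares the
# IoT Central context alongside the base DTDL v2 context.
IOTC_CONTEXT = "dtmi:iotcentral:context;2"
DTDL_V2_CONTEXT = "dtmi:dtdl:context;2"

def created_in_iot_central(interface: dict) -> bool:
    ctx = interface.get("@context", [])
    if isinstance(ctx, str):  # @context may be a single string in DTDL
        ctx = [ctx]
    return IOTC_CONTEXT in ctx and DTDL_V2_CONTEXT in ctx

model = json.loads(
    '{"@context": ["dtmi:iotcentral:context;2", "dtmi:dtdl:context;2"],'
    ' "@id": "dtmi:com:example:Thermostat;2", "@type": "Interface"}'
)
print(created_in_iot_central(model))  # True
```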
-An interface has some required fields:
-
-- `@id`: a unique ID in the form of a simple Uniform Resource Name.
-- `@type`: declares that this object is an interface.
-- `@context`: specifies the DTDL version used for the interface.
-- `contents`: lists the properties, telemetry, and commands that make up your device. The capabilities may be defined in multiple interfaces.
-
-There are some optional fields you can use to add more details to the capability model, such as display name and description.
-
-Each entry in the list of interfaces in the implements section has a:
-- `name`: the programming name of the interface.
-- `schema`: the interface the capability model implements.
-
-## Interfaces
-
-The DTDL lets you describe the capabilities of your device. Related capabilities are grouped into interfaces. Interfaces describe the properties, telemetry, and commands a part of your device implements:
-- `Properties`. Properties are data fields that represent the state of your device. Use properties to represent the durable state of the device, such as the on-off state of a coolant pump. Properties can also represent basic device properties, such as the firmware version of the device. You can declare properties as read-only or writable. Only devices can update the value of a read-only property. An operator can set the value of a writable property to send to a device.
-- `Telemetry`. Telemetry fields represent measurements from sensors. Whenever your device takes a sensor measurement, it should send a telemetry event containing the sensor data.
-- `Commands`. Commands represent methods that users of your device can execute on the device. For example, a reset command or a command to switch a fan on or off.
-
-The following example shows the thermostat interface definition:
-
-```json
-{
- "@context": "dtmi:dtdl:context;2",
- "@id": "dtmi:com:example:Thermostat;2",
- "@type": "Interface",
- "displayName": "Thermostat",
- "description": "Reports current temperature and provides desired temperature control.",
- "contents": [
- {
- "@type": [
- "Telemetry",
- "Temperature"
- ],
- "name": "temperature",
- "displayName": "Temperature",
- "description": "Temperature in degrees Celsius.",
- "schema": "double",
- "unit": "degreeCelsius"
- },
- {
- "@type": [
- "Property",
- "Temperature"
- ],
- "name": "targetTemperature",
- "schema": "double",
- "displayName": "Target Temperature",
- "description": "Allows to remotely specify the desired target temperature.",
- "unit": "degreeCelsius",
- "writable": true
- },
- {
- "@type": [
- "Property",
- "Temperature"
- ],
- "name": "maxTempSinceLastReboot",
- "schema": "double",
- "unit": "degreeCelsius",
- "displayName": "Max temperature since last reboot.",
- "description": "Returns the max temperature since last device reboot."
- },
- {
- "@type": "Command",
- "name": "getMaxMinReport",
- "displayName": "Get Max-Min report.",
- "description": "This command returns the max, min and average temperature from the specified time to the current time.",
- "request": {
- "name": "since",
- "displayName": "Since",
- "description": "Period to return the max-min report.",
- "schema": "dateTime"
- },
- "response": {
- "name": "tempReport",
- "displayName": "Temperature Report",
- "schema": {
- "@type": "Object",
- "fields": [
- {
- "name": "maxTemp",
- "displayName": "Max temperature",
- "schema": "double"
- },
- {
- "name": "minTemp",
- "displayName": "Min temperature",
- "schema": "double"
- },
- {
- "name": "avgTemp",
- "displayName": "Average Temperature",
- "schema": "double"
- },
- {
- "name": "startTime",
- "displayName": "Start Time",
- "schema": "dateTime"
- },
- {
- "name": "endTime",
- "displayName": "End Time",
- "schema": "dateTime"
- }
- ]
- }
- }
- }
- ]
-}
-```
-
-This example shows two properties (one read-only and one writable), a telemetry type, and a command. A minimal field description has a:
+To learn more about DTDL models, see the [IoT Plug and Play modeling guide](../../iot-develop/concepts-modeling-guide.md).
-- `@type` to specify the type of capability: `Telemetry`, `Property`, or `Command`. In some cases, the type includes a semantic type to enable IoT Central to make some assumptions about how to handle the value.
-- `name` for the telemetry value.
-- `schema` to specify the data type for the telemetry or the property. This value can be a primitive type, such as double, integer, boolean, or string. Complex object types and maps are also supported.
-
-Optional fields, such as display name and description, let you add more details to the interface and capabilities.
+> [!NOTE]
+> IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
## Properties

By default, properties are read-only. Read-only properties mean that the device reports property value updates to your IoT Central application. Your IoT Central application can't set the value of a read-only property.
-You can also mark a property as writable on an interface. A device can receive an update to a writable property from your IoT Central application as well as reporting property value updates to your application.
+You can also mark a property as writable on an interface. A device can receive an update to a writable property from your IoT Central application and report property value updates to your application.
Devices don't need to be connected to set property values. The updated values are transferred when the device next connects to the application. This behavior applies to both read-only and writable properties.
Offline commands are one-way notifications to the device from your solution. Off
> [!NOTE]
> Offline commands are marked as `durable` if you export the model as DTDL.
+Offline commands use [IoT Hub cloud-to-device messages](../../iot-hub/iot-hub-devguide-messages-c2d.md) to send the command and payload to the device.
+
+The payload of the message the device receives is the raw value of the parameter. A custom property called `method-name` stores the name of the IoT Central command. The following table shows some example payloads:
+
+| IoT Central request schema | Example payload received by device |
+| -- | - |
+| No request parameter | `@` |
+| Double | `1.23` |
+| String | `sample string` |
+| Object | `{"StartTime":"2021-01-05T08:00:00.000Z","Bank":2}` |
+
+The following snippet from a device model shows the definition of a command. The command has an object parameter with a datetime field and an enumeration:
+
+```json
+{
+ "@type": "Command",
+ "displayName": {
+ "en": "Generate Diagnostics"
+ },
+ "name": "GenerateDiagnostics",
+ "request": {
+ "@type": "CommandPayload",
+ "displayName": {
+ "en": "Payload"
+ },
+ "name": "Payload",
+ "schema": {
+ "@type": "Object",
+ "displayName": {
+ "en": "Object"
+ },
+ "fields": [
+ {
+ "displayName": {
+ "en": "StartTime"
+ },
+ "name": "StartTime",
+ "schema": "dateTime"
+ },
+ {
+ "displayName": {
+ "en": "Bank"
+ },
+ "name": "Bank",
+ "schema": {
+ "@type": "Enum",
+ "displayName": {
+ "en": "Enum"
+ },
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Bank 1"
+ },
+ "enumValue": 1,
+ "name": "Bank1"
+ },
+ {
+ "displayName": {
+ "en": "Bank2"
+ },
+ "enumValue": 2,
+ "name": "Bank2"
+ },
+ {
+ "displayName": {
+ "en": "Bank3"
+ },
+ "enumValue": 3,
+ "name": "Bank3"
+ }
+ ],
+ "valueSchema": "integer"
+ }
+ }
+ ]
+ }
+ }
+}
+```
+
+If you enable the **Queue if offline** option in the device template UI for the command in the previous snippet, then the message the device receives includes the following properties:
+
+| Property name | Example value |
+| - | -- |
+| `custom_properties` | `{'method-name': 'GenerateDiagnostics'}` |
+| `data` | `{"StartTime":"2021-01-05T08:00:00.000Z","Bank":2}` |
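Building on the table above, here's a minimal sketch of how a device might extract the command name and payload from a queued message. The `message` shape is a hand-built mock based on the properties shown, not the SDK's `Message` class:

```javascript
// Sketch only: parse an IoT Central offline command from a received
// cloud-to-device message. The `message` shape is a hand-built mock
// based on the table above, not the SDK's Message class.
function parseOfflineCommand(message) {
  const commandName = message.custom_properties['method-name'];
  let payload = null;
  if (message.data !== '@') {             // '@' means no request parameter
    try {
      payload = JSON.parse(message.data); // object and number payloads are JSON
    } catch {
      payload = message.data;             // raw string payloads arrive unquoted
    }
  }
  return { commandName, payload };
}

const example = {
  custom_properties: { 'method-name': 'GenerateDiagnostics' },
  data: '{"StartTime":"2021-01-05T08:00:00.000Z","Bank":2}'
};
console.log(parseOfflineCommand(example).commandName); // GenerateDiagnostics
```

On a real device, the `message` object would come from the device client's cloud-to-device message handler rather than being constructed by hand.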
+
## Views

A solution developer creates views that let operators monitor and manage connected devices. Views are part of the device template, so a view is associated with a specific device type. A view can include:
A solution developer creates views that let operators monitor and manage connect
## Next steps
-Now that you've learned about device templates, a suggested next steps is to read [Telemetry, property, and command payloads](./concepts-telemetry-properties-commands.md) to learn more about the data a device exchanges with IoT Central.
+Now that you've learned about device templates, a suggested next step is to read [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about the data a device exchanges with IoT Central.
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
Title: Move from IoT Central to a PaaS solution
-description: This article discusses to move between application platform as a service (aPaaS) and platform as a service (PaaS) Azure IoT solution approaches.
+description: This article discusses how to move between application platform as a service (aPaaS) and platform as a service (PaaS) Azure IoT solution approaches.
Last updated 11/28/2022
So that you can seamlessly migrate devices from your IoT Central applications to
- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central.
-- The device must follow the [IoT Central data formats for telemetry, property, and commands](concepts-telemetry-properties-commands.md).
-
+- The device must follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
- IoT Central uses the DPS to provision the devices. The PaaS solution must also use DPS to provision the devices.
- The updatable DPS pattern ensures that the device can move seamlessly between IoT Central applications and the PaaS solution without any downtime.
+> [!NOTE]
+> IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
+
## Move existing data out of IoT Central

You can configure IoT Central to continuously export telemetry and property values. Export destinations are data stores such as Azure Data Lake, Event Hubs, and Webhooks. You can export device templates using either the IoT Central UI or the REST API. The REST API lets you export the users in an IoT Central application.
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Last updated 12/14/2022
-+ zone_pivot_groups: programming-languages-set-ten # - id: programming-languages-set-ten
-# # Owner: aahill
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
# Title: Python
iot-central Howto Manage Device Templates With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-device-templates-with-rest-api.md
A device template contains a device model and view definitions. The REST API let
The device model section of a device template specifies the capabilities of a device you want to connect to your application. Capabilities include telemetry, properties, and commands. The model is defined using [DTDL V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md).
+> [!NOTE]
+> IoT Central defines some extensions to the DTDL language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
+
## Device templates REST API

The IoT Central REST API lets you:
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
You have several options to create device templates:
- Import a device template from the [Azure Certified for IoT device catalog](https://aka.ms/iotdevcat). Optionally, customize the device template to your requirements in IoT Central.
- When the device connects to IoT Central, have it send the model ID of the model it implements. IoT Central uses the model ID to retrieve the model from the model repository and to create a device template. Add any cloud properties and views your IoT Central application needs to the device template.
- When the device connects to IoT Central, let IoT Central [autogenerate a device template](#autogenerate-a-device-template) definition from the data the device sends.
-- Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs.
+- Author a device model using the [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) and [IoT Central DTDL extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). Manually import the device model into your IoT Central application. Then add the cloud properties and views your IoT Central application needs.
- You can also add device templates to an IoT Central application using the [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md) or the [CLI](howto-manage-iot-central-from-cli.md).

> [!NOTE]
The following table shows the configuration settings for a telemetry capability:
| Name | The name of the field in the telemetry message. IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field needs to be alphanumeric. |
| Capability Type | Telemetry. |
| Semantic Type | The semantic type of the telemetry, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
-| Schema | The telemetry data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
+| Schema | The telemetry data type, such as double, string, or vector. The semantic type determines the available choices. Schema isn't available for the event and state semantic types. |
| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. |
| State Values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. |
| Unit | A unit for the telemetry value, such as **mph**, **%**, or **&deg;C**. |
The following table shows the configuration settings for a property capability:
| Name | The name of the property. IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field needs to be alphanumeric. |
| Capability Type | Property. |
| Semantic Type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
-| Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
-| Writable | If the property isn't writable, the device can report property values to IoT Central. If the property is writable, the device can report property values to IoT Central and IoT Central can send property updates to the device.
+| Schema | The property data type, such as double, string, or vector. The semantic type determines the available choices. Schema isn't available for the event and state semantic types. |
+| Writable | If the property isn't writable, the device can report property values to IoT Central. If the property is writable, the device can report property values to IoT Central, and IoT Central can send property updates to the device. |
| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. |
| State Values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. |
| Unit | A unit for the property value, such as **mph**, **%**, or **&deg;C**. |
| Display Unit | A display unit for use on views and forms. |
| Comment | Any comments about the property capability. |
| Description | A description of the property capability. |
-|Color | This is an IoT Central extension to DTDL. |
-|Min value | Set minimum value - This is an IoT Central extension to DTDL. |
-|Max value | Set maximum value - This is an IoT Central extension to DTDL. |
-|Decimal places | This is an IoT Central extension to DTDL. |
+| Color | An IoT Central extension to DTDL. |
+| Min value | Sets the minimum value. An IoT Central extension to DTDL. |
+| Max value | Sets the maximum value. An IoT Central extension to DTDL. |
+| Decimal places | An IoT Central extension to DTDL. |
#### Commands
The following table shows the configuration settings for a command capability:
| Response | If enabled, a definition of the command response, including: name, display name, schema, unit, and display unit. |
| Initial value | The default parameter value. This is an IoT Central extension to DTDL. |
-To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](concepts-telemetry-properties-commands.md#commands).
+To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot-develop/concepts-message-payloads.md#commands).
#### Offline commands
The following table shows the configuration settings for a cloud property:
| Display Name | The display name for the cloud property value used on views and forms. |
| Name | The name of the cloud property. IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. |
| Semantic Type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
-| Schema | The cloud property data type, such as double, string, or vector. The available choices are determined by the semantic type. |
+| Schema | The cloud property data type, such as double, string, or vector. The semantic type determines the available choices. |
## Views
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
The following table shows three example transformation types:
| Transformation | Description | Example | Notes |
| - | - | - | - |
-| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts value JSON messages. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md). |
+| Message Format | Convert to or manipulate JSON messages. | CSV to JSON | At ingress. IoT Central only accepts valid JSON messages. To learn more, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md). |
| Computations | Math functions that [Azure Functions](../../azure-functions/index.yml) can execute. | Unit conversion from Fahrenheit to Celsius. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. Transforming the data lets you use IoT Central features such as visualizations and jobs. |
| Message Enrichment | Enrichments from external data sources not found in device properties or telemetry. To learn more about internal enrichments, see [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md). | Add weather information to messages using [location data](howto-use-location-data.md) from devices. | Transform using the egress pattern to take advantage of scalable device ingress through direct connection to IoT Central. |
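The Message Format transformation in the table above can be illustrated with a minimal CSV-to-JSON sketch. The column names and the numeric coercion are illustrative assumptions, not taken from a real device:

```javascript
// Sketch only: a "Message Format" transformation that converts one CSV
// row to the JSON shape IoT Central expects. The column names and the
// numeric coercion are illustrative assumptions.
function csvRowToJson(headerLine, rowLine) {
  const headers = headerLine.split(',');
  const values = rowLine.split(',');
  return Object.fromEntries(
    headers.map((h, i) => [h.trim(), Number(values[i])])
  );
}

console.log(csvRowToJson('temperature,humidity', '21.5,60'));
// { temperature: 21.5, humidity: 60 }
```

In practice this logic would run inside an IoT Edge module or an Azure Function before the message reaches IoT Central.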
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
This scenario uses an IoT Edge gateway device to transform the data from any downstream devices. This section describes how to create IoT Central device template for the gateway device in your IoT Central application. IoT Edge devices use a deployment manifest to configure their modules.
-In this example, the downstream device doesn't need a device template. The downstream device is registered in IoT Central so you can generate the credentials it needs to connect the IoT Edge device. Because the IoT Edge module transforms the data, all the downstream device telemetry arrives in IoT Central as if it was sent by the IoT Edge device.
+In this example, the downstream device doesn't need a device template. The downstream device is registered in IoT Central so you can generate the credentials it needs to connect the IoT Edge device. Because the IoT Edge module transforms the data, all the downstream device telemetry arrives in IoT Central as if the IoT Edge device had sent it.
To create a device template for the IoT Edge gateway device:
To connect a downstream device to the IoT Edge gateway device:
git clone https://github.com/iot-for-all/iot-central-transform-with-iot-edge
```
-1. To copy the required certificate from the gateway device, run the following `scp` commands. This `scp` command uses the hostname `edgegateway` to identify the gateway virtual machine. You'll be prompted for your password:
+1. To copy the required certificate from the gateway device, run the following `scp` commands. These commands use the hostname `edgegateway` to identify the gateway virtual machine. You're prompted for your password:
```bash
cd ~/iot-central-transform-with-iot-edge
iot-central Howto Use Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md
Title: How to use device commands in an Azure IoT Central solution
description: How to use device commands in Azure IoT Central solution. Learn how to define and call device commands from IoT Central, and respond in a device. Previously updated : 10/31/2022 Last updated : 06/06/2023
A device can:
By default, commands expect a device to be connected and fail if the device can't be reached. If you select the **Queue if offline** option in the device template UI a command can be queued until a device comes online. These *offline commands* are described in a separate section later in this article.
+To learn about the IoT Plug and Play command conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
+
+To learn more about the command data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
+
To learn how to manage commands by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices](../core/howto-control-devices-with-rest-api.md).

## Define your commands
The following table shows the configuration settings for a command capability:
| Request | The payload for the device command. |
| Response | The payload of the device command response. |
-The following snippet shows the JSON representation of the command in the device model. In this example, the response value is a complex **Object** type with multiple fields:
-
-```json
-{
- "@type": "Command",
- "name": "getMaxMinReport",
- "displayName": "Get Max-Min report.",
- "description": "This command returns the max, min and average temperature from the specified time to the current time.",
- "request": {
- "name": "since",
- "displayName": "Since",
- "description": "Period to return the max-min report.",
- "schema": "dateTime"
- },
- "response": {
- "name" : "tempReport",
- "displayName": "Temperature Report",
- "schema": {
- "@type": "Object",
- "fields": [
- {
- "name": "maxTemp",
- "displayName": "Max temperature",
- "schema": "double"
- },
- {
- "name": "minTemp",
- "displayName": "Min temperature",
- "schema": "double"
- },
- {
- "name" : "avgTemp",
- "displayName": "Average Temperature",
- "schema": "double"
- },
- {
- "name" : "startTime",
- "displayName": "Start Time",
- "schema": "dateTime"
- },
- {
- "name" : "endTime",
- "displayName": "End Time",
- "schema": "dateTime"
- }
- ]
- }
- }
-}
-```
-
-> [!TIP]
-> You can export a device model or interface from the device template page.
-
-You can relate this command definition to the screenshot of the UI using the following fields:
-
-* `@type` to specify the type of capability: `Command`
-* `name` for the command value.
+To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define commands in a device template, see [IoT Plug and Play conventions > Commands](../../iot-develop/concepts-convention.md#commands).
Optional fields, such as display name and description, let you add more details to the interface and capabilities.

## Standard commands
-This section shows you how a device sends a response value as soon as it receives the command.
-
-The following code snippet shows how a device can respond to a command immediately sending a success code:
-
-> [!NOTE]
-> This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
-
-```javascript
-client.onDeviceMethod('getMaxMinReport', commandHandler);
-
-// ...
-
-const commandHandler = async (request, response) => {
- switch (request.methodName) {
- case 'getMaxMinReport': {
- console.log('MaxMinReport ' + request.payload);
- await sendCommandResponse(request, response, 200, deviceTemperatureSensor.getMaxMinReportObject());
- break;
- }
- default:
- await sendCommandResponse(request, response, 404, 'unknown method');
- break;
- }
-};
-
-const sendCommandResponse = async (request, response, status, payload) => {
- try {
- await response.send(status, payload);
- console.log('Response to method \'' + request.methodName +
- '\' sent successfully.' );
- } catch (err) {
- console.error('An error ocurred when sending a method response:\n' +
- err.toString());
- }
-};
-```
+To handle a standard command, a device sends a response value as soon as it receives the command from IoT Central. You can use the Azure IoT device SDK to handle standard commands invoked by your IoT Central application.
-The call to `onDeviceMethod` sets up the `commandHandler` method. This command handler:
-
-1. Checks the name of the command.
-1. For the `getMaxMinReport` command, it calls `getMaxMinReportObject` to retrieve the values to include in the return object.
-1. Calls `sendCommandResponse` to send the response back to IoT Central. This response includes the `200` response code to indicate success.
+For example implementations in multiple languages, see [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
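As a rough, self-contained illustration of the pattern those samples follow, here's a minimal sketch of a standard command handler. The `request` and `response` objects are mocks standing in for the SDK's method request and response types (an assumption for illustration):

```javascript
// Sketch only: a standard command handler that responds as soon as the
// command arrives. The `request` and `response` objects are mocks that
// stand in for the SDK's method request/response types (an assumption).
async function commandHandler(request, response) {
  if (request.methodName === 'getMaxMinReport') {
    // Respond immediately with a 200 status and an illustrative payload.
    await response.send(200, { maxTemp: 25.3, minTemp: 19.8, avgTemp: 22.1 });
  } else {
    await response.send(404, 'unknown method');
  }
}
```

With the real Node.js device SDK, you'd register the handler with `client.onDeviceMethod('getMaxMinReport', commandHandler)`.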
The following screenshot shows how the successful command response displays in the IoT Central UI:
The following screenshot shows how the successful command response displays in t
## Long-running commands
+In a long-running command, a device doesn't immediately complete the command. Instead, the device acknowledges receipt of the command and then later confirms that the command completed. This approach lets a device complete a long-running operation without keeping the connection to IoT Central open.
+
+> [!NOTE]
+> Long-running commands are not part of the IoT Plug and Play conventions. IoT Central has its own convention to implement long-running commands.
+
This section shows you how a device can delay sending a confirmation that the command completed. The following code snippet shows how a device can implement a long-running command:

> [!NOTE]
-> This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
+> This article uses Node.js for simplicity.
```javascript
client.onDeviceMethod('rundiagnostics', commandHandler);
The following screenshot shows the IoT Central UI when it receives the property
This section shows you how a device handles an offline command. If a device is online, it can handle the offline command as soon as it's received. If a device is offline, it handles the offline command when it next connects to IoT Central. Devices can't send a return value in response to an offline command.
+> [!NOTE]
+> Offline commands are not part of the IoT Plug and Play conventions. IoT Central has its own convention to implement offline commands.
+
> [!NOTE]
> This article uses Node.js for simplicity.
You can call commands on a device that isn't assigned to a device template. To c
## Next steps
-Now that you've learned how to use commands in your Azure IoT Central application, see [Payloads](concepts-telemetry-properties-commands.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages.
+Now that you've learned how to use commands in your Azure IoT Central application, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md) to learn more about command parameters and [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) to see complete code samples in different languages.
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
For reference, the [Digital Twins Definition Language (DTDL) V2](https://github.
```

> [!NOTE]
-> The **geopoint** schema type is not part of the [DTDL specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+> The **geopoint** schema type is not part of the [DTDL specification](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility. For more information, see the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
## Send location data from a device
You can use location telemetry to create a geofencing rule that generates an ale
Now that you've learned how to use properties in your Azure IoT Central application, see:
-* [Payloads](concepts-telemetry-properties-commands.md)
+* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md)
* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Title: Use properties in an Azure IoT Central solution
-description: Learn how to use read-only and writable properties in an Azure IoT Central solution. Define properties in IoT Central and use properties progrmatically.
+description: Learn how to use read-only and writable properties in an Azure IoT Central solution. Define properties in IoT Central and use properties programmatically.
Previously updated : 10/31/2022 Last updated : 06/06/2023
Properties represent point-in-time values. For example, a device can use a prope
You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article.
-To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md).
+To learn about the IoT Plug and Play property conventions, see [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md).
+
+To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
-To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices](../core/howto-control-devices-with-rest-api.md).
## Define your properties
The following table shows the configuration settings for a property capability.
| Name | The name of the property. Azure IoT Central generates a value for this field from the display name, but you can choose your own value if necessary. This field must be alphanumeric. The device code uses this **Name** value. |
| Capability type | Property. |
| Semantic type | The semantic type of the property, such as temperature, state, or event. The choice of semantic type determines which of the following fields are available. |
-| Schema | The property data type, such as double, string, or vector. The available choices are determined by the semantic type. Schema isn't available for the event and state semantic types. |
+| Schema | The property data type, such as double, string, or vector. The semantic type determines the available choices. Schema isn't available for the event and state semantic types. |
| Writable | If the property isn't writable, the device can report property values to Azure IoT Central. If the property is writable, the device can report property values to Azure IoT Central. Then Azure IoT Central can send property updates to the device. |
| Severity | Only available for the event semantic type. The severities are **Error**, **Information**, or **Warning**. |
| State values | Only available for the state semantic type. Define the possible state values, each of which has display name, name, enumeration type, and value. |
The following table shows the configuration settings for a property capability.
| Comment | Any comments about the property capability. |
| Description | A description of the property capability. |
-The properties can also be defined in an interface in a device template as shown here:
-
-``` json
-{
- "@type": [
- "Property",
- "Temperature"
- ],
- "name": "targetTemperature",
- "schema": "double",
- "displayName": "Target Temperature",
- "description": "Allows to remotely specify the desired target temperature.",
- "unit" : "degreeCelsius",
- "writable": true
-},
-{
- "@type": [
- "Property",
- "Temperature"
- ],
- "name": "maxTempSinceLastReboot",
- "schema": "double",
- "unit" : "degreeCelsius",
- "displayName": "Max temperature since last reboot.",
- "description": "Returns the max temperature since last device reboot."
-}
-```
-
-This example shows two properties. These properties relate to the property definition in the UI:
-
-* `@type` specifies the type of capability: `Property`. The previous example also shows the semantic type `Temperature` for both properties.
-* `name` for the property.
-* `schema` specifies the data type for the property. This value can be a primitive type, such as double, integer, Boolean, or string. Complex object types and maps are also supported.
-* `writable` By default, properties are read-only. You can mark a property as writable by using this field.
+To learn about the Digital Twin Definition Language (DTDL) that Azure IoT Central uses to define properties in a device template, see [IoT Plug and Play conventions > Read-only properties](../../iot-develop/concepts-convention.md#read-only-properties).
Optional fields, such as display name and description, let you add more details to the interface and capabilities.
When you select the complex **Schema**, such as **Object**, you need to define t
:::image type="content" source="media/howto-use-properties/object.png" alt-text="Screenshot that shows how to define an object." lightbox="media/howto-use-properties/object.png":::
-The following code shows the definition of an Object property type. This object has two fields with types string and integer.
-
-``` json
-{
- "@type": "Property",
- "description": {
- "en": "Device model name."
- },
- "displayName": {
- "en": "Device model"
- },
- "name": "model",
- "writable": false,
- "schema": {
- "@type": "Object",
- "displayName": {
- "en": "Object"
- },
- "fields": [
- {
- "displayName": {
- "en": "Model Name"
- },
- "name": "ModelName",
- "schema": "string"
- },
- {
- "displayName": {
- "en": "Model ID"
- },
- "name": "ModelID",
- "schema": "integer"
- }
- ]
- }
-}
-```
- ## Implement read-only properties By default, properties are read-only. Read-only properties let a device report property value updates to your Azure IoT Central application. Your Azure IoT Central application can't set the value of a read-only property. Azure IoT Central uses device twins to synchronize property values between the device and the Azure IoT Central application. Device property values use device twin reported properties. For more information, see [device twins](../../iot-hub/tutorial-device-twins.md).
-The following snippet from a device model shows the definition of a read-only property type:
-
-``` json
-{
- "name": "model",
- "displayName": "Device model",
- "schema": "string",
- "comment": "Device model name or ID. Ex. Surface Book 2."
-}
-```
-
-Property updates are sent by a device as a JSON payload. For more information, see [payloads](./concepts-telemetry-properties-commands.md).
+A device sends property updates as a JSON payload. For more information, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
You can use the Azure IoT device SDK to send a property update to your Azure IoT Central application.
-Device twin properties can be sent to your Azure IoT Central application by using the following function:
+For example implementations in multiple languages, see [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
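As a rough Node.js sketch of what that tutorial covers (assuming the `azure-iot-device` SDK; `model` is a sample property name, and `environmentalSensor1.0` an arbitrary value), a device builds and reports a read-only property like this:

```javascript
// Sketch: a read-only device property is sent as a device twin *reported* property.
// 'model' is a sample property name; the value is arbitrary.
const properties = {
  model: 'environmentalSensor1.0'
};

// With a connected 'azure-iot-device' client, the device would report it as:
// hubClient.getTwin((err, twin) => {
//   twin.properties.reported.update(properties, (err) => {
//     console.log(err ? `error: ${err.toString()}` : 'status: success');
//   });
// });
console.log(JSON.stringify(properties));
```

IoT Central reads the reported value from the device twin and shows it as a read-only device property in the UI.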
-``` javascript
-hubClient.getTwin((err, twin) => {
- const properties = {
- model: 'environmentalSensor1.0'
- };
- twin.properties.reported.update(properties, (err) => {
- console.log(err ? `error: ${err.toString()}` : `status: success` )
- });
-});
-```
-
-This article uses Node.js for simplicity. For other language examples, see the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial.
-
-The following view in Azure IoT Central application shows the properties you can see. The view automatically makes the **Device model** property a _read-only device property_.
+The following view in Azure IoT Central application shows the device read-only properties:
:::image type="content" source="media/howto-use-properties/read-only.png" alt-text="Screenshot that shows the view of a read-only property." lightbox="media/howto-use-properties/read-only.png"::: ## Implement writable properties
-Writable properties are set by an operator in the Azure IoT Central application on a form. Azure IoT Central sends the property to the device. Azure IoT Central expects an acknowledgment from the device.
-
-The following snippet from a device model shows the definition of a writable property type:
-
-``` json
-{
- "@type": "Property",
- "displayName": "Brightness Level",
- "description": "The brightness level for the light on the device. Can be specified as 1 (high), 2 (medium), 3 (low)",
- "name": "brightness",
- "writable": true,
- "schema": "long"
-}
-```
-
-To define and handle the writable properties your device responds to, you can use the following code:
-
-``` javascript
-hubClient.getTwin((err, twin) => {
- twin.on('properties.desired.brightness', function(desiredBrightness) {
- console.log( `Received setting: ${desiredBrightness.value}` );
- var patch = {
- brightness: {
- value: desiredBrightness.value,
- ad: 'success',
- ac: 200,
- av: desiredBrightness.$version
- }
- }
- twin.properties.reported.update(patch, (err) => {
- console.log(err ? `error: ${err.toString()}` : `status: success` )
- });
- });
-});
-```
-
-The response message should include the `ac` and `av` fields. The `ad` field is optional. See the following snippets for examples:
-
-* `ac` is a numeric field that uses the values in the following table.
-* `av` is the version number sent to the device.
-* `ad` is an option string description.
-
-| Value | Label | Description |
-| -- | -- | -- |
-| `'ac': 200` | Completed | The property change operation was successfully completed. |
-| `'ac': 202` or `'ac': 201` | Pending | The property change operation is pending or in progress. |
-| `'ac': 4xx` | Error | The requested property change wasn't valid or had an error. |
-| `'ac': 5xx` | Error | The device experienced an unexpected error when processing the requested change. |
-
-For more information on device twins, see [Configure your devices from a back-end service](../../iot-hub/tutorial-device-twins.md).
-
-When the operator sets a writable property in the Azure IoT Central application, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**.
+An IoT Central operator sets a writable property on a form. Azure IoT Central sends the property to the device and expects an acknowledgment from the device.
+
+For example implementations in multiple languages, see [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+The response message should include the `ac` and `av` fields. The `ad` field is optional. To learn more, see [IoT Plug and Play conventions > Writable properties](../../iot-develop/concepts-convention.md#writable-properties).
+
+When the operator sets a writable property in the Azure IoT Central UI, the application uses a device twin desired property to send the value to the device. The device then responds by using a device twin reported property. When Azure IoT Central receives the reported property value, it updates the property view with a status of **Accepted**.
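As an illustrative sketch (not the only valid shape), the reported-property acknowledgment for a hypothetical writable `brightness` property could be built like this, following the `value`/`ac`/`av`/`ad` convention:

```javascript
// Build the acknowledgment patch for a desired (writable) property update.
// ac = status code, av = acknowledged version, ad = optional description.
function buildAck(propertyName, desiredValue, version) {
  return {
    [propertyName]: {
      value: desiredValue, // the value the device actually applied
      ac: 200,             // 200 = change completed successfully
      av: version,         // $version from the desired property payload
      ad: 'success'        // optional human-readable status description
    }
  };
}

const patch = buildAck('brightness', 2, 7);
console.log(JSON.stringify(patch));
// A device using the 'azure-iot-device' SDK would then send it with:
// twin.properties.reported.update(patch, (err) => { /* handle error */ });
```

When IoT Central receives a patch like this with `ac: 200` and a matching `av`, it marks the property **Accepted**.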
When you enter the value and select **Save**, the initial status is **Pending**. When the device accepts the change, the status changes to **Accepted**.
You can update the writable properties in this view:
Now that you've learned how to use properties in your Azure IoT Central application, see:
-* [Payloads](concepts-telemetry-properties-commands.md)
+* [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md)
+* [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md)
* [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md)
iot-central Overview Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central.md
# What is Azure IoT Central?
-IoT Central is an IoT application platform as a service (aPaaS) that reduces the burden and cost of developing, managing, and maintaining IoT solutions. Use IoT Central to quickly evaluate your IoT scenario and assess the opportunities it can create for your business. IoT Central streamlines the development of a complex and continually evolving IoT infrastructure by letting you to focus your efforts on determining the business impact you can create with the IoT data stream.
+IoT Central is an IoT application platform as a service (aPaaS) that reduces the burden and cost of developing, managing, and maintaining IoT solutions. Use IoT Central to quickly evaluate your IoT scenario and assess the opportunities it can create for your business. To streamline the development of a complex and continually evolving IoT infrastructure, IoT Central lets you focus your efforts on determining the business impact you can create with the IoT data stream.
The web UI lets you quickly connect devices, monitor device conditions, create rules, and manage devices and their data throughout their life cycle. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications. After you've used IoT Central to evaluate your IoT scenario, you can then build your enterprise-ready Azure IoT solution.
Every device connected to IoT Central uses a _device template_. A device templat
- Telemetry it sends. Examples include temperature and humidity. Telemetry is streaming data.
- Business properties that an operator can modify. Examples include a customer address and a last serviced date.
-- Device properties that are set by a device and are read-only in the application. For example, the state of a valve as either open or shut.
-- Properties that are set by an operator and that determine the behavior of the device. For example, a target temperature for the device.
+- Device properties that a device sets and that are read-only in the application. For example, the state of a valve as either open or shut.
+- Device properties that an operator sets and that determine the behavior of the device. For example, a target temperature for the device.
- Commands that are called by an operator and that run on a device. For example, a command to remotely reboot a device.

Every [device template](howto-set-up-template.md) includes:
As with any IoT solution designed to operate at scale, a structured approach to
### Dashboards
-Start with a pre-built dashboard in an application template or create your own dashboards tailored to the needs of your operators. You can share dashboards with all users in your application, or keep them private.
+Start with a prebuilt dashboard in an application template or create your own dashboards tailored to the needs of your operators. You can share dashboards with all users in your application, or keep them private.
### Rules and actions
Build [custom rules](tutorial-create-telemetry-rules.md) based on device state a
## Integrate with other services
-As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. [Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-to-blob-storage.md), and the [public REST API](tutorial-use-rest-api.md) are examples of how you can integrate IoT Central with line-of-business applications:
+As an application platform, IoT Central lets you transform your IoT data into the business insights that drive actionable outcomes. Examples include determining machine efficiency trends and predicting future energy usage on a factory floor.
+
+[Rules](./tutorial-create-telemetry-rules.md), [data export](./howto-export-to-blob-storage.md), and the [public REST API](tutorial-use-rest-api.md) are examples of how you can integrate IoT Central with line-of-business applications:
![How IoT Central can transform your IoT data](media/overview-iot-central/transform.png)
-You can generate business insights, such as determining machine efficiency trends or predicting future energy usage on a factory floor, by building custom analytics pipelines to process telemetry from your devices and store the results. Configure data exports in your IoT Central application to export telemetry, device property changes, and device template changes to other services where you can analyze, store, and visualize the data with your preferred tools.
+Generate business insights by building custom analytics pipelines to process telemetry from your devices and store the results. Configure data exports in your IoT Central application to export telemetry, device property changes, and device template changes to other services where you can analyze, store, and visualize the data with your preferred tools.
### Build custom IoT solutions and integrations with the REST APIs
The IoT Central documentation refers to four user roles that interact with an Io
- A _solution builder_ is responsible for [creating an application](quick-deploy-iot-central.md), [configuring rules and actions](quick-configure-rules.md), [defining integrations with other services](quick-export-data.md), and further customizing the application for operators and device developers.
- An _operator_ [manages the devices](howto-manage-devices-individually.md) connected to the application.
- An _administrator_ is responsible for administrative tasks such as managing [user roles and permissions](howto-administer.md) within the application and [configuring managed identities](howto-manage-iot-central-from-portal.md#configure-a-managed-identity) for securing connections to other services.
-- A _device developer_ [creates the code that runs on a device](concepts-telemetry-properties-commands.md) or [IoT Edge module](concepts-iot-edge.md) connected to your application.
+- A _device developer_ [creates the code that runs on a device](./tutorial-connect-device.md) or [IoT Edge module](concepts-iot-edge.md) connected to your application.
## Next steps
iot-central Troubleshoot Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-connection.md
If you chose to create a new template that models the data correctly, migrate de
### Invalid JSON
-If there are no errors reported, but a value isn't appearing, then it's probably malformed JSON in the payload the device sends. To learn more, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
+If no errors are reported but a value isn't appearing, the device is probably sending malformed JSON in the payload. To learn more, see [Telemetry, property, and command payloads](../../iot-develop/concepts-message-payloads.md).
You can't use the validate commands or the **Raw data** view in the UI to detect if the device is sending malformed JSON.
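One lightweight local check (a sketch of device-side validation, not an IoT Central feature) is to confirm the payload serializes to valid JSON before sending it:

```javascript
// Sketch: verify a telemetry payload can be serialized to JSON before sending.
// JSON.stringify throws on values it can't serialize, such as circular references.
function toJsonOrNull(payload) {
  try {
    return JSON.stringify(payload);
  } catch (err) {
    return null;
  }
}

const ok = toJsonOrNull({ temperature: 21.5 });
console.log(ok); // {"temperature":21.5}

const circular = {};
circular.self = circular;
console.log(toJsonOrNull(circular)); // null
```

A check like this catches serialization failures on the device, before the payload ever reaches the cloud.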
iot-central Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-device.md
Title: Tutorial - Connect a client app to Azure IoT Central
description: This tutorial shows you how to connect a device running either a C, C#, Java, JavaScript, or Python client app to your Azure IoT Central application. Previously updated : 10/26/2022 Last updated : 06/06/2023 -+ zone_pivot_groups: programming-languages-set-twenty-six #- id: programming-languages-set-twenty-six
-## Owner: dobett
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-ansi-c
-# Title: C
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
# Title: Python- #Customer intent: As a device developer, I want to try out using device code that uses one of the Azure IoT device SDKs. I want to understand how to send telemetry from a device, synchronize properties with the device, and control the device using commands.
iot-develop Concepts Convention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-convention.md
# IoT Plug and Play conventions
-IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub, AMQP is supported by IoT Hub and available in some device SDKs.
+IoT Plug and Play devices should follow a set of conventions when they exchange messages with an IoT hub. IoT Plug and Play devices use the MQTT protocol to communicate with IoT Hub. IoT Hub also supports the AMQP protocol, which is available in some IoT device SDKs.
A device can include [modules](../iot-hub/iot-hub-devguide-module-twins.md), or be implemented in an [IoT Edge module](../iot-edge/about-iot-edge.md) hosted by the IoT Edge runtime.
To identify the model that a device or module implements, a service can get the
- Telemetry sent from a no component device doesn't require any extra metadata. The system adds the `dt-dataschema` property.
- Telemetry sent from a device using components must add the component name to the telemetry message.
-- When using MQTT add the `$.sub` property with the component name to the telemetry topic, the system adds the `dt-subject` property.
-- When using AMQP add the `dt-subject` property with the component name as a message annotation.
+- When using MQTT, add the `$.sub` property with the component name to the telemetry topic, the system adds the `dt-subject` property.
+- When using AMQP, add the `dt-subject` property with the component name as a message annotation.
> [!NOTE] > Telemetry from components requires one message per component.
+For more telemetry examples, see [Payloads > Telemetry](concepts-message-payloads.md#telemetry).
+ ## Read-only properties
-A read-only property is set by the device and reported to the back-end application.
+A device sets a read-only property, which it then reports to the back-end application.
### Sample no component read-only property
Sample reported property payload:
} ```
+For more read-only property examples, see [Payloads > Properties](concepts-message-payloads.md#properties).
+ ## Writable properties
-A writable property can be set by the back-end application and sent to the device.
+A back-end application sets a writable property that IoT Hub then sends to the device.
The device or module should confirm that it received the property by sending a reported property. The reported property should include:
The device or module should confirm that it received the property by sending a r
### Acknowledgment responses
-When reporting writable properties the device should compose the acknowledgment message, using the four fields described above, to indicate the actual device state, as described in the following table:
+When reporting writable properties, the device should compose the acknowledgment message, by using the four fields in the previous list, to indicate the actual device state, as described in the following table:
|Status(ac)|Version(av)|Value(value)|Description(ad)| |:|:|:|:|
A device can use the reported property to provide other information to the hub.
} ```
-When the device reaches the target temperature it sends the following message:
+When the device reaches the target temperature, it sends the following message:
```json "reported": {
Sample reported property second payload:
> [!NOTE] > You could choose to combine these two reported property payloads into a single payload.
+For more writable property examples, see [Payloads > Properties](concepts-message-payloads.md#writable-property-types).
+ ## Commands No component interfaces use the command name without a prefix. On a device or module, multiple component interfaces use command names with the following format: `componentName*commandName`.
+For more command examples, see [Payloads > Commands](concepts-message-payloads.md#commands).
+
+> [!TIP]
+> IoT Central has its own conventions for implementing [Long-running commands](../iot-central/core/howto-use-commands.md#long-running-commands) and [Offline commands](../iot-central/core/howto-use-commands.md#offline-commands).
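For example (a sketch; `thermostat1` and `getMaxMinReport` are hypothetical names), the direct method name a device should register for a command can be derived from the naming convention above:

```javascript
// Sketch: component commands arrive as direct methods named
// 'componentName*commandName'; no-component commands use the bare name.
function methodNameFor(componentName, commandName) {
  return componentName ? `${componentName}*${commandName}` : commandName;
}

console.log(methodNameFor('thermostat1', 'getMaxMinReport')); // thermostat1*getMaxMinReport
console.log(methodNameFor(null, 'reboot'));                   // reboot
// With the Node.js 'azure-iot-device' SDK, a device would register a handler:
// client.onDeviceMethod(methodNameFor('thermostat1', 'getMaxMinReport'), handler);
```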
+ ## Next steps Now that you've learned about IoT Plug and Play conventions, here are some other resources:
iot-develop Concepts Developer Guide Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-device.md
Last updated 11/17/2022 + zone_pivot_groups: programming-languages-set-twenty-seven #- id: programming-languages-set-twenty-seven
-## Owner: dobett
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-ansi-c
-# Title: C
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
-# Title: Python
-# - id: programming-language-embedded-c
# Title: Embedded C # IoT Plug and Play device developer guide
-IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications.
+IoT Plug and Play lets you build IoT devices that advertise their capabilities to Azure IoT applications. IoT Plug and Play devices don't require manual configuration when a customer connects them to IoT Plug and Play-enabled applications such as IoT Central.
You can implement an IoT device directly by using [modules](../iot-hub/iot-hub-devguide-module-twins.md), or by using [IoT Edge modules](../iot-edge/about-iot-edge.md).
iot-develop Concepts Developer Guide Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-developer-guide-service.md
Last updated 11/17/2022 + zone_pivot_groups: programming-languages-set-ten # - id: programming-languages-set-ten
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
# Title: Python
IoT Plug and Play lets you build IoT devices that advertise their capabilities t
IoT Plug and Play lets you use devices that have announced their model ID with your IoT hub. For example, you can access the properties and commands of a device directly.
+If you're using IoT Central, you can use the IoT Central UI and REST API to interact with IoT Plug and Play devices connected to your application.
+ ## Service SDKs Use the Azure IoT service SDKs in your solution to interact with devices and modules. For example, you can use the service SDKs to read and update twin properties and invoke commands. Supported languages include C#, Java, Node.js, and Python.
Now that you've learned about device modeling, here are some more resources:
- [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) - [C device SDK](https://github.com/Azure/azure-iot-sdk-c/) - [IoT REST API](/rest/api/iothub/device)-- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
+- [IoT Plug and Play modeling guide](concepts-modeling-guide.md)
iot-develop Concepts Message Payloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-message-payloads.md
+
+ Title: Plug and Play device message payloads
+description: Understand the format of the telemetry, property, and command messages that a Plug and Play device can exchange with a service.
++ Last updated : 06/05/2023+++++
+# This article applies to device developers.
++
+# Telemetry, property, and command payloads
+
+A [device model](concepts-modeling-guide.md) defines the:
+
+* Telemetry a device sends to a service.
+* Properties a device synchronizes with a service.
+* Commands that the service calls on a device.
+
+> [!TIP]
+> Azure IoT Central is a service that follows the Plug and Play conventions. In IoT Central, the device model is part of a [device template](../iot-central/core/concepts-device-templates.md). IoT Central currently supports [DTDL v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) with an [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md). An IoT Central application expects to receive UTF-8 encoded JSON data.
+
+This article describes the JSON payloads that devices send and receive for telemetry, properties, and commands defined in a DTDL device model.
+
+The article doesn't describe every possible type of telemetry, property, and command payload, but the examples illustrate key types.
+
+Each example shows a snippet from the device model that defines the type and example JSON payloads to illustrate how the device should interact with a Plug and Play aware service such as IoT Central.
+
+The example JSON snippets in this article use [Digital Twin Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md). There are also some [DTDL extensions that IoT Central](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) uses.
+
+For sample device code that shows some of these payloads in use, see the [Connect a sample IoT Plug and Play device application running on Linux or Windows to IoT Hub](tutorial-connect-device.md) tutorial or the [Create and connect a client application to your Azure IoT Central application](../iot-central/core/tutorial-connect-device.md) tutorial.
+
+## View raw data
+
+If you're using IoT Central, you can view the raw data that a device sends to an application. This view is useful for troubleshooting issues with the payload sent from a device. To view the raw data a device is sending:
+
+1. Navigate to the device from the **Devices** page.
+
+1. Select the **Raw data** tab:
+
+ :::image type="content" source="media/concepts-message-payloads/raw-data.png" alt-text="Screenshot that shows the raw data view." lightbox="media/concepts-message-payloads/raw-data.png":::
+
+ On this view, you can select the columns to display and set a time range to view. The **Unmodeled data** column shows data from the device that doesn't match any property or telemetry definitions in the device template.
+
+For more troubleshooting tips, see [Troubleshoot why data from your devices isn't showing up in Azure IoT Central](../iot-central/core/troubleshoot-connection.md).
+
+## Telemetry
+
+To learn more about the DTDL telemetry naming rules, see [DTDL > Telemetry](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#telemetry). A telemetry name can't start with the `_` character.
+
+Don't create telemetry types with the following names. IoT Central uses these reserved names internally. If you try to use these names, IoT Central will ignore your data:
+
+* `EventEnqueuedUtcTime`
+* `EventProcessedUtcTime`
+* `PartitionId`
+* `EventHub`
+* `User`
+* `$metadata`
+* `$version`
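A device-side guard for these rules might look like the following sketch (the underscore check and the reserved list come straight from this section):

```javascript
// Sketch: reject telemetry names that IoT Central reserves or disallows.
const RESERVED_NAMES = new Set([
  'EventEnqueuedUtcTime',
  'EventProcessedUtcTime',
  'PartitionId',
  'EventHub',
  'User',
  '$metadata',
  '$version'
]);

function isValidTelemetryName(name) {
  // Telemetry names can't start with '_' and can't use reserved names.
  return !name.startsWith('_') && !RESERVED_NAMES.has(name);
}

console.log(isValidTelemetryName('temperature')); // true
console.log(isValidTelemetryName('_hidden'));     // false
console.log(isValidTelemetryName('$version'));    // false
```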
+
+### Telemetry in components
+
+If the telemetry is defined in a component, add a custom message property called `$.sub` with the name of the component as defined in the device model. To learn more, see [Tutorial: Connect an IoT Plug and Play multiple component device application](tutorial-multiple-components.md). This tutorial shows how to use different programming languages to send telemetry from a component.
+
+> [!IMPORTANT]
+> To display telemetry from components hosted in IoT Edge modules correctly, use [IoT Edge version 1.2.4](https://github.com/Azure/azure-iotedge/releases/tag/1.2.4) or later. If you use an earlier version, telemetry from your components in IoT Edge modules displays as *_unmodeleddata*.
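In the Node.js SDK, for example, attaching the component name looks roughly like this sketch (`thermostat1` is a hypothetical component name; the payload is arbitrary):

```javascript
// Sketch: telemetry from a component carries a '$.sub' message property
// set to the component name as defined in the device model.
const componentName = 'thermostat1';
const payload = { temperature: 21.5 };

// With the 'azure-iot-device' SDK, the telemetry message would be built like this:
// const { Message } = require('azure-iot-device');
// const message = new Message(JSON.stringify(payload));
// message.properties.add('$.sub', componentName); // routes telemetry to the component
// client.sendEvent(message, (err) => { /* handle error */ });
console.log(`$.sub=${componentName} payload=${JSON.stringify(payload)}`);
```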
+
+### Telemetry in inherited interfaces
+
+If the telemetry is defined in an inherited interface, your device sends the telemetry as if it were defined in the root interface. Given the following device model:
+
+```json
+[
+ {
+ "@id": "dtmi:contoso:device;1",
+ "@type": "Interface",
+ "contents": [
+ {
+ "@type": [
+ "Property",
+ "Cloud",
+ "StringValue"
+ ],
+ "displayName": {
+ "en": "Device Name"
+ },
+ "name": "DeviceName",
+ "schema": "string"
+ }
+ ],
+ "displayName": {
+ "en": "Contoso Device"
+ },
+ "extends": [
+ "dtmi:contoso:sensor;1"
+ ],
+ "@context": [
+ "dtmi:iotcentral:context;2",
+ "dtmi:dtdl:context;2"
+ ]
+ },
+ {
+ "@context": [
+ "dtmi:iotcentral:context;2",
+ "dtmi:dtdl:context;2"
+ ],
+ "@id": "dtmi:contoso:sensor;1",
+ "@type": [
+ "Interface",
+ "NamedInterface"
+ ],
+ "contents": [
+ {
+ "@type": [
+ "Telemetry",
+ "NumberValue"
+ ],
+ "displayName": {
+ "en": "Meter Voltage"
+ },
+ "name": "MeterVoltage",
+ "schema": "double"
+ }
+ ],
+ "displayName": {
+ "en": "Contoso Sensor"
+ },
+ "name": "ContosoSensor"
+ }
+]
+```
+
+The device sends meter voltage telemetry using the following payload. The device doesn't include the interface name in the payload:
+
+```json
+{
+ "MeterVoltage": 5.07
+}
+```
+
+### Primitive types
+
+This section shows examples of primitive telemetry types that a device can stream.
+
+The following snippet from a device model shows the definition of a `boolean` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "BooleanTelemetry"
+ },
+ "name": "BooleanTelemetry",
+ "schema": "boolean"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example:
+
+```json
+{ "BooleanTelemetry": true }
+```
+
+The following snippet from a device model shows the definition of a `string` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "StringTelemetry"
+ },
+ "name": "StringTelemetry",
+ "schema": "string"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example:
+
+```json
+{ "StringTelemetry": "A string value - could be a URL" }
+```
+
+The following snippet from a device model shows the definition of an `integer` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "IntegerTelemetry"
+ },
+ "name": "IntegerTelemetry",
+ "schema": "integer"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example:
+
+```json
+{ "IntegerTelemetry": 23 }
+```
+
+The following snippet from a device model shows the definition of a `double` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "DoubleTelemetry"
+ },
+ "name": "DoubleTelemetry",
+ "schema": "double"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example:
+
+```json
+{ "DoubleTelemetry": 56.78 }
+```
+
+The following snippet from a device model shows the definition of a `dateTime` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "DateTimeTelemetry"
+ },
+ "name": "DateTimeTelemetry",
+ "schema": "dateTime"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example - `DateTime` types must be in ISO 8601 format:
+
+```json
+{ "DateTimeTelemetry": "2020-08-30T19:16:13.853Z" }
+```
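For instance, JavaScript's `Date.prototype.toISOString()` produces a value in the expected format (a sketch; the timestamp itself is arbitrary):

```javascript
// Sketch: Date.toISOString() emits ISO 8601 ('YYYY-MM-DDTHH:mm:ss.sssZ'),
// the format expected for dateTime telemetry.
const timestamp = new Date(Date.UTC(2020, 7, 30, 19, 16, 13, 853)); // months are 0-based
const payload = { DateTimeTelemetry: timestamp.toISOString() };

console.log(JSON.stringify(payload)); // {"DateTimeTelemetry":"2020-08-30T19:16:13.853Z"}
```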
+
+The following snippet from a device model shows the definition of a `duration` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "DurationTelemetry"
+ },
+ "name": "DurationTelemetry",
+ "schema": "duration"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example - durations must be in ISO 8601 format:
+
+```json
+{ "DurationTelemetry": "PT10H24M6.169083011336625S" }
+```
+
+### Complex types
+
+This section shows examples of complex telemetry types that a device can stream.
+
+The following snippet from a device model shows the definition of an `Enum` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "EnumTelemetry"
+ },
+ "name": "EnumTelemetry",
+ "schema": {
+ "@type": "Enum",
+ "displayName": {
+ "en": "Enum"
+ },
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Item1"
+ },
+ "enumValue": 0,
+ "name": "Item1"
+ },
+ {
+ "displayName": {
+ "en": "Item2"
+ },
+ "enumValue": 1,
+ "name": "Item2"
+ },
+ {
+ "displayName": {
+ "en": "Item3"
+ },
+ "enumValue": 2,
+ "name": "Item3"
+ }
+ ]
+ }
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example. Possible values are `0`, `1`, and `2`, which display in IoT Central as `Item1`, `Item2`, and `Item3`:
+
+```json
+{ "EnumTelemetry": 1 }
+```
+
+The following snippet from a device model shows the definition of an `Object` telemetry type. This object has three fields with types `dateTime`, `integer`, and `Enum`:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "ObjectTelemetry"
+ },
+ "name": "ObjectTelemetry",
+ "schema": {
+ "@type": "Object",
+ "displayName": {
+ "en": "Object"
+ },
+ "fields": [
+ {
+ "displayName": {
+ "en": "Property1"
+ },
+ "name": "Property1",
+ "schema": "dateTime"
+ },
+ {
+ "displayName": {
+ "en": "Property2"
+ },
+ "name": "Property2",
+ "schema": "integer"
+ },
+ {
+ "displayName": {
+ "en": "Property3"
+ },
+ "name": "Property3",
+ "schema": {
+ "@type": "Enum",
+ "displayName": {
+ "en": "Enum"
+ },
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Item1"
+ },
+ "enumValue": 0,
+ "name": "Item1"
+ },
+ {
+ "displayName": {
+ "en": "Item2"
+ },
+ "enumValue": 1,
+ "name": "Item2"
+ },
+ {
+ "displayName": {
+ "en": "Item3"
+ },
+ "enumValue": 2,
+ "name": "Item3"
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example. `DateTime` types must be ISO 8601 compliant. Possible values for `Property3` are `0`, `1`, and `2`, which display in IoT Central as `Item1`, `Item2`, and `Item3`:
+
+```json
+{
+ "ObjectTelemetry": {
+ "Property1": "2020-09-09T03:36:46.195Z",
+ "Property2": 37,
+ "Property3": 2
+ }
+}
+```
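The object payload above can be assembled like this; the field names come from the model, while the helper itself is hypothetical:

```python
import json
from datetime import datetime, timezone

# Sketch of assembling the ObjectTelemetry message above; the field
# names come from the model and the helper itself is hypothetical.
def build_object_telemetry(when, count, level):
    ts = when.astimezone(timezone.utc).isoformat(timespec="milliseconds")
    return json.dumps({"ObjectTelemetry": {
        "Property1": ts.replace("+00:00", "Z"),
        "Property2": count,
        "Property3": level,  # must be 0, 1, or 2 per the Enum field
    }})

msg = build_object_telemetry(
    datetime(2020, 9, 9, 3, 36, 46, 195000, tzinfo=timezone.utc), 37, 2)
```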
+
+The following snippet from a device model shows the definition of a `vector` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "VectorTelemetry"
+ },
+ "name": "VectorTelemetry",
+ "schema": "vector"
+}
+```
+
+A device client should send the telemetry as JSON that looks like the following example:
+
+```json
+{
+ "VectorTelemetry": {
+ "x": 74.72395045538597,
+ "y": 74.72395045538597,
+ "z": 74.72395045538597
+ }
+}
+```
+
+The following snippet from a device model shows the definition of a `geopoint` telemetry type:
+
+```json
+{
+ "@type": "Telemetry",
+ "displayName": {
+ "en": "GeopointTelemetry"
+ },
+ "name": "GeopointTelemetry",
+ "schema": "geopoint"
+}
+```
+
+> [!NOTE]
+> The **geopoint** schema type is part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL. IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+
+A device client should send the telemetry as JSON that looks like the following example. IoT Central displays the value as a pin on a map:
+
+```json
+{
+ "GeopointTelemetry": {
+ "lat": 47.64263,
+ "lon": -122.13035,
+ "alt": 0
+ }
+}
+```
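A client might build the geopoint message with a basic coordinate range check before sending; this helper is illustrative, not part of any SDK:

```python
import json

# Hypothetical helper: build the geopoint message above, with a basic
# sanity check on the coordinate ranges before sending.
def build_geopoint_telemetry(name, lat, lon, alt=0):
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        raise ValueError("latitude or longitude out of range")
    return json.dumps({name: {"lat": lat, "lon": lon, "alt": alt}})

msg = build_geopoint_telemetry("GeopointTelemetry", 47.64263, -122.13035)
```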
+
+### Event and state types
+
+This section shows examples of telemetry events and states that a device sends to an IoT Central application.
+
+> [!NOTE]
+> The **event** and **state** schema types are part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL.
+
+The following snippet from a device model shows the definition of an `integer` event type:
+
+```json
+{
+ "@type": [
+ "Telemetry",
+ "Event"
+ ],
+ "displayName": {
+ "en": "IntegerEvent"
+ },
+ "name": "IntegerEvent",
+ "schema": "integer"
+}
+```
+
+A device client should send the event data as JSON that looks like the following example:
+
+```json
+{ "IntegerEvent": 74 }
+```
+
+The following snippet from a device model shows the definition of an `integer` state type:
+
+```json
+{
+ "@type": [
+ "Telemetry",
+ "State"
+ ],
+ "displayName": {
+ "en": "IntegerState"
+ },
+ "name": "IntegerState",
+ "schema": {
+ "@type": "Enum",
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Level1"
+ },
+ "enumValue": 1,
+ "name": "Level1"
+ },
+ {
+ "displayName": {
+ "en": "Level2"
+ },
+ "enumValue": 2,
+ "name": "Level2"
+ },
+ {
+ "displayName": {
+ "en": "Level3"
+ },
+ "enumValue": 3,
+ "name": "Level3"
+ }
+ ]
+ }
+}
+```
+
+A device client should send the state as JSON that looks like the following example. Possible integer state values are `1`, `2`, or `3`:
+
+```json
+{ "IntegerState": 2 }
+```
+
+## Properties
+
+To learn more about the DTDL property naming rules, see [DTDL > Property](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#property). You can't start a property name with the `_` character.
+
+### Properties in components
+
+If the property is defined in a component, wrap the property in the component name. The following example sets the `maxTempSinceLastReboot` in the `thermostat2` component. The marker `__t` indicates that this section defines a component:
+
+```json
+{
+ "thermostat2" : {
+ "__t" : "c",
+ "maxTempSinceLastReboot" : 38.7
+ }
+}
+```
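The component wrapping can be sketched as a small helper (hypothetical, standard library only):

```python
import json

# Illustrative sketch: wrap reported properties in a component section
# with the "__t": "c" marker, as in the thermostat2 example above.
def wrap_in_component(component_name, properties):
    section = {"__t": "c"}
    section.update(properties)
    return {component_name: section}

patch = wrap_in_component("thermostat2", {"maxTempSinceLastReboot": 38.7})
print(json.dumps(patch))
```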
+
+To learn more, see [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md).
+
+### Primitive types
+
+This section shows examples of primitive property types that a device sends to a service.
+
+The following snippet from a device model shows the definition of a `boolean` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "BooleanProperty"
+ },
+ "name": "BooleanProperty",
+ "schema": "boolean",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{ "BooleanProperty": false }
+```
+
+The following snippet from a device model shows the definition of a `long` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "LongProperty"
+ },
+ "name": "LongProperty",
+ "schema": "long",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{ "LongProperty": 439 }
+```
+
+The following snippet from a device model shows the definition of a `date` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "DateProperty"
+ },
+ "name": "DateProperty",
+ "schema": "date",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin. `Date` types must be ISO 8601 compliant:
+
+```json
+{ "DateProperty": "2020-05-17" }
+```
+
+The following snippet from a device model shows the definition of a `duration` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "DurationProperty"
+ },
+ "name": "DurationProperty",
+ "schema": "duration",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin - durations must be ISO 8601 Duration compliant:
+
+```json
+{ "DurationProperty": "PT10H24M6.169083011336625S" }
+```
+
+The following snippet from a device model shows the definition of a `float` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "FloatProperty"
+ },
+ "name": "FloatProperty",
+ "schema": "float",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{ "FloatProperty": 1.9 }
+```
+
+The following snippet from a device model shows the definition of a `string` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "StringProperty"
+ },
+ "name": "StringProperty",
+ "schema": "string",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{ "StringProperty": "A string value - could be a URL" }
+```
+
+### Complex types
+
+This section shows examples of complex property types that a device sends to a service.
+
+The following snippet from a device model shows the definition of an `Enum` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "EnumProperty"
+ },
+ "name": "EnumProperty",
+ "writable": false,
+ "schema": {
+ "@type": "Enum",
+ "displayName": {
+ "en": "Enum"
+ },
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Item1"
+ },
+ "enumValue": 0,
+ "name": "Item1"
+ },
+ {
+ "displayName": {
+ "en": "Item2"
+ },
+ "enumValue": 1,
+ "name": "Item2"
+ },
+ {
+ "displayName": {
+ "en": "Item3"
+ },
+ "enumValue": 2,
+ "name": "Item3"
+ }
+ ]
+ }
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin. Possible values are `0`, `1`, and `2`, which display in IoT Central as `Item1`, `Item2`, and `Item3`:
+
+```json
+{ "EnumProperty": 1 }
+```
+
+The following snippet from a device model shows the definition of an `Object` property type. This object has two fields with types `string` and `integer`:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "ObjectProperty"
+ },
+ "name": "ObjectProperty",
+ "writable": false,
+ "schema": {
+ "@type": "Object",
+ "displayName": {
+ "en": "Object"
+ },
+ "fields": [
+ {
+ "displayName": {
+ "en": "Field1"
+ },
+ "name": "Field1",
+ "schema": "integer"
+ },
+ {
+ "displayName": {
+ "en": "Field2"
+ },
+ "name": "Field2",
+ "schema": "string"
+ }
+ ]
+ }
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{
+ "ObjectProperty": {
+ "Field1": 37,
+ "Field2": "A string value"
+ }
+}
+```
+
+The following snippet from a device model shows the definition of a `vector` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "VectorProperty"
+ },
+ "name": "VectorProperty",
+ "schema": "vector",
+ "writable": false
+}
+```
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{
+ "VectorProperty": {
+ "x": 74.72395045538597,
+ "y": 74.72395045538597,
+ "z": 74.72395045538597
+ }
+}
+```
+
+The following snippet from a device model shows the definition of a `geopoint` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "GeopointProperty"
+ },
+ "name": "GeopointProperty",
+ "schema": "geopoint",
+ "writable": false
+}
+```
+
+> [!NOTE]
+> The **geopoint** schema type is part of the [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md) to DTDL. IoT Central currently supports the **geopoint** schema type and the **location** semantic type for backwards compatibility.
+
+A device client should send a JSON payload that looks like the following example as a reported property in the device twin:
+
+```json
+{
+ "GeopointProperty": {
+ "lat": 47.64263,
+ "lon": -122.13035,
+ "alt": 0
+ }
+}
+```
+
+### Writable property types
+
+This section shows examples of writable property types that a device receives from a service.
+
+If the writable property is defined in a component, the desired property message includes the component name. The following example shows the message requesting the device to update the `targetTemperature` in the `thermostat2` component. The marker `__t` indicates that this section defines a component:
+
+```json
+{
+ "thermostat2": {
+ "targetTemperature": {
+ "value": 57
+ },
+ "__t": "c"
+ },
+ "$version": 3
+}
+```
+
+To learn more, see [Connect an IoT Plug and Play multiple component device applications](tutorial-multiple-components.md).
+
+The device or module should confirm that it received the property by sending a reported property. The reported property should include:
+
+* `value` - the actual value of the property (typically the received value, but the device may decide to report a different value).
+* `ac` - an acknowledgment code that uses an HTTP status code.
+* `av` - an acknowledgment version that refers to the `$version` of the desired property. You can find this value in the desired property JSON payload.
+* `ad` - an optional acknowledgment description.
+
+To learn more about these fields, see [IoT Plug and Play conventions > Acknowledgment responses](concepts-convention.md#acknowledgment-responses).
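The four acknowledgment fields can be assembled with a small helper; this is a sketch of the convention, not SDK code:

```python
import json

# Sketch (not SDK code): build the reported-property acknowledgment
# patch with the value, ac, av, and ad fields described above.
def build_ack(name, value, version, code=200, description="completed"):
    return {name: {
        "value": value,
        "ac": code,
        "av": version,
        "ad": description,
    }}

ack = build_ack("StringPropertyWritable", "A string from IoT Central", 7)
print(json.dumps(ack))
```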
+
+The following snippet from a device model shows the definition of a writable `string` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "StringPropertyWritable"
+ },
+ "name": "StringPropertyWritable",
+ "writable": true,
+ "schema": "string"
+}
+```
+
+The device receives the following payload from the service:
+
+```json
+{
+ "StringPropertyWritable": "A string from IoT Central",
+ "$version": 7
+}
+```
+
+The device should send the following JSON payload to the service after it processes the update. This message includes the version number of the original update received from the service.
+
+> [!TIP]
+> If the service is IoT Central, it marks the property as **synced** in the UI when it receives this message:
+
+```json
+{
+ "StringPropertyWritable": {
+ "value": "A string from IoT Central",
+ "ac": 200,
+ "ad": "completed",
+ "av": 7
+ }
+}
+```
+
+The following snippet from a device model shows the definition of a writable `Enum` property type:
+
+```json
+{
+ "@type": "Property",
+ "displayName": {
+ "en": "EnumPropertyWritable"
+ },
+ "name": "EnumPropertyWritable",
+ "writable": true,
+ "schema": {
+ "@type": "Enum",
+ "displayName": {
+ "en": "Enum"
+ },
+ "valueSchema": "integer",
+ "enumValues": [
+ {
+ "displayName": {
+ "en": "Item1"
+ },
+ "enumValue": 0,
+ "name": "Item1"
+ },
+ {
+ "displayName": {
+ "en": "Item2"
+ },
+ "enumValue": 1,
+ "name": "Item2"
+ },
+ {
+ "displayName": {
+ "en": "Item3"
+ },
+ "enumValue": 2,
+ "name": "Item3"
+ }
+ ]
+ }
+}
+```
+
+The device receives the following payload from the service:
+
+```json
+{
+ "EnumPropertyWritable": 1,
+ "$version": 10
+}
+```
+
+The device should send the following JSON payload to the service after it processes the update. This message includes the version number of the original update received from the service.
+
+> [!TIP]
+> If the service is IoT Central, it marks the property as **synced** in the UI when it receives this message:
+
+```json
+{
+ "EnumPropertyWritable": {
+ "value": 1,
+ "ac": 200,
+ "ad": "completed",
+ "av": 10
+ }
+}
+```
+
+## Commands
+
+To learn more about the DTDL command naming rules, see [DTDL > Command](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#command). You can't start a command name with the `_` character.
+
+If the command is defined in a component, the name of the command the device receives includes the component name. For example, if the command is called `getMaxMinReport` and the component is called `thermostat2`, the device receives a request to execute a command called `thermostat2*getMaxMinReport`.
+
+The following snippet from a device model shows the definition of a command that has no parameters and that doesn't expect the device to return anything:
+
+```json
+{
+ "@type": "Command",
+ "displayName": {
+ "en": "CommandBasic"
+ },
+ "name": "CommandBasic"
+}
+```
+
+The device receives an empty payload in the request and should return an empty payload in the response with a `200` HTTP response code to indicate success.
+
+The following snippet from a device model shows the definition of a command that has an integer parameter and that expects the device to return an integer value:
+
+```json
+{
+ "@type": "Command",
+ "request": {
+ "@type": "CommandPayload",
+ "displayName": {
+ "en": "RequestParam"
+ },
+ "name": "RequestParam",
+ "schema": "integer"
+ },
+ "response": {
+ "@type": "CommandPayload",
+ "displayName": {
+ "en": "ResponseParam"
+ },
+ "name": "ResponseParam",
+ "schema": "integer"
+ },
+ "displayName": {
+ "en": "CommandSimple"
+ },
+ "name": "CommandSimple"
+}
+```
+
+The device receives an integer value as the request payload. The device should return an integer value as the response payload with a `200` HTTP response code to indicate success.
+
+The following snippet from a device model shows the definition of a command that has an object parameter and that expects the device to return an object. In this example, both objects have integer and string fields:
+
+```json
+{
+ "@type": "Command",
+ "request": {
+ "@type": "CommandPayload",
+ "displayName": {
+ "en": "RequestParam"
+ },
+ "name": "RequestParam",
+ "schema": {
+ "@type": "Object",
+ "displayName": {
+ "en": "Object"
+ },
+ "fields": [
+ {
+ "displayName": {
+ "en": "Field1"
+ },
+ "name": "Field1",
+ "schema": "integer"
+ },
+ {
+ "displayName": {
+ "en": "Field2"
+ },
+ "name": "Field2",
+ "schema": "string"
+ }
+ ]
+ }
+ },
+ "response": {
+ "@type": "CommandPayload",
+ "displayName": {
+ "en": "ResponseParam"
+ },
+ "name": "ResponseParam",
+ "schema": {
+ "@type": "Object",
+ "displayName": {
+ "en": "Object"
+ },
+ "fields": [
+ {
+ "displayName": {
+ "en": "Field1"
+ },
+ "name": "Field1",
+ "schema": "integer"
+ },
+ {
+ "displayName": {
+ "en": "Field2"
+ },
+ "name": "Field2",
+ "schema": "string"
+ }
+ ]
+ }
+ },
+ "displayName": {
+ "en": "CommandComplex"
+ },
+ "name": "CommandComplex"
+}
+```
+
+The following snippet shows an example request payload sent to the device:
+
+```json
+{ "Field1": 56, "Field2": "A string value" }
+```
+
+The following snippet shows an example response payload sent from the device. Use a `200` HTTP response code to indicate success:
+
+```json
+{ "Field1": 87, "Field2": "Another string value" }
+```
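A device-side handler for this command might look like the following sketch; the handler name and the response field values are illustrative, not prescribed by the model:

```python
import json

# Hypothetical command handler for CommandComplex: parse the request
# object and return a (status, response payload) pair. The response
# field values here are illustrative.
def handle_command_complex(request_json):
    request = json.loads(request_json)
    response = {
        "Field1": request["Field1"],
        "Field2": "Processed " + request["Field2"],
    }
    return 200, json.dumps(response)  # 200 indicates success

status, body = handle_command_complex(
    '{"Field1": 56, "Field2": "A string value"}')
```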
+
+> [!TIP]
+> IoT Central has its own conventions for implementing [Long-running commands](../iot-central/core/howto-use-commands.md#long-running-commands) and [Offline commands](../iot-central/core/howto-use-commands.md#offline-commands).
+
+## Next steps
+
+Now that you've learned about device payloads, a suggested next step is to read the [Device developer guide](concepts-developer-guide-device.md).
iot-develop Concepts Modeling Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-modeling-guide.md
This article describes how to design and author your own models and covers topic
To learn more, see the [Digital Twins Definition Language](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/README.md) specification.
+> [!NOTE]
+> IoT Central currently supports [DTDL v2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) with an [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
+ ## Model structure Properties, telemetry, and commands are grouped into interfaces. This section describes how you can use interfaces to describe simple and complex models by using components and inheritance.
To learn more, see [Device models repository](concepts-model-repository.md).
Applications, such as IoT Central, use device models. In IoT Central, a model is part of the device template that describes the capabilities of the device. IoT Central uses the device template to dynamically build a UI for the device, including dashboards and analytics.
+> [!NOTE]
+> IoT Central defines some extensions to the DTDL language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
+ A custom solution can use the [digital twins model parser](concepts-model-parser.md) to understand the capabilities of a device that implements the model. To learn more, see [Use IoT Plug and Play models in an IoT solution](concepts-model-discovery.md). ### Version
iot-develop Quickstart Send Telemetry Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-send-telemetry-central.md
Last updated 04/27/2023 zone_pivot_groups: iot-develop-set1-+ #Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Central, and send telemetry.
iot-develop Quickstart Send Telemetry Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-send-telemetry-iot-hub.md
Last updated 04/27/2023 zone_pivot_groups: iot-develop-set1-+ ms.devlang: azurecli #Customer intent: As a device application developer, I want to learn the basic workflow of using an Azure IoT device SDK to build a client app on a device, connect the device securely to Azure IoT Hub, and send telemetry.
iot-develop Tutorial Connect Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-connect-device.md
Last updated 11/17/2022 + zone_pivot_groups: programming-languages-set-twenty-seven #- id: programming-languages-set-twenty-seven
-## Owner: dobett
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-ansi-c
-# Title: C
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
-# Title: Python
-# - id: programming-language-embedded-c
# Title: Embedded C- #Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
iot-develop Tutorial Multiple Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-multiple-components.md
Last updated 11/17/2022 + zone_pivot_groups: programming-languages-set-twenty-six #- id: programming-languages-set-twenty-six
-## Owner: dobett
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-ansi-c
-# Title: C
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
# Title: Python- #Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and using multiple components to send properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
iot-develop Tutorial Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-service.md
Last updated 11/17/2022
-+ zone_pivot_groups: programming-languages-set-ten # - id: programming-languages-set-ten
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
# Title: Python- #Customer intent: As a solution builder, I want to connect to and interact with an IoT Plug and Play device that's connected to my solution. For example, to collect telemetry from the device or to control the behavior of the device.
iot-develop Tutorial Use Mqtt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/tutorial-use-mqtt.md
Last updated 03/15/2023 + - #Customer intent: As a device builder, I want to see how I can use the MQTT protocol to create an IoT device client without using the Azure IoT Device SDKs.
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
Title: Access control and security for DPS with Azure AD
-description: Control access to Azure IoT Hub Device Provisioning Service (DPS) (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC.
+description: Control access to Azure IoT Hub Device Provisioning Service (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC.
Last updated 02/07/2022-+ # Control access to Azure IoT Hub Device Provisioning Service (DPS) by using Azure Active Directory (preview)
iot-dps Concepts Control Access Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps.md
Last updated 04/20/2022-+ # Control access to Azure IoT Hub Device Provisioning Service (DPS)
iot-dps Concepts Custom Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-custom-allocation.md
Last updated 09/09/2022 -+ # Understand custom allocation policies with Azure IoT Hub Device Provisioning Service
iot-dps How To Control Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-control-access.md
Last updated 09/22/2021 -+ # Control access to Azure IoT Hub Device Provisioning Service (DPS) with shared access signatures and security tokens
iot-dps How To Legacy Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-legacy-device-symm-key.md
Last updated 03/09/2023 + zone_pivot_groups: iot-dps-set1
iot-dps Quick Create Simulated Device Symm Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-symm-key.md
-+ zone_pivot_groups: iot-dps-set1 #Customer intent: As a new IoT developer, I want to connect a device to an IoT hub using the SDK, to learn how secure provisioning works with symmetric keys.
iot-dps Quick Create Simulated Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-tpm.md
zone_pivot_groups: iot-dps-set1-+ #Customer intent: As a new IoT developer, I want simulate a TPM device to learn how secure provisioning works.
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
-+ zone_pivot_groups: iot-dps-set1 #Customer intent: As a new IoT developer, I want to simulate an X.509 certificate device using the SDK, to learn how secure provisioning works.
iot-dps Quick Enroll Device Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-tpm.md
ms.devlang: csharp, java, nodejs-+ zone_pivot_groups: iot-dps-set2
iot-dps Quick Enroll Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-enroll-device-x509.md
ms.devlang: csharp, java, nodejs-+ zone_pivot_groups: iot-dps-set2
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Last updated 11/01/2022
-+ zone_pivot_groups: iot-dps-set1 #Customer intent: As a new IoT developer, I want provision groups of devices using X.509 certificate chains and the Azure IoT device SDK.
iot-dps Tutorial Group Enrollments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-group-enrollments.md
ms.devlang: java-+
If you plan to continue working on and exploring the device client sample, do no
In this tutorial, you've created a simulated X.509 device on your Windows machine and provisioned it to your IoT hub using the Azure IoT Hub Device Provisioning Service and enrollment groups. To learn more about your X.509 device, continue to device concepts. > [!div class="nextstepaction"]
-> [IoT Hub Device Provisioning Service concepts](concepts-service.md)
+> [IoT Hub Device Provisioning Service concepts](concepts-service.md)
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
Last updated 01/09/2023
-+ # Connect a downstream device to an Azure IoT Edge gateway
iot-edge How To Deploy Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-blob.md
Title: Deploy blob storage on module to your device - Azure IoT Edge
description: Deploy an Azure Blob Storage module to your IoT Edge device to store data at the edge. Previously updated : 9/22/2022 Last updated : 06/06/2023
A deployment manifest is a JSON document that describes which modules to deploy,
#### Add modules
-1. In the **IoT Edge Modules** section of the page, click the **Add** dropdown and select **IoT Edge Module** to display the **Add IoT Edge Module** page.
+1. In the **IoT Edge Modules** section of the page, select the **Add** dropdown and select **IoT Edge Module** to display the **Add IoT Edge Module** page.
-2. On the **Module Settings** tab, provide a name for the module and then specify the container image URI:
+2. On the **Settings** tab, provide a name for the module and then specify the container image URI:
Examples: - **IoT Edge Module Name**: `azureblobstorageoniotedge` - **Image URI**: `mcr.microsoft.com/azure-blob-storage:latest`
- :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab1.png" alt-text="Screenshot showing the Module Settings tab of the Add I o T Edge Module page. .":::
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab1.png" alt-text="Screenshot showing the Module Settings tab of the Add IoT Edge Module page.":::
Don't select **Add** until you've specified values on the **Module Settings**, **Container Create Options**, and **Module Twin Settings** tabs as described in this procedure.
A deployment manifest is a JSON document that describes which modules to deploy,
3. Open the **Container Create Options** tab.
- :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab3.png" alt-text="Screenshot showing the Container Create Options tab of the Add I o T Edge Module page..":::
-
- Copy and paste the following JSON into the box, to provide storage account information and a mount for the storage on your device.
+1. Copy and paste the following JSON into the box, to provide storage account information and a mount for the storage on your device.
```json { "Env":[
- "LOCAL_STORAGE_ACCOUNT_NAME=<your storage account name>",
- "LOCAL_STORAGE_ACCOUNT_KEY=<your storage account key>"
+ "LOCAL_STORAGE_ACCOUNT_NAME=<local storage account name>",
+ "LOCAL_STORAGE_ACCOUNT_KEY=<local storage account key>"
], "HostConfig":{ "Binds":[
- "<storage mount>"
+ "<mount>"
], "PortBindings":{ "11002/tcp":[{"HostPort":"11002"}]
A deployment manifest is a JSON document that describes which modules to deploy,
} ```
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab3.png" alt-text="Screenshot showing the Container Create Options tab of the Add IoT Edge Module page.":::
+ 4. Update the JSON that you copied into **Container Create Options** with the following information:
- - Replace `<your storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
+ - Replace `<local storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
- - Replace `<your storage account key>` with a 64-byte base64 key. You can generate a key with tools like [GeneratePlus](https://generate.plus/en/base64). You'll use these credentials to access the blob storage from other modules.
+ - Replace `<local storage account key>` with a 64-byte base64 key. You can generate a key with tools like [GeneratePlus](https://generate.plus/en/base64). You use these credentials to access the blob storage from other modules.
- - Replace `<storage mount>` according to your container operating system. Provide the name of a [volume](https://docs.docker.com/storage/volumes/) or the absolute path to an existing directory on your IoT Edge device where the blob module will store its data. The storage mount maps a location on your device that you provide to a set location in the module.
+ - Replace `<mount>` according to your container operating system. Provide the name of a [volume](https://docs.docker.com/storage/volumes/) or the absolute path to an existing directory on your IoT Edge device where the blob module stores its data. The storage mount maps a location on your device that you provide to a set location in the module.
- For Linux containers, the format is **\<your storage path or volume>:/blobroot**. For example:
  - use a [volume mount](https://docs.docker.com/storage/volumes/): `my-volume:/blobroot`
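Putting the bullets above together, a filled-in **Container Create Options** value might look like the following sketch. The volume name `my-volume` comes from the Linux example above; the account name is an illustrative placeholder, and the key placeholder is left for you to supply:

```json
{
  "Env": [
    "LOCAL_STORAGE_ACCOUNT_NAME=edgeblob1",
    "LOCAL_STORAGE_ACCOUNT_KEY=<64-byte base64 key>"
  ],
  "HostConfig": {
    "Binds": [
      "my-volume:/blobroot"
    ],
    "PortBindings": {
      "11002/tcp": [
        { "HostPort": "11002" }
      ]
    }
  }
}
```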
A deployment manifest is a JSON document that describes which modules to deploy,
5. On the **Module Twin Settings** tab, copy the following JSON and paste it into the box.
- :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab4.png" alt-text="Screenshot showing the Module Twin Settings tab of the Add I o T Edge Module page.":::
-
- Configure each property with an appropriate value, as indicated by the placeholders. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties as described by [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties).
-
- > [!TIP]
- > The name for your `target` container has naming restrictions, for example using a `$` prefix is unsupported. To see all restrictions, view [Container Names](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#container-names).
- ```json { "deviceAutoDeleteProperties": {
A deployment manifest is a JSON document that describes which modules to deploy,
} ```
+1. Configure each property with an appropriate value, as indicated by the placeholders. If you're using the IoT Edge simulator, set the values to the related environment variables for these properties as described by [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties).
+
+ > [!TIP]
+ > The name for your `target` container has naming restrictions; for example, using a `$` prefix is unsupported. To see all restrictions, view [Container Names](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata#container-names).
+ > [!NOTE]
+ > If your container target is unnamed or null within `storageContainersForUpload`, a default name is assigned to the target. To stop uploading to a container, remove it completely from `storageContainersForUpload`. For more information, see the `deviceToCloudUploadProperties` section of [Store data at the edge with Azure Blob Storage on IoT Edge](how-to-store-data-blob.md#devicetoclouduploadproperties).
+ :::image type="content" source="./media/how-to-deploy-blob/addmodule-tab4.png" alt-text="Screenshot showing the Module Twin Settings tab of the Add IoT Edge Module page.":::
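As a hedged sketch, the module twin settings for this module take roughly the following shape; the values shown are illustrative, and the placeholders are for you to fill in per the [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties) references:

```json
{
  "deviceAutoDeleteProperties": {
    "deleteOn": true,
    "deleteAfterMinutes": 2880,
    "retainWhileUploading": true
  },
  "deviceToCloudUploadProperties": {
    "uploadOn": true,
    "uploadOrder": "OldestFirst",
    "cloudStorageConnectionString": "<azure storage account connection string>",
    "storageContainersForUpload": {
      "<source container name>": {
        "target": "<target container name>"
      }
    },
    "deleteAfterUpload": true
  }
}
```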
+ For information on configuring deviceToCloudUploadProperties and deviceAutoDeleteProperties after your module has been deployed, see [Edit the Module Twin](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Edit-Module-Twin). For more information about desired properties, see [Define or update desired properties](module-composition.md#define-or-update-desired-properties).
+ 6. Select **Add**.
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
1. Open *deployment.template.json* in your new solution workspace and find the **modules** section. Make the following configuration changes:
- 1. Delete the **SimulatedTemperatureSensor** module, as it's not necessary for this deployment.
+ 1. Copy and paste the following code into the `createOptions` field for the blob storage module:
- 1. Copy and paste the following code into the `createOptions` field:
+ ```json
```json "Env":[
- "LOCAL_STORAGE_ACCOUNT_NAME=<your storage account name>",
- "LOCAL_STORAGE_ACCOUNT_KEY=<your storage account key>"
+ "LOCAL_STORAGE_ACCOUNT_NAME=<local storage account name>",
+ "LOCAL_STORAGE_ACCOUNT_KEY=<local storage account key>"
], "HostConfig":{
- "Binds": ["<storage mount>"],
+ "Binds": ["<mount>"],
"PortBindings":{ "11002/tcp": [{"HostPort":"11002"}] } } ```
- :::image type="content" source="./media/how-to-deploy-blob/create-options.png" alt-text="Screenshot showing how to update module createOptions - Visual Studio Code .":::
+ :::image type="content" source="./media/how-to-deploy-blob/create-options.png" alt-text="Screenshot showing how to update module createOptions in Visual Studio Code.":::
-1. Replace `<your storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
+1. Replace `<local storage account name>` with a name that you can remember. Account names should be 3 to 24 characters long, with lowercase letters and numbers. No spaces.
-1. Replace `<your storage account key>` with a 64-byte base64 key. You can generate a key with tools like [GeneratePlus](https://generate.plus/en/base64). You'll use these credentials to access the blob storage from other modules.
+1. Replace `<local storage account key>` with a 64-byte base64 key. You can generate a key with tools like [GeneratePlus](https://generate.plus/en/base64). You use these credentials to access the blob storage from other modules.
-1. Replace `<storage mount>` according to your container operating system. Provide the name of a [volume](https://docs.docker.com/storage/volumes/) or the absolute path to a directory on your IoT Edge device where you want the blob module to store its data. The storage mount maps a location on your device that you provide to a set location in the module.
+1. Replace `<mount>` according to your container operating system. Provide the name of a [volume](https://docs.docker.com/storage/volumes/) or the absolute path to a directory on your IoT Edge device where you want the blob module to store its data. The storage mount maps a location on your device that you provide to a set location in the module.
- For Linux containers, the format is **\<your storage path or volume>:/blobroot**. For example:
  - use a [volume mount](https://docs.docker.com/storage/volumes/): `my-volume:/blobroot`
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
> > * IoT Edge does not remove volumes attached to module containers. This behavior is by design, as it allows data to persist across container instances, such as in upgrade scenarios. However, if these volumes are left unused, they may lead to disk space exhaustion and subsequent system errors. If you use Docker volumes in your scenario, we encourage you to use Docker tools such as [docker volume prune](https://docs.docker.com/engine/reference/commandline/volume_prune/) and [docker volume rm](https://docs.docker.com/engine/reference/commandline/volume_rm/) to remove unused volumes, especially for production scenarios.
-1. Configure [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties) for your module by adding the following JSON to the *deployment.template.json* file. Configure each property with an appropriate value and save the file. If you are using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section of [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties)
+1. Configure [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties) for your module by adding the following JSON to the *deployment.template.json* file. Configure each property with an appropriate value and save the file. If you're using the IoT Edge simulator, set the values to the related environment variables for these properties, which you can find in the explanation section of [deviceToCloudUploadProperties](how-to-store-data-blob.md#devicetoclouduploadproperties) and [deviceAutoDeleteProperties](how-to-store-data-blob.md#deviceautodeleteproperties).
```json "<your azureblobstorageoniotedge module name>":{
Azure IoT Edge provides templates in Visual Studio Code to help you develop edge
} ```
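In *deployment.template.json*, these desired properties sit under the module's name key; as a rough sketch under the assumptions noted (the module name placeholder is left as-is, and the property values are illustrative, not prescriptive):

```json
"<your azureblobstorageoniotedge module name>": {
  "properties.desired": {
    "deviceAutoDeleteProperties": {
      "deleteOn": true,
      "deleteAfterMinutes": 2880
    },
    "deviceToCloudUploadProperties": {
      "uploadOn": true,
      "uploadOrder": "OldestFirst",
      "deleteAfterUpload": true
    }
  }
}
```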
- :::image type="content" source="./media/how-to-deploy-blob/devicetocloud-deviceautodelete.png" alt-text="Screenshot showing how to set desired properties for azureblobstorageoniotedge in Visual Studio Code .":::
+ :::image type="content" source="./media/how-to-deploy-blob/devicetocloud-deviceautodelete.png" alt-text="Screenshot showing how to set desired properties for azureblobstorageoniotedge in Visual Studio Code.":::
For information on configuring deviceToCloudUploadProperties and deviceAutoDeleteProperties after your module has been deployed, see [Edit the Module Twin](https://github.com/Microsoft/vscode-azure-iot-toolkit/wiki/Edit-Module-Twin). For more information about container create options, restart policy, and desired status, see [EdgeAgent desired properties](module-edgeagent-edgehub.md#edgeagent-desired-properties).
When you connect to additional blob storage modules, change the endpoint to poin
## Configure proxy support
-If your organization is using a proxy server, you will need to configure proxy support for the edgeAgent and edgeHub runtime modules. This process involves two tasks:
+If your organization is using a proxy server, you need to configure proxy support for the edgeAgent and edgeHub runtime modules. This process involves two tasks:
- Configure the runtime daemons and the IoT Edge agent on the device.
- Set the HTTPS_PROXY environment variable for modules in the deployment manifest JSON file.
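In the deployment manifest, a module's environment variables are set in its `env` object; a minimal sketch, with the proxy host and port left as placeholders:

```json
"env": {
  "HTTPS_PROXY": {
    "value": "http://<proxy host>:<proxy port>"
  }
}
```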
In addition, a blob storage module also requires the HTTPS_PROXY setting in the
1. Add `HTTPS_PROXY` for the **Name** and your proxy URL for the **Value**.
- :::image type="content" source="./media/how-to-deploy-blob/https-proxy-config.png" alt-text="Screenshot showing the Update I o T Edge Module pane where you can enter the specified values.":::
+ :::image type="content" source="./media/how-to-deploy-blob/https-proxy-config.png" alt-text="Screenshot showing the Update IoT Edge Module pane where you can enter the specified values.":::
-1. Click **Update**, then **Review + Create**.
+1. Select **Update**, then **Review + Create**.
1. Verify that the proxy is added to the module in the deployment manifest, and then select **Create**.
iot-hub C2d Messaging Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-dotnet.md
In this article, you learned how to send and receive cloud-to-device messages.
* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md). * To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).-
iot-hub C2d Messaging Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-java.md
ms.devlang: java Last updated 05/30/2023-+ # Send cloud-to-device messages with IoT Hub (Java)
iot-hub Device Management Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-dotnet.md
ms.devlang: csharp Last updated 08/20/2019-+ # Get started with device management (.NET)
iot-hub Device Management Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-management-java.md
ms.devlang: java Previously updated : 08/20/2019- Last updated : 05/30/2023+ # Get started with device management (Java)
iot-hub Device Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-dotnet.md
ms.devlang: csharp Last updated 02/17/2023-+ # Get started with device twins (.NET)
iot-hub Device Twins Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/device-twins-java.md
ms.devlang: java Last updated 02/17/2023-+ # Get started with device twins (Java)
iot-hub File Upload Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-dotnet.md
ms.devlang: csharp Last updated 08/24/2021-+ # Upload files from your device to the cloud with Azure IoT Hub (.NET)
In this article, you learned how to use the file upload feature of IoT Hub to si
* [Azure blob storage API reference](../storage/blobs/reference.md)
-* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
+* [Azure IoT SDKs](iot-hub-devguide-sdks.md)
iot-hub File Upload Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/file-upload-java.md
ms.devlang: java Last updated 07/18/2021-+ # Upload files from your device to the cloud with Azure IoT Hub (Java)
iot-hub How To Collect Device Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-collect-device-logs.md
Last updated 01/20/2023 + zone_pivot_groups: programming-languages-set-twenty-seven #- id: programming-languages-set-twenty-seven
-## Owner: dobett
-# Title: Programming languages
-# prompt: Choose a programming language
-# pivots:
-# - id: programming-language-ansi-c
-# Title: C
-# - id: programming-language-csharp
-# Title: C#
-# - id: programming-language-java
-# Title: Java
-# - id: programming-language-javascript
-# Title: JavaScript
-# - id: programming-language-python
-# Title: Python
-# - id: programming-language-embedded-c
# Title: Embedded C- #Customer intent: As a device builder, I want to see a working IoT Plug and Play device sample connecting to IoT Hub and sending properties and telemetry, and responding to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Last updated 04/28/2022-+ # Control access to IoT Hub using Shared Access Signatures
iot-hub Iot Hub Devguide Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-security.md
Last updated 04/15/2021-+ # Control access to IoT Hub
There are three different ways for controlling access to IoT Hub:
- [Control access to IoT Hub using Azure Active Directory](iot-hub-dev-guide-azure-ad-rbac.md) - [Control access to IoT Hub using shared access signature](iot-hub-dev-guide-sas.md)-- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
+- [Authenticating a device to IoT Hub](iot-hub-dev-guide-sas.md#authenticating-a-device-to-iot-hub)
iot-hub Iot Hub Rm Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-rest.md
ms.devlang: csharp Last updated 08/08/2017-+ # Create an IoT hub using the resource provider REST API (.NET)
To learn more about developing for IoT Hub, see the following articles:
To further explore the capabilities of IoT Hub, see:
-* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
+* [Deploying AI to edge devices with Azure IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Migrate Hub Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-hub-arm.md
+ Last updated 04/14/2023
iot-hub Migrate Hub State Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-hub-state-cli.md
+ Last updated 04/14/2023
iot-hub Module Twins Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md
ms.devlang: csharp Last updated 08/07/2019-+ # Get started with IoT Hub module identity and module twin (.NET)
To continue getting started with IoT Hub and to explore other IoT scenarios, see
* [Getting started with device management](device-management-node.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
+* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Module Twins Portal Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md
ms.devlang: csharp Last updated 08/20/2019-+ # Get started with IoT Hub module identity and module twin using the Azure portal and a .NET device
To continue getting started with IoT Hub and to explore other IoT scenarios, see
* [Getting started with device management (Node.js)](device-management-node.md)
-* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
+* [Getting started with IoT Edge](../iot-edge/quickstart-linux.md)
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
iot-hub Quickstart Control Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device.md
-+ Last updated 02/25/2022 zone_pivot_groups: iot-hub-set1 #Customer intent: As a developer new to IoT Hub, I need to see how to use a service application to control a device connected to the hub.
iot-hub Quickstart Send Telemetry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-send-telemetry-cli.md
Title: Quickstart - Send telemetry to Azure IoT Hub (CLI) quickstart
description: This quickstart shows developers new to IoT Hub how to get started by using the Azure CLI to create an IoT hub, send telemetry, and view messages between a device and the hub. -+ Last updated 11/30/2022
iot-hub Schedule Jobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-dotnet.md
ms.devlang: csharp Last updated 08/20/2019-+ # Schedule and broadcast jobs (.NET)
You are now ready to run the apps.
In this article, you scheduled jobs to run a direct method and update the device twin's properties.
-To continue exploring IoT Hub and device management patterns, update an image in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
+To continue exploring IoT Hub and device management patterns, update an image in [Device Update for Azure IoT Hub tutorial using the Raspberry Pi 3 B+ Reference Image](../iot-hub-device-update/device-update-raspberry-pi.md).
iot-hub Schedule Jobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/schedule-jobs-java.md
ms.devlang: java Last updated 08/16/2019-+ # Schedule and broadcast jobs (Java)
iot-hub Tutorial Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-connectivity.md
description: Tutorial - Use IoT Hub tools to troubleshoot, during development, d
-+ Last updated 02/01/2023 - #Customer intent: As a developer, I want to know what tools I can use to verify connectivity between my IoT devices and my IoT hub.
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
ms.devlang: javascript Last updated 12/15/2022-+ #Customer intent: As a developer, I want to be able to configure my devices from the cloud and receive status and compliance data from my devices.
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-go.md
ms.devlang: golang+ # Quickstart: Azure Key Vault certificate client library for Go
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
Title: Quickstart for Azure Key Vault Certificate client library - Java description: Learn about the Azure Key Vault Certificate client library for Java with the steps in this quickstart. -+ Last updated 11/14/2022
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md
ms.devlang: csharp-+ # Quickstart: Azure Key Vault certificate client library for .NET
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
Last updated 01/17/2023 ms.devlang: csharp--+ #Customer intent: As a developer, I want to use Azure Key Vault to store secrets for my app to help keep them secure.- # Tutorial: Use a managed identity to connect Key Vault to an Azure web app in .NET
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Last updated 03/17/2021 ms.devlang: csharp--+ #Customer intent: As a developer I want to use Azure Key Vault to store secrets for my app, so that they are kept secure. # Tutorial: Use Azure Key Vault with a virtual machine in .NET
When they are no longer needed, delete the virtual machine and your key vault.
## Next steps > [!div class="nextstepaction"]
-> [Azure Key Vault REST API](/rest/api/keyvault/)
+> [Azure Key Vault REST API](/rest/api/keyvault/)
key-vault Hsm Protected Keys Byok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-byok.md
tags: azure-resource-manager+ Last updated 03/07/2023 - # Import HSM-protected keys to Key Vault (BYOK)
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-go.md
ms.devlang: golang+ # Quickstart: Azure Key Vault keys client library for Go
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
Title: Quickstart - Azure Key Vault Key client library for Java description: Provides a quickstart for the Azure Key Vault Keys client library for Java. -+ Last updated 01/04/2023
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-net.md
ms.devlang: csharp-+ # Quickstart: Azure Key Vault key client library for .NET
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
ms.devlang: golang+ # Quickstart: Manage secrets by using the Azure Key Vault Go client library
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Title: Quickstart - Azure Key Vault Secret client library for Java description: Provides a quickstart for the Azure Key Vault Secret client library for Java. -+ Last updated 01/11/2023
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
ms.devlang: csharp-+ # Quickstart: Azure Key Vault secret client library for .NET
key-vault Storage Keys Sas Tokens Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/storage-keys-sas-tokens-code.md
Last updated 01/11/2023 ms.devlang: csharp--+ # Customer intent: As a developer I want storage credentials and SAS tokens to be managed securely by Azure Key Vault. # Create SAS definition and fetch shared access signature tokens in code (legacy)
For guide on how to use retrieved from Key Vault SAS token to access Azure Stora
## Next steps - Learn how to [Grant limited access to Azure Storage resources using SAS](../../storage/common/storage-sas-overview.md). - Learn how to [Manage storage account keys with Key Vault and the Azure CLI](overview-storage-keys.md) or [Azure PowerShell](overview-storage-keys-powershell.md).-- See [Managed storage account key samples](https://github.com/Azure-Samples?utf8=%E2%9C%93&q=key+vault+storage&type=&language=)
+- See [Managed storage account key samples](https://github.com/Azure-Samples?utf8=%E2%9C%93&q=key+vault+storage&type=&language=)
lab-services Class Type Jupyter Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-type-jupyter-notebook.md
description: Learn how to set up a lab VM in Azure Lab Services to teach data science using Python and Jupyter Notebooks. +
lab-services Class Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/class-types.md
description: Learn about different example class types for which you can set up labs using Azure Lab Services. +
lab-services How To Create Lab Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-bicep.md
description: Learn how to create an Azure Lab Services lab by using Bicep. Last updated 05/23/2022-+ # Create a lab in Azure Lab Services using a Bicep file
lab-services How To Create Lab Plan Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-bicep.md
description: Learn how to create an Azure Lab Services lab plan by using Bicep. Last updated 05/23/2022-+ # Create a lab plan in Azure Lab Services using a Bicep file
lab-services How To Create Lab Plan Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-powershell.md
description: Learn how to create an Azure Lab Services lab plan using PowerShell
Last updated 06/15/2022-+ # Create a lab plan in Azure Lab Services using PowerShell and the Azure modules
lab-services How To Create Lab Plan Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-python.md
description: Learn how to create an Azure Lab Services lab plan using Python and the Azure Python SDK. + Last updated 02/15/2022
if __name__ == "__main__":
In this article, you created a resource group and a lab plan. As an admin, you can learn more about [Azure PowerShell module](/powershell/azure) and [Az.LabServices cmdlets](/powershell/module/az.labservices/). > [!div class="nextstepaction"]
-> [Create a lab using Python and the Azure Python SDK](how-to-create-lab-python.md)
+> [Create a lab using Python and the Azure Python SDK](how-to-create-lab-python.md)
lab-services How To Create Lab Plan Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-template.md
description: Learn how to create an Azure Lab Services lab plan by using Azure Resource Manager template (ARM template). -+ Last updated 06/04/2022
lab-services How To Create Lab Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-python.md
description: Learn how to create an Azure Lab Services lab using Python and the Azure Python libraries (SDK). + Last updated 02/15/2022
lab-services How To Create Lab Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-template.md
description: Learn how to create an Azure Lab Services lab by using Azure Resource Manager template (ARM template). -+ Last updated 05/10/2022
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
lighthouse Monitor At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/how-to/monitor-at-scale.md
Title: Monitor delegated resources at scale
description: Azure Lighthouse helps you use Azure Monitor Logs in a scalable way across customer tenants. Last updated 05/23/2023 -+ # Monitor delegated resources at scale
alertsmanagementresources
- Try out the [Activity Logs by Domain](https://github.com/Azure/Azure-Lighthouse-samples/tree/master/templates/workbook-activitylogs-by-domain) workbook on GitHub. - Explore this [MVP-built sample workbook](https://github.com/scautomation/Azure-Automation-Update-Management-Workbooks), which tracks patch compliance reporting by [querying Update Management logs](../../automation/update-management/query-logs.md) across multiple Log Analytics workspaces.-- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).
+- Learn about other [cross-tenant management experiences](../concepts/cross-tenant-management-experience.md).
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
Last updated 04/10/2023 -+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs. # Quickstart: Create an internal basic load balancer to load balance VMs by using the Azure CLI
load-balancer Load Balancer Multiple Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-multiple-ip-powershell.md
Last updated 09/25/2017 -+ # Load balancing on multiple IP configurations using PowerShell
load-balancer Python Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/python-samples.md
description: With these samples, load balance traffic to multiple websites. Depl
documentationcenter: load-balancer -+ Last updated 02/28/2023 - # Python Samples for Azure Load Balancer
The following table includes links to code samples built using Python.
| Script | Description | |-|-| | [Getting Started with Azure Resource Manager for public and internal load balancers in Python](/samples/azure-samples/azure-samples-python-management/network-python-manage-loadbalancer) | Creates virtual machines in a load-balanced configuration. Sample includes internal and public load balancers. |--
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Last updated 05/01/2023 -+ #Customer intent: I want to create a load balancer so that I can load balance internal traffic to VMs.
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Last updated 03/16/2022 -+ #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
load-balancer Tutorial Deploy Cross Region Load Balancer Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-deploy-cross-region-load-balancer-template.md
Last updated 04/12/2023-+ #Customer intent: As a administrator, I want to deploy a cross-region load balancer for global high availability of my application or service.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration
Last updated 04/13/2023-+ # Authenticate access to Azure resources with managed identities in Azure Logic Apps
logic-apps Logic Apps Add Run Inline Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-add-run-inline-code.md
ms.suite: integration
Last updated 10/01/2022-+ # Run code snippets in workflows with Inline Code operations in Azure Logic Apps
For more information about the **Execute JavaScript Code** action's structure an
## Next steps * [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
-* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
+* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md)
logic-apps Logic Apps Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-azure-functions.md
ms.suite: integration
Last updated 03/07/2023-+ # Create and run code from workflows in Azure Logic Apps using Azure Functions
logic-apps Logic Apps Workflow Actions Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-workflow-actions-triggers.md
ms.suite: integration
Last updated 08/20/2022-+ # Schema reference guide for trigger and action types in Azure Logic Apps
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023 ms.suite: integration
machine-learning Create Python Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/create-python-model.md
description: Learn how to use the Create Python Model component in Azure Machine
+
This article shows how to use **Create Python Model** with a simple pipeline. He
## Next steps
-See the [set of components available](component-reference.md) to Azure Machine Learning.
+See the [set of components available](component-reference.md) to Azure Machine Learning.
machine-learning Designer Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/designer-error-codes.md
-+ Last updated 03/25/2021
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data.md
Last updated 01/23/2023-+ #Customer intent: As an experienced Python developer, I need secure access to my data in my Azure storage solutions, and I need to use that data to accomplish my machine learning tasks.
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
Last updated 08/17/2022-+ #Customer intent: As a data scientist, I want to learn about machine learning fairness and how to assess and mitigate unfairness in machine learning models.
machine-learning Concept Machine Learning Registries Mlops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-machine-learning-registries-mlops.md
Last updated 09/09/2022 -+ # Machine Learning registries for MLOps
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Last updated 05/10/2022-+ monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
Last updated 11/04/2022 -+ # From artifacts to models in MLflow
There are two workflows available for loading models:
## Start logging models We recommend that you start taking advantage of MLflow models in Azure Machine Learning. There are different ways to start using the model concept with MLflow. Read [How to log MLflow models](how-to-log-mlflow-models.md) for a comprehensive guide.
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
Last updated 08/15/2022 -+ # MLflow and Azure Machine Learning
machine-learning Concept V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-v2.md
Last updated 11/04/2022-+ #Customer intent: As a data scientist, I want to know whether to use v1 or v2 of the CLI and SDK.
machine-learning Dsvm Samples And Walkthroughs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-samples-and-walkthroughs.md
description: Through these samples and walkthroughs, learn how to handle common
keywords: data science tools, data science virtual machine, tools for data science, linux data science + Last updated 05/12/2021-
Sign in with the same password that you use to log in to the Data Science Virtual Machine.
<br/>![SparkML samples](./media/sparkml-samples.png)<br/> ## XGBoost
-<br/>![XGBoost samples](./media/xgboost-samples.png)<br/>
+<br/>![XGBoost samples](./media/xgboost-samples.png)<br/>
machine-learning Dsvm Secure Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-secure-access-keys.md
description: Learn how to securely store access credentials on the Data Science
keywords: deep learning, AI, data science tools, data science virtual machine, geospatial analytics, team data science process -+
machine-learning Dsvm Tools Data Science https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-data-science.md
description: Learn about the machine-learning tools and frameworks that are prei
keywords: data science tools, data science virtual machine, tools for data science, linux data science +
machine-learning Dsvm Tools Deep Learning Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-tools-deep-learning-frameworks.md
description: Available deep learning frameworks and tools on Azure Data Science
keywords: data science tools, data science virtual machine, tools for data science, linux data science -+
machine-learning Tools Included https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/tools-included.md
Last updated 06/23/2022-+ # What tools are included on the Azure Data Science Virtual Machine?
machine-learning Vm Do Ten Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/vm-do-ten-things.md
description: Perform data exploration and modeling tasks on the Windows Data Science Virtual Machine. -+ Last updated 06/23/2022- # Data science with a Windows Data Science Virtual Machine
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Last updated 04/14/2023-+ ms.devlang: azurecli monikerRange: 'azureml-api-2 || azureml-api-1'
This article is part of a series on securing an Azure Machine Learning workflow.
* [Enable studio functionality](how-to-enable-studio-virtual-network.md) * [Use custom DNS](how-to-custom-dns.md)
-For more information on configuring Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md).
+For more information on configuring Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md).
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
-+ Last updated 01/27/2023 show_latex: true
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
-+ Last updated 07/13/2022 #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model architecture, hyperparameters, and training and deployment environments.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
-+ Last updated 03/15/2022 #Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Last updated 05/25/2023--+ # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
ml_client.connections.create_or_update(workspace_connection=wps_connection)
## Next steps - [Import data assets](how-to-import-data-assets.md)-- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
+- [Import data assets on a schedule](reference-yaml-schedule-data-import.md)
machine-learning How To Deploy Automl Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md
Last updated 05/11/2022 -+ ms.devlang: azurecli
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
One way to create a managed online endpoint in the studio is from the **Models** page.
> * Optionally, you can add a description and tags to your endpoint. 1. Keep the default selections: __Managed__ for the compute type and __key-based authentication__ for the authentication type.
-1. Select __Next__, until you get to the "Deployment" page. Here, check the box for __Enable Application Insights diagnostics and data collection__ to allow you view graphs of your endpoint's activities in the studio later.
+1. Select __Next__, until you get to the "Deployment" page. Here, toggle __Application Insights diagnostics__ to Enabled to allow you to view graphs of your endpoint's activities in the studio later and analyze metrics and logs using Application Insights.
1. Select __Next__ to go to the "Environment" page. Here, select the following options: * __Select scoring file and dependencies__: Browse and select the `\azureml-examples\cli\endpoints\online\model-1\onlinescoring\score.py` file from the repo you cloned or downloaded earlier.
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
description: Learn how to export or delete your workspace with the Azure Machine
+
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Last updated 09/23/2022 -+ # Set up authentication between Azure Machine Learning and other services
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
Last updated 07/08/2022 -+ # Logging MLflow models
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
-+ Last updated 11/04/2022
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
Last updated 01/30/2023 -+ # Manage software environments in Azure Machine Learning studio
machine-learning How To Manage Environments V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-v2.md
Last updated 09/27/2022-+ # Manage Azure Machine Learning environments with the CLI & SDK (v2)
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
Last updated 06/08/2022 -+ # Manage model registries in Azure Machine Learning with MLflow
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Last updated 04/15/2022 -+ # Work with models in Azure Machine Learning
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
Azure Machine Learning managed online endpoints have limits described in the following table.
To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#metrics).
-To request an exception from the Azure Machine Learning product team, use the steps in the [Request quota increases](#request-quota-increases).
+To request an exception from the Azure Machine Learning product team, use the steps in the [Endpoint quota increases](#endpoint-quota-increases).
### Azure Machine Learning Kubernetes online endpoints
machine-learning How To Manage Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md
Last updated 05/23/2023 -+ # Manage Azure Machine Learning registries
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
Last updated 09/14/2022 -+ # Create, run, and delete Azure Machine Learning resources using REST
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
Last updated 09/21/2022 -+ # Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)
machine-learning How To Migrate From V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-v1.md
Last updated 09/23/2022 -+ monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
Last updated 08/19/2022 -+ monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning How To R Deploy R Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-r-deploy-r-model.md
ms.devlang: r+ # How to deploy a registered R model to an online (real time) endpoint
machine-learning How To Read Write Data V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-read-write-data-v2.md
Last updated 06/02/2023-+ #Customer intent: As an experienced Python developer, I need to read my data, to make it available to a remote compute resource, to train my machine learning models.
machine-learning How To Responsible Ai Insights Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-sdk-cli.md
Last updated 11/09/2022-+ # Generate Responsible AI insights with YAML and Python
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
-+ Last updated 02/28/2022 #Customer intent: As a data scientist, I want to run Jupyter notebooks in my workspace in Azure Machine Learning studio.
machine-learning How To Schedule Pipeline Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-schedule-pipeline-job.md
Last updated 03/27/2023 -+ # Schedule machine learning pipeline jobs
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-authentication.md
Last updated 09/23/2022 -+ # Set up authentication for Azure Machine Learning resources and workflows
machine-learning How To Share Data Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-data-across-workspaces-with-registries.md
Last updated 03/21/2023 -+ # Share data across workspaces with registries (preview)
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
Last updated 05/23/2022 -+ # Share models, components, and environments across workspaces with registries
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
Last updated 06/08/2022 -+ # Query & compare experiments and runs with MLflow
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
Last updated 06/24/2022 -+ # Monitor and analyze jobs in studio
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
Last updated 11/04/2022 -+ # Train with MLflow Projects in Azure Machine Learning (Preview)
The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/Mach
* [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md). * [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md). * [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).-
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
Last updated 09/29/2022 -+ #Customer intent: As a Python scikit-learn developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my machine learning models at scale.
After you've registered your model, you can deploy it the same way as any other registered model.
In this article, you trained and registered a scikit-learn model, and you learned about deployment options. See these other articles to learn more about Azure Machine Learning. * [Track run metrics during training](how-to-log-view-metrics.md)
-* [Tune hyperparameters](how-to-tune-hyperparameters.md)
+* [Tune hyperparameters](how-to-tune-hyperparameters.md)
machine-learning How To Troubleshoot Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md
Last updated 02/13/2023 -+ # Troubleshoot data access errors
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-tune-hyperparameters.md
Last updated 05/02/2022 -+ # Hyperparameter tuning a model (v2)
machine-learning How To Use Automl Onnx Model Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md
-+ # Make predictions with an AutoML ONNX model in .NET
To learn more about making predictions in ML.NET, see the [use a model to make p
## Next steps - [Deploy your model as an ASP.NET Core Web API](/dotnet/machine-learning/how-to-guides/serve-model-web-api-ml-net)-- [Deploy your model as a serverless .NET Azure Function](/dotnet/machine-learning/how-to-guides/serve-model-serverless-azure-functions-ml-net)
+- [Deploy your model as a serverless .NET Azure Function](/dotnet/machine-learning/how-to-guides/serve-model-serverless-azure-functions-ml-net)
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
Last updated 10/13/2021-+ # Train a small object detection model with AutoML
See the [object detection sample notebook](https://github.com/Azure/azureml-exam
* For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md). * [Tutorial: Train an object detection model with AutoML and Python](tutorial-auto-train-image-models.md). * See [what hyperparameters are available for computer vision tasks](reference-automl-images-hyperparameters.md).
-* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
+* [Make predictions with ONNX on computer vision models from AutoML](how-to-inference-onnx-automl-image-models.md)
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Last updated 07/01/2022 -+ # Track Azure Databricks ML experiments with MLflow and Azure Machine Learning
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
Last updated 07/06/2022 -+ # Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning
If you wish to keep your Azure Synapse Analytics workspace, but no longer need t
## Next steps * [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md). * [Deploy MLflow models in Azure Machine Learning](how-to-deploy-mlflow-models.md).
-* [Manage your models with MLflow](how-to-manage-models-mlflow.md).
+* [Manage your models with MLflow](how-to-manage-models-mlflow.md).
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
Last updated 11/04/2022 -+ ms.devlang: azurecli
machine-learning How To Use Mlflow Configure Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-configure-tracking.md
Last updated 11/04/2022 -+ ms.devlang: azurecli
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
Last updated 04/12/2023-+ # How to use pipeline component to build nested pipeline job (V2) (preview)
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Last updated 09/14/2022 -+ monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning Overview What Is Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-is-azure-machine-learning.md
Last updated 09/22/2022-+ adobe-target: true
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
-+ Last updated 05/22/2023 #Customer intent: As a Full Stack ML Pro, I want to submit a Spark job in Azure Machine Learning.
First, upload the parameterized Python code `titanic.py` to the Azure Blob stora
- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning](./interactive-data-wrangling-with-apache-spark-azure-ml.md) - [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md) - [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)-- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
description: Learn how to format your JSONL files for data consumption in automa
-+
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
-+
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
-+ Last updated 11/28/2022
machine-learning Tutorial Azure Ml In A Day https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-azure-ml-in-a-day.md
Last updated 03/15/2023-+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
machine-learning Tutorial Cloud Workstation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-cloud-workstation.md
description: Learn how to get started prototyping and developing machine learnin
+
machine-learning Tutorial Deploy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md
Last updated 03/15/2023-+ #Customer intent: This tutorial is intended to show users what is needed for deployment and present a high-level overview of how Azure Machine Learning handles deployment. Deployment isn't typically done by a data scientist, so the tutorial won't use Azure CLI examples. We will link to existing articles that use Azure CLI as needed. The code in the tutorial will use SDK v2. The tutorial will continue where the "Create reusable pipelines" tutorial stops.
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
Last updated 03/15/2023-+ #Customer intent: This tutorial is intended to introduce Azure Machine Learning to data scientists who want to scale up or publish their ML projects. By completing a familiar end-to-end project, which starts by loading the data and ends by creating and calling an online inference endpoint, the user should become familiar with the core concepts of Azure Machine Learning and their most common usage. Each step of this tutorial can be modified or performed in other ways that might have security or scalability advantages. We will cover some of those in Part II of this tutorial; however, we suggest the reader use the provided links in each section to learn more on each topic.
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
description: Learn about the latest updates to Azure Machine Learning Python SDK
-+
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
Last updated 10/21/2021-+ #Customer intent: As an experienced Python developer, I need to securely access my data in my Azure storage solutions and use it to accomplish my machine learning tasks.
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-mlflow.md
Last updated 10/21/2021 -+ # MLflow and Azure Machine Learning (v1)
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Last updated 05/11/2022-+ #Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute to train my machine learning models.
Azure Data Factory provides efficient and resilient data transfer with more than
* [Create an Azure machine learning dataset](how-to-create-register-datasets.md) * [Train a model](how-to-set-up-training-targets.md)
-* [Deploy a model](how-to-deploy-and-where.md)
+* [Deploy a model](how-to-deploy-and-where.md)
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md
-+ Last updated 11/18/2021 show_latex: true
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
* Learn more about [How to deploy an AutoML model to an online endpoint](../how-to-deploy-automl-endpoint.md). * Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md). * Learn about [how AutoML builds forecasting models](../concept-automl-forecasting-methods.md). -
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md
Last updated 01/18/2022-+ #Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md
-+ Last updated 03/15/2022 #Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-cicd-data-ingestion.md
-+ Last updated 08/17/2022- # Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.- # DevOps for a data ingestion pipeline
stages:
* [Source Control in Azure Data Factory](../../data-factory/source-control.md) * [Continuous integration and delivery in Azure Data Factory](../../data-factory/continuous-integration-delivery.md)
-* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
+* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
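The dangling `stages:` fragment excerpted above comes from an Azure Pipelines YAML definition. A minimal sketch of what such a multi-stage CI definition can look like is below; the stage, job, and file names are illustrative assumptions, not taken from the article:

```yaml
# Hypothetical Azure Pipelines definition for a data ingestion pipeline's CI.
# Stage/job names and paths (requirements.txt, tests/) are illustrative only.
trigger:
- main

stages:
- stage: CI
  jobs:
  - job: BuildAndTest
    pool:
      vmImage: ubuntu-latest
    steps:
    - script: pip install -r requirements.txt
      displayName: Install dependencies
    - script: python -m pytest tests/
      displayName: Run unit tests
```

In a real setup, later stages would typically deploy the Data Factory artifacts and run integration tests against a test environment.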
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md
Last updated 10/21/2021 -+ monikerRange: 'azureml-api-1'
machine-learning How To Consume Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-consume-web-service.md
Last updated 11/16/2022 ms.devlang: csharp, golang, java, python-+ #Customer intent: As a developer, I need to understand how to create a client application that consumes the web service of a deployed ML model.
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-machine-learning-pipelines.md
Last updated 10/21/2021 -+ # Create and run machine learning pipelines with Azure Machine Learning SDK
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-prep-synapse-spark-pool.md
Last updated 11/28/2022-+ #Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
See the example notebooks for more concepts and demonstrations of the Azure Syna
## Next steps * [Train a model](how-to-set-up-training-targets.md).
-* [Train with Azure Machine Learning dataset](how-to-train-with-datasets.md).
+* [Train with Azure Machine Learning dataset](how-to-train-with-datasets.md).
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-parallel-run-step.md
-+
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md
Last updated 11/16/2022 -+ adobe-target: true
machine-learning How To Deploy Fpga Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-fpga-web-service.md
Last updated 10/21/2021 -+ # Deploy ML models to field-programmable gate arrays (FPGAs) with Azure Machine Learning
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-inferencing-gpus.md
Last updated 11/16/2022 -+ # Deploy a deep learning model for inference with GPU
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-mlflow-models.md
Last updated 11/04/2022 -+ # Deploy MLflow models as Azure web services
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-pipelines.md
Last updated 10/21/2021 -+ # Publish and track machine learning pipelines
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-view-metrics.md
-+ Last updated 10/26/2022
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
Last updated 03/08/2022 -+ # Manage Azure Machine Learning workspaces with the Python SDK (v1)
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-migrate-from-estimators-to-scriptrunconfig.md
Last updated 09/14/2022 -+ # Migrating from Estimators to ScriptRunConfig
machine-learning How To Prebuilt Docker Images Inference Python Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prebuilt-docker-images-inference-python-extensibility.md
Last updated 08/15/2022 -+ # Python package extensibility for prebuilt Docker images (preview)
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-set-up-training-targets.md
Last updated 10/21/2021 -+ # Configure and submit training jobs
machine-learning How To Setup Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-setup-authentication.md
Last updated 07/18/2022 -+ # Set up authentication for Azure Machine Learning resources and workflows using SDK v1
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-designer-experiments.md
Last updated 10/21/2021 -+ # Enable logging in Azure Machine Learning designer pipelines
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-scikit-learn.md
Last updated 11/04/2022 -+ #Customer intent: As a Python scikit-learn developer, I need to combine open-source with a cloud platform to train, evaluate, and deploy my machine learning models at scale.
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
Last updated 10/21/2021 -+ #Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
If you don't include the leading forward slash, '/', you'll need to prefix the
* [Train image classification models](https://aka.ms/filedataset-samplenotebook) with FileDatasets.
-* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
+* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
machine-learning How To Troubleshoot Deployment Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment-local.md
Last updated 08/15/2022 -+ #Customer intent: As a data scientist, I want to try a local deployment so that I can troubleshoot my model deployment problems.
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-deployment.md
-+ #Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
For more information, visit the [interactive debugging in VS Code guide](../how-
Learn more about deployment: * [How to deploy and where](how-to-deploy-and-where.md)
-* [How to run and debug experiments locally](../how-to-debug-visual-studio-code.md)
+* [How to run and debug experiments locally](../how-to-debug-visual-studio-code.md)
machine-learning How To Troubleshoot Prebuilt Docker Image Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-troubleshoot-prebuilt-docker-image-inference.md
Last updated 08/15/2022 -+ # Troubleshooting prebuilt docker images for inference
GPU base images can't be used for local deployment, unless the local deployment
## Next steps * [Add Python packages to prebuilt images](how-to-prebuilt-docker-images-inference-python-extensibility.md).
-* [Use a prebuilt package as a base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md).
+* [Use a prebuilt package as a base for a new Dockerfile](how-to-extend-prebuilt-docker-image-inference.md).
machine-learning How To Tune Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-tune-hyperparameters.md
Last updated 05/02/2022 -+ # Hyperparameter tuning a model with Azure Machine Learning (v1)
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
Last updated 10/21/2021 -+ # Track ML models with MLflow and Azure Machine Learning
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-private-python-packages.md
Last updated 10/21/2021-+ # Use private Python packages with Azure Machine Learning
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-version-track-datasets.md
Last updated 08/17/2022 -+ #Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments.
machine-learning Reference Pipeline Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-pipeline-yaml.md
Last updated 07/31/2020-+ # CLI (v1) pipeline job YAML schema
machine-learning Tutorial Train Deploy Notebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-train-deploy-notebook.md
Last updated 09/14/2022-+ #Customer intent: As a professional data scientist, I can build an image classification model with Azure Machine Learning by using Python in a Jupyter Notebook.
Use these steps to delete your Azure Machine Learning workspace and all compute
+ Learn how to [authenticate to the deployed model](../how-to-authenticate-online-endpoint.md). + [Make predictions on large quantities of data](../tutorial-pipeline-batch-scoring-classification.md) asynchronously. + Monitor your Azure Machine Learning models with [Application Insights](how-to-enable-app-insights.md).-
managed-grafana How To Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-set-up-private-access.md
Last updated 02/16/2023-+ # Set up private access (preview)
managed-grafana How To Smtp Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-smtp-settings.md
To activate SMTP settings, enable email notifications and configure an email con
+> [!TIP]
+> Here are some tips for properly configuring SMTP:
+>- When using a business email account such as Office 365, you may need to contact your email administrator to enable SMTP AUTH (for example, [enable-smtp-auth-for-specific-mailboxes](/exchange/clients-and-mobile-in-exchange-online/authenticated-client-smtp-submission#enable-smtp-auth-for-specific-mailboxes)). You should be able to create an app password afterwards and use it as the SMTP *password* setting.
+>- When using a personal email account such as Outlook or Gmail, you should create an app password and use it as the SMTP *password* setting. Note that your account won't work for email notification if it's configured with multi-factor authentication.
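These portal settings correspond to Grafana's standard `[smtp]` configuration section. A hedged sketch of the equivalent `grafana.ini` fragment follows, with a hypothetical host and addresses; the password is the app password created per the tips above, never the account password:

```ini
[smtp]
enabled = true
host = smtp.office365.com:587
user = alerts@contoso.com
password = <app-password created for the mailbox>
from_address = alerts@contoso.com
from_name = Grafana
startTLS_policy = MandatoryStartTLS
```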
+ ## Configure Grafana contact points and send a test email Configuring Grafana contact points is done in the Grafana portal:
mariadb Howto Read Replicas Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-read-replicas-powershell.md
Last updated 06/24/2022--- devx-track-azurepowershell-- kr2b-contr-experiment+ # How to create and manage read replicas in Azure Database for MariaDB using PowerShell
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
ms. + Last updated 5/2/2022 # Java web app containerization and migration to Azure App Service
To troubleshoot any issues with the tool, you can look at the log files on the W
- Containerizing Java web apps on Apache Tomcat (on Linux servers) and deploying them on Linux containers on AKS. [Learn more](./tutorial-app-containerization-java-kubernetes.md) - Containerizing ASP.NET web apps and deploying them on Windows containers on AKS. [Learn more](./tutorial-app-containerization-aspnet-kubernetes.md)-- Containerizing ASP.NET web apps and deploying them on Windows containers on Azure App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)
+- Containerizing ASP.NET web apps and deploying them on Windows containers on Azure App Service. [Learn more](./tutorial-app-containerization-aspnet-app-service.md)
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
description: Tutorial:Containerize & migrate Java web applications to Azure Kube
ms.-+ Last updated 01/04/2023
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-csharp.md
ms.devlang: csharp-+ Last updated 05/03/2023
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
-+ ms.devlang: java Last updated 05/03/2023
mysql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-python.md
ms.devlang: python-+ Last updated 9/21/2020
mysql Tutorial Add Redis To Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-add-redis-to-mysql.md
Title: 'Quickstart: Boost performance for Azure Database for MySQL - Flexible Se
description: "This tutorial shows how to add Azure Cache for Redis to boost performance for your Azure Database for MySQL - Flexible Server." +
mysql 09 Data Migration With Mysql Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/mysql-on-premises-azure-db/09-data-migration-with-mysql-workbench.md
-+ Last updated 06/21/2021
You've successfully completed an on-premises to Azure Database for MySQL migrati
## Next steps > [!div class="nextstepaction"]
-> [Post Migration Management](./10-post-migration-management.md)
+> [Post Migration Management](./10-post-migration-management.md)
mysql Connect Cpp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-cpp.md
ms.devlang: cpp -+ Last updated 06/20/2022
mysql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-csharp.md
ms.devlang: csharp -+ Last updated 06/20/2022
mysql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-go.md
-+ ms.devlang: golang Last updated 05/03/2023
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-java.md
ms.devlang: java -+ Last updated 05/03/2023
mysql How To Configure Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-ssl.md
ms.devlang: csharp, golang, java, javascript, php, python, ruby-+ Last updated 06/20/2022
Azure Database for MySQL supports connecting your Azure Database for MySQL serve
## Step 1: Obtain SSL certificate
-Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example).
+Download the certificate needed to communicate over SSL with your Azure Database for MySQL server from [https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem) and save the certificate file to your local drive (this tutorial uses c:\ssl for example).
**For Microsoft Internet Explorer and Microsoft Edge:** After the download has completed, rename the certificate to DigiCertGlobalRootG2.crt.pem. See the following links for certificates for servers in sovereign clouds: [Azure Government](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt.pem), [Azure China](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem), and [Azure Germany](https://www.d-trust.net/cgi-bin/D-TRUST_Root_Class_3_CA_2_2009.crt).
Configure MySQL Workbench to connect securely over SSL.
1. Update the **Use SSL** field to "Require".
-1. In the **SSL CA File:** field, enter the file location of the **BaltimoreCyberTrustRoot.crt.pem**.
+1. In the **SSL CA File:** field, enter the file location of the **DigiCertGlobalRootG2.crt.pem**.
:::image type="content" source="./media/how-to-configure-ssl/mysql-workbench-ssl.png" alt-text="Save SSL configuration":::
For existing connections, you can bind SSL by right-clicking on the connection i
Another way to bind the SSL certificate is to use the MySQL command-line interface by executing the following commands. ```bash
-mysql.exe -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\BaltimoreCyberTrustRoot.crt.pem
+mysql.exe -h mydemoserver.mysql.database.azure.com -u Username@mydemoserver -p --ssl-mode=REQUIRED --ssl-ca=c:\ssl\DigiCertGlobalRootG2.crt.pem
``` > [!NOTE]
Refer to the list of [compatible drivers](concepts-compatibility.md) supported b
```php $conn = mysqli_init();
-mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/BaltimoreCyberTrustRoot.crt.pem", NULL, NULL);
+mysqli_ssl_set($conn,NULL,NULL, "/var/www/html/DigiCertGlobalRootG2.crt.pem", NULL, NULL);
mysqli_real_connect($conn, 'mydemoserver.mysql.database.azure.com', 'myadmin@mydemoserver', 'yourpassword', 'quickstartdb', 3306, MYSQLI_CLIENT_SSL); if (mysqli_connect_errno()) { die('Failed to connect to MySQL: '.mysqli_connect_error());
die('Failed to connect to MySQL: '.mysqli_connect_error());
```phppdo $options = array(
- PDO::MYSQL_ATTR_SSL_CA => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
+ PDO::MYSQL_ATTR_SSL_CA => '/var/www/html/DigiCertGlobalRootG2.crt.pem'
); $db = new PDO('mysql:host=mydemoserver.mysql.database.azure.com;port=3306;dbname=databasename', 'username@mydemoserver', 'yourpassword', $options); ```
try:
password='yourpassword', database='quickstartdb', host='mydemoserver.mysql.database.azure.com',
- ssl_ca='/var/www/html/BaltimoreCyberTrustRoot.crt.pem')
+ ssl_ca='/var/www/html/DigiCertGlobalRootG2.crt.pem')
except mysql.connector.Error as err: print(err) ```
conn = pymysql.connect(user='myadmin@mydemoserver',
password='yourpassword', database='quickstartdb', host='mydemoserver.mysql.database.azure.com',
- ssl={'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'})
+ ssl={'ca': '/var/www/html/DigiCertGlobalRootG2.crt.pem'})
``` ### Django (PyMySQL)
DATABASES = {
'HOST': 'mydemoserver.mysql.database.azure.com', 'PORT': '3306', 'OPTIONS': {
- 'ssl': {'ca': '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'}
+ 'ssl': {'ca': '/var/www/html/DigiCertGlobalRootG2.crt.pem'}
} } }
client = Mysql2::Client.new(
:username => 'myadmin@mydemoserver', :password => 'yourpassword', :database => 'quickstartdb',
- :sslca => '/var/www/html/BaltimoreCyberTrustRoot.crt.pem'
+ :sslca => '/var/www/html/DigiCertGlobalRootG2.crt.pem'
) ```
client = Mysql2::Client.new(
```go rootCertPool := x509.NewCertPool()
-pem, _ := ioutil.ReadFile("/var/www/html/BaltimoreCyberTrustRoot.crt.pem")
+pem, _ := ioutil.ReadFile("/var/www/html/DigiCertGlobalRootG2.crt.pem")
if ok := rootCertPool.AppendCertsFromPEM(pem); !ok { log.Fatal("Failed to append PEM.") }
var builder = new MySqlConnectionStringBuilder
Password = "yourpassword", Database = "quickstartdb", SslMode = MySqlSslMode.VerifyCA,
- SslCa = "BaltimoreCyberTrustRoot.crt.pem",
+ SslCa = "DigiCertGlobalRootG2.crt.pem",
}; using (var connection = new MySqlConnection(builder.ConnectionString)) {
using (var connection = new MySqlConnection(builder.ConnectionString))
```node var fs = require('fs'); var mysql = require('mysql');
-const serverCa = [fs.readFileSync("/var/www/html/BaltimoreCyberTrustRoot.crt.pem", "utf8")];
+const serverCa = [fs.readFileSync("/var/www/html/DigiCertGlobalRootG2.crt.pem", "utf8")];
var conn=mysql.createConnection({ host:"mydemoserver.mysql.database.azure.com", user:"myadmin@mydemoserver",
mysql How To Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-connection-string.md
-+ Last updated 06/20/2022- # How to connect applications to Azure Database for MySQL
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023 # Azure Policy built-in definitions for Azure Database for MySQL
mysql Sample Scripts Java Connection Pooling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/sample-scripts-java-connection-pooling.md
Title: Java samples to illustrate connection pooling
description: This article lists Java samples to illustrate connection pooling. -+
public class MySQLConnectionPool {
}
-```
+```
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
az network watcher flow-log create --name 'myFlowLog' --nsg 'myNSG' --resource-g
``` > [!NOTE]
-> - The storage account can't have network rules that restrict network access to only Microsoft services or specific virtual networks.
> - If the storage account is in a different subscription, the network security group and storage account must be associated with the same Azure Active Directory tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md). > - If the storage account is in a different resource group or subscription, you must specify the full ID of the storage account instead of only its name. For example, if the **myStorageAccount** storage account is in a resource group named **StorageRG** while the network security group is in the resource group **myResourceGroup**, you must use `/subscriptions/{SubscriptionID}/resourceGroups/StorageRG/providers/Microsoft.Storage/storageAccounts/myStorageAccount` for the `--storage-account` parameter instead of `myStorageAccount`.
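The full-resource-ID requirement in the note above can be sketched as a small shell fragment that assembles the storage account ID before passing it to the CLI (the subscription ID and names are placeholders, not values from a real deployment; the `az` command itself requires an authenticated Azure CLI session, so it is shown commented out):

```bash
# Placeholder values for illustration only.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
STORAGE_RG="StorageRG"
STORAGE_ACCOUNT="myStorageAccount"

# Full resource ID, required when the storage account is not in the same
# resource group (or subscription) as the network security group.
STORAGE_ID="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${STORAGE_RG}/providers/Microsoft.Storage/storageAccounts/${STORAGE_ACCOUNT}"
echo "${STORAGE_ID}"

# The create command would then reference the full ID:
# az network watcher flow-log create --name 'myFlowLog' --nsg 'myNSG' \
#   --resource-group 'myResourceGroup' --storage-account "${STORAGE_ID}"
```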
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Register-AzResourceProvider -ProviderNamespace 'Microsoft.Insights'
``` > [!NOTE]
- > - The storage account can't have network rules that restrict network access to only Microsoft services or specific virtual networks.
> - If the storage account is in a different subscription, the network security group and storage account must be associated with the same Azure Active Directory tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md). 1. Create the flow log using [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog). The flow log is created in the Network Watcher default resource group **NetworkWatcherRG**.
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nodejs-use-node-modules-azure-apps.md
ms.assetid: c0e6cd3d-932d-433e-b72d-e513e23b4eb6 ms.prod: azure-nodejs ms.devlang: javascript+ Last updated 08/17/2016
Now that you understand how to use Node.js modules with Azure, learn how to [spe
For more information, see the [Node.js Developer Center](/azure/developer/javascript/). [specify the Node.js version]: ./app-service/overview.md
-[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
+[How to use the Azure Command-Line Interface for Mac and Linux]:cli-install-nodejs.md
notification-hubs Notification Hubs Aspnet Backend Android Secure Google Gcm Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-backend-android-secure-google-gcm-push-notification.md
android ms.devlang: java+ Last updated 08/07/2020
notification-hubs Notification Hubs Aspnet Backend Gcm Android Push To User Google Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-backend-gcm-android-push-to-user-google-notification.md
mobile-android ms.devlang: java -+ Last updated 01/04/2019
notification-hubs Notification Hubs Aspnet Backend Ios Apple Apns Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-backend-ios-apple-apns-notification.md
ios ms.devlang: objective-c+ Last updated 08/07/2020
notification-hubs Notification Hubs Aspnet Backend Windows Dotnet Wns Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-backend-windows-dotnet-wns-notification.md
mobile-windows ms.devlang: csharp -+ Last updated 08/17/2020
notification-hubs Notification Hubs Aspnet Backend Windows Dotnet Wns Secure Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-aspnet-backend-windows-dotnet-wns-secure-push-notification.md
Last updated 09/14/2020
ms.lastreviewed: 01/04/2019-+ # Send secure push notifications from Azure Notification Hubs
notification-hubs Notification Hubs Baidu China Android Notifications Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-baidu-china-android-notifications-get-started.md
Last updated 03/18/2020
ms.lastreviewed: 06/19/2019-+ # Get started with Notification Hubs using Baidu
notification-hubs Notification Hubs Deploy And Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-deploy-and-manage-powershell.md
ms.assetid: 7c58f2c8-0399-42bc-9e1e-a7f073426451
powershell+ Last updated 01/04/2019
notification-hubs Notification Hubs Ios Xplat Localized Apns Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-ios-xplat-localized-apns-push-notification.md
ios ms.devlang: objective-c+ Last updated 01/04/2019
In this tutorial, you sent localized notifications to iOS devices. To learn how
[Windows Developer Preview registration steps for Mobile Services]: ../mobile-services-windows-developer-preview-registration.md [wns object]: /previous-versions/azure/reference/jj860484(v=azure.100) [Notification Hubs Guidance]: /previous-versions/azure/azure-services/jj927170(v=azure.100)
-[Notification Hubs How-To for iOS]: /previous-versions/azure/reference/dn223264(v=azure.100)
+[Notification Hubs How-To for iOS]: /previous-versions/azure/reference/dn223264(v=azure.100)
notification-hubs Notification Hubs Java Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-java-push-notification-tutorial.md
Last updated 08/23/2021
ms.lastreviewed: 01/04/2019-+ # How to use Notification Hubs from Java
notification-hubs Push Notifications Android Specific Users Firebase Cloud Messaging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/push-notifications-android-specific-users-firebase-cloud-messaging.md
mobile-android ms.devlang: java -+ Last updated 09/11/2019
open-datasets Dataset Boston Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-boston-safety.md
Title: Boston Safety Data description: Learn how to use the Boston Safety Data dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset Chicago Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-chicago-safety.md
Title: Chicago Safety Data description: Learn how to use the Chicago Safety Data dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset New York City Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-new-york-city-safety.md
Title: New York City Safety Data description: Learn how to use the New York City Safety dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset Oj Sales Simulated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-oj-sales-simulated.md
Title: OJ Sales Simulated description: Learn how to use the OJ Sales Simulated dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset Public Holidays https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-public-holidays.md
Title: Public Holidays description: Learn how to use the Public Holidays dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset San Francisco Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-san-francisco-safety.md
Title: San Francisco Safety Data description: Learn how to use the San Francisco Safety dataset in Azure Open Datasets. + Last updated 04/16/2021
open-datasets Dataset Seattle Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-seattle-safety.md
Title: Seattle Safety Data description: Learn how to use the Seattle Safety dataset in Azure Open Datasets. + Last updated 04/16/2021
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
Last updated 12/20/2022
keywords: java, jakartaee, microprofile, EAP, JBoss EAP, ARO, OpenShift, JBoss Enterprise Application Platform-+ # Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift (ARO) 4 cluster
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
Last updated 10/30/2020 keywords: java, jakartaee, javaee, microprofile, open-liberty, websphere-liberty, aro, openshift, red hat-+ # Deploy a Java application with Open Liberty/WebSphere Liberty on an ARO cluster
You can learn more from references used in this guide:
* [Open Liberty Server Configuration](https://openliberty.io/docs/ref/config/) * [Liberty Maven Plugin](https://github.com/OpenLiberty/ci.maven#liberty-maven-plugin) * [Open Liberty Container Images](https://github.com/OpenLiberty/ci.docker)
-* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
+* [WebSphere Liberty Container Images](https://github.com/WASdev/ci.docker)
operator-nexus Howto Hybrid Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-hybrid-aks.md
Last updated 02/02/2023-+ # How to manage and lifecycle the AKS-Hybrid cluster
operator-nexus Howto Track Async Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-track-async-operations-cli.md
Last updated 02/09/2023-+ # Tracking asynchronous operations using Azure CLI
operator-nexus Quickstarts Tenant Workload Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-prerequisites.md
Last updated 01/25/2023-+ # Prerequisites for deploying tenant workloads
operator-nexus Troubleshoot Internet Host Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/troubleshoot-internet-host-virtual-machine.md
+
+ Title: Troubleshoot accessing a CSN connected internet hostname within the AKS hybrid cluster for Azure Operator Nexus
+description: Troubleshoot accessing a CSN connected internet hostname within the AKS hybrid cluster for Azure Operator Nexus
+++ Last updated : 06/06/2023++++
+# Accessing a CSN connected internet hostname within the AKS hybrid cluster
+
+This article outlines troubleshooting steps for scenarios where the end user has trouble reaching an internet hostname that is part of the CSN attached to the AKS hybrid cluster.
+
+## Prerequisites
+
+* Subscription ID
+* Cluster name and resource group
+* AKS-Hybrid cluster name and resource group
+* Familiarity with the procedures in [Connect to AKS Hybrid](/azure/AkS/Hybrid/create-aks-hybrid-preview-cli#connect-to-the-aks-hybrid-cluster)
+
+Once these prerequisites are in place, you can resolve the failing `curl -vk` request by applying the workaround below.
+
+## Common scenarios
+
+The user logged in to the jump server and connected over SSH to an AKS hybrid VM using the IP address of a worker node or control plane VM. From the AKS hybrid VM, the user couldn't reach any of the egress endpoints specified when creating an AKS hybrid cluster that uses the CSN.
+
+The user encounters an error when trying to access the fully qualified domain name (FQDN) of an internet hostname:
+
+~~~bash
+curl -vk http://www.ubuntu.com
+~~~
+
+~~~output
+*   Trying 192.xxx.xxx.xxx:xx...
+* TCP_NODELAY set
+*   Trying 2607:f8b0:xxxx:c17::xx:xx...
+* TCP_NODELAY set
+* Immediate connect fail for 2607:f8b0:xxxx:c17::xx: Network is unreachable
+*   Trying 2607:f8b0:xxxx:c17::xx:xx...
+* TCP_NODELAY set
+* Immediate connect fail for 2607:f8b0:xxxx:c17::xx: Network is unreachable
+*   Trying 2607:f8b0:xxxx:c17::xx:xx...
+* TCP_NODELAY set
+* Immediate connect fail for 2607:f8b0:xxxx:c17::xx: Network is unreachable
+*   Trying 2607:f8b0:xxxx:c17::93:xx...
+
+## Suggested solutions
+
+~~~bash
+curl -x "http_proxy=http://169.xxx.x.xx.xxxx" -vk "https://ubuntu.com"
+
+https_proxy=http://169.xxx.x.xx.xxxx tdnf -y install openssh-clients
+~~~
+
+### First attempt: the above commands generated an error
+
+~~~output
+* Could not resolve proxy: http_proxy=http
+* Closing connection 0
+
+curl: (5) Could not resolve proxy: http_proxy=http
+~~~
+
+Running `https_proxy=http://169.xxx.x.xx.xxxx curl -vk "https://ubuntu.com"` shows a connection, but the user gets a 403 Access denied error.
+
+### Second attempt with corrected commands
+
+Option 1: Set the proxy inline on `curl` (the setting goes away after the command completes):
+
+~~~bash
+curl -x "http://169.xxx.x.xx.xxxx" -vk "https://ubuntu.com"
+~~~
+
+Option 2: Export the proxy variables (the settings remain effective while they remain in the shell):
+
+~~~bash
+export https_proxy="http://169.xxx.x.xx.xxxx"
+export HTTPS_PROXY="http://169.xxx.x.xx.xxxx"
+curl -vk https://ubuntu.com
+~~~
+
+If a customer runs their rpm install from a shell script, they must explicitly set the `http_proxy`/`https_proxy` variables inside the script. They can also set the proxy inline on rpm with the `--httpproxy` and `--httpport` options.
+
+For more information, see [How to install RPM packages behind a proxy](https://www.xmodulo.com/how-to-install-rpm-packages-behind-proxy.html).
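The guidance above can be sketched as a small install script. This is a hypothetical example; the proxy address, port, and package name are placeholders, not values from this environment:

~~~bash
#!/bin/sh
# Hypothetical install script: the proxy address/port and the package
# name are placeholders for your environment's actual values.
export http_proxy="http://169.254.0.1:3128"
export https_proxy="http://169.254.0.1:3128"

# Commands run after the exports inherit the proxy, for example:
#   tdnf -y install openssh-clients
# Because the variables are set inside the script, they don't persist
# in the caller's shell or affect kubectl elsewhere.
echo "proxy set to ${https_proxy}"
~~~

Running the script as a child process keeps the proxy settings scoped to the script itself, which avoids the system-wide side effects mentioned in the note below.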
+
+### How to install RPM packages behind a proxy
+
+RPM has proxy flags that must be set:
+
+~~~bash
+sudo rpm --import https://aglet.packages.cloudpassage.com/cloudpassage.packages.key \
+  --httpproxy 169.xxx.x.xx --httpport 3128
+~~~
+
+>[!Note]
+>Keep in mind that if you set the proxy variables system-wide, you may lose the ability to run kubectl locally. Set them inline within the script first to minimize the effects.
postgresql Concepts Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-data-encryption.md
Last updated 1/24/2023 -+
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
When you're running Azure Database for PostgreSQL - Flexible Server, you have tw
## Access management Best way to manage PostgreSQL database access permissions at scale is using the concept of [roles](https://www.postgresql.org/docs/current/user-manag.html). A role can be either a database user or a group of database users. Roles can own the database objects and assign privileges on those objects to other roles to control who has access to which objects. It is also possible to grant membership in a role to another role, thus allowing the member role to use privileges assigned to another role.
-PostgreSQL lets you grant permissions directly to the database users. As a good security practice, it can be recommended that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.
+PostgreSQL lets you grant permissions directly to the database users. **As a good security practice, we recommend that you create roles with specific sets of permissions based on minimum application and access requirements. You can then assign the appropriate roles to each user. Roles are used to enforce a *least privilege model* for accessing database objects.**
The Azure Database for PostgreSQL server is created with the 3 default roles defined. You can see these roles by running the command: ```sql
oid | 24827
- ``` [Audit logging](concepts-audit.md) is also available with Flexible Server to track activity in your databases.
oid | 24827
> Azure Database for PostgreSQL - Flexible Server currently doesn't support [Microsoft Defender for Cloud protection](../../security-center/azure-defender.md).
+### Controlling schema access
+
+Newly created databases in PostgreSQL will have a default set of privileges in the database's public schema that allow all database users and roles to create objects. To better limit application user access to the databases that you create on your Flexible Server, we recommend that you consider revoking these default public privileges. After doing so, you can then grant specific privileges for database users on a more granular basis. For example:
+
+* To prevent application database users from creating objects in the public schema, revoke the CREATE privilege on the *public* schema:
+ ```sql
+ REVOKE CREATE ON SCHEMA public FROM PUBLIC;
+ ```
+* Next, create a new database:
+```sql
+CREATE DATABASE Test_db;
+```
+* Revoke all privileges on this new database from the PUBLIC role:
+```sql
+REVOKE ALL ON DATABASE Test_db FROM PUBLIC;
+```
+* Create a custom role for application database users:
+```sql
+CREATE ROLE Test_db_user;
+```
+* Give database users with this role the ability to connect to the database:
+```sql
+GRANT CONNECT ON DATABASE Test_db TO Test_db_user;
+GRANT ALL PRIVILEGES ON DATABASE Test_db TO Test_db_user;
+```
+* Create a database user:
+```sql
+CREATE ROLE user1 LOGIN PASSWORD 'Password_to_change';
+```
+* Assign the role, with its privileges, to the user:
+```sql
+GRANT Test_db_user TO user1;
+```
+In this example, user *user1* can connect and has all privileges in our test database *Test_db*, but not in any other database on the server. We further recommend that, instead of granting this user or role *ALL PRIVILEGES* on the database and its objects, you grant more selective permissions, such as *SELECT*, *INSERT*, or *EXECUTE*. For more information about privileges in PostgreSQL databases, see the [GRANT](https://www.postgresql.org/docs/current/sql-grant.html) and [REVOKE](https://www.postgresql.org/docs/current/sql-revoke.html) commands in the PostgreSQL docs.
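As a sketch, you can verify the resulting grants with PostgreSQL's built-in privilege-inquiry functions. The names assume the example above; note that PostgreSQL folds unquoted identifiers to lowercase, so `Test_db` is stored as `test_db`, and that `REVOKE CREATE ON SCHEMA public` applies only to the database it was run in, so repeat it in each new database as needed:

```sql
-- Check whether user1 can connect to the test database and whether it
-- can still create objects in the public schema of the current database.
SELECT has_database_privilege('user1', 'test_db', 'CONNECT');
SELECT has_schema_privilege('user1', 'public', 'CREATE');
```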
+ ## Row level security [Row level security (RLS)](https://www.postgresql.org/docs/current/ddl-rowsecurity.html) is a PostgreSQL security feature that allows database administrators to define policies to control how specific rows of data display and operate for one or more roles. Row level security is an additional filter you can apply to a PostgreSQL database table. When a user tries to perform an action on a table, this filter is applied before the query criteria or other filtering, and the data is narrowed or rejected according to your security policy. You can create row level security policies for specific commands like *SELECT*, *INSERT*, *UPDATE*, and *DELETE*, or specify them for ALL commands. Use cases for row level security include PCI-compliant implementations, classified environments, and shared hosting / multi-tenant applications.
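A minimal row level security sketch (the table, column, and policy names here are hypothetical, not from this article):

```sql
-- Hypothetical multi-tenant table: each database role should only
-- see rows whose tenant_name matches its own role name.
CREATE TABLE tenant_data (tenant_name text NOT NULL, payload text);

ALTER TABLE tenant_data ENABLE ROW LEVEL SECURITY;

-- This filter is applied before any query criteria for SELECT statements.
CREATE POLICY tenant_isolation ON tenant_data
    FOR SELECT
    USING (tenant_name = current_user);
```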
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
-+ ms.devlang: csharp Last updated 11/30/2021
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
-+ ms.devlang: java Last updated 11/07/2022
az group delete \
## Next steps > [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](../howto-migrate-using-export-and-import.md)
+> [Migrate your database using Export and Import](../howto-migrate-using-export-and-import.md)
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-python.md
ms.devlang: python-+ Last updated 11/30/2021
postgresql How To Connect With Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-with-managed-identity.md
Last updated 01/24/2023
-
- - devx-track-csharp
- - devx-track-azurecli
+ # Connect with Managed Identity to Azure Database for PostgreSQL Flexible Server
postgresql Release Notes Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes-api.md
+
+ Title: Azure Database for PostgreSQL - Flexible Server API Release notes
+description: API Release notes of Azure Database for PostgreSQL - Flexible Server.
++++++ Last updated : 06/06/2023++
+# API Release notes - Azure Database for PostgreSQL - Flexible Server
++
+This page provides the latest news and updates about the recommended API versions. API versions not listed here might still be supported but will be retired soon.
+
+## API Releases
+
+| API Version | Stable/Preview | Comments |
+| | | |
+| 2023-03-01-preview | Preview | New GA version features (2022-12-01) +<br>Geo + CMK<br>Storage auto growth<br>IOPS scaling<br>New location capability api<br>Azure Defender<br>Server Logs<br>Migrations<br> |
+| 2022-12-01 | Stable (GA) | Earlier GA features +<br>AAD<br>CMK<br>Backups<br>Administrators<br>Replicas<br>GeoRestore<br>MVU<br> |
+| 2022-05-01-preview | Preview | CheckMigrationNameAvailability<br>Migrations<br> |
+| 2021-06-01 | Stable (GA) | Earlier GA features +<br>Server CRUD<br>CheckNameAvailability<br>Configurations (Server parameters)<br>Database<br>Firewall rules<br>Private<br>DNS zone suffix<br>PITR<br>Server Restart<br>Server Start<br>Server Stop<br>Maintenance window<br>Virtual network subnet usage<br> |
++
+## Contacts
+
+For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
+
+In addition, consider the following points of contact as appropriate:
+
+- To contact Azure Support, [file a ticket from the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
+- To fix an issue with your account, file a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) in the Azure portal.
+- To provide feedback or to request new features, create an entry via [UserVoice](https://feedback.azure.com/forums/597976-azure-database-for-postgresql).
+
+## Next steps
+
+Now that you've read our API Release Notes on Azure Database for PostgreSQL flexible server, you're ready to create your first server: [Create an Azure Database for PostgreSQL - Flexible Server using Azure portal](./quickstart-create-server-portal.md)
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: May 2023 * Public preview of [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL - Flexible Server.
-* Postgres 15 is now available in public preview for Azure Database for PostgreSQL ΓÇô Flexible Server in limited regions.
+* Postgres 15 is now available in public preview for Azure Database for PostgreSQL - Flexible Server in limited regions (West Europe, East US, West US 2, Southeast Asia, UK South, North Europe, Japan East).
* General availability: [Pgvector extension](how-to-use-pgvector.md) for Azure Database for PostgreSQL - Flexible Server. * General availability: [Azure Key Vault Managed HSM](./concepts-data-encryption.md#using-azure-key-vault-managed-hsm) with Azure Database for PostgreSQL - Flexible Server. * General availability: [32 TB Storage](./concepts-compute-storage.md) with Azure Database for PostgreSQL - Flexible Server.
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-csharp.md
-+ ms.devlang: csharp Last updated 06/24/2022
postgresql Connect Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-go.md
-+ ms.devlang: golang Last updated 06/24/2022
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/connect-java.md
ms.devlang: java-+ Last updated 09/27/2022
az group delete \
## Next steps > [!div class="nextstepaction"]
-> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
+> [Migrate your database using Export and Import](./how-to-migrate-using-export-and-import.md)
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 02/21/2023 Last updated : 06/01/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Deploy Private Mobile Network With Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-powershell.md
-+ Last updated 03/15/2023
If you do not want to keep your deployment, [delete the resource group](../azure
If you have kept your deployment, you can either begin designing policy control to determine how your private mobile network handles traffic, or you can add more sites to your private mobile network. - [Learn more about designing the policy control configuration for your private mobile network](policy-control.md).-- [Collect the required information for a site](collect-required-information-for-a-site.md).
+- [Collect the required information for a site](collect-required-information-for-a-site.md).
private-link Disable Private Link Service Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md
Last updated 02/02/2023 -+ ms.devlang: azurecli
public-multi-access-edge-compute-mec Tutorial Create Vm Using Go Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-go-sdk.md
Last updated 11/22/2022-+ # Tutorial: Deploy resources in Azure public MEC using the Go SDK
public-multi-access-edge-compute-mec Tutorial Create Vm Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-python-sdk.md
Last updated 11/22/2022-+ # Tutorial: Deploy a virtual machine in Azure public MEC using the Python SDK
purview Create Microsoft Purview Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-dotnet.md
ms.devlang: csharp Last updated 06/17/2022-+ # Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using .NET SDK
purview Create Microsoft Purview Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-microsoft-purview-python.md
ms.devlang: python Last updated 06/17/2022-+ # Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Python
Follow these next articles to learn how to navigate the Microsoft Purview govern
* [How to use the Microsoft Purview governance portal](use-azure-purview-studio.md) * [Grant users permissions to the governance portal](catalog-permissions.md) * [Create a collection](quickstart-create-collection.md)-
purview How To Receive Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-receive-share.md
-+ Last updated 02/16/2023 # Receive Azure Storage in-place share with Microsoft Purview Data Sharing (preview)
purview How To Share Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-share-data.md
-+ Last updated 02/16/2023 # Share Azure Storage data in-place with Microsoft Purview Data Sharing (preview)
purview Manage Integration Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-integration-runtimes.md
+ Last updated 05/08/2023
purview Manage Kafka Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-kafka-dotnet.md
ms.devlang: csharp Last updated 12/13/2022-+ # Use Event Hubs and .NET to send and receive Atlas Kafka topics messages
purview Quickstart Data Share Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-data-share-dotnet.md
ms.devlang: csharp Last updated 02/16/2023-+ # Quickstart: Share and receive data with the Microsoft Purview Data Sharing .NET SDK
purview Quickstart Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-data-share.md
-+ Last updated 02/16/2023 # Quickstart: Share and receive Azure Storage data in-place with Microsoft Purview Data Sharing (preview)
purview Tutorial Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-using-python-sdk.md
description: This tutorial describes how to use the Microsoft Purview Python SDK
+ Last updated 05/27/2022- # Customer intent: I can use the scanning and catalog Python SDKs to perform CRUD operations on data sources and scans, trigger scans and also to search the catalog.
except HttpResponseError as e:
> [!div class="nextstepaction"] > [Learn more about the Python Microsoft Purview Scanning Client](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-purview-scanning/1.0.0b2/https://docsupdatetracker.net/index.html) > [Learn more about the Python Microsoft Purview Catalog Client](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-purview-catalog/1.0.0b2/https://docsupdatetracker.net/index.html)-
remote-rendering Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/how-tos/tokens.md
Last updated 02/11/2020 -+ # Get service access tokens
remote-rendering Convert Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/quickstarts/convert-model.md
Last updated 01/23/2020 -+ # Quickstart: Convert a model for rendering
remote-rendering Powershell Example Scripts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/samples/powershell-example-scripts.md
Last updated 02/12/2020 -+ # Example PowerShell scripts
Use `-Poll` to wait until conversion is done or an error occurred.
- [Quickstart: Render a model with Unity](../quickstarts/render-model.md) - [Quickstart: Convert a model for rendering](../quickstarts/convert-model.md)-- [Model conversion](../how-tos/conversion/model-conversion.md)
+- [Model conversion](../how-tos/conversion/model-conversion.md)
resource-mover Remove Move Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/remove-move-resources.md
Last updated 02/22/2020 --+ #Customer intent: As an Azure admin, I want remove resources I've added to a move collection.- # Manage move collections and resource groups
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
sap Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md
Last updated 12/1/2022
-+ # Use SAP on Azure Deployment Automation Framework from Azure DevOps Services
sap Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md
Last updated 03/17/2023
+ # Configure SAP Installation parameters
configuration_settings = {
> [!div class="nextstepaction"] > [Deploy SAP System](deploy-system.md)-
sap Manual Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/manual-deployment.md
Last updated 11/17/2021
+ # Get started with manual deployment
sap Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/run-ansible.md
Last updated 11/17/2021
+ # Get started Ansible configuration
The following tasks are executed on the Central services instance virtual machin
- Download the software ---
sap Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/software.md
Last updated 11/17/2021
+ # Download SAP software
sap Sap Hana High Availability Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md
vm-linux+ Last updated 04/06/2023 - # High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux
You can test a manual failover by stopping the cluster on the hn1-db-0 node:
* [Azure Virtual Machines planning and implementation for SAP][planning-guide] * [Azure Virtual Machines deployment for SAP][deployment-guide] * [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]
-* [SAP HANA VM storage configurations](./hana-vm-operations-storage.md)
+* [SAP HANA VM storage configurations](./hana-vm-operations-storage.md)
search Cognitive Search Tutorial Blob Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-tutorial-blob-dotnet.md
Last updated 12/10/2021-+ # Tutorial: Use .NET and AI to generate searchable content from Azure blobs
You can find and manage resources in the portal, using the All resources or Reso
Now that you're familiar with all of the objects in an AI enrichment pipeline, let's take a closer look at skillset definitions and individual skills. > [!div class="nextstepaction"]
-> [How to create a skillset](cognitive-search-defining-skillset.md)
+> [How to create a skillset](cognitive-search-defining-skillset.md)
search Index Add Suggesters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-add-suggesters.md
Last updated 12/02/2022-+ # Create a suggester to enable autocomplete and suggested results in a query
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 02/21/2023 Last updated : 06/01/2023
search Samples Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-dotnet.md
+ Last updated 01/04/2023
search Samples Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-java.md
+ Last updated 01/04/2023
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-javascript.md
+ Last updated 01/04/2023
search Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-python.md
+ Last updated 01/04/2023
search Search Add Autocomplete Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-add-autocomplete-suggestions.md
Last updated 09/12/2022-+ # Add autocomplete and suggestions to client apps using Azure Cognitive Search
The Autocomplete function takes the search term input. The method creates an [Au
The following tutorial demonstrates a search-as-you-type experience. > [!div class="nextstepaction"]
-> [Add search to a web site (JavaScript)](tutorial-javascript-search-query-integration.md#azure-function-suggestions-from-the-catalog)
+> [Add search to a web site (JavaScript)](tutorial-javascript-search-query-integration.md#azure-function-suggestions-from-the-catalog)
search Search Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-versions.md
+ Last updated 03/22/2023
search Search Dotnet Mgmt Sdk Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-mgmt-sdk-migration.md
ms.devlang: csharp-+ Last updated 10/03/2022
search Search Dotnet Sdk Migration Version 11 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-dotnet-sdk-migration-version-11.md
ms.devlang: csharp Last updated 05/31/2022-+ # Upgrade to Azure Cognitive Search .NET SDK version 11
search Search Get Started Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-dotnet.md
ms.devlang: csharp Last updated 01/27/2023-+ # Quickstart: Create a search index using the Azure.Search.Documents client library
search Search Get Started Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-java.md
ms.devlang: java
Last updated 12/23/2022-+ # Quickstart: Create an Azure Cognitive Search index in Java
search Search Howto Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-dotnet-sdk.md
ms.devlang: csharp
Last updated 10/04/2022-+ # How to use Azure.Search.Documents in a C# .NET Application
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
description: Set up a search indexer to index data stored in Azure Cosmos DB for
-+ Last updated 01/18/2023
You can now control how you [run the indexer](search-howto-run-reset-indexers.md
+ [Set up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) + [Index large data sets](search-howto-large-index.md)
-+ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
++ [Indexer access to content protected by Azure network security features](search-indexer-securing-resources.md)
search Search Howto Monitor Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-monitor-indexers.md
+ Last updated 09/15/2022
For more information about status codes and indexer monitoring information, see
* [GetIndexerStatus (REST API)](/rest/api/searchservice/get-indexer-status) * [IndexerStatus](/dotnet/api/azure.search.documents.indexes.models.indexerstatus) * [IndexerExecutionStatus](/dotnet/api/azure.search.documents.indexes.models.indexerexecutionstatus)
-* [IndexerExecutionResult](/dotnet/api/azure.search.documents.indexes.models.indexerexecutionresult)
+* [IndexerExecutionResult](/dotnet/api/azure.search.documents.indexes.models.indexerexecutionresult)
search Search Indexer Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-tutorial.md
Last updated 10/04/2022--+ # Tutorial: Index Azure SQL data using the .NET SDK
You can find and manage resources in the portal, using the All resources or Reso
Now that you're familiar with the basics of SQL Database indexing, let's take a closer look at indexer configuration. > [!div class="nextstepaction"]
-> [Configure a SQL Database indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
+> [Configure a SQL Database indexer](search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md)
search Search Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-traffic-analytics.md
Last updated 1/29/2021-+ # Collect telemetry data for search traffic analytics
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Last updated 12/04/2022-+ ms.devlang: csharp
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-csharp-deploy-static-web-app.md)
search Tutorial Csharp Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-deploy-static-web-app.md
Last updated 11/01/2022-+ ms.devlang: csharp
search Tutorial Csharp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md
Last updated 11/01/2022-+ ms.devlang: csharp
search Tutorial Csharp Search Query Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-search-query-integration.md
Last updated 11/01/2022-+ ms.devlang: csharp
search Tutorial Javascript Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-create-load-index.md
Last updated 12/04/2022-+ ms.devlang: javascript
The script uses the Azure SDK for Cognitive Search:
## Next steps
-[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
+[Deploy your Static Web App](tutorial-javascript-deploy-static-web-app.md)
search Tutorial Multiple Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-multiple-data-sources.md
Last updated 08/29/2022-+ # Tutorial: Index from multiple data sources using the .NET SDK
security Threat Modeling Tool Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-authentication.md
na
Last updated 02/07/2017 -+ # Security Frame: Authentication | Mitigations
security Threat Modeling Tool Sensitive Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-sensitive-data.md
na
Last updated 02/07/2017 -+ # Security Frame: Sensitive Data | Mitigations
security Threat Modeling Tool Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-session-management.md
na
Last updated 02/07/2017 -+ # Security Frame: Session Management
config.SuppressDefaultHostAuthentication();
config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType)); ```
-The SuppressDefaultHostAuthentication method tells Web API to ignore any authentication that happens before the request reaches the Web API pipeline, either by IIS or by OWIN middleware. That way, we can restrict Web API to authenticate only using bearer tokens.
+The SuppressDefaultHostAuthentication method tells Web API to ignore any authentication that happens before the request reaches the Web API pipeline, either by IIS or by OWIN middleware. That way, we can restrict Web API to authenticate only using bearer tokens.
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
description: Certificate Authority details for Azure services that utilize x509
+ Last updated 05/30/2023
To learn more about Certificate Authorities and PKI, see:
- [Microsoft PKI Repository, including CRL and policy information](https://www.microsoft.com/pki/mscorp/cps/default.htm) - [Azure Firewall Premium certificates](../../firewall/premium-certificates.md) - [PKI certificates and Configuration Manager](/mem/configmgr/core/plan-design/security/plan-for-certificates)-- [Securing PKI](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn786443(v=ws.11))
+- [Securing PKI](/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn786443(v=ws.11))
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
To avoid this scenario, use one of these methods:
```kusto source |
- where ProcessName !contains “\“CEF\””
+ where ProcessName !contains \"CEF\"
``` ### Configure a log forwarder