Updates from: 06/13/2023 01:10:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Registration Campaign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-registration-campaign.md
Previously updated : 01/05/2023 Last updated : 06/10/2023
You can nudge users to set up Microsoft Authenticator during sign-in. Users will
In addition to choosing who can be nudged, you can define how many days a user can postpone, or "snooze", the nudge. If a user taps **Not now** to snooze the app setup, they'll be nudged again on the next MFA attempt after the snooze duration has elapsed.
+>[!NOTE]
+>As users go through their regular sign-in, Conditional Access policies that govern security info registration apply before the user is prompted to set up Authenticator. For example, if a Conditional Access policy requires security info updates can only occur on an internal network, then users won't be prompted to set up Authenticator unless they are on the internal network.
+ ## Prerequisites - Your organization must have enabled Azure AD Multi-Factor Authentication. Every edition of Azure AD includes Azure AD Multi-Factor Authentication. No additional license is needed for a registration campaign.
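The snooze duration described above can also be set programmatically through the Microsoft Graph `authenticationMethodsPolicy` resource. The following is a hedged sketch of building the PATCH body (the field names follow the Graph registration campaign settings; the helper name and target values are illustrative assumptions):

```python
import json

def registration_campaign_body(snooze_days: int = 1, state: str = "enabled") -> dict:
    """Build a PATCH body for /policies/authenticationMethodsPolicy (sketch)."""
    if not 0 <= snooze_days <= 14:
        raise ValueError("snoozeDurationInDays must be between 0 and 14")
    return {
        "registrationEnforcement": {
            "authenticationMethodsRegistrationCampaign": {
                "state": state,
                "snoozeDurationInDays": snooze_days,
                # 'all_users' nudges everyone; scope to a group ID to pilot first
                "includeTargets": [
                    {
                        "id": "all_users",
                        "targetType": "group",
                        "targetedAuthenticationMethod": "microsoftAuthenticator",
                    }
                ],
            }
        }
    }

body = registration_campaign_body(snooze_days=3)
print(json.dumps(body, indent=2))
```

A PATCH with this body would let users postpone the nudge for three days before being prompted again.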
active-directory Active Directory V2 Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-v2-protocols.md
Knowing about OAuth or OpenID Connect (OIDC) at the protocol level isn't require
Four parties are generally involved in an OAuth 2.0 and OpenID Connect authentication and authorization exchange. These exchanges are often called *authentication flows* or *auth flows*.
-![Diagram showing the OAuth 2.0 roles](./media/active-directory-v2-flows/protocols-roles.svg)
+![Diagram showing the OAuth 2.0 roles](./media/v2-flows/protocols-roles.svg)
* **Authorization server** - The identity platform is the authorization server. Also called an *identity provider* or *IdP*, it securely handles the end-user's information, their access, and the trust relationships between the parties in the auth flow. The authorization server issues the security tokens your apps and APIs use for granting, denying, or revoking access to resources (authorization) after the user has signed in (authenticated).
active-directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-protocols.md
+
+ Title: Microsoft identity platform authentication protocols
+description: An overview of the authentication protocols supported by the Microsoft identity platform
+ Last updated : 09/27/2021
+# Microsoft identity platform authentication protocols
+
+The Microsoft identity platform supports several of the most widely used authentication and authorization protocols. The topics in this section describe the supported protocols and their implementation in the Microsoft identity platform. The topics include supported claim types, an introduction to federation metadata, detailed OAuth 2.0 and SAML 2.0 protocol reference documentation, and troubleshooting guidance.
+
+## Authentication protocols articles and reference
+
+* [Important Information About Signing Key Rollover in Microsoft identity platform](active-directory-signing-key-rollover.md) - Learn about the Microsoft identity platform's signing key rollover cadence, changes you can make to update the key automatically, and how to update the most common application scenarios.
+* [Supported Token and Claim Types](id-tokens.md) - Learn about the claims in the tokens that the Microsoft identity platform issues.
+* [OAuth 2.0 in Microsoft identity platform](v2-oauth2-auth-code-flow.md) - Learn about the implementation of OAuth 2.0 in Microsoft identity platform.
+* [OpenID Connect 1.0](v2-protocols-oidc.md) - Learn how to use OAuth 2.0, an authorization protocol, for authentication.
+* [Service to Service Calls with Client Credentials](v2-oauth2-client-creds-grant-flow.md) - Learn how to use OAuth 2.0 client credentials grant flow for service to service calls.
+* [Service to Service Calls with On-Behalf-Of Flow](v2-oauth2-on-behalf-of-flow.md) - Learn how to use OAuth 2.0 On-Behalf-Of flow for service to service calls.
+* [SAML Protocol Reference](active-directory-saml-protocol-reference.md) - Learn about the Single Sign-On and Single Sign-out SAML profiles of Microsoft identity platform.
+
+## See also
+
+* [Microsoft identity platform overview](v2-overview.md)
+* [Active Directory Code Samples](sample-v2-code.md)
active-directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/certificate-credentials.md
+
+ Title: Microsoft identity platform certificate credentials
+description: This article discusses the registration and use of certificate credentials for application authentication.
+ Last updated : 02/27/2023
+# Microsoft identity platform application authentication certificate credentials
+
+The Microsoft identity platform allows an application to use its own credentials for authentication anywhere a client secret could be used, for example, in the OAuth 2.0 [client credentials grant](v2-oauth2-client-creds-grant-flow.md) flow and the [on-behalf-of](v2-oauth2-on-behalf-of-flow.md) (OBO) flow.
+
+One form of credential that an application can use for authentication is a [JSON Web Token](./security-tokens.md#json-web-tokens-and-claims) (JWT) assertion signed with a certificate that the application owns. This is described in the [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication) specification for the `private_key_jwt` client authentication option.
+
+If you're interested in using a JWT issued by another identity provider as a credential for your application, please see [workload identity federation](workload-identity-federation.md) for how to set up a federation policy.
+
+## Assertion format
+
+To compute the assertion, you can use one of the many JWT libraries in the language of your choice - [MSAL supports this using `.WithCertificate()`](msal-net-client-assertions.md). The information is carried by the token in its **Header**, **Claims**, and **Signature**.
+
+### Header
+
+| Parameter | Remark |
+| | |
+| `alg` | Should be **RS256** |
+| `typ` | Should be **JWT** |
+| `x5t` | Base64url-encoded SHA-1 thumbprint of the X.509 certificate's DER encoding. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ` (Base64url). |
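The `x5t` conversion in the table can be reproduced in a few lines. This sketch assumes you already have the certificate's SHA-1 thumbprint as a hex string, as displayed in the portal:

```python
import base64

def x5t_from_hex_thumbprint(hex_thumbprint: str) -> str:
    """Base64url-encode a certificate's SHA-1 thumbprint (hex) for the JWT x5t header."""
    raw = bytes.fromhex(hex_thumbprint)
    # JWT uses base64url with the trailing '=' padding stripped
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

print(x5t_from_hex_thumbprint("84E05C1D98BCE3A5421D225B140B36E86A3D5534"))
# → hOBcHZi846VCHSJbFAs26Go9VTQ
```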
+
+### Claims (payload)
+
+Claim type | Value | Description
+- | - | -
+`aud` | `https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token` | The "aud" (audience) claim identifies the recipients that the JWT is intended for (here, Azure AD). See [RFC 7519, Section 4.1.3](https://tools.ietf.org/html/rfc7519#section-4.1.3). In this case, that recipient is the login server (login.microsoftonline.com).
+`exp` | 1601519414 | The "exp" (expiration time) claim identifies the expiration time on or after which the JWT **must not** be accepted for processing. See [RFC 7519, Section 4.1.4](https://tools.ietf.org/html/rfc7519#section-4.1.4). This allows the assertion to be used until then, so keep it short - 5-10 minutes after `nbf` at most. Azure AD does not place restrictions on the `exp` time currently.
+`iss` | {ClientID} | The "iss" (issuer) claim identifies the principal that issued the JWT, in this case your client application. Use the GUID application ID.
+`jti` | (a Guid) | The "jti" (JWT ID) claim provides a unique identifier for the JWT. The identifier value **must** be assigned in a manner that ensures that there is a negligible probability that the same value will be accidentally assigned to a different data object; if the application uses multiple issuers, collisions MUST be prevented among values produced by different issuers as well. The "jti" value is a case-sensitive string. [RFC 7519, Section 4.1.7](https://tools.ietf.org/html/rfc7519#section-4.1.7)
+`nbf` | 1601519114 | The "nbf" (not before) claim identifies the time before which the JWT MUST NOT be accepted for processing. [RFC 7519, Section 4.1.5](https://tools.ietf.org/html/rfc7519#section-4.1.5). Using the current time is appropriate.
+`sub` | {ClientID} | The "sub" (subject) claim identifies the subject of the JWT, in this case also your application. Use the same value as `iss`.
+`iat` | 1601519114 | The "iat" (issued at) claim identifies the time at which the JWT was issued. This claim can be used to determine the age of the JWT. [RFC 7519, Section 4.1.5](https://tools.ietf.org/html/rfc7519#section-4.1.5).
+
+### Signature
+
+The signature is computed by applying the certificate as described in the [JSON Web Token RFC7519 specification](https://tools.ietf.org/html/rfc7519).
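To make the header and claims tables concrete, here's a minimal sketch that assembles the signing input (`header.claims`) of the assertion. It's illustrative, not production code: producing the RS256 signature requires the certificate's private key and a crypto library, which is exactly what MSAL's `.WithCertificate()` handles for you. The function name and parameters are hypothetical:

```python
import base64, json, time, uuid

def b64url(data: bytes) -> str:
    """Base64url-encode with padding stripped, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_signing_input(tenant_id: str, client_id: str, x5t: str, lifetime: int = 600) -> str:
    """Assemble header.claims per the tables above; the signature step is omitted."""
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT", "x5t": x5t}
    claims = {
        "aud": f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        "iss": client_id,           # the application (client) ID GUID
        "sub": client_id,           # same value as iss
        "jti": str(uuid.uuid4()),   # unique per assertion
        "nbf": now,                 # current time is appropriate
        "iat": now,
        "exp": now + lifetime,      # keep it short: 5-10 minutes after nbf
    }
    return ".".join(b64url(json.dumps(part, separators=(",", ":")).encode())
                    for part in (header, claims))
```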
+
+## Example of a decoded JWT assertion
+
+```JSON
+{
+ "alg": "RS256",
+ "typ": "JWT",
+ "x5t": "gx8tGysyjcRqKjFPnd7RFwvwZI0"
+}
+.
+{
+ "aud": "https: //login.microsoftonline.com/contoso.onmicrosoft.com/oauth2/v2.0/token",
+ "exp": 1484593341,
+ "iss": "97e0a5b7-d745-40b6-94fe-5f77d35c6e05",
+ "jti": "22b3bb26-e046-42df-9c96-65dbd72c1c81",
+ "nbf": 1484592741,
+ "sub": "97e0a5b7-d745-40b6-94fe-5f77d35c6e05"
+}
+.
+"Gh95kHCOEGq5E_ArMBbDXhwKR577scxYaoJ1P{a lot of characters here}KKJDEg"
+```
+
+## Example of an encoded JWT assertion
+
+The following string is an example of an encoded assertion. If you look carefully, you'll notice three sections separated by dots (`.`):
+
+* The first section encodes the *header*
+* The second section encodes the *claims* (payload)
+* The last section is the *signature* computed with the certificates from the content of the first two sections
+
+```
+"eyJhbGciOiJSUzI1NiIsIng1dCI6Imd4OHRHeXN5amNScUtqRlBuZDdSRnd2d1pJMCJ9.eyJhdWQiOiJodHRwczpcL1wvbG9naW4ubWljcm9zb2Z0b25saW5lLmNvbVwvam1wcmlldXJob3RtYWlsLm9ubWljcm9zb2Z0LmNvbVwvb2F1dGgyXC90b2tlbiIsImV4cCI6MTQ4NDU5MzM0MSwiaXNzIjoiOTdlMGE1YjctZDc0NS00MGI2LTk0ZmUtNWY3N2QzNWM2ZTA1IiwianRpIjoiMjJiM2JiMjYtZTA0Ni00MmRmLTljOTYtNjVkYmQ3MmMxYzgxIiwibmJmIjoxNDg0NTkyNzQxLCJzdWIiOiI5N2UwYTViNy1kNzQ1LTQwYjYtOTRmZS01Zjc3ZDM1YzZlMDUifQ.
+Gh95kHCOEGq5E_ArMBbDXhwKR577scxYaoJ1P{a lot of characters here}KKJDEg"
+```
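A small sketch of inspecting those sections (the helper name is an assumption): each of the first two segments is Base64url-encoded JSON with the padding stripped, so the padding must be restored before decoding.

```python
import base64, json

def decode_segment(segment: str) -> dict:
    """Base64url-decode one dot-separated JWT segment, restoring stripped padding."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Decode the header segment of the encoded assertion above. The signature
# segment is raw bytes, not JSON, so it can't be decoded this way.
header_segment = "eyJhbGciOiJSUzI1NiIsIng1dCI6Imd4OHRHeXN5amNScUtqRlBuZDdSRnd2d1pJMCJ9"
print(decode_segment(header_segment))
# → {'alg': 'RS256', 'x5t': 'gx8tGysyjcRqKjFPnd7RFwvwZI0'}
```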
+
+## Register your certificate with Microsoft identity platform
+
+You can associate the certificate credential with the client application in the Microsoft identity platform through the Azure portal using either of the following methods:
+
+### Uploading the certificate file
+
+In the **App registrations** tab for the client application:
+1. Select **Certificates & secrets** > **Certificates**.
+2. Click on **Upload certificate** and select the certificate file to upload.
+3. Click **Add**.
+ Once the certificate is uploaded, the thumbprint, start date, and expiration values are displayed.
+
+### Updating the application manifest
+
+After acquiring a certificate, compute these values:
+
+- `$base64Thumbprint` - Base64-encoded value of the certificate hash
+- `$base64Value` - Base64-encoded value of the certificate raw data
+
+Provide a GUID to identify the key in the application manifest (`$keyId`).
+
+In the Azure app registration for the client application:
+1. Select **Manifest** to open the application manifest.
+2. Replace the *keyCredentials* property with your new certificate information using the following schema.
+
+ ```JSON
+ "keyCredentials": [
+ {
+ "customKeyIdentifier": "$base64Thumbprint",
+ "keyId": "$keyid",
+ "type": "AsymmetricX509Cert",
+ "usage": "Verify",
+ "value": "$base64Value"
+ }
+ ]
+ ```
+3. Save the edits to the application manifest and then upload the manifest to Microsoft identity platform.
+
+ The `keyCredentials` property is multi-valued, so you may upload multiple certificates for richer key management.
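A hedged sketch of computing those values from a certificate's DER bytes (the helper is hypothetical): `customKeyIdentifier` is the Base64-encoded SHA-1 hash of the certificate, and `value` is the Base64-encoded raw certificate data.

```python
import base64, hashlib, uuid

def manifest_key_credential(cert_der: bytes) -> dict:
    """Build one keyCredentials entry from DER-encoded certificate bytes (sketch)."""
    return {
        "customKeyIdentifier": base64.b64encode(hashlib.sha1(cert_der).digest()).decode(),  # $base64Thumbprint
        "keyId": str(uuid.uuid4()),                                                         # $keyId
        "type": "AsymmetricX509Cert",
        "usage": "Verify",
        "value": base64.b64encode(cert_der).decode(),                                       # $base64Value
    }

# Assumed usage with a DER-encoded certificate exported from your key store:
# with open("cert.cer", "rb") as f:
#     entry = manifest_key_credential(f.read())
```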
+
+## Using a client assertion
+
+Client assertions can be used anywhere a client secret would be used. For example, in the [authorization code flow](v2-oauth2-auth-code-flow.md), you can pass in a `client_secret` to prove that the request is coming from your app. You can replace this with `client_assertion` and `client_assertion_type` parameters.
+
+| Parameter | Value | Description|
+|---|---|---|
+|`client_assertion_type`|`urn:ietf:params:oauth:client-assertion-type:jwt-bearer`| This is a fixed value, indicating that you are using a certificate credential. |
+|`client_assertion`| `JWT` |This is the JWT created above. |
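As a sketch of how the table above translates into a token request, the following builds the form-encoded body for the v2.0 token endpoint in a client credentials flow, with `client_assertion` in place of `client_secret` (the helper name and scope value are assumptions; the parameter names come from the table):

```python
from urllib.parse import urlencode

def client_credentials_request(tenant_id: str, client_id: str, assertion: str, scope: str):
    """Return (token_endpoint, form_body) for a certificate-credential token request."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "scope": scope,
        "grant_type": "client_credentials",
        # fixed value indicating a certificate credential is used
        "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
        "client_assertion": assertion,  # the signed JWT created above
    })
    return url, body
```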
+
+## Next steps
+
+The [MSAL.NET library handles this scenario](msal-net-client-assertions.md) in a single line of code.
+
+The [.NET Core daemon console application using Microsoft identity platform](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2) code sample on GitHub shows how an application uses its own credentials for authentication. It also shows how you can [create a self-signed certificate](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/1-Call-MSGraph#optional-use-the-automation-script) using the `New-SelfSignedCertificate` PowerShell cmdlet. You can also use the [app creation scripts](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/blob/master/1-Call-MSGraph/AppCreationScripts-withCert/AppCreationScripts.md) in the sample repo to create certificates, compute the thumbprint, and so on.
active-directory Configure App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-app-multi-instancing.md
+
+ Title: Configure app multi-instancing
+description: Learn about multi-instancing, which is needed for configuring multiple instances of the same application within a tenant.
+ Last updated : 06/09/2023
+# Configure app multi-instancing
+
+App multi-instancing refers to the configuration of multiple instances of the same application within a tenant. For example, an organization might have multiple accounts, each of which needs a separate service principal to handle instance-specific claims mapping and role assignment. Or a customer might have multiple instances of an application that doesn't need special claims mapping but does need separate service principals with separate signing keys.
+
+## Sign-in approaches
+
+A user can sign in to an application in one of the following ways:
+
+- Through the application directly, which is known as service provider (SP) initiated single sign-on (SSO).
+- Through the identity provider (IDP) directly, which is known as IDP initiated SSO.
+
+Depending on which approach is used within your organization, follow the appropriate instructions described in this article.
+
+## SP initiated SSO
+
+In the SAML request of SP initiated SSO, the `issuer` specified is usually the app ID URI. Using the app ID URI doesn't allow the customer to distinguish which instance of an application is being targeted with SP initiated SSO.
+
+### Configure SP initiated SSO
+
+Update the SAML single sign-on service URL configured within the service provider for each instance to include the service principal GUID as part of the URL. For example, the general SSO sign-in URL for SAML is `https://login.microsoftonline.com/<tenantid>/saml2`; the URL can be updated to target a specific service principal, such as `https://login.microsoftonline.com/<tenantid>/saml2/<issuer>`.
+
+Only service principal identifiers in GUID format are accepted for the issuer value. The service principal identifiers override the issuer in the SAML request and response, and the rest of the flow is completed as usual. There's one exception: if the application requires the request to be signed, the request is rejected even if the signature was valid. The rejection is done to avoid any security risks with functionally overriding values in a signed request.
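A minimal sketch of building the instance-specific URL, assuming the issuer must be the service principal identifier in GUID format as described above (`sp_sso_url` is a hypothetical helper):

```python
import uuid

def sp_sso_url(tenant_id: str, service_principal_id: str) -> str:
    """SAML SSO endpoint targeting one service principal instance.
    Only a GUID-format issuer is accepted, so validate before building the URL."""
    uuid.UUID(service_principal_id)  # raises ValueError if not a GUID
    return f"https://login.microsoftonline.com/{tenant_id}/saml2/{service_principal_id}"
```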
+
+## IDP initiated SSO
+
+The IDP initiated SSO feature exposes the following settings for each application:
+
+- An **audience override** option exposed for configuration by using claims mapping or the portal. The intended use case is applications that require the same audience for multiple instances. This setting is ignored if no custom signing key is configured for the application.
+
+- An **issuer with application id** flag to indicate the issuer should be unique for each application instead of unique for each tenant. This setting is ignored if no custom signing key is configured for the application.
+
+### Configure IDP initiated SSO
+
+1. Open any SSO enabled enterprise app and navigate to the SAML single sign-on blade.
+1. Select **Edit** on the **User Attributes & Claims** panel.
+1. Select **Edit** to open the advanced options blade.
+1. Configure both options according to your preferences and then select **Save**.
+
+## Next steps
+
+- To learn more about how to configure this policy, see [Customize app SAML token claims](active-directory-saml-claims-customization.md).
active-directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/enterprise-app-role-management.md
Title: Configure the role claim for enterprise applications
+ Title: Configure the role claim
description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory.
Previously updated : 02/10/2023 Last updated : 06/09/2023
-# Configure the role claim issued in the SAML token
+# Configure the role claim
-In Azure Active Directory (Azure AD), you can customize the role claim in the access token that is received after an application is authorized. Use this feature if your application expects custom roles in the token returned by Azure AD. You can create as many roles as you need.
+You can customize the role claim in the access token that is received after an application is authorized. Use this feature if your application expects custom roles in the token. You can create as many roles as you need.
## Prerequisites
In Azure Active Directory (Azure AD), you can customize the role claim in the ac
- A user account that is assigned to the role. For more information, see [Quickstart: Create and assign a user account](../manage-apps/add-application-portal-assign-users.md). > [!NOTE]
-> This article explains how to create, update, or delete application roles on the service principal using APIs in Azure AD. To use the new user interface for App Roles, see [Add app roles to your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md).
+> This article explains how to create, update, or delete application roles on the service principal using APIs. To use the new user interface for App Roles, see [Add app roles to your application and receive them in the token](howto-add-app-roles-in-azure-ad-apps.md).
## Locate the enterprise application
Use the following steps to locate the enterprise application:
1. Enter the name of the existing application in the search box, and then select the application from the search results. 1. After the application is selected, copy the object ID from the overview pane.
- :::image type="content" source="media/active-directory-enterprise-app-role-management/record-objectid.png" alt-text="Screenshot that shows how to locate and record the object identifier for the application.":::
+ :::image type="content" source="media/enterprise-app-role-management/record-objectid.png" alt-text="Screenshot that shows how to locate and record the object identifier for the application.":::
## Add roles
Use the Microsoft Graph Explorer to add roles to an enterprise application.
} ```
- You can only add new roles after msiam_access for the patch operation. Also, you can add as many roles as your organization needs. Azure AD sends the value of these roles as the claim value in the SAML response. To generate the GUID values for the ID of new roles use the web tools, such as the [Online GUID / UUID Generator](https://www.guidgenerator.com/). The appRoles property should now represent what was in the request body of the query.
+ You can only add new roles after `msiam_access` for the patch operation. You can add as many roles as your organization needs. The value of these roles is sent as the claim value in the SAML response. To generate the GUID values for the IDs of new roles, use a web tool such as the [Online GUID / UUID Generator](https://www.guidgenerator.com/). The `appRoles` property should now represent what was in the request body of the query.
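As a hedged sketch of one role object appended in that PATCH request (the helper is hypothetical; the property names follow the Graph `appRole` resource):

```python
import uuid

def new_app_role(display_name: str, value: str, description: str) -> dict:
    """One appRoles entry to append after msiam_access in the service principal PATCH body."""
    return {
        "allowedMemberTypes": ["User"],
        "description": description,
        "displayName": display_name,
        "id": str(uuid.uuid4()),  # fresh GUID; a web generator works too
        "isEnabled": True,
        "origin": "ServicePrincipal",
        "value": value,           # sent as the claim value in the SAML response
    }
```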
## Edit attributes
Update the attributes to define the role claim that is included in the token.
1. From the **Source attribute** list, select **user.assignedroles**. 1. Select **Save**. The new **Role Name** attribute should now appear in the **Attributes & Claims** section. The claim should now be included in the access token when signing into the application.
- :::image type="content" source="media/active-directory-enterprise-app-role-management/attributes-summary.png" alt-text="Screenshot that shows a display of the list of attributes and claims defined for the application.":::
+ :::image type="content" source="media/enterprise-app-role-management/attributes-summary.png" alt-text="Screenshot that shows a display of the list of attributes and claims defined for the application.":::
## Assign roles
After the service principal is patched with more roles, you can assign users to
1. Select **None Selected**, select the role from the list, and then select **Select**. 1. Select **Assign** to assign the role to the user.
- :::image type="content" source="media/active-directory-enterprise-app-role-management/assign-role.png" alt-text="Screenshot that shows how to assign a role to a user of an application.":::
+ :::image type="content" source="media/enterprise-app-role-management/assign-role.png" alt-text="Screenshot that shows how to assign a role to a user of an application.":::
## Update roles
active-directory Howto Add App Roles In Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-add-app-roles-in-apps.md
+
+ Title: Add app roles and get them from a token
+description: Learn how to add app roles to an application registered in Azure Active Directory. Assign users and groups to these roles, and receive them in the 'roles' claim in the token.
+ Last updated : 09/27/2022
+# Add app roles to your application and receive them in the token
+
+Role-based access control (RBAC) is a popular mechanism to enforce authorization in applications. RBAC allows administrators to grant permissions to roles rather than to specific users or groups. The administrator can then assign roles to different users and groups to control who has access to what content and functionality.
+
+By using RBAC with application role and role claims, developers can securely enforce authorization in their apps with less effort.
+
+Another approach is to use Azure Active Directory (Azure AD) groups and group claims as shown in the [active-directory-aspnetcore-webapp-openidconnect-v2](https://aka.ms/groupssample) code sample on GitHub. Azure AD groups and application roles aren't mutually exclusive; they can be used together to provide even finer-grained access control.
+
+## Declare roles for an application
+
+You define app roles by using the [Azure portal](https://portal.azure.com) during the [app registration process](quickstart-register-app.md). App roles are defined on an application registration representing a service, app or API. When a user signs in to the application, Azure AD emits a `roles` claim for each role that the user or service principal has been granted. This can be used to implement claim-based authorization. App roles can be assigned [to a user or a group of users](../manage-apps/add-application-portal-assign-users.md). App roles can also be assigned to the service principal for another application, or [to the service principal for a managed identity](../managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md).
+
+Currently, if you add a service principal to a group, and then assign an app role to that group, Azure AD doesn't add the `roles` claim to tokens it issues.
+
+App roles are declared by using the **App roles** UI in the Azure portal.
+
+The number of roles you add counts toward application manifest limits enforced by Azure AD. For information about these limits, see the [Manifest limits](./reference-app-manifest.md#manifest-limits) section of [Azure Active Directory app manifest reference](reference-app-manifest.md).
+
+### App roles UI
+
+To create an app role by using the Azure portal's user interface:
+
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant that contains the app registration to which you want to add an app role.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**, and then select the application you want to define app roles in.
+1. Select **App roles**, and then select **Create app role**.
+
+ :::image type="content" source="media/howto-add-app-roles-in-apps/app-roles-overview-pane.png" alt-text="An app registration's app roles pane in the Azure portal":::
+
+1. In the **Create app role** pane, enter the settings for the role. The table following the image describes each setting and their parameters.
+
+ :::image type="content" source="media/howto-add-app-roles-in-apps/app-roles-create-context-pane.png" alt-text="An app registration's app roles create context pane in the Azure portal":::
+
+ | Field | Description | Example |
+ | - | -- | -- |
+ | **Display name** | Display name for the app role that appears in the admin consent and app assignment experiences. This value may contain spaces. | `Survey Writer` |
+ | **Allowed member types** | Specifies whether this app role can be assigned to users, applications, or both.<br/><br/>When available to `applications`, app roles appear as application permissions in an app registration's **Manage** section > **API permissions > Add a permission > My APIs > Choose an API > Application permissions**. | `Users/Groups` |
+ | **Value** | Specifies the value of the roles claim that the application should expect in the token. The value should exactly match the string referenced in the application's code. The value can't contain spaces. | `Survey.Create` |
+ | **Description** | A more detailed description of the app role displayed during admin app assignment and consent experiences. | `Writers can create surveys.` |
+ | **Do you want to enable this app role?** | Specifies whether the app role is enabled. To delete an app role, deselect this checkbox and apply the change before attempting the delete operation. | _Checked_ |
+
+1. Select **Apply** to save your changes.
+
+## Assign users and groups to roles
+
+Once you've added app roles in your application, you can assign users and groups to the roles. Assignment of users and groups to roles can be done through the portal's UI, or programmatically using [Microsoft Graph](/graph/api/user-post-approleassignments). When the users assigned to the various app roles sign in to the application, their tokens will have their assigned roles in the `roles` claim.
+
+To assign users and groups to roles by using the Azure portal:
+
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. In **Azure Active Directory**, select **Enterprise applications** in the left-hand navigation menu.
+1. Select **All applications** to view a list of all your applications. If your application doesn't appear in the list, use the filters at the top of the **All applications** list to restrict the list, or scroll down the list to locate your application.
+1. Select the application in which you want to assign users or security group to roles.
+1. Under **Manage**, select **Users and groups**.
+1. Select **Add user** to open the **Add Assignment** pane.
+1. Select the **Users and groups** selector from the **Add Assignment** pane. A list of users and security groups is displayed. You can search for a certain user or group and select multiple users and groups that appear in the list.
+1. Once you've selected users and groups, select the **Select** button to proceed.
+1. Select **Select a role** in the **Add assignment** pane. All the roles that you've defined for the application are displayed.
+1. Choose a role and select the **Select** button.
+1. Select the **Assign** button to finish the assignment of users and groups to the app.
+
+Confirm that the users and groups you added appear in the **Users and groups** list.
+
+## Assign app roles to applications
+
+Once you've added app roles in your application, you can assign an app role to a client app by using the Azure portal or programmatically by using [Microsoft Graph](/graph/api/user-post-approleassignments).
+
+When you assign app roles to an application, you create _application permissions_. Application permissions are typically used by daemon apps or back-end services that need to authenticate and make authorized API calls as themselves, without user interaction.
+
+To assign app roles to an application by using the Azure portal:
+
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. In **Azure Active Directory**, select **App registrations** in the left-hand navigation menu.
+1. Select **All applications** to view a list of all your applications. If your application doesn't appear in the list, use the filters at the top of the **All applications** list to restrict the list, or scroll down the list to locate your application.
+1. Select the application to which you want to assign an app role.
+1. Select **API permissions** > **Add a permission**.
+1. Select the **My APIs** tab, and then select the app for which you defined app roles.
+1. Select **Application permissions**.
+1. Select the role(s) you want to assign.
+1. Select the **Add permissions** button to complete the addition of the role(s).
+
+The newly added roles should appear in your app registration's **API permissions** pane.
+
+### Grant admin consent
+
+Because these are _application permissions_, not delegated permissions, an admin must grant consent to use the app roles assigned to the application.
+
+1. In the app registration's **API permissions** pane, select **Grant admin consent for \<tenant name\>**.
+1. Select **Yes** when prompted to grant consent for the requested permissions.
+
+The **Status** column should reflect that consent has been **Granted for \<tenant name\>**.
+
+<a name="use-app-roles-in-your-web-api"></a>
+
+## Usage scenario of app roles
+
+If you're implementing app role business logic in an app that signs in users, first define the app roles in **App registrations**. Then, an admin assigns them to users and groups in the **Enterprise applications** pane. These assigned app roles are included with any token that's issued for your application: access tokens when your app is the API being called by an app, or ID tokens when your app is signing in a user.
+
+If you're implementing app role business logic in an app-calling-API scenario, you have two app registrations. One app registration is for the app, and a second app registration is for the API. In this case, define the app roles and assign them to the user or group in the app registration of the API. When the user authenticates with the app and requests an access token to call the API, a roles claim is included in the token. Your next step is to add code to your web API to check for those roles when the API is called.
+
+To learn how to add authorization to your web API, see [Protected web API: Verify scopes and app roles](scenario-protected-web-api-verification-scope-app-roles.md).
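For the app-calling-API case, the API-side check can be as small as inspecting the `roles` claim of the already-validated token payload. A minimal sketch (the helper name is an assumption; a real API should validate the token signature first and typically uses its framework's authorization middleware):

```python
def require_role(claims: dict, required: str) -> None:
    """Authorize a decoded, signature-validated token payload against an app role."""
    roles = claims.get("roles", [])
    if required not in roles:
        raise PermissionError(f"token lacks required app role: {required}")

# Hypothetical payload containing a role defined in the API's app registration
require_role({"roles": ["Survey.Create"]}, "Survey.Create")  # passes silently
```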
+
+## App roles vs. groups
+
+Though you can use app roles or groups for authorization, key differences between them can influence which you decide to use for your scenario.
+
+| App roles | Groups |
+| --------- | ------ |
+| They're specific to an application and are defined in the app registration. They move with the application. | They aren't specific to an app, but to an Azure AD tenant. |
+| App roles are removed when their app registration is removed. | Groups remain intact even if the app is removed. |
+| Provided in the `roles` claim. | Provided in the `groups` claim. |
+
+Developers can use app roles to control whether a user can sign in to an app or an app can obtain an access token for a web API. To extend this security control to groups, developers and admins can also assign security groups to app roles.
+
+Developers prefer app roles when they want to describe and control the parameters of authorization in their app themselves. For example, an app that uses groups for authorization breaks in the next tenant because both the group ID and name could be different, whereas an app that uses app roles remains safe. In fact, assigning groups to app roles is popular with SaaS apps for the same reason: it allows the SaaS app to be provisioned in multiple tenants.
+
+## Next steps
+
+Learn more about app roles with the following resources.
+
+- Code samples on GitHub
+ - [Add authorization using app roles & roles claims to an ASP\.NET Core web app](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-1-Roles/README.md)
+- Reference documentation
+ - [Azure AD app manifest](./reference-app-manifest.md)
+- Video: [Implement authorization in your applications with Microsoft identity platform](https://www.youtube.com/watch?v=LRoc-na27l0) (1:01:15)
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
To learn more about making API calls to Azure AD and Microsoft 365 services like
[MSFT-Graph-permission-scopes]: /graph/permissions-reference <!--Image references-->
-[AAD-Sign-In]: ./media/active-directory-devhowto-multi-tenant-overview/sign-in-with-microsoft-light.png
+[AAD-Sign-In]: ./media/devhowto-multi-tenant-overview/sign-in-with-microsoft-light.png
[Consent-Single-Tier]: ./media/howto-convert-app-to-be-multi-tenant/consent-flow-single-tier.svg [Consent-Multi-Tier-Known-Client]: ./media/howto-convert-app-to-be-multi-tenant/consent-flow-multi-tier-known-clients.svg [Consent-Multi-Tier-Multi-Party]: ./media/howto-convert-app-to-be-multi-tenant/consent-flow-multi-tier-multi-party.svg
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
- Title: Configure SAML app multi-instancing for an application
-description: Learn about SAML App Multi-Instancing, which is needed for the configuration of multiple instances of the same application within a tenant.
-------- Previously updated : 01/06/2023----
-# Configure SAML app multi-instancing for an application in Azure Active Directory
-
-App multi-instancing refers to the need for the configuration of multiple instances of the same application within a tenant. For example, the organization has multiple Amazon Web Services accounts, each of which needs a separate service principal to handle instance-specific claims mapping (adding the AccountID claim for that AWS tenant) and roles assignment. Or the customer has multiple instances of Box, which doesn't need special claims mapping, but does need separate service principals for separate signing keys.
-
-## IDP versus SP initiated SSO
-
-A user can sign in to an application in one of two ways: either through the application directly, which is known as service provider (SP) initiated single sign-on (SSO), or by going directly to the identity provider (IDP), known as IDP initiated SSO. Depending on which approach is used within your organization, follow the appropriate instructions below.
-
-## SP Initiated
-
-In the SAML request of SP initiated SSO, the Issuer specified is usually the App ID Uri. Utilizing App ID Uri doesn't allow the customer to distinguish which instance of an application is being targeted when using SP initiated SSO.
-
-## SP Initiated Configuration Instructions
-
-Update the SAML single sign-on service URL configured within the service provider for each instance to include the service principal GUID as part of the URL. For example, where the general SSO sign-in URL for SAML would have been `https://login.microsoftonline.com/<tenantid>/saml2`, the URL can now be updated to target a specific service principal: `https://login.microsoftonline.com/<tenantid>/saml2/<issuer>`.
-
-Only service principal identifiers in GUID format are accepted for the `issuer` value. The service principal identifiers override the issuer in the SAML request and response, and the rest of the flow is completed as usual. There's one exception: if the application requires the request to be signed, the request is rejected even if the signature was valid. The rejection is done to avoid any security risks with functionally overriding values in a signed request.
-
-## IDP Initiated
-
-The IDP initiated feature exposes two settings for each application.
-
-- An **audience override** option exposed for configuration by using claims mapping or the portal. The intended use case is applications that require the same audience for multiple instances. This setting is ignored if no custom signing key is configured for the application.
-- An **issuer with application id** flag to indicate the issuer should be unique for each application instead of unique for each tenant. This setting is ignored if no custom signing key is configured for the application.
-
-## IDP Initiated Configuration Instructions
-
-1. Open any SSO enabled enterprise app and navigate to the SAML single sign on blade.
-1. Select **Edit** on the **User Attributes & Claims** panel.
-![Edit Configuration](./media/reference-app-multi-instancing/userattributesclaimsedit.png)
-1. Open the advanced options blade.
-![Open Advanced Options](./media/reference-app-multi-instancing/advancedoptionsblade.png)
-1. Configure both options according to your preferences and then select **Save**.
-![Configure Options](./media/reference-app-multi-instancing/advancedclaimsoptions.png)
-
-## Next steps
-
-- To explore the claims mapping policy in graph, see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0&preserve-view=true)
-- To learn more about how to configure this policy, see [Customize app SAML token claims](active-directory-saml-claims-customization.md)
active-directory Reference Third Party Cookies Spas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-third-party-cookies-spas.md
SPAs have two more restrictions:
- [The redirect URI must be marked as type `spa`](v2-oauth2-auth-code-flow.md#redirect-uris-for-single-page-apps-spas) to enable CORS on login endpoints.
- Refresh tokens issued through the authorization code flow to `spa` redirect URIs have a 24-hour lifetime rather than a 90-day lifetime.

## Performance and UX implications
There are two ways of accomplishing sign-in:
### Using iframes
-A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. However, there are couple of caveats to this assumption irrespective of whether third-party cookies are enabled or blocked in the browser.
+A common pattern in web apps is to use an iframe to embed one app inside another: the top-level frame handles authenticating the user and the application hosted in the iframe can trust that the user is signed in, fetching tokens silently using the implicit flow. However, there are a couple of caveats to this assumption irrespective of whether third-party cookies are enabled or blocked in the browser.
Silent token acquisition no longer works when third-party cookies are blocked - the application embedded in the iframe must switch to using popups to access the user's session as it can't navigate to the login page.
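The fallback described above can be sketched generically. The helper and the stand-in functions below are assumptions for illustration; in a real app the two callbacks would be MSAL.js calls such as `acquireTokenSilent` and `acquireTokenPopup`.

```javascript
// Sketch of the popup-fallback pattern: try silent (iframe-based) token
// acquisition first, and fall back to an interactive popup when it fails,
// for example because third-party cookies are blocked.
async function acquireTokenWithFallback(acquireSilently, acquireViaPopup) {
  try {
    return await acquireSilently();
  } catch (err) {
    // With MSAL.js you'd typically check for InteractionRequiredAuthError
    // before falling back to interaction.
    return await acquireViaPopup();
  }
}

// Demo with stand-in functions: silent acquisition fails, popup succeeds.
const silentThatFails = async () => { throw new Error('interaction_required'); };
const popup = async () => ({ accessToken: 'token-from-popup' });

acquireTokenWithFallback(silentThatFails, popup)
  .then((result) => console.log(result.accessToken)); // token-from-popup
```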
active-directory Saml Protocol Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-protocol-reference.md
+
+ Title: How the Microsoft identity platform uses the SAML protocol
+description: This article provides an overview of the single sign-on and Single Sign-Out SAML profiles in Azure Active Directory.
++++++++ Last updated : 11/4/2022+++++
+# How the Microsoft identity platform uses the SAML protocol
+
+The Microsoft identity platform uses the SAML 2.0 and other protocols to enable applications to provide a single sign-on (SSO) experience to their users. The [SSO](single-sign-on-saml-protocol.md) and [Single Sign-Out](single-sign-out-saml-protocol.md) SAML profiles of Azure Active Directory (Azure AD) explain how SAML assertions, protocols, and bindings are used in the identity provider service.
+
+The SAML protocol requires the identity provider (Microsoft identity platform) and the service provider (the application) to exchange information about themselves.
+
+When an application is registered with Azure AD, the app developer registers federation-related information with Azure AD. This information includes the **Redirect URI** and **Metadata URI** of the application.
+
+The Microsoft identity platform uses the cloud service's **Metadata URI** to retrieve the signing key and the logout URI. This way the Microsoft identity platform can send the response to the correct URL. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>:
+
+- Open the app in **Azure Active Directory** and select **App registrations**.
+- Under **Manage**, select **Authentication**. From there you can update the Logout URL.
+
+Azure AD exposes tenant-specific and common (tenant-independent) SSO and single sign-out endpoints. These URLs represent addressable locations and aren't just identifiers, so you can go to an endpoint to read the metadata.
+
+- The tenant-specific endpoint is located at `https://login.microsoftonline.com/<TenantDomainName>/FederationMetadata/2007-06/FederationMetadata.xml`. The *\<TenantDomainName>* placeholder represents a registered domain name or TenantID GUID of an Azure AD tenant. For example, the federation metadata of the `contoso.com` tenant is at: https://login.microsoftonline.com/contoso.com/FederationMetadata/2007-06/FederationMetadata.xml
+
+- The tenant-independent endpoint is located at
+ `https://login.microsoftonline.com/common/FederationMetadata/2007-06/FederationMetadata.xml`. In this endpoint address, *common* appears instead of a tenant domain name or ID.
+
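The two endpoint shapes above can be sketched with a small helper; the function name is an assumption for illustration, and calling it with no argument yields the tenant-independent (`common`) endpoint.

```javascript
// Sketch: build the federation metadata URL for a tenant domain name,
// TenantID GUID, or "common" (tenant-independent).
function federationMetadataUrl(tenant = 'common') {
  return `https://login.microsoftonline.com/${tenant}/FederationMetadata/2007-06/FederationMetadata.xml`;
}

console.log(federationMetadataUrl('contoso.com'));
console.log(federationMetadataUrl());
```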
+## Next steps
+
+For information about the federation metadata documents that Azure AD publishes, see [Federation Metadata](../azuread-dev/azure-ad-federation-metadata.md).
active-directory Security Best Practices For App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/security-best-practices-for-app-registration.md
This article describes security best practices for the following application pro
It's important to keep Redirect URIs of your application up to date. Under **Authentication** for the application in the Azure portal, a platform must be selected for the application and then the **Redirect URI** property can be defined. Consider the following guidance for redirect URIs:
Scenarios that required **implicit flow** can now use **Auth code flow** to reduce the risk of compromise associated with implicit flow misuse. Under **Authentication** for the application in the Azure portal, a platform must be selected for the application and then the **Access tokens (used for implicit flows)** property can be set. Consider the following guidance related to implicit flow:
Certificates and secrets, also known as credentials, are a vital part of an application when it's used as a confidential client. Under **Certificates and secrets** for the application in the Azure portal, certificates and secrets can be added or removed. Consider the following guidance related to certificates and secrets:
The **Application ID URI** property of the application specifies the globally unique URI used to identify the web API. It's the prefix for scopes and in access tokens, it's also the value of the audience claim and it must use a verified customer owned domain. For multi-tenant applications, the value must also be globally unique. It's also referred to as an identifier URI. Under **Expose an API** for the application in the Azure portal, the **Application ID URI** property can be defined. Consider the following guidance related to defining the Application ID URI:
Owners can manage all aspects of a registered application. It's important to regularly review the ownership of all applications in the organization. For more information, see [Azure AD access reviews](../governance/access-reviews-overview.md). Under **Owners** for the application in the Azure portal, the owners of the application can be managed. Consider the following guidance related to specifying application owners:
The **Integration assistant** in the Azure portal can be used to make sure that an application meets a high quality bar and to provide secure integration. The integration assistant highlights best practices and recommendations that help you avoid common oversights when integrating with the Microsoft identity platform.

## Next steps
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
This article covers the SAML 2.0 authentication requests and responses that Azur
The protocol diagram below describes the single sign-on sequence. The cloud service (the service provider) uses an HTTP Redirect binding to pass an `AuthnRequest` (authentication request) element to Azure AD (the identity provider). Azure AD then uses an HTTP post binding to post a `Response` element to the cloud service.
-![Single Sign-On (SSO) Workflow](./media/single-sign-on-saml-protocol/active-directory-saml-single-sign-on-workflow.png)
+![Screenshot of the Single Sign-On (SSO) Workflow.](./media/single-sign-on-saml-protocol/saml-single-sign-on-workflow.png)
> [!NOTE]
> This article discusses using SAML for single sign-on. For more information on other ways to handle single sign-on (for example, by using OpenID Connect or integrated Windows authentication), see [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
active-directory Single Sign Out Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-out-saml-protocol.md
If the app is [added to the Azure App Gallery](../manage-apps/v2-howto-app-galle
The following diagram shows the workflow of the Azure AD single sign-out process.
-![Azure AD Single Sign Out Workflow](./media/single-sign-out-saml-protocol/active-directory-saml-single-sign-out-workflow.png)
+![Screenshot of the Azure AD Single Sign Out Workflow.](./media/single-sign-out-saml-protocol/saml-single-sign-out-workflow.png)
## LogoutRequest
active-directory Tutorial V2 Javascript Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-javascript-spa.md
sampleApp/
## How the sample app works
-![Diagram that shows how the sample app generated by this tutorial works.](media/active-directory-develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
+![Diagram that shows how the sample app generated by this tutorial works.](media/develop-guidedsetup-javascriptspa-introduction/javascriptspa-intro.svg)
The application that you create in this tutorial enables a JavaScript SPA to query the Microsoft Graph API. This querying can also work for a web API that's set up to accept tokens from the Microsoft identity platform. After the user signs in, the SPA requests an access token and adds it to the HTTP requests through the authorization header. The SPA will use this token to acquire the user's profile and emails via the Microsoft Graph API.
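The authorization-header step described above can be sketched as follows. This is a hedged illustration: the helper name is an assumption, the token is a placeholder string rather than a real MSAL.js result, and in the browser you would pass the returned URL and options to `fetch`.

```javascript
// Sketch: attach an access token to a Microsoft Graph request through the
// HTTP Authorization header, as the SPA in this tutorial does.
function buildGraphRequest(accessToken) {
  return {
    url: 'https://graph.microsoft.com/v1.0/me',
    options: {
      method: 'GET',
      headers: { Authorization: `Bearer ${accessToken}` },
    },
  };
}

// In the browser: fetch(req.url, req.options)
const req = buildGraphRequest('eyJ...placeholder');
console.log(req.options.headers.Authorization); // Bearer eyJ...placeholder
```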
Now that you've set up the code, you need to test it:
After the browser loads your *index.html* file, select **Sign In**. You're prompted to sign in with the Microsoft identity platform.

### Provide consent for application access

The first time that you sign in to your application, you're prompted to grant it access to your profile and sign you in. Select **Accept** to continue.

### View application results

After you sign in, you can select **Read More** under your displayed name. Your user profile information is returned in the displayed Microsoft Graph API response.

### More information about scopes and delegated permissions
active-directory Tutorial V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-desktop.md
In this tutorial:
## How the sample app generated by this guide works
-![Shows how the sample app generated by this tutorial works](./media/active-directory-develop-guidedsetup-windesktop-intro/windesktophowitworks.svg)
+![Shows how the sample app generated by this tutorial works](./media/develop-guidedsetup-windesktop-intro/windesktophowitworks.svg)
The sample application that you create with this guide is a Windows Desktop application that queries the Microsoft Graph API, or a web API that accepts tokens from a Microsoft identity platform endpoint. For this scenario, you add a token to HTTP requests via the Authorization header. The Microsoft Authentication Library (MSAL) handles token acquisition and renewal.
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
Many modern apps have a single-page app front end written primarily in JavaScrip
The flow diagram below demonstrates the OAuth 2.0 authorization code grant (with details around PKCE omitted), where the app receives a code from the Microsoft identity platform `authorize` endpoint and redeems it for an access token and a refresh token using cross-site web requests. For single-page apps (SPAs), the access token is valid for 1 hour; once it expires, the app must request another code using the refresh token. In addition to the access token, an `id_token` that represents the signed-in user to the client application is typically also requested through the same flow and/or a separate OpenID Connect request (not shown here). To see this scenario in action, check out the [Tutorial: Sign in users and call the Microsoft Graph API from a JavaScript SPA using auth code flow](tutorial-v2-javascript-auth-code.md).
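The 1-hour access-token lifetime can be illustrated with a small expiry check. MSAL.js performs this bookkeeping internally; the function name and the 5-minute renewal buffer below are assumptions for illustration only.

```javascript
// Sketch: decide whether a SPA's cached access token should be renewed,
// renewing slightly before the actual expiry to avoid using a stale token.
function needsRenewal(expiresOnMs, nowMs = Date.now(), bufferMs = 5 * 60 * 1000) {
  return nowMs >= expiresOnMs - bufferMs;
}

const issuedAt = Date.now();
const expiresOn = issuedAt + 60 * 60 * 1000; // 1-hour SPA access token lifetime

console.log(needsRenewal(expiresOn, issuedAt));                   // false: freshly issued
console.log(needsRenewal(expiresOn, issuedAt + 56 * 60 * 1000));  // true: inside the buffer
```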
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
zone_pivot_groups: web-app-quickstart
::: zone-end

::: zone pivot="devlang-nodejs-msal"

::: zone-end

::: zone pivot="devlang-java"
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
Previously updated : 07/18/2022 Last updated : 06/12/2023
Global readers, Cloud Device Administrators, Intune Administrators, and Global A
The exported list includes these device identity attributes:
-`accountEnabled, approximateLastLogonTimeStamp, deviceOSType, deviceOSVersion, deviceTrustType, dirSyncEnabled, displayName, isCompliant, isManaged, lastDirSyncTime, objectId, profileType, registeredOwners, systemLabels, registrationTime, mdmDisplayName`
+`displayName,accountEnabled,operatingSystem,operatingSystemVersion,joinType (trustType),registeredOwners,userNames,mdmDisplayName,isCompliant,registrationTime,approximateLastSignInDateTime,deviceId,isManaged,objectId,profileType,systemLabels,model`
## Configure device settings
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
+
# Set up self-service group management in Azure Active Directory

You can enable users to create and manage their own security groups or Microsoft 365 groups in Azure Active Directory (Azure AD), part of Microsoft Entra. The owner of the group can approve or deny membership requests, and can delegate control of group membership. Self-service group management features are not available for [mail-enabled security groups or distribution lists](../fundamentals/concept-learn-about-groups.md).
Groups created in | Security group default behavior | Microsoft 365 group defaul
1. Sign in to the [Azure portal](https://portal.azure.com) with an account that's been assigned the Global Administrator or Groups Administrator role for the directory.
-1. Browse to **Azure Active Directory** > **Groups**, and then select **General** settings.
+2. Browse to **Azure Active Directory** > **Groups**, and then select **General** settings.
- ![Azure Active Directory groups general settings.](./media/groups-self-service-management/groups-settings-general.png)
+ ![Azure Active Directory groups general settings.](./media/groups-self-service-management/groups-settings-general.png)
+ > [!NOTE]
+ > In November 2023, the setting **Restrict users access to My Groups** will change to **Restrict users ability to see and edit security groups in My Groups.** If the setting is currently set to 'Yes', end users will be able to access My Groups in November 2023, but will not be able to see security groups.
-1. Set **Owners can manage group membership requests in the Access Panel** to **Yes**.
+3. Set **Owners can manage group membership requests in the Access Panel** to **Yes**.
-1. Set **Restrict user ability to access groups features in the Access Panel** to **No**.
+4. Set **Restrict user ability to access groups features in the Access Panel** to **No**.
-1. Set **Users can create security groups in Azure portals, API or PowerShell** to **Yes** or **No**.
+5. Set **Users can create security groups in Azure portals, API or PowerShell** to **Yes** or **No**.
For more information about this setting, see the next section [Group settings](#group-settings).
-1. Set **Users can create Microsoft 365 groups in Azure portals, API or PowerShell** to **Yes** or **No**.
+6. Set **Users can create Microsoft 365 groups in Azure portals, API or PowerShell** to **Yes** or **No**.
For more information about this setting, see the next section [Group settings](#group-settings).
These articles provide additional information on Azure Active Directory.
* [Application Management in Azure Active Directory](../manage-apps/what-is-application-management.md)
* [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md)
* [Integrate your on-premises identities with Azure Active Directory](../hybrid/whatis-hybrid-identity.md)
active-directory Concept Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/concept-branding-customers.md
The following list and image outline the elements of the default Microsoft sign-
5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use, and troubleshooting details, also known as the ellipsis, in the bottom-right corner of the screen.
6. Microsoft overlay.
- :::image type="content" source="media/how-to-customize-branding-customers/azure-ad-microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/azure-ad-microsoft-branding.png":::
+ :::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
The following image displays the neutral default branding of the customer tenant:

:::image type="content" source="media/how-to-customize-branding-customers/ciam-neutral-branding.png" alt-text="Screenshot of the CIAM neutral branding." lightbox="media/how-to-customize-branding-customers/ciam-neutral-branding.png":::
active-directory How To Customize Branding Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md
The following list and image outline the elements of the default Microsoft sign-
5. Microsoft footer hyperlinks, for example, Privacy & cookies, Terms of use, and troubleshooting details, also known as the ellipsis, in the bottom-right corner of the screen.
6. Microsoft overlay.
- :::image type="content" source="media/how-to-customize-branding-customers/azure-ad-microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/azure-ad-microsoft-branding.png":::
+ :::image type="content" source="media/how-to-customize-branding-customers/microsoft-branding.png" alt-text="Screenshot of the Azure AD default Microsoft branding." lightbox="media/how-to-customize-branding-customers/microsoft-branding.png":::
The following image displays the neutral default branding of the customer tenant:

:::image type="content" source="media/how-to-customize-branding-customers/ciam-neutral-branding.png" alt-text="Screenshot of the CIAM neutral branding." lightbox="media/how-to-customize-branding-customers/ciam-neutral-branding.png":::
active-directory How To Single Page App Vanillajs Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-configure-authentication.md
Previously updated : 05/25/2023 Last updated : 06/09/2023 #Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. # Tutorial: Handle authentication flows in a vanilla JavaScript single-page app
-In the [previous article](./how-to-single-page-app-vanillajs-prepare-app.md), you created a vanilla JavaScript (JS) single-page application (SPA) and a server to host it. In this article, you'll configure the application to authenticate and authorize users to access protected resources. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/).
+In the [previous article](./how-to-single-page-app-vanillajs-prepare-app.md), you created a vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This article shows you how to configure the application to authenticate and authorize users to access protected resources.
-In this tutorial you'll;
+In this tutorial:
> [!div class="checklist"]
> * Configure the settings for the application
The application uses the [Implicit Grant Flow](../../develop/v2-oauth2-implicit-
```

1. Replace the following values with the values from the Azure portal:
- - Find the `Enter_the_Application_Id_Here` value and replace it with the **application ID (clientId)** of the app you registered in the Microsoft Entra admin center.
+ - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center.
- In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen.onmicrosoft.com*, the value you should enter is *caseyjensen*.
1. Save the file.
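The values described above can be sketched as a configuration object. This is an assumption-laden illustration of what *authConfig.js* might contain, not the tutorial's exact file: the placeholder strings are the ones from the steps above, and the `ciamlogin.com` authority format is the one used by Azure AD for customers tenants.

```javascript
// Sketch of an MSAL.js configuration for an Azure AD for customers tenant.
// Replace the placeholders as described in the steps above.
const msalConfig = {
  auth: {
    clientId: 'Enter_the_Application_Id_Here', // Application (client) ID from the admin center
    authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // tenant subdomain authority
    redirectUri: '/', // must match a redirect URI registered for the SPA
  },
};

console.log(msalConfig.auth.authority);
```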
The application uses *authPopup.js* to handle the authentication flow when the u
## Next steps > [!div class="nextstepaction"]
-> [Sign in and sign out of the Vanilla JS SPA](./how-to-single-page-app-vanillajs-sign-in-sign-out.md)
+> [Sign in and sign out of the vanilla JS SPA](./how-to-single-page-app-vanillajs-sign-in-sign-out.md)
active-directory How To Single Page App Vanillajs Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-app.md
Previously updated : 05/25/2023 Last updated : 06/09/2023 #Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant.
-# Tutorial: Prepare a vanilla JavaScript single-page app (SPA) for authentication in a customer tenant
+# Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant
-In the [previous article](./how-to-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a vanilla JavaScript SPA
+In the [previous article](./how-to-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant.
-In this tutorial you'll;
+In this tutorial:
> [!div class="checklist"]
-> * Create a vanilla Javascript project in Visual Studio Code
+> * Create a vanilla JavaScript project in Visual Studio Code
> * Install required packages > * Add code to *server.js* to create a server ## Prerequisites
-* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md).
-* Although any integrated development environment (IDE) that supports Vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
+* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md).
+* Although any integrated development environment (IDE) that supports vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page.
* [Node.js](https://nodejs.org/en/download/).
-## Create a new Vanilla JS project and install dependencies
+## Create a new vanilla JS project and install dependencies
1. Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project.
1. Open a new terminal by selecting **Terminal** > **New Terminal**.
active-directory How To Single Page App Vanillajs Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-tenant.md
Previously updated : 05/25/2023 Last updated : 06/09/2023
-#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
+#Customer intent: As a developer, I want to learn how to configure a vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
-# Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app (SPA)
+# Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app
-This tutorial series demonstrates how to build a Vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
+This tutorial series demonstrates how to build a vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
In this tutorial, you'll:

> [!div class="checklist"]
-> * Register a web application in the Microsoft Entra admin center, and record its identifiers
-> * Create a client secret for the web application
+> * Register a SPA in the Microsoft Entra admin center, and record its identifiers
> * Define the platform and URLs
-> * Grant permissions to the web application to access the Microsoft Graph API
+> * Grant permissions to the SPA to access the Microsoft Graph API
> * Create a sign in and sign out user flow in the Microsoft Entra admin center
-> * Associate your web application with the user flow
+> * Associate your SPA with the user flow
## Prerequisites

- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions:

  * Application administrator
active-directory How To Single Page App Vanillajs Sign In Sign Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md
Last updated 05/25/2023
#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
-# Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant
+# Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant
In the [previous article](how-to-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality.
-In this tutorial you'll;
+In this tutorial:
> [!div class="checklist"]
-> * Add code to the *index.html* file to create the user interface (UI)
+> * Add code to the *index.html* file to create the user interface
> * Add code to the *signout.html* file to create the sign-out page
> * Sign in and sign out of the application
In this tutorial you'll;
## Add code to the *index.html* file

The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button.

1. Open *public/index.html* and add the following code snippet:

    ```html
The main page of the SPA, *index.html*, is the first page that is loaded when th
When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button.

1. Open *public/ui.js* and add the following code snippet:

    ```javascript
    // Select DOM elements to work with
    const signInButton = document.getElementById('signIn');
When authorization has been configured, the user interface can be created to all
```css .navbarStyle {
- padding: .5rem 1rem !important;
+ padding: .5rem 1rem !important;
} .table-responsive-ms {
Now that all the required code snippets have been added, the application can be
1. Select **No account? Create one**, which starts the sign-up flow.
1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application.
1. After you enter a one-time passcode from the customer tenant, enter a new password and more account details to complete the sign-up flow.
1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**.
1. The SPA now displays a button saying **Request Profile Information**. Select it to display profile data.

    :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png":::

## Sign out of the application
active-directory How To Single Page Application React Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-configure-authentication.md
+
+ Title: Tutorial - Handle authentication flows in a React single-page app
+description: Learn how to configure authentication for a React single-page app (SPA) with your Azure Active Directory (AD) for customers tenant.
+Last updated : 06/09/2023
+#Customer intent: As a developer, I want to learn how to configure a React single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant.
++
+# Tutorial: Handle authentication flows in a React single-page app
+
+In the [previous article](./how-to-single-page-application-react-prepare-app.md), you created a React single-page app (SPA) and prepared it for authentication with your Azure Active Directory (Azure AD) for customers tenant. In this article, you'll learn how to handle authentication flows in your app by adding components.
+
+In this tutorial:
+
+> [!div class="checklist"]
+> * Add a *DataDisplay* component to the app
+> * Add a *ProfileContent* component to the app
+> * Add a *PageLayout* component to the app
+
+## Prerequisites
+
+* Completion of the prerequisites and steps in [Prepare a single-page app for authentication](./how-to-single-page-application-react-prepare-app.md).
+
+## Add components to the application
+
+Functional components are the building blocks of React apps, and are used to build the sign-in and sign-out experiences in your React SPA.
+
+### Add the DataDisplay component
+
+1. Open *src/components/DataDisplay.jsx* and add the following code snippet:
+
+ ```jsx
+ import { Table } from 'react-bootstrap';
+ import { createClaimsTable } from '../utils/claimUtils';
+
+ import '../styles/App.css';
+
+ export const IdTokenData = (props) => {
+ const tokenClaims = createClaimsTable(props.idTokenClaims);
+
+ const tableRow = Object.keys(tokenClaims).map((key, index) => {
+ return (
+ <tr key={key}>
+ {tokenClaims[key].map((claimItem) => (
+ <td key={claimItem}>{claimItem}</td>
+ ))}
+ </tr>
+ );
+ });
+ return (
+ <>
+ <div className="data-area-div">
+ <p>
+ See below the claims in your <strong> ID token </strong>. For more information, visit:{' '}
+ <span>
+ <a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/id-tokens#claims-in-an-id-token">
+ docs.microsoft.com
+ </a>
+ </span>
+ </p>
+ <div className="data-area-div">
+ <Table responsive striped bordered hover>
+ <thead>
+ <tr>
+ <th>Claim</th>
+ <th>Value</th>
+ <th>Description</th>
+ </tr>
+ </thead>
+ <tbody>{tableRow}</tbody>
+ </Table>
+ </div>
+ </div>
+ </>
+ );
+ };
+ ```
+
+1. Save the file.
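The *DataDisplay* component imports `createClaimsTable` from *src/utils/claimUtils.js*, which isn't shown here. As a rough illustration only — the helper names and claim descriptions below are assumptions, not the tutorial's actual *claimUtils.js* — such a utility maps each ID token claim to a `[claim, value, description]` row:

    ```javascript
    // Illustrative sketch (assumed shape; not the tutorial's actual claimUtils.js).
    // Maps each ID token claim to a [claim, value, description] row keyed by index.
    function createClaimsTable(claims) {
        const table = {};
        Object.keys(claims).forEach((key, index) => {
            table[index] = [key, String(claims[key]), describeClaim(key)];
        });
        return table;
    }

    // Hypothetical descriptions for a few common ID token claims.
    function describeClaim(key) {
        const descriptions = {
            aud: "The token's intended audience (the application's client ID).",
            iss: 'The security token service that issued the token.',
            name: "The user's display name, if requested.",
        };
        return descriptions[key] || 'No description available.';
    }

    const rows = createClaimsTable({ aud: 'client-id-guid', name: 'Casey Jensen' });
    console.log(rows[1][1]); // → Casey Jensen
    ```

The component above then renders each row's cells into the Bootstrap `Table`.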
+
+### Add the NavigationBar component
+
+1. Open *src/components/NavigationBar.jsx* and add the following code snippet:
+
+ ```jsx
+ import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from '@azure/msal-react';
+ import { Navbar, Button } from 'react-bootstrap';
+ import { loginRequest } from '../authConfig';
+
+ export const NavigationBar = () => {
+ const { instance } = useMsal();
+
+ const handleLoginRedirect = () => {
+ instance.loginRedirect(loginRequest).catch((error) => console.log(error));
+ };
+
+ const handleLogoutRedirect = () => {
+ instance.logoutRedirect().catch((error) => console.log(error));
+ };
+
+ /**
+ * Most applications will need to conditionally render certain components based on whether a user is signed in or not.
+ * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will
+ * only render their children if a user is authenticated or unauthenticated, respectively.
+ */
+ return (
+ <>
+ <Navbar bg="primary" variant="dark" className="navbarStyle">
+ <a className="navbar-brand" href="/">
+ Microsoft identity platform
+ </a>
+ <AuthenticatedTemplate>
+ <div className="collapse navbar-collapse justify-content-end">
+ <Button variant="warning" onClick={handleLogoutRedirect}>
+ Sign out
+ </Button>
+ </div>
+ </AuthenticatedTemplate>
+ <UnauthenticatedTemplate>
+ <div className="collapse navbar-collapse justify-content-end">
+ <Button onClick={handleLoginRedirect}>Sign in</Button>
+ </div>
+ </UnauthenticatedTemplate>
+ </Navbar>
+ </>
+ );
+ };
+ ```
+
+1. Save the file.
+
+### Add the PageLayout component
+
+1. Open *src/components/PageLayout.jsx* and add the following code snippet:
+
+ ```jsx
+ import { AuthenticatedTemplate } from '@azure/msal-react';
+
+ import { NavigationBar } from './NavigationBar.jsx';
+
+ export const PageLayout = (props) => {
+ /**
+ * Most applications will need to conditionally render certain components based on whether a user is signed in or not.
+ * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will
+ * only render their children if a user is authenticated or unauthenticated, respectively.
+ */
+ return (
+ <>
+ <NavigationBar />
+ <br />
+ <h5>
+ <center>Welcome to the Microsoft Authentication Library For React Tutorial</center>
+ </h5>
+ <br />
+ {props.children}
+ <br />
+ <AuthenticatedTemplate>
+ <footer>
+ <center>
+ How did we do?
+ <a
+ href="https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_ivMYEeUKlEq8CxnMPgdNZUNDlUTTk2NVNYQkZSSjdaTk5KT1o4V1VVNS4u"
+ rel="noopener noreferrer"
+ target="_blank"
+ >
+ {' '}
+ Share your experience!
+ </a>
+ </center>
+ </footer>
+ </AuthenticatedTemplate>
+ </>
+ );
+    };
+ ```
+
+1. Save the file.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Sign in and sign out of the React SPA](./how-to-single-page-application-react-sign-in-out.md)
active-directory How To Single Page Application React Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-app.md
In the [previous article](./how-to-single-page-application-react-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This tutorial demonstrates how to create a React single-page app using `npm` and create files needed for authentication and authorization.
-In this tutorial you'll;
+In this tutorial:
> [!div class="checklist"]
> * Create a React project in Visual Studio Code
In this tutorial you'll;
   cd reactspalocal
   npm start
   ```
+1. Create additional folders and files to achieve the following folder structure:
+
+ ```text
+ reactspalocal
+    ├── public
+    │   └── index.html
+    └── src
+        ├── components
+        │   ├── DataDisplay.jsx
+        │   ├── NavigationBar.jsx
+        │   └── PageLayout.jsx
+        ├── styles
+        │   ├── App.css
+        │   └── index.css
+        ├── utils
+        │   └── claimUtils.js
+        ├── App.jsx
+        ├── authConfig.js
+        └── index.js
+ ```
-## Install identity and bootstrap packages
+## Install app dependencies
-Identity related **npm** packages must be installed in the project to enable user authentication. For project styling, we'll use **Bootstrap**.
+Identity related **npm** packages must be installed in the project to enable user authentication. For project styling, **Bootstrap** is used.
1. In the **Terminal** bar, select the **+** icon to create a new terminal. A new terminal window will open enabling the other terminal to continue running in the background.
-1. If necessary, navigate to the *reactspalocal* again and enter the following commands into the terminal to install the relevant `msal` and `bootstrap` packages.
+1. If necessary, navigate to *reactspalocal* and enter the following commands into the terminal to install the `msal` and `bootstrap` packages.
   ```powershell
   npm install @azure/msal-browser @azure/msal-react
Identity related **npm** packages must be installed in the project to enable use
## Create the authentication configuration file, *authConfig.js*
-1. Navigate to the *src* folder, and create a new file called *authConfig.js*.
-1. Open *authConfig.js* and add the following code snippet:
+1. In the *src* folder, open *authConfig.js* and add the following code snippet:
```javascript /*
Identity related **npm** packages must be installed in the project to enable use
// }; ```
-1. Replace the following values with the values from the Azure portal:
+1. Replace the following values with the values from the Microsoft Entra admin center:
    - Replace `Enter_the_Application_Id_Here` with the **Application (client) ID** value that was recorded earlier from the **Overview** page of the registered application.
    - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is *caseyjensen.onmicrosoft.com*, the value you should enter is *caseyjensen*.
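The authority URL in *authConfig.js* is derived from that tenant subdomain. As a hedged sketch — the `<subdomain>.ciamlogin.com` host is the usual Azure AD for customers convention, so verify it against your own tenant's configuration — the substitution amounts to:

    ```javascript
    // Hedged sketch: builds the customer-tenant authority URL from the tenant
    // subdomain, per the Enter_the_Tenant_Subdomain_Here placeholder.
    // The <subdomain>.ciamlogin.com host is the Azure AD for customers convention.
    function buildAuthority(tenantSubdomain) {
        return `https://${tenantSubdomain}.ciamlogin.com/`;
    }

    console.log(buildAuthority('caseyjensen')); // → https://caseyjensen.ciamlogin.com/
    ```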
All parts of the app that require authentication must be wrapped in the [`MsalPr
## Next steps > [!div class="nextstepaction"]
-> [Sign in and sign out of the React SPA](./how-to-single-page-application-react-sign-in-out.md)
+> [Configure SPA for authentication](./how-to-single-page-application-react-configure-authentication.md)
active-directory How To Single Page Application React Prepare Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-prepare-tenant.md
# Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)
-This tutorial series demonstrates how to build a React single-page application from scratch and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
+This tutorial series demonstrates how to build a React single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences.
-In this tutorial, you'll;
+In this tutorial:
> [!div class="checklist"]
-> * Register a web application in the Microsoft Entra admin center, and record its identifiers
-> * Create a client secret for the web application
+> * Register a SPA in the Microsoft Entra admin center, and record its identifiers
> * Define the platform and URLs
> * Grant permissions to the web application to access the Microsoft Graph API
-> * Create a sign in and sign out user flow in the Microsoft Entra admin center
-> * Associate your web application with the user flow
+> * Create a sign-in and sign-out user flow in the Microsoft Entra admin center
+> * Associate your SPA with the user flow
## Prerequisites

- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
- This Azure account must have permissions to manage applications. Any of the following Azure AD roles include the required permissions:

  * Application administrator
  * Application developer
  * Cloud application administrator
active-directory How To Single Page Application React Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-application-react-sign-in-out.md
# Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant
-In the [previous article](./how-to-single-page-application-react-prepare-app.md), you created a React single-page app (SPA) in Visual Studio Code and configured it for authentication.
+In the [previous article](./how-to-single-page-application-react-configure-authentication.md), you created a React single-page app (SPA) in Visual Studio Code and configured it for authentication. This tutorial shows you how to add sign-in and sign-out functionality to the app.
-In this tutorial you'll;
+In this tutorial:
> [!div class="checklist"]
-> * Add functional components to the application
> * Create a page layout and add the sign-in and sign-out experience
> * Replace the default function to render authenticated information
> * Sign in and sign out of the application using the user flow
In this tutorial you'll;
* Completion of the prerequisites and steps in [Prepare a single-page app for authentication](./how-to-single-page-application-react-prepare-app.md).
-## Add components to the application
-Functional components are the building blocks of React apps, and are used to build the sign in and sign out experiences in a React SPA.
-
-1. Right click on *src*, select **New Folder** and call it *components*.
-1. Right click on *components* and using the **New File** option, create the following files to create a structure as depicted in the following code block;
- - *PageLayout.jsx*
- - *SignInButton.jsx*
- - *SignOutButton.jsx*
-
- ```txt
- reactspalocal/
-    ├── src/
-    │   ├── components/
-    │   │   ├── PageLayout.jsx
-    │   │   ├── SignInButton.jsx
-    │   │   └── SignOutButton.jsx
-    │   └── ...
-    └── ...
- ```
-
-### Add the page layout
-
-1. Open *PageLayout.jsx* and add the following code to render the page layout. The [useIsAuthenticated](/javascript/api/@azure/msal-react) hook returns whether or not a user is currently signed-in.
-
- ```javascript
- /*
- * Copyright (c) Microsoft Corporation. All rights reserved.
- * Licensed under the MIT License.
- */
-
- import React from "react";
- import Navbar from "react-bootstrap/Navbar";
-
- import { useIsAuthenticated } from "@azure/msal-react";
- import { SignInButton } from "./SignInButton";
- import { SignOutButton } from "./SignOutButton";
-
- /**
- * Renders the navbar component with a sign in or sign out button depending on whether or not a user is authenticated
- * @param props
- */
- export const PageLayout = (props) => {
- const isAuthenticated = useIsAuthenticated();
-
- return (
- <>
- <Navbar bg="primary" variant="dark" className="navbarStyle">
- <a className="navbar-brand" href="/">
- Microsoft Identity Platform
- </a>
- <div className="collapse navbar-collapse justify-content-end">
- {isAuthenticated ? <SignOutButton /> : <SignInButton />}
- </div>
- </Navbar>
- <br />
- <br />
- <h5>
- <center>
- Welcome to the Microsoft Authentication Library For Javascript -
- React SPA Tutorial
- </center>
- </h5>
- <br />
- <br />
- {props.children}
- </>
- );
- };
- ```
-
-1. Save the file.
-
-### Add the sign in experience
-
-1. Open *SignInButton.jsx* and add the following code, which creates a button that signs in the user using either a pop-up or redirect. The `useMsal` hook is used to retrieve an access token to allow user sign in:
-
- ```javascript
- import React from "react";
- import { useMsal } from "@azure/msal-react";
- import { loginRequest } from "../authConfig";
- import DropdownButton from "react-bootstrap/DropdownButton";
- import Dropdown from "react-bootstrap/Dropdown";
-
- /**
- * Renders a drop down button with child buttons for logging in with a popup or redirect
- * Note the [useMsal] package
- */
-
- export const SignInButton = () => {
- const { instance } = useMsal();
-
- const handleLogin = (loginType) => {
- if (loginType === "popup") {
- instance.loginPopup(
- ...loginRequest,
- redirectUri: '/redirect',
- ).catch((e) => {
- console.log(e);
- });
- } else if (loginType === "redirect") {
- instance.loginRedirect(loginRequest).catch((e) => {
- console.log(e);
- });
- }
- };
- return (
- <DropdownButton
- variant="secondary"
- className="ml-auto"
- drop="start"
- title="Sign In"
- >
- <Dropdown.Item as="button" onClick={() => handleLogin("popup")}>
- Sign in using Popup
- </Dropdown.Item>
- <Dropdown.Item as="button" onClick={() => handleLogin("redirect")}>
- Sign in using Redirect
- </Dropdown.Item>
- </DropdownButton>
- );
- };
- ```
-
-1. Save the file.
-
-### Add the sign out experience
-
-1. Open *SignOutButton.jsx* and add the following code, which creates a button that signs out the user using either a pop-up or redirect.
-
- ```javascript
- import React from "react";
- import { useMsal } from "@azure/msal-react";
- import DropdownButton from "react-bootstrap/DropdownButton";
- import Dropdown from "react-bootstrap/Dropdown";
-
- /**
- * Renders a sign out button
- */
- export const SignOutButton = () => {
- const { instance } = useMsal();
-
- const handleLogout = (logoutType) => {
- if (logoutType === "popup") {
- instance.logoutPopup({
- postLogoutRedirectUri: "/",
- mainWindowRedirectUri: "/",
- });
- } else if (logoutType === "redirect") {
- instance.logoutRedirect({
- postLogoutRedirectUri: "/",
- });
- }
- };
-
- return (
- <DropdownButton
- variant="secondary"
- className="ml-auto"
- drop="start"
- title="Sign Out"
- >
- <Dropdown.Item as="button" onClick={() => handleLogout("popup")}>
- Sign out using Popup
- </Dropdown.Item>
- <Dropdown.Item as="button" onClick={() => handleLogout("redirect")}>
- Sign out using Redirect
- </Dropdown.Item>
- </DropdownButton>
- );
- };
- ```
-
-## Change filename and add required imports
+## Change filename and add function to render authenticated information
By default, the application runs via a JavaScript file called *App.js*. It needs to be changed to a *.jsx* file, which is an extension that allows a developer to write HTML in React.

1. Rename *App.js* to *App.jsx*.
-1. Replace the existing imports with the following snippet:
+1. Replace the existing code with the following snippet:
```javascript
- import React, { useState } from 'react';
-
- import { PageLayout } from './components/PageLayout';
- import { loginRequest } from './authConfig';
-
- import { AuthenticatedTemplate, UnauthenticatedTemplate, useMsal } from '@azure/msal-react';
-
- import './App.css';
-
- import Button from 'react-bootstrap/Button';
- ```
-
-### Replacing the default function to render authenticated information
-
-1. Replace the default function `App()` to render authenticated information with the following code:
-
- ```javascript
+ import { MsalProvider, AuthenticatedTemplate, useMsal, UnauthenticatedTemplate } from '@azure/msal-react';
+ import { Container, Button } from 'react-bootstrap';
+ import { PageLayout } from './components/PageLayout';
+ import { IdTokenData } from './components/DataDisplay';
+ import { loginRequest } from './authConfig';
+
+ import './styles/App.css';
+
/**
- * If a user is authenticated the ProfileContent component above is rendered. Otherwise a message indicating a user is not authenticated is rendered.
- */
+ * Most applications will need to conditionally render certain components based on whether a user is signed in or not.
+ * msal-react provides 2 easy ways to do this. AuthenticatedTemplate and UnauthenticatedTemplate components will
+ * only render their children if a user is authenticated or unauthenticated, respectively. For more, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md
+ */
const MainContent = () => {
+ /**
+     * useMsal is a hook that returns the PublicClientApplication instance,
+     * which tells you what msal is currently doing. For more, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/hooks.md
+ */
+ const { instance } = useMsal();
+ const activeAccount = instance.getActiveAccount();
+
+ const handleRedirect = () => {
+ instance
+ .loginRedirect({
+ ...loginRequest,
+ prompt: 'create',
+ })
+ .catch((error) => console.log(error));
+ };
    return (
        <div className="App">
            <AuthenticatedTemplate>
- <ProfileContent />
+ {activeAccount ? (
+ <Container>
+ <IdTokenData idTokenClaims={activeAccount.idTokenClaims} />
+ </Container>
+ ) : null}
</AuthenticatedTemplate>
-
<UnauthenticatedTemplate>
- <h5>
- <center>
- Please sign-in to see your profile information.
- </center>
- </h5>
+ <Button className="signInButton" onClick={handleRedirect} variant="primary">
+ Sign up
+ </Button>
            </UnauthenticatedTemplate>
        </div>
    );
};
-
- export default function App() {
+
+
+ /**
+ * msal-react is built on the React context API and all parts of your app that require authentication must be
+ * wrapped in the MsalProvider component. You will first need to initialize an instance of PublicClientApplication
+ * then pass this to MsalProvider as a prop. All components underneath MsalProvider will have access to the
+ * PublicClientApplication instance via context as well as all hooks and components provided by msal-react. For more, visit:
+ * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-react/docs/getting-started.md
+ */
+ const App = ({ instance }) => {
return (
- <PageLayout>
- <center>
+ <MsalProvider instance={instance}>
+ <PageLayout>
<MainContent />
- </center>
- </PageLayout>
+ </PageLayout>
+ </MsalProvider>
);
- }
+ };
+
+ export default App;
    ```

## Run your project and sign in
active-directory How To Use App Roles Customers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-use-app-roles-customers.md
Previously updated : 05/09/2023 Last updated : 06/13/2023
Though you can use app roles or groups for authorization, key differences betwee
## Declare roles for an application
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the **Directories + subscriptions** icon for switching directories in the toolbar, and then find your customer tenant in the list. If it's not the current directory, select **Switch**.
-1. In the left menu, under **Applications**, select **App registrations**, and then select the application you want to define app roles in.
-1. Select **App roles**, and then select **Create app role**.
-1. In the **Create app role** pane, enter the settings for the role. The following table describes each setting and its parameters.
-
- | Field | Description | Example |
- | -- | -- | -- |
- | **Display name** | Display name for the app role that appears in the app assignment experiences. This value may contain spaces. | `Orders manager`|
- | **Allowed member types** | Specifies whether this app role can be assigned to users, applications, or both. | `Users/Groups` |
- | **Value** | Specifies the value of the roles claim that the application should expect in the token. The value should exactly match the string referenced in the application's code. The value can't contain spaces.| `Orders.Manager` |
- | **Description** | A more detailed description of the app role displayed during admin app assignment experiences. | `Manage online orders.` |
- | **Do you want to enable this app role?** | Specifies whether the app role is enabled. To delete an app role, deselect this checkbox and apply the change before attempting the delete operation.| _Checked_ |
-
-1. Select **Apply** to create the application role.
### Assign users and groups to roles
-Once you've added app roles in your application, administrator can assign users and groups to the roles. Assignment of users and groups to roles can be done through the admin center, or programmatically using [Microsoft Graph](/graph/api/user-post-approleassignments). When the users assigned to the various app roles sign in to the application, their tokens have their assigned roles in the `roles` claim.
-To assign users and groups to application roles by using the Azure portal:
-
-1. In the left menu, under **Applications**, select **Enterprise applications**.
-1. Select **All applications** to view a list of all your applications. If your application doesn't appear in the list, use the filters at the top of the **All applications** list to restrict the list, or scroll down the list to locate your application.
-1. Select the application in which you want to assign users or security group to roles.
-1. Under **Manage**, select **Users and groups**.
-1. Select **Add user** to open the **Add Assignment** pane.
-1. In the **Add Assignment** pane, select **Users and groups**. A list of users and security groups appears. You can select multiple users and groups in the list.
-1. Once you've selected users and groups, choose **Select**.
-2. In the **Add assignment** pane, choose **Select a role**. All the roles you defined for the application appear.
-3. Select a role, and then choose **Select**.
-4. Select **Assign** to finish the assignment of users and groups to the app.
-5. Confirm that the users and groups you added appear in the **Users and groups** list.
-6. To test your application, sign out and sign in again with the user you assigned the roles.
+To test your application, sign out and sign in again with the user to whom you assigned the roles. Inspect the security token to make sure that it contains the user's role.
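One quick way to inspect the token is to decode its payload segment. The helper below is a hypothetical debugging aid only — manual decoding must never replace proper signature validation — and the token shown is fabricated for the demo:

    ```javascript
    // Inspection helper (hypothetical; for debugging only, not token validation):
    // decodes a JWT's base64url payload segment and returns its claims.
    function decodeJwtPayload(token) {
        const payloadSegment = token.split('.')[1];
        // In a browser console, use atob() with base64url character replacements instead of Buffer.
        const json = Buffer.from(payloadSegment, 'base64url').toString('utf8');
        return JSON.parse(json);
    }

    // Demo with a fabricated token whose payload carries a roles claim.
    const payload = Buffer.from(JSON.stringify({ roles: ['Orders.Manager'] })).toString('base64url');
    const claims = decodeJwtPayload(`header.${payload}.signature`);
    console.log(claims.roles); // → [ 'Orders.Manager' ]
    ```

With the `Orders.Manager` role from the earlier table assigned, the `roles` claim should list that value.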
## Add group claims to security tokens
-To emit the group membership claims in security tokens, follow these steps:
-
-1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator.
-1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the **Directories + subscriptions** icon for switching directories in the toolbar, and then find your customer tenant in the list. If it's not the current directory, select **Switch**.
-1. In the left menu, under **Applications**, select **App registrations**, and then select the application in which you want to add the groups claim.
-1. Under **Manage**, select **Token configuration**.
-2. Select **Add groups claim**.
-3. Select **group types** to include in the security tokens.
-4. For the **Customize token properties by type**, select **Group ID**.
-5. Select **Add** to add the groups claim.
### Add members to a group
-Now that you've added app groups claim in your application, add users to the security groups. If you don't have security group, [create one](../../fundamentals/how-to-manage-groups.md#create-a-basic-group-and-add-members).
-1. In the left menu, select **Groups**, and then select **All groups**.
-1. Select the group you want to manage.
-1. Select **Members**.
-1. Select **+ Add members**.
-1. Scroll through the list or enter a name in the search box. You can choose multiple names. When you're ready, choose **Select**.
-2. The **Group Overview** page updates to show the number of members who are now added to the group.
-3. To test your application, sign out, and then sign in again with the user you added to the security group.
+To test your application, sign out, and then sign in again with the user you added to the security group. Inspect the security token to make sure that it contains the user's group membership.
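As a hedged illustration of that inspection step (the GUIDs below are placeholders, not real object IDs from any tenant), when the groups claim is configured with **Group ID**, the decoded token payload carries the user's memberships in a `groups` claim that your application can check:

```python
# Illustrative decoded token payload; the GUIDs are placeholders, not real
# group object IDs from any tenant.
claims = {
    "sub": "user1",
    "groups": [
        "00000000-0000-0000-0000-000000000001",
        "00000000-0000-0000-0000-000000000002",
    ],
}

required_group = "00000000-0000-0000-0000-000000000001"

# Authorization check: is the required group among the user's memberships?
is_member = required_group in claims.get("groups", [])
print(is_member)  # True
```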
## Groups and application roles support
The following table shows which features are currently available.
| Change security group members using the Microsoft Entra admin center | Yes |
| Change security group members using the Microsoft Graph API | Yes |
| Scale up to 50,000 users and 50,000 groups | Not currently available |
-| Add 50,000 users to at least two groups | Not currently available |
+| Add 50,000 users to at least two groups | Not currently available |
active-directory Backup Authentication System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/backup-authentication-system.md
Microsoft is continuously expanding the number of supported scenarios.
## Which non-Microsoft workloads are supported?
-The backup authentication system automatically provides incremental resilience to tens of thousands of supported non-Microsoft applications based on their authentication patterns. Seethe appendix for a list of the most [common non-Microsoft applications and their coverage status](#appendix). For an in depth explanation of which authentication patterns are supported, see the article [Understanding Application Support for the backup authentication system](backup-authentication-system-apps.md) article.
+The backup authentication system automatically provides incremental resilience to tens of thousands of supported non-Microsoft applications based on their authentication patterns. See the appendix for a list of the most [common non-Microsoft applications and their coverage status](#appendix). For an in-depth explanation of which authentication patterns are supported, see [Understanding Application Support for the backup authentication system](backup-authentication-system-apps.md).
-- Native applications using the OAuth 2.0 protocol to access resource applications, such as popular non-Microsoft e-mail and IM clients like: Apple Mail, Aqua Mail, Gmail, Samsung Email, Spark, and Thunderbird
+- Native applications using the OAuth 2.0 protocol to access resource applications, such as popular non-Microsoft e-mail and IM clients like: Apple Mail, Aqua Mail, Gmail, Samsung Email, and Spark.
- Line of business web applications configured to authenticate with OpenID Connect using only ID tokens.
- Web applications authenticating with the SAML protocol, when configured for IDP-Initiated Single Sign On (SSO) like: ADP, Atlassian Cloud, AWS, GoToMeeting, Kronos, Marketo, Palo Alto Networks, SAP Cloud Identity, Trello, Workday, and Zscaler.
The backup authentication system is supported in all cloud environments except A
| Slack | No | SAML SP-initiated |
| Smartsheet | No | SAML SP-initiated |
| Spark | Yes | Protected |
-| Thunderbird | Yes | Protected |
| UKG pro | Yes \* | Protected |
| VMware Boxer | Yes | Protected |
| walkMe | No | SAML SP-initiated |
active-directory Identity Fundamental Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-fundamental-concepts.md
+
+ Title: Introduction to identity
+description: Learn the fundamental concepts of identity and access management (IAM). Learn about identities, resources, authentication, authorization, permissions, identity providers, and more.
+ Last updated : 06/05/2023
+# Identity and access management (IAM) fundamental concepts
+
+This article provides fundamental concepts and terminology to help you understand identity and access management (IAM).
+
+## What is identity and access management (IAM)?
+
+Identity and access management ensures that the right people, machines, and software components get access to the right resources at the right time. First, the person, machine, or software component proves they're who or what they claim to be. Then, the person, machine, or software component is allowed or denied access to or use of certain resources.
+
+Here are some fundamental concepts to help you understand identity and access management:
+
+## Identity
+
+A digital identity is a collection of unique identifiers or attributes that represent a human, software component, machine, asset, or resource in a computer system. An identifier can be:
+- An email address
+- Sign-in credentials (username/password)
+- Bank account number
+- Government issued ID
+- MAC address or IP address
+
+Identities are used to authenticate and authorize access to resources, communicate with other humans, conduct transactions, and other purposes.
+
+At a high level, there are three types of identities:
+
+- **Human identities** represent people such as employees (internal workers and front line workers) and external users (customers, consultants, vendors, and partners).
+- **Workload identities** represent software workloads such as an application, service, script, or container.
+- **Device identities** represent devices such as desktop computers, mobile phones, IoT sensors, and IoT managed devices. Device identities are distinct from human identities.
+
+## Authentication
+
+Authentication is the process of challenging a person, software component, or hardware device for credentials in order to verify their identity, or prove they're who or what they claim to be. Authentication typically requires the use of credentials (like username and password, fingerprints, certificates, or one-time passcodes). Authentication is sometimes shortened to *AuthN*.
+
+Multi-factor authentication (MFA) is a security measure that requires users to provide more than one piece of evidence to verify their identities, such as:
+- Something they know, for example a password.
+- Something they have, like a badge or [security token](/azure/active-directory/develop/security-tokens).
+- Something they are, like a biometric (fingerprint or face).
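To make the "something they have" factor concrete, here's a minimal sketch of a time-based one-time passcode (TOTP) generator in the style of RFC 6238, the algorithm many authenticator apps use. This is an illustrative sketch, not Microsoft Authenticator's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time passcode (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this base32 string encodes the ASCII seed
# "12345678901234567890"; at t=59 the 8-digit code is 94287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Because the code depends only on the shared secret and the current 30-second window, the server can compute the same value and compare.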
+
+Single sign-on (SSO) allows users to authenticate their identity once and then later silently authenticate when accessing various resources that rely on the same identity. Once authenticated, the IAM system acts as the source of identity truth for the other resources available to the user. It removes the need for signing on to multiple, separate target systems.
+
+## Authorization
+
+Authorization validates that the user, machine, or software component has been granted access to certain resources. Authorization is sometimes shortened to *AuthZ*.
+
+## Authentication vs. authorization
+
+The terms authentication and authorization are sometimes used interchangeably, because they often seem like a single experience to users. They're actually two separate processes:
+- Authentication proves the identity of a user, machine, or software component
+- Authorization grants or denies the user, machine, or software component access to certain resources
++
+Here's a quick overview of authentication and authorization:
+
+| Authentication | Authorization |
+| - | -- |
+| Can be thought of as a gatekeeper, allowing access only to those who provide valid credentials. | Can be thought of as a guard, ensuring that only those with the proper clearance can enter certain areas. |
+| Verifies whether a user, machine, or software is who or what they claim to be.| Determines if the user, machine, or software is allowed to access a particular resource. |
+| Challenges the user, machine, or software for verifiable credentials (for example, passwords, biometric identifiers, or certificates).| Determines what level of access a user, machine, or software has.|
+| Done before authorization. | Done after successful authentication. |
+| Information is transferred in an ID token. | Information is transferred in an access token. |
+| Often uses the OpenID Connect (OIDC) (which is built on the OAuth 2.0 protocol) or SAML protocols. | Often uses the OAuth 2.0 protocol. |
+
+For more detailed information, read [Authentication vs. authorization](/azure/active-directory/develop/authentication-vs-authorization).
+
+### Example
+
+Suppose you want to spend the night in a hotel. You can think of authentication and authorization as the security system for the hotel building. Users are the people who want to stay at the hotel; resources are the rooms and areas those people want to use. Hotel staff members are another type of user.
+
+If you're staying at the hotel, you first go to reception to start the "authentication process". You show an identification card and credit card, and the receptionist matches your ID against the online reservation. After the receptionist has verified who you are, they grant you permission to access the room you've been assigned. You're given a keycard and can now go to your room.
++
+The doors to the hotel rooms and other areas have keycard sensors. Swiping the keycard in front of a sensor is the "authorization process". The keycard only lets you open the doors to rooms you're permitted to access, such as your hotel room and the hotel exercise room. If you swipe your keycard to enter any other hotel guest room, your access is denied. Individual [permissions](/azure/active-directory/fundamentals/users-default-permissions?context=/azure/active-directory/roles/context/ugr-context), such as accessing the exercise room and a specific guest room, are collected into [roles](/azure/active-directory/roles/concept-understand-roles) which can be granted to individual users. When you're staying at the hotel, you're granted the Hotel Patron role. Hotel room service staff would be granted the Hotel Room Service role. This role permits access to all hotel guest rooms (but only between 11am and 4pm), the laundry room, and the supply closets on each floor.
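The permission-and-role idea in the analogy can be sketched in a few lines. This is a hypothetical model for illustration, not any IAM product's API:

```python
# Hypothetical role model mirroring the hotel analogy: permissions are
# collected into roles, and roles are granted to users.
ROLES = {
    "HotelPatron": {"room_101", "exercise_room"},
    "HotelRoomService": {"guest_rooms", "laundry_room", "supply_closets"},
}

def is_authorized(user_roles, resource):
    """Return True if any of the user's roles includes the resource."""
    return any(resource in ROLES[role] for role in user_roles)

print(is_authorized(["HotelPatron"], "exercise_room"))  # True
print(is_authorized(["HotelPatron"], "laundry_room"))   # False
```

Granting a role to a user grants every permission collected in that role, which is simpler to manage than assigning each permission individually.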
++
+## Identity provider
+
+An identity provider creates, maintains, and manages identity information while offering authentication, authorization, and auditing services.
++
+With modern authentication, all services, including all authentication services, are supplied by a central identity provider. Information that's used to authenticate the user with the server is stored and managed centrally by the identity provider.
+
+With a central identity provider, organizations can establish authentication and authorization policies, monitor user behavior, identify suspicious activities, and reduce malicious attacks.
+
+[Microsoft Azure Active Directory](/azure/active-directory/) is an example of a cloud-based identity provider. Other examples include Twitter, Google, Amazon, LinkedIn, and GitHub.
+
+## Next steps
+
+- Read [Introduction to identity and access management](introduction-identity-access-management.md) to learn more.
+- Learn about [Single sign-on (SSO)](/azure/active-directory/manage-apps/what-is-single-sign-on).
+- Learn about [Multi-factor authentication (MFA)](/azure/active-directory/authentication/concept-mfa-howitworks).
active-directory Identity Secure Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-secure-score.md
Controls can be scored in two ways. Some are scored in a binary fashion - you ge
Actions labeled as [Not Scored] are ones you can perform in your organization but won't be scored because they aren't hooked up in the tool (yet!). So, you can still improve your security, but you won't get credit for those actions right now.
+In addition, the following recommended actions don't yet give you credit when they're configured through Conditional Access policies, for the same reason as above:
+* Protect all users with a user risk policy
+* Protect all users with a sign-in risk policy
+
+For now, these actions give credit only when configured through Identity Protection policies.
+ ### How often is my score updated?

The score is calculated once per day (around 1:00 AM PST). If you make a change to a measured action, the score will automatically update the next day. It takes up to 48 hours for a change to be reflected in your score.
active-directory Introduction Identity Access Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/introduction-identity-access-management.md
+
+ Title: What is identity and access management (IAM)?
+description: Learn what identity and access management (IAM) is, why it's important, and how it works. Learn about authentication and authorization, single sign-on (SSO), and multi-factor authentication (MFA). Learn about SAML, Open ID Connect (OIDC), and OAuth 2.0 and other authentication and authorization standards, tokens, and more.
+ Last updated : 06/05/2023
+# What is identity and access management (IAM)?
+
+In this article, you learn some of the fundamental concepts of Identity and Access Management (IAM), why it's important, and how it works.
+
+Identity and access management ensures that the right people, machines, and software components get access to the right resources at the right time. First, the person, machine, or software component proves they're who or what they claim to be. Then, the person, machine, or software component is allowed or denied access to or use of certain resources.
+
+To learn about the basic terms and concepts, see [Identity fundamentals](identity-fundamentals.md).
+
+## What does IAM do?
+
+IAM systems typically provide the following core functionality:
+
+- **Identity management** - The process of creating, storing, and managing identity information. Identity providers (IdPs) are software solutions that are used to track and manage user identities, as well as the permissions and access levels associated with those identities.
+
+- **Identity federation** - You can allow users who already have passwords elsewhere (for example, in your enterprise network or with an internet or social identity provider) to get access to your system.
+
+- **Provisioning and deprovisioning of users** - The process of creating and managing user accounts, which includes specifying which users have access to which resources, and assigning permissions and access levels.
+
+- **Authentication of users** - Authenticate a user, machine, or software component by confirming that they're who or what they say they are. You can add multi-factor authentication (MFA) for individual users for extra security or single sign-on (SSO) to allow users to authenticate their identity with one portal instead of many different resources.
+
+- **Authorization of users** - Authorization ensures a user is granted the exact level and type of access to a tool that they're entitled to. Users can also be portioned into groups or roles so large cohorts of users can be granted the same privileges.
+
+- **Access control** - The process of determining who or what has access to which resources. This includes defining user roles and permissions, as well as setting up authentication and authorization mechanisms. Access controls regulate access to systems and data.
+
+- **Reports and monitoring** - Generate reports after actions taken on the platform (like sign-in time, systems accessed, and type of authentication) to ensure compliance and assess security risks. Gain insights into the security and usage patterns of your environment.
+
+## How IAM works
+
+This section provides an overview of the authentication and authorization process and the more common standards.
+
+### Authenticating, authorizing, and accessing resources
+
+Let's say you have an application that signs in a user and then accesses a protected resource.
++
+1. The user (resource owner) initiates an authentication request with the identity provider/authorization server from the client application.
+
+1. If the credentials are valid, the identity provider/authorization server first sends an ID token containing information about the user back to the client application.
+
+1. The identity provider/authorization server also obtains end-user consent and grants the client application authorization to access the protected resource. Authorization is provided in an access token, which is also sent back to the client application.
+
+1. The access token is attached to subsequent requests made to the protected resource server from the client application.
+
+1. The identity provider/authorization server validates the access token. If successful, the request for the protected resource is granted, and a response is sent back to the client application.
+
+For more information, read [Authentication and authorization](/azure/active-directory/develop/authentication-vs-authorization#authentication-and-authorization-using-the-microsoft-identity-platform).
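As a sketch of step 1 under assumed placeholder values (the tenant name, client ID, and redirect URI below are examples, not working identifiers), a client application typically constructs an authorization request URL like this:

```python
from urllib.parse import urlencode

# Placeholder values: the tenant, client_id, and redirect_uri below are
# examples for illustration only.
tenant = "contoso.onmicrosoft.com"
params = {
    "client_id": "00000000-0000-0000-0000-000000000000",  # app registration ID
    "response_type": "code",                  # authorization code grant
    "redirect_uri": "http://localhost:5000/callback",
    "scope": "openid profile offline_access",
    "state": "abc123",                        # CSRF protection value
}
authorize_url = (
    "https://login.microsoftonline.com/" + tenant + "/oauth2/v2.0/authorize?"
    + urlencode(params)
)
print(authorize_url)
```

The user signs in at that URL, and the authorization server redirects back to `redirect_uri` with a one-time code that the client then exchanges for tokens.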
+
+### Authentication and authorization standards
+
+These are the most well-known and commonly used authentication and authorization standards:
+
+#### OAuth 2.0
+
+OAuth is an open-standards identity management protocol that provides secure access for websites, mobile apps, and Internet of Things and other devices. It uses tokens that are encrypted in transit and eliminates the need to share credentials. OAuth 2.0, the latest release of OAuth, is a popular framework used by major social media platforms and consumer services, from Facebook and LinkedIn to Google, PayPal, and Netflix. To learn more, read about [OAuth 2.0 protocol](/azure/active-directory/develop/active-directory-v2-protocols).
+
+#### OpenID Connect (OIDC)
+
+With the release of OpenID Connect (which uses public-key encryption), OpenID became a widely adopted authentication layer for OAuth. Like SAML, OpenID Connect (OIDC) is widely used for single sign-on (SSO), but OIDC uses REST/JSON instead of XML. OIDC was designed to work with both native and mobile apps by using REST/JSON protocols. The primary use case for SAML, however, is web-based apps. To learn more, read about [OpenID Connect protocol](/azure/active-directory/develop/active-directory-v2-protocols).
+
+#### JSON web tokens (JWTs)
+
+JWTs are an open standard that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. JWTs can be verified and trusted because they're digitally signed. They can be used to pass the identity of authenticated users between the identity provider and the service requesting the authentication. They also can be authenticated and encrypted. To learn more, read [JSON Web Tokens](/azure/active-directory/develop/active-directory-v2-protocols#tokens).
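A JWT is three base64url-encoded segments joined by dots: header, payload, and signature. This sketch builds a toy token locally and decodes it with only the standard library (it skips signature verification, which a real application must never do):

```python
import base64
import json

def decode_jwt_part(part):
    """Base64url-decode one JWT segment, restoring the stripped '=' padding."""
    return json.loads(base64.urlsafe_b64decode(part + "=" * (-len(part) % 4)))

def encode_part(obj):
    """Base64url-encode a dict as a JWT segment, without '=' padding."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=").decode()

# A toy token assembled locally for illustration; the signature segment is fake.
token = ".".join([
    encode_part({"alg": "RS256", "typ": "JWT"}),
    encode_part({"sub": "user1", "aud": "api://contoso"}),
    "fake-signature",
])

header, payload, _signature = token.split(".")
print(decode_jwt_part(header)["alg"])    # RS256
print(decode_jwt_part(payload)["sub"])   # user1
```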
+
+#### Security Assertion Markup Language (SAML)
+
+SAML is an open standard utilized for exchanging authentication and authorization information between, in this case, an IAM solution and another application. This method uses XML to transmit data and is typically the method used by identity and access management platforms to grant users the ability to sign in to applications that have been integrated with IAM solutions. To learn more, read [SAML protocol](/azure/active-directory/develop/active-directory-saml-protocol-reference).
+
+#### System for Cross-Domain Identity Management (SCIM)
+
+Created to simplify the process of managing user identities, SCIM provisioning allows organizations to efficiently operate in the cloud and easily add or remove users, benefitting budgets, reducing risk, and streamlining workflows. SCIM also facilitates communication between cloud-based applications. To learn more, read [Develop and plan provisioning for a SCIM endpoint](/azure/active-directory/app-provisioning/use-scim-to-provision-users-and-groups?toc=/azure/active-directory/develop/toc.json&bc=/azure/active-directory/develop/breadcrumb/toc.json).
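As an illustration of what a SCIM provisioning request carries (all attribute values here are made up), a minimal SCIM 2.0 core-schema user looks like this:

```python
import json

# Minimal SCIM 2.0 core-schema user; the attribute values are made up.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "active": True,
    "emails": [{"value": "alice@contoso.com", "type": "work", "primary": True}],
}

# This JSON body is what a provisioning service would POST to a SCIM
# endpoint's /Users resource to create the user.
body = json.dumps(scim_user, indent=2)
print(body)
```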
+
+#### Web Services Federation (WS-Fed)
+
+WS-Fed was developed by Microsoft and is used extensively in its applications. This standard defines the way security tokens can be transported between different entities to exchange identity and authorization information. To learn more, read Web Services Federation Protocol.
+
+## Next steps
+
+To learn more, see:
+
+- [Single sign-on (SSO)](/azure/active-directory/manage-apps/what-is-single-sign-on)
+- [Multi-factor authentication (MFA)](/azure/active-directory/authentication/concept-mfa-howitworks)
+- [Authentication vs authorization](/azure/active-directory/develop/authentication-vs-authorization)
+- [OAuth 2.0 and OpenID Connect](/azure/active-directory/develop/active-directory-v2-protocols)
+- [App types and authentication flows](/azure/active-directory/develop/authentication-flows-app-scenarios)
+- [Security tokens](/azure/active-directory/develop/security-tokens)
active-directory What Is Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date |
|---|---|:---|
-|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
-|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
|[Microsoft Authenticator Lite for Outlook mobile](../../active-directory/authentication/how-to-mfa-authenticator-lite.md)|Feature change|Jun 9, 2023|
|[System-preferred authentication methods](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|Sometime after GA|
|[Azure AD Authentication Library (ADAL)](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Retirement|Jun 30, 2023|
Use the following table to learn about changes including deprecations, retiremen
|Functionality, feature, or service|Change|Change date |
|---|---|:---|
+|[My Groups experience](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
+|[My Apps browser extension](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/microsoft-entra-change-announcements-march-2023-train/ba-p/2967448)|Feature change|May 2023|
|Microsoft Authenticator app [Number matching](../authentication/how-to-mfa-number-match.md)|Feature change|May 8, 2023|
|[Azure AD Domain Services virtual network deployments](../../active-directory-domain-services/overview.md)|Retirement|Mar 1, 2023|
|[License management API, PowerShell](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/migrate-your-apps-to-access-the-license-managements-apis-from/ba-p/2464366)|Retirement|*Mar 31, 2023|
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/what-is-cloud-sync.md
Previously updated : 01/17/2023 Last updated : 06/09/2023
The following short video provides an excellent overview of Azure AD Connect clo
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWJ8l5]
+## Choose the right sync client
+To determine if cloud sync is right for your organization, use the [Wizard to evaluate sync options](https://aka.ms/EvaluateSyncOptions). The wizard helps you evaluate your synchronization needs and choose the right sync client.
+ ## Comparison between Azure AD Connect and cloud sync
active-directory F5 Big Ip Forms Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-forms-advanced.md
An access profile binds the APM elements that manage access to BIG-IP virtual se
14. On **SAML authentication SP**, change the **Name** to **Azure AD Auth**.
15. In the **AAA Server** dropdown, select the SAML service provider object you created.
- ![Screenshot showing the Azure AD Authentication server settings.](./media/f5-big-ip-forms-advanced/azure-ad-auth-server.png)
+ ![Screenshot showing the Azure AD Authentication server settings.](./media/f5-big-ip-forms-advanced/auth-server.png)
16. On the **Successful** branch, select the **+** sign.
17. In the pop-up, select **Authentication**.
active-directory Migrate Adfs Apps Phases Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-phases-overview.md
To ensure that the users can easily and securely access applications, your goal
[Azure AD](../fundamentals/active-directory-whatis.md) offers a universal identity platform that provides your employees, partners, and customers a single identity to access the applications they want and collaborate from any platform and device. Azure AD has a [full suite of identity management capabilities](../fundamentals/active-directory-whatis.md#which-features-work-in-azure-ad). Standardizing your app authentication and authorization to Azure AD gets you the benefits that these capabilities provide.
active-directory Markit Procurement Service Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/markit-procurement-service-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
1. Determine what data to [map between Azure AD and Markit Procurement Service](../app-provisioning/customize-application-attributes.md).

## Step 2. Configure Markit Procurement Service to support provisioning with Azure AD
-Contact Markit Procurement Service support to configure Markit Procurement Service to support provisioning with Azure AD.
+You can begin connecting your Markit environment to Azure AD provisioning by reaching out to the [Markit support team](mailto:support@markit.eu) or directly to your Markit account manager. You're provided a document that contains your **Tenant URL** and a **Secret Token**. Markit account managers can assist you with setting up this integration and are available to answer any questions about its configuration or use.
## Step 3. Add Markit Procurement Service from the Azure AD application gallery
The Azure AD provisioning service allows you to scope who will be provisioned ba
## Step 5. Configure automatic user provisioning to Markit Procurement Service
-This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD.
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Markit Procurement Service based on user assignments in Azure AD.
### To configure automatic user provisioning for Markit Procurement Service in Azure AD:
This section guides you through the steps to configure the Azure AD provisioning
1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Markit Procurement Service**.
+1. Clear the **Create** checkbox. Markit recommends clearing this option so that users are instead created on demand during their first sign-in.
+
+ ![Screenshot of Uncheck create option.](media/markit-procurement-service-provisioning-tutorial/create-uncheck.png)
1. Review the user attributes that are synchronized from Azure AD to Markit Procurement Service in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Markit Procurement Service for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Markit Procurement Service API supports filtering users based on that attribute. Select the **Save** button to commit any changes.

   |Attribute|Type|Supported for filtering|Required by Markit Procurement Service|
active-directory Zoom Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zoom-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment

1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and Zoom](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Zoom](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure Zoom to support provisioning with Azure AD
The scenario outlined in this tutorial assumes that you already have the followi
![Screenshot of Zoom Integrations.](media/zoom-provisioning-tutorial/app-navigations.png)
-2. Navigate to **Manage** in the top-right corner of the page.
+1. Navigate to **Manage** in the top-right corner of the page.
![Screenshot of the Zoom App Marketplace with the Manage option called out.](media/zoom-provisioning-tutorial/zoom-manage.png)
-3. Navigate to your created Azure AD app.
+1. Navigate to your created Azure AD app.
   ![Screenshot of the Created Apps section with the Azure A D app called out.](media/zoom-provisioning-tutorial/zoom03.png)

   > [!NOTE]
- > If you don't have an Azure AD app already created, then have a [JWT type Azure AD app](https://marketplace.zoom.us/docs/guides/build/jwt-app) created.
+ > If you don't have an Azure AD app already created, then have a [JWT type Azure AD app](https://developers.zoom.us/docs/platform/build/jwt-app/) created.
-4. Select **App Credentials** in the left navigation pane.
+1. Select **App Credentials** in the left navigation pane.
![Screenshot of the left navigation pane with the App Credentials option highlighted.](media/zoom-provisioning-tutorial/zoom04.png)
-5. Copy and save the **JWT Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Zoom application in the Azure portal. If you need a new non-expiring token, you will need to reconfigure the expiration time which will auto generate a new token.
+1. Copy and save the **JWT Token**. This value will be entered in the **Secret Token** field in the Provisioning tab of your Zoom application in the Azure portal. If you need a new non-expiring token, you will need to reconfigure the expiration time which will auto generate a new token.
![Screenshot of the App Credentials page.](media/zoom-provisioning-tutorial/zoom05.png)
This section guides you through the steps to configure the Azure AD provisioning
1. Sign in to the [Azure portal](https://portal.azure.com/?feature.userProvisioningV2Authentication=true). Ensure you are using that link (https://portal.azure.com/?feature.userProvisioningV2Authentication=true), then select **Enterprise Applications**, and then select **All applications**.
- ![Enterprise applications blade](common/enterprise-applications.png)
+ ![Screenshot of the Enterprise applications blade.](common/enterprise-applications.png)
-2. In the applications list, select **Zoom**.
+1. In the applications list, select **Zoom**.
- ![The Zoom link in the Applications list](common/all-applications.png)
+ ![Screenshot of the Zoom link in the Applications list.](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, enter `https://api.zoom.us/scim` in **Tenant URL**. Input the **JWT Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, select your desired **Authentication Method**.
- ![Zoom provisioning](./media/zoom-provisioning-tutorial/provisioning.png)
+	* If the Authentication Method is **OAuth2 Authorization Code Grant**, enter `https://api.zoom.us/scim` in **Tenant URL**, click **Authorize**, and sign in with your Zoom account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+	![Screenshot of the Zoom provisioning OAuth.](./media/zoom-provisioning-tutorial/provisioning-oauth.png)
- ![Notification Email](common/provisioning-notification-email.png)
+ * If the Authentication Method is **Bearer Authentication**, enter `https://api.zoom.us/scim` in **Tenant URL**. Input the **JWT Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Zoom. If the connection fails, ensure your Zoom account has Admin permissions and try again.
-7. Select **Save**.
+	![Screenshot of the Zoom provisioning bearer token.](./media/zoom-provisioning-tutorial/provisioning-bearer-token.png)
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zoom**.
+ > [!NOTE]
+ > You will have two options for your Authentication Method: **Bearer Authentication** and **OAuth2 Authorization Code Grant**. Make sure that you select OAuth2 Authorization Code Grant.
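With either authentication method, the provisioning service ultimately calls Zoom's SCIM endpoint with the token in an `Authorization: Bearer` header. As a rough sketch of what such a request looks like (the request is only constructed here, never sent, the token is a placeholder, and the exact query string is illustrative):

```python
import urllib.request

TENANT_URL = "https://api.zoom.us/scim"
secret_token = "PLACEHOLDER_JWT_OR_OAUTH_TOKEN"  # value from the Secret Token field

# Build (but do not send) a SCIM request like the one the provisioning service issues
req = urllib.request.Request(
    TENANT_URL + "/Users?startIndex=1&count=1",
    headers={
        "Authorization": "Bearer " + secret_token,
        "Accept": "application/scim+json",
    },
)
```

**Test Connection** in the portal performs essentially this kind of authenticated call; a failure usually means the token is invalid, expired, or lacks Admin permissions.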
-9. Review the user attributes that are synchronized from Azure AD to Zoom in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zoom for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Zoom API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of the Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Zoom**.
+
+1. Review the user attributes that are synchronized from Azure AD to Zoom in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Zoom for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Zoom API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
|Attribute|Type|
|---|---|
|emails[type eq "work"]|String|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|
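For update operations, the provisioning service locates the existing Zoom user by issuing a SCIM filter query on the matching attribute. A hypothetical sketch of how such a filter string is built and URL-encoded (the helper name, attribute, and value are illustrative, not part of the product):

```python
from urllib.parse import urlencode

def scim_filter_url(tenant_url: str, attribute: str, value: str) -> str:
    # Standard SCIM filter syntax: attribute eq "value"
    flt = attribute + ' eq "' + value + '"'
    return tenant_url + "/Users?" + urlencode({"filter": flt})

url = scim_filter_url("https://api.zoom.us/scim", "userName", "alice@contoso.com")
```

This is why a custom matching attribute only works if the Zoom SCIM API supports filtering on it: the filter query above must be answerable server-side.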
-10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-11. To enable the Azure AD provisioning service for Zoom, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Zoom, change the **Provisioning Status** to **On** in the **Settings** section.
- ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+ ![Screenshot of the Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
-12. Define the users and/or groups that you would like to provision to Zoom by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Zoom by choosing the desired values in **Scope** in the **Settings** section.
- ![Provisioning Scope](common/provisioning-scope.png)
+ ![Screenshot of the Provisioning Scope.](common/provisioning-scope.png)
-13. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
- ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
Once you've configured provisioning, use the following resources to monitor your
## Change log * 05/14/2020 - Support for UPDATE operations added for emails[type eq "work"] attribute.
-* 10/20/2020 - Added support for two new roles "Licensed" and "on-premises" to replace existing roles "Pro" and "Corp". Support for roles "Pro" and "Corp" will be removed in the future.
+* 10/20/2020 - Added support for two new roles **Licensed** and **on-premises** to replace existing roles **Pro** and **Corp**. Support for roles **Pro** and **Corp** will be removed in the future.
+* 05/30/2023 - Added support for a new authentication method: **OAuth 2.0**.
## Additional resources
aks Cilium Enterprise Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cilium-enterprise-marketplace.md
- Title: Isovalent Cilium Enterprise on Azure Marketplace (Preview)-
-description: Learn about Isovalent Cillium Enterprise on Azure Marketplace and how to deploy it on Azure.
----- Previously updated : 04/18/2023---
-# Isovalent Cilium Enterprise on Azure Marketplace (Preview)
-
-Isovalent Cilium Enterprise on Azure Marketplace is a powerful tool for securing and managing Kubernetes' workloads on Azure. Cilium Enterprise's range of features and easy deployment make it an ideal solution for organizations of all sizes looking to secure their cloud-native applications.
-
-Isovalent Cilium Enterprise is a network security platform for modern cloud-native workloads that provides visibility, security, and compliance across Kubernetes clusters. It uses eBPF technology to deliver network and application-layer security, while also providing observability and tracing for Kubernetes workloads. Azure Marketplace is an online store for buying and selling cloud computing solutions that allows you to deploy Isovalent Cilium Enterprise to Azure with ease.
--
-> [!IMPORTANT]
-> Isovalent Cilium Enterprise is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Designed for platform teams and using the power of eBPF, Isovalent Cilium Enterprise:
-
-* Combines network and runtime behavior with Kubernetes identity to provide a single source of data for cloud native forensics, audit, compliance monitoring, and threat detection. Isovalent Cilium Enterprise is integrated into your SIEM/Log aggregation platform of choice.
-
-* Scales effortlessly for any deployment size, with capabilities such as traffic management, load balancing, and infrastructure monitoring.
-
-* Fully back-ported and tested. Available with 24x7 support.
-
-* Enables self-service for monitoring, troubleshooting, and security workflows in Kubernetes. Teams can access current and historical views of flow data, metrics, and visualizations for their specific namespaces.
-
-> [!NOTE]
-> If you are upgrading an existing AKS cluster, then it must be created with Azure CNI powered by Cilium. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md).
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
-- An existing Azure Kubernetes Service (AKS) cluster running Azure CNI powered by Cilium. If you don't have an existing AKS cluster, you can create one from the Azure portal. For more information, see [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md).
-
-## Deploy Isovalent Cilium Enterprise on Azure Marketplace
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-
-1. In the search box at the top of the portal, enter **Cilium Enterprise** and select **Isovalent Cilium Enterprise** from the results.
-
-1. In the **Basics** tab of **Create Isovalent Cilium Enterprise**, enter or select the following information:
-
-| Setting | Value |
-| | |
-| **Project details** | |
-| Subscription | Select your subscription |
-| Resource group | Select **Create new** </br> Enter **test-rg** in **Name**. </br> Select **OK**. </br> Or, select an existing resource group that contains your AKS cluster. |
-| **Instance details** | |
-| Supported Regions | Select **West US 2**. |
-| Create new dev cluster? | Leave the default of **No**. |
-
-1. Select **Next: Cluster Details**.
-
-1. Select your AKS cluster from the **AKS Cluster Name** dropdown.
-
-1. Select **Review + create**.
-
-1. Select **Create**.
-
-Azure deploys Isovalent Cilium Enterprise to your selected subscription and resource group. This process may take some time to complete.
-
-> [!IMPORTANT]
-> Note that Marketplace applications are deployed as AKS extensions onto AKS clusters. If you are upgrading the existing AKS cluster, AKS replaces the Cilium OSS images with Isovalent Cilium Enterprise images seamlessly without any downtime.
-
-When the deployment is complete, you can access the Isovalent Cilium Enterprise by navigating to the resource group that contains the **Cilium Enterprise** resource in the Azure portal.
-
-Cilium can be reconfigured after deployment by updating the Helm values with Azure CLI:
-
-```azurecli
-az k8s-extension update -c <cluster> -t managedClusters -g <resource-group> -n cilium --configuration-settings debug.enabled=true
-```
-
-You can uninstall an Isovalent Cilium Enterprise offer using the AKS extension delete command. A per-cluster uninstall flow isn't yet available in Marketplace until ISVs stop selling the whole offer. For more information about AKS extension delete, see [az k8s-extension delete](/cli/azure/k8s-extension#az-k8s-extension-delete).
-
-## Next steps
-- [Configure Azure CNI Powered by Cilium in Azure Kubernetes Service (AKS)](azure-cni-powered-by-cilium.md)
-
-- [What is Azure Kubernetes Service?](intro-kubernetes.md)
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Gen2 VMs are supported on Linux. Gen2 VMs on Windows are supported for WS2022 on
* If your Kubernetes version is greater than 1.25, you only need to set the `vm_size` to get the generation 2 node pool. You can still use WS2019 generation 1 if you define that in the `os_sku`.
* If your Kubernetes version is less than 1.25, you can set the `os_sku` to WS2022 and set the `vm_size` to generation 2 to get the generation 2 node pool.
-Follow the Azure CLI commands to use generation 2 VMs on Windows:
+#### Install the aks-preview Azure CLI extension
-```azurecli
-# Sample command
+* Install or update the aks-preview Azure CLI extension using the [`az extension add`][az-extension-add] or the [`az extension update`][az-extension-update] command.
-az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name gen2np
-kubernetes-version 1.23.5 --node-vm-size Standard_D32_v4 --os-type Windows --os_sku Windows2022
+ ```azurecli
+ # Install the aks-preview extension
+ az extension add --name aks-preview
-# Default command
+ # Update to the latest version of the aks-preview extension
+ az extension update --name aks-preview
+ ```
-az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name gen2np --os-type Windows --kubernetes-version 1.23.5
+#### Register the AKSWindows2022Gen2Preview feature flag
-```
+1. Register the AKSWindows2022Gen2Preview feature flag using the [`az feature register`][az-feature-register] command.
-To determine if you're on generation 1 or generation 2, run the following command from the nodepool level and check that the `nodeImageVersion` contains `gen2`:
+ ```azurecli-interactive
+ az feature register --namespace "Microsoft.ContainerService" --name "AKSWindows2022Gen2Preview"
+ ```
-```azurecli
-az aks nodepool show
-```
+ It takes a few minutes for the status to show *Registered*.
-To determine available generation 2 VM sizes, run the following command:
+2. Verify the registration using the [`az feature show`][az-feature-show] command.
-```azurecli
-az vm list -skus -l $region
-```
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "AKSWindows2022Gen2Preview"
+ ```
+
+3. When the status reflects *Registered*, refresh the registration of the `Microsoft.ContainerService` resource provider using the [`az provider register`][az-provider-register] command.
+
+ ```azurecli-interactive
+ az provider register --namespace "Microsoft.ContainerService"
+ ```
+
+#### Add a Windows node pool with a generation 2 VM
+
+* Add a node pool with generation 2 VMs on Windows using the [`az aks nodepool add`][az-aks-nodepool-add] command.
+
+ ```azurecli
+ # Sample command
+    az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name gen2np \
+    --kubernetes-version 1.23.5 --node-vm-size Standard_D32_v4 --os-type Windows --os-sku Windows2022
+
+ # Default command
+ az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster --name gen2np --os-type Windows --kubernetes-version 1.23.5
+ ```
+
+* Determine whether you're on generation 1 or generation 2 using the [`az aks nodepool show`][az-aks-nodepool-show] command, and check that the `nodeImageVersion` contains `gen2`.
+
+ ```azurecli
+    az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name gen2np --query nodeImageVersion
+ ```
+
+* Check available generation 2 VM sizes using the [`az vm list-skus`][az-vm-list] command.
+
+ ```azurecli
+    az vm list-skus -l $region
+ ```
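The JSON returned by `az vm list-skus` includes a `capabilities` array per SKU, and generation support is reported in the `HyperVGenerations` capability. As a rough illustration of filtering that output for generation 2 sizes (the two sample records below are made up, not real SKU data):

```python
# Records shaped like `az vm list-skus -l <region> -o json` output (values illustrative)
skus = [
    {"name": "Standard_D32_v4",
     "capabilities": [{"name": "HyperVGenerations", "value": "V1,V2"}]},
    {"name": "Standard_DS1",
     "capabilities": [{"name": "HyperVGenerations", "value": "V1"}]},
]

def supports_gen2(sku: dict) -> bool:
    # A SKU supports generation 2 when "V2" appears in its HyperVGenerations capability
    for cap in sku.get("capabilities", []):
        if cap["name"] == "HyperVGenerations":
            return "V2" in cap["value"].split(",")
    return False

gen2_sizes = [s["name"] for s in skus if supports_gen2(s)]
```

The same filtering can be done directly with a JMESPath `--query` on the CLI; the Python version is shown only to make the shape of the data explicit.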
For more information, see [Support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
az aks update -n aksTest -g aksTest --nrg-lockdown-restriction-level Unrestric
[az-aks-update]: /cli/azure/aks#az-aks-update [baseline-reference-architecture-aks]: /azure/architecture/reference-architectures/containers/aks/baseline-aks [whatis-nrg]: ./concepts-clusters-workloads.md#node-resource-group
+[az-feature-show]: /cli/azure/feature#az_feature_show
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-nodepool-show]: /cli/azure/aks/nodepool#az_aks_nodepool_show
+[az-vm-list]: /cli/azure/vm#az_vm_list_skus
aks Limit Egress Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/limit-egress-traffic.md
You need to configure Azure Firewall inbound and outbound rules. The main purpos
```azurecli FWPUBLIC_IP=$(az network public-ip show -g $RG -n $FWPUBLICIP_NAME --query "ipAddress" -o tsv)
- FWPRIVATE_IP=$(az network firewall show -g $RG -n $FWNAME --query "ipConfigurations[0].privateIpAddress" -o tsv)
+ FWPRIVATE_IP=$(az network firewall show -g $RG -n $FWNAME --query "ipConfigurations[0].privateIPAddress" -o tsv)
``` > [!NOTE]
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
Azure Active Directory B2C is a cloud identity management solution for consumer-
In this tutorial, you'll learn the configuration required in your API Management service to integrate with Azure Active Directory B2C. As noted later in this article, if you are using the deprecated legacy developer portal, some steps will differ.
-For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+For an overview of options to secure the developer portal, see [Secure access to the API Management developer portal](secure-developer-portal-access.md).
> [!IMPORTANT] > * This article has been updated with steps to configure an Azure AD B2C app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)).
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
In this article, you'll learn how to:
> * Enable access to the developer portal for users from Azure Active Directory (Azure AD). > * Manage groups of Azure AD users by adding external groups that contain the users.
-For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+For an overview of options to secure the developer portal, see [Secure access to the API Management developer portal](secure-developer-portal-access.md).
> [!IMPORTANT] > * This article has been updated with steps to configure an Azure AD app using the Microsoft Authentication Library ([MSAL](../active-directory/develop/msal-overview.md)).
api-management Api Management Howto Mutual Certificates For Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-mutual-certificates-for-clients.md
API Management provides the capability to secure access to APIs (that is, client
For information about securing access to the backend service of an API using client certificates (that is, API Management to backend), see [How to secure back-end services using client certificate authentication](./api-management-howto-mutual-certificates.md).
-For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
## Certificate options
api-management Api Management Howto Protect Backend With Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-protect-backend-with-aad.md
In this article, you'll learn high level steps to configure your [Azure API Management](api-management-key-concepts.md) instance to protect an API, by using the [OAuth 2.0 protocol with Azure Active Directory (Azure AD)](../active-directory/develop/active-directory-v2-protocols.md).
-For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
## Prerequisites
api-management Authentication Authorization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-authorization-overview.md
Title: Authentication and authorization - Overview
+ Title: API authentication and authorization - Overview
-description: Learn about authentication and authorization features in Azure API Management to secure access to the management, API, and developer features
-
+description: Learn about authentication and authorization features in Azure API Management to secure access to APIs, including options for OAuth 2.0 authorization.
+ Previously updated : 09/08/2022- Last updated : 06/06/2023+
-# Authentication and authorization in Azure API Management
+# Authentication and authorization to APIs in Azure API Management
-This article is an introduction to API Management capabilities that help you secure users' access to API Management features and APIs.
+This article is an introduction to a rich, flexible set of features in API Management that help you secure users' access to managed APIs.
-API Management provides a rich, flexible set of features to support API authentication and authorization in addition to the standard control-plane authentication and role-based access control (RBAC) required when interacting with Azure services.
+API authentication and authorization in API Management involve securing the end-to-end communication of client apps to the API Management gateway and through to backend APIs. In many customer environments, OAuth 2.0 is the preferred API authorization protocol. API Management supports OAuth 2.0 authorization between the client and the API Management gateway, between the gateway and the backend API, or both independently.
-API Management also provides a fully customizable, standalone, managed [developer portal](api-management-howto-developer-portal.md), which can be used externally (or internally) to allow developer users to discover and interact with the APIs published through API Management. The developer portal has several options to facilitate secure user sign-up and sign-in.
-The following diagram is a conceptual view of Azure API Management, showing the management plane (Azure control plane), API gateway (data plane), and developer portal (user plane), each with at least one option to secure interaction. For an overview of API Management components, see [What is Azure API Management?](api-management-key-concepts.md)
+API Management supports other client-side and service-side authentication and authorization mechanisms that supplement OAuth 2.0 or that are useful when OAuth 2.0 authorization for APIs isn't possible. How you choose from among these options depends on the maturity of your organization's API environment, your security and compliance requirements, and your organization's approach to [mitigating common API threats](mitigate-owasp-api-threats.md).
+> [!IMPORTANT]
+> Securing users' access to APIs is one of many considerations for securing your API Management environment. For more information, see [Azure security baseline for API Management](/security/benchmark/azure/baselines/api-management-security-baseline?toc=%2Fazure%2Fapi-management%2F&bc=%2Fazure%2Fapi-management%2Fbreadcrumb%2Ftoc.json).
-## Management plane
+> [!NOTE]
+> Other API Management components have separate mechanisms to secure and restrict user access:
+> * For managing the API Management instance through the Azure control plane, API Management relies on Azure AD and Azure [role-based access control (RBAC)](api-management-role-based-access-control.md).
+> * The API Management developer portal supports [several options](secure-developer-portal-access.md) to facilitate secure user sign-up and sign-in.
+
+## Authentication versus authorization
-Administrators, operators, developers, and DevOps service principals are examples of the different personas required to manage an Azure API Management instance in a customer environment.
+Here's a brief explanation of authentication and authorization in the context of access to APIs:
-Azure API Management relies on Azure Active Directory (Azure AD), which includes optional features such as multifactor authentication (MFA), and Azure RBAC to enable fine-grained access to the API Management service and its entities including APIs and policies. For more information, see [How to use role-based access control in Azure API Management](api-management-role-based-access-control.md).
+* **Authentication** - The process of verifying the identity of a user or app that accesses the API. Authentication may be done through credentials such as username and password, a certificate, or through single sign-on (SSO) or other methods.
-The management plane can be accessed via an Azure AD login (or token) through the Azure portal, infrastructure-as-code templates (such as Azure Resource Manager or Bicep), the REST API, client SDKs, the Azure CLI, or Azure PowerShell.
+* **Authorization** - The process of determining whether a user or app has permission to access a particular API, often through a token-based protocol such as OAuth 2.0.
-## Gateway (data plane)
+> [!NOTE]
+> To supplement authentication and authorization, access to APIs should also be secured using TLS to protect the credentials or tokens that are used for authentication or authorization.
-API authentication and authorization in API Management involve the end-to-end communication of client apps *through* the API Management gateway to backend APIs.
+## OAuth 2.0 concepts
-In many customer environments, [OAuth 2.0](https://oauth.net/2/) is the preferred API authorization protocol. API Management supports OAuth 2.0 across the data plane.
+[OAuth 2.0](https://oauth.net/2) is a standard authorization framework that is widely used to secure access to resources such as web APIs. OAuth 2.0 restricts the actions a client app can perform on resources on behalf of the user, without ever sharing the user's credentials. While OAuth 2.0 isn't an authentication protocol, it's often used with OpenID Connect (OIDC), which extends OAuth 2.0 by providing user authentication and SSO functionality.
-### OAuth concepts
+### OAuth flow
-What happens when a client app calls an API with a request that is secured using TLS and OAuth? The following is an abbreviated example flow:
+What happens when a client app calls an API with a request that is secured using TLS and OAuth 2.0? The following is an abbreviated example flow:
* The client (the calling app, or *bearer*) authenticates using credentials to an *identity provider*. * The client obtains a time-limited *access token* (a JSON web token, or JWT) from the identity provider's *authorization server*.
What happens when a client app calls an API with a request that is secured using
The identity provider (for example, Azure AD) is the *issuer* of the token, and the token includes an *audience claim* that authorizes access to a *resource server* (for example, to a backend API, or to the API Management gateway itself). * The client calls the API and presents the access token - for example, in an Authorization header. * The *resource server* validates the access token. Validation is a complex process that includes a check that the *issuer* and *audience* claims contain expected values.
-* Based on token validation criteria, access to resources of the [backend] API is then granted.
+* Based on token validation criteria, access to resources of the [backend](backends.md) API is then granted.
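The issuer and audience checks in the validation step above can be sketched as follows. This is an illustration only: the expected values are placeholders, and signature verification is intentionally skipped here even though a real resource server must perform it against the issuer's signing keys.

```python
import base64
import json
import time

# Placeholder expected values; real values come from your identity provider and API registration
EXPECTED_ISSUER = "https://login.example.com/tenant/v2.0"
EXPECTED_AUDIENCE = "api://backend-api"

def decode_claims(jwt: str) -> dict:
    # Claims live in the second (payload) segment, base64url-encoded without padding
    seg = jwt.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(seg))

def is_acceptable(claims: dict) -> bool:
    # Signature verification is deliberately omitted in this sketch
    return (claims.get("iss") == EXPECTED_ISSUER
            and claims.get("aud") == EXPECTED_AUDIENCE
            and claims.get("exp", 0) > time.time())

# Build a fake, unsigned token purely to exercise the claim checks
payload = {"iss": EXPECTED_ISSUER, "aud": EXPECTED_AUDIENCE, "exp": int(time.time()) + 300}
fake_token = "e30." + base64.urlsafe_b64encode(json.dumps(payload).encode()).rstrip(b"=").decode() + "."
```

In API Management itself, this role is played by the `validate-jwt` or `validate-azure-ad-token` policy rather than custom code.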
-Depending on the type of client app and scenarios, different *authentication flows* are needed to request and manage tokens. For example, the authorization code flow and grant type are commonly used in apps that call web APIs. Learn more about [OAuth flows and application scenarios in Azure AD](../active-directory/develop/authentication-flows-app-scenarios.md).
+Depending on the type of client app and scenarios, different *authorization flows* are needed to request and manage tokens. For example, the authorization code flow and grant type are commonly used in apps that call web APIs. Learn more about [OAuth flows and application scenarios in Azure AD](../active-directory/develop/authentication-flows-app-scenarios.md).
+## OAuth 2.0 authorization scenarios in API Management
-### OAuth 2.0 authorization scenarios
+### Scenario 1 - Client app authorizes directly to backend
-#### Audience is the backend
+A common authorization scenario is when the calling application requests access to the backend API directly and presents an OAuth 2.0 token in an authorization header to the gateway. Azure API Management then acts as a "transparent" proxy between the caller and backend API, and passes the token through unchanged to the backend. The scope of the access token is between the calling application and backend API.
-The most common scenario is when the Azure API Management instance is a "transparent" proxy between the caller and backend API, and the calling application requests access to the API directly. The scope of the access token is between the calling application and backend API.
+The following image shows an example where Azure AD is the authorization provider. The client app might be a single-page application (SPA).
-In this scenario, the access token sent along with the HTTP request is intended for the backend API, not API Management. However, API Management still allows for a defense in depth approach. For example, configure policies to [validate the token](validate-jwt-policy.md), rejecting requests that arrive without a token, or a token that's not valid for the intended backend API. You can also configure API Management to check other claims of interest extracted from the token.
+Although the access token sent along with the HTTP request is intended for the backend API, API Management still allows for a defense in depth approach. For example, configure policies to [validate the JWT](validate-jwt-policy.md), rejecting requests that arrive without a token, or a token that's not valid for the intended backend API. You can also configure API Management to check other claims of interest extracted from the token.
-For an example, see [Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md).
+Example:
-#### Audience is API Management
+* [Protect an API in Azure API Management using OAuth 2.0 authorization with Azure Active Directory](api-management-howto-protect-backend-with-aad.md)
-In this scenario, the API Management service acts on behalf of the API, and the calling application requests access to the API Management instance. The scope of the access token is between the calling application and API Management.
+> [!TIP]
+> In the special case when API access is protected using Azure AD, you can configure the [validate-azure-ad-token](validate-azure-ad-token-policy.md) policy for token validation.
+### Scenario 2 - Client app authorizes to API Management
-There are different reasons for wanting to do this. For example:
+In this scenario, the API Management service acts on behalf of the API, and the calling application requests access to the API Management instance. The scope of the access token is between the calling application and the API Management gateway. In API Management, configure a policy ([validate-jwt](validate-jwt-policy.md) or [validate-azure-ad-token](validate-azure-ad-token-policy.md)) to validate the token before the gateway passes the request to the backend. A separate mechanism typically secures the connection between the gateway and the backend API.
-* The backend is a legacy API that can't be updated to support OAuth.
+In the following example, Azure AD is again the authorization provider, and mutual TLS (mTLS) authentication secures the connection between the gateway and the backend.
- API Management should first be configured to validate the token (checking the issuer and audience claims at a minimum). After validation, use one of several options available to secure onward connections from API Management. See [other options](#other-options), later in this article.
-* The context required by the backend isn't possible to establish from the caller.
+There are different reasons for doing this. For example:
- After API Management has successfully validated the token received from the caller, it then needs to obtain an access token for the backend API using its own context, or context derived from the calling application. This scenario can be accomplished using either:
-
- * A custom policy to obtain an onward access token valid for the backend API from a configured identity provider.
-
- * The API Management instance's own identity – passing the token from the API Management resource's system-assigned or user-assigned [managed identity](authentication-managed-identity-policy.md) to the backend API.
+* **The backend is a legacy API that can't be updated to support OAuth**
-### Token management by API Management
+ API Management should first be configured to validate the token (checking the issuer and audience claims at a minimum). After validation, use one of several options available to secure onward connections from API Management, such as mutual TLS (mTLS) authentication. See [Service side options](#service-side-options), later in this article.
-API Management also supports acquisition and secure storage of OAuth 2.0 tokens for certain downstream services using the [authorizations](authorizations-overview.md) feature, including through use of custom policies and caching.
-
-With authorizations, API Management manages the tokens for access to OAuth 2.0 backends, allowing you to delegate authentication to your API Management instance to simplify access by client apps to a given backend service or SaaS platform.
-
-### Other options
-
-Although authorization is preferred and OAuth 2.0 has become the dominant method of enabling strong authorization for APIs, API Management enables other authentication options that can be useful if the backend or calling applications are legacy or don't yet support OAuth. Options include:
-
-* Mutual TLS (mTLS), also known as client certificate authentication, between the client (app) and API Management. This authentication can be end-to-end, with the call between API Management and the backend API secured in the same way. For more information, see [How to secure APIs using client certificate authentication in API Management](api-management-howto-mutual-certificates-for-clients.md)
-* Basic authentication, using the [authentication-basic](authentication-basic-policy.md) policy.
-* Subscription key, also known as an API key. For more information, see [Subscriptions in API Management](api-management-subscriptions.md).
-
-> [!NOTE]
-> We recommend using a subscription (API) key *in addition to* another method of authentication or authorization. On its own, a subscription key isn't a strong form of authentication, but use of the subscription key might be useful in certain scenarios, for example, tracking individual customers' API usage.
+* **The context required by the backend isn't possible to establish from the caller**
-## Developer portal (user plane)
-
-The managed developer portal is an optional API Management feature that allows internal or external developers and other interested parties to discover and use APIs that are published through API Management.
-
-If you elect to customize and publish the developer portal, API Management provides different options to secure it:
-
-* **External users** - The preferred option when the developer portal is consumed externally is to enable business-to-consumer access control through Azure Active Directory B2C (Azure AD B2C).
- * Azure AD B2C provides the option of using Azure AD B2C native accounts: users sign up to Azure AD B2C and use that identity to access the developer portal.
- * Azure AD B2C is also useful if you want users to access the developer portal using existing social media or federated organizational accounts.
- * Azure AD B2C provides many features to improve the end user sign-up and sign-in experience, including conditional access and MFA.
-
- For steps to enable Azure AD B2C authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md).
--
-* **Internal users** - The preferred option when the developer portal is consumed internally is to leverage your corporate Azure AD. Azure AD provides a seamless single sign-on (SSO) experience for corporate users who need to access and discover APIs through the developer portal.
-
- For steps to enable Azure AD authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md).
-
-
-* **Basic authentication** - A default option is to use the built-in developer portal [username and password](developer-portal-basic-authentication.md) provider, which allows developer users to register directly in API Management and sign in using API Management user accounts. User sign up through this option is protected by a CAPTCHA service.
-
-### Developer portal test console
-In addition to providing configuration for developer users to sign up for access and sign in, the developer portal includes a test console where the developers can send test requests through API Management to the backend APIs. This test facility also exists for contributing users of API Management who manage the service using the Azure portal.
+ After API Management has successfully validated the token received from the caller, it then needs to obtain an access token for the backend API using its own context, or context derived from the calling application. This scenario can be accomplished using either:
-In either both cases, if the API exposed through Azure API Management is secured with OAuth 2.0 - that is, a calling application (*bearer*) needs to obtain and pass a valid access token - you can configure API Management to generate a valid token on behalf of an Azure portal or developer portal test console user. For more information, see [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md) .
+ * A custom policy such as [send-request](send-request-policy.md) to obtain an onward access token valid for the backend API from a configured identity provider.
-This OAuth configuration for API testing is independent of the configuration required for user access to the developer portal. However, the identity provider and user could be the same. For example, an intranet application could require user access to the developer portal using SSO with their corporate identity, and that same corporate identity could obtain a token, through the test console, for the backend service being called with the same user context.
+ * The API Management instance's own identity – passing the token from the API Management resource's system-assigned or user-assigned [managed identity](authentication-managed-identity-policy.md) to the backend API.
-## Scenarios
+* **The organization wants to adopt a standardized authorization approach**
-Different authentication and authorization options apply to different scenarios. The following sections explore high level configurations for three example scenarios. More steps are required to fully secure and configure APIs exposed through API Management to either internal or external audiences. However, the scenarios intentionally focus on the minimum configurations recommended in each case to provide the required authentication and authorization.
+ Regardless of the authentication and authorization mechanisms on their API backends, organizations may choose to converge on OAuth 2.0 for a standardized authorization approach on the front end. API Management's gateway can enable consistent authorization configuration and a common experience for API consumers as the organization's backends evolve.
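For the managed identity option above, a one-line policy can obtain a token for the backend and attach it to the onward request; the resource value shown is a placeholder app ID URI:

```xml
<!-- Obtain a token for the backend API using the API Management instance's
     system-assigned managed identity and pass it in the Authorization
     header. The resource value is a placeholder. -->
<authentication-managed-identity resource="api://{backend-api-app-id}" />
```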
-### Scenario 1 - Intranet API and applications
+### Scenario 3 - API Management authorizes to backend
-* An API Management contributor and backend API developer wants to publish an API that is secured by OAuth 2.0.
-* The API will be consumed by desktop applications whose users sign in using SSO through Azure AD.
-* The desktop application developers also need to discover and test the APIs via the API Management developer portal.
+With [API authorizations](authorizations-overview.md), you configure API Management itself to authorize access to one or more backend or SaaS services, such as LinkedIn, GitHub, or other OAuth 2.0-compatible backends. In this scenario, a user or client app makes a request to the API Management gateway, with gateway access controlled using an identity provider or other [client side options](#client-side-options). Then, through [policy configuration](get-authorization-context-policy.md), the user or client app delegates backend authentication and authorization to API Management.
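A sketch of that policy configuration, where API Management retrieves the stored token and forwards it to the backend; the provider and authorization IDs are hypothetical:

```xml
<!-- Fetch the stored authorization's access token and attach it to the
     backend call. Provider and authorization IDs are hypothetical. -->
<get-authorization-context provider-id="github-01" authorization-id="auth-01"
    context-variable-name="auth-context" identity-type="managed" ignore-error="false" />
<set-header name="Authorization" exists-action="override">
    <value>@("Bearer " + ((Authorization)context.Variables.GetValueOrDefault("auth-context"))?.AccessToken)</value>
</set-header>
```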
-Key configurations:
+In the following example, a subscription key is used between the client and the gateway, and GitHub is the authorization provider for the backend API.
-|Configuration |Reference |
-|||
-| Authorize developer users of the API Management developer portal using their corporate identities and Azure AD. | [Authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md) |
-|Set up the test console in the developer portal to obtain a valid OAuth 2.0 token for the desktop app developers to exercise the backend API. <br/><br/>The same configuration can be used for the test console in the Azure portal, which is accessible to the API Management contributors and backend developers. <br/><br/>The token could be used in combination with an API Management subscription key. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
-| Validate the OAuth 2.0 token and claims when an API is called through API Management with an access token. | [Validate JWT policy](validate-jwt-policy.md) |
+With an API authorization, API Management acquires and refreshes the tokens for API access in the OAuth 2.0 flow. Authorizations simplify token management in multiple scenarios, such as:
-Go a step further with this scenario by moving API Management into the network perimeter and controlling ingress through a reverse proxy. For a reference architecture, see [Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis).
-
-### Scenario 2 - External API, partner application
+* A client app might need to authorize to multiple SaaS backends to resolve multiple fields using GraphQL resolvers.
+* Users authenticate to API Management by SSO from their identity provider, but authorize to a backend SaaS provider (such as LinkedIn) using a common organizational account.
-* An API Management contributor and backend API developer wants to undertake a rapid proof-of-concept to expose a legacy API through Azure API Management. The API through API Management will be externally (internet) facing.
-* The API uses client certificate authentication and will be consumed by a new public-facing single-page Application (SPA) being developed and delivered offshore by a partner.
-* The SPA uses OAuth 2.0 with Open ID Connect (OIDC).
-* Application developers will access the API in a test environment through the developer portal, using a test backend endpoint to accelerate frontend development.
+Examples:
-Key configurations:
+* [Create an authorization with the Microsoft Graph API](authorizations-how-to-azure-ad.md)
+* [Create an authorization with the GitHub API](authorizations-how-to-github.md)
-|Configuration |Reference |
-|||
-| Configure frontend developer access to the developer portal using the default username and password authentication.<br/><br/>Developers can also be invited to the developer portal. | [Configure users of the developer portal to authenticate using usernames and passwords](developer-portal-basic-authentication.md)<br/><br/>[How to manage user accounts in Azure API Management](api-management-howto-create-or-invite-developers.md) |
-| Validate the OAuth 2.0 token and claims when the SPA calls API Management with an access token. In this case, the audience is API Management. | [Validate JWT policy](validate-jwt-policy.md) |
-| Set up API Management to use client certificate authentication to the backend. | [Secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md) |
+## Other options to secure APIs
-Go a step further with this scenario by using the [developer portal with Azure AD authorization](api-management-howto-aad.md) and Azure AD [B2B collaboration](../active-directory/external-identities/what-is-b2b.md) to allow the delivery partners to collaborate more closely. Consider delegating access to API Management through RBAC in a development or test environment and enable SSO into the developer portal using their own corporate credentials.
+While authorization is preferred, and OAuth 2.0 has become the dominant method of enabling strong authorization for APIs, API Management provides several other mechanisms to secure or restrict access between client and gateway (client side) or between gateway and backend (service side). Depending on the organization's requirements, these may be used to supplement OAuth 2.0. Alternatively, configure them independently if the calling applications or backend APIs are legacy or don't yet support OAuth 2.0.
-### Scenario 3 - External API, SaaS, open to the public
+### Client side options
-* An API Management contributor and backend API developer is writing several new APIs that will be available to community developers.
-* The APIs will be publicly available, with full functionality protected behind a paywall and secured using OAuth 2.0. After purchasing a license, the developer will be provided with their own client credentials and subscription key that is valid for production use.
-* External community developers will discover the APIs using the developer portal. Developers will sign up and sign in to the developer portal using their social media accounts.
-* Interested developer portal users with a test subscription key can explore the API functionality in a test context, without needing to purchase a license. The developer portal test console will represent the calling application and generate a default access token to the backend API.
-
- > [!CAUTION]
- > Extra care is required when using a client credentials flow with the developer portal test console. See [security considerations](api-management-howto-oauth2.md#security-considerations).
+|Mechanism |Description |Considerations |
+||||
+|[mTLS](api-management-howto-mutual-certificates-for-clients.md) | [Validate certificate](validate-client-certificate-policy.md) presented by the connecting client and check certificate properties against a certificate managed in API Management | Certificate may be stored in a key vault. |
+|[Restrict caller IPs](ip-filter-policy.md) | Filter (allow/deny) calls from specific IP addresses or address ranges. | Use to restrict access to certain users or organizations, or to traffic from upstream services. |
+|[Subscription key](api-management-subscriptions.md) | Limit access to one or more APIs based on an API Management [subscription](api-management-howto-create-subscriptions.md) | We recommend using a subscription (API) key *in addition to* another method of authentication or authorization. On its own, a subscription key isn't a strong form of authentication, but use of the subscription key might be useful in certain scenarios, for example, tracking individual customers' API usage or granting access to specific API products. |
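As a sketch of the IP restriction mechanism, an `ip-filter` policy can allow individual addresses and ranges; the addresses below are RFC 5737 documentation examples, to be replaced with your own:

```xml
<!-- Allow calls only from an example address and range (RFC 5737
     documentation addresses; replace with your own). -->
<ip-filter action="allow">
    <address>203.0.113.10</address>
    <address-range from="203.0.113.100" to="203.0.113.200" />
</ip-filter>
```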
-Key configurations:
+> [!TIP]
+> For defense in depth, deploying a web application firewall upstream of the API Management instance is strongly recommended. For example, use [Azure Application Gateway](/azure/architecture/reference-architectures/apis/protect-apis) or [Azure Front Door](front-door-api-management.md).
-|Configuration |Reference |
-|||
-| Set up products in Azure API Management to represent the combinations of APIs that are exposed to community developers.<br/><br/> Set up subscriptions to enable developers to consume the APIs. | [Tutorial: Create and publish a product](api-management-howto-add-products.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
-| Configure community developer access to the developer portal using Azure AD B2C. Azure AD B2C can then be configured to work with one or more downstream social media identity providers. | [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md) |
-| Set up the test console in the developer portal to obtain a valid OAuth 2.0 token to the backend API using the client credentials flow. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>Adjust configuration steps shown in this article to use the [client credentials grant flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) instead of the authorization code grant flow. |
-Go a step further by delegating [user registration or product subscription](api-management-howto-setup-delegation.md) and extend the process with your own logic.
+### Service side options
+|Mechanism |Description |Considerations |
+||||
+|[Managed identity authentication](authentication-managed-identity-policy.md) | Authenticate to backend API with a system-assigned or user-assigned [managed identity](api-management-howto-use-managed-service-identity.md). | Recommended for scoped access to a protected backend resource by obtaining a token from Azure AD. |
+|[Certificate authentication](authentication-certificate-policy.md) | Authenticate to backend API using a client certificate. | Certificate may be stored in a key vault. |
+|[Basic authentication](authentication-basic-policy.md) | Authenticate to backend API with username and password that are passed through an Authorization header. | Discouraged if better options are available. |
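For the certificate option in the table above, a minimal sketch is a single policy element; the certificate ID is a placeholder referring to a client certificate already uploaded to the instance:

```xml
<!-- Present a client certificate (previously uploaded to API Management;
     the ID is a placeholder) when connecting to the backend. -->
<authentication-certificate certificate-id="{backend-client-cert}" />
```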
## Next steps * Learn more about [authentication and authorization](../active-directory/develop/authentication-vs-authorization.md) in the Microsoft identity platform.
api-management Authentication Basic Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authentication-basic-policy.md
Use the `authentication-basic` policy to authenticate with a backend service usi
### Usage notes - This policy can only be used once in a policy section.
+- We recommend using [named values](api-management-howto-properties.md) to provide credentials, with secrets protected in a key vault.
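As a hedged illustration of that recommendation, credentials can be referenced with the `{{named-value}}` syntax; the names `backend-username` and `backend-password` are hypothetical named values, not ones defined in this article:

```xml
<!-- Credentials referenced as named values ({{...}} syntax), which can be
     backed by key vault secrets. The names are hypothetical. -->
<authentication-basic username="{{backend-username}}" password="{{backend-password}}" />
```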
## Example
api-management Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md
Use the `cache-lookup` policy to perform cache lookup and return a valid cached
|Name|Description|Required| |-|--|--| |vary-by-header|Add one or more of these elements to start caching responses per value of specified header, such as `Accept`, `Accept-Charset`, `Accept-Encoding`, `Accept-Language`, `Authorization`, `Expect`, `From`, `Host`, `If-Match`.|No|
-|vary-by-query-parameter|Add one or more of these elements to start caching responses per value of specified query parameters. Enter a single or multiple parameters. Use semicolon as a separator. If none are specified, all query parameters are used.|No|
+|vary-by-query-parameter|Add one or more of these elements to start caching responses per value of specified query parameters. Enter one or more parameters, separated by semicolons. |No|
## Usage
For more information, see [Policy expressions](api-management-policy-expressions
* [API Management caching policies](api-management-caching-policies.md)
api-management Developer Portal Basic Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-basic-authentication.md
In the developer portal for Azure API Management, the default authentication method for users is to provide a username and password. In this article, learn how to set up users with basic authentication credentials to the developer portal.
-For an overview of options to secure the developer portal, see [Authentication and authorization in API Management](authentication-authorization-overview.md#developer-portal-user-plane).
+For an overview of options to secure the developer portal, see [Secure access to the API Management developer portal](secure-developer-portal-access.md).
## Prerequisites
api-management Get Authorization Context Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md
class Authorization
authorization-id="auth-01" context-variable-name="auth-context" identity-type="managed"
- identity="@(context.Request.Headers["Authorization"][0].Replace("Bearer ", ""))"
ignore-error="false" /> <!-- Return the token --> <return-response>
class Authorization
* [API Management access restriction policies](api-management-access-restriction-policies.md)
api-management Howto Protect Backend Frontend Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/howto-protect-backend-frontend-azure-ad-b2c.md
This scenario shows you how to configure your Azure API Management instance to protect an API. We'll use the Azure AD B2C SPA (Auth Code + PKCE) flow to acquire a token, alongside API Management to secure an Azure Functions backend using EasyAuth.
-For a conceptual overview of API authorization, see [Authentication and authorization in API Management](authentication-authorization-overview.md#gateway-data-plane).
+For a conceptual overview of API authorization, see [Authentication and authorization to APIs in API Management](authentication-authorization-overview.md).
## Aims
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
After importing the API, if needed, you can update the settings by using the [Se
[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
+## Validate against an OpenAPI specification
+
+You can configure API Management [validation policies](api-management-policies.md#validation-policies) to validate requests and responses (or elements of them) against the schema in an OpenAPI specification. For example, use the [validate-content](validate-content-policy.md) policy to validate the size or content of a request or response body.
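A sketch of such a policy, rejecting oversized bodies and JSON that doesn't conform to the imported schema; the size limit and variable name shown are illustrative:

```xml
<!-- Reject request bodies over ~100 KB or JSON that doesn't match the
     schema in the imported OpenAPI specification. Values are illustrative. -->
<validate-content unspecified-content-type-action="prevent" max-size="102400"
                  size-exceeded-action="prevent" errors-variable-name="requestBodyValidation">
    <content type="application/json" validate-as="json" action="prevent" />
</validate-content>
```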
+ ## Next steps > [!div class="nextstepaction"]
api-management Policy Fragments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-fragments.md
Limitations:
* A policy fragment can't include a policy section identifier (`<inbound>`, `<outbound>`, etc.) or the `<base/>` element. * Currently, a policy fragment can't nest another policy fragment.
+* The maximum size of a policy fragment is 32 KB.
## Prerequisites
api-management Secure Developer Portal Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/secure-developer-portal-access.md
+
+ Title: Secure access to developer portal
+
+description: Learn about options to secure access to the API Management developer portal, including Azure AD, Azure AD B2C, and basic authentication
++++ Last updated : 06/06/2023+++
+# Secure access to the API Management developer portal
++
+API Management has a fully customizable, standalone, managed [developer portal](api-management-howto-developer-portal.md), which can be used externally (or internally) to allow developer users to discover and interact with the APIs published through API Management. The developer portal has several options to facilitate secure user sign-up and sign-in.
+
+## Authentication options
+
+* **External users** - The preferred option when the developer portal is consumed externally is to enable business-to-consumer access control through Azure Active Directory B2C (Azure AD B2C).
+ * Azure AD B2C provides the option of using Azure AD B2C native accounts: users sign up to Azure AD B2C and use that identity to access the developer portal.
+ * Azure AD B2C is also useful if you want users to access the developer portal using existing social media or federated organizational accounts.
+ * Azure AD B2C provides many features to improve the end user sign-up and sign-in experience, including conditional access and MFA.
+
+ For steps to enable Azure AD B2C authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md).
++
+* **Internal users** - The preferred option when the developer portal is consumed internally is to leverage your corporate Azure AD. Azure AD provides a seamless single sign-on (SSO) experience for corporate users who need to access and discover APIs through the developer portal.
+
+ For steps to enable Azure AD authentication in the developer portal, see [How to authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md).
+
+
+* **Basic authentication** - A default option is to use the built-in developer portal [username and password](developer-portal-basic-authentication.md) provider, which allows developers to register directly in API Management and sign in using API Management user accounts. User sign up through this option is protected by a CAPTCHA service.
++
+## Developer portal test console
+In addition to providing configuration for developer users to sign up for access and sign in, the developer portal includes a test console where the developers can send test requests through API Management to the backend APIs. This test facility also exists for contributing users of API Management who manage the service using the Azure portal.
+
+If the API exposed through Azure API Management is secured with OAuth 2.0 - that is, a calling application (*bearer*) needs to obtain and pass a valid access token - you can configure API Management to generate a valid token on behalf of an Azure portal or developer portal test console user. For more information, see [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md).
+
+This OAuth 2.0 configuration for API testing is independent of the configuration required for user access to the developer portal. However, the identity provider and user could be the same. For example, an intranet application could require user access to the developer portal using SSO with their corporate identity. That same corporate identity could obtain a token, through the test console, for the backend service being called with the same user context.
+
+## Scenarios
+
+Different authentication and authorization options apply to different scenarios. The following sections explore high level configurations for three example scenarios. More steps are required to fully secure and configure APIs exposed through API Management. However, the scenarios intentionally focus on the minimum configurations recommended in each case to provide the required authentication and authorization.
+
+### Scenario 1 - Intranet API and applications
+
+* An API Management contributor and backend API developer wants to publish an API that is secured by OAuth 2.0.
+* The API will be consumed by desktop applications whose users sign in using SSO through Azure AD.
+* The desktop application developers also need to discover and test the APIs via the API Management developer portal.
+
+Key configurations:
++
+|Configuration |Reference |
+|||
+| Authorize developer users of the API Management developer portal using their corporate identities and Azure AD. | [Authorize developer accounts by using Azure Active Directory in Azure API Management](api-management-howto-aad.md) |
+|Set up the test console in the developer portal to obtain a valid OAuth 2.0 token for the desktop app developers to exercise the backend API. <br/><br/>The same configuration can be used for the test console in the Azure portal, which is accessible to the API Management contributors and backend developers. <br/><br/>The token could be used in combination with an API Management subscription key. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
+| Validate the OAuth 2.0 token and claims when an API is called through API Management with an access token. | [Validate JWT policy](validate-jwt-policy.md) |
+
+Go a step further with this scenario by moving API Management into the network perimeter and controlling ingress through a reverse proxy. For a reference architecture, see [Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis).
+
+### Scenario 2 - External API, partner application
+
+* An API Management contributor and backend API developer wants to undertake a rapid proof-of-concept to expose a legacy API through Azure API Management. The API through API Management will be externally (internet) facing.
+* The API uses client certificate authentication and will be consumed by a new public-facing single-page app (SPA) being developed offshore by a partner.
+* The SPA uses OAuth 2.0 with OpenID Connect (OIDC).
+* Application developers will access the API in a test environment through the developer portal, using a test backend endpoint to accelerate frontend development.
+
+Key configurations:
+
+|Configuration |Reference |
+|||
+| Configure frontend developer access to the developer portal using the default username and password authentication.<br/><br/>Developers can also be invited to the developer portal. | [Configure users of the developer portal to authenticate using usernames and passwords](developer-portal-basic-authentication.md)<br/><br/>[How to manage user accounts in Azure API Management](api-management-howto-create-or-invite-developers.md) |
+| Validate the OAuth 2.0 token and claims when the SPA calls API Management with an access token. In this case, the audience is API Management. | [Validate JWT policy](validate-jwt-policy.md) |
+| Set up API Management to use client certificate authentication to the backend. | [Secure backend services using client certificate authentication in Azure API Management](api-management-howto-mutual-certificates.md) |
+
+Go a step further with this scenario by using the [developer portal with Azure AD authorization](api-management-howto-aad.md) and Azure AD [B2B collaboration](../active-directory/external-identities/what-is-b2b.md) to allow the delivery partners to collaborate more closely. Consider delegating access to API Management through RBAC in a development or test environment and enable SSO into the developer portal using their own corporate credentials.
+
+### Scenario 3 - External API, SaaS, open to the public
+
+* An API Management contributor and backend API developer is writing several new APIs that will be available to community developers.
+* The APIs will be publicly available, with full functionality protected behind a paywall and secured using OAuth 2.0. After purchasing a license, the developer will be provided with their own client credentials and subscription key that is valid for production use.
+* External community developers will discover the APIs using the developer portal. Developers will sign up and sign in to the developer portal using their social media accounts.
+* Interested developer portal users with a test subscription key can explore the API functionality in a test context, without needing to purchase a license. The developer portal test console will represent the calling application and generate a default access token to the backend API.
+
+ > [!CAUTION]
+ > Extra care is required when using a client credentials flow with the developer portal test console. See [security considerations](api-management-howto-oauth2.md#security-considerations).
+
+Key configurations:
+
+|Configuration |Reference |
+|||
+| Set up products in Azure API Management to represent the combinations of APIs that are exposed to community developers.<br/><br/> Set up subscriptions to enable developers to consume the APIs. | [Tutorial: Create and publish a product](api-management-howto-add-products.md)<br/><br/>[Subscriptions in Azure API Management](api-management-subscriptions.md) |
+| Configure community developer access to the developer portal using Azure AD B2C. Azure AD B2C can then be configured to work with one or more downstream social media identity providers. | [How to authorize developer accounts by using Azure Active Directory B2C in Azure API Management](api-management-howto-aad-b2c.md) |
+| Set up the test console in the developer portal to obtain a valid OAuth 2.0 token to the backend API using the client credentials flow. | [How to authorize test console of developer portal by configuring OAuth 2.0 user authorization](api-management-howto-oauth2.md)<br/><br/>Adjust configuration steps shown in this article to use the [client credentials grant flow](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) instead of the authorization code grant flow. |
+
+Go a step further by delegating [user registration or product subscription](api-management-howto-setup-delegation.md) and extend the process with your own logic.
++
+## Next steps
+* Learn more about [authentication and authorization](../active-directory/develop/authentication-vs-authorization.md) in the Microsoft identity platform.
+* Learn how to [mitigate OWASP API security threats](mitigate-owasp-api-threats.md) using API Management.
api-management Self Hosted Gateway Arc Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-arc-reference.md
+
+ Title: Reference - Self-hosted gateway Azure Arc settings - Azure API Management
+description: Reference for the required and optional settings to configure when using the Azure Arc extension for Azure API Management self-hosted gateway.
+++++ Last updated : 06/04/2023+++
+# Reference: Self-hosted gateway Azure Arc configuration settings
+
+This article provides a reference for required and optional settings that are used to configure the Azure Arc extension for API Management [self-hosted gateway container](self-hosted-gateway-overview.md).
++
+## Configuration API integration
+
+The Configuration API is used by the self-hosted gateway to connect to Azure API Management to get the latest configuration and send metrics, when enabled.
+
+Here's an overview of all configuration options:
+
+| Name | Description | Required | Default |
+|-||-|-|
+| `gateway.configuration.uri` | Configuration endpoint in Azure API Management for the self-hosted gateway. Find this value in the Azure portal under **Gateways** > **Deployment**. | Yes | N/A |
+| `gateway.auth.token` | Authentication key used to authenticate with the Azure API Management service. Typically starts with `GatewayKey`. | Yes | N/A |
+| `gateway.configuration.backup.enabled` | If enabled, stores a backup copy of the latest downloaded configuration on a storage volume. | No | `false` |
+| `gateway.configuration.backup.persistentVolumeClaim.accessMode` | Access mode for the Persistent Volume Claim (PVC) pod. | No | `ReadWriteMany` |
+| `gateway.configuration.backup.persistentVolumeClaim.size` | Size of the Persistent Volume Claim (PVC) to be created. | No | `50Mi` |
+| `gateway.configuration.backup.persistentVolumeClaim.storageClassName` | Storage class name to be used for the Persistent Volume Claim (PVC). When no value is assigned (`null`), the platform default is used. The specified storage class should support `ReadWriteMany` access mode; learn more about the [supported volume providers and their supported access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes). | No | `null` |
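+
+Taken together, these settings can be expressed as a minimal sketch. The nested YAML form below is for readability only, and the endpoint and key are placeholders; substitute the values shown for your gateway in the portal:
+
+```yaml
+# Sketch only; hypothetical values.
+gateway:
+  configuration:
+    uri: "contoso-apim.configuration.azure-api.net"   # from Gateways > Deployment
+    backup:
+      enabled: true                  # keep a copy of the last downloaded configuration
+      persistentVolumeClaim:
+        accessMode: ReadWriteMany    # the storage class must support this mode
+        size: 50Mi
+        storageClassName: null       # null -> platform default storage class
+  auth:
+    token: "GatewayKey <placeholder-key>"
+```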
+
+## Cross-instance discovery & synchronization
+
+| Name | Description | Required | Default |
+|-||-|-|
+| `service.instance.heartbeat.port` | UDP port used for instances of a self-hosted gateway deployment to send heartbeats to other instances. | No | 4291 |
+| `service.instance.synchronization.port` | UDP port used for self-hosted gateway instances to synchronize rate limiting across multiple instances. | No | 4290 |
+
+## Metrics
+
+| Name | Description | Required | Default |
+|-||-|-|
+| `telemetry.metrics.cloud` | Indication whether or not to [enable emitting metrics to Azure Monitor](how-to-configure-cloud-metrics-logs.md). | No | `true` |
+| `telemetry.metrics.local` | Enable [local metrics collection](how-to-configure-local-metrics-logs.md) through StatsD. Value is one of the following options: `none`, `statsd`. | No | N/A |
+| `telemetry.metrics.localStatsd.endpoint` | StatsD endpoint. | Yes, if `telemetry.metrics.local` is set to `statsd`; otherwise no. | N/A |
+| `telemetry.metrics.localStatsd.sampling` | StatsD metrics sampling rate. Value must be between 0 and 1, for example, 0.5. | No | N/A |
+| `telemetry.metrics.localStatsd.tagFormat` | StatsD exporter [tagging format](https://github.com/prometheus/statsd_exporter#tagging-extensions). Value is one of the following options: `librato`, `dogStatsD`, `influxDB`. | No | N/A |
+| `telemetry.metrics.opentelemetry.enabled` | Indication whether or not to enable [emitting metrics to an OpenTelemetry collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) on Kubernetes. | No | `false` |
+| `telemetry.metrics.opentelemetry.collector.uri` | URI of the OpenTelemetry collector to send metrics to. | Yes, if `telemetry.metrics.opentelemetry.enabled` is set to `true`; otherwise no. | N/A |
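+
+As an illustrative sketch (the StatsD endpoint and sampling rate below are assumptions, not defaults), local StatsD collection can be enabled alongside Azure Monitor metrics like this:
+
+```yaml
+# Hypothetical combination of the metrics settings above.
+telemetry:
+  metrics:
+    cloud: true        # keep emitting metrics to Azure Monitor
+    local: statsd      # enable local collection through StatsD
+    localStatsd:
+      endpoint: "statsd.monitoring.svc.cluster.local:8125"  # assumed endpoint
+      sampling: 0.5    # sample half of the metrics
+      tagFormat: dogStatsD
+```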
+
+## Logs
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| `telemetry.logs.std` |[Enable logging](how-to-configure-local-metrics-logs.md#logs) to a standard stream. Value is one of the following options: `none`, `text`, `json`. | No | `text` |
+| `telemetry.logs.local` | [Enable local logging](how-to-configure-local-metrics-logs.md#logs). Value is one of the following options: `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` | No | `auto` |
+| `telemetry.logs.localConfig.localsyslog.endpoint` | Endpoint for local syslogs | Yes if `telemetry.logs.local` is set to `localsyslog`; otherwise no. | N/A |
+| `telemetry.logs.localConfig.localsyslog.facility` | Specifies local syslog [facility code](https://en.wikipedia.org/wiki/Syslog#Facility), for example, `7`. | No | N/A |
+| `telemetry.logs.localConfig.rfc5424.endpoint` | rfc5424 endpoint. | Yes if `telemetry.logs.local` is set to `rfc5424`; otherwise no. | N/A |
+| `telemetry.logs.localConfig.rfc5424.facility` | Facility code per [rfc5424](https://tools.ietf.org/html/rfc5424), for example, `7` | No | N/A |
+| `telemetry.logs.localConfig.journal.endpoint` | Journal endpoint. |Yes if `telemetry.logs.local` is set to `journal`; otherwise no. | N/A |
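+
+For example, a hedged sketch (the syslog endpoint is an assumption) that writes JSON to the standard stream while also forwarding to local syslog:
+
+```yaml
+# Hypothetical logging configuration combining the options above.
+telemetry:
+  logs:
+    std: json           # structured logs on the standard stream
+    local: localsyslog
+    localConfig:
+      localsyslog:
+        endpoint: "/dev/log"   # assumed local syslog socket
+        facility: 7
+```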
+
+## Traffic routing
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| `service.type` | Type of Kubernetes service to use for exposing the gateway. ([docs](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types)) | No | `ClusterIP` |
+| `service.http.port` | Port to use for exposing HTTP traffic. | No | `8080` |
+| `service.http.nodePort` | Port on the node to use for exposing HTTP traffic. This requires `NodePort` as service type. | No | N/A |
+| `service.https.port` | Port to use for exposing HTTPS traffic. | No | `8081` |
+| `service.https.nodePort` | Port on the node to use for exposing HTTPS traffic. This requires `NodePort` as service type. | No | N/A |
+| `service.annotations` | Annotations to add to the Kubernetes service for the gateway. | No | N/A |
+| `ingress.annotations` | Annotations to add to the Kubernetes Ingress for the gateway. ([experimental](https://github.com/Azure/api-management-self-hosted-gateway-ingress)) | No | N/A |
+| `ingress.enabled` | Indication whether or not Kubernetes Ingress should be used. ([experimental](https://github.com/Azure/api-management-self-hosted-gateway-ingress)) | No | `false` |
+| `ingress.tls` | TLS configuration for Kubernetes Ingress. ([experimental](https://github.com/Azure/api-management-self-hosted-gateway-ingress)) | No | N/A |
+| `ingress.hosts` | Configuration of hosts to use for Kubernetes Ingress. ([experimental](https://github.com/Azure/api-management-self-hosted-gateway-ingress)) | No | N/A |
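+
+For instance, a sketch that exposes the gateway through a `NodePort` service (the node port numbers are assumptions within the usual Kubernetes NodePort range, and the annotation is hypothetical):
+
+```yaml
+# Sketch: fixed node ports instead of the default ClusterIP service.
+service:
+  type: NodePort       # required for the nodePort values below to apply
+  http:
+    port: 8080
+    nodePort: 30080    # assumed value; must fall in the cluster's NodePort range
+  https:
+    port: 8081
+    nodePort: 30081
+  annotations:
+    contoso.example/team: "api-platform"   # hypothetical annotation
+```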
+
+## Integrations
+
+The self-hosted gateway integrates with various other technologies. This section provides an overview of the available configuration options you can use.
+
+### Dapr
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| `dapr.enabled` |Indication whether or not Dapr integration should be used. | No | `false` |
+| `dapr.app.id` | Application ID to use for Dapr integration. | No | None |
+| `dapr.config` | Defines which Configuration CRD Dapr should use. | No | `tracing` |
+| `dapr.logging.level` | Level of log verbosity of the Dapr sidecar. | No | `info` |
+| `dapr.logging.useJsonOutput` | Indication whether or not logging should be in JSON format. | No | `true` |
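+
+A minimal sketch enabling the integration (the application ID is hypothetical):
+
+```yaml
+# Sketch: run the gateway with a Dapr sidecar.
+dapr:
+  enabled: true
+  app:
+    id: contoso-gateway   # hypothetical Dapr app id
+  logging:
+    level: info
+    useJsonOutput: true
+```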
+
+### Azure Monitor
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| `monitoring.customResourceId` | Resource ID of the Azure Log Analytics workspace to send logs to. | No | N/A |
+| `monitoring.ingestionKey` | Ingestion key to authenticate with Azure Log Analytics workspace to send logs to. | No | N/A |
+| `monitoring.workspaceId` | Workspace ID of the Azure Log Analytics workspace to send logs to. | No | N/A |
+
+## Image & workload scheduling
+
+Kubernetes is a powerful orchestration platform that gives you much flexibility in what should be deployed and how it should be scheduled.
+
+This section provides an overview of the available configuration options you can use to influence the image that is used, how workloads get scheduled, and how they're configured to self-heal.
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| `replicaCount` | Number of instances of the self-hosted gateway to run. | No | `3` |
+| `image.repository` | Image to run. | No | `mcr.microsoft.com/azure-api-management/gateway` |
+| `image.pullPolicy` | Policy to use for pulling container images. | No | `IfNotPresent` |
+| `image.tag` | Container image tag to use. | No | App version of extension is used |
+| `imagePullSecrets` | Kubernetes secret to use for authenticating with container registry when pulling the container image. | No | N/A |
+| `probes.readiness.httpGet.path` | URI path to use for readiness probes of the container | No | `/status-0123456789abcdef` |
+| `probes.readiness.httpGet.port` | Port to use for readiness probes of the container | No | `http` |
+| `probes.liveness.httpGet.path` | URI path to use for liveness probes of the container | No | `/status-0123456789abcdef` |
+| `probes.liveness.httpGet.port` | Port to use for liveness probes of the container | No | `http` |
+| `highAvailability.enabled` | Indication whether or not the gateway should be scheduled highly available in the cluster. | No | `false` |
+| `highAvailability.disruption.maximumUnavailable` | Number of pods that are allowed to be unavailable due to [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions). | No | `25%` |
+| `highAvailability.podTopologySpread.whenUnsatisfiable` | Indication of how pods should be spread across nodes if the requirement can't be met. Learn more in the [Kubernetes docs](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/). | No | `ScheduleAnyway` |
+| `resources` | Capability to define CPU/Memory resources to assign to gateway | No | N/A |
+| `nodeSelector` | Capability to use selectors to identify the node on which the gateway should run. | No | N/A |
+| `affinity` | Affinity for pod scheduling ([docs](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/)) | No | N/A |
+| `tolerations` | Tolerations for pod scheduling ([docs](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/)) | No | N/A |
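+
+Putting several of these options together, a sketch of a highly available deployment with explicit resource requests (the CPU and memory figures are illustrative, not sizing guidance):
+
+```yaml
+# Sketch only; resource values are assumptions, not tested recommendations.
+replicaCount: 3
+highAvailability:
+  enabled: true
+  disruption:
+    maximumUnavailable: 25%
+  podTopologySpread:
+    whenUnsatisfiable: ScheduleAnyway
+resources:
+  requests:
+    cpu: 500m
+    memory: 512Mi
+  limits:
+    cpu: "1"
+    memory: 1Gi
+```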
+
+## Next steps
+
+- Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md)
+- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
+- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- [Enable Dapr support on self-hosted gateway](self-hosted-gateway-enable-dapr.md)
+- Learn more about configuration options for the [self-hosted gateway container image](self-hosted-gateway-settings-reference.md)
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
helm install azure-api-management-gateway \
- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md) - [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md) - [Enable Dapr support on self-hosted gateway](self-hosted-gateway-enable-dapr.md)
+- Learn more about configuration options for [Azure Arc extension](self-hosted-gateway-arc-reference.md)
app-service Deploy Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md
Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/eco
:::image type="content" source="media/deploy-azure-pipelines/azure-web-app-task.png" alt-text="Screenshot of Azure web app task.":::
-1. Select **Azure Resource Manager** for the **Connection type** and choose your **Azure subscription**. Make sure to **Authorize** your connection.
-
-1. Select **Web App on Linux** and enter your `azureSubscription`, `appName`, and `package`. Your complete YAML should look like this.
+1. Select **Azure Resource Manager** for the **Connection type** and choose your **Azure subscription**. Make sure to **Authorize** your connection.
1. Select **Web App on Linux** and enter your `azureSubscription`, `appName`, and `package`. Your complete YAML should look like this.
application-gateway Create Multiple Sites Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-multiple-sites-portal.md
Previously updated : 07/14/2022 Last updated : 06/08/2023 #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can host multiple sites.
Sign in to the [Azure portal](https://portal.azure.com).
- **Name**: Enter *myVNet* for the name of the virtual network.
- - **Subnet name** (application gateway subnet): The **Subnets** grid will show a subnet named *Default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
-
- - **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
-
- - **Address range** (backend server subnet): In the second row of the **Subnets** Grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*.
+ - **Subnet name** (application gateway subnet): The **Subnets** grid will show a subnet named *Default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.
Select **OK** to close the **Create virtual network** window and save the virtual network settings.
On the **Configuration** tab, you'll connect the frontend and backend pools you
4. On the **Backend targets** tab, select **contosoPool** for the **Backend target**.
-5. For the **HTTP setting**, select **Add new** to create a new HTTP setting. The HTTP setting will determine the behavior of the routing rule. In the **Add an HTTP setting** window that opens, enter *contosoHTTPSetting* for the **HTTP setting name**. Accept the default values for the other settings in the **Add an HTTP setting** window, then select **Add** to return to the **Add a routing rule** window.
+5. For the **Backend setting**, select **Add new** to add a new Backend setting. The Backend setting will determine the behavior of the routing rule. In the **Add Backend setting** window that opens, enter *contosoSetting* for the **Backend settings name** and *80* for the **Backend port**. Accept the default values for the other settings in the **Add Backend setting** window, then select **Add** to return to the **Add a routing rule** window.
6. On the **Add a routing rule** window, select **Add** to save the routing rule and return to the **Configuration** tab.
-7. Select **Add a routing rule** and add a similar rule, listener, backend target, and HTTP setting for Fabrikam.
+7. Select **Add a routing rule** and add a similar rule, listener, backend target, and backend setting for Fabrikam.
:::image type="content" source="./media/create-multiple-sites-portal/fabrikam-rule.png" alt-text="Fabrikam rule":::
In this example, you'll use virtual machines as the target backend. You can eith
To add backend targets, you'll:
-1. Create two new VMs, *contosoVM* and *fabrikamVM*, to be used as backend servers.
-2. Install IIS on the virtual machines to verify that the application gateway was created successfully.
-3. Add the backend servers to the backend pools.
+1. Add a backend subnet.
+2. Create two new VMs, *contosoVM* and *fabrikamVM*, to be used as backend servers.
+3. Install IIS on the virtual machines to verify that the application gateway was created successfully.
+4. Add the backend servers to the backend pools.
+
+### Add a backend subnet
+
+1. In the Azure portal, search for **virtual networks** and select **myVNet**.
+2. Under **Settings**, select **Subnets**.
+3. Select **+ Subnet** and in the **Add subnet** pane, enter *myBackendSubnet* for **Name** and accept *10.0.1.0/24* as the **Subnet address range**.
+4. Accept all other default settings and select **Save**.
### Create a virtual machine
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
description: In this quickstart, you learn how to use the Azure portal to create
Previously updated : 10/13/2022 Last updated : 06/08/2023
You'll create the application gateway using the tabs on the **Create application
- **Name**: Enter *myVNet* for the name of the virtual network.
- - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed.
-
- - **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
-
- - **Address range** (backend server subnet): In the second row of the **Subnets** Grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*.
+ - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.
Select **OK** to close the **Create virtual network** window and save the virtual network settings.
application-gateway Ssl Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-overview.md
Previously updated : 03/27/2023 Last updated : 06/09/2023
Transport Layer Security (TLS), previously known as Secure Sockets Layer (SSL),
Application Gateway supports TLS termination at the gateway, after which traffic typically flows unencrypted to the backend servers. There are a number of advantages of doing TLS termination at the application gateway: -- **Improved performance** – The biggest performance hit when doing TLS decryption is the initial handshake. To improve performance, the server doing the decryption caches TLS session IDs and manages TLS session tickets. If this is done at the application gateway, all requests from the same client can use the cached values. If it's done on the backend servers, then each time the client's requests go to a different server the client must re-authenticate. The use of TLS tickets can help mitigate this issue, but they aren't supported by all clients and can be difficult to configure and manage.
+- **Improved performance** – The biggest performance hit when doing TLS decryption is the initial handshake. To improve performance, the server doing the decryption caches TLS session IDs and manages TLS session tickets. If this is done at the application gateway, all requests from the same client can use the cached values. If it's done on the backend servers, then each time the client's requests go to a different server the client must reauthenticate. The use of TLS tickets can help mitigate this issue, but they aren't supported by all clients and can be difficult to configure and manage.
- **Better utilization of the backend servers** – SSL/TLS processing is very CPU intensive, and is becoming more intensive as key sizes increase. Removing this work from the backend servers allows them to focus on what they are most efficient at, delivering content. - **Intelligent routing** – By decrypting the traffic, the application gateway has access to the request content, such as headers, URI, and so on, and can use this data to route requests. - **Certificate management** – Certificates only need to be purchased and installed on the application gateway and not all backend servers. This saves both time and money.
In this example, requests using TLS1.2 are routed to backend servers in Pool1 us
## End to end TLS and allow listing of certificates
-Application Gateway only communicates with those backend servers that have either allow-listed their certificate with the Application Gateway or whose certificates are signed by well-known CA authorities and the certificate's CN matches the host name in the HTTP backend settings. There are some differences in the end-to-end TLS setup process with respect to the version of Application Gateway used. The following section explains them individually.
+Application Gateway only communicates with those backend servers that have either allow-listed their certificate with the Application Gateway or whose certificates are signed by well-known CA authorities and the certificate's CN matches the host name in the HTTP backend settings. There are some differences in the end-to-end TLS setup process with respect to the version of Application Gateway used. The following section explains the versions individually.
## End-to-end TLS with the v1 SKU
The following tables outline the differences in SNI between the v1 and v2 SKU in
| If the client doesn't specify a SNI header and if all the multi-site headers are enabled with "Require SNI" | Resets the connection | Returns the certificate of the first HTTPS listener according to the order specified by the request routing rules associated with the HTTPS listeners |
| If the client doesn't specify SNI header and if there's a basic listener configured with a certificate | Returns the certificate configured in the basic listener to the client (default or fallback certificate) | Returns the certificate configured in the basic listener |
+> [!TIP]
+> The SNI flag can be configured with PowerShell or by using an ARM template. For more information, see [RequireServerNameIndication](/powershell/module/az.network/set-azapplicationgatewayhttplistener#-requireservernameindication) and [Quickstart: Direct web traffic with Azure Application Gateway - ARM template](quick-create-template.md#review-the-template).
+ ### Backend TLS connection (application gateway to the backend server) #### For probe traffic
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/automanage-hotpatch.md
Hotpatching is a new way to install updates on supported _Windows Server Azure E
| Windows Server 2022 Datacenter: Azure Edition Server Core | Generally available (GA) | Public preview | | Windows Server 2022 Datacenter: Azure Edition with Desktop Experience | Public preview | Public preview |
+> [!NOTE]
+> You can set Hotpatch in Windows Server 2022 Datacenter: Azure Edition with Desktop Experience in the Azure portal by getting the VM preview image from [this link](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/microsoftwindowsserver.windowsserverhotpatch-previews/resourceGroupId//resourceGroupLocation//dontDiscardJourney~/false/_provisioningContext~/%7B%22initialValues%22%3A%7B%22subscriptionIds%22%3A%5B%222389fa6a-fd51-4c60-8a9e-95e7e17b76b6%22%2C%221b3510f5-e7cd-457d-a84e-1a0a61013375%22%5D%2C%22resourceGroupNames%22%3A%5B%5D%2C%22locationNames%22%3A%5B%22westus2%22%2C%22centralus%22%2C%22eastus%22%5D%7D%2C%22telemetryId%22%3A%22b73b4782-5eee-41b0-ad74-9a0b98365009%22%2C%22marketplaceItem%22%3A%7B%22categoryIds%22%3A%5B%5D%2C%22id%22%3A%22Microsoft.Portal%22%2C%22itemDisplayName%22%3A%22NoMarketplace%22%2C%22products%22%3A%5B%5D%2C%22version%22%3A%22%22%2C%22productsWithNoPricing%22%3A%5B%5D%2C%22publisherDisplayName%22%3A%22Microsoft.Portal%22%2C%22deploymentName%22%3A%22NoMarketplace%22%2C%22launchingContext%22%3A%7B%22telemetryId%22%3A%22b73b4782-5eee-41b0-ad74-9a0b98365009%22%2C%22source%22%3A%5B%5D%2C%22galleryItemId%22%3A%22%22%7D%2C%22deploymentTemplateFileUris%22%3A%7B%7D%2C%22uiMetadata%22%3Anull%7D%7D). For related information, see [this](https://azure.microsoft.com/updates/hotpatch-is-now-available-on-preview-images-of-windows-server-vms-on-azure-with-the-desktop-experience-installation-mode/) Azure update.
+ ## How hotpatch works Hotpatch works by first establishing a baseline with a Windows Update Latest Cumulative Update. Hotpatches are periodically released (for example, on the second Tuesday of the month) that builds on that baseline. Hotpatches will contain updates that don't require a reboot. Periodically (starting at every three months), the baseline is refreshed with a new Latest Cumulative Update.
automation Overview Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview-monitoring-agent.md
Title: Azure Automation Change Tracking and Inventory overview using Azure Monit
description: This article describes the Change Tracking and Inventory feature using Azure monitoring agent (Preview), which helps you identify software and Microsoft service changes in your environment. Previously updated : 02/23/2023 Last updated : 05/29/2023 # Overview of change tracking and inventory using Azure Monitoring Agent (Preview)
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: Windows Registry :heavy_check_mark: Windows Files :heavy_check_mark: Linux Files :heavy_check_mark: Windows Software :heavy_check_mark: Windows Services & Linux Daemons
> [!Important] > Currently, Change tracking and inventory uses Log Analytics Agent and this is scheduled to retire by 31 August 2024. We recommend that you use Azure Monitoring Agent as the new supporting agent.
Change Tracking and Inventory using Azure Monitoring Agent (Preview) doesn't sup
- If network traffic is high, change records can take up to six hours to display. - If you modify a configuration while a machine or server is shut down, it might post changes belonging to the previous configuration. - Collecting Hotfix updates on Windows Server 2016 Core RS3 machines.
+- Linux daemons might show a changed state even though no change has occurred. This issue arises because of how the `SvcRunLevels` data in the Azure Monitor [ConfigurationChange](https://learn.microsoft.com/azure/azure-monitor/reference/tables/configurationchange) table is written.
+ ## Limits
The following table shows the tracked item limits per machine for change trackin
|Registry|250|| |Windows software|250|Doesn't include software updates.| |Linux packages|1,250||
+|Windows Services |250||
+|Linux Daemons | 250||
## Supported operating systems
The next table shows the data collection frequency for the types of changes supp
| Windows registry | 50 minutes | | Windows file | 30 to 40 minutes | | Linux file | 15 minutes |
-| Windows services | 10 seconds to 30 minutes</br> Default: 30 minutes |
+| Windows services | 10 minutes to 30 minutes</br> Default: 30 minutes |
| Windows software | 30 minutes | | Linux software | 5 minutes |
+| Linux Daemons | 5 minutes |
The following table shows the tracked item limits per machine for Change Tracking and Inventory. | **Resource** | **Limit** |
-||||
+|||
|File|500| |Registry|250| |Windows software (not including hotfixes) |250| |Linux packages|1250|
+|Windows Services | 250 |
+|Linux Daemons| 500|
+
+### Windows services data
+
+#### Prerequisites
+
+To enable tracking of Windows services data, you must upgrade the Change Tracking (CT) extension to version 2.11.0.0 or later.
+
+#### [For Windows Azure VMs](#tab/win-az-vm)
+
+```powershell-interactive
+az vm extension set --publisher Microsoft.Azure.ChangeTrackingAndInventory --version 2.11.0 --ids /subscriptions/<subscriptionids>/resourceGroups/<resourcegroupname>/providers/Microsoft.Compute/virtualMachines/<vmname> --name ChangeTracking-Windows --enable-auto-upgrade true
+```
+#### [For Linux Azure VMs](#tab/lin-az-vm)
+
+```powershell-interactive
+az vm extension set --publisher Microsoft.Azure.ChangeTrackingAndInventory --version 2.11.0 --ids /subscriptions/<subscriptionids>/resourceGroups/<resourcegroupname>/providers/Microsoft.Compute/virtualMachines/<vmname> --name ChangeTracking-Linux --enable-auto-upgrade true
+```
+#### [For Arc-enabled Windows VMs](#tab/win-arc-vm)
+
+```powershell-interactive
+az connectedmachine extension create --name ChangeTracking-Windows --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Windows --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+```
+
+#### [For Arc-enabled Linux VMs](#tab/lin-arc-vm)
+
+```powershell-interactive
+az connectedmachine extension create --name ChangeTracking-Linux --publisher Microsoft.Azure.ChangeTrackingAndInventory --type ChangeTracking-Linux --machine-name <arc-server-name> --resource-group <resource-group-name> --location <arc-server-location> --enable-auto-upgrade true
+```
++
+#### Configure frequency
+
+The default collection frequency for Windows services is 30 minutes. To configure the frequency:
+- Under **Edit Settings**, use the slider on the **Windows services** tab.
+
-> [!NOTE]
-> Change Tracking with Support Windows Services & Daemons will be supported by GA.
## Support for alerts on configuration state
automation Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/disable-local-authentication.md
Title: Disable local authentication in Azure Automation
description: This article describes disabling local authentication in Azure Automation. Previously updated : 09/30/2022 Last updated : 06/12/2023 #Customer intent: As an administrator, I want disable local authentication so that I can enhance security.
The following table describes the behaviors or features that are prevented from
|Starting a runbook using a webhook. | Start a runbook job using Azure Resource Manager template, which uses Azure AD authentication. |
|Using Automation Desired State Configuration.| Use [Azure Policy Guest configuration](../governance/machine-configuration/overview.md). |
|Using agent-based Hybrid Runbook Workers.| Use [extension-based Hybrid Runbook Workers (Preview)](./extension-based-hybrid-runbook-worker-install.md).|
+|Using Automation Update management |Use [Update management center (preview)](../update-center/overview.md) |
## Next steps
automation Migrate Run As Accounts Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/migrate-run-as-accounts-managed-identity.md
Title: Migrate from a Run As account to Managed identities
description: This article describes how to migrate from a Run As account to managed identities in Azure Automation. Previously updated : 05/29/2023 Last updated : 06/06/2023
Before you migrate from a Run As account or Classic Run As account to a managed
> - There are two ways to use managed identities in hybrid runbook worker scripts: either the system-assigned managed identity for the Automation account *or* the virtual machine (VM) managed identity for an Azure VM running as a hybrid runbook worker. > - The VM's user-assigned managed identity and the VM's system-assigned managed identity will *not* work in an Automation account that's configured with an Automation account's managed identity. When you enable the Automation account's managed identity, you can use only the Automation account's system-assigned managed identity and not the VM managed identity. For more information, see [Use runbook authentication with managed identities](automation-hrw-run-runbooks.md).
-1. Assign the same role to the managed identity to access the Azure resources that match the Run As account. Follow the steps in [Check the role assignment for the Azure Automation Run As account](manage-run-as-account.md#check-role-assignment-for-azure-automation-run-as-account).
+1. Assign the same role to the managed identity to access the Azure resources that match the Run As account. Follow the steps in [Check the role assignment for the Azure Automation Run As account](manage-run-as-account.md#check-role-assignment-for-azure-automation-run-as-account). You can also use this [script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/AssignMIRunAsRoles.ps1) to enable the system-assigned identity in an Automation account and assign it the same set of permissions that the Azure Automation Run As account has.
Ensure that you don't assign high-privilege permissions like contributor or owner to the Run As account. Follow the role-based access control (RBAC) guidelines to limit the permissions from the default contributor permissions assigned to a Run As account by using [this script](manage-run-as-account.md#limit-run-as-account-permissions). For example, if the Automation account is required only to start or stop an Azure VM, then the permissions assigned to the Run As account need to be only for starting or stopping the VM. Similarly, assign read-only permissions if a runbook is reading from Azure Blob Storage. For more information, see [Azure Automation security guidelines](../automation/automation-security-guidelines.md#authentication-certificate-and-identities). 1. If you're using Classic Run As accounts, ensure that you have [migrated](../virtual-machines/classic-vm-deprecation.md) resources deployed through classic deployment model to Azure Resource Manager.
-1. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it will have the built-in contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition.
+1. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/Check-AutomationRunAsAccountRoleAssignments.ps1) to find out which Automation accounts are using a Run As account. If your Azure Automation accounts contain a Run As account, it has the built-in contributor role assigned to it by default. You can use the script to check the Azure Automation Run As accounts and determine if their role assignment is the default one or if it has been changed to a different role definition.
1. Use [this script](https://github.com/azureautomation/runbooks/blob/master/Utility/AzRunAs/IdentifyRunAsRunbooks.ps1) to find out if all runbooks in your Automation account are using the Run As account. ## Migrate from an Automation Run As account to a managed identity
For more information, see the sample runbook name **AzureAutomationTutorialWithI
## Next steps -- Review the [frequently asked questions for migrating to managed identities](automation-managed-identity-faq.md).
+- Review the [frequently asked questions for migrating to managed identities](automation-managed-identity-faq.md)
- If your runbooks aren't finishing successfully, review [Troubleshoot Azure Automation managed identity issues](troubleshoot/managed-identity.md).
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
Download for [Windows](https://download.microsoft.com/download/e/b/2/eb2f2d87-6382-463e-9d01-45b40c93c05b/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
+### Known issue
+
+You may encounter error `AZCM0026: Network Error` accompanied by a message about "no IP addresses found" when connecting a server to Azure Arc using a proxy server. At this time, Microsoft recommends using [agent version 1.30](#version-131june-2023) in networks that require a proxy server. Microsoft has also reverted the agent download URL [aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) to agent version 1.30 to allow existing installation scripts to succeed.
+
+If you've already installed agent version 1.31 and are seeing the error message above, [uninstall the agent](manage-agent.md#uninstall-from-control-panel) and run your installation script again. You do not need to downgrade to agent 1.30 if your agent is connected to Azure.
+
+Microsoft will update the release notes when this issue is resolved.
+ ### New features - Added support for Amazon Linux 2023
azure-functions Azfd0004 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0004.md
Title: "AZFD0004: Host ID collision" description: "AZFD0004: Host ID collision"--++ Last updated 01/28/2023
azure-functions Functions Target Based Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-target-based-scaling.md
In [runtime scale monitoring](functions-networking-options.md?tabs=azure-cli#pre
| Service Bus | 5.9.0 |
| Azure Cosmos DB | 4.1.0 |
-Additionally, target-based scaling is currently an **opt-in** feature with runtime scale monitoring. In order to use target-based scaling with the Premium plan when runtime scale monitoring is enabled, add the following app setting to your function app:
-
-| App Setting | Value |
-| -- | -- |
-|`TARGET_BASED_SCALING_ENABLED` | 1 |
- ## Dynamic concurrency support Target-based scaling introduces faster scaling, and uses defaults for _target executions per instance_. When using Service Bus or Storage queues, you can also enable [dynamic concurrency](functions-concurrency.md#dynamic-concurrency). In this configuration, the _target executions per instance_ value is determined automatically by the dynamic concurrency feature. It starts with limited concurrency and identifies the best setting over time.
For **v2.x+** of the Storage extension, modify the `host.json` setting `batchSiz
}
```
+> [!NOTE]
+> **Scale efficiency:** For the storage queue extension, messages with [visibilityTimeout](/rest/api/storageservices/put-message#uri-parameters) are still counted in _event source length_ by the Storage Queue APIs. This can cause overscaling of your function app. Consider using Service Bus queues for scheduled messages, [limiting scale out](event-driven-scaling.md#limit-scale-out), or not using visibilityTimeout in your solution.
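The scale-out arithmetic behind this note can be sketched directly. This is an illustrative calculation, not Functions-host code, and the function name and cap parameter are assumptions: target-based scaling computes desired instances as _event source length_ divided by _target executions per instance_, rounded up, so messages hidden by visibilityTimeout still inflate the result.

```javascript
// Illustrative sketch of target-based scale-out arithmetic (not host code):
// desired instances = ceil(event source length / target executions per instance),
// capped here by an assumed maximum instance count.
function desiredInstances(eventSourceLength, targetExecutionsPerInstance, maxInstances) {
  const raw = Math.ceil(eventSourceLength / targetExecutionsPerInstance);
  return Math.min(raw, maxInstances);
}

// 64 visible messages plus 36 hidden by visibilityTimeout are still reported
// as a queue length of 100, so the scale decision is made from 100.
console.log(desiredInstances(100, 16, 20)); // 7
```

The same arithmetic explains the overscaling caveat: the hidden messages can't be processed yet, but they still drive the instance count up.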
++ ### Azure Cosmos DB Azure Cosmos DB uses a function-level attribute, `MaxItemsPerInvocation`. The way you set this function-level attribute depends on your function language.
azure-maps Power Bi Visual On Object Interaction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-on-object-interaction.md
Visual like you interact with other Microsoft products or web applications.
## Use on-object interaction in your Power BI Visual
- On-object interaction can be used to edit chart titles, legends, bubble layers, Map style and Map controls.
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
There are two types of layers available in an Azure Maps Power BI visual. The fi
**3D column layer** Renders points as 3D columns on the map.+ ![3D column layer on map](media/power-bi-visual/3d-column-layer-thumb.png) :::column-end::: :::row-end:::
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
none 849 root txt REG 0,1 8632 0 16764 / (deleted)
rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog (deleted)
```
-### Rsyslog default configuration logs all facilities to /var/log/syslog
-On some popular distros (for example Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`) which will log events from nearly all facilities to disk at `/var/log/syslog`.
+### Rsyslog default configuration logs all facilities to /var/log/
+On some popular distros (for example, Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`) which logs events from nearly all facilities to disk at `/var/log/syslog`. Note that on the RedHat/CentOS family, syslog events are also stored under `/var/log/`, but in a different file: `/var/log/messages`.
-AMA doesn't rely on syslog events being logged to `/var/log/syslog`. Instead, it configures rsyslog to forward events over a socket directly to the azuremonitoragent service process (mdsd).
+AMA doesn't rely on syslog events being logged to `/var/log/`. Instead, it configures rsyslog service to forward events over a socket directly to the azuremonitoragent service process (mdsd).
#### Fix: Remove high-volume facilities from /etc/rsyslog.d/50-default.conf
-If you're sending a high log volume through rsyslog, consider modifying the default rsyslog config to avoid logging these events to this location `/var/log/syslog`. The events for this facility would still be forwarded to AMA because of the config in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
+If you're sending a high log volume through rsyslog and your system is set up to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility are still forwarded to AMA because rsyslog uses a separate forwarding configuration in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
-1. For example, to remove local4 events from being logged at `/var/log/syslog`, change this line in `/etc/rsyslog.d/50-default.conf` from this:
+1. For example, to remove local4 events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this:
```config
*.*;auth,authpriv.none -/var/log/syslog
```
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
Alerts triggered by these alert rules contain a payload that uses the [common al
1. On the **Actions** tab, select or create the required [action groups](./action-groups.md).
- > [!NOTE]
- > We're continually adding more regions for regional data processing.
- 1. (Optional) In the <a name="custom-props">**Custom properties**</a> section, if you've configured action groups for this alert rule, you can add custom properties in key:value pairs to the alert notification payload to add more information to it. Add the property **Name** and **Value** for the custom property you want included in the payload. You can also use custom properties to extract and manipulate data from alert payloads that use the common schema. You can use those values in the action group webhook or logic app.
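When the alert fires, the key:value pairs you define surface in the common alert schema payload under `customProperties`. A hypothetical, heavily abbreviated sketch of what an action group webhook might receive (all field values here are invented examples, not real defaults):

```json
{
  "data": {
    "essentials": {
      "alertRule": "cpu-high",
      "severity": "Sev3"
    },
    "customProperties": {
      "team": "payments-oncall",
      "runbookUrl": "https://contoso.example/runbooks/cpu-high"
    }
  }
}
```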
Alerts triggered by these alert rules contain a payload that uses the [common al
1. On the **Details** tab, define the **Project details**. - Select the **Subscription**. - Select the **Resource group**.
- - (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
- - North Europe
- - West Europe
- - Sweden Central
- - Germany West Central
-
- > [!NOTE]
- > We're continually adding more regions for regional data processing.
+ 1. Define the **Alert rule details**. #### [Metric alert](#tab/metric) 1. Select the **Severity**. 1. Enter values for the **Alert rule name** and the **Alert rule description**.
- 1. Select the **Region**.
+ 1. (Optional) If you're creating a metric alert rule that monitors a custom metric with the scope defined as one of the following regions and you want to make sure that the data processing for the alert rule takes place within that region, you can select to process the alert rule in one of these regions:
+ - North Europe
+ - West Europe
+ - Sweden Central
+ - Germany West Central
+
+ We're continually adding more regions for regional data processing.
+ 1. (Optional) In the **Advanced options** section, you can set several options. |Field |Description |
Alerts triggered by these alert rules contain a payload that uses the [common al
#### [Activity log alert](#tab/activity-log) 1. Enter values for the **Alert rule name** and the **Alert rule description**.
- 1. Select the **Region**.
1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
Alerts triggered by these alert rules contain a payload that uses the [common al
#### [Resource Health alert](#tab/resource-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**.
- 1. Select the **Region**.
1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule."::: #### [Service Health alert](#tab/service-health) 1. Enter values for the **Alert rule name** and the **Alert rule description**.
- 1. Select the **Region**.
1. Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-activity-log-rule-details-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new activity log alert rule.":::
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers)
} ``` ### JavaScript telemetry initializers
-*JavaScript*
+
+Insert a JavaScript telemetry initializer, if needed. For more information on the telemetry initializers for the Application Insights JavaScript SDK, see [Telemetry initializers](https://github.com/microsoft/ApplicationInsights-JS#telemetry-initializers).
+
+#### [SDK Loader Script](#tab/sdkloaderscript)
Insert a telemetry initializer by adding the onInit callback function in the [SDK Loader Script configuration](./javascript-sdk.md?tabs=sdkloaderscript#sdk-loader-script-configuration):
cfg: { // Application Insights Configuration
</script> ```
+#### [npm package](#tab/npmpackage)
+
+ ```js
+ import { ApplicationInsights } from '@microsoft/applicationinsights-web'
+
+ const appInsights = new ApplicationInsights({ config: {
+ connectionString: 'YOUR_CONNECTION_STRING'
+ /* ...Other Configuration Options... */
+ } });
+ appInsights.loadAppInsights();
+ // To insert a telemetry initializer, uncomment the following code.
+ /** var telemetryInitializer = (envelope) => {
+   envelope.data = envelope.data || {};
+   envelope.data.someField = 'This item passed through my telemetry initializer';
+ };
+ appInsights.addTelemetryInitializer(telemetryInitializer); **/
+ appInsights.trackPageView();
+ ```
+++ For a summary of the noncustom properties available on the telemetry item, see [Application Insights Export Data Model](./export-telemetry.md#application-insights-export-data-model). You can add as many initializers as you like. They're called in the order that they're added.
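The initializer pipeline described above (run in order, enrich or drop items) can be modeled standalone. This is a toy sketch, not the SDK implementation; only the behavior mirrors the documented contract, where an initializer that returns `false` drops the telemetry item:

```javascript
// Toy model of the telemetry-initializer pattern (not the SDK itself).
const initializers = [];
function addTelemetryInitializer(init) { initializers.push(init); }

function applyInitializers(envelope) {
  for (const init of initializers) {
    // Returning false from an initializer filters the item out.
    if (init(envelope) === false) return null;
  }
  return envelope;
}

// First initializer enriches every envelope.
addTelemetryInitializer((envelope) => {
  envelope.data = envelope.data || {};
  envelope.data.someField = 'This item passed through my telemetry initializer';
});
// Second initializer drops items named 'Noisy'.
addTelemetryInitializer((envelope) => envelope.name !== 'Noisy');

console.log(applyInitializers({ name: 'PageView' })); // enriched envelope
console.log(applyInitializers({ name: 'Noisy' }));    // null (filtered out)
```

Because initializers run in the order they were added, an enriching initializer placed before a filtering one still runs on items that are ultimately dropped.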
azure-monitor Asp Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md
You have now successfully configured server-side application monitoring. If you
## Add client-side monitoring
-The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#enable-application-insights) before the closing `</head>` tag of the page's HTML.
+The previous sections provided guidance on methods to automatically and manually configure server-side monitoring. To add client-side monitoring, use the [client-side JavaScript SDK](javascript.md). You can monitor any web page's client-side transactions by adding a [JavaScript SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started) before the closing `</head>` tag of the page's HTML.
Although it's possible to manually add the SDK Loader Script to the header of each HTML page, we recommend that you instead add the SDK Loader Script to a primary page. That action injects the SDK Loader Script into all pages of a site.
-For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=sdkloaderscript#enable-application-insights) from the article about client-side JavaScript SDK configuration.
+For the template-based ASP.NET MVC app from this article, the file that you need to edit is *_Layout.cshtml*. You can find it under **Views** > **Shared**. To add client-side monitoring, open *_Layout.cshtml* and follow the [SDK Loader Script-based setup instructions](./javascript-sdk.md?tabs=sdkloaderscript#get-started) from the article about client-side JavaScript SDK configuration.
## Troubleshooting
azure-monitor Distributed Tracing Telemetry Correlation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/distributed-tracing-telemetry-correlation.md
It's important to make sure the incoming and outgoing configurations are exactly
This feature is in `Microsoft.ApplicationInsights.JavaScript`. It's disabled by default. To enable it, use `distributedTracingMode` config. AI_AND_W3C is provided for backward compatibility with any legacy services instrumented by Application Insights. -- **[npm-based setup](./javascript-sdk.md?tabs=npmpackage#enable-application-insights)**
+- **[npm-based setup](./javascript-sdk.md?tabs=npmpackage#get-started)**
Add the following configuration:

```JavaScript
distributedTracingMode: DistributedTracingModes.W3C
```

-- **[SDK Loader Script-based setup](./javascript-sdk.md?tabs=sdkloaderscript#enable-application-insights)**
+- **[SDK Loader Script-based setup](./javascript-sdk.md?tabs=sdkloaderscript#get-started)**
Add the following configuration: ```
azure-monitor Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-sdk.md
-# Microsoft Azure Monitor Application Insights JavaScript SDK
+# Enable Azure Monitor Application Insights Real User Monitoring
-[Microsoft Azure Monitor Application Insights](app-insights-overview.md) JavaScript SDK allows you to monitor and analyze the performance of JavaScript web applications.
+The Microsoft Azure Monitor Application Insights JavaScript SDK allows you to monitor and analyze the performance of JavaScript web applications. This is commonly referred to as Real User Monitoring or RUM.
## Prerequisites
- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource) - An application that uses [JavaScript](/visualstudio/javascript)
-## Enable Application Insights
+## Get started
-To enable Application Insights, follow these steps.
+Follow the steps in this section to instrument your application with the Application Insights JavaScript SDK.
> [!TIP] > Good news! We're making it even easier to enable JavaScript. Check out where [SDK Loader Script injection by configuration is available](./codeless-overview.md#sdk-loader-script-injection-by-configuration)!
-### 1. Add the JavaScript code
+> [!NOTE]
+> If you have a React, React Native, or Angular application, you can [optionally add these plug-ins after you follow the steps to get started](#5-optional-advanced-sdk-configuration).
-Two methods are available to add the code to enable Application Insights via the Application Insights JavaScript SDK.
+### 1. Add the JavaScript code
-#### [SDK Loader Script](#tab/sdkloaderscript)
+Two methods are available to add the code to enable Application Insights via the Application Insights JavaScript SDK:
-The benefits of this method are:
-
-- You never have to update the SDK because you get the latest updates automatically.-- You have control over which pages you add the Application Insights JavaScript SDK to.
+| Method | When would I use this method? |
+|:-|:|
+| SDK Loader Script | For most customers, we recommend the SDK Loader Script because you never have to update the SDK and you get the latest updates automatically. Also, you have control over which pages you add the Application Insights JavaScript SDK to. |
+| npm package | You want to bring the SDK into your code and enable IntelliSense. This option is only needed for developers who require more custom events and configuration. |
-To add the SDK Loader Script and its optional configuration, follow these steps:
+#### [SDK Loader Script](#tab/sdkloaderscript)
1. Paste the SDK Loader Script at the top of each page for which you want to enable Application Insights.
To add the SDK Loader Script and its optional configuration, follow these steps:
#### [npm package](#tab/npmpackage)
-Use this method if you're creating your own bundles and you want to include the Application Insights code in your own bundle.
-
-The npm setup installs the JavaScript SDK as a dependency to your project and enables IntelliSense.
-
-This option is only needed for developers who require more custom events and configuration.
- 1. Use the following command to install the Microsoft Application Insights JavaScript SDK - Web package. ```sh
This option is only needed for developers who require more custom events and con
-### 2. Add your connection string
+### 2. Paste the connection string in your environment
-To add your connection string, follow these steps:
+To paste the connection string in your environment, follow these steps:
1. Navigate to the **Overview** pane of your Application Insights resource.
1. Locate the **Connection String**.
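For orientation only, the connection string you copy is a semicolon-delimited list of key=value fields (for example, `InstrumentationKey` and `IngestionEndpoint`). The SDK parses it for you when you pass `connectionString` in its config; this standalone sketch just shows the shape, and the helper function is an illustration, not an SDK API:

```javascript
// Illustrative only: split an Application Insights connection string into
// its key=value fields. The SDK does this internally; you never need to.
function parseConnectionString(connectionString) {
  const fields = {};
  for (const pair of connectionString.split(';')) {
    if (!pair) continue;
    const i = pair.indexOf('=');
    fields[pair.slice(0, i)] = pair.slice(i + 1);
  }
  return fields;
}

const fields = parseConnectionString(
  'InstrumentationKey=00000000-0000-0000-0000-000000000000;IngestionEndpoint=https://eastus-8.in.applicationinsights.azure.com/'
);
console.log(fields.IngestionEndpoint);
```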
To add SDK configuration, add each configuration option directly under `connecti
If you can't run the application or you aren't getting data as expected, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
-### Analytics
-
-To query your telemetry collected by the JavaScript SDK, select the **View in Logs (Analytics)** button. By adding a `where` statement of `client_Type == "Browser"`, you only see data from the JavaScript SDK. Any server-side telemetry collected by other SDKs is excluded.
-
-```kusto
-// average pageView duration by name
-let timeGrain=5m;
-let dataset=pageViews
-// additional filters can be applied here
-| where timestamp > ago(1d)
-| where client_Type == "Browser" ;
-// calculate average pageView duration for all pageViews
-dataset
-| summarize avg(duration) by bin(timestamp, timeGrain)
-| extend pageView='Overall'
-// render result in a chart
-| render timechart
-```
+### 5. (Optional) Advanced SDK configuration
-## Advanced SDK configuration
+If you want to use the extra features provided by plugins for specific frameworks, see:
-Additional information is available for the following advanced scenarios:
--- [JavaScript SDK advanced topics](javascript-sdk-advanced.md) - [React plugin](javascript-framework-extensions.md?tabs=react) - [React native plugin](javascript-framework-extensions.md?tabs=reactnative) - [Angular plugin](javascript-framework-extensions.md?tabs=reactnative)-- [Click Analytics plugin](javascript-feature-extensions.md)-
-## Frequently asked questions
-
-#### What is the SDK performance/overhead?
-
-The Application Insights JavaScript SDK has a minimal overhead on your website. At just 36 KB gzipped, and taking only ~15 ms to initialize, the SDK adds a negligible amount of load time to your website. The minimal components of the library are quickly loaded when you use the SDK, and the full script is downloaded in the background.
-
-Additionally, while the script is downloading from the CDN, all tracking of your page is queued, so you don't lose any telemetry during the entire life cycle of your page. This setup process provides your page with a seamless analytics system that's invisible to your users.
-
-#### What browsers are supported?
-
-![Chrome](https://raw.githubusercontent.com/alrra/browser-logos/master/src/chrome/chrome_48x48.png) | ![Firefox](https://raw.githubusercontent.com/alrra/browser-logos/master/src/firefox/firefox_48x48.png) | ![IE](https://raw.githubusercontent.com/alrra/browser-logos/master/src/edge/edge_48x48.png) | ![Opera](https://raw.githubusercontent.com/alrra/browser-logos/master/src/opera/opera_48x48.png) | ![Safari](https://raw.githubusercontent.com/alrra/browser-logos/master/src/safari/safari_48x48.png)
- | | | | |
-Chrome Latest Γ£ö | Firefox Latest Γ£ö | IE 9+ & Microsoft Edge Γ£ö<br>IE 8- Compatible | Opera Latest Γ£ö | Safari Latest Γ£ö |
-
-#### Where can I find code examples?
-
-For runnable examples, see [Application Insights JavaScript SDK samples](https://github.com/microsoft/ApplicationInsights-JS/tree/master/examples).
-
-#### How can I upgrade from the old version of Application Insights?
-
-For more information, see [Upgrade from old versions of the Application Insights JavaScript SDK](javascript-sdk-upgrade.md).
-
-#### What is the ES3/Internet Explorer 8 compatibility?
-
-We need to take necessary measures to ensure that this SDK continues to "work" and doesn't break the JavaScript execution when loaded by an older browser. It would be ideal to not support older browsers, but numerous large customers can't control which browser their users choose to use.
-
-This statement doesn't mean that we only support the lowest common set of features. We need to maintain ES3 code compatibility. New features need to be added in a manner that wouldn't break ES3 JavaScript parsing and added as an optional feature.
-
-See GitHub for full details on [Internet Explorer 8 support](https://github.com/Microsoft/ApplicationInsights-JS#es3ie8-compatibility).
-
-#### Is the Application Insights SDK open-source?
-
-Yes, the Application Insights JavaScript SDK is open source. To view the source code or to contribute to the project, see the [official GitHub repository](https://github.com/Microsoft/ApplicationInsights-JS).
-
-#### How can I update my third-party server configuration?
-
-The server side needs to be able to accept connections with those headers present. Depending on the `Access-Control-Allow-Headers` configuration on the server side, it's often necessary to extend the server-side list by manually adding `Request-Id`, `Request-Context`, and `traceparent` (W3C distributed header).
-
-Access-Control-Allow-Headers: `Request-Id`, `traceparent`, `Request-Context`, `<your header>`
-
-#### How can I disable distributed tracing?
-
-Distributed tracing can be disabled in configuration.
-
-#### What is collected automatically?
-
-When you enable the App Insights JavaScript SDK, the following data classes are collected automatically:
--- Uncaught exceptions in your app, including information on
- - Stack trace
- - Exception details and message accompanying the error
- - Line & column number of error
- - URL where error was raised
-- Network Dependency Requests made by your app XHR and Fetch (fetch collection is disabled by default) requests, include information on
- - Url of dependency source
- - Command & Method used to request the dependency
- - Duration of the request
- - Result code and success status of the request
- - ID (if any) of user making the request
- - Correlation context (if any) where request is made
-- User information (for example, Location, network, IP)-- Device information (for example, Browser, OS, version, language, model)-- Session information-
-> [!Note]
-> For some applications, such as single-page applications (SPAs), the duration may not be recorded and will default to 0.
-
-For more information, see the following link: https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/azure-monitor/app/data-retention-privacy.md
-## Troubleshooting
+> [!TIP]
+> We collect page views by default. But if you want to also collect clicks by default, consider adding the [Click Analytics plug-in](javascript-feature-extensions.md).
-See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-webpages-issues).
-## Release notes
+## Support
-Detailed release notes regarding updates and bug fixes can be found on [GitHub](https://github.com/microsoft/ApplicationInsights-JS/releases)
+- If you're having trouble with enabling Application Insights, see the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/javascript-sdk-troubleshooting).
+- For common questions about the JavaScript SDK, see the [FAQ](/azure/azure-monitor/faq#can-i-filter-out-or-modify-some-telemetry-).
+- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).
+- For a list of open issues related to the Application Insights JavaScript SDK, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-JS/issues).
## Next steps
Detailed release notes regarding updates and bug fixes can be found on [GitHub](
* [JavaScript telemetry initializers](api-filtering-sampling.md#javascript-telemetry-initializers)
* [Build-measure-learn](usage-overview.md)
* [JavaScript SDK advanced topics](javascript-sdk-advanced.md)
+* See the detailed [release notes](https://github.com/microsoft/ApplicationInsights-JS/releases) on GitHub for updates and bug fixes.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Examples of using the Python logging library can be found on [GitHub](https://gi
**Footnotes**
-- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of unhandled exceptions
+- <a name="FOOTNOTEONE">1</a>: Supports automatic reporting of *unhandled/uncaught* exceptions
- <a name="FOOTNOTETWO">2</a>: Supports OpenTelemetry Metrics
- <a name="FOOTNOTETHREE">3</a>: By default, logging is only collected at INFO level or higher. To change this setting, see the [configuration options](./java-standalone-config.md#autocollected-logging).
- <a name="FOOTNOTEFOUR">4</a>: By default, logging is only collected at WARNING level or higher.
azure-monitor Opentelemetry Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md
Title: OpenTelemetry with Azure Monitor overview
-description: This article provides an overview of how to use OpenTelemetry with Azure Monitor.
+ Title: Data Collection Basics of Azure Monitor Application Insights
+description: This article provides an overview of how to collect telemetry to send to Azure Monitor Application Insights.
Previously updated : 05/10/2023 Last updated : 06/08/2023
-# OpenTelemetry overview
+# Data Collection Basics of Azure Monitor Application Insights
-Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
+In the following sections, we cover some data collection basics of Azure Monitor Application Insights.
-Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF.
+## Instrumentation Options
-## Concepts
+At a basic level, "instrumenting" is simply enabling an application to capture telemetry.
-Telemetry, the data collected to observe your application, can be broken into three types or "pillars":
+There are two methods to instrument your application:
-- Distributed Tracing
-- Metrics
-- Logs
+- Automatic instrumentation (auto-instrumentation)
+- Manual instrumentation
-A complete observability story includes all three pillars. Our [Azure Monitor OpenTelemetry Distros for ASP.NET Core, Java, JavaScript (Node.js), and Python](opentelemetry-enable.md) include everything you need to power Application Performance Monitoring on Azure. The Distro itself is free to install, and you only pay for the data you ingest in Azure Monitor.
+**Auto-instrumentation** enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. See [Auto-Instrumentation Supported Environments and Languages](codeless-overview.md). When auto-instrumentation is available, it's the easiest way to enable Azure Monitor Application Insights.
-The following sources explain the three pillars:
+**Manual instrumentation** is coding against the Application Insights or OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. There are two options for manual instrumentation:
-- [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/)
-- [OpenTelemetry specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md)
-- [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan
+- [Application Insights SDKs](asp-net-core.md)
+- [Azure Monitor OpenTelemetry Distros](opentelemetry-enable.md)
-In the following sections, we'll cover some telemetry collection basics.
+While we see OpenTelemetry as our future direction, we have no plans to stop collecting data from older SDKs. We still have a way to go before our Azure Monitor OpenTelemetry Distros [reach feature parity with our Application Insights SDKs](../faq.yml#what-s-the-current-release-state-of-features-within-the-azure-monitor-opentelemetry-distro-). In many cases, customers may continue to use the Application Insights SDKs for quite some time.
-### Instrument your application
+> [!IMPORTANT]
+> "Manual" doesn't mean you'll be required to write complex code to define spans for distributed traces, although it remains an option. Instrumentation Libraries packaged into our Distros enable you to effortlessly capture telemetry signals across common frameworks and libraries. We're actively working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/) so these signals are available to customers who use the Azure Monitor OpenTelemetry Distro.
-At a basic level, "instrumenting" is simply enabling an application to capture telemetry.
+## Telemetry Types
-There are two methods to instrument your application:
+Telemetry, the data collected to observe your application, can be broken into three types or "pillars":
-- Manual instrumentation
-- Automatic instrumentation (auto-instrumentation)
+- Distributed Tracing
+- Metrics
+- Logs
-Manual instrumentation is coding against the OpenTelemetry API. In the context of a user, it typically refers to installing a language-specific SDK in an application. Manual instrumentation packages consist of [Azure Monitor OpenTelemetry Distros for .NET, Python, and JavaScript (Node.js)](opentelemetry-enable.md).
+A complete observability story includes all three pillars, and Application Insights further breaks down these pillars into tables based on our [data model](data-model-complete.md). Our Application Insights SDKs or Azure Monitor OpenTelemetry Distros include everything you need to power Application Performance Monitoring on Azure. The package itself is free to install, and you only pay for the data you ingest in Azure Monitor.
-> [!IMPORTANT]
-> "Manual" doesn't mean you'll be required to write complex code to define spans for distributed traces, although it remains an option. A rich and growing set of instrumentation libraries maintained by OpenTelemetry contributors will enable you to effortlessly capture telemetry signals across common frameworks and libraries.
->
-> A subset of OpenTelemetry instrumentation libraries are included in the Azure Monitor OpenTelemetry Distros, informed by customer feedback. We're also working to [instrument the most popular Azure Service SDKs using OpenTelemetry](https://devblogs.microsoft.com/azure-sdk/introducing-experimental-opentelemetry-support-in-the-azure-sdk-for-net/).
+The following sources explain the three pillars:
-Auto-instrumentation enables telemetry collection through configuration without touching the application's code. Although it's more convenient, it tends to be less configurable. It's also not available in all languages. The [Azure Monitor OpenTelemetry Java Distro](opentelemetry-enable.md?tabs=java) uses the auto-instrumentation method.
+- [OpenTelemetry community website](https://opentelemetry.io/docs/concepts/data-collection/)
+- [OpenTelemetry specifications](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/overview.md)
+- [Distributed Systems Observability](https://www.oreilly.com/library/view/distributed-systems-observability/9781492033431/ch04.html) by Cindy Sridharan
-### Send your telemetry
+## Telemetry Routing
There are two ways to send your data to Azure Monitor (or any vendor):
There are two ways to send your data to Azure Monitor (or any vendor):
A direct exporter sends telemetry in-process (from the application's code) directly to the Azure Monitor ingestion endpoint. The main advantage of this approach is onboarding simplicity.
-*The currently available Azure Monitor OpenTelemetry Distros rely on a direct exporter*.
+*The currently available Application Insights SDKs and Azure Monitor OpenTelemetry Distros rely on a direct exporter*.
-Alternatively, sending telemetry via an agent will provide a path for any OpenTelemetry-supported language to send to Azure Monitor via [Open Telemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md). Receiving OTLP will enable customers to observe applications written in languages beyond our [supported languages](platforms.md).
+Alternatively, sending application telemetry via an agent like the OpenTelemetry-Collector can provide benefits such as sampling and post-processing. Azure Monitor is developing an agent and ingestion endpoint that supports the [OpenTelemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/README.md), providing a path for applications written in any OpenTelemetry-supported language, beyond our [supported languages](platforms.md), to send telemetry to Azure Monitor.
> [!NOTE]
> For Azure Monitor's position on the [OpenTelemetry-Collector](https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/design.md), see the [OpenTelemetry FAQ](../faq.yml#can-i-use-the-opentelemetry-collector-).
-## Terms
+> [!TIP]
+> If you're planning to use the OpenTelemetry-Collector for sampling or additional data processing, you may be able to get the same capabilities built into Azure Monitor. Customers who have migrated to [Workspace-based Application Insights](convert-classic-resource.md) can benefit from [Ingestion-time Transformations](../essentials/data-collection-transformations.md). To enable them, follow the steps in the [tutorial](../logs/tutorial-workspace-transformations-portal.md), skipping the step that shows how to set up a diagnostic setting, because with workspace-based Application Insights it's already configured. If you're filtering out less than 50% of the overall volume, there's no additional cost. Beyond 50%, there's a charge, but it's much less than the standard per-GB ingestion charge.
+
+## OpenTelemetry
+
+Microsoft is excited to embrace [OpenTelemetry](https://opentelemetry.io/) as the future of telemetry instrumentation. You, our customers, have asked for vendor-neutral instrumentation, and we're pleased to partner with the OpenTelemetry community to create consistent APIs and SDKs across languages.
+
+Microsoft worked with project stakeholders from two previously popular open-source telemetry projects, [OpenCensus](https://opencensus.io/) and [OpenTracing](https://opentracing.io/). Together, we helped to create a single project, OpenTelemetry. OpenTelemetry includes contributions from all major cloud and Application Performance Management (APM) vendors and lives within the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/). Microsoft is a Platinum Member of the CNCF.
For terminology, see the [glossary](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/glossary.md) in the OpenTelemetry specifications.
-Some legacy terms in Application Insights are confusing because of the industry convergence on OpenTelemetry. The following table highlights these differences. Eventually, Application Insights terms will be replaced by OpenTelemetry terms.
+Some legacy terms in Application Insights are confusing because of the industry convergence on OpenTelemetry. The following table highlights these differences. Eventually, OpenTelemetry terms will replace Application Insights terms.
Application Insights | OpenTelemetry |
Auto-collectors | Instrumentation libraries
Channel | Exporter
Codeless / Agent-based | Auto-instrumentation
Traces | Logs
+Requests | Server Spans
+Dependencies | Other Span Types (Client, Internal, etc.)
## Next steps
-1. The following websites consist of language-by-language guidance to enable and configure Microsoft's OpenTelemetry-based offerings.
+Select your enablement approach:
-- [.NET](opentelemetry-enable.md?tabs=net)
-- [Java](opentelemetry-enable.md?tabs=java)
-- [JavaScript](opentelemetry-enable.md?tabs=nodejs)
-- [Python](opentelemetry-enable.md?tabs=python)
+- [Auto-instrumentation](codeless-overview.md)
+- Application Insights SDKs
+ - [ASP.NET](./asp-net.md)
+ - [ASP.NET Core](./asp-net-core.md)
+ - [Node.js](./nodejs.md)
+ - [Python](./opencensus-python.md)
+ - [JavaScript: Web](./javascript.md)
+- [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md)
-2. Check out the [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry).
+Check out the [Azure Monitor Application Insights FAQ](/azure/azure-monitor/faq#application-insights) and [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry) for more information.
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
For more information, see [Connection string configuration](./java-standalone-co
JavaScript doesn't support the use of environment variables. You have two options:
-- To use the SDK Loader Script, see [SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#enable-application-insights).
+- To use the SDK Loader Script, see [SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#get-started).
- Manual setup: ```javascript import { ApplicationInsights } from '@microsoft/applicationinsights-web'
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
The best experience is obtained by installing Application Insights both in your
* If you don't want to install server code, [create an Application Insights resource](./create-new-resource.md).
-1. **Webpage code:** Add the [SDK Loader Script](./javascript-sdk.md?tabs=sdkloaderscript#enable-application-insights) to your webpage before the closing ``</head>``. Replace the connection string with the appropriate value for your Application Insights resource.
+1. **Webpage code:** Use the JavaScript SDK to collect data from webpages. See [Get started with the JavaScript SDK](./javascript-sdk.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
Learn more about [custom events](./api-custom-events-metrics.md#trackevent) and
In the Users, Sessions, and Events tools, you can slice and dice custom events by user, event name, and properties.

:::image type="content" source="./media/usage-overview/events.png" alt-text="Screenshot that shows the Events tab filtered by AnalyticsItemsOperation and split by AppID." lightbox="./media/usage-overview/events.png":::
+
+Whenever you're in any usage experience, click the **Open the last run query** icon to take you back to the underlying query.
++
+You can then modify the underlying query to get the kind of information you're looking for.
+
+Here's an example of an underlying query about page views. Paste it directly into the query editor to try it out.
+
+```kusto
+// average pageView duration by name
+let timeGrain=5m;
+let dataset=pageViews
+// additional filters can be applied here
+| where timestamp > ago(1d)
+| where client_Type == "Browser" ;
+// calculate average pageView duration for all pageViews
+dataset
+| summarize avg(duration) by bin(timestamp, timeGrain)
+| extend pageView='Overall'
+// render result in a chart
+| render timechart
+```
## Design the telemetry with the app
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
description: Structure of transformation in Azure Monitor including limitations
Previously updated : 06/29/2022 Last updated : 06/09/2023 ms.reviwer: nikeist
source
| extend Galaxy_CF = galaxyDictionary[Location]
```
-### has operator
-Transformations don't currently support [has](/azure/data-explorer/kusto/query/has-operator). Use [contains](/azure/data-explorer/kusto/query/contains-operator) which is supported and performs similar functionality.
- ### Handling dynamic data Consider the following input with [dynamic data](/azure/data-explorer/kusto/query/scalar-data-types/dynamic):
The following [String operators](/azure/data-explorer/kusto/query/datatypes-stri
- !contains
- contains_cs
- !contains_cs
+- has
+- !has
+- has_cs
+- !has_cs
- startswith
- !startswith
- startswith_cs
The following [String operators](/azure/data-explorer/kusto/query/datatypes-stri
- in
- !in
+
#### Bitwise operators
The following [Bitwise operators](/azure/data-explorer/kusto/query/binoperators) are supported.
azure-monitor Prometheus Rule Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md
There are two types of Prometheus rules as described in the following table.
| Type | Description |
|:|:|
-| Alert | Alert rules let you create an Azure Monitor alert based on the results of a Prometheus Query Language (Prom QL) query. |
-| Recording | Recording rules allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Querying the precomputed result will then often be much faster than executing the original expression every time it's needed. This is especially useful for dashboards, which need to query the same expression repeatedly every time they refresh, or for use in alert rules, where multiple alert rules may be based on the same complex query. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. |
-
-## View Prometheus rule groups
-You can view the rule groups and their included rules in the Azure portal by selecting **Rule groups** from the Azure Monitor workspace.
---
-## Enable rules
-To enable or disable a rule, click on the rule in the Azure portal. Select either **Enable** or **Disable** to change its status.
--
-> [!NOTE]
-> After you disable or re-enable a rule or a rule group, it may take few minutes for the rule group list to reflect the updated status of the rule or the group.
-
+| Alert | [Alert rules](https://aka.ms/azureprometheus-promio-alertrules) let you create an Azure Monitor alert based on the results of a Prometheus Query Language (PromQL) query. Alerts fired by Azure Managed Prometheus alert rules are processed and trigger notifications in a similar way to other Azure Monitor alerts.|
+| Recording |[Recording rules](https://aka.ms/azureprometheus-promio-recrules) allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. |
## Create Prometheus rules
-In the public preview, rule groups, recording rules and alert rules are configured using Azure Resource Manager (ARM) templates, the API, and provisioning tools. This uses a new resource called **Prometheus Rule Group**. You can create and configure rule group resources where the alert rules and recording rules are defined as part of the rule group properties. Azure Monitor Managed Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md).
--
-You can use a Resource Manager template to create and configure Prometheus rule groups, alert rules, and recording rules. Resource Manager templates enable you to programmatically set up alert and recording rules in a consistent and reproducible way across all your environments.
-
-The basic steps are as follows:
-
-1. Use the templates below as a JSON file that describes how to create the rule group.
-2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md).
+Azure Managed Prometheus rule groups, recording rules, and alert rules can be created and configured using the Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties. Prometheus rule groups are defined with the scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, the API, Azure CLI, or PowerShell.
> [!NOTE]
> For your AKS or Arc-enabled Kubernetes clusters, you can use some of the recommended alert rules. See the pre-defined alert rules [here](../containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules).
-
### Limiting rules to a specific cluster
You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property.
-You should try to limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters and if there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster.
+You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster.
- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.-- If `clusterName` is not specified for a specific rule group, the rules in the group will query all the data in the workspace from all clusters.
+- If `clusterName` isn't specified for a specific rule group, the rules in the group query all the data in the workspace from all clusters.
+### Creating Prometheus rule group using Resource Manager template
+
+You can use a Resource Manager template to create and configure Prometheus rule groups, alert rules, and recording rules. Resource Manager templates enable you to programmatically create and configure rule groups in a consistent and reproducible way across all your environments.
+
+The basic steps are as follows:
+
+1. Use the following template as a JSON file that describes how to create the rule group.
+2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md).
### Template example for a Prometheus rule group
-Below is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group.
+Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group.
``` json {
Below is a sample template that creates a Prometheus rule group, including one r
The following tables describe each of the properties in the rule definition. ### Rule group
-The rule group will always have the following properties, whether it includes an alerting rule, a recording rule, or both.
+The rule group contains the following properties.
| Name | Required | Type | Description |
|:|:|:|:|
The rule group will always have the following properties, whether it includes an
| `properties.interval` | False | string | Group evaluation interval. Default = PT1M |

### Recording rules
-The `rules` section will have the following properties for recording rules.
+The `rules` section contains the following properties for recording rules.
| Name | Required | Type | Description |
|:|:|:|:|
-| `record` | True | string | Recording rule name. This is the name that will be used for the new time series. |
+| `record` | True | string | Recording rule name. This name is used for the new time series. |
| `expression` | True | string | PromQL expression to calculate the new time series value. |
-| `labels` | True | string | Prometheus rule labels key-value pairs, will be added to the recorded time series. |
+| `labels` | True | string | Prometheus rule labels key-value pairs. These labels are added to the recorded time series. |
| `enabled` | False | boolean | Enable/disable group. Default is true. |
-
### Alerting rules
-The `rules` section will have the following properties for alerting rules.
+The `rules` section contains the following properties for alerting rules.
| Name | Required | Type | Description | Notes |
|:|:|:|:|:|
| `alert` | False | string | Alert rule name | |
| `expression` | True | string | PromQL expression to evaluate. | |
| `for` | False | string | Alert firing timeout. Values - 'PT1M', 'PT5M' etc. |
-| `labels` | False | object | labels key-value pairs | Prometheus alert rule labels, will be added to the fired alert. |
+| `labels` | False | object | labels key-value pairs | Prometheus alert rule labels. These labels are added to alerts fired by this rule. |
| `rules.annotations` | False | object | Annotations key-value pairs to add to the alert. | |
| `enabled` | False | boolean | Enable/disable group. Default is true. | |
| `rules.severity` | False | integer | Alert severity. 0-4, default is 3 (informational) |
The `rules` section will have the following properties for alerting rules.
| `rules.resolveConfigurations.timeToResolve` | False | string | Alert auto resolution timeout. Default = "PT5M" | |
| `rules.action[].actionGroupId` | false | string | One or more action group resource IDs. Each is activated when an alert is fired. |
+### Creating Prometheus rule group using Azure CLI
+
+You can use Azure CLI to create and configure Prometheus rule groups, alert rules, and recording rules. The following code examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
+
+1. In the [portal](https://portal.azure.com/), select **Cloud Shell**. At the prompt, use the commands that follow.
+
+2. To create a Prometheus rule group, use the `az alerts-management prometheus-rule-group create` command. You can see detailed documentation on the Prometheus rule group create command in the `az alerts-management prometheus-rule-group create` section of the [Azure CLI commands for creating and managing Prometheus rule groups](/cli/azure/alerts-management/prometheus-rule-group#commands).
+
+Example: Create a new Prometheus rule group with rules
+
+```azurecli
+ az alerts-management prometheus-rule-group create -n TestPrometheusRuleGroup -g TestResourceGroup -l westus --enabled --description "test" --interval PT10M --scopes "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/testrg/providers/microsoft.monitor/accounts/testaccount" --rules [{"record":"test","expression":"test","labels":{"team":"prod"}},{"alert":"Billing_Processing_Very_Slow","expression":"test","enabled":"true","severity":2,"for":"PT5M","labels":{"team":"prod"},"annotations":{"annotationName1":"annotationValue1"},"resolveConfiguration":{"autoResolved":"true","timeToResolve":"PT10M"},"actions":[{"actionGroupId":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/microsoft.insights/actionGroups/test-action-group-name1","actionProperties":{"key11":"value11","key12":"value12"}},{"actionGroupId":"/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testrg/providers/microsoft.insights/actionGroups/test-action-group-name2","actionProperties":{"key21":"value21","key22":"value22"}}]}]
+```
+
+### Create a new Prometheus rule group with PowerShell
+
+To create a Prometheus rule group using PowerShell, use the [new-azprometheusrulegroup](/powershell/module/az.alertsmanagement/new-azprometheusrulegroup) cmdlet.
+
+Example: Create Prometheus rule group definition with rules.
+
+```powershell
+$rule1 = New-AzPrometheusRuleObject -Record "job_type:billing_jobs_duration_seconds:99p5m"
+$action = New-AzPrometheusRuleGroupActionObject -ActionGroupId /subscriptions/fffffffff-ffff-ffff-ffff-ffffffffffff/resourceGroups/MyresourceGroup/providers/microsoft.insights/actiongroups/MyActionGroup -ActionProperty @{"key1" = "value1"}
+$Timespan = New-TimeSpan -Minutes 15
+$rule2 = New-AzPrometheusRuleObject -Alert Billing_Processing_Very_Slow -Expression "job_type:billing_jobs_duration_seconds:99p5m > 30" -Enabled $false -Severity 3 -For $Timespan -Label @{"team"="prod"} -Annotation @{"annotation" = "value"} -ResolveConfigurationAutoResolved $true -ResolveConfigurationTimeToResolve $Timespan -Action $action
+$rules = @($rule1, $rule2)
+$scope = "/subscriptions/fffffffff-ffff-ffff-ffff-ffffffffffff/resourcegroups/MyresourceGroup/providers/microsoft.monitor/accounts/MyAccounts"
+New-AzPrometheusRuleGroup -ResourceGroupName MyresourceGroup -RuleGroupName MyRuleGroup -Location eastus -Rule $rules -Scope $scope -Enabled
+```
+
+## View Prometheus rule groups
+You can view the rule groups and their included rules in the Azure portal by selecting **Rule groups** from the Azure Monitor workspace.
+## Disable and enable rules
+To enable or disable a rule, select the rule in the Azure portal. Select either **Enable** or **Disable** to change its status.
+> [!NOTE]
+> After you disable or re-enable a rule or a rule group, it may take a few minutes for the rule group list to reflect the updated status of the rule or the group.
 ## Next steps - [Learn more about Azure alerts](../alerts/alerts-types.md). - [Prometheus documentation for recording rules](https://aka.ms/azureprometheus-promio-recrules). - [Prometheus documentation for alerting rules](https://aka.ms/azureprometheus-promio-alertrules).
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
Title: Azure Monitor customer-managed key
description: Information and steps to configure Customer-managed key to encrypt data in your Log Analytics workspaces using an Azure Key Vault key. Previously updated : 05/01/2022 Last updated : 06/01/2023 # Azure Monitor customer-managed key
-Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. Customer-managed keys in Azure Monitor gives you greater flexibility to manage access controls to logs. Once configure, new data for linked workspaces is encrypted with your key stored in [Azure Key Vault](../../key-vault/general/overview.md), or [Azure Key Vault Managed "HSM"](../../key-vault/managed-hsm/overview.md).
+Data in Azure Monitor is encrypted with Microsoft-managed keys. You can use your own encryption key to protect the data and saved queries in your workspaces. Customer-managed keys in Azure Monitor give you greater flexibility to manage access controls to logs. Once configured, new data for linked workspaces is encrypted with your key stored in [Azure Key Vault](../../key-vault/general/overview.md), or [Azure Key Vault Managed HSM](../../key-vault/managed-hsm/overview.md).
We recommend you review [Limitations and constraints](#limitationsandconstraints) below before configuration.
Customer-managed key configuration isn't supported in Azure portal currently and
A [portfolio of Azure Key Management products](../../key-vault/managed-hsm/mhsm-control-data.md#portfolio-of-azure-key-management-products) lists the vaults and managed HSMs that can be used.
-Create or use an existing Azure Key Vault in the region that the cluster is planed, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault, both *Soft delete* and *Purge protection* should be enabled.
+Create or use an existing Azure Key Vault in the region where the cluster is planned, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under **Properties** in your Key Vault; both **Soft delete** and **Purge protection** should be enabled.
[![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png "Screenshot of Key Vault soft delete and purge protection properties")](media/customer-managed-keys/soft-purge-protection.png#lightbox)
These settings can be updated in Key Vault via CLI and PowerShell:
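For example, purge protection can be enabled on an existing vault with the Azure CLI (soft delete is enabled by default on newly created vaults; the vault and resource group names here are placeholders):

```azurecli
az keyvault update --name MyKeyVault --resource-group MyResourceGroup --enable-purge-protection true
```

Note that purge protection, once enabled, can't be turned off again on that vault.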
## Create cluster
-Clusters uses managed identity for data encryption with your Key Vault. Configure identity `type` property to `SystemAssigned` when creating your cluster to allow access to your Key Vault for "wrap" and "unwrap" operations.
+Clusters use managed identity for data encryption with your Key Vault. Configure the identity `type` property to `SystemAssigned` when creating your cluster to allow access to your Key Vault for "wrap" and "unwrap" operations.
Identity settings in cluster for System-assigned managed identity ```json
Follow the procedure illustrated in [Dedicated Clusters article](./logs-dedicate
## Grant Key Vault permissions
-There are two permission models in Key Vault to grant permissions to your cluster and underlay storageΓÇöΓÇöVault access policy, and Azure role-based access control.
+There are two permission models in Key Vault to grant access to your cluster and underlying storage: Azure role-based access control (Azure RBAC), and Vault access policies (legacy).
-1. Vault access policy
+1. Assign an Azure RBAC role (recommended)
+
+ To add role assignments, you must have `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../role-based-access-control/built-in-roles.md#owner).
- Open your Key Vault in Azure portal and click *Access Policies*, select *Vault access policy*, then click *+ Add Access Policy* to create a policy with these settings:
+ Open your Key Vault in the Azure portal, select **Access configuration** in **Settings**, and select the **Azure role-based access control** option. Then open **Access control (IAM)** and add the **Key Vault Crypto Service Encryption User** role assignment.
- - Key permissionsΓÇöselect *Get*, *Wrap Key* and *Unwrap Key*.
+ [<img src="media/customer-managed-keys/grant-key-vault-permissions-rbac-8bit.png" alt="Screenshot of Grant Key Vault RBAC permissions." title="Grant Key Vault RBAC permissions" width="80%"/>](media/customer-managed-keys/grant-key-vault-permissions-rbac-8bit.png#lightbox)
+
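   The same role assignment can also be made with the Azure CLI; the principal ID and Key Vault resource ID below are placeholders:

   ```azurecli
   az role assignment create \
     --role "Key Vault Crypto Service Encryption User" \
     --assignee "<cluster-managed-identity-principal-id>" \
     --scope "<key-vault-resource-id>"
   ```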
+1. Assign vault access policy (legacy)
+
+ Open your Key Vault in Azure portal and click **Access Policies**, select **Vault access policy**, then click **+ Add Access Policy** to create a policy with these settings:
+
+ - Key permissions: select **Get**, **Wrap Key**, and **Unwrap Key**.
   - Select principal: depending on the identity type used in the cluster (system-assigned or user-assigned managed identity)
     - System-assigned managed identity: enter the cluster name or cluster principal ID
     - User-assigned managed identity: enter the identity name
- [![grant Key Vault permissions](media/customer-managed-keys/grant-key-vault-permissions-8bit.png "Screenshot of Key Vault access policy permissions")](media/customer-managed-keys/grant-key-vault-permissions-8bit.png#lightbox)
-
- The *Get* permission is required to verify that your Key Vault is configured as recoverable to protect your key and the access to your Azure Monitor data.
+ [<img src="media/customer-managed-keys/grant-key-vault-permissions-8bit.png" alt="Screenshot of Grant Key Vault access policy permissions." title="Grant Key Vault access policy permissions" width="80%"/>](media/customer-managed-keys/grant-key-vault-permissions-8bit.png#lightbox)
-2. Azure role-based access control
- Open your Key Vault in Azure portal and click *Access Policies*, select *Azure role-based access control*, then enter *Access control (IAM)* and add *Key Vault Crypto Service Encryption User* role assignment.
+ The **Get** permission is required to verify that your Key Vault is configured as recoverable to protect your key and the access to your Azure Monitor data.
## Update cluster with key identifier details
Content-type: application/json
**Response**
-It takes the propagation of the key a while to complete. You can check the update state by sending GET request on the cluster and look at the *KeyVaultProperties* properties. Your recently updated key should return in the response.
+Key propagation takes a while to complete. You can check the update state by sending a GET request on the cluster and looking at the **KeyVaultProperties** properties. Your recently updated key should return in the response.
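A sketch of the relevant fragment of that GET response (the property names follow the dedicated cluster resource schema; the key values are placeholders):

```json
{
  "properties": {
    "keyVaultProperties": {
      "keyVaultUri": "https://my-key-vault.vault.azure.net",
      "keyName": "my-key",
      "keyVersion": "my-key-version"
    }
  }
}
```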
Response to GET request when key update is completed: 202 (Accepted) and header
All your data remains accessible after the key rotation operation. Data always e
## Customer-managed key for saved queries and log alerts
-The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information is kept protected under Customer-managed key policy and you need save your queries encrypted with your key. Azure Monitor enables you to store *saved-searches* and *log alerts* queries encrypted with your key in your own Storage Account when connected to your workspace.
+The query language used in Log Analytics is expressive and can contain sensitive information in comments, or in the query syntax. Some organizations require that such information be kept protected under a Customer-managed key policy, and you need to save your queries encrypted with your key. Azure Monitor enables you to store saved queries and log alerts encrypted with your key in your own Storage Account when linked to your workspace.
> [!NOTE]
-> Log Analytics queries can be saved in various stores depending on the scenario used. Queries remain encrypted with Microsoft key ("MMK") in the following scenarios regardless Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
+> Queries remain encrypted with Microsoft key ("MMK") in the following scenarios regardless of the Customer-managed key configuration: Workbooks in Azure Monitor, Azure dashboards, Azure Logic App, Azure Notebooks and Automation Runbooks.
-When linking your own storage (BYOS) to workspace, the service stores *saved-searches* and *log alerts* queries to your Storage Account. With the control on Storage Account and the [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect *saved-searches* and *log alerts* with Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account.
+When you link your Storage Account for saved queries, the service stores saved queries and log alerts queries in your Storage Account. Because you control the Storage Account's [encryption-at-rest policy](../../storage/common/customer-managed-keys-overview.md), you can protect saved queries and log alerts with a Customer-managed key. You will, however, be responsible for the costs associated with that Storage Account.
**Considerations before setting Customer-managed key for queries** * You need to have "write" permissions on your workspace and Storage Account. * Make sure to create your Storage Account in the same region as your Log Analytics workspace is located.
-* The *saves searches* in storage is considered as service artifacts and their format may change.
-* Existing *saves searches* are removed from your workspace. Copy any *saves searches* that you need before this configuration. You can view your *saved-searches* using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
+* The saved queries in storage are considered service artifacts and their format may change.
+* Linking a Storage Account for queries removes existing saved queries from your workspace. Copy any saved queries that you need before this configuration. You can view your saved queries using [PowerShell](/powershell/module/az.operationalinsights/get-azoperationalinsightssavedsearch).
* Query 'history' and 'pin to dashboard' aren't supported when linking Storage Account for queries.
-* You can link a single Storage Account to a workspace, which can be used for both *saved-searches* and *log alerts* queries.
+* You can link a single Storage Account to a workspace, which can be used for both saved queries and log alerts queries.
* Fired log alerts will not contain search results or alert query. You can use [alert dimensions](../alerts/alerts-unified-log.md#split-by-alert-dimensions) to get context in the fired alerts.
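As noted in the considerations above, existing saved queries can be listed with PowerShell before the Storage Account is linked; the resource group and workspace names here are placeholders:

```powershell
Get-AzOperationalInsightsSavedSearch -ResourceGroupName MyResourceGroup -WorkspaceName MyWorkspace
```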
-**Configure BYOS for saved-searches queries**
+**Configure BYOS for saved queries**
-Link a Storage Account for *Query* to keep *saved-searches* queries in your Storage Account.
+Link a Storage Account for queries to keep saved queries in your Storage Account.
# [Azure portal](#tab/portal)
Customer-Managed key is provided on dedicated cluster and these operations are r
 - [Double encryption](../../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption) is configured automatically for clusters created from October 2020 in supported regions. You can verify if your cluster is configured for double encryption by sending a GET request on the cluster and observing that the `isDoubleEncryptionEnabled` value is `true` for clusters with Double encryption enabled. - If you create a cluster and get an error: "region-name doesn't support Double Encryption for clusters", you can still create the cluster without Double encryption, by adding `"properties": {"isDoubleEncryptionEnabled": false}` in the REST request body.
- - Double encryption settings can not be changed after the cluster has been created.
+ - Double encryption settings cannot be changed after the cluster has been created.
Deleting a linked workspace is permitted while linked to cluster. If you decide to [recover](./delete-workspace.md#recover-a-workspace) the workspace during the [soft-delete](./delete-workspace.md#soft-delete-behavior) period, it returns to previous state and remains linked to cluster.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
A data export rule defines the destination and tables for which data is exported
1. Follow the steps, and then select **Create**.
- <img src="media/logs-data-export/export-create-2.png" alt="Screenshot of data export rule configuration." title="Export rule configuration" width="80%"/>
+ [<img src="media/logs-data-export/export-create-2.png" alt="Screenshot of export rule configuration." title="Export rule configuration" width="80%"/>](media/logs-data-export/export-create-2.png#lightbox)
# [PowerShell](#tab/powershell)
If the data export rule includes an unsupported table, the configuration will su
| ASCDeviceEvents | | | ASimDnsActivityLogs | | | ASimNetworkSessionLogs | |
-| ASimNetworkSessionLogs,ASimWebSessionLogs | |
+| ASimNetworkSessionLogs, ASimWebSessionLogs | |
| ASimWebSessionLogs | | | ATCExpressRouteCircuitIpfix | | | AuditLogs | |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 06/09/2023 Last updated : 06/12/2023 # Solution architectures using Azure NetApp Files
This section provides references for solutions for Linux OSS applications and da
### Product Lifecycle Management * [Use Teamcenter PLM with Azure NetApp Files](/azure/architecture/example-scenario/manufacturing/teamcenter-plm-netapp-files)
+* [Siemens Teamcenter baseline architecture](/azure/architecture/example-scenario/manufacturing/teamcenter-baseline)
### Machine Learning * [Cloudera Machine Learning](https://docs.cloudera.com/machine-learning/cloud/requirements-azure/topics/ml-requirements-azure.html)
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory connection. For example, you need to have an Active Directory connection before you can create an [SMB volume](azure-netapp-files-create-volumes-smb.md), an [NFSv4.1 Kerberos volume](configure-kerberos-encryption.md), or a [dual-protocol volume](create-volumes-dual-protocol.md). This article shows you how to create and manage Active Directory connections for Azure NetApp Files. - ## <a name="requirements-for-active-directory-connections"></a>Requirements and considerations for Active Directory connections > [!IMPORTANT]
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
If you're using Azure NetApp Files with Azure Active Directory Domain Services,
## How do the Netlogon protocol changes in the April 2023 Windows Update affect Azure NetApp Files?
-The Windows April 2023 update will include a patch for Netlogon protocol changes, however these changes are not enforced at this time.
-
-You should not modify the `RequireSeal` value to 2 at this time. Azure NetApp Files adds support for setting `RequireSeal` to 2 in May 2023.
+The Windows April 2023 update included a patch for Netlogon protocol changes, which were not enforced at release.
-The enforcement of setting `RequireSeal` value to 2 will occur by default with the June 2023 Azure update.
+The upgrades to the Azure NetApp Files storage resource have been completed. The enforcement of setting the `RequireSeal` value to 2 will occur by default with the June 2023 Azure update. No action is required regarding the June 13 enforcement phase.
-For more information, see [KB5021130: How to manage the Netlogon protocol changes related to CVE-2022-38023](https://support.microsoft.com/topic/kb5021130-how-to-manage-the-netlogon-protocol-changes-related-to-cve-2022-38023-46ea3067-3989-4d40-963c-680fd9e8ee25#timing5021130).
+For more information about this update, see [KB5021130: How to manage the Netlogon protocol changes related to CVE-2022-38023](https://support.microsoft.com/topic/kb5021130-how-to-manage-the-netlogon-protocol-changes-related-to-cve-2022-38023-46ea3067-3989-4d40-963c-680fd9e8ee25#timing5021130).
## What versions of Windows Server Active Directory are supported?
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Once you've [created an Active Directory connection](create-active-directory-connections.md) in Azure NetApp Files, you can modify it. When you're modifying an Active Directory connection, not all configurations are modifiable. - ## Modify Active Directory connections 1. Select **Active Directory connections**. Then, select **Edit** to edit an existing AD connection.
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
Proper Active Directory Domain Services (AD DS) design and planning are key to s
This article provides recommendations to help you develop an AD DS deployment strategy for Azure NetApp Files. Before reading this article, you need to have a good understanding about how AD DS works on a functional level. - ## <a name="ad-ds-requirements"></a> Identify AD DS requirements for Azure NetApp Files Before you deploy Azure NetApp Files volumes, you must identify the AD DS integration requirements for Azure NetApp Files to ensure that Azure NetApp Files is well connected to AD DS. _Incorrect or incomplete AD DS integration with Azure NetApp Files might cause client access interruptions or outages for SMB, dual-protocol, or Kerberos NFSv4.1 volumes_.
backup Backup Azure Enhanced Soft Delete Configure Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-enhanced-soft-delete-configure-manage.md
Title: Configure and manage enhanced soft delete for Azure Backup (preview) description: This article describes about how to configure and manage enhanced soft delete for Azure Backup. Previously updated : 05/15/2023 Last updated : 06/12/2023
This article describes how to configure and use enhanced soft delete to protect
- Enhanced soft delete is supported for Recovery Services vaults and Backup vaults. - It's supported for new and existing vaults. - All existing Recovery Services vaults in the [preview regions](backup-azure-enhanced-soft-delete-about.md#supported-scenarios) are upgraded with an option to use enhanced soft delete.-
+- For hybrid backups (using MARS, DPM, or MABS), enabling always-on soft delete disallows server deregistration and deletion of backups via the Azure portal. If you don't want to retain the backed-up data, we recommend that you not enable *always-on soft delete* for the vault, or that you perform *stop protection with delete data* before the server is decommissioned.
## Enable soft delete with always-on state
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
description: Learn how to configure Bastion to use Kerberos authentication via t
Previously updated : 08/03/2022 Last updated : 06/12/2023
-# How to configure Bastion for Kerberos authentication using the Azure portal (Preview)
+# Configure Bastion for Kerberos authentication using the Azure portal (Preview)
This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Title: Chaos Studio fault and action library
-description: Understand the available actions you can use with Chaos Studio, including any prerequisites and parameters.
+ Title: Azure Chaos Studio Preview fault and action library
+description: Understand the available actions you can use with Azure Chaos Studio Preview, including any prerequisites and parameters.
-# Chaos Studio fault and action library
+# Azure Chaos Studio Preview fault and action library
-The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Chaos Studio](./chaos-studio-fault-providers.md).
+The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio Preview](./chaos-studio-fault-providers.md).
## Time delay
Currently, the Windows agent doesn't reduce memory pressure when other applicati
|-|-| | Capability name | TimeChange-1.0 | | Target type | Microsoft-Agent |
-| Supported OS types | Windows. |
+| Supported OS types | Windows |
| Description | Changes the system time of the VM where it's injected and resets the time at the end of the experiment or if the experiment is canceled. | | Prerequisites | None. | | Urn | urn:csci:microsoft:agent:timeChange/1.0 |
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Title: Create an experiment that uses an AKS Chaos Mesh fault using Azure Chaos Studio with the Azure CLI
-description: Create an experiment that uses an AKS Chaos Mesh fault with the Azure CLI
+ Title: Create a chaos experiment using a Chaos Mesh fault with Azure CLI
+description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio Preview with the Azure CLI.
Last updated 04/21/2022
ms.devlang: azurecli
# Create a chaos experiment that uses a Chaos Mesh fault with the Azure CLI
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause periodic Azure Kubernetes Service pod failures on a namespace using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against service unavailability when there are sporadic failures.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against service unavailability when there are sporadic failures.
-Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. These same steps can be used to set up and run an experiment for any AKS Chaos Mesh fault.
+Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes, to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. You can use these same steps to set up and run an experiment for any AKS Chaos Mesh fault.
## Prerequisites -- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] -- An AKS cluster with Linux node pools. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An AKS cluster with Linux node pools. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
> [!WARNING] > AKS Chaos Mesh faults are only supported on Linux node pools.
-## Launch Azure Cloud Shell
+## Open Azure Cloud Shell
-The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
-To open the Cloud Shell, just select **Try it** from the upper right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [https://shell.azure.com/bash](https://shell.azure.com/bash). Select **Copy** to copy the blocks of code, paste it into the Cloud Shell, and select **Enter** to run it.
+To open Cloud Shell, select **Try it** in the upper-right corner of a code block. You can also open Cloud Shell in a separate browser tab by going to [Bash](https://shell.azure.com/bash). Select **Copy** to copy a block of code, paste it into Cloud Shell, and select **Enter** to run it.
If you prefer to install and use the CLI locally, this tutorial requires Azure CLI version 2.0.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). > [!NOTE]
-> These instructions use a Bash terminal in Azure Cloud Shell. Some commands may not work as described if running the CLI locally or in a PowerShell terminal.
+> These instructions use a Bash terminal in Cloud Shell. Some commands might not work as described if you run the CLI locally or in a PowerShell terminal.
## Set up Chaos Mesh on your AKS cluster
-Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos Mesh on your AKS cluster.
+Before you can run Chaos Mesh faults in Chaos Studio, you must install Chaos Mesh on your AKS cluster.
-1. Run the following commands in an [Azure Cloud Shell](../cloud-shell/overview.md) window where you have the active subscription set to be the subscription where your AKS cluster is deployed. Replace `$RESOURCE_GROUP` and `$CLUSTER_NAME` with the resource group and name of your cluster resource.
+1. Run the following commands in a [Cloud Shell](../cloud-shell/overview.md) window where you have the active subscription set to be the subscription where your AKS cluster is deployed. Replace `$RESOURCE_GROUP` and `$CLUSTER_NAME` with the resource group and name of your cluster resource.
```azurecli-interactive az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME
Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos
helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock ```
-2. Verify that the Chaos Mesh pods are installed by running the following command:
+1. Verify that the Chaos Mesh pods are installed by running the following command:
```azurecli-interactive kubectl get po -n chaos-testing ```
-You should see output similar to the following (a chaos-controller-manager and one or more chaos-daemons):
+You should see output similar to the following example (a chaos-controller-manager and one or more chaos-daemons):
```bash NAME READY STATUS RESTARTS AGE
chaos-dashboard-98c4c5f97-tx5ds 1/1 Running 0 2d5h
You can also [use the installation instructions on the Chaos Mesh website](https://chaos-mesh.org/docs/production-installation-using-helm/). - ## Enable Chaos Studio on your AKS cluster
-Chaos Studio cannot inject faults against a resource unless that resource has been onboarded to Chaos Studio first. You onboard a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. AKS clusters only have one target type (service-direct), but other resources may have up to two target types - one for service-direct faults and one for agent-based faults. Each type of Chaos Mesh fault is represented as a capability (PodChaos, NetworkChaos, IOChaos, etc.).
+Chaos Studio can't inject faults against a resource unless that resource is added to Chaos Studio first. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. AKS clusters have only one target type (service-direct), but other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Each type of Chaos Mesh fault is represented as a capability like PodChaos, NetworkChaos, and IOChaos.
-1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you are onboarding:
+1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you're adding.
```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ```
-2. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you are onboarding and `$CAPABILITY` with the [name of the fault capability you are enabling](chaos-studio-fault-library.md).
+1. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you're adding. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md).
```azurecli-interactive az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ```
- For example, if enabling the PodChaos capability:
+ For example, if you're enabling the `PodChaos` capability:
```azurecli-interactive az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/PodChaos-2.1?api-version=2021-09-15-preview" --body "{\"properties\":{}}" ```
- This must be done for each capability you want to enable on the cluster.
+ This step must be done for each capability you want to enable on the cluster.
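Since each capability needs its own PUT call, a small loop keeps this repeatable. A minimal sketch, assuming a hypothetical resource ID and capability names patterned after `PodChaos-2.1` (verify the exact names in the fault library); swap the `echo` for the `az rest --method put` call shown above:

```bash
# Hypothetical resource ID; capability names follow the PodChaos-2.1 pattern --
# check the Chaos Studio fault library for the exact names to enable.
RESOURCE_ID="subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster"
for CAPABILITY in PodChaos-2.1 NetworkChaos-2.1 IOChaos-2.1; do
  # Print the capability URL; replace echo with the az rest call to enable it.
  echo "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/$CAPABILITY?api-version=2021-09-15-preview"
done
```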
-You have now successfully onboarded your AKS cluster to Chaos Studio.
+You've now successfully added your AKS cluster to Chaos Studio.
## Create an experiment
-With your AKS cluster now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel.
+Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. Create a Chaos Mesh jsonSpec:
- 1. Visit the Chaos Mesh documentation for a fault type, [for example, the PodChaos type](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files).
- 2. Formulate the YAML configuration for that fault type using the Chaos Mesh documentation.
+1. Create a Chaos Mesh `jsonSpec`:
+ 1. See the Chaos Mesh documentation for a fault type, [for example, the PodChaos type](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files).
+ 1. Formulate the YAML configuration for that fault type by using the Chaos Mesh documentation.
```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-failure-example
  namespace: chaos-testing
spec:
  action: pod-failure
  mode: all
  duration: '600s'
  selector:
    namespaces:
      - default
```
- 3. Remove any YAML outside of the `spec` (including the spec property name), and remove the indentation of the spec details.
+ 1. Remove any YAML outside of the `spec`, including the spec property name. Remove the indentation of the spec details.
```yaml
action: pod-failure
mode: all
duration: '600s'
selector:
  namespaces:
    - default
```
- 4. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it.
+ 1. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it.
```json
{"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}}
```
- 5. Use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec.
+ 1. Use a [JSON string escape tool like this one](https://www.freeformatter.com/json-escape.html) to escape the JSON spec.
```json
{\"action\":\"pod-failure\",\"mode\":\"all\",\"duration\":\"600s\",\"selector\":{\"namespaces\":[\"default\"]}}
```
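If you'd rather not paste your spec into a web tool, the escaping step can also be done locally. A minimal sketch using `sed` (the spec string is the minimized PodChaos example from the previous step):

```bash
# Minimized Chaos Mesh jsonSpec from the previous step.
SPEC='{"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}}'
# Escape every double quote so the string can be embedded in the experiment JSON.
ESCAPED=$(printf '%s' "$SPEC" | sed 's/"/\\"/g')
echo "$ESCAPED"
```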
-2. Create your experiment JSON starting with the JSON sample below. Modify the JSON to correspond to the experiment you want to run using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update), the [fault library](chaos-studio-fault-library.md), and the jsonSpec created in the previous step.
+1. Create your experiment JSON by starting with the following JSON sample. Modify the JSON to correspond to the experiment you want to run by using the [Create Experiment API](/rest/api/chaosstudio/experiments/create-or-update), the [fault library](chaos-studio-fault-library.md), and the `jsonSpec` created in the previous step.
```json
{
    ...
}
```
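Before saving, it's worth confirming the assembled experiment JSON actually parses — a malformed jsonSpec escape is the most common failure here. A quick local check, assuming `python3` is available (the body written here is only a placeholder for illustration):

```bash
# Write a placeholder experiment body for illustration; use your real experiment JSON.
printf '%s' '{"location":"eastus"}' > experiment.json
# Fail fast if the file is not valid JSON.
python3 -m json.tool experiment.json > /dev/null && echo "experiment.json parses"
```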
-2. Create the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you have saved and uploaded your experiment JSON and update `experiment.json` with your JSON filename.
+1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename.
```azurecli-interactive
az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json
```
- Each experiment creates a corresponding system-assigned managed identity. Note of the `principalId` for this identity in the response for the next step.
+ Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step.
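If you script this, the principal ID can be pulled straight out of the PUT response. A minimal sketch over a saved, hypothetical response body (the real response contains many more fields):

```bash
# Hypothetical trimmed response from the experiment PUT call.
RESPONSE='{"identity":{"type":"SystemAssigned","principalId":"11111111-2222-3333-4444-555555555555"}}'
# Extract the principalId value with sed, avoiding a dependency on jq.
EXPERIMENT_PRINCIPAL_ID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"principalId":"\([^"]*\)".*/\1/p')
echo "$EXPERIMENT_PRINCIPAL_ID"
```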
-## Give experiment permission to your AKS cluster
+## Give the experiment permission to your AKS cluster
When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully.
-Give the experiment access to your resource(s) using the command below, replacing `$EXPERIMENT_PRINCIPAL_ID` with the principalId from the previous step and `$RESOURCE_ID` with the resource ID of the target resource (in this case, the AKS cluster resource ID). Run this command for each resource targeted in your experiment.
+Give the experiment access to your resources by using the following command. Replace `$EXPERIMENT_PRINCIPAL_ID` with the principal ID from the previous step. Replace `$RESOURCE_ID` with the resource ID of the target resource. In this case, it's the AKS cluster resource ID. Run this command for each resource targeted in your experiment.
```azurecli-interactive
az role assignment create --role "Azure Kubernetes Cluster Admin Role" --assignee-object-id $EXPERIMENT_PRINCIPAL_ID --scope $RESOURCE_ID
```

## Run your experiment
-You are now ready to run your experiment. To see the impact, we recommend opening your AKS cluster overview and going to **Insights** in a separate browser tab. Live data for the **Active Pod Count** will show the impact of running your experiment.
+You're now ready to run your experiment. To see the effect, we recommend that you open your AKS cluster overview and go to **Insights** in a separate browser tab. Live data for the **Active Pod Count** shows the effect of running your experiment.
-1. Start the experiment using the Azure CLI, replacing `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.
+1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.
```azurecli-interactive
az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview
```
-2. The response includes a status URL that you can use to query experiment status as the experiment runs.
+1. The response includes a status URL that you can use to query experiment status as the experiment runs.
## Next steps
-Now that you have run an AKS Chaos Mesh service-direct experiment, you are ready to:
+Now that you've run an AKS Chaos Mesh service-direct experiment, you're ready to:
- [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)
- [Manage your experiment](chaos-studio-run-experiment.md)
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Title: Create an experiment that uses an AKS Chaos Mesh fault using Azure Chaos Studio with the Azure portal
-description: Create an experiment that uses an AKS Chaos Mesh fault with the Azure portal
+ Title: Create an experiment using a Chaos Mesh fault with the Azure portal
+description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio Preview with the Azure portal.
Last updated 04/21/2022
# Create a chaos experiment that uses a Chaos Mesh fault to kill AKS pods with the Azure portal
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause periodic Azure Kubernetes Service pod failures on a namespace using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against service unavailability when there are sporadic failures.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against service unavailability when there are sporadic failures.
-Azure Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. These same steps can be used to set up and run an experiment for any AKS Chaos Mesh fault.
+Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes, to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. You can use these same steps to set up and run an experiment for any AKS Chaos Mesh fault.
## Prerequisites

-- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- An AKS cluster with a Linux node pool. If you do not have an AKS cluster, see the AKS quickstart [using the Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [using Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or [using the Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An AKS cluster with a Linux node pool. If you don't have an AKS cluster, see the AKS quickstart that uses the [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure PowerShell](../aks/learn/quick-kubernetes-deploy-powershell.md), or the [Azure portal](../aks/learn/quick-kubernetes-deploy-portal.md).
> [!WARNING]
> AKS Chaos Mesh faults are only supported on Linux node pools.

## Limitations

-- Previously, Chaos Mesh faults didn't work with private clusters. You can now use Chaos Mesh faults with private clusters by configuring [VNet Injection in Chaos Studio](chaos-studio-private-networking.md).
+Previously, Chaos Mesh faults didn't work with private clusters. You can now use Chaos Mesh faults with private clusters by configuring [virtual network injection in Chaos Studio](chaos-studio-private-networking.md).
## Set up Chaos Mesh on your AKS cluster
-Before you can run Chaos Mesh faults in Chaos Studio, you need to install Chaos Mesh on your AKS cluster.
+Before you can run Chaos Mesh faults in Chaos Studio, you must install Chaos Mesh on your AKS cluster.
1. Run the following commands in an [Azure Cloud Shell](../cloud-shell/overview.md) window where you have the active subscription set to be the subscription where your AKS cluster is deployed. Replace `$RESOURCE_GROUP` and `$CLUSTER_NAME` with the resource group and name of your cluster resource.
-```azurecli
-az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME
-```
+ ```azurecli
+ az aks get-credentials -g $RESOURCE_GROUP -n $CLUSTER_NAME
+ ```
+
+ ```bash
+ helm repo add chaos-mesh https://charts.chaos-mesh.org
+ helm repo update
+ kubectl create ns chaos-testing
+ helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
+ ```
+
+1. Verify that the Chaos Mesh pods are installed by running the following command:
+
+ ```bash
+ kubectl get po -n chaos-testing
+ ```
+
+ You should see output similar to the following example (a chaos-controller-manager and one or more chaos-daemons):
+
+ ```bash
+ NAME READY STATUS RESTARTS AGE
+ chaos-controller-manager-69fd5c46c8-xlqpc 1/1 Running 0 2d5h
+ chaos-daemon-jb8xh 1/1 Running 0 2d5h
+ chaos-dashboard-98c4c5f97-tx5ds 1/1 Running 0 2d5h
+ ```
-```bash
-helm repo add chaos-mesh https://charts.chaos-mesh.org
-helm repo update
-kubectl create ns chaos-testing
-helm install chaos-mesh chaos-mesh/chaos-mesh --namespace=chaos-testing --set chaosDaemon.runtime=containerd --set chaosDaemon.socketPath=/run/containerd/containerd.sock
-```
+You can also [use the installation instructions on the Chaos Mesh website](https://chaos-mesh.org/docs/production-installation-using-helm/).
-2. Verify that the Chaos Mesh pods are installed by running the following command:
+## Enable Chaos Studio on your AKS cluster
-```bash
-kubectl get po -n chaos-testing
-```
+Chaos Studio can't inject faults against a resource unless that resource is added to Chaos Studio first. You add a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. AKS clusters have only one target type (service-direct), but other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Each type of Chaos Mesh fault is represented as a capability like PodChaos, NetworkChaos, and IOChaos.
-You should see output similar to the following (a chaos-controller-manager and one or more chaos-daemons):
+1. Open the [Azure portal](https://portal.azure.com).
+1. Search for **Chaos Studio (preview)** in the search bar.
+1. Select **Targets** and go to your AKS cluster.
-```bash
-NAME READY STATUS RESTARTS AGE
-chaos-controller-manager-69fd5c46c8-xlqpc 1/1 Running 0 2d5h
-chaos-daemon-jb8xh 1/1 Running 0 2d5h
-chaos-dashboard-98c4c5f97-tx5ds 1/1 Running 0 2d5h
-```
+ ![Screenshot that shows the Targets view in the Azure portal.](images/tutorial-aks-targets.png)
+1. Select the checkbox next to your AKS cluster. Select **Enable targets** and then select **Enable service-direct targets** from the dropdown menu.
-You can also [use the installation instructions on the Chaos Mesh website](https://chaos-mesh.org/docs/production-installation-using-helm/).
+ ![Screenshot that shows enabling targets in the Azure portal.](images/tutorial-aks-targets-enable.png)
+1. A notification appears that indicates that the resources you selected were successfully enabled.
+ ![Screenshot that shows the notification showing that the target was successfully enabled.](images/tutorial-aks-targets-enable-confirm.png)
-## Enable Chaos Studio on your AKS cluster
+You've now successfully added your AKS cluster to Chaos Studio. In the **Targets** view, you can also manage the capabilities enabled on this resource. Select the **Manage actions** link next to a resource to display the capabilities enabled for that resource.
-Chaos Studio cannot inject faults against a resource unless that resource has been onboarded to Chaos Studio first. You onboard a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. AKS clusters only have one target type (service-direct), but other resources may have up to two target types - one for service-direct faults and one for agent-based faults. Each type of Chaos Mesh fault is represented as a capability (PodChaos, NetworkChaos, IOChaos, etc.).
+## Create an experiment
+Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized and run in sequential steps. The chaos experiment also defines the actions you want to take against branches, which run in parallel.
-1. Open the [Azure portal](https://portal.azure.com).
-2. Search for **Chaos Studio (preview)** in the search bar.
-3. Click on **Targets** and navigate to your AKS cluster.
-![Targets view in the Azure portal](images/tutorial-aks-targets.png)
-4. Check the box next to your AKS cluster and click **Enable targets** then **Enable service-direct targets** from the dropdown menu.
-![Enabling targets in the Azure portal](images/tutorial-aks-targets-enable.png)
-5. A notification will appear indicating that the resource(s) selected were successfully enabled.
-![Notification showing target successfully enabled](images/tutorial-aks-targets-enable-confirm.png)
+1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**.
-You have now successfully onboarded your AKS cluster to Chaos Studio. In the **Targets** view you can also manage the capabilities enabled on this resource. Clicking the **Manage actions** link next to a resource will display the capabilities enabled for that resource.
+ ![Screenshot that shows the Experiments view in the Azure portal.](images/tutorial-aks-add.png)
+1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**.
-## Create an experiment
-With your AKS cluster now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel.
+ ![Screenshot that shows adding basic experiment details.](images/tutorial-aks-add-basics.png)
+1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add fault**.
-1. Click on the **Experiments** tab in the Chaos Studio navigation. In this view, you can see and manage all of your chaos experiments. Click on **Add an experiment**
-![Experiments view in the Azure portal](images/tutorial-aks-add.png)
-2. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a **Name**. Click **Next : Experiment designer >**
-![Adding basic experiment details](images/tutorial-aks-add-basics.png)
-3. You are now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**, then click **Add fault**.
-![Experiment designer](images/tutorial-aks-add-designer.png)
-4. Select **AKS Chaos Mesh Pod Chaos** from the dropdown, then fill in the **Duration** with the number of minutes you want the failure to last and **jsonSpec** with the information below:
+ ![Screenshot that shows the experiment designer.](images/tutorial-aks-add-designer.png)
+1. Select **AKS Chaos Mesh Pod Chaos** from the dropdown list. Fill in **Duration** with the number of minutes you want the failure to last and **jsonSpec** with the following information:
- To formulate your Chaos Mesh jsonSpec:
- 1. Visit the Chaos Mesh documentation for a fault type, [for example, the PodChaos type](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files).
- 2. Formulate the YAML configuration for that fault type using the Chaos Mesh documentation.
+ To formulate your Chaos Mesh `jsonSpec`:
+ 1. See the Chaos Mesh documentation for a fault type, [for example, the PodChaos type](https://chaos-mesh.org/docs/simulate-pod-chaos-on-kubernetes/#create-experiments-using-yaml-configuration-files).
+ 1. Formulate the YAML configuration for that fault type by using the Chaos Mesh documentation.
```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: PodChaos
metadata:
  name: pod-failure-example
  namespace: chaos-testing
spec:
  action: pod-failure
  mode: all
  duration: '600s'
  selector:
    namespaces:
      - default
```
- 3. Remove any YAML outside of the `spec` (including the spec property name), and remove the indentation of the spec details.
+ 1. Remove any YAML outside of the `spec` (including the spec property name) and remove the indentation of the spec details.
```yaml
action: pod-failure
mode: all
duration: '600s'
selector:
  namespaces:
    - default
```
- 4. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it.
+ 1. Use a [YAML-to-JSON converter like this one](https://www.convertjson.com/yaml-to-json.htm) to convert the Chaos Mesh YAML to JSON and minimize it.
```json
{"action":"pod-failure","mode":"all","duration":"600s","selector":{"namespaces":["default"]}}
```
- 5. Paste the minimized JSON into the **jsonSpec** field in the portal.
+ 1. Paste the minimized JSON into the **jsonSpec** field in the portal.
+1. Select **Next: Target resources**.
-Click **Next: Target resources >**
-![Fault properties](images/tutorial-aks-add-fault.png)
-5. Select your AKS cluster, and click **Next**
-![Add a target](images/tutorial-aks-add-targets.png)
-6. Verify that your experiment looks correct, then click **Review + create**, then **Create.**
-![Review and create experiment](images/tutorial-aks-add-review.png)
+ ![Screenshot that shows fault properties.](images/tutorial-aks-add-fault.png)
+1. Select your AKS cluster and select **Next**.
-## Give experiment permission to your AKS cluster
+ ![Screenshot that shows adding a target.](images/tutorial-aks-add-targets.png)
+1. Verify that your experiment looks correct and select **Review + create** > **Create**.
+
+ ![Screenshot that shows reviewing and creating an experiment.](images/tutorial-aks-add-review.png)
+
+## Give the experiment permission to your AKS cluster
When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully.
-1. Navigate to your AKS cluster and click on **Access control (IAM)**.
-![AKS overview page](images/tutorial-aks-access-resource.png)
-2. Click **Add** then click **Add role assignment**.
-![Access control overview](images/tutorial-aks-access-iam.png)
-3. Search for **Azure Kubernetes Service Cluster Admin Role** and select the role. Click **Next**
-![Assigning AKS Cluster Admin role](images/tutorial-aks-access-role.png)
-4. Click **Select members** and search for your experiment name. Select your experiment and click **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name will be truncated with random characters added.
-![Adding experiment to role](images/tutorial-aks-access-experiment.png)
-5. Click **Review + assign** then **Review + assign**.
+1. Go to your AKS cluster and select **Access control (IAM)**.
+
+ ![Screenshot that shows the AKS Overview page.](images/tutorial-aks-access-resource.png)
+1. Select **Add** > **Add role assignment**.
+
+ ![Screenshot that shows the Access control (IAM) overview.](images/tutorial-aks-access-iam.png)
+1. Search for **Azure Kubernetes Service Cluster Admin Role** and select the role. Select **Next**.
+
+ ![Screenshot that shows assigning the AKS Cluster Admin role.](images/tutorial-aks-access-role.png)
+1. Choose **Select members** and search for your experiment name. Select your experiment and choose **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added.
+
+ ![Screenshot that shows adding an experiment to a role.](images/tutorial-aks-access-experiment.png)
+1. Select **Review + assign** > **Review + assign**.
## Run your experiment
-You are now ready to run your experiment. To see the impact, we recommend opening your AKS cluster overview and going to **Insights** in a separate browser tab. Live data for the **Active Pod Count** will show the impact of running your experiment.
+You're now ready to run your experiment. To see the effect, we recommend that you open your AKS cluster overview and go to **Insights** in a separate browser tab. Live data for the **Active Pod Count** shows the effect of running your experiment.
+
+1. In the **Experiments** view, select your experiment. Select **Start** > **OK**.
-1. In the **Experiments** view, click on your experiment, and click **Start**, then click **OK**.
-![Starting an experiment](images/tutorial-aks-start.png)
-2. When the **Status** changes to **Running**, click **Details** for the latest run under **History** to see details for the running experiment.
+ ![Screenshot that shows starting an experiment.](images/tutorial-aks-start.png)
+1. When the **Status** changes to *Running*, select **Details** for the latest run under **History** to see details for the running experiment.
## Next steps
-Now that you have run an AKS Chaos Mesh service-direct experiment, you are ready to:
+Now that you've run an AKS Chaos Mesh service-direct experiment, you're ready to:
- [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)
- [Manage your experiment](chaos-studio-run-experiment.md)
chaos-studio Chaos Studio Tutorial Service Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md
Title: Create an experiment that uses a service-direct fault with Azure Chaos Studio
-description: Create an experiment that uses a service-direct fault
+ Title: Create an experiment using a service-direct fault with Chaos Studio
+description: Create an experiment that uses a service-direct fault with Azure Chaos Studio Preview to fail over an Azure Cosmos DB instance.
# Create a chaos experiment that uses a service-direct fault to fail over an Azure Cosmos DB instance
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this guide, you will cause a multi-read, single-write Azure Cosmos DB failover using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against data loss when a failover event occurs.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against data loss when a failover event occurs.
-These same steps can be used to set up and run an experiment for any service-direct fault. A **service-direct** fault runs directly against an Azure resource without any need for instrumentation, unlike agent-based faults, which require installation of the chaos agent.
+You can use these same steps to set up and run an experiment for any service-direct fault. A *service-direct* fault runs directly against an Azure resource without any need for instrumentation. Agent-based faults require installation of the chaos agent.
## Prerequisites

-- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
-- An Azure Cosmos DB account. If you do not have an Azure Cosmos DB account, you can [follow these steps to create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
+- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
+- An Azure Cosmos DB account. If you don't have an Azure Cosmos DB account, follow these steps to [create one](../cosmos-db/sql/create-cosmosdb-resources-portal.md).
- At least one read and one write region set up for your Azure Cosmos DB account.
-
## Enable Chaos Studio on your Azure Cosmos DB account
-Chaos Studio cannot inject faults against a resource unless that resource has been onboarded to Chaos Studio first. You onboard a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Azure Cosmos DB accounts only have one target type (service-direct) and one capability (failover), but other resources may have up to two target types - one for service-direct faults and one for agent-based faults - and many capabilities.
+Chaos Studio can't inject faults against a resource unless that resource is added to Chaos Studio first. You add a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Azure Cosmos DB accounts have only one target type (service-direct) and one capability (failover). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources might have many other capabilities.
1. Open the [Azure portal](https://portal.azure.com).
-2. Search for **Chaos Studio (preview)** in the search bar.
-3. Click on **Targets** and navigate to your Azure Cosmos DB account.
-![Targets view in the Azure portal](images/tutorial-service-direct-targets.png)
-4. Check the box next to your Azure Cosmos DB account and click **Enable targets** then **Enable service-direct targets** from the dropdown menu.
-![Enabling targets in the Azure portal](images/tutorial-service-direct-targets-enable.png)
-5. A notification will appear indicating that the resource(s) selected were successfully enabled.
-![Notification showing target successfully enabled](images/tutorial-service-direct-targets-enable-confirm.png)
+1. Search for **Chaos Studio (preview)** in the search bar.
+1. Select **Targets** and go to your Azure Cosmos DB account.
+
+ ![Screenshot that shows the Targets view in the Azure portal.](images/tutorial-service-direct-targets.png)
+1. Select the checkbox next to your Azure Cosmos DB account. Select **Enable targets** and then select **Enable service-direct targets** from the dropdown menu.
+
+ ![Screenshot that shows enabling targets in the Azure portal.](images/tutorial-service-direct-targets-enable.png)
+1. A notification appears that indicates that the resources selected were successfully enabled.
-You have now successfully onboarded your Azure Cosmos DB account to Chaos Studio. In the **Targets** view you can also manage the capabilities enabled on this resource. Clicking the **Manage actions** link next to a resource will display the capabilities enabled for that resource.
+ ![Screenshot that shows a notification showing the target was successfully enabled.](images/tutorial-service-direct-targets-enable-confirm.png)
+
+You've now successfully added your Azure Cosmos DB account to Chaos Studio. In the **Targets** view, you can also manage the capabilities enabled on this resource. Selecting the **Manage actions** link next to a resource displays the capabilities enabled for that resource.
## Create an experiment
-With your Azure Cosmos DB account now onboarded, you can create your experiment. A chaos experiment defines the actions you want to take against target resources, organized into steps, which run sequentially, and branches, which run in parallel.
-
-1. Click on the **Experiments** tab in the Chaos Studio navigation. In this view, you can see and manage all of your chaos experiments. Click on **Add an experiment**
-![Experiments view in the Azure portal](images/tutorial-service-direct-add.png)
-2. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a **Name**. Click **Next : Experiment designer >**
-![Adding basic experiment details](images/tutorial-service-direct-add-basics.png)
-3. You are now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch**, then click **Add fault**.
-![Experiment designer](images/tutorial-service-direct-add-designer.png)
-4. Select **CosmosDB Failover** from the dropdown, then fill in the **Duration** with the number of minutes you want the failure to last and **readRegion** with the read region of your Azure Cosmos DB account. Click **Next: Target resources >**
-![Fault properties](images/tutorial-service-direct-add-fault.png)
-5. Select your Azure Cosmos DB account, and click **Next**
-![Add a target](images/tutorial-service-direct-add-target.png)
-6. Verify that your experiment looks correct, then click **Review + create**, then **Create.**
-![Review and create experiment](images/tutorial-service-direct-add-review.png)
-
-## Give experiment permission to your target resource
-When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. These steps can be used for any resource and target type by modifying the role assignment in step #3 to match the [appropriate role for that resource and target type](chaos-studio-fault-providers.md).
-
-1. Navigate to your Azure Cosmos DB account and click on **Access control (IAM)**.
-![Azure Cosmos DB overview page](images/tutorial-service-direct-access-resource.png)
-2. Click **Add** then click **Add role assignment**.
-![Access control overview](images/tutorial-service-direct-access-iam.png)
-3. Search for **Cosmos DB Operator** and select the role. Click **Next**
-![Assigning Azure Cosmos DB Operator role](images/tutorial-service-direct-access-role.png)
-4. Click **Select members** and search for your experiment name. Select your experiment and click **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name will be truncated with random characters added.
-![Adding experiment to role](images/tutorial-service-direct-access-experiment.png)
-5. Click **Review + assign** then **Review + assign**.
+Now you can create your experiment. A chaos experiment defines the actions you want to take against target resources. The actions are organized into steps, which run sequentially, and branches within a step, which run in parallel.
+
+1. Select the **Experiments** tab in Chaos Studio. In this view, you can see and manage all your chaos experiments. Select **Add an experiment**.
+
+ ![Screenshot that shows the Experiments view in the Azure portal.](images/tutorial-service-direct-add.png)
+1. Fill in the **Subscription**, **Resource Group**, and **Location** where you want to deploy the chaos experiment. Give your experiment a name. Select **Next: Experiment designer**.
+
+ ![Screenshot that shows adding basic experiment details.](images/tutorial-service-direct-add-basics.png)
+1. You're now in the Chaos Studio experiment designer. The experiment designer allows you to build your experiment by adding steps, branches, and faults. Give a friendly name to your **Step** and **Branch** and select **Add fault**.
+
+ ![Screenshot that shows the experiment designer.](images/tutorial-service-direct-add-designer.png)
+1. Select **CosmosDB Failover** from the dropdown list. Fill in **Duration** with the number of minutes you want the failure to last and **readRegion** with the read region of your Azure Cosmos DB account. Select **Next: Target resources**.
+
+ ![Screenshot that shows fault properties.](images/tutorial-service-direct-add-fault.png)
+1. Select your Azure Cosmos DB account and select **Next**.
+
+ ![Screenshot that shows adding a target.](images/tutorial-service-direct-add-target.png)
+1. Verify that your experiment looks correct and select **Review + create** > **Create**.
+
+ ![Screenshot that shows reviewing and creating an experiment.](images/tutorial-service-direct-add-review.png)
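Behind the portal flow above, the experiment is stored as a `Microsoft.Chaos/experiments` resource. The following is a rough, trimmed sketch only: verify the fault URN and property names against the current Chaos Studio fault library, and note that the resource ID and region values are placeholders.

```json
{
  "properties": {
    "steps": [
      {
        "name": "Step 1",
        "branches": [
          {
            "name": "Branch 1",
            "actions": [
              {
                "type": "continuous",
                "name": "urn:csci:microsoft:cosmosDB:failover/1.0",
                "duration": "PT5M",
                "parameters": [
                  { "key": "readRegion", "value": "<your-read-region>" }
                ],
                "selectorId": "Selector1"
              }
            ]
          }
        ]
      }
    ],
    "selectors": [
      {
        "type": "List",
        "id": "Selector1",
        "targets": [
          {
            "type": "ChaosTarget",
            "id": "<cosmos-db-account-resource-id>/providers/Microsoft.Chaos/targets/microsoft-cosmosdb"
          }
        ]
      }
    ]
  }
}
```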
+
+## Give the experiment permission to your target resource
+When you create a chaos experiment, Chaos Studio creates a system-assigned managed identity that executes faults against your target resources. This identity must be given [appropriate permissions](chaos-studio-fault-providers.md) to the target resource for the experiment to run successfully. You can use these steps for any resource and target type by modifying the role assignment in step 3 to match the [appropriate role for that resource and target type](chaos-studio-fault-providers.md).
+
+1. Go to your Azure Cosmos DB account and select **Access control (IAM)**.
+
+ ![Screenshot that shows the Azure Cosmos DB Overview page.](images/tutorial-service-direct-access-resource.png)
+1. Select **Add** > **Add role assignment**.
+
+ ![Screenshot that shows the Access control overview.](images/tutorial-service-direct-access-iam.png)
+1. Search for **Cosmos DB Operator** and select the role. Select **Next**.
+
+ ![Screenshot that shows assigning the Azure Cosmos DB Operator role.](images/tutorial-service-direct-access-role.png)
+1. Choose **Select members** and search for your experiment name. Select your experiment and choose **Select**. If there are multiple experiments in the same tenant with the same name, your experiment name is truncated with random characters added.
+
+ ![Screenshot that shows adding an experiment to a role.](images/tutorial-service-direct-access-experiment.png)
+1. Select **Review + assign** > **Review + assign**.
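If you prefer to script the role assignment instead of using the portal, a sketch with the Azure CLI follows. The placeholders are hypothetical: the principal ID is the object ID of the experiment's system-assigned managed identity, and the scope is your Azure Cosmos DB account's resource ID.

```azurecli
az role assignment create \
  --role "Cosmos DB Operator" \
  --assignee-object-id <experiment-principal-id> \
  --assignee-principal-type ServicePrincipal \
  --scope <cosmos-db-account-resource-id>
```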
## Run your experiment
-You are now ready to run your experiment. To see the impact, we recommend opening your Azure Cosmos DB account overview and going to **Replicate data globally** in a separate browser tab. Refreshing periodically during the experiment will show the region swap.
+You're now ready to run your experiment. To see the effect, we recommend that you open your Azure Cosmos DB account overview and go to **Replicate data globally** in a separate browser tab. Refreshing periodically during the experiment shows the region swap.
-1. In the **Experiments** view, click on your experiment, and click **Start**, then click **OK**.
-2. When the **Status** changes to **Running**, click **Details** for the latest run under **History** to see details for the running experiment.
+1. In the **Experiments** view, select your experiment. Select **Start** > **OK**.
+1. When **Status** changes to *Running*, select **Details** for the latest run under **History** to see details for the running experiment.
## Next steps
-Now that you have run a Azure Cosmos DB service-direct experiment, you are ready to:
+Now that you've run an Azure Cosmos DB service-direct experiment, you're ready to:
- [Create an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md)
- [Manage your experiment](chaos-studio-run-experiment.md)
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
Access is limited to customers that meet the following requirements:
* [Language Detection](../language-service/language-detection/how-to/use-containers.md#run-the-container-disconnected-from-the-internet)
+## Container image and license updates
+
## Usage records

When operating Docker containers in a disconnected environment, the container will write usage records to a volume where they're collected over time. You can also call a REST endpoint to generate a report about service usage.
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
These models can be used with Completion API requests. `gpt-35-turbo` is the onl
| text-davinci-002 | East US, South Central US, West Europe | N/A | 4,097 | Jun 2021 |
| text-davinci-003 | East US, West Europe | N/A | 4,097 | Jun 2021 |
| text-davinci-fine-tune-002 | N/A | N/A | | |
-| gpt-35-turbo<sup>1</sup> (ChatGPT) | East US, France Central, South Central US, West Europe | N/A | 4,096 | Sep 2021 |
+| gpt-35-turbo<sup>1</sup> (ChatGPT) | East US, France Central, South Central US, UK South, West Europe | N/A | 4,096 | Sep 2021 |
<br><sup>1</sup> Currently, only version `0301` of this model is available.

> [!IMPORTANT]
-> The currently listed deprecation dates in Azure AI Studio and via REST API for gpt-35-turbo (0301) is a temporary placeholder. Deprecation will not happen prior to October 1st 2023.
+> The currently listed deprecation dates in Azure OpenAI Studio and via REST API for gpt-35-turbo (0301) are temporary placeholders. Deprecation will not happen before October 1, 2023.
### GPT-4 Models
These models can only be used with the Chat Completion API.
<sup>2</sup> Currently, only version `0314` of this model is available.

> [!IMPORTANT]
-> The currently listed deprecation dates in Azure AI Studio and via REST API for the gpt-4 and gpt-4-32k (0314) models are temporary placeholders. Deprecation will not happen prior to October 1st 2023.
+> The currently listed deprecation dates in Azure OpenAI Studio and via REST API for the gpt-4 and gpt-4-32k (0314) models are temporary placeholders. Deprecation will not happen prior to October 1st 2023.
### Dall-E Models
cognitive-services Dall E Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/dall-e-quickstart.md
zone_pivot_groups: openai-quickstart-dall-e
::: zone-end
cognitive-services Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/completions.md
keywords:
The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability.
-The best way to start exploring completions is through our playground in [Azure AI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
+The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). It's a simple text box where you can submit a prompt to generate a completion. You can start with a simple example like the following:
`write a tagline for an ice cream shop`
cognitive-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/content-filters.md
The configurability feature is available in preview and allows customers to adju
<sup>\*</sup> Only approved customers have full content filtering control, including configuring content filters at severity level high only or turning the content filters off. Managed customers can apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
-## Configuring content filters via Azure AI Studio (preview)
+## Configuring content filters via Azure OpenAI Studio (preview)
The following steps show how to set up a customized content filtering configuration for your resource.
-1. Go to Azure AI Studio and navigate to the Content Filters tab (in the bottom left navigation, as designated by the red box below).
+1. Go to Azure OpenAI Studio and navigate to the Content Filters tab (in the bottom left navigation, as designated by the red box below).
:::image type="content" source="../media/content-filters/studio.png" alt-text="Screenshot of the Azure OpenAI Studio UI with Content Filters highlighted" lightbox="../media/content-filters/studio.png":::
cognitive-services Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/quota.md
Different model deployments, also called model classes have unique max TPM value
- GPT-4
- GPT-4-32K
-- GPT-35-Turbo
- Text-Davinci-003

All other model classes have a common max TPM value.
cognitive-services Work With Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/work-with-code.md
You can use Codex for a variety of tasks including:
## How to use the Codex models
-Here are a few examples of using Codex that can be tested in [Azure AI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
+Here are a few examples of using Codex that can be tested in [Azure OpenAI Studio's](https://oai.azure.com) playground with a deployment of a Codex series model, such as `code-davinci-002`.
### Saying "Hello" (Python)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/overview.md
keywords:
# What is Azure OpenAI Service?
-Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. In addition, the new GPT-4 and ChatGPT (gpt-35-turbo) model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure AI Studio.
+Azure OpenAI Service provides REST API access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. In addition, the new GPT-4 and ChatGPT (gpt-35-turbo) model series have now reached general availability. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or our web-based interface in the Azure OpenAI Studio.
### Features overview
Azure OpenAI Service provides REST API access to OpenAI's powerful language mode
| Virtual network support & private link support | Yes |
| Managed Identity | Yes, via Azure Active Directory |
| UI experience | **Azure portal** for account & resource management, <br> **Azure OpenAI Service Studio** for model exploration and fine tuning |
-| Regional availability | East US <br> South Central US <br> West Europe <br> France Central |
+| Model regional availability | [Model availability](./concepts/models.md) |
| Content filtering | Prompts and completions are evaluated against our content policy with automated systems. High severity content will be filtered. |

## Responsible AI
Azure OpenAI is a new product offering on Azure. You can get started with Azure
Once you create an Azure OpenAI Resource, you must deploy a model before you can start making API calls and generating text. This action can be done using the Deployment APIs. These APIs allow you to specify the model you wish to use.
-### In-context learning
+### Prompt engineering
-The models used by Azure OpenAI use natural language instructions and examples provided during the generation call to identify the task being asked and skill required. When you use this approach, the first part of the prompt includes natural language instructions and/or examples of the specific task desired. The model then completes the task by predicting the most probable next piece of text. This technique is known as "in-context" learning. These models aren't retrained during this step but instead give predictions based on the context you include in the prompt.
+GPT-3, GPT-3.5, and GPT-4 models from OpenAI are prompt-based. With prompt-based models, the user interacts with the model by entering a text prompt, to which the model responds with a text completion. This completion is the model's continuation of the input text.
-There are three main approaches for in-context learning: Few-shot, one-shot and zero-shot. These approaches vary based on the amount of task-specific data that is given to the model:
+While these models are extremely powerful, their behavior is also very sensitive to the prompt. This makes [prompt engineering](./concepts/prompt-engineering.md) an important skill to develop.
-**Few-shot**: In this case, a user includes several examples in the call prompt that demonstrate the expected answer format and content. The following example shows a few-shot prompt where we provide multiple examples (the model will generate the last answer):
-
-```
- Convert the questions to a command:
- Q: Ask Constance if we need some bread.
- A: send-msg `find constance` Do we need some bread?
- Q: Send a message to Greg to figure out if things are ready for Wednesday.
- A: send-msg `find greg` Is everything ready for Wednesday?
- Q: Ask Ilya if we're still having our meeting this evening.
- A: send-msg `find ilya` Are we still having a meeting this evening?
- Q: Contact the ski store and figure out if I can get my skis fixed before I leave on Thursday.
- A: send-msg `find ski store` Would it be possible to get my skis fixed before I leave on Thursday?
- Q: Thank Nicolas for lunch.
- A: send-msg `find nicolas` Thank you for lunch!
- Q: Tell Constance that I won't be home before 19:30 tonight - unmovable meeting.
- A: send-msg `find constance` I won't be home before 19:30 tonight. I have a meeting I can't move.
- Q: Tell John that I need to book an appointment at 10:30.
- A:
-```
-
-The number of examples typically range from 0 to 100 depending on how many can fit in the maximum input length for a single prompt. Maximum input length can vary depending on the specific models you use. Few-shot learning enables a major reduction in the amount of task-specific data required for accurate predictions. This approach will typically perform less accurately than a fine-tuned model.
-
-**One-shot**: This case is the same as the few-shot approach except only one example is provided.
-
-**Zero-shot**: In this case, no examples are provided to the model and only the task request is provided.
+Prompt construction can be difficult. In practice, the prompt conditions the model to complete the desired task, but crafting one is more of an art than a science, often requiring experience and intuition to produce a successful prompt.
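One widely used prompt-engineering pattern is few-shot prompting: the prompt combines an instruction, a handful of worked question/answer examples, and the new query, and the model continues the text after the final "A:". A minimal sketch of assembling such a prompt (the function and its names are illustrative, not part of any SDK):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt from an instruction, worked examples, and a new query.

    `examples` is a list of (question, answer) pairs; the model is expected
    to continue the text after the trailing "A:".
    """
    lines = [instruction]
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert the questions to a command:",
    [("Ask Constance if we need some bread.",
      "send-msg `find constance` Do we need some bread?")],
    "Thank Nicolas for lunch.",
)
```

Submitting `prompt` to a completions deployment would then ask the model to fill in the final answer.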
### Models
cognitive-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/quotas-limits.md
To minimize issues related to rate limits, it's a good idea to use the following
### How to request increases to the default quotas and limits
-Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure AI Studio. Please note that due to overwhelming demand, we are not currently approving new quota increase requests. Your request will be queued until it can be filled at a later time.
+Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, we are not currently approving new quota increase requests. Your request will be queued until it can be filled at a later time.
For other rate limits, please [submit a service request](/azure/cognitive-services/cognitive-services-support-options?context=%2Fazure%2Fcognitive-services%2Fopenai%2Fcontext%2Fcontext).
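When a deployment's rate limit is exceeded, the service responds with HTTP 429, so a common client-side technique is retry with exponential backoff and jitter. A minimal, library-agnostic sketch (the `send_request` callable and its `status_code` attribute are illustrative stand-ins, not part of any Azure SDK):

```python
import random
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a request on HTTP 429, backing off exponentially with jitter."""
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Delay grows as base_delay * 2^attempt, plus jitter so that many
        # clients don't retry in lockstep.
        time.sleep(base_delay * (2 ** attempt + random.random()))
    return response  # Give up and surface the last 429 to the caller.
```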
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
To successfully make a call against Azure OpenAI, you'll need an **endpoint** an
|Variable name | Value |
|--|-|
-| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure AI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
+| `ENDPOINT` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. Alternatively, you can find the value in **Azure OpenAI Studio** > **Playground** > **Code View**. An example endpoint is: `https://docs-test-001.openai.azure.com`.|
| `API-KEY` | This value can be found in the **Keys & Endpoint** section when examining your resource from the Azure portal. You can use either `KEY1` or `KEY2`.|

Go to your resource in the Azure portal. The **Endpoint and Keys** can be found in the **Resource Management** section. Copy your endpoint and access key as you'll need both for authenticating your API calls. You can use either `KEY1` or `KEY2`. Always having two keys allows you to securely rotate and regenerate keys without causing a service disruption.
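For reference, the endpoint and key from the table above are combined into a request like the following. This is a sketch: the URL targets a model deployment you created yourself, and the `api-version` shown is one that was current at the time of writing, so check the REST reference for the latest value.

```python
def embeddings_url(endpoint, deployment, api_version="2023-05-15"):
    # Build the Azure OpenAI embeddings request URL for a deployment.
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/embeddings?api-version={api_version}")

url = embeddings_url("https://docs-test-001.openai.azure.com", "my-embeddings")
# The key is sent in the `api-key` request header:
headers = {"api-key": "<API-KEY>", "Content-Type": "application/json"}
```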
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/whats-new.md
Previously updated : 05/15/2023 Last updated : 06/12/2023 recommendations: false keywords: # What's new in Azure OpenAI Service
+## June 2023
+
+### UK South
+
+- Azure OpenAI is now available in the UK South region. Check the [models page](concepts/models.md) for the latest information on model availability in each region.
+
+### Content filtering & annotations (Preview)
+
+- How to [configure content filters](how-to/content-filters.md) with Azure OpenAI Service.
+- [Enable annotations](concepts/content-filter.md) to view content filtering category and severity information as part of your GPT-based Completion and Chat Completion calls.
+
+### Quota
+
+- Quota provides the flexibility to actively [manage the allocation of rate limits across the deployments](how-to/quota.md) within your subscription.
+
## May 2023
+### Java & JavaScript SDK support
+
+- NEW Azure OpenAI preview SDKs offering support for [JavaScript](/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=programming-language-javascript) and [Java](/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=programming-language-java).
### Azure OpenAI Chat Completion General Availability (GA)

- General availability support for:
cognitive-services Use Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/use-key-vault.md
If you're using a multi-service resource or Language resource, you can update [y
## Next steps
-* See [What are Cognitive Services](./what-are-cognitive-services.md) for available features you can develop along with [Azure key vault](../key-vault/general/index.yml).
+* See [What are Cognitive Services](./what-are-cognitive-services.md) for available features you can develop along with [Azure Key Vault](../key-vault/general/index.yml).
* For additional information on secure application development, see:
  * [Best practices for using Azure Key Vault](../key-vault/general/best-practices.md)
  * [Cognitive Services security](cognitive-services-security.md)
communication-services Rooms Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/insights/rooms-insights.md
+
+ Title: Azure Communication Services Rooms Insights Dashboard
+
+description: Descriptions of data visualizations available for Rooms Communications Services via Workbooks
++++ Last updated : 05/25/2023+++++
+# Rooms Insights
+
+In this document, we outline the insights dashboard available to monitor Rooms logs and metrics.
+
+## Overview
+Within your Communication Services resource, the **Rooms Insights** feature displays data visualizations that convey insights from the Azure Monitor logs and metrics collected for Rooms. The visualizations within Insights are made possible by [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md). To take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md), and send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
++
+## Prerequisites
+
+- In order to take advantage of Workbooks, follow the instructions outlined in [Enable Azure Monitor in Diagnostic Settings](../enable-logging.md). You need to enable `Operational Rooms Logs`.
+- To use Workbooks, you need to send your logs to a [Log Analytics workspace](../../../../azure-monitor/logs/log-analytics-overview.md) destination.
+
+## Accessing Rooms Insights for Communication Services
+
+Inside your Azure Communication Services resource, scroll down in the left navigation to the **Monitor** category and select the **Insights** tab:
++
+## Rooms insights
+
+The **Rooms** tab displays the Rooms API success rate, Rooms API volume by operation type and response code, and a Rooms operation drill-down:
++++
+## More information about workbooks
+
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
+
+## Editing dashboards
+
+The **Rooms Insights** dashboards provided with your **Communication Services** resource can be customized by selecting **Edit** on the top navigation bar:
++
+Editing these dashboards doesn't modify the **Insights** tab, but rather creates a separate workbook that can be accessed on your resource's Workbooks tab:
++
+For an in-depth description of workbooks, refer to the [Azure Monitor Workbooks](../../../../azure-monitor/visualize/workbooks-overview.md) documentation.
communication-services Rooms Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/rooms-logs.md
+
+ Title: Azure Communication Services Rooms logs
+
+description: Learn about logging for Azure Communication Services Rooms.
++++ Last updated : 05/25/2023+++++
+# Azure Communication Services Rooms Logs
+
+Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal.
+
+> [!IMPORTANT]
+> The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/faq.yml)). To enable these logs for your Communication Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+## Prerequisites
+
+Azure Communication Services provides monitoring and analytics features via [Azure Monitor Logs overview](../../../../azure-monitor/logs/data-platform-logs.md) and [Azure Monitor Metrics](../../../../azure-monitor/essentials/data-platform-metrics.md). Each Azure resource requires its own diagnostic setting, which defines the following criteria:
+ * Categories of logs and metric data sent to the destinations defined in the setting. The available categories will vary for different resource types.
+ * One or more destinations to send the logs. Current destinations include Log Analytics workspace, Event Hubs, and Azure Storage.
+ * A single diagnostic setting can define no more than one of each of the destinations. If you want to send data to more than one of a particular destination type (for example, two different Log Analytics workspaces), then create multiple settings. Each resource can have up to five diagnostic settings.
+
+The following are instructions for configuring your Azure Monitor resource to start creating logs and metrics for your Communications Services. For detailed documentation about using Diagnostic Settings across all Azure resources, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
+
+> [!NOTE]
+> Under the diagnostic setting name, select "Operational Rooms Logs" to enable the logs for Rooms.
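The same diagnostic setting can also be scripted. A sketch with the Azure CLI follows; the resource IDs are placeholders, and the log category name shown is an assumption, so list the exact category names for your resource first with `az monitor diagnostic-settings categories list --resource <communication-services-resource-id>`.

```azurecli
az monitor diagnostic-settings create \
  --name "rooms-operational-logs" \
  --resource <communication-services-resource-id> \
  --workspace <log-analytics-workspace-resource-id> \
  --logs '[{"category": "RoomsOperational", "enabled": true}]'
```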
+
+## Overview
+
+Rooms operational logs are records of events and activities that provide insights into your Rooms API requests. They capture details about the performance and functionality of the Rooms primitive, including the status of each Rooms request and other properties.
+Rooms operational logs contain information that helps identify trends and patterns in Rooms usage.
+
+## Log categories
+
+Communication Services offers the following types of logs that you can enable:
+
+* **Operational Rooms logs** - provide basic information related to the Rooms service.
++
+### Operational Rooms logs schema
+
+| Property | Description |
+| -- | |
+| `Correlation ID` | Unique ID of the request. |
+| `Level` | The severity level of the event. |
+| `Operation Name` | The operation associated with the log record, for example, CreateRoom, PatchRoom, GetRoom, ListRooms, DeleteRoom, GetParticipants, UpdateParticipants. |
+| `Operation Version` | The api-version associated with the operation. |
+| `ResultType` | The status of the operation. |
+| `ResultSignature` | The sub status of the operation. If this operation corresponds to a REST API call, this field is the HTTP status code of the corresponding REST call. |
+| `RoomId` | The ID of the Room. |
+| `RoomLifeSpan` | The Room lifespan in minutes. |
+| `AddedRoomParticipantsCount` | The count of participants added to a Room. |
+| `UpsertedRoomParticipantsCount` | The count of participants upserted in a Room. |
+| `RemovedRoomParticipantsCount` | The count of participants removed from a Room. |
+| `TimeGenerated` | The timestamp (UTC) of when the log was generated. |
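Once these logs are flowing to a Log Analytics workspace, you can query them with Kusto. A sketch, assuming the Rooms logs land in a table named `ACSRoomsIncomingOperations` (verify the exact table name in your workspace):

```kusto
ACSRoomsIncomingOperations
| where TimeGenerated > ago(24h)
| summarize RequestCount = count() by OperationName, ResultType
| order by RequestCount desc
```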
++
+#### Example CreateRoom log
+
+```json
+ [
+ {
+ "CorrelationId": "Y4x6ZabFE0+E8ERwMpd68w",
+ "Level": "Informational",
+ "OperationName": "CreateRoom",
+ "OperationVersion": "2022-03-31-preview",
+ "ResultType": "Succeeded",
+ "ResultSignature": 201,
+ "RoomId": "99466898241024408",
+ "RoomLifespan": 61,
+ "AddedRoomParticipantsCount": 4,
+ "TimeGenerated": "5/25/2023, 4:32:49.469 AM"
+ }
+ ]
+```
+
+#### Example GetRoom log
+
+```json
+ [
+ {
+ "CorrelationId": "CNiZIX7fvkumtBSpFq7fxg",
+ "Level": "Informational",
+ "OperationName": "GetRoom",
+ "OperationVersion": "2022-03-31-preview",
+ "ResultType": "Succeeded",
+ "ResultSignature": "200",
+ "RoomId": "99466387192310000",
+ "RoomLifespan": 61,
+ "TimeGenerated": "2022-08-19T17:07:30.2400300Z"
+ }
+ ]
+```
+
+#### Example UpdateRoom log
+
+```json
+ [
+ {
+ "CorrelationId": "Bwqzh0pdnkGPDwNcMnBkng",
+ "Level": "Informational",
+ "OperationName": "UpdateRoom",
+ "OperationVersion": "2022-03-31-preview",
+ "ResultType": "Succeeded",
+ "ResultSignature": "200",
+ "RoomId": "99466387192310000",
+ "RoomLifespan": 121,
+ "TimeGenerated": "2022-08-19T17:07:30.3543160Z"
+ }
+ ]
+```
+
+#### Example DeleteRoom log
+
+```json
+ [
+ {
+ "CorrelationId": "x7rMXmihYEe3GFho9T/H2w",
+ "Level": "Informational",
+ "OperationName": "DeleteRoom",
+ "OperationVersion": "2022-02-01",
+ "ResultType": "Succeeded",
+ "ResultSignature": "204",
+ "RoomId": "99466387192310000",
+ "RoomLifespan": 121,
+ "TimeGenerated": "2022-08-19T17:07:30.5393800Z"
+ }
+ ]
+```
+
+#### Example ListRooms log
+
+```json
+ [
+ {
+ "CorrelationId": "KibM39CaXkK+HTInfsiY2w",
+ "Level": "Informational",
+ "OperationName": "ListRooms",
+ "OperationVersion": "2022-03-31-preview",
+ "ResultType": "Succeeded",
+ "ResultSignature": "200",
+ "TimeGenerated": "2022-08-19T17:07:30.5393800Z"
+ }
+ ]
+```
+
+#### Example UpdateParticipants log
+
+```json
+[
+    {
+        "CorrelationId": "zHT8snnUMkaXCRDFfjQDJw",
+        "Level": "Informational",
+        "OperationName": "UpdateParticipants",
+        "OperationVersion": "2022-03-31-preview",
+        "ResultType": "Succeeded",
+        "ResultSignature": "200",
+        "RoomId": "99466387192310000",
+        "RoomLifespan": 121,
+        "UpsertedRoomParticipantsCount": 5,
+        "RemovedRoomParticipantsCount": 1,
+        "TimeGenerated": "2023-04-14T17:07:30.5393800Z"
+    }
+]
+```
+
+ (See also [FAQ](../../../../azure-monitor/faq.yml)).
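Once exported from Log Analytics, entries like the samples above can be post-processed with a few lines of code. The following is a minimal sketch: the field names come from the example logs above, but the entries are abbreviated and the aggregation itself is illustrative, not part of the service.

```python
from collections import Counter

# Abbreviated Rooms operation log entries, as in the examples above.
logs = [
    {"OperationName": "CreateRoom", "ResultType": "Succeeded", "RoomLifespan": 61},
    {"OperationName": "GetRoom", "ResultType": "Succeeded", "RoomLifespan": 61},
    {"OperationName": "UpdateRoom", "ResultType": "Succeeded", "RoomLifespan": 121},
    {"OperationName": "DeleteRoom", "ResultType": "Succeeded", "RoomLifespan": 121},
]

# Count calls per operation and find the longest room lifespan among
# succeeded operations.
calls_per_operation = Counter(entry["OperationName"] for entry in logs)
max_lifespan = max(
    entry["RoomLifespan"] for entry in logs if entry["ResultType"] == "Succeeded"
)

print(calls_per_operation["UpdateRoom"])  # 1
print(max_lifespan)                       # 121
```

The same kind of aggregation can, of course, be expressed directly in a Log Analytics query instead of exporting first.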
communication-services Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/metrics.md
The following operations are available on Chat API request metrics:
| Operation / Route | Description |
| -- | - |
-| GetChatMessage | Gets a message by message id. |
+| GetChatMessage | Gets a message by message ID. |
| ListChatMessages | Gets a list of chat messages from a thread. |
| SendChatMessage | Sends a chat message to a thread. |
| UpdateChatMessage | Updates a chat message. |
The following operations are available on Chat API request metrics:
:::image type="content" source="./media/chat-metric.png" alt-text="Chat API Request Metric.":::
-If a request is made to an operation that isn't recognized, you'll receive a "Bad Route" value response.
+If a request is made to an operation that isn't recognized, you receive a "Bad Route" value response.
### SMS API requests
The following operations are available on SMS API request metrics:
| Operation / Route | Description |
| -- | - |
-| SMSMessageSent | Sends a SMS message. |
+| SMSMessageSent | Sends an SMS message. |
| SMSDeliveryReportsReceived | Gets SMS delivery reports. |
| SMSMessagesReceived | Gets SMS messages. |
The following operations are available on Network Traversal API request metrics:
:::image type="content" source="./media/acs-turn-metrics.png" alt-text="TURN Token Request Metric." lightbox="./media/acs-turn-metrics.png":::
+### Rooms API requests
+
+The following operations are available on Rooms API request metrics:
+
+| Operation / Route | Description |
+| -- | - |
+| CreateRoom | Creates a Room. |
+| DeleteRoom | Deletes a Room. |
+| GetRoom | Gets a Room by Room ID. |
+| PatchRoom | Updates a Room by Room ID. |
+| ListRooms | Lists all the Rooms for an Azure Communication Services resource. |
+| AddParticipants | Adds participants to a Room.|
+| RemoveParticipants | Removes participants from a Room. |
+| GetParticipants | Gets list of participants for a Room. |
+| UpdateParticipants | Updates list of participants for a Room. |
++

## Next steps

- Learn more about [Data Platform Metrics](../../azure-monitor/essentials/data-platform-metrics.md)
communication-services Number Lookup Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-sdk.md
[!INCLUDE [Private Preview Notice](../../includes/private-preview-include.md)]
-Azure Communication Services Number Lookup is part of the Phone Numbers SDK. It can be used for your applications to add additional checks before sending and SMS or placing a call.
+Azure Communication Services Number Lookup is part of the Phone Numbers SDK. It can be used for your applications to add additional checks before sending an SMS or placing a call.
## Number Lookup SDK capabilities
The following list presents the set of features which are currently available in
> [!div class="nextstepaction"]
> [Get started with Number Lookup API](../../quickstarts/telephony/number-lookup.md)

-- [Number Lookup Concept](../numbers/number-lookup-concept.md)
+- [Number Lookup Concept](../numbers/number-lookup-concept.md)
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
The tables below provide detailed capabilities mapped to the roles. At a high le
- Use the [QuickStart to create, manage and join a room](../../quickstarts/rooms/get-started-rooms.md).
- Learn how to [join a room call](../../quickstarts/rooms/join-rooms-call.md).
- Review the [Network requirements for media and signaling](../voice-video-calling/network-requirements.md).
+- To analyze your Rooms data, see [Rooms logs](../Analytics/logs/rooms-logs.md).
+- To learn how to use the Log Analytics workspace, see [Log Analytics tutorial](../../../azure-monitor/logs/log-analytics-tutorial.md).
+- To create your own queries in Log Analytics, see [Get started with queries](../../../azure-monitor/logs/get-started-queries.md).
communication-services Preferred Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/preferred-worker.md
zone_pivot_groups: acs-js-csharp
# Target a Preferred Worker
-In the context of a call center, customers might be assigned an account manager or have a relationship with a specific worker. As such, You'd want to route a specific job to a specific worker if possible.
+In the context of a call center, customers might be assigned an account manager or have a relationship with a specific worker. In that event, you might want to route a specific job to a specific worker if possible.
## Prerequisites

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A deployed Azure Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
- Optional: Complete the quickstart to [get started with Job Router](../../quickstarts/router/get-started-router.md)

## Set up worker selectors

Every worker automatically has an `Id` label. You can apply worker selectors to the job to target a specific worker.
-In the following example, a job is created that targets a specific worker. If that worker does not accept the job within the TTL of 1 minute, the condition for the specific worker is no longer be valid and the job could go to any worker.
+In the following example, a job is created that targets a specific worker. If that worker does not accept the job within the time to live (TTL) of 1 minute, the condition for the specific worker is no longer valid and the job can go to any worker.
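The expiry behavior described above can be sketched outside the SDK. This is a plain-Python illustration of the selector rule only; the helper and its names are hypothetical and are not Job Router APIs.

```python
from datetime import datetime, timedelta

def eligible_workers(workers, preferred_id, job_created_at, now, ttl=timedelta(minutes=1)):
    """While the worker selector's TTL is live, only the preferred worker
    is eligible; after the TTL expires, any worker may take the job."""
    if now - job_created_at < ttl:
        return [w for w in workers if w == preferred_id]
    return list(workers)

start = datetime(2023, 6, 1, 12, 0, 0)
workers = ["worker-a", "worker-b"]

# Within the 1-minute TTL, only the preferred worker matches.
print(eligible_workers(workers, "worker-a", start, start + timedelta(seconds=30)))
# After the TTL elapses, the job can go to any worker.
print(eligible_workers(workers, "worker-a", start, start + timedelta(minutes=2)))
```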
::: zone pivot="programming-language-csharp"
confidential-computing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview.md
Running workloads on the cloud requires trust. You give this trust to various pr
- **Infrastructure providers**: Trust cloud providers or manage your own on-premises data centers.

## Reducing the attack surface
-The trusted computing base (TCB) refers to all of a system's hardware, firmware, and software components that provide a secure environment. The components inside the TCB are considered "critical". If one component inside the TCB is compromised, the entire system's security may be jeopardized. A lower TCB means higher security. There's less risk of exposure to various vulnerabilities, malware, attacks, and malicious people.
-
+The Trusted Computing Base (TCB) refers to all of a system's hardware, firmware, and software components that provide a secure environment. The components inside the TCB are considered "critical". If one component inside the TCB is compromised, the entire system's security may be jeopardized. A lower TCB means higher security. There's less risk of exposure to various vulnerabilities, malware, attacks, and malicious people.
### Next steps

[Microsoft's offerings](https://aka.ms/azurecc) for confidential computing extend from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), as well as developer tools to support your journey to data and code confidentiality in the cloud.
Learn more about confidential computing on Azure
> [!div class="nextstepaction"]
> [Overview of Azure Confidential Computing](overview-azure-products.md)
+
container-apps Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/alerts.md
Title: Set up alerts in Azure Container Apps description: Set up alerts to monitor your container app. -+ Last updated 08/30/2022-+ # Set up alerts in Azure Container Apps
container-apps Azure Arc Enable Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md
Title: 'Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes' description: 'Tutorial: learn how to set up Azure Container Apps in your Azure Arc-enabled Kubernetes clusters.' -+ Last updated 3/24/2023-+ # Tutorial: Enable Azure Container Apps on Azure Arc-enabled Kubernetes (Preview)
container-apps Container Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/container-console.md
Title: Connect to a container console in Azure Container Apps description: Connect to a container console in your container app. -+ Last updated 08/30/2022-+
container-apps Dapr Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-github-actions.md
Title: Tutorial - Deploy a Dapr application with GitHub Actions for Azure Container Apps description: Learn about multiple revision management by deploying a Dapr application with GitHub Actions and Azure Container Apps. --++
container-apps Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started.md
Title: 'Quickstart: Deploy your first container app with containerapp up' description: Deploy your first application to Azure Container Apps using the Azure CLI containerapp up command. -+ Last updated 03/29/2023-+ ms.devlang: azurecli
container-apps Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md
To authenticate the request, add an `Authorization` header with a valid bearer t
+The execution history for scheduled and event-based jobs is limited to the most recent 100 successful and failed job executions.
+
To list all executions of a job or to get detailed output from a job, query the logs provider configured for your Container Apps environment.

## Advanced job configuration
container-apps Log Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-monitoring.md
Title: Monitor logs in Azure Container Apps with Log Analytics description: Monitor your container app logs with Log Analytics -+ Last updated 08/30/2022-+ # Monitor logs in Azure Container Apps with Log Analytics
container-apps Log Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-options.md
Title: Log storage and monitoring options in Azure Container Apps description: Description of logging options in Azure Container Apps -+ Last updated 09/29/2022-+ # Log storage and monitoring options in Azure Container Apps
container-apps Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/log-streaming.md
Title: View log streams in Azure Container Apps description: View your container app's log stream. -+ Last updated 03/24/2023-+ # View log streams in Azure Container Apps
container-apps Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/logging.md
Title: Application logging in Azure Container Apps description: Description of logging in Azure Container Apps -+ Last updated 09/29/2022-+ # Application Logging in Azure Container Apps
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
Title: Azure Container Apps image pull from Azure Container Registry with managed identity description: Set up Azure Container Apps to authenticate Azure Container Registry image pulls with managed identity -+ Last updated 09/16/2022-+ zone_pivot_groups: container-apps-interface-types
container-apps Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity.md
Title: Managed identities in Azure Container Apps description: Using managed identities in Container Apps -+ Last updated 09/29/2022-+ # Managed identities in Azure Container Apps
container-apps Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/metrics.md
Title: Monitor Azure Container Apps metrics description: Monitor your running apps metrics -+ Last updated 08/30/2022-+ # Monitor Azure Container Apps metrics
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Title: Observability in Azure Container Apps description: Monitor your running app in Azure Container Apps -+ Last updated 07/29/2022-+ # Observability in Azure Container Apps
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources.--++ Last updated 06/01/2023
container-apps Quickstart Code To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-code-to-cloud.md
Title: "Quickstart: Build and deploy your app from a repository to Azure Container Apps" description: Build your container app from a local or GitHub source repository and deploy in Azure Container Apps using az containerapp up. -+ - Last updated 03/29/2023-+ zone_pivot_groups: container-apps-image-build-from-repo
container-apps Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quickstart-portal.md
Title: 'Quickstart: Deploy your first container app using the Azure portal' description: Deploy your first application to Azure Container Apps using the Azure portal. -+ Last updated 12/13/2021-+
container-instances Container Instances Container Group Automatic Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-group-automatic-ssl.md
+
+ Title: Enable automatic HTTPS with Caddy as a sidecar container
+description: This guide describes how Caddy can be used as a reverse proxy to enhance your application with automatic HTTPS
+++++ Last updated : 06/12/2023++
+# Enable automatic HTTPS with Caddy in a sidecar container
+
+This article describes how Caddy can be used as a sidecar container in a [container group](container-instances-container-groups.md) acting as a reverse proxy to provide an automatically managed HTTPS endpoint for your application.
+
+Caddy is a powerful, enterprise-ready, open-source web server with automatic HTTPS, written in Go, that represents an alternative to Nginx.
+
+Automatic certificate management is possible because Caddy supports the ACMEv2 API ([RFC 8555](https://www.rfc-editor.org/rfc/rfc8555)), which interacts with [Let's Encrypt](https://letsencrypt.org/) to issue certificates.
+
+In this example, only the Caddy container gets exposed on ports 80/TCP and 443/TCP. The application behind the reverse proxy remains private. The network communication between Caddy and your application happens via localhost.
+
+> [!NOTE]
+> This stands in contrast to the intra-container-group communication known from Docker Compose, where containers can be referenced by name.
+
+The example mounts the [Caddyfile](https://caddyserver.com/docs/caddyfile), which is required to configure the reverse proxy, from a file share hosted on an Azure Storage account.
+
+> [!NOTE]
+> For production deployments, most users will want to bake the Caddyfile into a custom Docker image based on [caddy](https://hub.docker.com/_/caddy). This way, there's no need to mount files into the container.
++
+- This article requires version 2.0.55 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+## Prepare the Caddyfile
+
+Create a file called `Caddyfile` and paste the following configuration, which defines a reverse proxy pointing to your application container listening on 5000/TCP.
+
+```console
+my-app.westeurope.azurecontainer.io {
+ reverse_proxy http://localhost:5000
+}
+```
+
+It's important to note that the configuration references a domain name instead of an IP address. Caddy needs to be reachable at this URL to carry out the challenge step required by the ACME protocol and to successfully retrieve a certificate from Let's Encrypt.
+
+> [!NOTE]
+> For production deployments, users might want to use a domain name they control, such as `api.company.com`, and create a CNAME record pointing to, for example, `my-app.westeurope.azurecontainer.io`. If so, ensure that the custom domain name is used in the Caddyfile instead of the one assigned by Azure (for example, `*.westeurope.azurecontainer.io`). The custom domain name also needs to be referenced in the ACI YAML configuration described later in this example.
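As an illustration, assuming a hypothetical custom domain `api.company.com` with a CNAME record pointing at the container group's Azure-assigned name, the Caddyfile would reference the custom name instead:

```console
api.company.com {
    reverse_proxy http://localhost:5000
}
```

Caddy would then request the certificate for `api.company.com`, so the ACME challenge must be reachable through that name.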
+
+## Prepare storage account
+
+Create a storage account:
+
+```azurecli
+az storage account create \
+ --name <storage-account> \
+ --resource-group <resource-group> \
+ --location westeurope
+```
+
+Store the connection string in an environment variable:
+
+```azurecli
+AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string --name <storage-account> --resource-group <resource-group> --output tsv)
+```
+
+Create the file shares required to store the container state and Caddy configuration:
+
+```azurecli
+az storage share create \
+ --name proxy-caddyfile \
+ --account-name <storage-account>
+
+az storage share create \
+ --name proxy-config \
+ --account-name <storage-account>
+
+az storage share create \
+ --name proxy-data \
+ --account-name <storage-account>
+```
+
+Retrieve the storage account keys and make a note of them for later use:
+
+```azurecli
+az storage account keys list -g <resource-group> -n <storage-account>
+```
+
+## Deploy container group
+
+### Create YAML file
+
+Create a file called `ci-my-app.yaml` and paste the following content. Be sure to replace `<account-key>` with one of the access keys retrieved earlier, and `<storage-account>` with your storage account name.
+
+This YAML file defines two containers `reverse-proxy` and `my-app`. The `reverse-proxy` container mounts the three previously created file shares. The configuration also exposes port 80/TCP and 443/TCP of the `reverse-proxy` container. The communication between both containers happens on localhost only.
+
+>[!NOTE]
+>The `dnsNameLabel` key defines the public DNS name under which the container instance group will be reachable. It needs to match the FQDN defined in the `Caddyfile`.
+
+```yml
+name: ci-my-app
+apiVersion: "2021-10-01"
+location: westeurope
+properties:
+  containers:
+  - name: reverse-proxy
+    properties:
+      image: caddy:2.6
+      ports:
+      - protocol: TCP
+        port: 80
+      - protocol: TCP
+        port: 443
+      resources:
+        requests:
+          memoryInGB: 1.0
+          cpu: 1.0
+        limits:
+          memoryInGB: 1.0
+          cpu: 1.0
+      volumeMounts:
+      - name: proxy-caddyfile
+        mountPath: /etc/caddy
+      - name: proxy-data
+        mountPath: /data
+      - name: proxy-config
+        mountPath: /config
+  - name: my-app
+    properties:
+      image: mcr.microsoft.com/azuredocs/aci-helloworld
+      ports:
+      - port: 5000
+        protocol: TCP
+      environmentVariables:
+      - name: PORT
+        value: "5000"
+      resources:
+        requests:
+          memoryInGB: 1.0
+          cpu: 1.0
+        limits:
+          memoryInGB: 1.0
+          cpu: 1.0
+  ipAddress:
+    ports:
+    - protocol: TCP
+      port: 80
+    - protocol: TCP
+      port: 443
+    type: Public
+    dnsNameLabel: my-app
+  osType: Linux
+  volumes:
+  - name: proxy-caddyfile
+    azureFile:
+      shareName: proxy-caddyfile
+      storageAccountName: "<storage-account>"
+      storageAccountKey: "<account-key>"
+  - name: proxy-data
+    azureFile:
+      shareName: proxy-data
+      storageAccountName: "<storage-account>"
+      storageAccountKey: "<account-key>"
+  - name: proxy-config
+    azureFile:
+      shareName: proxy-config
+      storageAccountName: "<storage-account>"
+      storageAccountKey: "<account-key>"
+```
+
+### Deploy the container group
+
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command:
+
+```azurecli
+az group create --name <resource-group> --location westeurope
+```
+
+Deploy the container group with the [az container create](/cli/azure/container#az-container-create) command, passing the YAML file as an argument.
+
+```azurecli
+az container create --resource-group <resource-group> --file ci-my-app.yaml
+```
+
+### View the deployment state
+
+To view the state of the deployment, use the following [az container show](/cli/azure/container#az-container-show) command:
+
+```azurecli
+az container show --resource-group <resource-group> --name ci-my-app --output table
+```
+
+### Verify TLS connection
+
+Before verifying if everything went well, give the container group some time to fully start and for Caddy to request a certificate.
+
+#### OpenSSL
+
+We can use the `s_client` subcommand of OpenSSL for that purpose.
+
+```bash
+echo "Q" | openssl s_client -connect my-app.westeurope.azurecontainer.io:443
+```
+
+```console
+CONNECTED(00000188)
+
+Certificate chain
+ 0 s:CN = my-app.westeurope.azurecontainer.io
+ i:C = US, O = Let's Encrypt, CN = R3
+ 1 s:C = US, O = Let's Encrypt, CN = R3
+ i:C = US, O = Internet Security Research Group, CN = ISRG Root X1
+ 2 s:C = US, O = Internet Security Research Group, CN = ISRG Root X1
+ i:O = Digital Signature Trust Co., CN = DST Root CA X3
+
+Server certificate
+-----BEGIN CERTIFICATE-----
+MIIEgTCCA2mgAwIBAgISAxxidSnpH4vVuCZk9UNG/pd2MA0GCSqGSIb3DQEBCwUA
+MDIxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MQswCQYDVQQD
+EwJSMzAeFw0yMzA0MDYxODAzMzNaFw0yMzA3MDUxODAzMzJaMC4xLDAqBgNVBAMT
+I215LWFwcC53ZXN0ZXVyb3BlLmF6dXJlY29udGFpbmVyLmlvMFkwEwYHKoZIzj0C
+AQYIKoZIzj0DAQcDQgAEaaN/wGyFcimM+1O4WzbFgO6vIlXxXqp9vgmLZHpFrNwV
+aO8JbaB7hE+M5EAg34LDY80RyHgY+Ff4vTh2Z96rVqOCAl4wggJaMA4GA1UdDwEB
+/wQEAwIHgDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDAYDVR0TAQH/
+BAIwADAdBgNVHQ4EFgQUoL5DP+4PWiyE79hL5o+v8uymHdAwHwYDVR0jBBgwFoAU
+FC6zF7dYVsuuUAlA5h+vnYsUwsYwVQYIKwYBBQUHAQEESTBHMCEGCCsGAQUFBzAB
+hhVodHRwOi8vcjMuby5sZW5jci5vcmcwIgYIKwYBBQUHMAKGFmh0dHA6Ly9yMy5p
+LmxlbmNyLm9yZy8wLgYDVR0RBCcwJYIjbXktYXBwLndlc3RldXJvcGUuYXp1cmVj
+b250YWluZXIuaW8wTAYDVR0gBEUwQzAIBgZngQwBAgEwNwYLKwYBBAGC3xMBAQEw
+KDAmBggrBgEFBQcCARYaaHR0cDovL2Nwcy5sZXRzZW5jcnlwdC5vcmcwggEEBgor
+BgEEAdZ5AgQCBIH1BIHyAPAAdgC3Pvsk35xNunXyOcW6WPRsXfxCz3qfNcSeHQmB
+Je20mQAAAYdX8+CQAAAEAwBHMEUCIQC9Ztqd3DXoJhOIHBW+P7ketGrKlVA6nPZl
+9CiOrn6t8gIgXHcrbBqItemndRMv+UJ3DaBfTkYOqECecOJCgLhSYNUAdgDoPtDa
+PvUGNTLnVyi8iWvJA9PL0RFr7Otp4Xd9bQa9bgAAAYdX8+CAAAAEAwBHMEUCIBJ1
+24z44vKFUOLCi1a7ymVuWErkmLb/GtysvcxILaj0AiEAr49hyKfen4BbSTwC8Fg4
+/LgZnn2F3uHI+9p+ZMO9xTAwDQYJKoZIhvcNAQELBQADggEBACqxa21eiW3JrZwk
+FHgpd6SxhUeecrYXxFNva1Y6G//q2qCmGeKK3GK+ZGPqDtcoASH5t5ghV4dIT4WU
+auVDLFVywXzR8PT6QUu3W8QxU+W7406twBf23qMIgrF8PIWhStI5mn1uCpeqlnf5
+HpRaj2f5/5n19pcCZcrRx94G9qhPYdMzuy4mZRhxXRqrpIsabqX3DC2ld8dszCvD
+pkV61iuARgm3MIQz1yL/x5Bn4nywjnhYZA4KFktC0Ti55cPRh1mkzGQAsYQDdWrq
+dVav+U9dOLQ4Sq4suaDmzDzApr+hpQSJhwgRN16+tLMyZ6INAU2JWKDxiyDTdOuH
+jz456og=
+-----END CERTIFICATE-----
+subject=CN = my-app.westeurope.azurecontainer.io
+
+issuer=C = US, O = Let's Encrypt, CN = R3
++
+No client certificate CA names sent
+Peer signing digest: SHA256
+Peer signature type: ECDSA
+Server Temp Key: X25519, 253 bits
+
+SSL handshake has read 4208 bytes and written 401 bytes
+Verification error: unable to get local issuer certificate
+
+New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
+Server public key is 256 bit
+Secure Renegotiation IS NOT supported
+Compression: NONE
+Expansion: NONE
+No ALPN negotiated
+Early data was not sent
+Verify return code: 20 (unable to get local issuer certificate)
++
+Post-Handshake New Session Ticket arrived:
+SSL-Session:
+ Protocol : TLSv1.3
+ Cipher : TLS_AES_128_GCM_SHA256
+ Session-ID: 85F1A4290F99A0DD28C8CB21EF4269E7016CC5D23485080999A8548057729B24
+ Session-ID-ctx:
+ Resumption PSK: 752D438C19A5DBDBF10781F863D5E5D9A8859230968A9EAFFF7BBA86937D004F
+ PSK identity: None
+ PSK identity hint: None
+ SRP username: None
+ TLS session ticket lifetime hint: 604800 (seconds)
+ TLS session ticket:
+ 0000 - 2f 25 98 90 9d 46 9b 01-03 78 db bd 4d 64 b3 a6 /%...F...x..Md..
+ 0010 - 52 c0 7a 8a b6 3d b8 4b-c0 d7 fc 04 e8 63 d4 bb R.z..=.K.....c..
+ 0020 - 15 b3 25 b7 be 64 3d 30-2b d7 dc 7a 1a d1 22 63 ..%..d=0+..z.."c
+ 0030 - 42 30 90 65 6b b5 e1 83-a3 6c 76 c8 f6 ae e9 31 B0.ek....lv....1
+ 0040 - 45 91 33 57 8e 9f 4b 6a-2e 2c 9b f9 87 5f 71 1d E.3W..Kj.,..._q.
+ 0050 - 5a 84 59 50 17 31 1f 62-2b 0e 1e e5 70 03 d9 e9 Z.YP.1.b+...p...
+ 0060 - 50 1c 5d 1f a4 3c 8a 0e-f4 c5 7d ce 9e 5c 98 de P.]..<....}..\..
+ 0070 - e5 .
+
+ Start Time: 1680808973
+ Timeout : 7200 (sec)
+ Verify return code: 20 (unable to get local issuer certificate)
+ Extended master secret: no
+ Max Early Data: 0
+
+read R BLOCK
+```
+
+#### Chrome browser
+
+Navigate to https://my-app.westeurope.azurecontainer.io and verify the certificate by clicking on the padlock next to the URL.
++
+To see the certificate details, click on "Connection is secure" followed by "certificate is valid".
++
+## Next steps
+- [Caddy documentation](https://caddyserver.com/docs/)
+- [GitHub aci-helloworld](https://github.com/Azure-Samples/aci-helloworld)
+- [YAML reference: Azure Container Instances](container-instances-reference-yaml.md)
+- [Secure your codeless REST API with automatic HTTPS using Data API builder and Caddy](https://www.azureblue.io/secure-your-codeless-rest-api-with-automatic-https-using-data-api-builder-and-caddy/)
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
df = spark.read\
    .format("cosmos.olap")\
    .option("spark.synapse.linkedService","<your-linked-service-name>")\
    .option("spark.synapse.container","<your-container-name>")\
- .option("spark.synapse.dropColumn","FirstName,LastName")\
+ .option("spark.cosmos.dropColumn","FirstName,LastName")\
    .load()

# Removing multiple columns:
df = spark.read\
    .format("cosmos.olap")\
    .option("spark.synapse.linkedService","<your-linked-service-name>")\
    .option("spark.synapse.container","<your-container-name>")\
- .option("spark.synapse.dropColumn","FirstName,LastName;StreetName,StreetNumber")\
+ .option("spark.cosmos.dropColumn","FirstName,LastName;StreetName,StreetNumber")\
    .option("spark.cosmos.dropMultiColumnSeparator", ";")\
    .load()
```
cosmos-db How To Multi Master https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-multi-master.md
CosmosClient client = cosmosClientBuilder.Build();
## <a id="java4-multi-region-writes"></a> Java V4 SDK
-To enable multi-region writes in your application, call `.multipleWriteRegionsEnabled(true)` and `.preferredRegions(preferredRegions)` in the client builder, where `preferredRegions` is a `List` containing one element. That element is the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+To enable multi-region writes in your application, call `.multipleWriteRegionsEnabled(true)` and `.preferredRegions(preferredRegions)` in the client builder, where `preferredRegions` is a `List` of the regions the data is replicated into, ordered by preference (ideally, the regions with the shortest distance/best latency first):
# [Async](#tab/api-async)
To enable multi-region writes in your application, call `.multipleWriteRegionsEn
## <a id="java2-multi-region-writes"></a> Async Java V2 SDK
-The Java V2 SDK used the Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb). To enable multi-region writes in your application, set `policy.setUsingMultipleWriteLocations(true)` and set `policy.setPreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+The Java V2 SDK used the Maven [com.microsoft.azure::azure-cosmosdb](https://mvnrepository.com/artifact/com.microsoft.azure/azure-cosmosdb). To enable multi-region writes in your application, set `policy.setUsingMultipleWriteLocations(true)` and set `policy.setPreferredLocations` to the `List` of regions the data is replicated into, ordered by preference (ideally, the regions with the shortest distance/best latency first):
```java
ConnectionPolicy policy = new ConnectionPolicy();
AsyncDocumentClient client =
## <a id="javascript"></a>Node.js, JavaScript, and TypeScript SDKs
-To enable multi-region writes in your application, set `connectionPolicy.UseMultipleWriteLocations` to `true`. Also, set `connectionPolicy.PreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated:
+To enable multi-region writes in your application, set `connectionPolicy.UseMultipleWriteLocations` to `true`. Also, set `connectionPolicy.PreferredLocations` to the regions the data is replicated into, ordered by preference (ideally, the regions with the shortest distance/best latency first):
```javascript
const connectionPolicy: ConnectionPolicy = new ConnectionPolicy();
const client = new CosmosClient({
## <a id="python"></a>Python SDK
-To enable multi-region writes in your application, set `connection_policy.UseMultipleWriteLocations` to `true`. Also, set `connection_policy.PreferredLocations` to the region in which the application is being deployed and where Azure Cosmos DB is replicated.
+To enable multi-region writes in your application, set `connection_policy.UseMultipleWriteLocations` to `true`. Also, set `connection_policy.PreferredLocations` to the regions the data is replicated into, ordered by preference (ideally, the regions with the shortest distance/best latency first).
```python
connection_policy = documents.ConnectionPolicy()
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
The constraints on computed property query definitions are:
## Creating computed properties
-During the preview, computed properties must be created using the .NET v3 SDK. Once the computed properties have been created, you can execute queries that reference them using any method including all SDKs and Data Explorer in the Azure portal.
+During the preview, computed properties must be created using the .NET v3 or Java v4 SDK. Once the computed properties have been created, you can execute queries that reference them using any method including all SDKs and Data Explorer in the Azure portal.
|**SDK** |**Supported version** |**Notes** |
|--|-|-|
|.NET SDK v3 |>= [3.34.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.34.0-preview) |Computed properties are currently only available in preview package versions. |
+|Java SDK v4 |>= [4.46.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.46.0) |Computed properties are currently in preview. |
### Create computed properties using the SDK

You can either create a new container with computed properties defined, or add them to an existing container.
-Here's an example of how to create computed properties in a new container using the .NET SDK:
+Here's an example of how to create computed properties in a new container:
+
+### [.NET](#tab/dotnet)
```csharp
ContainerProperties containerProperties = new ContainerProperties("myContainer", "/pk")
Here's an example of how to create computed properties in a new container using
Container container = await client.GetDatabase("myDatabase").CreateContainerAsync(containerProperties);
```
-Here's an example of how to update computed properties on an existing container using the .NET SDK:
+### [Java](#tab/java)
+
+```java
+CosmosContainerProperties containerProperties = new CosmosContainerProperties("myContainer", "/pk");
+List<ComputedProperty> computedProperties = new ArrayList<>(List.of(new ComputedProperty("cp_lowerName", "SELECT VALUE LOWER(c.name) FROM c")));
+containerProperties.setComputedProperties(computedProperties);
+client.getDatabase("myDatabase").createContainer(containerProperties);
+```
++
+Here's an example of how to update computed properties on an existing container:
+
+### [.NET](#tab/dotnet)
```csharp
var container = client.GetDatabase("myDatabase").GetContainer("myContainer");

await container.ReplaceContainerAsync(containerProperties);
```
+### [Java](#tab/java)
+
+```java
+CosmosContainer container = client.getDatabase("myDatabase").getContainer("myContainer");
+// Read the current container properties
+CosmosContainerProperties containerProperties = container.read().getProperties();
+// Make the necessary updates to the container properties
+Collection<ComputedProperty> modifiedComputedProperties = containerProperties.getComputedProperties();
+modifiedComputedProperties.add(new ComputedProperty("cp_upperName", "SELECT VALUE UPPER(c.firstName) FROM c"));
+containerProperties.setComputedProperties(modifiedComputedProperties);
+// Update container with changes
+container.replace(containerProperties);
+```
++
> [!TIP]
> Every time you update container properties, the old values are overwritten.
> If you have existing computed properties and want to add new ones, ensure you add both new and existing computed properties to the collection.
cosmos-db Vercel Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vercel-integration.md
Use this guide if you have already identified the Vercel project(s) or want to i
- Vercel Account with Vercel Project – [Learn how to create a new Vercel Project](https://vercel.com/docs/concepts/projects/overview#creating-a-project)
-- Azure Cosmos DB - [Quickstart: Create an Azure Cosmos DB account](../cosmos-db/nosql/quickstart-portal.md)
+- Azure Cosmos DB - [Quickstart: Create an Azure Cosmos DB account](../cosmos-db/nosql/quickstart-portal.md) or create a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdbvercel)
- Some basic knowledge of Next.js, React, and TypeScript
Use this guide if you have already identified the Vercel project(s) or want to i
## Integrate Cosmos DB with Vercel using marketplace template
-You could use this [template](https://vercel.com/new/clone?demo-title=CosmosDB%20Starter&demo-description=Starter%20app%20built%20on%20Next.js%20and%20CosmosDB.&demo-url=https://cosmosdb-starter-test.vercel.app/&project-name=CosmosDB%20Starter&repository-name=cosmosdb-starter&repository-url=https%3A%2F%2Fgithub.com%2Fv1k1%2Fcosmosdb-starter&from=templates&integration-ids=oac_mPA9PZCLjkhQGhlA5zntNs0L&env=COSMOSDB_CONNECTION_STRING%2C%E2%80%A2%09COSMOSDB_CONTAINER_NAME) to deploy a starter web app on Vercel with Azure Cosmos DB integration.
+We have an [Azure Cosmos DB Next.js Starter](https://aka.ms/azurecosmosdb-vercel-template), a ready-to-use template with a guided structure and configuration that saves you time and effort on the initial project setup. Select **Deploy** to deploy it on Vercel, or select **View Repo** to view the [source code](https://github.com/Azure/azurecosmosdb-vercel-starter).
1. Choose the GitHub repository, where you want to clone the starter repo. :::image type="content" source="./media/integrations/vercel/create-git-repository.png" alt-text="Screenshot to create the repository." lightbox="./media/integrations/vercel/create-git-repository.png":::
You could use this [template](https://vercel.com/new/clone?demo-title=CosmosDB%2
:::image type="content" source="./media/integrations/vercel/configure-project.png" alt-text="Screenshot shows the required variables to establish the connection with Azure Cosmos DB." lightbox="./media/integrations/vercel/configure-project.png":::
-4. Upon successful completion, the completion page would contain the link to the deployed app, or you go to the Vercel project's dashboard to get the link of your app.
+4. Upon successful completion, the completion page contains a link to the deployed app. You can also get the link from your Vercel project's dashboard. Your app is now successfully deployed to Vercel.
+
+## Next steps
+
+- To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../cosmos-db/introduction.md).
+- Create a new [Vercel project](https://vercel.com/dashboard).
+- Learn about [Try Cosmos DB and limits](../cosmos-db/try-free.md).
cost-management-billing Automate Budget Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/automate-budget-creation.md
Languages supported by a culture code:
## Configure cost-based orchestration for budget alerts
-You can configure budgets to start automated actions using Azure Action Groups. To learn more about automating actions using budgets, see [Automation with Azure Budgets](../manage/cost-management-budget-scenario.md).
+You can configure budgets to start automated actions using Azure Action Groups. To learn more about automating actions using budgets, see [Automation with budgets](../manage/cost-management-budget-scenario.md).
## Next steps
cost-management-billing Analyze Cost Data Azure Cost Management Power Bi Template App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/analyze-cost-data-azure-cost-management-power-bi-template-app.md
This error occurs because of a bug with the underlying metadata. The issue happe
#### Solution

-- Until the bug is fixed, you can work around the problem by adding a test budget in the Azure portal at the billing account/EA enrollment level. The test budget unblocks connecting with Power BI. For more information about creating a budget, see [Tutorial: Create and manage Azure budgets](tutorial-acm-create-budgets.md).
+- Until the bug is fixed, you can work around the problem by adding a test budget in the Azure portal at the billing account/EA enrollment level. The test budget unblocks connecting with Power BI. For more information about creating a budget, see [Tutorial: Create and manage budgets](tutorial-acm-create-budgets.md).
### Invalid credentials for AzureBlob error
cost-management-billing Aws Integration Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-manage.md
After you've set up and configured AWS Cost and Usage report integration for Cos
If you haven't already configured the integration, see [Set up and configure AWS Usage report integration](aws-integration-set-up-configure.md).
-_Before you begin_: If you're unfamiliar with cost analysis, see the [Explore and analyze costs with Cost analysis](quick-acm-cost-analysis.md) quickstart. And, if you're unfamiliar with budgets in Azure, see the [Create and manage Azure budgets](tutorial-acm-create-budgets.md) tutorial.
+_Before you begin_: If you're unfamiliar with cost analysis, see the [Explore and analyze costs with Cost analysis](quick-acm-cost-analysis.md) quickstart. And, if you're unfamiliar with budgets in Azure, see the [Create and manage budgets](tutorial-acm-create-budgets.md) tutorial.
## View AWS costs in cost analysis
cost-management-billing Aws Integration Set Up Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/aws-integration-set-up-configure.md
AWS linked accounts always inherit permissions from the management group that th
- Now that you've set up and configured AWS Cost and Usage report integration, continue to [Manage AWS costs and usage](aws-integration-manage.md).
- If you're unfamiliar with cost analysis, see [Explore and analyze costs with cost analysis](quick-acm-cost-analysis.md) quickstart.
-- If you're unfamiliar with budgets in Azure, see [Create and manage Azure budgets](tutorial-acm-create-budgets.md).
+- If you're unfamiliar with budgets in Azure, see [Create and manage budgets](tutorial-acm-create-budgets.md).
cost-management-billing Cost Mgt Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/cost-mgt-best-practices.md
For more information about exporting billing data, see [Create and manage export
### Create budgets
-After you've identified and analyzed your spending patterns, it's important to begin setting limits for yourself and your teams. Azure budgets give you the ability to set either a cost or usage-based budget with many thresholds and alerts. Make sure to review the budgets that you create regularly to see your budget burn-down progress and make changes as needed. Azure budgets also allow you to configure an automation trigger when a given budget threshold is reached. For example, you can configure your service to shut down VMs. Or you can move your infrastructure to a different pricing tier in response to a budget trigger.
+After you've identified and analyzed your spending patterns, it's important to begin setting limits for yourself and your teams. Budgets give you the ability to set either a cost or usage-based budget with many thresholds and alerts. Make sure to review the budgets that you create regularly to see your budget burn-down progress and make changes as needed. Budgets also allow you to configure an automation trigger when a given budget threshold is reached. For example, you can configure your service to shut down VMs. Or you can move your infrastructure to a different pricing tier in response to a budget trigger.
-For more information, see [Azure Budgets](tutorial-acm-create-budgets.md).
+For more information, see [Create budgets](tutorial-acm-create-budgets.md).
For more information about budget-based automation, see [Budget Based Automation](../manage/cost-management-budget-scenario.md).
cost-management-billing Manage Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/manage-automation.md
GET https://management.azure.com/{scope}/providers/Microsoft.Consumption/usageDe
## Automate alerts and actions with budgets
-There are two critical components to maximizing the value of your investment in the cloud. One is automatic budget creation. The other is configuring cost-based orchestration in response to budget alerts. There are different ways to automate Azure budget creation. Various alert responses happen when your configured alert thresholds are exceeded.
+There are two critical components to maximizing the value of your investment in the cloud. One is automatic budget creation. The other is configuring cost-based orchestration in response to budget alerts. There are different ways to automate budget creation. Various alert responses happen when your configured alert thresholds are exceeded.
The following sections cover available options and provide sample API requests to get you started with budget automation.
Request URL: `PUT https://management.azure.com/subscriptions/{SubscriptionId} /p
### Configure cost-based orchestration for budget alerts
-You can configure budgets to start automated actions using Azure Action Groups. To learn more about automating actions using budgets, see [Automation with Azure Budgets](../manage/cost-management-budget-scenario.md).
+You can configure budgets to start automated actions using Azure Action Groups. To learn more about automating actions using budgets, see [Automation with budgets](../manage/cost-management-budget-scenario.md).
## Data latency and rate limits
cost-management-billing Quick Create Budget Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-bicep.md
Title: Quickstart - Create an Azure budget with Bicep
+ Title: Quickstart - Create a budget with Bicep
description: Quickstart showing how to create a budget with Bicep.
The Bicep file used in this quickstart is from [Azure Quickstart Templates](http
One Azure resource is defined in the Bicep file:

-- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the Bicep file
The Bicep file used in this quickstart is from [Azure Quickstart Templates](http
One Azure resource is defined in the Bicep file:

-- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the Bicep file
The Bicep file used in this quickstart is from [Azure Quickstart Templates](http
One Azure resource is defined in the Bicep file:

-- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+- [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the Bicep file
Remove-AzConsumptionBudget -Name MyBudget
## Next steps
-In this quickstart, you created an Azure budget and deployed it using Bicep. To learn more about Cost Management and Billing and Bicep, continue on to the articles below.
+In this quickstart, you created a budget and deployed it using Bicep. To learn more about Cost Management and Billing and Bicep, continue on to the articles below.
- Read the [Cost Management and Billing](../cost-management-billing-overview.md) overview.
- [Create budgets](tutorial-acm-create-budgets.md) in the Azure portal.
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
The template used in this quickstart is from [Azure Quickstart Templates](https:
One Azure resource is defined in the template:
-* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the template
The template used in this quickstart is from [Azure Quickstart Templates](https:
One Azure resource is defined in the template:
-* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the template
The template used in this quickstart is from [Azure Quickstart Templates](https:
One Azure resource is defined in the template:
-* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create an Azure budget.
+* [Microsoft.Consumption/budgets](/azure/templates/microsoft.consumption/budgets): Create a budget.
### Deploy the template
Write-Host "Press [ENTER] to continue..."
## Next steps
-In this quickstart, you created an Azure budget and deployed it. To learn more about Cost Management and Billing and Azure Resource Manager, continue on to the articles below.
+In this quickstart, you created a budget and deployed it. To learn more about Cost Management and Billing and Azure Resource Manager, continue on to the articles below.
- Read the [Cost Management and Billing](../cost-management-billing-overview.md) overview
- [Create budgets](tutorial-acm-create-budgets.md) in the Azure portal
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage Azure budgets
+ Title: Tutorial - Create and manage budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume.
-# Tutorial: Create and manage Azure budgets
+# Tutorial: Create and manage budgets
Budgets in Cost Management help you plan for and drive organizational accountability. They help you proactively inform others about their spending to manage costs and monitor how spending progresses over time.
To toggle between configuring an Actual vs Forecasted cost alert, use the `Type`
If you want to receive emails, add azure-noreply@microsoft.com to your approved senders list so that emails don't go to your junk email folder. For more information about notifications, see [Use cost alerts](./cost-mgt-alerts-monitor-usage-spending.md).
-In the following example, an email alert gets generated when 90% of the budget is reached. If you create a budget with the Budgets API, you can also assign roles to people to receive alerts. Assigning roles to people isn't supported in the Azure portal. For more about the Azure budgets API, see [Budgets API](/rest/api/consumption/budgets). If you want to have an email alert sent in a different language, see [Supported locales for budget alert emails](../automate/automate-budget-creation.md#supported-locales-for-budget-alert-emails).
+In the following example, an email alert gets generated when 90% of the budget is reached. If you create a budget with the Budgets API, you can also assign roles to people to receive alerts. Assigning roles to people isn't supported in the Azure portal. For more about the Budgets API, see [Budgets API](/rest/api/consumption/budgets). If you want to have an email alert sent in a different language, see [Supported locales for budget alert emails](../automate/automate-budget-creation.md#supported-locales-for-budget-alert-emails).
Alert limits support a range of 0.01% to 1000% of the budget threshold that you've provided.
When you create or edit a budget for a subscription or resource group scope, you
Action Groups are currently only supported for subscription and resource group scopes. For more information about creating action groups, see [action groups](../../azure-monitor/alerts/action-groups.md).
-For more information about using budget-based automation with action groups, see [Manage costs with Azure budgets](../manage/cost-management-budget-scenario.md).
+For more information about using budget-based automation with action groups, see [Manage costs with budgets](../manage/cost-management-budget-scenario.md).
To create or update action groups, select **Manage action group** while you're creating or editing a budget.
cost-management-billing Cost Management Budget Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cost-management-budget-scenario.md
Last updated 12/06/2022
-# Manage costs with Azure Budgets
+# Manage costs with budgets
Cost control is a critical component to maximizing the value of your investment in the cloud. There are several scenarios where cost visibility, reporting, and cost-based orchestration are critical to continued business operations. [Cost Management APIs](/rest/api/consumption/) provide a set of APIs to support each of these scenarios. The APIs provide usage details, allowing you to view granular instance level costs.
These actions included in this tutorial allow you to:
- Create an Azure Automation Runbook to stop VMs by using webhooks.
- Create an Azure Logic App to be triggered based on the budget threshold value and call the runbook with the right parameters.
- Create an Azure Monitor Action Group that will be configured to trigger the Azure Logic App when the budget threshold is met.
-- Create the Azure budget with the wanted thresholds and wire it to the action group.
+- Create the budget with the wanted thresholds and wire it to the action group.
## Create an Azure Automation Runbook
When you create the action group, you'll point to the Logic App that you created
You're done with all the supporting components needed to effectively orchestrate your budget. Now all you need to do is create the budget and configure it to use the action group you created.
-## Create the Azure Budget
+## Create the budget
You can create a budget in the Azure portal using the [Budget feature](../costs/tutorial-acm-create-budgets.md) in Cost Management. Or, you can create a budget using REST APIs, PowerShell cmdlets, or the CLI. The following procedure uses the REST API. Before calling the REST API, you'll need an authorization token. To create an authorization token, you can use the [ARMClient](https://github.com/projectkudu/ARMClient) project. The **ARMClient** allows you to authenticate yourself to the Azure Resource Manager and get a token to call the APIs.
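As a rough sketch of what the REST request can look like (the subscription ID, budget name, dates, and `api-version` below are placeholder assumptions — verify them against the Budgets API reference before use), you can assemble the URL and body like this:

```shell
#!/usr/bin/env bash
# Hypothetical values -- substitute your own subscription ID, budget name, and amounts.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
BUDGET_NAME="MyBudget"
API_VERSION="2021-10-01"   # assumed api-version; check the Budgets API reference

URL="https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/providers/Microsoft.Consumption/budgets/${BUDGET_NAME}?api-version=${API_VERSION}"

# Monthly cost budget of 100 with a notification at the 90% threshold.
BODY=$(cat <<EOF
{
  "properties": {
    "category": "Cost",
    "amount": 100,
    "timeGrain": "Monthly",
    "timePeriod": {
      "startDate": "2023-06-01T00:00:00Z",
      "endDate": "2024-06-01T00:00:00Z"
    },
    "notifications": {
      "Actual_GreaterThan_90_Percent": {
        "enabled": true,
        "operator": "GreaterThan",
        "threshold": 90,
        "contactEmails": ["admin@contoso.com"]
      }
    }
  }
}
EOF
)

echo "PUT ${URL}"
echo "${BODY}"
# With a token (for example from ARMClient) in $TOKEN, you could submit the request with:
#   curl -X PUT -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -d "$BODY" "$URL"
```

The same body shape applies whether you send it with curl, ARMClient, or another HTTP client.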
By using this tutorial, you learned:
- How to create an Azure Automation Runbook to stop VMs.
- How to create an Azure Logic App that is triggered based on the budget threshold values and call the related runbook with the right parameters.
- How to create an Azure Monitor Action Group that was configured to trigger the Azure Logic App when the budget threshold is met.
-- How to create the Azure budget with the desired thresholds and wire it to the action group.
+- How to create the budget with the desired thresholds and wire it to the action group.
You now have a fully functional budget for your subscription that will shut down your VMs when you reach your configured budget thresholds.
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
If your Enterprise Agreement doesn't have a support plan and you try to transfer
## Manage department and account spending with budgets
-EA customers can set budgets for each department and account under an enrollment. Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spend limit. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. For more information about how to create budgets, see [Tutorial: Create and manage Azure budgets](../costs/tutorial-acm-create-budgets.md).
+EA customers can set budgets for each department and account under an enrollment. Budgets in Cost Management help you plan for and drive organizational accountability. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spend limit. When the budget thresholds you've created are exceeded, only notifications are triggered. None of your resources are affected and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs. For more information about how to create budgets, see [Tutorial: Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
## Enterprise Agreement user roles
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Notification contacts are sent email communications about the Azure Enterprise A
### Spending quotas
-Spending quotas that were set for departments in your Enterprise Agreement enrollment are replaced with budgets in the new billing account. A budget is created for each spending quota set on departments in your enrollment. For more information on budgets, see [Tutorial: Create and manage Azure budgets](../costs/tutorial-acm-create-budgets.md).
+Spending quotas that were set for departments in your Enterprise Agreement enrollment are replaced with budgets in the new billing account. A budget is created for each spending quota set on departments in your enrollment. For more information on budgets, see [Tutorial: Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
### Cost centers
cost-management-billing Mosp New Customer Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mosp-new-customer-experience.md
For example, if your billing period was November 24 to December 23 for your old
#### Budgets
-You can now create budgets for the billing account, allowing you to track costs across subscriptions. You can also stay on top of your purchase charges using budgets. For more information about budgets, see [Create and manage Azure budgets](../costs/tutorial-acm-create-budgets.md).
+You can now create budgets for the billing account, allowing you to track costs across subscriptions. You can also stay on top of your purchase charges using budgets. For more information about budgets, see [Create and manage budgets](../costs/tutorial-acm-create-budgets.md).
#### Exports
data-factory Self Hosted Integration Runtime Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/self-hosted-integration-runtime-troubleshoot-guide.md
# Troubleshoot self-hosted integration runtime

This article explores common troubleshooting methods for self-hosted integration runtime (IR) in Azure Data Factory and Synapse workspaces.

## Gather self-hosted IR logs
+### Azure Data Factory and Azure Synapse Analytics
+
For failed activities that are running on a self-hosted IR or a shared IR, the service supports viewing and uploading error logs. To get the error report ID, follow the instructions here, and then enter the report ID to search for related known issues.

1. On the Monitor page for the service UI, select **Pipeline runs**.
For failed activities that are running on a self-hosted IR or a shared IR, the s
:::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/send-logs.png" alt-text="Screenshot of the activity logs for the failed activity.":::
-3. For further assistance, select **Send logs**.
+1. For further assistance, select **Send logs**.
The **Share the self-hosted integration runtime (IR) logs with Microsoft** window opens.
For failed activities that are running on a self-hosted IR or a shared IR, the s
> [!NOTE]
> Log viewing and uploading requests are executed on all online self-hosted IR instances. If any logs are missing, make sure that all the self-hosted IR instances are online.
+### Microsoft Purview
+
+For failed Microsoft Purview activities that are running on a self-hosted IR or shared IR, the service supports viewing and uploading error logs from the [Windows Event Viewer](/shows/inside/event-viewer).
+
+You can look up any errors you see in the error guide below.
+To get support and troubleshooting guidance for SHIR issues, you may need to generate an error report ID and [reach out to Microsoft support](https://azure.microsoft.com/support/create-ticket/).
+
+To generate the error report ID for Microsoft Support, follow these instructions:
+
+1. Before starting a scan in the Microsoft Purview governance portal:
+
+ 1. Navigate to the machine where the self-hosted integration runtime is installed and open the Windows Event Viewer.
+ 1. Clear the Windows Event Viewer logs in the **Integration Runtime** section. Right-click on the logs and select the clear logs option.
+ 1. Navigate back to the Microsoft Purview governance portal and start the scan.
+
+1. Once the scan shows status **Failed**, navigate back to the SHIR VM, or machine and refresh the event viewer in the **Integration Runtime** section.
+1. The activity logs are displayed for the failed scan run.
+
+1. For further assistance from Microsoft, select **Send Logs**.
+
+ The **Share the self-hosted integration runtime (SHIR) logs with Microsoft** window opens.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/send-logs-integration-runtime.png" lightbox="media/self-hosted-integration-runtime-troubleshoot-guide/send-logs-integration-runtime.png" alt-text="Screenshot of the send logs button on the self-hosted integration runtime (SHIR) to upload logs to Microsoft.":::
+
+1. Select which logs you want to send.
+
+ * For a *self-hosted IR*, you can upload logs that are related to the failed activity or all logs on the self-hosted IR node.
+ * For a *shared IR*, you can upload only logs that are related to the failed activity.
+
+1. When the logs are uploaded, keep a record of the Report ID for later use if you need further assistance to solve the issue.
+
+ :::image type="content" source="media/self-hosted-integration-runtime-troubleshoot-guide/send-logs-complete.png" lightbox="media/self-hosted-integration-runtime-troubleshoot-guide/send-logs-complete.png" alt-text="Screenshot of the displayed report ID in the upload progress window for the Purview SHIR logs.":::
+
+> [!NOTE]
+> Log viewing and uploading requests are executed on all online self-hosted IR instances. If any logs are missing, make sure that all the self-hosted IR instances are online.
## Self-hosted IR general failure or error
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
> > Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
-## March 2023
+## May 2023
-### Key Announcement: Preview Release
-Azure Data Manager for Agriculture is now available in preview. See our blog post [here](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
+### Understanding throttling
+Azure Data Manager for Agriculture implements API throttling to ensure consistent performance by limiting the number of requests within a specified time frame. Throttling prevents resource overuse and maintains optimal performance and reliability for all customers. Details are available [here](concepts-understanding-throttling.md).
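As an illustrative sketch only (not part of the Azure Data Manager for Agriculture API surface), a client that hits a throttling limit can back off and retry; the helper and the stand-in "throttled" call below are hypothetical:

```shell
#!/usr/bin/env bash
# Generic retry-with-exponential-backoff helper for a throttled API call.
retry_with_backoff() {
  local max_attempts=$1; shift
  local delay=1
  local i
  for ((i = 1; i <= max_attempts; i++)); do
    if "$@"; then
      return 0
    fi
    echo "attempt $i throttled; backing off ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))
  done
  return 1
}

# Stand-in for a throttled API call: fails (as if HTTP 429) twice, then succeeds.
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
fake_api() {
  local n=$(( $(cat "$attempts_file") + 1 ))
  echo "$n" > "$attempts_file"
  [ "$n" -ge 3 ]
}

retry_with_backoff 5 fake_api && echo "request succeeded after $(cat "$attempts_file") attempts"
```

In a real client you would replace `fake_api` with the HTTP call and honor any `Retry-After` header the service returns instead of a fixed doubling delay.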
## April 2023
You can connect to Azure Data Manager for Agriculture service from your virtual
### BYOL for satellite imagery

To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
-## May 2023
+## March 2023
-### Understanding throttling
-Azure Data Manager for Agriculture implements API throttling to ensure consistent performance by limiting the number of requests within a specified time frame. Throttling prevents resource overuse and maintains optimal performance and reliability for all customers. Details are available [here](concepts-understanding-throttling.md).
+### Key Announcement: Preview Release
+Azure Data Manager for Agriculture is now available in preview. See our blog post [here](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
## Next steps

* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Title: Agentless Container Posture for Microsoft Defender for Cloud
description: Learn how Agentless Container Posture offers discovery, visibility, and vulnerability assessment for Containers without installing an agent on your machines.
Previously updated : 05/30/2023 Last updated : 06/01/2023
Learn more about [CSPM](concept-cloud-security-posture-management.md).
Agentless Container Posture provides the following capabilities:

-- [Agentless discovery and visibility](#agentless-discovery-and-visibility-within-kubernetes-components) within Kubernetes components.
-- [Agentless container registry vulnerability assessment](#agentless-container-registry-vulnerability-assessment), using the image scanning results of your Azure Container Registry (ACR) with cloud security explorer.
- Using Kubernetes [attack path analysis](concept-attack-path.md) to visualize risks and threats to Kubernetes environments.
- Using [cloud security explorer](how-to-manage-cloud-security-explorer.md) for risk hunting by querying various risk scenarios.
- Viewing security insights, such as internet exposure, and other predefined security scenarios. For more information, search for `Kubernetes` in the [list of Insights](attack-path-reference.md#insights).
+- [Agentless discovery and visibility](#agentless-discovery-and-visibility-within-kubernetes-components) within Kubernetes components.
+- [Agentless container registry vulnerability assessment](#agentless-container-registry-vulnerability-assessment), using the image scanning results of your Azure Container Registry (ACR) with cloud security explorer.
+All of these capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
## Agentless discovery and visibility within Kubernetes components
The discovery process is based on snapshots taken at intervals:
:::image type="content" source="media/concept-agentless-containers/diagram-permissions-architecture.png" alt-text="Diagram of the permissions architecture." lightbox="media/concept-agentless-containers/diagram-permissions-architecture.png":::
-By enabling the Agentless discovery for Kubernetes extension, the following process occurs:
+When you enable the Agentless discovery for Kubernetes extension, the following process occurs:
- **Create**: MDC (Microsoft Defender for Cloud) creates an identity in customer environments called CloudPosture/securityOperator/DefenderCSPMSecurityOperator.
By enabling the Agentless discovery for Kubernetes extension, the following proc
- **Bind**: Upon discovery of an AKS cluster, MDC performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives MDC data plane read permission inside the cluster.
+### What's the refresh interval?
+
+Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in Cloud Security Explorer and Attack Path.
## Agentless Container registry vulnerability assessment
+> [!NOTE]
+> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](https://learn.microsoft.com/azure/container-registry/container-registry-import-images?tabs=azure-cli).
+ - Container registry vulnerability assessment scans images in your Azure Container Registry (ACR) to provide recommendations for improving your posture by remediating vulnerabilities. - Vulnerability assessment for Containers in Defender Cloud Security Posture Management (CSPM) gives you frictionless, wide, and instant visibility on actionable posture issues without the need for installed agents, network connectivity requirements, or container performance impact.
Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi
| Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg). -- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [sub-assessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
+- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).
- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](how-to-enable-agentless-containers.md#support-for-exemptions). ### Scan Triggers
Container registry vulnerability assessment scans container images stored in you
1. Once a day, all discovered images are pulled and an inventory is created for each image that is discovered. 1. Vulnerability reports for known vulnerabilities (CVEs) are generated for each software that is present on an image inventory. 1. Vulnerability reports are refreshed daily for any image pushed during the last 90 days to a registry or currently running on a Kubernetes cluster monitored by Defender CSPM Agentless discovery and visibility for Kubernetes, or monitored by the Defender for Containers agent (profile or extension).
+
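The 90-day refresh rule in the scan triggers above can be expressed as a simple eligibility check. The following is a minimal Python sketch of that rule; `pushed_at` and `is_running` are hypothetical inputs for illustration, not values from any Defender API:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Reports are refreshed daily for images pushed within the last 90 days
# or currently running on a monitored Kubernetes cluster.
REFRESH_WINDOW = timedelta(days=90)

def needs_daily_refresh(pushed_at: datetime, is_running: bool,
                        now: Optional[datetime] = None) -> bool:
    """Return True if an image's vulnerability report is refreshed daily."""
    now = now or datetime.now(timezone.utc)
    return is_running or (now - pushed_at) <= REFRESH_WINDOW

now = datetime.now(timezone.utc)
# An image pushed 120 days ago but still running is refreshed; the same
# image, no longer running, falls out of the daily refresh window.
print(needs_daily_refresh(now - timedelta(days=120), True, now))   # True
print(needs_daily_refresh(now - timedelta(days=120), False, now))  # False
```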
+### If I remove an image from my registry, how long before vulnerabilities reports on that image would be removed?
+It currently takes three days to remove findings for a deleted image. We're working on faster removal of findings for deleted images.
## Next steps - Learn about [support and prerequisites for agentless containers posture](support-agentless-containers-posture.md)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Previously updated : 09/11/2022 Last updated : 06/12/2023 # Overview of Microsoft Defender for Containers
Learn more about:
## Run-time protection for Kubernetes nodes and clusters
-Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
+Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
-Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
+Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. This means that security alerts are only triggered for actions and deployments that occur after you've enabled Defender for Containers on your subscription.
+
+Examples of security events that Microsoft Defender for Containers monitors include:
+
+- Exposed Kubernetes dashboards
+- Creation of high privileged roles
+- Creation of sensitive mounts
+
+You can view security alerts by selecting the Security alerts tile at the top of Defender for Cloud's overview page, or the link from the sidebar.
+
+ :::image type="content" source="media/managing-and-responding-alerts/overview-page-alerts-links.png" alt-text="Screenshot showing how to get to the security alerts page from Microsoft Defender for Cloud's overview page." lightbox="media/managing-and-responding-alerts/overview-page-alerts-links.png":::
+
+The security alerts page opens.
+
+ :::image type="content" source="media/defender-for-containers/view-containers-alerts.png" alt-text="Screenshot showing you where to view the list of alerts." lightbox="media/defender-for-containers/view-containers-alerts.png":::
+
+Security alerts for runtime workload in the clusters can be recognized by the `K8S.NODE_` prefix of the alert type. For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
+
+Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
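The `K8S.NODE_` naming convention above makes it straightforward to separate runtime (host-level) alerts from cluster-level ones when processing alerts programmatically. A minimal sketch, assuming alerts have already been retrieved (for example, via the REST API) as dicts with an `alertType` field; the alert-type values shown are hypothetical examples:

```python
def split_by_scope(alerts):
    """Partition Defender for Containers alerts into runtime (host-level)
    and cluster-level groups based on the alertType prefix."""
    runtime = [a for a in alerts if a["alertType"].startswith("K8S.NODE_")]
    cluster = [a for a in alerts if a["alertType"].startswith("K8S_")]
    return runtime, cluster

alerts = [
    {"alertType": "K8S.NODE_SuspectProcess"},  # hypothetical runtime alert
    {"alertType": "K8S_ExposedDashboard"},     # hypothetical cluster alert
]
runtime, cluster = split_by_scope(alerts)
print(len(runtime), len(cluster))  # 1 1
```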
Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft.
defender-for-cloud How To Enable Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-enable-agentless-containers.md
Title: How-to enable Agentless Container posture in Microsoft Defender CSPM description: Learn how to onboard Agentless Containers - Previously updated : 05/30/2023 Last updated : 06/01/2023 # Onboard Agentless Container posture in Defender CSPM
-Onboarding Agentless Container posture in Defender CSPM will allow you to gain all its [capabilities](concept-agentless-containers.md#capabilities).
+Onboarding Agentless Container posture in Defender CSPM will allow you to gain all its [capabilities](concept-agentless-containers.md#capabilities).
+Defender CSPM includes [two extensions](#what-are-the-extensions-for-agentless-container-posture-management) that allow for agentless visibility into Kubernetes and containers registries across your organization's SDLC and runtime.
**To onboard Agentless Container posture in Defender CSPM:**
Onboarding Agentless Container posture in Defender CSPM will allow you to gain a
A notification message pops up in the top right corner that will verify that the settings were saved successfully.
+## What are the extensions for Agentless Container Posture management?
+
+There are two extensions that provide agentless CSPM functionality:
+
+- **Container registries vulnerability assessments**: Provides agentless containers registries vulnerability assessments. Recommendations are available based on the vulnerability assessment timeline. Learn more about [image scanning](concept-agentless-containers.md#agentless-container-registry-vulnerability-assessment).
+- **Agentless discovery for Kubernetes**: Provides API-based discovery of information about Kubernetes cluster architecture, workload objects, and setup.
+
+## How can I onboard multiple subscriptions at once?
+
+To onboard multiple subscriptions at once, you can use this [script](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Powershell%20scripts/Agentless%20Container%20Posture).
++ ## Why don't I see results from my clusters?+ If you don't see results from your clusters, check the following: - Do you have stopped clusters?-- Are your clusters Read only (locked)?
+- Are your [resource groups, subscriptions, or clusters locked](#what-do-i-do-if-i-have-locked-resource-groups-subscriptions-or-clusters)?
+
+## What can I do if I have stopped clusters?
+
+We don't support or charge stopped clusters. To get the value of agentless capabilities on a stopped cluster, restart the cluster.
+
+## What do I do if I have locked resource groups, subscriptions, or clusters?
+
+We suggest that you unlock the locked resource group, subscription, or cluster, make the relevant requests manually, and then re-lock it by doing the following:
+
+1. Enable the feature flag manually via CLI:
+
+ ``` CLI
+
+    az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+
+ ```
+
+1. Perform the bind operation in the CLI:
+
+ ``` CLI
+
+ az account set -s <SubscriptionId>
+
+ az extension add --name aks-preview
+
+ az aks trustedaccess rolebinding create --resource-group <cluster resource group> --cluster-name <cluster name> --name defender-cloudposture --source-resource-id /subscriptions/<SubscriptionId>/providers/Microsoft.Security/pricings/CloudPosture/securityOperators/DefenderCSPMSecurityOperator --roles "Microsoft.Security/pricings/microsoft-defender-operator"
+
+ ```
+
+For locked clusters, you can also do one of the following:
+
+- Remove the lock.
+- Perform the bind operation manually by making an API request.
-## What do I do if I have stopped clusters?
-We suggest that you rerun the cluster to solve this issue.
+Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
-## Support for exemptions
+ ## Support for exemptions
You can customize your vulnerability assessment experience by exempting management groups, subscriptions, or specific resources from your secure score. Learn how to [create an exemption](exempt-resource.md) for a resource or subscription. ## Next Steps
- Learn how to [view and remediate vulnerability assessment findings for registry images and running images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn more about [Trusted Access](/azure/aks/trusted-access-feature).
+- Learn how to [view and remediate vulnerability assessment findings for registry images and running images](view-and-remediate-vulnerability-assessment-findings.md).
+- Learn how to [create an exemption](exempt-resource.md) for a resource or subscription.
+- Learn more about [Cloud Security Posture Management](concept-cloud-security-posture-management.md).
defender-for-cloud Multi Factor Authentication Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md
Title: Microsoft Defender for Cloud's security recommendations for MFA description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 01/24/2023 Last updated : 06/11/2023 # Manage multi-factor authentication (MFA) enforcement on your subscriptions
Defender for Cloud places a high value on MFA. The security control that contrib
The recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions: -- MFA should be enabled on accounts with owner permissions on your subscription-- MFA should be enabled on accounts with write permissions on your subscription
+- Accounts with owner permissions on Azure resources should be MFA enabled
+- Accounts with write permissions on Azure resources should be MFA enabled
+- Accounts with read permissions on Azure resources should be MFA enabled
There are three ways to enable MFA and be compliant with these recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy.
defender-for-cloud Plan Defender For Servers Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md
You can onboard the Azure Arc agent to your AWS or GCP servers automatically wit
To plan for Azure Arc deployment: 1. Review the Azure Arc [planning recommendations](../azure-arc/servers/plan-at-scale-deployment.md) and [deployment prerequisites](../azure-arc/servers/prerequisites.md).
+1. Open the [network ports for Azure Arc](support-matrix-defender-for-servers.md#network-requirements) in your firewall.
1. Azure Arc installs the Connected Machine agent to connect to and manage machines that are hosted outside of Azure. Review the following information: - The [agent components and data collected from machines](../azure-arc/servers/agent-overview.md#agent-resources).
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/07/2023 Last updated : 06/11/2023 # What's new in Microsoft Defender for Cloud?
Updates in June include:
|Date |Update | |||
-| June 7 | [Express configuration for vulnerability assessments in Defender for SQL is now Generally Available](#express-configuration-for-vulnerability-assessments-in-defender-for-sql-is-now-generally-available) |
+|June 11 | [Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud](#planning-of-cloud-migration-with-an-azure-migrate-business-case-now-includes-defender-for-cloud) |
+|June 7 | [Express configuration for vulnerability assessments in Defender for SQL is now Generally Available](#express-configuration-for-vulnerability-assessments-in-defender-for-sql-is-now-generally-available) |
|June 6 | [More scopes added to existing Azure DevOps Connectors](#more-scopes-added-to-existing-azure-devops-connectors) | |June 5 | [Onboarding directly (without Azure Arc) to Defender for Servers is now Generally Available](#onboarding-directly-without-azure-arc-to-defender-for-servers-is-now-generally-available) | |June 4 | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) |
+### Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud
+
+June 11, 2023
+
+Now you can discover potential cost savings in security by using Defender for Cloud within the context of an [Azure Migrate business case](/azure/migrate/how-to-build-a-business-case).
+ ### Express configuration for vulnerability assessments in Defender for SQL is now Generally Available June 7, 2023
defender-for-cloud Support Agentless Containers Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-agentless-containers-posture.md
Title: Support and prerequisites for agentless container posture - Microsoft Defender for Cloud description: Learn about the requirements for agentless container posture in Microsoft Defender for Cloud - Previously updated : 05/09/2023 Last updated : 06/01/2023 # Support and prerequisites for agentless containers posture All of the agentless container capabilities are available as part of the [Defender Cloud Security Posture Management](concept-cloud-security-posture-management.md) plan.
-Review the requirements on this page before setting up [agentless containers posture](concept-data-security-posture.md) in Microsoft Defender for Cloud.
+Review the requirements on this page before setting up [agentless containers posture](concept-agentless-containers.md) in Microsoft Defender for Cloud.
> [!IMPORTANT]
-> The Agentless Container Posture preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty. Agentless Container Posture previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
+> Agentless Container Posture is currently in Preview. Previews are provided "as is" and "as available" and are excluded from the service-level agreements and limited warranty.
## Availability | Aspect | Details | |||
-|Release state:|Preview|
+|Release state:|Preview |
|Pricing:|Requires [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | | Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts |
-| Permissions | You need to have access as a Subscription Owner, or, User Access Admin as well as Security Admin permissions for the Azure subscription used for onboarding |
+| Permissions | You need to have access as a:<br><br> - Subscription Owner, **or** <br> - User Access Admin and Security Admin permissions for the Azure subscription used for onboarding |
## Registries and images
Review the requirements on this page before setting up [agentless containers pos
## Prerequisites
-You need to have a Defender for CSPM plan enabled. There's no dependency on Defender for Containers​.
+You need to have a Defender CSPM plan enabled. There's no dependency on Defender for Containers​.
This feature uses trusted access. Learn more about [AKS trusted access prerequisites](/azure/aks/trusted-access-feature#prerequisites).
-### What do I do if I have Read only clusters (locked)?
+### Are you using an updated version of AKS?
-We suggest that you do one of the following:
--- Remove the lock.-- Perform the bind operation manually by doing an API request.-
-Learn more about [locked resources](/azure/azure-resource-manager/management/lock-resources?tabs=json).
+Learn more about [supported Kubernetes versions in Azure Kubernetes Service (AKS)](/azure/aks/supported-kubernetes-versions?tabs=azure-cli).
## Next steps
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
description: Review support requirements for the Defender for Servers plan in Mi
Previously updated : 01/01/2023 Last updated : 06/11/2023 # Defender for Servers support This article summarizes support information for the Defender for Servers plan in Microsoft Defender for Cloud.
-## Azure cloud support
+## Network requirements
+
+Validate that the following endpoints are configured for outbound access so that the Azure Arc extension can connect to Microsoft Defender for Cloud to send security data and events:
+
+- For Defender for Servers multicloud deployments, make sure that the [addresses and ports required by Azure Arc](../azure-arc/dat#details-on-internet-addresses-ports-encryption-and-proxy-server-support) are open.
+
+- For deployments with GCP connectors, open port 443 to these URLs:
+ - `osconfig.googleapis.com`
+ - `compute.googleapis.com`
+ - `containeranalysis.googleapis.com`
+ - `agentonboarding.defenderforservers.security.azure.com`
+ - `gbl.his.arc.azure.com`
-This table summarizes Azure cloud support for Defender for Servers features.
+- For deployments with AWS connectors, open port 443 to these URLs:
+
+ - `ssm.<region>.amazonaws.com`
+ - `ssmmessages.<region>.amazonaws.com`
+ - `ec2messages.<region>.amazonaws.com`
+ - `gbl.his.arc.azure.com`
+
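A quick pre-check of these outbound requirements can be scripted. The following is a hedged sketch in Python: the endpoint templates mirror the AWS list above, and the actual socket probe (`can_reach`) is defined but left for you to invoke, since it needs live network access:

```python
import socket

# Per-region AWS connector endpoints from the list above; the last entry
# has no region placeholder and is returned unchanged.
AWS_ENDPOINT_TEMPLATES = [
    "ssm.{region}.amazonaws.com",
    "ssmmessages.{region}.amazonaws.com",
    "ec2messages.{region}.amazonaws.com",
    "gbl.his.arc.azure.com",
]

def aws_endpoints(region: str) -> list:
    """Expand the endpoint templates for a given AWS region."""
    return [t.format(region=region) for t in AWS_ENDPOINT_TEMPLATES]

def can_reach(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(aws_endpoints("us-east-1")[0])  # ssm.us-east-1.amazonaws.com
```

To run the actual check, loop over `aws_endpoints("<your-region>")` and call `can_reach(host)` from a machine on the network you're validating.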
+## Azure cloud support
+
+This table summarizes Azure cloud support for Defender for Servers features.
**Feature/Plan** | **Azure** | **Azure Government** | **Azure China**<br/>**21Vianet**
- | | |
+ | | |
[Microsoft Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA [Compliance standards](./regulatory-compliance-dashboard.md)<br/>Compliance standards might differ depending on the cloud type.| GA | GA | GA [Microsoft Cloud Security Benchmark recommendations for OS hardening](apply-security-baseline.md) | GA | GA | GA
This table summarizes Azure cloud support for Defender for Servers features.
[Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA - ## Windows machine support The following table shows feature support for Windows machines in Azure, Azure Arc, and other clouds.
The following table shows feature support for AWS and GCP machines.
| [Network security assessment](protect-network-resources.md) | - | - | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | Γ£ö | - | -- ## Endpoint protection support The following table provides a matrix of supported endpoint protection solutions. The table indicates whether you can use Defender for Cloud to install each solution for you.
The following table provides a matrix of supported endpoint protection solutions
| Microsoft Defender for Endpoint Unified Solution<sup>[2](#footnote2)</sup> | Windows Server 2012 R2 and Windows 2016 | Via extension | | Sophos V9+ | Linux (GA) | No | - <sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software. <sup><a name="footnote2"></a>2</sup> With the MDE unified solution on Server 2012 R2, it automatically installs Microsoft Defender Antivirus in Active mode. For Windows Server 2016, Microsoft Defender Antivirus is built into the OS.
The following table provides a matrix of supported endpoint protection solutions
## Next steps Start planning your [Defender for Servers deployment](plan-defender-for-servers.md).-
defender-for-cloud View And Remediate Vulnerability Assessment Findings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/view-and-remediate-vulnerability-assessment-findings.md
Title: How-to view and remediate vulnerability assessment findings for registry images description: Learn how to view and remediate vulnerability assessment findings for registry images - Previously updated : 05/16/2023 Last updated : 05/30/2023 # View and remediate vulnerability assessment findings for registry images
The resources are grouped into tabs:
## Next Steps
- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
+ - Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).
devtest-labs Devtest Lab Use Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-use-resource-manager-template.md
description: Learn how to view, edit, save, and store ARM virtual machine (VM) t
Previously updated : 01/11/2022 Last updated : 06/09/2023 # Use ARM templates to create DevTest Labs virtual machines
dns Private Dns Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-overview.md
Previously updated : 09/27/2022 Last updated : 06/09/2023 #Customer intent: As an administrator, I want to evaluate Azure Private DNS so I can determine if I want to use it instead of my current DNS service.
The Domain Name System, or DNS, is responsible for translating (or resolving) a service name to an IP address. Azure DNS is a hosting service for domains and provides naming resolution using the Microsoft Azure infrastructure. Azure DNS not only supports internet-facing DNS domains, but it also supports private DNS zones.
-Azure Private DNS provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network architecture to best suit your organization's needs. It provides a naming resolution for virtual machines (VMs) within a virtual network and connected virtual networks. Additionally, you can configure zones names with a split-horizon view, which allows a private and a public DNS zone to share the name.
+Azure Private DNS provides a reliable and secure DNS service for your virtual networks. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. By using private DNS zones, you can use your own custom domain name instead of the Azure-provided names during deployment. Using a custom domain name helps you tailor your virtual network architecture to best suit your organization's needs. It provides name resolution for virtual machines (VMs) within a virtual network and connected virtual networks. Additionally, you can configure zone names with a split-horizon view, which allows a private and a public DNS zone to share the name.
To resolve the records of a private DNS zone from your virtual network, you must link the virtual network with the zone. Linked virtual networks have full access and can resolve all DNS records published in the private zone. You can also enable autoregistration on a virtual network link. With autoregistration enabled, the DNS records for the virtual machines in that virtual network are registered in the private zone, and Azure DNS updates the zone whenever a virtual machine is created, changes its IP address, or is deleted.
To resolve the records of a private DNS zone from your virtual network, you must
> [!NOTE] > As a best practice, do not use a *.local* domain for your private DNS zone. Not all operating systems support this.
+## Private zone resiliency
+
+When you create a private DNS zone, Azure stores the zone data as a global resource. This means that the private zone is not dependent on a single VNet or region. You can link the same private zone to multiple VNets in different regions. If service is interrupted in one VNet, your private zone is still available. For more information, see [Azure Private DNS zone resiliency](private-dns-resiliency.md).
+ ## Benefits Azure Private DNS provides the following benefits:
dns Private Dns Resiliency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-resiliency.md
+
+ Title: Azure Private DNS zone resiliency
+description: In this article, learn about resiliency in Azure Private DNS zones.
++++ Last updated : 06/09/2023+++
+# Azure Private DNS zone resiliency
+
+DNS private zones are resilient to regional outages because zone data is globally available. Resource records in a private zone are automatically replicated across regions.
+
+## Resiliency example
+
+The following figure illustrates the availability of private zone data across multiple regions.
+
+![Regional failure example showing three VNets with one red and two green](media/private-dns-resiliency/resiliency-example.png)
+
+In this example:
+- The private zone azure.contoso.com is linked to VNets in three different regions. Autoregistration is enabled in two regions.
+- A temporary outage occurs in region A.
+- Regions B and C can still query DNS names in the private zone, including names autoregistered from region A (for example, VM1).
+- Region B can add, edit, or delete records from the private DNS zone as needed.
+- Service interruption in region A doesn't affect name resolution in the other regions.
+
+The example shown here doesn't illustrate a disaster recovery scenario; however, the global nature of private zones also makes it possible to re-create VM1 in another VNet and have it assume the workload.
+
+> [!NOTE]
+> Azure Private DNS is a foundational, zone-redundant service. For more information, see [Azure services with availability zone support](/azure/reliability/availability-zones-service-support#azure-services-with-availability-zone-support).
+
+## Next steps
+- To learn more about Private DNS zones, see [Using Azure DNS for private domains](private-dns-overview.md).
+- Learn how to [create a Private DNS zone](./private-dns-getstarted-powershell.md) in Azure DNS.
+- Learn about DNS zones and records by visiting: [DNS zones and records overview](dns-zones-records.md).
+- Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
event-hubs Passwordless Migration Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/passwordless-migration-event-hubs.md
+
+ Title: Migrate applications to use passwordless authentication with Azure Event Hubs
+
+description: Learn to migrate existing applications away from Shared Key authorization with the account key to instead use Azure AD and Azure RBAC for enhanced security with Azure Event Hubs.
+Last updated : 06/12/2023
+# Migrate an application to use passwordless connections with Azure Event Hubs
++
+## Configure your local development environment
+
+Passwordless connections can be configured to work for both local and Azure-hosted environments. In this section, you'll apply configurations to allow individual users to authenticate to Azure Event Hubs for local development.
+
+### Assign user roles
++
+### Sign in to Azure locally
++
+### Update the application code to use passwordless connections
+
+For each of the following ecosystems, the Azure Identity client library provides a `DefaultAzureCredential` class that handles passwordless authentication to Azure:
+
+- [.NET](/dotnet/api/overview/azure/Identity-readme?view=azure-dotnet&preserve-view=true#defaultazurecredential)
+- [C++](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md#defaultazurecredential)
+- [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#readme-defaultazurecredential)
+- [Java](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#defaultazurecredential)
+- [Node.js](/javascript/api/overview/azure/identity-readme?view=azure-node-latest&preserve-view=true#defaultazurecredential)
+- [Python](/python/api/overview/azure/identity-readme?view=azure-python&preserve-view=true#defaultazurecredential)
+
+`DefaultAzureCredential` supports multiple authentication methods. The method to use is determined at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. See the preceding links for the order and locations in which `DefaultAzureCredential` looks for credentials.
+
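To make the runtime fallback behavior concrete, the sketch below imitates the idea of a credential chain in plain Python. This is an illustrative stand-in only: the names `first_available`, `env_credential`, and `cli_credential` are hypothetical, not part of the real `azure-identity` implementation.

```python
# Illustrative sketch: DefaultAzureCredential conceptually tries a sequence
# of credential sources and uses the first one that can authenticate.
class CredentialUnavailableError(Exception):
    pass

def first_available(sources):
    """Return (name, token) from the first source that succeeds."""
    errors = []
    for name, get_token in sources:
        try:
            return name, get_token()
        except CredentialUnavailableError as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("No credential source succeeded: " + "; ".join(errors))

# Example: environment-based auth is unavailable on a dev machine, so the
# chain falls through to the developer's CLI sign-in.
def env_credential():
    raise CredentialUnavailableError("no environment variables set")

def cli_credential():
    return "token-from-az-cli"

name, token = first_available([("environment", env_credential),
                               ("azure-cli", cli_credential)])
print(name, token)  # azure-cli token-from-az-cli
```

The same application code can therefore run unchanged locally (CLI sign-in) and in Azure (managed identity), because the selection happens at runtime rather than in your code.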
+## [.NET](#tab/dotnet)
+
+1. To use `DefaultAzureCredential` in a .NET application, install the `Azure.Identity` package:
+
+ ```dotnetcli
+ dotnet add package Azure.Identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```csharp
+ using Azure.Identity;
+ ```
+
+1. Identify the locations in your code that create an `EventHubProducerClient` or `EventProcessorClient` object to connect to Azure Event Hubs. Update your code to match the following example:
+
+ ```csharp
+ DefaultAzureCredential credential = new();
+    // The fully qualified namespace is the host name only (no https:// scheme).
+    var eventHubNamespace = "<your-namespace>.servicebus.windows.net";
+
+ // Event Hubs producer
+ EventHubProducerClient producerClient = new(
+ eventHubNamespace,
+ eventHubName,
+ credential);
+
+ // Event Hubs processor
+ EventProcessorClient processorClient = new(
+ storageClient,
+ EventHubConsumerClient.DefaultConsumerGroupName,
+ eventHubNamespace,
+ eventHubName,
+ credential);
+ ```
+
+## [Go](#tab/go)
+
+1. To use `DefaultAzureCredential` in a Go application, install the `azidentity` module:
+
+ ```bash
+ go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```go
+    import (
+        "fmt"
+
+        "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+    )
+ ```
+
+1. Identify the locations in your code that create a `ProducerClient` or `ConsumerClient` instance to connect to Azure Event Hubs. Update your code to match the following example:
+
+ ```go
+    credential, err := azidentity.NewDefaultAzureCredential(nil)
+
+    if err != nil {
+        // handle error
+    }
+
+    // The fully qualified namespace is the host name only (no https:// scheme).
+    eventHubNamespace := fmt.Sprintf("%s.servicebus.windows.net", namespace)
+
+ // Event Hubs producer
+ producerClient, err = azeventhubs.NewProducerClient(
+ eventHubNamespace,
+ eventHubName,
+ credential,
+ nil)
+
+ if err != nil {
+ // handle error
+ }
+
+ // Event Hubs processor
+ processorClient, err = azeventhubs.NewConsumerClient(
+ eventHubNamespace,
+ eventHubName,
+ azeventhubs.DefaultConsumerGroup,
+ credential,
+ nil)
+
+ if err != nil {
+ // handle error
+ }
+ ```
+
+## [Java](#tab/java)
+
+1. To use `DefaultAzureCredential` in a Java application, install the `azure-identity` package via one of the following approaches:
+ 1. [Include the BOM file](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-the-bom-file).
+ 1. [Include a direct dependency](/java/api/overview/azure/identity-readme?view=azure-java-stable&preserve-view=true#include-direct-dependency).
+
+1. At the top of your file, add the following code:
+
+ ```java
+ import com.azure.identity.DefaultAzureCredentialBuilder;
+ ```
+
+1. Identify the locations in your code that create an `EventHubProducerClient` or `EventProcessorClient` object to connect to Azure Event Hubs. Update your code to match the following example:
+
+ ```java
+ DefaultAzureCredential credential = new DefaultAzureCredentialBuilder()
+ .build();
+    // The fully qualified namespace is the host name only (no https:// scheme).
+    String eventHubNamespace = namespace + ".servicebus.windows.net";
+
+ // Event Hubs producer
+ EventHubProducerClient producerClient = new EventHubClientBuilder()
+ .credential(eventHubNamespace, eventHubName, credential)
+ .buildProducerClient();
+
+ // Event Hubs processor
+ EventProcessorClient processorClient = new EventProcessorClientBuilder()
+ .consumerGroup(consumerGroupName)
+ .credential(eventHubNamespace, eventHubName, credential)
+ .checkpointStore(new SampleCheckpointStore())
+ .processEvent(eventContext -> {
+ System.out.println(
+ "Partition ID = " +
+ eventContext.getPartitionContext().getPartitionId() +
+ " and sequence number of event = " +
+ eventContext.getEventData().getSequenceNumber());
+ })
+ .processError(errorContext -> {
+ System.out.println(
+ "Error occurred while processing events " +
+ errorContext.getThrowable().getMessage());
+ })
+ .buildEventProcessorClient();
+ ```
+
+## [Node.js](#tab/nodejs)
+
+1. To use `DefaultAzureCredential` in a Node.js application, install the `@azure/identity` package:
+
+ ```bash
+ npm install --save @azure/identity
+ ```
+
+1. At the top of your file, add the following code:
+
+    ```javascript
+ import { DefaultAzureCredential } from "@azure/identity";
+ ```
+
+1. Identify the locations in your code that create an `EventHubProducerClient` or `EventHubConsumerClient` object to connect to Azure Event Hubs. Update your code to match the following example:
+
+    ```javascript
+ const credential = new DefaultAzureCredential();
+    // The fully qualified namespace is the host name only (no https:// scheme).
+    const eventHubNamespace = `${namespace}.servicebus.windows.net`;
+
+ // Event Hubs producer
+ const producerClient = new EventHubProducerClient(
+ eventHubNamespace,
+ eventHubName,
+ credential);
+
+ // Event Hubs processor
+ const processorClient = new EventHubConsumerClient(
+ consumerGroupName,
+ eventHubNamespace,
+ eventHubName,
+ credential
+ );
+ ```
+
+## [Python](#tab/python)
+
+1. To use `DefaultAzureCredential` in a Python application, install the `azure-identity` package:
+
+ ```bash
+ pip install azure-identity
+ ```
+
+1. At the top of your file, add the following code:
+
+ ```python
+ from azure.identity import DefaultAzureCredential
+ ```
+
+1. Identify the locations in your code that create an `EventHubProducerClient` or `EventHubConsumerClient` object to connect to Azure Event Hubs. Update your code to match the following example:
+
+ ```python
+ credential = DefaultAzureCredential()
+    # The fully qualified namespace is the host name only (no https:// scheme).
+    event_hub_namespace = "%s.servicebus.windows.net" % namespace
+
+ # Event Hubs producer
+ producer_client = EventHubProducerClient(
+ fully_qualified_namespace = event_hub_namespace,
+ eventhub_name = event_hub_name,
+ credential = credential
+ )
+
+ # Event Hubs processor
+ processor_client = EventHubConsumerClient(
+ fully_qualified_namespace = event_hub_namespace,
+ eventhub_name = event_hub_name,
+ consumer_group = "$Default",
+ checkpoint_store = checkpoint_store,
+ credential = credential
+ )
+ ```
+++
+4. Make sure to update the Event Hubs namespace in your `EventHubProducerClient` or `EventProcessorClient` objects. You can find the namespace name on the overview page of the Azure portal.
+
+ :::image type="content" source="media/event-hubs-passwordless/event-hubs-namespace.png" alt-text="Screenshot showing how to find the namespace name.":::
+
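A recurring detail in the snippets above is the fully qualified namespace format: the SDK clients expect a host name of the form `<namespace>.servicebus.windows.net`, not an `https://` URL. The small helper below is hypothetical (not part of any Azure SDK) and simply normalizes a namespace value into that form:

```python
def fully_qualified_namespace(namespace: str) -> str:
    """Return the Event Hubs fully qualified namespace (host name only).

    Accepts either a bare namespace name or a value that mistakenly
    includes a scheme, and normalizes it to
    '<namespace>.servicebus.windows.net'.
    """
    host = namespace.removeprefix("https://").removeprefix("http://").rstrip("/")
    if not host.endswith(".servicebus.windows.net"):
        host += ".servicebus.windows.net"
    return host

print(fully_qualified_namespace("contoso"))
# contoso.servicebus.windows.net
print(fully_qualified_namespace("https://contoso.servicebus.windows.net/"))
# contoso.servicebus.windows.net
```

If a connection fails with a name-resolution or authorization error right after migrating, a scheme accidentally left in the namespace value is a common cause worth checking.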
+### Run the app locally
+
+After making these code changes, run your application locally. The new configuration should pick up your local credentials, such as those from the Azure CLI, Visual Studio, or IntelliJ. The roles you assigned to your user in Azure allow your app to connect to the Azure service locally.
+
+## Configure the Azure hosting environment
+
+Once your application is configured to use passwordless connections and runs locally, the same code can authenticate to Azure services after it's deployed to Azure. The sections that follow explain how to configure a deployed application to connect to Azure Event Hubs using a [managed identity](/azure/active-directory/managed-identities-azure-resources/overview). Managed identities provide an automatically managed identity in Azure Active Directory (Azure AD) for applications to use when connecting to resources that support Azure AD authentication. Learn more about managed identities:
+
+* [Passwordless Overview](/azure/developer/intro/passwordless-overview)
+* [Managed identity best practices](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations)
+
+### Create the managed identity
++
+#### Associate the managed identity with your web app
+
+You need to configure your web app to use the managed identity you created. Assign the identity to your app using either the Azure portal or the Azure CLI.
+
+# [Azure portal](#tab/azure-portal-associate)
+
+Complete the following steps in the Azure portal to associate an identity with your app. These same steps apply to the following Azure services:
+
+* Azure Spring Apps
+* Azure Container Apps
+* Azure virtual machines
+* Azure Kubernetes Service
+
+1. Navigate to the overview page of your web app.
+1. Select **Identity** from the left navigation.
+1. On the **Identity** page, switch to the **User assigned** tab.
+1. Select **+ Add** to open the **Add user assigned managed identity** flyout.
+1. Select the subscription you used previously to create the identity.
+1. Search for the **MigrationIdentity** by name and select it from the search results.
+1. Select **Add** to associate the identity with your app.
+
+ :::image type="content" source="../../articles/storage/common/media/create-user-assigned-identity-small.png" alt-text="Screenshot showing how to create a user assigned identity." lightbox="../../articles/storage/common/media/create-user-assigned-identity.png":::
+
+# [Azure CLI](#tab/azure-cli-associate)
++
+# [Service Connector](#tab/service-connector-associate)
++++
+### Assign roles to the managed identity
+
+Next, you need to grant the managed identity you created permission to access your event hub. Grant permissions by assigning a role to the managed identity, just as you did with your local development user.
+
+### [Azure portal](#tab/assign-role-azure-portal)
+
+1. Navigate to your event hub overview page and select **Access Control (IAM)** from the left navigation.
+
+1. Choose **Add role assignment**
+
+ :::image type="content" source="../../includes/passwordless/media/migration-add-role-small.png" alt-text="Screenshot showing how to add a role to a managed identity." lightbox="../../includes/passwordless/media/migration-add-role.png" :::
+
+1. In the **Role** search box, search for *Azure Event Hubs Data Sender*, a role that grants permission to send events to an event hub. You can assign whatever role is appropriate for your use case. Select *Azure Event Hubs Data Sender* from the list and choose **Next**.
+
+1. On the **Add role assignment** screen, for the **Assign access to** option, select **Managed identity**. Then choose **+Select members**.
+
+1. In the flyout, search for the managed identity you created by name and select it from the results. Choose **Select** to close the flyout menu.
+
+ :::image type="content" source="../../includes/passwordless/media/migration-select-identity-small.png" alt-text="Screenshot showing how to select the assigned managed identity." lightbox="../../includes/passwordless/media/migration-select-identity.png":::
+
+1. Select **Next** a couple of times until you're able to select **Review + assign** to finish the role assignment.
+
+1. Repeat these steps for the **Azure Event Hubs Data Receiver** role.
+
+### [Azure CLI](#tab/assign-role-azure-cli)
+
+To assign a role at the resource level using the Azure CLI, you first must retrieve the resource ID using the [az eventhubs eventhub show](/cli/azure/eventhubs/eventhub) command. You can filter the output properties using the `--query` parameter.
+
+```azurecli
+az eventhubs eventhub show \
+ --resource-group '<your-resource-group-name>' \
+ --namespace-name '<your-event-hubs-namespace>' \
+ --name '<your-event-hub-name>' \
+ --query id
+```
+
+Copy the output ID from the preceding command. You can then assign roles using the [az role assignment create](/cli/azure/role/assignment) command of the Azure CLI.
+
+```azurecli
+az role assignment create --assignee "<your-managed-identity-client-id>" \
+ --role "Azure Event Hubs Data Receiver" \
+ --scope "<your-resource-id>"
+
+az role assignment create --assignee "<your-managed-identity-client-id>" \
+ --role "Azure Event Hubs Data Sender" \
+ --scope "<your-resource-id>"
+```
+
+### [Service Connector](#tab/assign-role-service-connector)
+
+If you connected your services using Service Connector, you don't need to complete this step. The necessary role configurations were handled for you when you ran the Service Connector CLI commands.
++++
+### Test the app
+
+After deploying the updated code, navigate to your hosted application in the browser. Your app should be able to connect to the event hub successfully. Keep in mind that it may take several minutes for role assignments to propagate through your Azure environment. Your application is now configured to run both locally and in a production environment without developers having to manage secrets in the application itself.
+
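Because newly created role assignments can take a few minutes to propagate, a first request after deployment may fail with an authorization error even though the configuration is correct. One generic way to smooth this over during testing is an exponential-backoff retry. The sketch below uses a stubbed `flaky_connect` placeholder in place of a real Event Hubs call; it is an illustrative pattern, not an SDK API.

```python
import time

def retry_with_backoff(operation, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry `operation` with exponential backoff.

    `operation` is any zero-argument callable that raises on failure,
    for example a function that sends a test event to your event hub.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Stubbed operation that succeeds on the third try, imitating a role
# assignment that finishes propagating while we retry.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise PermissionError("role assignment not yet propagated")
    return "connected"

result = retry_with_backoff(flaky_connect, sleep=lambda s: None)
print(result)  # connected
```

In production code you would typically retry only on authorization errors and log each failed attempt rather than swallowing all exceptions.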
+## Next steps
+
+In this tutorial, you learned how to migrate an application to passwordless connections.
+
+You can read the following resources to explore the concepts discussed in this article in more depth:
+
+* [Passwordless connections for Azure services](/azure/developer/intro/passwordless-overview)
+* To learn more about .NET, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[CoreSite](https://www.coresite.com/solutions/cloud-services/public-cloud-providers/microsoft-azure-expressroute)** |Supported |Supported | Chicago, Chicago2, Denver, Los Angeles, New York, Silicon Valley, Silicon Valley2, Washington DC, Washington DC2 |
| **[Cox Business Cloud Port](https://www.cox.com/business/networking/cloud-connectivity.html)** |Supported |Supported | Dallas, Phoenix, Silicon Valley, Washington DC |
| **Crown Castle** |Supported |Supported | New York |
-| **[DE-CIX](https://www.de-cix.net/en/services/microsoft-azure-peering-service)** | Supported |Supported | Amsterdam2, Chennai, Chicago2, Dallas, Dubai2, Frankfurt, Frankfurt2, Kuala Lumpur, Madrid, Marseille, Mumbai, Munich, New York, Phoenix, Singapore2 |
+| **[DE-CIX](https://www.de-cix.net/en/services/directcloud/microsoft-azure)** | Supported |Supported | Amsterdam2, Chennai, Chicago2, Dallas, Dubai2, Frankfurt, Frankfurt2, Kuala Lumpur, Madrid, Marseille, Mumbai, Munich, New York, Phoenix, Singapore2 |
| **[Cirion Technologies](https://lp.ciriontechnologies.com/cloud-connect-lp-latam?c_campaign=HOTSITE&c_tactic=&c_subtactic=&utm_source=SOLUCIONES-CTA&utm_medium=Organic&utm_content=&utm_term=&utm_campaign=HOTSITE-ESP)** | Supported | Supported | Bogota, Queretaro, Rio De Janeiro |
| **[Claro](https://www.usclaro.com/enterprise-mnc/connectivity/mpls/)** |Supported |Supported | Miami |
| **Cloudflare** |Supported |Supported | Los Angeles |
external-attack-surface-management Data Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/data-connections.md
-# Required metadata
-# For more information, see https://review.learn.microsoft.com/en-us/help/platform/learn-editor-add-metadata?branch=main
-# For valid values of ms.service, ms.prod, and ms.topic, see https://review.learn.microsoft.com/en-us/help/platform/metadata-taxonomies?branch=main
- Title: Defender EASM Data Connections description: "The data connector sends Defender EASM asset data to two different platforms: Microsoft Log Analytics and Azure Data Explorer. Users need to be active customers to export Defender EASM data to either tool, and data connections are subject to the pricing model for each respective platform."
Please note that use of this data connection is subject to the pricing structure
## Configuring Data Explorer permissions
-1. Open the Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](/azure/data-explorer/create-cluster-database-portal).
-1. Select **Databases** in the Data section of the left-hand navigation menu.
-1. Select **+ Add Database** to create a database to house your Defender EASM data.
+1. First, ensure that the "EASM API" service principal has been granted the correct roles in the database where you wish to export your attack surface data. To do so, confirm that your Defender EASM resource was created in the intended tenant, because creating the resource provisions the EASM API principal.
+5. Open the Data Explorer cluster that will ingest your Defender EASM data or [create a new cluster](/azure/data-explorer/create-cluster-database-portal).
+6. Select **Databases** in the Data section of the left-hand navigation menu.
+7. Select **+ Add Database** to create a database to house your Defender EASM data.
![Screenshot of Azure Data Explorer Add database.](media/data-connections/data-connector-4.png) 1. Name your database, configure retention and cache periods, then select **Create**.
external-attack-surface-management Deploying The Defender Easm Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/deploying-the-defender-easm-azure-resource.md
Before you create a Defender EASM resource group, we recommend that you are fami
- westeurope - northeurope - switzerlandnorth
+ - canadacentral
+ - centralus
+ - norwayeast
+ - francecentral
![Screenshot of create resource group basics tab](media/QuickStart-3.png)
external-attack-surface-management Inventory Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/inventory-filters.md
Defender EASM offers a wide variety of filters to obtain results of differing le
## Operators
-Inventory filters can be used with the following operators. Some operators are not available for every filter; some operators are hidden if they are not logically applicable to the specific filter.
+Inventory filters can be used with the following operators. Some operators aren't available for every filter; some operators are hidden if they aren't logically applicable to the specific filter.
| Operator | Description | |--|:- | | `Equals` | Returns results that exactly match the search value. This filter only returns results for one value at a time. For filters that populate a drop-down list of options, only one option can be selected at a time. To select multiple values, see ΓÇ£inΓÇ¥ operator. |
-| `Not Equals` | Returns results where the field does not exactly match the search value. |
+| `Not Equals` | Returns results where the field doesn't exactly match the search value. |
| `Starts with` | Returns results where the field starts with the search value. |
-| `Does not start with` | Returns results where the field does not start with the search value. |
+| `Does not start with` | Returns results where the field doesn't start with the search value. |
| `Matches` | Returns results where a tokenized term in the field exactly matches the search value. |
-| `Does not match` | Returns results where a tokenized term in the field does not exactly matches the search value. |
+| `Does not match` | Returns results where a tokenized term in the field doesn't exactly match the search value. |
| `In` | Returns results where the field exactly matches one of the search values. For drop-down lists, multiple options can be selected. |
-| `Not In` | Returns results where the field does not exactly match any of the search values. Multiple options can be selected, and manually inputted fields exclude results that match an exact value. |
+| `Not In` | Returns results where the field doesn't exactly match any of the search values. Multiple options can be selected, and manually inputted fields exclude results that match an exact value. |
| `Starts with in` | Returns results where the field starts with one of the search values. |
-| `Does not start with in` | Returns results where the field does not start with any of the search values. |
+| `Does not start with in` | Returns results where the field doesn't start with any of the search values. |
| `Matches in` | Returns results where a tokenized term in the field exactly matches one of the search values. |
-| `Does not match in` | Returns results where a tokenized term in the field does not exactly match any of the search values. |
+| `Does not match in` | Returns results where a tokenized term in the field doesn't exactly match any of the search values. |
| `Contains` | Returns results where the field content contains the search value. |
-| `Does Not Contain` | Returns results where the field content does not contain the search value. |
+| `Does Not Contain` | Returns results where the field content doesn't contain the search value. |
| `Contains in` | Returns results where the field content contains one of the search values. |
-| `Does Not Contain In` | Returns results where a tokenized term in the field content does not contain any of the search values. |
-| `Empty` | Returns assets that do not return any value for the specified filter. |
+| `Does Not Contain In` | Returns results where a tokenized term in the field content doesn't contain any of the search values. |
+| `Empty` | Returns assets that don't return any value for the specified filter. |
| `Not Empty` | Returns all assets that return a value for the specified filter, regardless of the value. | | `Greater Than or Equal To` | Returns results that are greater than or equal to a numerical value. This includes dates. | | `Between` | Returns results within a numerical range. This includes date ranges. |
These filters apply to all kinds of assets within inventory. These filters can b
### Defined value filters The following filters provide a drop-down list of options to select. The available values are pre-defined.
- | Filter name | Description | Selectable values | Available operators |
+ | Filter name | Description | Selectable values | Available operators |
|--|-||--| | Kind | Filters by specific web property types that comprise your inventory. | ASN, Contact, Domain, Host, IP Address, IP Block, Page, SSL Cert | `Equals` `Not Equals` `In` `Not In` `Empty` `Not Empty` | | State | The state assigned to assets to distinguish their relevance to your organization and how Defender EASM monitors them. | Approved, Candidate, Dependency, Monitor only, Requires investigation | |
These filters apply to all kinds of assets within inventory. These filters can b
| Last Seen | Filters by the date that an asset was last observed by the Defender EASM detection system. | Date range via calendar dropdown | | | | Labels | Filters for labels manually applied to inventory assets. | Accepts free-form responses, but also offers a dropdown of labels available in your Defender EASM resource. | | Updated At | Filters by the date that asset data was last updated in inventory. | Date range via calendar dropdown | | |
-| Wildcard | A wildcard DNS record answers DNS requests for subdomains that have not already been defined. For example: *.contoso.com | True, False | `Equals` `Not Equals` |
+| Wildcard | A wildcard DNS record answers DNS requests for subdomains that haven't already been defined. For example: *.contoso.com | True, False | `Equals` `Not Equals` |
### Free form filters
The following filters require that the user manually enters the value with which
| External ID | An identifier provided by a third party. | Typically a numerical value. | `Equals` `Not Equals` `Starts with` `Does not start with` `Matches` `Does not match` `In` `Not In` `Starts with in` `Does not start with in` `Matches in` `Does not match in` `Contains` `Does Not Contain` `Contains In` `Does Not Contain In` `Empty` `Not Empty` |
+## Filtering for assets outside of your approved inventory
+
+1. Select **Inventory** on the left-hand navigation bar to view your inventory.
+
+2. To remove the Approved Inventory filter, select the "X" next to the **State = Approved** filter. This expands your inventory list to include assets in other states (for example, Dismissed).
+
+![Screenshot of Approved Inventory filter highlighted.](media/filters-2.png)
+
+3. Identify the assets you want to find and decide how best to surface them using the inventory filters. For example, you might review all assets in the "Candidate" state and add any assets within your organization's purview to Approved Inventory.
+
+![Screenshot of query editor showing search for candidate assets.](media/filters-3.png)
+![Screenshot of results returned when filtering for candidate assets.](media/filters-4.png)
+
+4. Alternatively, you may need to find a single specific asset that you wish to add to Approved Inventory. To discover a specific asset, apply a filter that searches for its name.
+
+![Screenshot of query editor searching for a specific named asset.](media/filters-5.png)
+![Screenshot of results returned when filtering for an asset by name.](media/filters-6.png)
+
+5. Once your inventory list contains the unapproved assets that you were searching for, you can modify the assets. For more information on updating assets, see the [Modifying inventory assets](labeling-inventory-assets.md) article.
+++ ## Next Steps [Understanding asset details](understanding-asset-details.md)
external-attack-surface-management Labeling Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/labeling-inventory-assets.md
Title: Labeling inventory assets -
-description: This article outlines how to label assets with custom text values of a user's choice for improved categorization and operationalization of their inventory data.
+ Title: Modifying inventory assets
+
+description: This article outlines how to update assets with labels (custom text values of a user's choice) for improved categorization and operationalization of their inventory data. It also dives into changing asset states and tracking updates with the Task Manager.
Last updated 3/1/2022
-# Labeling inventory assets
+# Modifying inventory assets
-Labels help you organize your attack surface and apply business context in a highly customizable way; you can apply any text label to a subset of assets to group assets and better operationalize your inventory. Customers commonly categorize assets that:
+This article outlines how to modify inventory assets. Users can change the state of an asset or apply labels to help better contextualize and operationalize inventory data. This article describes how to modify a single asset or multiple assets, and track any updates with the Task Manager.
-- have recently come under your organizationΓÇÖs ownership through a merger or acquisition -- require compliance monitoring -- are owned by a specific business unit in their organization -- are impacted by a specific vulnerability that requires mitigation -- relate to a particular brand owned by the organization -- were added to your inventory within a specific time range -- Labels are free-form text fields, so you can create a label for any use case that applies to your organization.
+## Labeling assets
+
+Labels help you organize your attack surface and apply business context in a highly customizable way. You can apply any text label to a subset of assets to group assets and better operationalize your inventory. Customers commonly categorize assets that:
+
+- Have recently come under your organizationΓÇÖs ownership through a merger or acquisition.
+- Require compliance monitoring.
+- Are owned by a specific business unit in their organization.
+- Are impacted by a specific vulnerability that requires mitigation.
+- Relate to a particular brand owned by the organization.
+- Were added to your inventory within a specific time range.
+
+Labels are free-form text fields, so you can create a label for any use case that applies to your organization.
[![Screenshot of inventory list view with filters column visible.](media/labels-1a.png)](media/labels-1a.png#lightbox)
-## Apply labels
+## Applying labels and modifying asset states
-Users can apply labels from both the inventory list and asset details pages. You can apply labels to a single asset from the asset details page, or multiple assets from the inventory list page. The following sections describe how to apply labels from the two inventory views depending on your use case.
+Users can apply labels or modify asset states from both the inventory list and asset details pages. You can make changes to a single asset from the asset details page, or multiple assets from the inventory list page. The following sections describe how to apply changes from the two inventory views depending on your use case.
### Inventory list page
-You should apply labels from the inventory list page if you want to update numerous assets at once. This process also allows you to refine your asset list based on filter parameters, helping you identify assets that should be categorized with the desired label. To apply labels from this page:
+You should modify assets from the inventory list page if you want to update numerous assets at once. This process also allows you to refine your asset list based on filter parameters, helping you identify assets that should be categorized with the desired label or state change. To modify assets from this page:
1. Select the **Inventory** page from the left-hand navigation pane of your Defender EASM resource.
-2. Apply filters that will produce your intended results. In this example, we are looking for domains expiring within 30 days that require renewal. The applied label will help you more quickly access any expiring domains, simplifying the remediation process. This is a simple use case; users can apply as many filters as needed to obtain the specific results needed. For more information on filters, see the [Inventory filters overview](inventory-filters.md) article.
+2. Apply filters to produce your intended results. In this example, we are looking for domains expiring within 30 days that require renewal. The applied label helps you more quickly access any expiring domains, simplifying the remediation process. This is a simple use case; users can apply as many filters as needed to obtain the specific results needed. For more information on filters, see the [Inventory filters overview](inventory-filters.md) article.
![Screenshot of inventory list view with 'add filter' dropdown opened, displaying the query editor.](media/labels-2.png)
-3. Once your inventory list is filtered, select the assets that you wish to modify by adding a label. You can either select all using the checkbox next to the ΓÇ£AssetΓÇ¥ table header, or individually select the assets you wish to label.
+3. Once your inventory list is filtered, select the dropdown next to the checkbox beside the "Asset" table header. This dropdown gives you the option to select all results that match your query, the results on that specific page (up to 25), or "none," which clears the selection. You can also select only specific results on the page by checking the individual boxes next to each asset.
+
+![Screenshot of inventory list view with bulk selection dropdown opened.](media/labels-14.png)
4. Select **Modify assets**.
-![Screenshot of inventory list view with assets selected and 'Modify Assets' button highlighted.](media/labels-3.png)
+5. This action opens a new "Modify Assets" pane on the right-hand side of your screen. From this screen, you can quickly change the state of the selected asset(s). For this example, we will create a new label. Select **Create a new label**.
-5. This action opens a new "Modify Assets" pane on the right-hand side of your screen. Select **Create a new label**.
-
-6. Determine the label name and display text values. The label name cannot be changed after you initially create the label, but the display text can be edited at a later time. The label name will be used to query for the label in the product interface or via API, so edits are disabled to ensure these queries work properly. To edit a label name, you need to delete the original label and create a new one.
+6. Determine the label name and display text values. The label name cannot be changed after you initially create the label, but the display text can be edited at a later time. The label name is used to query for the label in the product interface or via API, so edits are disabled to ensure these queries work properly. To edit a label name, you need to delete the original label and create a new one.
-Select a color for your new label, then select **Add**. This action will navigate you back to the "Modify Assets" screen.
+Select a color for your new label, then select **Add**. This action navigates you back to the "Modify Assets" screen.
![Screenshot of "Add label" pane that displays the configuration fields.](media/labels-4.png)
Select a color for your new label, then select **Add**. This action will navigat
![Screenshot of "Modify Asset" pane with newly created label applied.](media/labels-5.png)
-8. Allow a few moments for the labels to be applied. Once complete, the page will automatically refresh and display your asset list with the labels visible. A banner at the top of the screen will confirm that your labels have been applied.
+8. Allow a few moments for the labels to be applied. You will immediately see a notification that confirms the update is in progress. Once complete, you'll see a "completed" notification and the page automatically refreshes, displaying your asset list with the labels visible. A banner at the top of the screen confirms that your labels have been applied.
[![Screenshot of inventory list view with the selected assets now displaying the new label.](media/labels-6.png)](media/labels-6.png#lightbox)

### Asset details page
-Users can also apply labels to a single asset from the asset details page. This is ideal for situations when assets need to be thoroughly reviewed before a label is applied.
+Users can also modify a single asset from the asset details page. This is ideal for situations when assets need to be thoroughly reviewed before a label or state change is applied.
1. Select the **Inventory** page from the left-hand navigation pane of your Defender EASM resource.
-2. Select the specific asset to which you want to apply a label to open the asset details page.
+2. Select the specific asset that you want to modify to open the asset details page.
3. From this page, select **Modify asset**.
Users can also apply labels to a single asset from the asset details page. This
4. Follow steps 5-7 as listed above in the "Inventory list page" section.
-5. Once complete, the asset details page will refresh, displaying the newly applied label and a banner that indicates the asset was successfully updated.
+5. Once complete, the asset details page refreshes, displaying the newly applied label or state change and a banner that indicates the asset was successfully updated.
## Modify, remove or delete labels
This page displays all the labels within your Defender EASM inventory. Please no
2. To edit a label, select the pencil icon in the **Actions** column of the label you wish to edit. This action will open the right-hand pane that allows you to modify the name or color of a label. Once done, select **Update**.
-3. To remove a label, select the trash can icon from the **Actions** column of the label you wish to delete. A box will appear that asks you to confirm the removal of this label; select **Remove Label** to confirm.
+3. To remove a label, select the trash can icon from the **Actions** column of the label you wish to delete. A box appears that asks you to confirm the removal of this label; select **Remove Label** to confirm.
![Screenshot of "Confirm Remove" option from Labels management page.](media/labels-9a.png)
-The Labels page will automatically refresh and the label will be removed from the list, as well as removed from any assets that had the label applied. A banner will appear to confirm the removal.
+The Labels page will automatically refresh and the label will be removed from the list, as well as removed from any assets that had the label applied. A banner appears to confirm the removal.
++
+## Task manager and notifications
+
+Once a task is submitted, you immediately see a notification pop-up that confirms the update is in progress. From any page in Azure, select the notification (bell) icon to view additional information about recent tasks.
+
+![Screenshot of "Task submitted" notification immediately after submitting a task.](media/labels-12.png) ![Screenshot of opened Notifications panel displaying recent task statuses.](media/labels-13.png)
++
+The Defender EASM system can take seconds to update a handful of assets or minutes to update thousands. The Task Manager enables you to check on the status of any modification tasks in progress. This section outlines how to access the Task Manager and use it to better understand the completion of submitted updates.
+
+1. From your Defender EASM resource, select **Task Manager** on the left-hand navigation menu.
+
+![Screenshot of "Task Manager" page with appropriate section in navigation pane highlighted.](media/labels-11a.png)
+
+2. This page displays all your recent tasks and their status. Tasks are listed as "Completed", "Failed" or "In Progress" with a completion percentage and progress bar also displayed. To see more details about a specific task, select the task name. A right-hand pane opens that provides additional information.
+
+3. Select **Refresh** to see the latest status of all items in the Task Manager.
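+The Refresh step above is a manual version of a wait-then-check loop. If you drive bulk modifications programmatically, the same pattern applies. The sketch below is a generic polling helper, not part of any Defender EASM SDK; `get_status` is a hypothetical callable standing in for whatever returns a task's state ("Completed", "Failed", or "In Progress") in your environment.

```python
import time

def wait_for_task(get_status, timeout_s=300.0, poll_s=5.0):
    """Poll a task-status callable until it reaches a terminal state.

    get_status is a hypothetical callable returning one of
    "Completed", "Failed", or "In Progress" for a submitted task.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_status()
        if state in ("Completed", "Failed"):
            return state  # terminal state reached
        time.sleep(poll_s)  # task still in progress; wait before re-checking
    raise TimeoutError("task did not reach a terminal state in time")

# Example with a fake status source that completes on the third check.
states = iter(["In Progress", "In Progress", "Completed"])
print(wait_for_task(lambda: next(states), poll_s=0))  # -> Completed
```

Updating a handful of assets can finish in seconds while thousands can take minutes, so a generous timeout with a modest poll interval is a reasonable default.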
+ ## Filtering for labels
external-attack-surface-management Understanding Asset Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md
In the right-hand pane of the Asset Details page, users can access more expansiv
The Overview tab provides key additional context to ensure that significant insights are quickly identifiable when viewing the details of an asset. This section includes key discovery data for all asset types, providing insight about how Microsoft maps the asset to your known infrastructure. This section can also include dashboard widgets that visualize insights that are particularly relevant to the asset type in question.
-![Screenshot of asset details, right-hand overview pane highlighted](media/Inventory_2.png)
+![Screenshot of asset details, right-hand overview pane highlighted.](media/Inventory_2.png)
### Discovery chain
The discovery chain outlines the observed connections between a discovery seed a
In the example below, we see that the seed domain is tied to this asset through the contact email in its WhoIs record. That same contact email was used to register the IP block that includes this particular IP address asset.
-![Screenshot of discovery chain](media/Inventory_3.png)
+![Screenshot of discovery chain.](media/Inventory_3.png)
### Discovery information
The IP reputation tab displays a list of potential threats related to a given IP
Defender EASM's IP reputation data displays instances when the IP address was detected on a threat list. For instance, the recent detection in the example below shows that the IP address relates to a host known to be running a cryptocurrency miner. This data was derived from a suspicious host list supplied by CoinBlockers. Results are organized by the "last seen" date, surfacing the most relevant detections first. In this example, the IP address is present on an abnormally high number of threat feeds, indicating that the asset should be thoroughly investigated to prevent malicious activity in the future.
-![Screenshot of asset details, IP reputation tab](media/Inventory_4.png)
+![Screenshot of asset details, IP reputation tab.](media/Inventory_4.png)
### Services

The "Services" tab is available for IP address, domain and host assets. This section provides information on services observed to be running on the asset, and includes IP addresses, name and mail servers, and open ports that correspond with additional types of infrastructure (e.g. remote access services). Defender EASM's Services data is key to understanding the infrastructure powering your asset. It can also alert you of resources that are exposed on the open internet that should be protected.
-![Screenshot of asset details, services tab](media/Inventory_5.png)
+![Screenshot of asset details, services tab.](media/Inventory_5.png)
### IP Addresses
This section provides insight on any IP addresses that are running on the asset
This section provides a list of any mail servers running on the asset, indicating that the asset is capable of sending emails. In this section, Defender EASM provides the name of the mail server, the first and last seen dates, and a recency column that indicates whether the mail server was detected during our most recent scan of the asset.
-![Screenshot of asset details, mail server section of services tab](media/Inventory_7.png)
+![Screenshot of asset details, mail server section of services tab.](media/Inventory_7.png)
### Name Servers

This section displays any name servers running on the asset, providing resolution for a host. In this section, we provide the name of the name server, the first and last seen dates, and a recency column that indicates whether the name server was detected during our most recent scan of the asset.
-![Screenshot of asset details, name server section of services tab](media/Inventory_8.png)
+![Screenshot of asset details, name server section of services tab.](media/Inventory_8.png)
### Open Ports
This section lists any open ports detected on the asset. Microsoft scans around
In this section, Defender EASM provides the open port number, a description of the port, the last state it was observed in, the first and last seen dates, and a recency column that indicates whether the port was observed as open during Microsoft's most recent scan.
-![Screenshot of asset details, open ports section of services tab](media/Inventory_9.png)
+![Screenshot of asset details, open ports section of services tab.](media/Inventory_9.png)
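A rough local analogue of this observation is a plain TCP connect attempt, which tells you whether anything is listening on a port. This is an illustrative sketch only (a single connect with no banner grabbing or retries), not a representation of how Microsoft's scanning works:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection performs the full TCP handshake, then we close.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

For example, `is_port_open("example.org", 443)` checks a typical HTTPS endpoint; only probe hosts you are authorized to scan.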
### Trackers
Trackers are unique codes or values found within web pages and often are used to
In this section, Defender EASM provides the tracker type (e.g. GoogleAnalyticsID), the unique identifier value, and the first and last seen dates.
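Trackers like these are short tokens embedded in page markup, so a regular expression is enough to pull candidate IDs out of raw HTML. A minimal sketch (the pattern below covers only classic `UA-…` and GA4 `G-…` Google Analytics IDs, and the ID length ranges are assumptions):

```python
import re

def extract_analytics_ids(html: str) -> list[str]:
    """Return unique Google Analytics tracker IDs found in page markup."""
    # UA-xxxx-y (Universal Analytics) or G-XXXXXXXXXX (GA4 measurement ID).
    pattern = r"\b(UA-\d{4,10}-\d{1,4}|G-[A-Z0-9]{6,12})\b"
    return sorted(set(re.findall(pattern, html)))

page = '<script>gtag("config", "G-AB12CD34EF");</script> ga("create", "UA-12345-1");'
print(extract_analytics_ids(page))  # -> ['G-AB12CD34EF', 'UA-12345-1']
```

Comparing extracted IDs across assets is one way trackers help correlate disparate groups of infrastructure.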
-### Web components & CVEs
+### Web components
Web components are details describing the infrastructure of an asset as observed through a Microsoft scan. These components provide a high-level understanding of the technologies leveraged on the asset. Microsoft categorizes the specific components and includes version numbers when possible.
-![Screenshot of top of Web components & CVEs tab](media/Inventory_10.png)
+![Screenshot of top of Web components tab.](media/Inventory_10.png)
The Web components section provides the category, name and version of the component, as well as a list of any applicable CVEs that should be remediated. Defender EASM also provides a first and last seen date as well as a recency indicator; a checked box indicates that this infrastructure was observed during our most recent scan of the asset.
Web components are categorized based on their function. Options include:
| Network device | Cisco Router, Motorola WAP, ZyXEL Modem |
| Building control | Linear eMerge, ASI Controls Weblink, Optergy |
-Below the Web components section, users can view a list of all CVEs applicable to the list of web components. This provides a more granular view of the CVEs themselves, and the CVSS score indicating the level of risk it poses to your organization.
-![Screenshot of CVEs section of tab](media/Inventory_11.png)
+### Observations
+
+The Observations tab displays any insights from the Attack Surface Priorities dashboard that pertain to the asset. These priorities can include critical CVEs, known associations to compromised infrastructure, use of deprecated technology, infrastructure best practice violations, or compliance issues. For more information on Observations, see the [Understanding dashboards](understanding-dashboards.md) article. For each observation, Defender EASM provides the name of the observation, categorizes it by type, assigns a priority, and lists both CVSS v2 and v3 scores where applicable.
+
+![Screenshot of observations tab.](media/Inventory-15.png)
### Resources

The Resources tab provides insight on any JavaScript resources running on any page or host assets. When applicable to a host, these resources are aggregated to represent the JavaScript running on all pages on that host. This section provides an inventory of the JavaScript detected on each asset so that your organization has full visibility into these resources and can detect any changes. Defender EASM provides the resource URL and host, MD5 value, and first and last seen dates to help organizations effectively monitor the use of JavaScript resources across their inventory.
-![Screenshot of resources tab](media/Inventory_12.png)
+![Screenshot of resources tab.](media/Inventory_12.png)
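The MD5 value lets you detect when a script's content changes even if its URL doesn't. A sketch of computing the same kind of fingerprint locally, assuming the hash is taken over the raw resource bytes (the usual convention):

```python
import hashlib

def resource_md5(content: bytes) -> str:
    """Return the hex MD5 digest of a fetched JavaScript resource."""
    return hashlib.md5(content).hexdigest()

# A changed digest between scans flags a modified resource.
baseline = resource_md5(b'console.log("hello");')
current = resource_md5(b'console.log("hello, tampered");')
print(baseline != current)  # -> True
```

MD5 is fine here as a change-detection fingerprint, though it isn't suitable for security-sensitive integrity guarantees.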
### SSL certificates

Certificates are used to secure communications between a browser and a web server via Secure Sockets Layer (SSL). This ensures that sensitive data in transit cannot be read, tampered with, or forged. This section of Defender EASM lists any SSL certificates detected on the asset, including key data like the issue and expiry dates.
-![Screenshot of SSL certificates tab](media/Inventory_13.png)
+![Screenshot of SSL certificates tab.](media/Inventory_13.png)
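To act on the expiry dates surfaced here, you can compute the days remaining from a certificate's `notAfter` timestamp. A sketch assuming the `"Jun 30 12:00:00 2025 GMT"` string format, which is what Python's `ssl.SSLSocket.getpeercert()` returns for that field:

```python
import datetime

def days_until_expiry(not_after: str, now: datetime.datetime) -> int:
    """Days from `now` until a certificate's notAfter timestamp."""
    # Parse the OpenSSL-style timestamp, e.g. "Jun 30 12:00:00 2025 GMT".
    expiry = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expiry - now).days

now = datetime.datetime(2025, 6, 20, 12, 0, 0)
print(days_until_expiry("Jun 30 12:00:00 2025 GMT", now))  # -> 10
```

A small negative or near-zero result is the signal to rotate the certificate before clients start seeing trust errors.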
### WhoIs

WhoIs is a protocol that is used to query and respond to the databases that store data related to the registration and ownership of Internet resources. WhoIs contains key registration data that can apply to domains, hosts, IP addresses and IP blocks in Defender EASM. In the WhoIs data tab, Microsoft provides a robust amount of information associated with the registry of the asset.
-![Screenshot of WhoIs values tab](media/Inventory_14.png)
+![Screenshot of WhoIs values tab.](media/Inventory_14.png)
Fields include:
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
The Discovery page defaults to a list view of Discovery Groups, but users can al
### Seeds
-The seed list view displays seed values with three columns: type, source name, and discovery group. The "type" field displays the category of the seed asset; the most common seeds are domains, hosts and IP blocks, but you can also use email contacts, ASNs, certificate common names or WhoIs organizations. The source name is simply the value that was inputted in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
+The seed list view displays seed values with three columns: type, source name, and discovery group. The "type" field displays the category of the seed asset; the most common seeds are domains, hosts and IP blocks, but you can also use email contacts, ASNs, certificate common names or WhoIs organizations. The source name is simply the value that was entered in the appropriate type box when creating the discovery group. The final column shows a list of discovery groups that use the seed; each value is clickable, taking you to the details page for that discovery group.
+
+When inputting seeds, remember to validate the appropriate format for each entry. When you save the Discovery Group, the platform runs a series of validation checks and alerts you of any misconfigured seeds. For example, IP blocks should be entered by network address (that is, the start of the IP range).
:::image type="content" source="media/Discovery_11.png" alt-text="Screenshot of seeds view of discovery page.":::
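The network-address check can be reproduced locally before submitting seeds. A minimal sketch using Python's standard `ipaddress` module, whose strict mode rejects a CIDR with host bits set (that is, one not written from its network address):

```python
import ipaddress

def is_valid_ip_block_seed(value: str) -> bool:
    """True if value is a CIDR written from its network address (no host bits set)."""
    try:
        # strict=True raises ValueError when host bits are set, e.g. 10.0.0.5/24.
        ipaddress.ip_network(value, strict=True)
        return True
    except ValueError:
        return False

print(is_valid_ip_block_seed("10.0.0.0/24"))  # network address -> True
print(is_valid_ip_block_seed("10.0.0.5/24"))  # host bits set -> False
```

Pre-validating seeds this way avoids a round trip to the platform's validation step for obviously malformed entries.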
firewall Deploy Multi Public Ip Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-multi-public-ip-powershell.md
This feature enables the following scenarios:

- **DNAT** - You can translate multiple standard port instances to your backend servers. For example, if you have two public IP addresses, you can translate TCP port 3389 (RDP) for both IP addresses.
-- **SNAT** - Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. At this time, Azure Firewall randomly selects the source public IP address to use for a connection. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
+- **SNAT** - Additional ports are available for outbound SNAT connections, reducing the potential for SNAT port exhaustion. Azure Firewall uses the primary public IP address first before it uses the other associated public IP addresses. If you have any downstream filtering on your network, you need to allow all public IP addresses associated with your firewall. Consider using a [public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md) to simplify this configuration.
Azure Firewall with multiple public IP addresses is available via the Azure portal, Azure PowerShell, Azure CLI, REST, and templates. You can deploy an Azure Firewall with up to 250 public IP addresses.
frontdoor Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/managed-identity.md
Title: Use managed identities with Azure Front Door Standard/Premium (Preview)
-description: This article will show you how to set up managed identities to use with your Azure Front Door Standard or Premium profile.
-
+ Title: Use managed identities to access Azure Key Vault certificates
+
+description: This article shows you how to set up managed identities with Azure Front Door to access certificates in an Azure Key Vault.
+ Previously updated : 11/02/2022 Last updated : 05/16/2023
-# Use managed identities with Azure Front Door Standard/Premium (Preview)
+# Use managed identities to access Azure Key Vault certificates
-Azure Front Door also supports using managed identities to access Key Vault certificate. A managed identity generated by Azure Active Directory (Azure AD) allows your Azure Front Door instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages this identity, so you don't have to create or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
+A managed identity generated by Azure Active Directory (Azure AD) allows your Azure Front Door instance to easily and securely access other Azure AD-protected resources, such as Azure Key Vault. Azure manages the identity resource, so you don't have to create or rotate any secrets. For more information about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
-> [!IMPORTANT]
-> Managed identity for Azure Front Door is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Once you enable managed identity for Azure Front Door and grant proper permissions to access your Azure Key Vault, Front Door uses only the managed identity to access the certificates. If you don't **add the managed identity permission to your Key Vault**, custom certificate autorotation and the addition of new certificates fail. If you disable managed identity, Azure Front Door falls back to using the originally configured Azure Active Directory app. This solution isn't recommended and will be retired in the future.
-> [!NOTE]
-> Once you enable managed identity in Azure Front Door and grant proper permissions to access Key Vault, Azure Front Door will always use managed identity to access Key Vault for customer certificate. **Make sure you add the managed identity permission to allow access to Key Vault after enabling**. If you fail to complete this step, custom certificate autorotation and adding new certificates will fail without permissions to Key Vault. If you disable managed identity, Azure Front Door will fallback to use the original configured AAD App. This is not the recommended solution.
->
-> You can grant two types of identities to an Azure Front Door profile:
-> * A **system-assigned** identity is tied to your service and is deleted if your service is deleted. The service can have only **one** system-assigned identity.
-> * A **user-assigned** identity is a standalone Azure resource that can be assigned to your service. The service can have **multiple** user-assigned identities.
->
-> Managed identities are specific to the Azure AD tenant where your Azure subscription is hosted. They don't get updated if a subscription gets moved to a different directory. If a subscription gets moved, you'll need to recreate and configure the identities.
+You can grant two types of identities to an Azure Front Door profile:
+
+* A **system-assigned** identity is tied to your service and is deleted if your service is deleted. The service can have only **one** system-assigned identity.
+
+* A **user-assigned** identity is a standalone Azure resource that can be assigned to your service. The service can have **multiple** user-assigned identities.
+
+Managed identities are specific to the Azure AD tenant where your Azure subscription is hosted. They don't get updated if a subscription gets moved to a different directory. If a subscription gets moved, you need to recreate and reconfigure the identity.
## Prerequisites
-Before you can set up managed identities for Front Door, you must have a Front Door Standard or Premium profile. To create an Azure Front Door profile, see [create an Azure Front Door](create-front-door-portal.md).
+Before you can set up managed identity for Azure Front Door, you must have an Azure Front Door Standard or Premium profile created. To create a new Front Door profile, see [create an Azure Front Door](create-front-door-portal.md).
## Enable managed identity
-1. Go to an existing Azure Front Door Standard or Premium profile. Select **Identity (preview)** under *Settings*.
+1. Go to an existing Azure Front Door profile. Select **Identity** from under *Security* on the left side menu pane.
:::image type="content" source="./media/managed-identity/overview.png" alt-text="Screenshot of the identity button under settings for a Front Door profile.":::
-1. Select either **System assigned** or **User assigned**.
+1. Select either a **System assigned** or a **User assigned** managed identity.
- * **System assigned** - a managed identity is created for the Azure Front Door profile lifecycle and is used to access a Key Vault.
+ * **System assigned** - a managed identity is created for the Azure Front Door profile lifecycle and is used to access Azure Key Vault.
- * **User assigned** - a standalone managed identity resource used to authenticate to a Key Vault and has its own lifecycle.
+ * **User assigned** - a standalone managed identity resource is used to authenticate to Azure Key Vault and has its own lifecycle.
-### System assigned
+# [System assigned](#tab/system-assigned)
1. Toggle the *Status* to **On** and then select **Save**. :::image type="content" source="./media/managed-identity/system-assigned.png" alt-text="Screenshot of the system assigned managed identity configuration page.":::
-1. You'll be prompted with a message to confirm you would like to create a system managed identity for the Front Door profile. Select **Yes** to confirm.
+1. You're prompted with a message to confirm that you would like to create a system managed identity for your Front Door profile. Select **Yes** to confirm.
:::image type="content" source="./media/managed-identity/system-assigned-confirm.png" alt-text="Screenshot of the system assigned managed identity confirmation message.":::
-1. Once the system assigned managed identity has been created and registered with Azure AD, you can use the **Object (principal) ID** to allow Azure Front Door access to your Key Vault.
+1. Once the system assigned managed identity has been created and registered with Azure Active Directory, you can use the **Object (principal) ID** to grant Azure Front Door access to your Azure Key Vault.
:::image type="content" source="./media/managed-identity/system-assigned-created.png" alt-text="Screenshot of the system assigned managed identity registered with Azure Active Directory.":::
-### User assigned
+# [User assigned](#tab/user-assigned)
-1. You must have a user managed identity already created. For more information, see [create a user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+You must already have a user managed identity created. To create a new identity, see [create a user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-1. Select the **User assigned** tab and then select **+ Add**.
+1. In the **User assigned** tab, select **+ Add** to add a user assigned managed identity.
:::image type="content" source="./media/managed-identity/user-assigned.png" alt-text="Screenshot of the user assigned managed identity configuration page.":::
Before you can set up managed identities for Front Door, you must have a Front D
:::image type="content" source="./media/managed-identity/add-user-managed-identity.png" alt-text="Screenshot of the add user assigned managed identity page.":::
-1. You'll now see the name of the user assigned managed identity you've selected show in the Azure Front Door profile.
+1. You'll see the name of the user assigned managed identity you've selected appear in the Azure Front Door profile.
:::image type="content" source="./media/managed-identity/user-assigned-configured.png" alt-text="Screenshot of the add user assigned managed identity added to Front Door profile.":::
-## Configure Key Vault access policy
-
-1. Navigate to your Azure Key Vault.
+
- :::image type="content" source="./media/managed-identity/key-vault-list.png" alt-text="Screenshot of the Key Vault resource list.":::
+## Configure Key Vault access policy
-1. Select **Access policies** from under *Settings* and then select **+ Create**.
+1. Navigate to your Azure Key Vault. Select **Access policies** from under *Settings* and then select **+ Create**.
:::image type="content" source="./media/managed-identity/access-policies.png" alt-text="Screenshot of the access policies page for a Key Vault.":::
-1. On the **Permissions** tab of the *Create an access policy* page, select **List** and **Get** under *Secret permissions*. Then select **Next** to configure the next tab.
+1. On the **Permissions** tab of the *Create an access policy* page, select **List** and **Get** under *Secret permissions*. Then select **Next** to configure the principal tab.
:::image type="content" source="./media/managed-identity/permissions.png" alt-text="Screenshot of the permissions tab for the Key Vault access policy.":::
-1. On the *Principal* tab, paste the **object (principal) ID** if you're using a system managed identity or enter a **name** if you're using a user assigned manged identity. Then select **Next** to configure the next tab.
+1. On the *Principal* tab, paste the **object (principal) ID** if you're using a system managed identity, or enter a **name** if you're using a user assigned managed identity. Then select the **Review + create** tab. The *Application* tab is skipped since Azure Front Door is already selected for you.
:::image type="content" source="./media/managed-identity/system-principal.png" alt-text="Screenshot of the principal tab for the Key Vault access policy.":::
-1. On the *Application* tab, the application has already been selected for you. Select **Next** to go to the *Review + create* tab.
-
- :::image type="content" source="./media/managed-identity/application.png" alt-text="Screenshot of the application tab for the Key Vault access policy.":::
- 1. Review the access policy settings and then select **Create** to set up the access policy. :::image type="content" source="./media/managed-identity/create.png" alt-text="Screenshot of the review and create tab for the Key Vault access policy.":::
Before you can set up managed identities for Front Door, you must have a Front D
## Next steps
-* Learn how to [configure HTTPS on an Azure Front Door custom domain](standard-premium/how-to-configure-https-custom-domain.md).
* Learn more about [End-to-end TLS encryption](end-to-end-tls.md).
+* Learn how to [configure HTTPS on an Azure Front Door custom domain](standard-premium/how-to-configure-https-custom-domain.md).
frontdoor Migrate Tier Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier-powershell.md
+
+ Title: Migrate Azure Front Door (classic) to Standard/Premium tier with Azure PowerShell
+description: This article provides step-by-step instructions on how to migrate from an Azure Front Door (classic) profile to an Azure Front Door Standard or Premium tier profile with Azure PowerShell.
++++ Last updated : 06/05/2023+++
+# Migrate Azure Front Door (classic) to Standard/Premium tier with Azure PowerShell
+
+The Azure Front Door Standard and Premium tiers bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users using the Microsoft global network. This article guides you through the migration process to move your Azure Front Door (classic) profile to either a Standard or Premium tier profile with Azure PowerShell.
+
+## Prerequisites
+
+* Review the [About Front Door tier migration](tier-migration.md) article.
+* Ensure your Front Door (classic) profile can be migrated:
+ * Azure Front Door Standard and Premium require all custom domains to use HTTPS. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free of charge and gets managed for you.
+ * Session affinity gets enabled in the origin group settings for an Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is set at the domain level. As part of the migration, session affinity is based on the Front Door (classic) profile settings. If you have two domains in your classic profile that share the same backend pool (origin group), session affinity has to be consistent across both domains in order for migration validation to pass.
+* Latest Azure PowerShell module installed locally or Azure Cloud Shell. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
+
+> [!NOTE]
+> You don't need to make any DNS changes before or during the migration process. However, once the migration completes and traffic is flowing through your new Azure Front Door profile, you need to update your DNS records. For more information, see [Update DNS records](#update-dns-records).
+
+## Validate compatibility
+
+1. Open Azure PowerShell and connect to your Azure account. For more information, see [Connect to Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+1. Test your Azure Front Door (classic) profile to see if it's compatible for migration. You can use the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command to test your profile. Replace the values for the resource group name and resource ID with your own values. Use [Get-AzFrontDoor](/powershell/module/az.frontdoor/get-azfrontdoor) to get the resource ID for your Front Door (classic) profile.
+
+ Replace the following values in the command:
+
+ * `<subscriptionId>`: Your subscription ID.
+ * `<resourceGroupName>`: The resource group name of the Front Door (classic).
+ * `<frontdoorClassicName>`: The name of the Front Door (classic) profile.
+
+ ```powershell-interactive
+ Test-AzFrontDoorCdnProfileMigration -ResourceGroupName <resourceGroupName> -ClassicResourceReferenceId /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Network/frontdoors/<frontdoorClassicName>
+ ```
+
+ If the profile is compatible for migration, you see the following output:
+
+ ```
+ CanMigrate DefaultSku
+ ---------- ----------
+ True       Standard_AzureFrontDoor or Premium_AzureFrontDoor
+ ```
+
+ If the migration isn't compatible, you see the following output:
+
+ ```
+ CanMigrate DefaultSku
+ ---------- ----------
+ False
+ ```
+
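+The compatibility check can also be scripted end to end. The following is a sketch only: the profile and resource group names are examples, and it assumes the test result exposes the `CanMigrate` and `DefaultSku` properties shown in the sample output above.
+
+ ```powershell-interactive
+ # Sketch only; profile and resource group names are examples.
+ $classic = Get-AzFrontDoor -ResourceGroupName myAFDResourceGroup -Name myAzureFrontDoorClassic
+ $result  = Test-AzFrontDoorCdnProfileMigration -ResourceGroupName myAFDResourceGroup -ClassicResourceReferenceId $classic.Id
+ if ($result.CanMigrate) {
+     Write-Output "Profile can migrate. Default SKU: $($result.DefaultSku)"
+ } else {
+     Write-Output "Profile isn't compatible for migration yet."
+ }
+ ```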
+## Prepare for migration
+
+#### [Without WAF and BYOC (Bring your own certificate)](#tab/without-waf-byoc)
+
+Run the [Start-AzFrontDoorCdnProfilePrepareMigration](/powershell/module/az.cdn/start-azfrontdoorcdnprofilepreparemigration) command to prepare for migration. Replace the values for the resource group name, resource ID, and profile name with your own values. For *SkuName*, use either **Standard_AzureFrontDoor** or **Premium_AzureFrontDoor**, based on the output from the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command.
+
+Replace the following values in the command:
+
+* `<subscriptionId>`: Your subscription ID.
+* `<resourceGroupName>`: The resource group name of the Front Door (classic).
+* `<frontdoorClassicName>`: The name of the Front Door (classic) profile.
+
+```powershell-interactive
+Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName <resourceGroupName> -ClassicResourceReferenceId /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Network/frontdoors/<frontdoorClassicName> -ProfileName myAzureFrontDoor -SkuName Premium_AzureFrontDoor
+```
+
+The output looks similar to the following:
+
+```
+Starting the parameter validation process.
+The parameters have been successfully validated.
+Your new Front Door profile is being created. Please wait until the process has finished completely. This may take several minutes.
+
+Your new Front Door profile with the configuration has been successfully created.
+```
+
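+Rather than typing the long resource ID inline, you can assemble it from variables first. A sketch with the same placeholder values:
+
+```powershell-interactive
+# Placeholder values; substitute your own subscription ID, resource group, and profile name.
+$subId       = "<subscriptionId>"
+$rgName      = "<resourceGroupName>"
+$classicName = "<frontdoorClassicName>"
+$classicId   = "/subscriptions/$subId/resourcegroups/$rgName/providers/Microsoft.Network/frontdoors/$classicName"
+
+Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName $rgName `
+    -ClassicResourceReferenceId $classicId `
+    -ProfileName myAzureFrontDoor `
+    -SkuName Premium_AzureFrontDoor
+```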
+#### [With WAF](#tab/with-waf)
+
+1. Run the [Get-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/get-azfrontdoorwafpolicy) command to get the resource ID for your WAF policy. Replace the values for the resource group name and WAF policy name with your own values.
+
+ ```powershell-interactive
+ Get-AzFrontDoorWafPolicy -ResourceGroupName myAFDResourceGroup -Name myClassicFrontDoorWAF
+ ```
+ The output looks similar to the following:
+
+ ```
+ PolicyMode : Detection
+ PolicyEnabledState : Enabled
+ RedirectUrl :
+ CustomBlockResponseStatusCode : 403
+ CustomBlockResponseBody :
+ RequestBodyCheck : Disabled
+ CustomRules : {}
+ ManagedRules : {Microsoft.Azure.Commands.FrontDoor.Models.PSAzureManagedRule}
+ Etag :
+ ProvisioningState : Succeeded
+ Sku : Classic_AzureFrontDoor
+ Tags :
+ Id : /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myClassicFrontDoorWAF
+ Name : myFrontDoorWAF
+ Type :
+ ```
+
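+ You can capture the policy in a variable so its ID can be reused when you build the migration mapping object in the next step. A sketch; the `-replace` expression that derives the new policy name is illustrative only:
+
+ ```powershell-interactive
+ # Sketch: keep the classic WAF policy ID handy for the mapping object.
+ $classicWaf = Get-AzFrontDoorWafPolicy -ResourceGroupName myAFDResourceGroup -Name myClassicFrontDoorWAF
+ $classicWaf.Id
+
+ # Derive a resource ID for the new policy copy by swapping the policy name.
+ $newWafId = $classicWaf.Id -replace 'myClassicFrontDoorWAF$', 'myFrontDoorWAF'
+ ```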
+1. Run the [New-AzFrontDoorCdnMigrationWebApplicationFirewallMappingObject](/powershell/module/az.cdn/new-azfrontdoorcdnmigrationwebapplicationfirewallmappingobject) command to create an in-memory object for WAF policy migration. Use the WAF policy ID from the previous step for `MigratedFromId`. To use an existing WAF policy, replace the value for `MigratedToId` with the resource ID of a WAF policy that matches the Front Door tier you're migrating to. If you're creating a new WAF policy copy, you can change the name of the WAF policy in the resource ID.
+
+ ```powershell-interactive
+ $wafMapping = New-AzFrontDoorCdnMigrationWebApplicationFirewallMappingObject -MigratedFromId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myClassicFrontDoorWAF -MigratedToId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myFrontDoorWAF
+ ```
+
+1. Run the [Start-AzFrontDoorCdnProfilePrepareMigration](/powershell/module/az.cdn/start-azfrontdoorcdnprofilepreparemigration) command to prepare for migration. Replace the values for the resource group name, resource ID, and profile name with your own values. For *SkuName*, use either **Standard_AzureFrontDoor** or **Premium_AzureFrontDoor**, based on the output from the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command.
+
+ Replace the following values in the command:
+
+ * `<subscriptionId>`: Your subscription ID.
+ * `<resourceGroupName>`: The resource group name of the Front Door (classic).
+ * `<frontdoorClassicName>`: The name of the Front Door (classic) profile.
+
+ ```powershell-interactive
+ Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName <resourceGroupName> -ClassicResourceReferenceId /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Network/frontdoors/<frontdoorClassicName> -ProfileName myAzureFrontDoor -SkuName Premium_AzureFrontDoor -MigrationWebApplicationFirewallMapping $wafMapping
+ ```
+
+ The output looks similar to the following:
+
+ ```
+ Starting the parameter validation process.
+ The parameters have been successfully validated.
+ Your new Front Door profile is being created. Please wait until the process has finished completely. This may take several minutes.
+
+ Your new Front Door profile with the configuration has been successfully created.
+ ```
+
+#### [With BYOC](#tab/with-byoc)
+
+If you're migrating a Front Door profile with BYOC, you need to enable managed identity on the Front Door profile and grant it access to the key vault where the certificate is stored.
+
+Run the [Start-AzFrontDoorCdnProfilePrepareMigration](/powershell/module/az.cdn/start-azfrontdoorcdnprofilepreparemigration) command to prepare for migration. Replace the values for the resource group name, resource ID, and profile name with your own values. For *SkuName*, use either **Standard_AzureFrontDoor** or **Premium_AzureFrontDoor**, based on the output from the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command.
+
+### System assigned
+
+For *IdentityType* use **SystemAssigned**.
+
+```powershell-interactive
+Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName myAFDResourceGroup -ClassicResourceReferenceId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/Frontdoors/myAzureFrontDoorClassic -ProfileName myAzureFrontDoor -SkuName Premium_AzureFrontDoor -IdentityType SystemAssigned
+```
+
+### User assigned
+
+1. Run the [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) command to get the resource ID for a user assigned identity.
+
+ ```powershell-interactive
+ $id = Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name afduseridentity
+ $id.Id
+ ```
+
+ The output looks similar to the following:
+
+ ```
+ /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity
+ ```
+
+1. For *IdentityType*, use **UserAssigned**. For *IdentityUserAssignedIdentity*, use the resource ID from the previous step.
+
+ Replace the following values in the command:
+
+ * `<subscriptionId>`: Your subscription ID.
+ * `<resourceGroupName>`: The resource group name of the Front Door (classic).
+ * `<frontdoorClassicName>`: The name of the Front Door (classic) profile.
+
+ ```powershell-interactive
+ Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName <resourceGroupName> -ClassicResourceReferenceId /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Network/frontdoors/<frontdoorClassicName> -ProfileName myAzureFrontDoor -SkuName Premium_AzureFrontDoor -IdentityType UserAssigned -IdentityUserAssignedIdentity @{"/subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity" = @{}}
+ ```
+
+ The output looks similar to the following:
+
+ ```
+ Starting the parameter validation process.
+ The parameters have been successfully validated.
+ Your new Front Door profile is being created. Please wait until the process has finished completely. This may take several minutes.
+
+ Your new Front Door profile with the configuration has been successfully created.
+ ```
+
+#### [Multiple WAF and managed identity](#tab/multiple-waf-managed-identity)
+
+This example shows how to migrate a Front Door profile with multiple WAF policies and enable both system assigned and user assigned identity.
+
+1. Run the [Get-AzFrontDoorWafPolicy](/powershell/module/az.frontdoor/get-azfrontdoorwafpolicy) command to get the resource ID for your WAF policy. Replace the values for the resource group name and WAF policy name with your own values.
+
+ ```powershell-interactive
+ Get-AzFrontDoorWafPolicy -ResourceGroupName myAFDResourceGroup -Name myClassicFrontDoorWAF
+ ```
+ The output looks similar to the following:
+
+ ```
+ PolicyMode : Detection
+ PolicyEnabledState : Enabled
+ RedirectUrl :
+ CustomBlockResponseStatusCode : 403
+ CustomBlockResponseBody :
+ RequestBodyCheck : Disabled
+ CustomRules : {}
+ ManagedRules : {Microsoft.Azure.Commands.FrontDoor.Models.PSAzureManagedRule}
+ Etag :
+ ProvisioningState : Succeeded
+ Sku : Classic_AzureFrontDoor
+ Tags :
+ Id : /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myClassicFrontDoorWAF
+ Name : myFrontDoorWAF
+ Type :
+ ```
+
+1. Run the [New-AzFrontDoorCdnMigrationWebApplicationFirewallMappingObject](/powershell/module/az.cdn/new-azfrontdoorcdnmigrationwebapplicationfirewallmappingobject) command to create an in-memory object for WAF policy migration. Use the WAF policy ID from the previous step for `MigratedFromId`. To use an existing WAF policy, replace the value for `MigratedToId` with the resource ID of a WAF policy that matches the Front Door tier you're migrating to. If you're creating a new WAF policy copy, you can change the name of the WAF policy in the resource ID.
+
+ ```powershell-interactive
+ $wafMapping1 = New-AzFrontDoorCdnMigrationWebApplicationFirewallMappingObject -MigratedFromId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myClassicFrontDoorWAF1 -MigratedToId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myFrontDoorWAF1
+
+ $wafMapping2 = New-AzFrontDoorCdnMigrationWebApplicationFirewallMappingObject -MigratedFromId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myClassicFrontDoorWAF2 -MigratedToId /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.Network/frontdoorwebapplicationfirewallpolicies/myFrontDoorWAF2
+ ```
+
+1. Specify both managed identity types in a variable.
+
+ ```powershell-interactive
+ $identityType = "SystemAssigned, UserAssigned"
+ ```
+
+1. Run the [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) command to get the resource ID for each user assigned identity.
+
+ ```powershell-interactive
+ $id1 = Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name afduseridentity1
+ $id1.Id
+ $id2 = Get-AzUserAssignedIdentity -ResourceGroupName myResourceGroup -Name afduseridentity2
+ $id2.Id
+ ```
+
+ The output looks similar to the following:
+
+ ```
+ /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity1
+ /subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourcegroups/myAFDResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity2
+ ```
+
+1. Specify the user assigned identity resource ID in a variable.
+
+ ```powershell-interactive
+ $userInfo = @{
+     "/subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity1" = @{}
+     "/subscriptions/abcdef12-3456-7890-abcd-ef1234567890/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/afduseridentity2" = @{}
+ }
+ ```
+
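+ The same hashtable can be built from the identity objects retrieved earlier, which avoids hand-copying resource IDs. A sketch, assuming `$id1` and `$id2` from the previous step:
+
+ ```powershell-interactive
+ # Sketch: keys are the identity resource IDs; the values are empty hashtables.
+ $userInfo = @{
+     $id1.Id = @{}
+     $id2.Id = @{}
+ }
+ ```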
+1. Run the [Start-AzFrontDoorCdnProfilePrepareMigration](/powershell/module/az.cdn/start-azfrontdoorcdnprofilepreparemigration) command to prepare for migration. Replace the values for the resource group name, resource ID, and profile name with your own values. For *SkuName*, use either **Standard_AzureFrontDoor** or **Premium_AzureFrontDoor**, based on the output from the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command. The *MigrationWebApplicationFirewallMapping* parameter takes an array of WAF policy migration objects. The *IdentityType* parameter takes a comma-separated list of identity types. The *IdentityUserAssignedIdentity* parameter takes a hash table of user assigned identity resource IDs.
+
+ Replace the following values in the command:
+
+ * `<subscriptionId>`: Your subscription ID.
+ * `<resourceGroupName>`: The resource group name of the Front Door (classic).
+ * `<frontdoorClassicName>`: The name of the Front Door (classic) profile.
+
+ ```powershell-interactive
+ Start-AzFrontDoorCdnProfilePrepareMigration -ResourceGroupName <resourceGroupName> -ClassicResourceReferenceId /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.Network/frontdoors/<frontdoorClassicName> -ProfileName myAzureFrontDoor -SkuName Premium_AzureFrontDoor -MigrationWebApplicationFirewallMapping @($wafMapping1, $wafMapping2) -IdentityType $identityType -IdentityUserAssignedIdentity $userInfo
+ ```
+
+ The output looks similar to the following:
+
+ ```
+ Starting the parameter validation process.
+ The parameters have been successfully validated.
+ Your new Front Door profile is being created. Please wait until the process has finished completely. This may take several minutes.
+
+ Your new Front Door profile with the configuration has been successfully created.
+ ```
+
+## Migrate
+
+#### [Migrate profile](#tab/migrate-profile)
+
+Run the [Enable-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/enable-azfrontdoorcdnprofilemigration) command to migrate your Front Door (classic).
+
+```powershell-interactive
+Enable-AzFrontDoorCdnProfileMigration -ProfileName myAzureFrontDoor -ResourceGroupName myAFDResourceGroup
+```
+
+The output looks similar to the following:
+
+```
+Start to migrate.
+This process will disable your Front Door (classic) profile and move all your traffic and configurations to the new Front Door profile.
+Migrate succeeded.
+```
+
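+After the migration completes, you can confirm the new Standard or Premium profile from PowerShell. A sketch using the Az.Cdn module; the exact properties returned may vary by module version:
+
+```powershell-interactive
+# Sketch: verify the migrated profile and check its SKU.
+Get-AzFrontDoorCdnProfile -ResourceGroupName myAFDResourceGroup -Name myAzureFrontDoor |
+    Format-List Name, SkuName, ProvisioningState
+```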
+#### [Abort migration](#tab/abort-migration)
+
+Run the [Stop-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/stop-azfrontdoorcdnprofilemigration) command to abort the migration process.
+
+```powershell-interactive
+Stop-AzFrontDoorCdnProfileMigration -ProfileName myAzureFrontDoor -ResourceGroupName myAFDResourceGroup
+```
+
+The output looks similar to the following:
+
+```
+Start to abort the migration.
+Your new Front Door Profile will be deleted and your existing profile will remain active. WAF policies will not be deleted.
+Please wait until the process has finished completely. This may take several minutes.
+Abort succeeded.
+```
+
+## Update DNS records
+
+Your old Azure Front Door (classic) instance uses a different fully qualified domain name (FQDN) than Azure Front Door Standard and Premium. For example, an Azure Front Door (classic) endpoint might be `contoso.azurefd.net`, while the Azure Front Door Standard or Premium endpoint might be `contoso-mdjf2jfgjf82mnzx.z01.azurefd.net`. For more information about Azure Front Door Standard and Premium endpoints, see [Endpoints in Azure Front Door](endpoint.md).
+
+You don't need to update your DNS records before or during the migration process. Azure Front Door automatically sends traffic that it receives on the Azure Front Door (classic) endpoint to your Azure Front Door Standard or Premium profile without you making any configuration changes.
+
+However, once your migration is finished, we strongly recommend that you update your DNS records to direct traffic to the new Azure Front Door Standard or Premium endpoint. Changing your DNS records helps to ensure that your profile continues to work in the future. The change in DNS record doesn't cause any downtime. You don't need to plan ahead for this update to happen, and can schedule it at your convenience.
+
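+If your domain is hosted in Azure DNS, the CNAME swap can be scripted with the Az.Dns module. The zone, record, and endpoint names below are hypothetical:
+
+```powershell-interactive
+# Hypothetical zone and record names; point the existing CNAME at the new endpoint.
+$rs = Get-AzDnsRecordSet -ResourceGroupName myDnsResourceGroup -ZoneName contoso.com -Name www -RecordType CNAME
+$rs.Records[0].Cname = "contoso-mdjf2jfgjf82mnzx.z01.azurefd.net"
+Set-AzDnsRecordSet -RecordSet $rs
+```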
+## Next steps
+
+* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
+* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
Title: Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+ Title: Migrate Azure Front Door (classic) to Standard/Premium tier
description: This article provides step-by-step instructions on how to migrate from an Azure Front Door (classic) profile to an Azure Front Door Standard or Premium tier profile. Previously updated : 02/22/2023 Last updated : 05/24/2023
-# Migrate Azure Front Door (classic) to Standard/Premium tier using the Azure portal (Preview)
+# Migrate Azure Front Door (classic) to Standard/Premium tier
-Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users with the Microsoft global network. This article will guide you through the migration process to migrate your Front Door (classic) profile to either a Standard or Premium tier profile to begin using these latest features.
-
-> [!IMPORTANT]
-> Migration capability for Azure Front Door is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+Azure Front Door Standard and Premium tier bring the latest cloud delivery network features to Azure. With enhanced security features and an all-in-one service, your application content is secured and closer to your end users using the Microsoft global network. This article will guide you through the migration process to move your Azure Front Door (classic) profile to either a Standard or Premium tier profile.
## Prerequisites

* Review the [About Front Door tier migration](tier-migration.md) article.
* Ensure your Front Door (classic) profile can be migrated:
- * HTTPS is required for all custom domains. Azure Front Door Standard and Premium enforce HTTPS on all domains. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free and managed for you.
- * If you use BYOC (Bring your own certificate) for Azure Front Door (classic), you'll need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
- * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
- * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault.
- * Session affinity gets enabled from the origin group settings in the Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is managed at the domain level. As part of the migration, session affinity is based on the Classic profile's configuration. If you have two domains in the Classic profile that shares the same backend pool (origin group), session affinity has to be consistent across both domains in order for migration to be compatible.
+ * Azure Front Door Standard and Premium require all custom domains to use HTTPS. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free of charge and gets managed for you.
+ * Session affinity gets enabled in the origin group settings for an Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is set at the domain level. As part of the migration, session affinity is based on the Front Door (classic) profile settings. If you have two domains in your classic profile that share the same backend pool (origin group), session affinity has to be consistent across both domains in order for migration validation to pass.
> [!NOTE]
-> You don't need to make any DNS changes before or during the migration process.
->
-> However, when the migration is completed and traffic flows through your new Azure Front Door Standard or Premium profile, you should update your DNS records. For more information, see [Update DNS records](#update-dns-records).
+> You don't need to make any DNS changes before or during the migration process. However, once the migration completes and traffic is flowing through your new Azure Front Door profile, you need to update your DNS records. For more information, see [Update DNS records](#update-dns-records).
## Validate compatibility
-1. Go to the Azure Front Door (classic) resource and select **Migration** from under *Settings*.
+1. Go to your Azure Front Door (classic) resource and select **Migration** from under *Settings*.
:::image type="content" source="./media/migrate-tier/overview.png" alt-text="Screenshot of the migration button for a Front Door (classic) profile.":::
-1. Select **Validate** to see if your Front Door (classic) profile is compatible for migration. This check can take up to two minutes depending on the complexity of your Front Door profile.
+1. Select **Validate** to see if your Azure Front Door (classic) profile is compatible for migration. Validation can take up to two minutes depending on the complexity of your Front Door profile.
- :::image type="content" source="./media/migrate-tier/validate.png" alt-text="Screenshot of the validate compatibility button from the migration page.":::
+ :::image type="content" source="./media/migrate-tier/validate.png" alt-text="Screenshot of the validate compatibility section of the migration page.":::
-1. If the migration isn't compatible, you can select **View errors to see a list of errors, and recommendation to resolve them.
+ If the migration isn't compatible, you can select **View errors** to see the list of errors, and recommendations to resolve them.
- :::image type="content" source="./media/migrate-tier/validation-failed.png" alt-text="Screenshot of the Front Door validate migration with errors.":::
+ :::image type="content" source="./media/migrate-tier/validation-failed.png" alt-text="Screenshot of the Front Door (classic) profile failing validation phase.":::
-1. Once the migration tool has validated that your Front Door profile is compatible to migrate, you can move onto preparing for migration.
+1. Once your Azure Front Door (classic) profile passes validation and is compatible for migration, you can move on to the prepare phase.
- :::image type="content" source="./media/migrate-tier/validation-passed.png" alt-text="Screenshot of the Front Door migration passing validation.":::
+ :::image type="content" source="./media/migrate-tier/validation-passed.png" alt-text="Screenshot of the Front Door (classic) profile passing validation for migration.":::
## Prepare for migration
-1. A default name for the new Front Door profile has been provided for you. You can change this name before proceeding to the next step.
+1. A default name for the new Front Door profile is provided for you. You can change the profile name before proceeding to the next step.
- :::image type="content" source="./media/migrate-tier/prepare-name.png" alt-text="Screenshot of the prepared name for Front Door migration.":::
+ :::image type="content" source="./media/migrate-tier/prepare-name.png" alt-text="Screenshot of the name field in the prepare phase for the new Front Door profile.":::
-1. A Front Door tier is automatically selected for you based on the Front Door (classic) WAF policy settings.
+1. The Front Door tier gets automatically selected for you based on the Front Door (classic) WAF policy settings.
:::image type="content" source="./media/migrate-tier/prepare-tier.png" alt-text="Screenshot of the selected tier for the new Front Door profile.":::
- * A Standard tier gets selected if you *only have custom WAF rules* associated to the Front Door (classic) profile. You may choose to upgrade to a Premium tier.
- * A Premium tier gets selected if you *have managed WAF rules* associated to the Classic profile. To use Standard tier, the managed WAF rules must first be removed from the Classic profile.
+ * **Standard** - If you *only have custom WAF rules* associated to the Front Door (classic) profile. You may choose to upgrade to a Premium tier.
+ * **Premium** - If you *have managed WAF rules* associated to the Front Door (classic) profile. To use Standard tier, the managed WAF rules must be removed from the Front Door (classic) profile.
-1. Select **Configure WAF policy upgrades** to configure the WAF policies to be upgraded. Select the action you would like to happen for each WAF policy. You can either copy the old WAF policy to the new WAF policy or select and existing WAF policy that matches the Front Door tier. If you chose to copy the WAF policy, each WAF policy will be given a default WAF policy name that you can change. Select **Apply** once you finish making changes to the WAF policy configuration.
+1. Select **Configure WAF policy upgrades** to configure whether you want to upgrade your current WAF policies or to use an existing compatible WAF policy.
:::image type="content" source="./media/migrate-tier/prepare-waf.png" alt-text="Screenshot of the configure WAF policy link during Front Door migration preparation."::: > [!NOTE]
- > The **Configure WAF policy upgrades** link only appears if you have WAF policies associated to the Front Door (classic) profile.
+ > The **Configure WAF policy upgrades** link will only appear if you have WAF policies associated to the Front Door (classic) profile.
- For each WAF policy associated to the Front Door (classic) profile select an action. You can make copy of the WAF policy that matches the tier you're migrating the Front Door profile to or you can use an existing WAF policy that matches the tier. You may also update the WAF policy names from the default names assigned. Select **Apply** to save the WAF settings.
+ For each WAF policy associated to the Front Door (classic) profile, select an action. You can make a copy of the WAF policy that matches the tier you're migrating the Front Door profile to, or you can use an existing compatible WAF policy. You may also change the WAF policy name from the default provided name. Once completed, select **Apply** to save your Front Door WAF settings.
- :::image type="content" source="./media/migrate-tier/waf-policy.png" alt-text="Screenshot of the upgrade wAF policy screen.":::
+ :::image type="content" source="./media/migrate-tier/waf-policy.png" alt-text="Screenshot of the upgrade WAF policy screen.":::
-1. Select **Prepare**, and then select **Yes** to confirm you would like to proceed with the migration process. Once confirmed, you won't be able to make any further changes to the Front Door (classic) settings.
+1. Select **Prepare**, and when prompted, select **Yes** to confirm that you would like to proceed with the migration process. Once confirmed, you won't be able to make any further changes to the Front Door (classic) profile.
- :::image type="content" source="./media/migrate-tier/prepare-confirmation.png" alt-text="Screenshot the prepare button and confirmation to proceed with Front Door migration.":::
+ :::image type="content" source="./media/migrate-tier/prepare-confirmation.png" alt-text="Screenshot of the prepare button and confirmation message to proceed with the migration.":::
-1. Select the link that appears to view the configuration of the new Front Door profile. At this time, review each of the settings for the new profile to ensure all settings are correct. Once you're done reviewing the read-only profile, select the **X** in the top right corner of the page to go back to the migration screen.
+1. Select the link that appears to view the configuration of the new Front Door profile. At this time, you can review each of the settings for the new profile to ensure all settings are correct. Once you're done reviewing the read-only profile, select the **X** in the top right corner of the page to go back to the migration screen.
:::image type="content" source="./media/migrate-tier/verify-new-profile.png" alt-text="Screenshot of the link to view the new read-only Front Door profile.":::
-> [!NOTE]
-> If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](migrate-tier.md#migrate) step.
- ## Enable managed identities
-You're using your own certificate and will need to enable managed identity so Azure Front Door can access the certificate in your Key Vault.
+> [!NOTE]
+> If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](#migrate) phase.
+
+If you're using your own certificate, you'll need to enable managed identity so Azure Front Door can access the certificate in your Azure Key Vault. Managed identity is a feature of Azure Active Directory that allows you to securely connect to other Azure services without having to manage credentials. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
-1. Select **Enable** and then select either **System assigned** or **User assigned** depending on the type of managed identities you want to use. For more information, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+1. Select **Enable** and then select either **System assigned** or **User assigned** depending on the type of managed identities you want to use.
:::image type="content" source="./media/migrate-tier/enable-managed-identity.png" alt-text="Screenshot of the enable manage identity button for Front Door migration.":::
You're using your own certificate and will need to enable managed identity so Az
* *User assigned* - To create a user-assigned managed identity, see [Create a user-assigned identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). If you already have a user-assigned managed identity, select the identity, and then select **Add**.
-1. Select the **X** to return to the migration page. You'll then see that you've successfully enabled managed identities.
+1. Select the **X** in the top right corner to return to the migration page. You'll then see that you've successfully enabled managed identities.
:::image type="content" source="./media/migrate-tier/enable-managed-identity-successful.png" alt-text="Screenshot of managed identity getting enabled."::: ## Grant managed identity to Key Vault
-Select **Grant** to add managed identities from the last section to all the Key Vaults used in the Front Door (classic) profile.
+Select **Grant** to add the managed identity to all Azure Key Vaults used with the Front Door (classic) profile.
:::image type="content" source="./media/migrate-tier/grant-access.png" alt-text="Screenshot of granting managed identity access to Key Vault."::: ## Migrate
-1. Select **Migrate** to initiate the migration process. When prompted, select **Yes** to confirm you want to move forward with the migration. Once the migration is completed, you can select the banner at the top to go to the new Front Door profile.
+1. Select **Migrate** to initiate the migration process. When prompted, select **Yes** to confirm you want to move forward with the migration. The migration may take a few minutes depending on the complexity of your Front Door (classic) profile.
:::image type="content" source="./media/migrate-tier/migrate.png" alt-text="Screenshot of migrate and confirmation button for Front Door migration."::: > [!NOTE]
- > If you cancel the migration, only the new Front Door profile will get deleted. Any new WAF policy copies will need to be manually deleted.
-
- > [!WARNING]
- > Deleting the new profile will delete the production configuration once the **Migrate** step is initiated, which is an irreversible change.
+ > If you cancel the migration, only the new Front Door profile gets deleted. Any new WAF policy copies will need to be manually deleted.
-
-1. Once the migration completes, you can select the banner the top of the page or the link from the successful message to go to the new Front Door profile.
+1. Once migration completes, you can select the banner at the top of the page or the link in the success message to go to the new Front Door profile.
:::image type="content" source="./media/migrate-tier/successful-migration.png" alt-text="Screenshot of a successful Front Door migration.":::
-1. The Front Door (classic) profile is now in a **Disabled** state and can be deleted from your subscription.
+1. The Front Door (classic) profile is now **Disabled** and can be deleted from your subscription.
+
+ :::image type="content" source="./media/migrate-tier/classic-profile.png" alt-text="Screenshot of the overview page of a Front Door (classic) in disabled state.":::
- :::image type="content" source="./media/migrate-tier/classic-profile.png" alt-text="Screenshot of the overview page of a Front Door (classic) in a disabled state.":::
+> [!WARNING]
+> Once migration has completed, deleting the new profile deletes the production environment, which is an irreversible change.
## Update DNS records
-Your old Azure Front Door (classic) instance uses a different fully qualified domain name (FQDN) than Azure Front Door Standard and Premium. For example, an Azure Front Door (classic) endpoint might be `contoso.azurefd.net`, while the Azure Front Door Standard or Premium endpoint might be `contoso-mdjf2jfgjf82mnzx.z01.azurefd.net`. For more information about Azure Front Door Standard and Premium endpoints, see [Endpoints in Azure Front Door](./endpoint.md).
+Your old Azure Front Door (classic) instance uses a different fully qualified domain name (FQDN) than Azure Front Door Standard and Premium. For example, an Azure Front Door (classic) endpoint might be `contoso.azurefd.net`, while the Azure Front Door Standard or Premium endpoint might be `contoso-mdjf2jfgjf82mnzx.z01.azurefd.net`. For more information about Azure Front Door Standard and Premium endpoints, see [Endpoints in Azure Front Door](endpoint.md).
You don't need to update your DNS records before or during the migration process. Azure Front Door automatically sends traffic that it receives on the Azure Front Door (classic) endpoint to your Azure Front Door Standard or Premium profile without you making any configuration changes.
-However, when your migration is finished, we strongly recommend that you update your DNS records to direct traffic to the new Azure Front Door Standard or Premium endpoint. Changing your DNS records helps to ensure that your profile continues to work in the future. The change in DNS record doesn't cause any downtime. You don't need to plan this update to happen at any specific time, and you can schedule it at your convenience.
+However, once your migration is finished, we strongly recommend that you update your DNS records to direct traffic to the new Azure Front Door Standard or Premium endpoint. Changing your DNS records helps to ensure that your profile continues to work in the future. The change in DNS record won't cause any downtime. You don't need to plan ahead for this update to happen, and can schedule it at your convenience.
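As an illustrative sketch only (the zone, resource group, and endpoint names are hypothetical, and this assumes your zone is hosted in Azure DNS), the CNAME update described above could be made with the Azure CLI:

```shell
# Repoint www.contoso.com from the classic endpoint to the new
# Standard/Premium endpoint. All names below are example values.
az network dns record-set cname set-record \
  --resource-group myResourceGroup \
  --zone-name contoso.com \
  --record-set-name www \
  --cname contoso-mdjf2jfgjf82mnzx.z01.azurefd.net
```

If your DNS zone is hosted with another provider, make the equivalent CNAME change in that provider's tooling; the old record keeps working until you switch.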
## Next steps
frontdoor Tier Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-mapping.md
Title: Azure Front Door profile mapping between Classic and Standard/Premium tier
-description: This article explains the differences and settings mapping between an Azure Front Door (classic) and Standard/Premium profile.
+ Title: Settings mapping between Azure Front Door (classic) and Standard/Premium tier
+description: This article explains the differences between settings mapped between an Azure Front Door (classic) and Azure Front Door Standard or Premium profile.
Previously updated : 11/03/2022 Last updated : 05/24/2023
-# Mapping between Azure Front Door (classic) and Standard/Premium tier
+# Settings mapped between Azure Front Door (classic) and Standard/Premium tier
-As you migrate from Azure Front Door (classic) to Front Door Standard or Premium, you'll notice some configurations have been changed, or moved to a new location to provide a better experience when managing the Front Door profile. In this article you'll learn how routing rules, cache duration, rules engine configuration, WAF policy and custom domains gets mapped to new Front Door tiers.
+When you migrate your Azure Front Door (classic) profile to Azure Front Door Standard or Premium, you'll notice some configurations have been either changed or relocated to provide a better experience when managing your Front Door profile. In this article, you'll learn how routing rules, cache duration, rules engine configuration, WAF policy, and custom domains are mapped in the new Front Door tier.
## Routing rules
-| Front Door (classic) settings | Mapping in Standard and Premium |
+| Front Door (classic) settings | Mapping in Front Door Standard and Premium |
|--|--|
-| Route status - Enable/disable | Same as Front Door (classic) profile. |
+| Route status - enable/disable | Changes to **Enable route** with checkbox. Location remains the same. |
| Accepted protocol | Copied from Front Door (classic) profile. |
-| Frontend/domains | Copied from Front Door (classic) profile. |
+| Frontend/domains | Changes to **Domains**. Copied from Front Door (classic) profile. |
| Patterns to match | Copied from Front Door (classic) profile. |
-| Rules engine configuration | Rules engine changes to Rule Set and will retain route association from Front Door (classic) profile. |
-| Route type: Forwarding | Backend pool changes to Origin group. Forwarding protocol is copied from Front Door (classic) profile. </br> - If URL rewrite is set to `disabled`, the origin path in Standard and Premium profile is set to empty. </br> - If URL rewrite is set to `enabled`, the origin path is copied from *Custom forwarding path* of the Front Door (classic) profile. |
-| Route type: Redirect | URL redirect rule gets created in Rule set. The Rule set name is called *URLRedirectMigratedRuleSet2*. |
+| Rules engine configuration | The rules engine configuration name changes to rule set but will retain its association to routes from the Front Door (classic) profile. |
+| Route type: *Forwarding* | Backend pool changes to origin group. Forwarding protocol is copied from the Front Door (classic) profile. </br> - If URL rewrite is set to **disabled**, the origin path in Standard or Premium profile is *blank*. </br> - If URL rewrite is set to **enabled**, the *Custom forwarding path* of the Front Door (classic) profile is set as the *origin path*. |
+| Route type: *Redirect* | A URL redirect rule set gets created called *URLRedirectMigratedRuleSet1* with a URL redirect rule. |
## Cache duration
-In Azure Front Door (classic), the *Minimum cache duration* is located in the routing settings and the *Use default cache duration* is located in the Rules engine. Azure Front Door Standard and Premium tier only support caching in a Rule set.
+In Azure Front Door (classic), the *Minimum cache duration* is configured in the routing rules settings and the *Use default cache duration* is set in the Rules engine configuration. Azure Front Door Standard and Premium only support changing the caching duration in a Rule set rule.
-| Front Door (classic) | Front Door Standard and Premium |
+| Front Door (classic) | Mapping in Front Door Standard and Premium |
|--|--|
-| When caching is *disabled* and the default caching is used. | Caching is *disabled*. |
-| When caching is *enabled* and the default caching duration is used. | Caching is *enabled*, the origin caching behavior is honored. |
-| Caching is *enabled*. | Caching is *enabled*. |
-| When use default cache duration is set to *No*, the input cache duration is used. | Cache behavior is set to override always and the input cache duration is used. |
-| N/A | Caching is *enabled*, the caching behavior is set to override if origin is missing, and the input cache duration is used. |
+| Caching is **disabled** and default caching is used. | Caching is set to **disabled**. |
+| Caching is **enabled** and the default caching duration is used. | Caching is set to **enabled**, the origin caching behavior is honored. |
+| Caching is **enabled** and minimum caching duration is set. | Caching is set to **enabled** and the cache behavior is set to **override always** with the minimum cache duration from Front Door (classic). |
+| N/A | Caching is set to **enabled**. The caching behavior is set to override if the origin is missing, and the input cache duration gets used. |
-## Route configuration override in Rule engine actions
+## Route configuration override in rules engine configuration
-The route configuration override in Front Door (classic) is split into three different actions in rules engine for Standard and Premium profile. Those three actions are URL Redirect, URL Rewrite and Route Configuration Override.
+The route configuration override in a rules engine configuration action for Front Door (classic) is split into three different actions in a Rule set rule for Azure Front Door Standard and Premium. Those three actions are URL redirect, URL rewrite and route configuration override.
-| Actions | Mapping in Standard and Premium |
+| Actions in rules engine configuration | Mapping in Front Door Standard and Premium |
|--|--|
-| Route type set to forward | 1. Forward with URL rewrites disabled. All configurations are copied to the Standard or Premium profile.</br>2. Forward with URL rewrites enabled. There will be two rule actions, one for URL rewrite and one for the route configuration override in the Standard or Premium profile.</br> For URL rewrites - </br>- Custom forwarding path in Classic profile is the same as source pattern in Standard or Premium profile.</br>- Destination from Classic profile is copied over to Standard or Premium profile. |
-| Route type set to redirect | Mapping is 1:1 in the Standard or Premium profile. |
-| Route configuration override | 1. Backend pool is 1:1 mapping for origin group in Standard or Premium profile.</br>2. Caching</br>- Enabling and disabling caching is 1:1 mapping in the Standard or Premium profile.</br>- Query string is 1:1 mapping in Standard or Premium profile.</br>3. Dynamic compression is 1:1 mapping in the Standard or Premium profile.
-| Use default cache duration | Same as mentioned in the [Cache duration](#cache-duration) section. |
+| Route type set to **Forward** | 1. If URL rewrite is **disabled**, all settings are copied over to the Standard or Premium profile.</br>2. If URL rewrite is **enabled**, two rule actions will be created. One for URL rewrite and one for the route configuration override setting. For the URL rewrite action, the *custom forwarding path* in Front Door (classic) profile is set to the **destination**. |
+| Route type set to **Redirect** | URL redirect action settings are copied over. |
+| Route configuration override | Backend pool is mapped to an origin group. Enabling caching remains the same. Query string is mapped to query string caching behavior, and dynamic compression is mapped to compression. |
+| Use default cache duration | For more information, see the [cache duration](#cache-duration) section. |
## Other configurations
-| Front Door (classic) configuration | Mapping in Standard and Premium |
+| Front Door (classic) configuration | Mapping in Front Door Standard and Premium |
|--|--|
-| Request and response header | Request and response header in Rules engine actions is copied over to Rule set in Standard/Premium profile. |
-| Enforce certificate name check | Enforce certificate name check is supported at the profile level of Azure Front Door (classic). In a Front Door Standard or Premium profile this setting can be found in the origin settings. This configuration will apply to all origins in the migrated Standard or Premium profile. |
-| Origin response time | Origin response time is copied over to the migrated Standard or Premium profile. |
-| Web Application Firewall (WAF) | If the Azure Front Door (classic) profile has WAF policies associated, the migration will create a copy of WAF policies with a default name for the Standard or Premium profile. The names for each WAF policy can be changed during setup from the default names. You can also select an existing Standard or Premium WAF policy that matches the migrated Front Door profile. |
-| Custom domain | This section will use `www.contoso.com` as an example to show a domain going through the migration. The custom domain `www.contoso.com` points to `contoso.azurefd.net` in Front Door (classic) for the CNAME record. </br></br>When the custom domain `www.contoso.com` gets moved to the new Front Door profile:</br>- The association for the custom domain shows the new Front Door endpoint as `contoso-hashvalue.z01.azurefd.net`. The CNAME of the custom domain will automatically point to the new endpoint name with the hash value in the backend. At this point, you can change the CNAME record with your DNS provider to point to the new endpoint name with the hash value.</br>- The classic endpoint `contoso.azurefd.net` will show as a custom domain in the migrated Front Door profile under the *Migrated domain* tab of the **Domains* page. This domain will be associated to the default migrated route. This default route can only be removed once the domain is disassociated from it. The domain properties can't be updated, for the exception of associating and removing the association from a route. The domain can only be deleted after you've changed the CNAME to the new endpoint name.</br>- The certificate state and DNS state for `www.contoso.com` is the same as the Front Door (classic) profile.</br></br> There are no changes to the managed certificate auto rotation settings. |
+| Request and response header | Request and response header in Rules engine actions is copied over to Rule set. |
+| Enforce certificate name check | Enforce certificate name check is supported at the profile level of Azure Front Door (classic). In an Azure Front Door Standard or Premium profile, this setting can be found in the origin settings. This configuration gets applied to all origins in the migrated profile. |
+| Origin response time | Origin response time gets copied over to the migrated profile. |
+| Web Application Firewall (WAF) | If the Azure Front Door (classic) profile has WAF policies associated, the migration will create a copy of each WAF policy for the tier you're migrating to. Names for each WAF policy can be changed during the prepare phase of the migration. You can also select an existing Front Door Standard or Premium WAF policy that matches the migrated Front Door profile. |
+| Custom domain | This section uses `www.contoso.com` as an example to show what happens to a domain going through the migration. The custom domain `www.contoso.com` points to `contoso.azurefd.net` in Front Door (classic) as a CNAME record. </br></br>When `www.contoso.com` gets moved to the new Front Door profile:</br>- The association for the custom domain shows the new Front Door endpoint as `contoso-<hashvalue>.z01.azurefd.net`. The CNAME of the custom domain automatically points to the new endpoint name with the hash value in the backend. At this point, you can change the CNAME record with your DNS provider to point to the new endpoint name with the hash value.</br>- The classic endpoint `contoso.azurefd.net` shows as a custom domain in the migrated Front Door profile under the *Migrated domain* tab of the **Domains** page. This domain is associated to the default migrated route. The default route can only be removed once the domain is disassociated from it. The domain properties can't be updated, except to associate the domain with, or remove it from, a route. The domain can only be deleted after you've changed the CNAME to the new endpoint name.</br>- The certificate state and DNS state for `www.contoso.com` remain consistent with the Front Door (classic) profile.</br></br> No changes are made to the managed certificate auto rotation settings. |
## Next steps
-* Learn more about the [Azure Front Door tier migration process](tier-migration.md).
-* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
+* Learn more about the [Azure Front Door migration process](tier-migration.md).
+* Learn how to [migrate from Azure Front Door (classic) to Azure Front Door Standard or Premium](migrate-tier.md) using the Azure portal.
+* Learn how to [migrate from Azure Front Door (classic) to Azure Front Door Standard or Premium](migrate-tier-powershell.md) using Azure PowerShell.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
Title: About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+ Title: About Azure Front Door (classic) to Standard/Premium tier migration
description: This article explains the migration process and changes expected when using the migration tool to Azure Front Door Standard/Premium tier. Previously updated : 11/3/2022 Last updated : 05/26/2023
-# About Azure Front Door (classic) to Standard/Premium tier migration (Preview)
+# About Azure Front Door (classic) to Standard/Premium tier migration
-Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, enhanced rules engine and advanced diagnostics you have the ability to secure and accelerate your web applications to bring a better experience to your customers.
+Azure Front Door Standard and Premium tiers were released in March 2022 as the next generation content delivery network service. The newer tiers combine the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall (WAF). With features such as Private Link integration, an enhanced rules engine, and advanced diagnostics, you can secure and accelerate your web applications to bring a better experience to your customers.
-> [!IMPORTANT]
-> Migration capability for Azure Front Door is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-Azure recommends migrating to the newer tiers to benefit from the new features and improvements over the Classic tier. To help with the migration process, Azure Front Door provides a zero-downtime migration to migrate your workload from Azure Front Door (class) to either Standard or Premium tier.
+We recommend migrating your classic profile to one of the newer tiers to benefit from the new features and improvements. To ease the move to the new tiers, Azure Front Door provides a zero-downtime migration to move your workload from Azure Front Door (classic) to either Standard or Premium.
In this article, you'll learn about the migration process, the breaking changes involved, and what to do before, during, and after the migration. ## Migration process overview
-Azure Front Door zero-down time migration happens in three stages. The first stage is validation, followed by preparing for migration, and then migrate. The time it takes for a migration to complete depends on the complexity of the Azure Front Door profile. You can expect the migration to take a few minutes for a simple Azure Front Door profile and longer for a profile that has many frontend domains, backend pools, routing rules and rule engine rules.
+Migrating to the Standard or Premium tier of Azure Front Door happens in either three or five phases, depending on whether you're using your own certificate. The time it takes to migrate depends on the complexity of your Azure Front Door (classic) profile. You can expect the migration to take a few minutes for a simple Azure Front Door profile and longer for a profile that has multiple frontend domains, backend pools, routing rules, and rule engine rules.
-### Five steps of migration
+### Phases of migration
-**Validate compatibility** - The migration will validate if the Azure Front Door (classic) profile is eligible for migration. You'll be prompted with messages on what needs to be fixed before you can move onto the preparation phase. For more information, see [prerequisites](#prerequisites).
+#### Validate compatibility
-**Prepare for migration** - Azure Front Door will create a new Standard or Premium profile based on your Classic profile configuration in a disabled state. The new Front Door profile created will depend on the Web Application Firewall (WAF) policy you've associated to the profile.
+The migration tool checks to see if your Azure Front Door (classic) profile is compatible for migration. If validation fails, you're provided with suggestions on how to resolve any issues before you can validate again.
-* **Premium tier** - If you have *managed WAF* policies associated to the Azure Front Door (classic) profile. A premium tier profile **can't** be downgraded to a standard tier after migration.
-* **Standard tier** - If you have *custom WAF* policies associated to the Azure Front Door (classic) profile. A standard tier profile **can** be upgraded to premium tier after migration.
+* Azure Front Door Standard and Premium require all custom domains to use HTTPS. If you don't have your own certificate, you can use an Azure Front Door managed certificate. The certificate is free of charge and gets managed for you.
- During the preparation stage, Azure Front Door will create copies of WAF policies specific to the Front Door tier with default names. You can change the name for the WAF policies at this time. You can also select an existing WAF policy that matches the tier you're migrating to. At this time, a read-only view of the newly created profile is provided for you to verify configurations.
+* Session affinity gets enabled in the origin group settings for an Azure Front Door Standard or Premium profile. In Azure Front Door (classic), session affinity is set at the domain level. As part of the migration, session affinity is based on the Front Door (classic) profile settings. If you have two domains in your Front Door (classic) profile that share the same backend pool, session affinity has to be consistent across both domains in order for migration validation to pass.
- > [!NOTE]
- > No changes can be to the Front Door (classic) configuration once this step has been initiated.
+* If you're using BYOC (Bring Your Own Certificate) for Azure Front Door (classic), you need to [grant Key Vault access](standard-premium/how-to-configure-https-custom-domain.md#register-azure-front-door) to Azure Front Door Standard or Premium. This step is required for Azure Front Door Standard or Premium to access your certificate in Key Vault. If you're using Azure Front Door managed certificate, you don't need to grant Key Vault access.
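As a hedged sketch of the BYOC prerequisite above (this command is not part of the migration tool itself), registering the Front Door service principal in your tenant with the Azure CLI might look like the following; confirm the current application ID against the linked Key Vault access article:

```shell
# Register the Microsoft.AzureFrontDoor-Cdn service principal so it can
# later be granted access to your Key Vault. The GUID below is the
# well-known Front Door application ID at the time of writing; verify it
# against the current documentation before use.
az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8
```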
-**Enable managed identity** - During this step you can configure managed identities for Azure Front Door to access your certificate in a Key Vault.
+#### Prepare for migration
-**Grant managed identity to Key Vault** - This step adds managed identity access to all the Key Vaults used in the Front Door (classic) profile.
+Azure Front Door creates a new Standard or Premium profile based on your Front Door (classic) profile's configuration. The new Front Door profile tier depends on the Web Application Firewall (WAF) policy settings you associate with the profile.
-**Migrate/Abort migration**
-
-* **Migrate** - Once you select this option, the Azure Front Door (classic) profile gets disabled and the Azure Front Door Standard or Premium profile will be activated. Traffic will start going through the new profile once the migration completes.
-* **Abort migration** - If you decided you no longer want to move forward with the migration process, selecting this option will delete the new Front Door profile that was created.
+* **Premium** - If your WAF policy has **managed WAF rules** associated to the Azure Front Door (classic) profile.
+
+* **Standard** - If your WAF policy only has **custom WAF rules** associated to the Azure Front Door (classic) profile.
> [!NOTE]
-> * If you cancel the migration only the new Front Door profile gets deleted, any WAF policy copies will need to be manually deleted.
-> * Traffic to your Azure Front Door (classic) will continue to be serve until migration has been completed.
-> * Each Azure Front Door (classic) profile can create one Azure Front Door Standard or Premium profile.
+> A standard tier Front Door profile **can** be upgraded to premium tier after migration. However, a premium tier Front Door profile **can't** be downgraded to standard tier after migration.
-Migration is only available and can be completed using the Azure portal. Service charges for Azure Front Door Standard or Premium tier will start once migration is completed.
+During the preparation phase, Azure Front Door creates a copy of each WAF policy associated to the Front Door (classic) profile. The WAF policy tier is specific to the tier you're migrating to. A default name is provided for each WAF policy, which you can change during this phase. You can also select an existing WAF policy that matches the tier you're migrating to instead of making a copy. Once the preparation phase is completed, a read-only view of the new Front Door profile is provided for you to verify configurations.
-## Breaking changes between tiers
+> [!IMPORTANT]
+> You won't be able to make changes to the Front Door (classic) configuration once the preparation phase has been initiated.
-### Dev-ops
+#### Enable managed identity
-Azure Front Door Standard/Premium uses a different resource provider namespace of *Microsoft.Cdn*, while Azure Front Door (classic) uses *Microsoft.Network*. After you've migrated your Azure Front Door profile, you need to change your Dev-ops script to use the new namespace, different Azure PowerShell module and CLI commands and API.
+During this step, you configure managed identity for Azure Front Door to access your certificate in an Azure Key Vault. Managed identity is required if you're using BYOC (Bring Your Own Certificate) for Azure Front Door (classic). If you're using Azure Front Door managed certificate, you don't need to grant Key Vault access.
-### Endpoint with hash value
+#### Grant managed identity to Key Vault
-Azure Front Door Standard and Premium endpoints are generated to include a hash value to prevent your domain from being taken over. The format of the endpoint name is `<endpointname>-<hashvalue>.z01.azurefd.net`. The Classic Front Door endpoint name will continue to work after migration but we recommend replacing it with the newly created endpoint name from the Standard or Premium profile. For more information, see [Endpoint domain names](endpoint.md#endpoint-domain-names).
+This step adds managed identity access to all Azure Key Vaults used in the Front Door (classic) profile.
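Outside the portal migration experience, an equivalent grant can be sketched with the Azure CLI (the vault name is hypothetical, and `OBJECT_ID` is assumed to hold the principal ID of the managed identity you enabled):

```shell
# Allow the Front Door managed identity to read certificates and secrets
# from the vault. OBJECT_ID is the managed identity's principal ID.
az keyvault set-policy \
  --name myKeyVault \
  --object-id "$OBJECT_ID" \
  --secret-permissions get \
  --certificate-permissions get
```

For vaults configured with Azure RBAC instead of access policies, grant a role such as *Key Vault Secrets User* with a role assignment rather than an access policy.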
-### Logs and metrics
+#### Migrate
+
+Once migration begins, the Azure Front Door (classic) profile gets disabled and the Azure Front Door Standard or Premium profile gets activated. Traffic starts flowing through the new profile once the migration completes.
-Diagnostic logs and metrics won't be migrated. Azure Front Door Standard/Premium log fields are different from Front Door (classic). The newer tier also has heath probe logs and is recommended you enable diagnostic logging after the migration complete. Standard and Premium tier also supports built-in reports that will start displaying data once the migration is done.
+If you decide you no longer want to move forward with the migration process, you can select **Abort migration**. Aborting the migration deletes the new Front Door profile that was created. The Azure Front Door (classic) profile remains active, and you can continue to use it. Any WAF policy copies need to be manually deleted.
-## Prerequisites
+Service charges for Azure Front Door Standard or Premium tier start once migration is completed.
-* HTTPS is required for all custom domains. All Azure Front Door Standard and Premium tiers enforce HTTPS on every domain. If you don't your own certificate, you can use Azure Front Door managed certificate that is free and managed for you.
-* If you use BYOC for Azure Front Door (classic), you need to grant Key Vault access to your Azure Front Door Standard or Premium profile by completing the following steps:
- * Register the service principal for **Microsoft.AzureFrontDoor-Cdn** as an app in your Azure Active Directory using Azure PowerShell.
- * Grant **Microsoft.AzureFrontDoor-Cdn** access to your Key Vault.
-* Session affinity is enabled from within the origin group in an Azure Front Door Standard and Premium profile. In Azure Front Door (classic), session affinity is controlled at the domain level. As part of the migration, session affinity gets enabled or disabled based on the Classic profile's configuration. If you have two domains in a Classic profile that shares the same origin group, session affinity has to be consistent across both domains in order for migration can pass validation.
+## Breaking changes when migrating to Standard or Premium tier
> [!IMPORTANT] > * If your Azure Front Door (classic) profile can qualify to migrate to Standard tier but the number of resources exceeds the Standard tier quota limit, it will be migrated to Premium tier instead. > * If you use Azure PowerShell, Azure CLI, API, or Terraform to do the migration, then you need to create WAF policies separately.
+### Dev-ops
+
+Azure Front Door Standard and Premium use a different resource provider namespace of *Microsoft.Cdn*, while Azure Front Door (classic) uses *Microsoft.Network*. After you migrate your Azure Front Door profile, you'll need to update your DevOps scripts to use the new namespace, the updated Azure PowerShell module, CLI commands, and APIs.
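As a small illustration (a sketch; the resource group and profile names are made up), scripts that read the profile typically move from the `Az.FrontDoor` module, which targets *Microsoft.Network*, to the `Az.Cdn` module, which targets *Microsoft.Cdn*:

```powershell-interactive
# Before migration: Front Door (classic) under the Microsoft.Network provider.
Get-AzFrontDoor -ResourceGroupName "myResourceGroup" -Name "myClassicFrontDoor"

# After migration: Standard/Premium profiles under the Microsoft.Cdn provider.
Get-AzFrontDoorCdnProfile -ResourceGroupName "myResourceGroup" -ProfileName "myFrontDoorProfile"
```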
+
+### Endpoint with hash value
+
+Azure Front Door Standard and Premium endpoints are generated to include a hash value to prevent your domain from being taken over. The format of the endpoint name is `<endpointname>-<hashvalue>.z01.azurefd.net`. The Front Door (classic) endpoint name will continue to work after migration but we recommend replacing it with the newly created endpoint name from your new Standard or Premium profile. For more information, see [Endpoint domain names](endpoint.md#endpoint-domain-names).
+
+### Logs and metrics
+
+Diagnostic logs and metrics aren't migrated. Azure Front Door Standard and Premium log fields are different from Azure Front Door (classic). The Standard and Premium tiers have health probe logging, and we recommend that you enable diagnostic logging after you migrate. Standard and Premium tiers also support built-in reports that start displaying data once the migration is completed. For more information, see [Azure Front Door reports](standard-premium/how-to-reports.md).
+ ### Web Application Firewall (WAF)
-The default Azure Front Door tier created during migration is determined by the type of rules contain in the WAF policy. In this section we'll, cover scenarios for different rule type for a WAF policy.
+The default Azure Front Door tier selected for migration is determined by the type of rules contained in the WAF policy. In this section, we cover the scenarios for different WAF policy rule types.
-* Classic WAF policy contains only custom rules.
- * The new Azure Front Door profile defaults to Standard tier and can be upgraded to Premium during migration. If you use the portal for migration, Azure will create custom WAF rules for Standard. If you upgrade to Premium during migration, custom WAF rules will be created by the migration capability, but managed WAF rules will need to be created manually after migration.
-* Classic WAF policy has only managed WAF rules, or both managed and custom WAF rules.
- * The new Azure Front Door profile defaults to Premium tier and isn't eligible for downgrade during migration. Remove the WAF policy association or delete the manage WAF rules from the Classic WAF policy.
+**Classic WAF policy with only custom rules** - the new Azure Front Door profile defaults to Standard tier and can be upgraded to Premium during the migration. If you use the portal for migration, Azure creates custom WAF rules for Standard. If you upgrade to Premium during migration, custom WAF rules are created as part of the migration process. You'll need to add managed WAF rules manually after migration if you want to use managed rules.
- > [!NOTE]
- > To avoid creating duplicate WAF policies during migration, the Azure portal provides the option to either create copies or reuse an existing Azure Front Door Standard or Premium WAF policy.
-
-* If you migrate your Azure Front Door profile using Azure PowerShell or Azure CLI, you need to create the WAF policies separately before migration.
+**Classic WAF policy with only managed WAF rules, or both managed and custom WAF rules** - the new Azure Front Door profile defaults to Premium tier and can't be downgraded during the migration. If you want to use Standard tier, then you need to remove the WAF policy association or delete the managed WAF rules from the Front Door (classic) WAF policy.
+
+> [!NOTE]
+> To avoid creating duplicate WAF policies during migration, the migration capability provides the option to either create copies or use an existing Azure Front Door Standard or Premium WAF policy.
+
+### Azure Policy for Azure Front Door WAF
+
+[Azure Policy for WAF](../web-application-firewall/shared/waf-azure-policy.md) is not available for Azure Front Door Standard and Premium. Azure Policy lets you set and check WAF standards for your organization at a large scale. This feature will be available in the near future.
-## Naming convention for migration
+## Naming convention used for migration
-During the migration, a default profile name is used in the format of `<endpointprefix>-migrated`. For example, a Classic endpoint named `myEndpoint.azurefd.net`, will have the default name of `myEndpoint-migrated`.
-WAF policy name will use the format of `<classicWAFpolicyname>-<standard or premium>`. For example, a Classic WAF policy named `contosoWAF1`, will have the default name of `contosoWAF1-premium`. You can rename the Front Door profile and the WAF policy during migration. Renaming of rule engine and routes isn't supported, instead default names will be assigned.
+During the migration, a default profile name is used in the format of `<endpointprefix>-migrated`. For example, an Azure Front Door (classic) endpoint named `myEndpoint.azurefd.net` has the default name of `myEndpoint-migrated`.
+A WAF policy name has `-standard` or `-premium` appended to the classic WAF policy name. For example, a Front Door (classic) WAF policy named `contosoWAF1` has the default name of `contosoWAF1-premium`. You can rename both the Front Door profile and the WAF policy during the migration process. Renaming of rules engine configurations and routes isn't supported; default names are assigned instead.
-URL redirect and URL rewrite are supported through rules engine in Azure Front Door Standard and Premium, while Azure Front Door (classic) supports them through routing rules. During migration, these two rules get created as rules engine rules in a Standard and Premium profile. The names of these rules are `urlRewriteMigrated` and `urlRedirectMigrated`.
+URL redirect and URL rewrite are supported through rules engine in Azure Front Door Standard and Premium, while Azure Front Door (classic) supports them through routing rules. During migration, these two rules get created as rule set rules in a Standard and Premium profile. The names of these rules are `urlRewriteMigrated` and `urlRedirectMigrated`.
## Resource states
The following table explains the various stages of the migration process and if
| Migration state | Front Door (classic) resource state | Can make changes? | Front Door Standard/Premium | Can make changes? | |--|--|--|--|--|
-|Before migration| Active | Yes | N/A | N/A |
-| Step 1: Validating compatibility | Active | Yes | N/A | N/A |
-| Step 2: Preparing for migration | Migrating | No | Creating | No |
-| Step 5: Committing migration | Migrating | No | CommittingMigration | No |
-| Step 5: Committed migration | Migrated | No | Active | Yes |
-| Step 5: Aborting migration | AbortingMigration | No | Deleting | No |
-| Step 5: Aborted migration | Active | Yes | Deleted | N/A |
+| Before migration | Active | Yes | N/A | N/A |
+| Validating compatibility | Active | Yes | N/A | N/A |
+| Prepare for migration | Migrating | No | Creating | No |
+| Committing migration | Migrating | No | CommittingMigration | No |
+| Committed migration | Migrated | No | Active | Yes |
+| Aborting migration | AbortingMigration | No | Deleting | No |
+| Aborted migration | Active | Yes | Deleted | N/A |
## Next steps
-* Understand the [mapping between Front Door tiers](tier-mapping.md) settings.
-* Learn how to [migrate from Classic to Standard/Premium tier](migrate-tier.md) using the Azure portal.
+* Understand the [settings mapping between Azure Front Door tiers](tier-mapping.md).
+* Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier.md) using the Azure portal.
+* Learn how to [migrate from Azure Front Door (classic) to Standard or Premium tier](migrate-tier-powershell.md) using Azure PowerShell.
frontdoor Tier Upgrade Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade-powershell.md
+
+ Title: Upgrade from Azure Front Door Standard to Premium with Azure PowerShell
+description: This article shows you how to upgrade from an Azure Front Door Standard to an Azure Front Door Premium profile with Azure PowerShell.
++++ Last updated : 06/05/2023+++
+# Upgrade from Azure Front Door Standard to Premium with Azure PowerShell
+
+Azure Front Door supports upgrading from Standard to Premium for more advanced capabilities and an increase in quota limit. The upgrade doesn't cause any downtime to your services or applications. For more information about the differences between Standard and Premium, see [Tier comparison](standard-premium/tier-comparison.md).
+
+This article walks you through how to perform the tier upgrade for an Azure Front Door Standard profile. Once upgraded, you're charged for the Azure Front Door Premium monthly base fee at an hourly rate.
+
+> [!IMPORTANT]
+> Downgrading from **Premium** to **Standard** isn't supported.
+
+## Prerequisites
+
+* Confirm you have an Azure Front Door Standard profile available in your subscription to upgrade.
+* The latest Azure PowerShell module installed locally, or Azure Cloud Shell. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell).
+
+## Upgrade tier
+
+Run the [Update-AzFrontDoorCdnProfile](/powershell/module/az.cdn/update-azfrontdoorcdnprofile) command to upgrade your Azure Front Door Standard profile to Premium. The following example shows the command to upgrade a profile named **myAzureFrontDoor** in the resource group **myAFDResourceGroup**.
+
+### No WAF policies associated
+
+```powershell-interactive
+Update-AzFrontDoorCdnProfile -ProfileName myAzureFrontDoor -ResourceGroupName myAFDResourceGroup -ProfileUpgradeParameter @{}
+```
+
+The following example shows the output of the command:
+
+```
+Location Name             Kind      ResourceGroupName
+-------- ----             ----      -----------------
+Global   myAzureFrontDoor frontdoor myAFDResourceGroup
+```
+
+### WAF policies associated
+
+1. Run the [New-AzFrontDoorCdnProfileChangeSkuWafMappingObject](/powershell/module/az.cdn/new-azfrontdoorcdnprofilechangeskuwafmappingobject) command to create a new object for the WAF policy mapping. This command maps the standard WAF policy to the premium WAF policy resource ID. The premium WAF policy can be an existing one or a new one. If you're using an existing one, replace the WafPolicyId value with the resource ID of the premium WAF policy. If you're creating a new one, replace the `premiumWAFPolicyName` value with the name of the premium WAF policy. In this example, we're creating two premium WAF policies named **myPremiumWAFPolicy1** and **myPremiumWAFPolicy2**.
+
+ Replace the following values in the command:
+
+ * `<subscriptionId>`: Your subscription ID.
+ * `<resourceGroupName>`: The resource group name of the WAF policy.
+ * `<standardWAFPolicyName>`: The name of the standard WAF policy.
+
+ ```powershell-interactive
+    $waf1 = New-AzFrontDoorCdnProfileChangeSkuWafMappingObject -SecurityPolicyName <standardWAFPolicyName> -WafPolicyId /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/frontDoorWebApplicationFirewallPolicies/myPremiumWAFPolicy1
+
+    $waf2 = New-AzFrontDoorCdnProfileChangeSkuWafMappingObject -SecurityPolicyName <standardWAFPolicyName> -WafPolicyId /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/frontDoorWebApplicationFirewallPolicies/myPremiumWAFPolicy2
+ ```
+
+1. Run the [New-AzFrontDoorCdnProfileUpgradeParametersObject](/powershell/module/az.cdn/new-azfrontdoorcdnprofileupgradeparametersobject) command to create a new object for the upgrade parameters.
+
+ ```powershell-interactive
+    $upgradeParams = New-AzFrontDoorCdnProfileUpgradeParametersObject -WafPolicyMapping @($waf1, $waf2)
+ ```
+
+1. Run the [Update-AzFrontDoorCdnProfile](/powershell/module/az.cdn/update-azfrontdoorcdnprofile) command to upgrade your Azure Front Door Standard profile to Premium. The following example shows the command to upgrade a profile named **myAzureFrontDoor** in the resource group **myAFDResourceGroup**.
+
+ ```powershell-interactive
+ Update-AzFrontDoorCdnProfile -ProfileName myAzureFrontDoor -ResourceGroupName myAFDResourceGroup -ProfileUpgradeParameter $upgradeParams
+ ```
+
+ The following example shows the output of the command:
+
+ ```
+    Location Name             Kind      ResourceGroupName
+    -------- ----             ----      -----------------
+    Global   myAzureFrontDoor frontdoor myAFDResourceGroup
+ ```
+
+> [!NOTE]
+> You're now being billed for the Azure Front Door Premium at an hourly rate.
+
+## Next steps
+
+* Learn more about [Managed rule for Azure Front Door WAF policy](../web-application-firewall/afds/waf-front-door-drs.md).
+* Learn how to enable [Private Link to origin resources in Azure Front Door](private-link.md).
frontdoor Tier Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-upgrade.md
Title: Upgrade from Azure Front Door Standard to Premium tier (Preview)
-description: This article provides step-by-step instructions on how to upgrade from an Azure Front Door Standard to an Azure Front Door Premium tier profile.
+ Title: Upgrade from Azure Front Door Standard to Premium
+description: This article shows you how to upgrade from an Azure Front Door Standard to an Azure Front Door Premium profile.
Previously updated : 11/2/2022 Last updated : 05/26/2023
-# Upgrade from Azure Front Door Standard to Premium tier (Preview)
+# Upgrade from Azure Front Door Standard to Premium
-Azure Front Door supports upgrading from Standard to Premium tier for more advanced capabilities and an increase in quota limits. The upgrade won't cause any downtime to your services or applications. For more information about the differences between Standard and Premium tier, see [Tier comparison](standard-premium/tier-comparison.md).
+Azure Front Door supports upgrading from Standard to Premium for more advanced capabilities and an increase in quota limit. The upgrade doesn't cause any downtime to your services or applications. For more information about the differences between Standard and Premium, see [Tier comparison](standard-premium/tier-comparison.md).
-> [!IMPORTANT]
-> Upgrading Azure Front Door tier is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-This article will walk you through how to perform the tier upgrade on the configuration page of a Front Door Standard profile. Once upgraded, you'll be charged for the Azure Front Door Premium monthly base fee at an hourly rate.
+This article walks you through how to perform the tier upgrade for an Azure Front Door Standard profile. Once upgraded, you're charged for the Azure Front Door Premium monthly base fee at an hourly rate.
> [!IMPORTANT]
-> Downgrading from Premium to Standard tier is not supported.
+> Downgrading from **Premium** to **Standard** isn't supported.
## Prerequisite
Confirm you have an Azure Front Door Standard profile available in your subscrip
## Upgrade tier
-1. Go to the Azure Front Door Standard profile you want to upgrade and select **Configuration (preview)** from under *Settings*.
+1. Go to the Azure Front Door Standard profile you want to upgrade and select **Configuration** from under *Settings*.
:::image type="content" source="./media/tier-upgrade/overview.png" alt-text="Screenshot of the configuration button under settings for a Front Door standard profile.":::
-1. Select **Upgrade** to begin the upgrade process. If you don't have any WAF policies associated to your Front Door Standard profile, then you'll be prompted with a confirmation to proceed with the upgrade.
+1. Select **Upgrade** to begin the upgrade process. If you don't have any WAF policies associated to your Front Door Standard profile, then you're prompted with a confirmation to proceed with the upgrade.
:::image type="content" source="./media/tier-upgrade/upgrade-button.png" alt-text="Screenshot of the upgrade button on the configuration page a Front Door Standard profile.":::
-1. If you have WAF policies associated to the Front Door Standard profile, then you'll be taken to the *Upgrade WAF policies* page. On this page, you'll decide whether you want to make copies of the WAF policies or use an existing premium WAF policy. You can also change the name of the new WAF policy copy during this step.
+1. If you have WAF policies associated to the Front Door Standard profile, then you're taken to the *Upgrade WAF policies* page. On this page, you decide whether you want to make copies of the WAF policies or use an existing premium WAF policy. You can also change the name of the new WAF policy copy during this step.
:::image type="content" source="./media/tier-upgrade/upgrade-waf.png" alt-text="Screenshot of the upgrade WAF policies page."::: > [!NOTE]
- > To use managed WAF rules for the new premium WAF policy copies, you'll need to manually enable them after the upgrade.
+ > To use managed WAF rules for the new premium WAF policy copies, you'll need to manually enable them after upgrading the Front Door profile.
-1. Select **Upgrade** once you're done setting up the WAF policies. Select **Yes** to confirm you would like to proceed with the upgrade.
+1. Select **Upgrade** once you're done setting up WAF policies. Select **Yes** to confirm you would like to proceed with the upgrade.
:::image type="content" source="./media/tier-upgrade/confirm-upgrade.png" alt-text="Screenshot of the confirmation message from upgrade WAF policies page.":::
-1. The upgrade process will create new premium WAF policy copies and associate them to the upgraded Front Door profile. The upgrade can take a few minutes to complete depending on the complexity of your Front Door profile.
+1. The upgrade process creates a new premium WAF policy and associates it to the Front Door Premium profile. The upgrade can take a few minutes to complete depending on the complexity of your Front Door Standard profile.
:::image type="content" source="./media/tier-upgrade/upgrade-in-progress.png" alt-text="Screenshot of the configuration page with upgrade in progress status.":::
-1. Once the upgrade completes, you'll see **Tier: Premium** display on the *Configuration* page.
+1. Once the upgrade completes, you see **Tier: Premium** display on the *Configuration* page.
:::image type="content" source="./media/tier-upgrade/upgrade-complete.png" alt-text="Screenshot of the Front Door tier upgraded to premium on the configuration page."::: > [!NOTE]
- > You're now being billed for the Azure Front Door Premium base fee at an hourly rate.
+ > You're now being billed for the Azure Front Door Premium at an hourly rate.
## Next steps
-* Learn more about [Managed rule for WAF policy](../web-application-firewall/afds/waf-front-door-drs.md).
-* Learn how to enable [Private Link to origin resources](private-link.md).
+* Learn more about [Managed rule for Azure Front Door WAF policy](../web-application-firewall/afds/waf-front-door-drs.md).
+* Learn how to enable [Private Link to origin resources in Azure Front Door](private-link.md).
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
To enable and use Azure Policy with your Kubernetes cluster, take the following
The following general limitations apply to the Azure Policy Add-on for Kubernetes clusters: -- Azure Policy Add-on for Kubernetes is supported on Kubernetes version **1.14** or higher.
+- Azure Policy Add-on for Kubernetes is supported on [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md).
- Azure Policy Add-on for Kubernetes can only be deployed to Linux node pools. - Maximum number of pods supported by the Azure Policy Add-on: **10,000** - Maximum number of Non-compliant records per policy per cluster: **500**
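As a quick sanity check before enabling the add-on (a sketch; the resource group and cluster names are placeholders), you can read a cluster's Kubernetes version with Azure PowerShell and compare it against the AKS supported-versions list:

```powershell-interactive
# Returns the cluster's Kubernetes version as a string, for example "1.26.x".
(Get-AzAksCluster -ResourceGroupName "myResourceGroup" -Name "myAKSCluster").KubernetesVersion
```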
hdinsight Cluster Management Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/cluster-management-best-practices.md
description: Learn best practices for managing HDInsight clusters.
Previously updated : 05/30/2022 Last updated : 06/12/2023 # HDInsight cluster management best practices
hdinsight Apache Hadoop Connect Excel Power Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-excel-power-query.md
description: Learn how to take advantage of business intelligence components and
Previously updated : 05/30/2022 Last updated : 06/12/2023 # Connect Excel to Apache Hadoop by using Power Query
hdinsight Connect Install Beeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/connect-install-beeline.md
description: Learn how to connect to the Apache Beeline client to run Hive queri
Previously updated : 05/30/2022 Last updated : 06/12/2023 # Connect to HiveServer2 using Beeline or install Beeline locally to connect from your local
hdinsight Hdinsight Troubleshoot Converting Service Principal Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-converting-service-principal-certificate.md
Title: Converting certificate contents to base-64 - Azure HDInsight
description: Converting service principal certificate contents to base-64 encoded string format in Azure HDInsight Previously updated : 05/30/2022- Last updated : 06/12/2023 # Converting service principal certificate contents to base-64 encoded string format in HDInsight
This article describes troubleshooting steps and possible resolutions for issues
## Issue
-You receive an error message stating the input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or a non-white space character among the padding characters.
+You receive an error message stating the input isn't a valid Base-64 string because it contains a non-base-64 character, more than two padding characters, or a non-white-space character among the padding characters.
## Cause
-When using PowerShell or Azure template deployment to create clusters with Data Lake as either primary or additional storage, the service principal certificate contents provided to access the Data Lake storage account is in the base-64 format. Improper conversion of pfx certificate contents to base-64 encoded string can lead to this error.
+When using PowerShell or Azure template deployment to create clusters with Data Lake as either primary or additional storage, the service principal certificate contents provided to access the Data Lake storage account must be in base-64 format. Improper conversion of pfx certificate contents to a base-64 encoded string can lead to this error.
## Resolution
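A common pattern for producing a valid base-64 string (a sketch with an illustrative file path, not taken verbatim from this article) is to read the pfx file as raw bytes and encode those bytes in one step, rather than reading the file as text:

```powershell-interactive
# Read the pfx as raw bytes; reading it as text can introduce characters
# that trigger the "not a valid Base-64 string" error.
$pfxBytes = [System.IO.File]::ReadAllBytes("C:\certs\servicePrincipal.pfx")
$base64Cert = [System.Convert]::ToBase64String($pfxBytes)
```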
hdinsight Hdinsight Troubleshoot Out Disk Space https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-troubleshoot-out-disk-space.md
Title: Cluster node runs out of disk space in Azure HDInsight
description: Troubleshooting Apache Hadoop cluster node disk space issues in Azure HDInsight. Previously updated : 05/30/2022 Last updated : 06/12/2023 # Scenario: Cluster node runs out of disk space in Azure HDInsight
hdinsight Hdinsight Hadoop Compare Storage Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-compare-storage-options.md
description: Provides an overview of storage types and how they work with Azure
Previously updated : 05/30/2022 Last updated : 06/12/2023 # Compare storage options for use with Azure HDInsight clusters
hdinsight Apache Kafka Connect Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-connect-vpn-gateway.md
description: Learn how to directly connect to Kafka on HDInsight through an Azur
Previously updated : 05/30/2022 Last updated : 06/12/2023 # Connect to Apache Kafka on HDInsight through an Azure Virtual Network
hdinsight Apache Spark Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-settings.md
description: How to view and configure Apache Spark settings for an Azure HDInsi
Previously updated : 05/30/2022 Last updated : 06/12/2023 # Configure Apache Spark settings
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |
+|Import functionality isn't working as expected when the NDJSON file size is greater than 2 GB. Customers see the import job stuck in retry mode.| June 2023| Suggested workaround is to reduce the file size to less than 2 GB.|--|
|Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |- | Resolved, customers impacted with 128 characters issue are notified on resolution. |
-|The SQL provider will cause the `RawResource` column in the database to save incorrectly. This occurs in a small number of cases when a transient exception occurs that causes the provider to use its retry logic.ΓÇ»|April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |
+|The SQL provider causes the `RawResource` column in the database to save incorrectly. This occurs in a few cases when a transient exception causes the provider to use its retry logic. |April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |
| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | -|August 2022 Resolved [#2680](https://github.com/microsoft/fhir-server/pull/2680) | ## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
With Incremental Load mode, customers can:
For details on Incremental Import, visit [Import Documentation](./../healthcare-apis/fhir/configure-import-data.md). **Feature Enhancement: Reindex operation provides job status at resource level**+ Reindex operation supports determining the status of the reindex operation with help of API call `GET {{FHIR_URL}}/_operations/reindex/{{reindexJobId}}`. Details per resource, on the number of completed reindexed resources can be obtained with help of the new field, added in the response- "resourceReindexProgressByResource". For details, visit [3286](https://github.com/microsoft/fhir-server/pull/3286). **Bug Fix: FHIR Search Query optimization of complex queries**
-We have seen issues where complex FHIR queries with Reference Search Parameters would time out. Issue is fixed by updating the SQL query generatior to use an INNER JOIN for Reference Search Parameters. For details, visit [#3295](https://github.com/microsoft/fhir-server/pull/3295).
+
+We have seen issues where complex FHIR queries with Reference Search Parameters would time out. The issue is fixed by updating the SQL query generator to use an INNER JOIN for Reference Search Parameters. For details, visit [#3295](https://github.com/microsoft/fhir-server/pull/3295).
**Bug Fix: Metadata endpoint URL in capability statement is relative URL**+ Per the FHIR specification, the metadata endpoint URL in the capability statement needs to be an absolute URL. For details on the FHIR specification, visit [Capability Statement](https://www.hl7.org/fhir/capabilitystatement-definitions.html#CapabilityStatement.url). This fix addresses the issue; for details, visit [#3265](https://github.com/microsoft/fhir-server/pull/3265). +
+#### DICOM Service
+
+**Retrieve rendered image is GA**
+
+[Rendered images](dicom/dicom-services-conformance-statement.md#retrieve-rendered-image-for-instance-or-frame) can now be retrieved from the DICOM service by using the new rendered endpoint. This API allows a DICOM instance or frame to be accessed in a consumer format (`jpeg` or `png`), a capability that can simplify scenarios such as a client application displaying an image preview.
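As a hedged sketch (the service URL, UIDs, and token audience are placeholders and assumptions, and the request shape follows the DICOMweb-style rendered resource described above), fetching a JPEG preview of an instance might look like this:

```powershell-interactive
# Assumption: the placeholders below must be replaced with your workspace,
# service, and instance identifiers; the token audience is illustrative.
$token = (Get-AzAccessToken -ResourceUrl "https://dicom.healthcareapis.azure.com").Token

Invoke-WebRequest `
    -Uri "https://<workspace>-<dicomservice>.dicom.azurehealthcareapis.com/v1/studies/<studyUid>/series/<seriesUid>/instances/<instanceUid>/rendered" `
    -Headers @{ Authorization = "Bearer $token"; Accept = "image/jpeg" } `
    -OutFile "preview.jpeg"
```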
++
+**Fixed issue where DICOM events and Change Feed may miss changes**
+
+The DICOM Change Feed API could previously return results that incorrectly skipped pending changes when the DICOM server was under load. Identical calls to the Change Feed resource could have resulted in new change events appearing in the middle of the result set. For example, if the first call returned sequence numbers `1`, `2`, `3`, and `5`, then the second identical call could have incorrectly returned `1`, `2`, `3`, `4`, and `5`. This behavior also impacted the DICOM events sent to Azure Event Grid System Topics, and could have resulted in missing events in downstream event handlers. For more details, see [#2611](https://github.com/microsoft/dicom-server/pull/2611).
++ ## May 2023 #### Azure Health Data Services
iot-central Concepts Faq Extend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-extend.md
Title: How to extend IoT Central
description: How to use data exports, rules, or the REST API to extend IoT Central if it's missing something you need. Previously updated : 06/09/2022 Last updated : 06/12/2023
iot-central Concepts Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-quotas-limits.md
Title: Azure IoT Central quotas and limits
-description: This article lists the key quotas and limits that apply to an IoT Central application including those from the underlying DPS and IoT Hub services.
+description: This article lists the key quotas and limits that apply to an IoT Central application, including those from the underlying DPS and IoT Hub services.
Previously updated : 06/07/2022 Last updated : 06/12/2023
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes | | - | -- | -- | | Number of telemetry messages per second per device| 10 | If you need to exceed this limit, contact support to discuss increasing it for your application. |
-| Maximum size of a device-to-cloud message | 256 KB | This value is set by the IoT Hub service. |
-| Maximum size of a cloud-to-device message | 64 KB | This value is set by the IoT Hub service. |
+| Maximum size of a device-to-cloud message | 256 KB | The IoT Hub service sets this value. |
+| Maximum size of a cloud-to-device message | 64 KB | The IoT Hub service sets this value. |
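A device client can throttle locally to stay within the 10 messages/second quota above. The token bucket below is an illustrative Python sketch (a hypothetical helper, not part of any IoT Central or IoT Hub SDK):

```python
# Hypothetical client-side throttle: allow at most `rate` sends per second
# with bursts up to `capacity`, matching the per-device telemetry quota.
import time

class TokenBucket:
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate          # tokens added per second (10 for this quota)
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.clock = clock
        self.last = clock()

    def try_send(self):
        """Return True if a message may be sent now, consuming one token."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=10)
sent = sum(bucket.try_send() for _ in range(25))
print(sent)  # 10: the burst capacity, refilled at 10 tokens/second afterward
```

Messages rejected by `try_send` can be queued and retried, rather than risking service-side throttling.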
## Property updates

| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of property updates per second | 100 | This is a soft limit. IoT Central autoscales the application as needed<sup>1</sup>. |
-| Properties | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. Maximum size of each individual property in every section is 4 KB. | These values are set by the IoT Hub service. |
+| Number of property updates per second | 100 | This limit is a soft limit. IoT Central autoscales the application as needed<sup>1</sup>. |
+| Properties | Maximum size of desired properties and reported properties sections are 32 KB each. Maximum size of tags section is 8 KB. Maximum size of each individual property in every section is 4 KB. | The IoT Hub service sets these values. |
## Commands

| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of command executions per second | 20 | This is a soft limit. IoT Central autoscales the application as needed<sup>1</sup>. |
+| Number of command executions per second | 20 | This limit is a soft limit. IoT Central autoscales the application as needed<sup>1</sup>. |
## REST API calls
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes |
| - | -- | -- |
-| Number of devices registrations per minute | 200 | This quota is set by the underlying DPS instance. Contact support to discuss increasing this quota for your application. |
+| Number of devices registrations per minute | 200 | The underlying DPS instance sets this quota. Contact support to discuss increasing this quota for your application. |
## Rules
There are various quotas and limits that apply to IoT Central applications. IoT
| Item | Quota or limit | Notes |
| - | -- | -- |
-| Maximum user role assignments per application | 200 | This isn't the same as the number of users per application. |
-| Maximum roles per application | 50 | This includes the default application and organization roles. |
+| Maximum user role assignments per application | 200 | This limit isn't the same as the number of users per application. |
+| Maximum roles per application | 50 | This limit includes the default application and organization roles. |
| Maximum organizations per application | 200 | |
| Maximum organization hierarchy depth | 5 | |
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
Title: Integrate Azure IoT Central with CI/CD
description: Describes how to integrate IoT Central into a pipeline created with Azure Pipelines to enable continuous integration and continuous delivery. Previously updated : 05/27/2022 Last updated : 06/12/2023

# Integrate IoT Central with Azure Pipelines for continuous integration and continuous delivery
-## Overview
+Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. This article shows you how to automate the build, test, and deployment of an IoT Central application configuration. This automation enables development teams to deliver reliable releases more frequently.
-Continuous integration and continuous delivery (CI/CD) refers to the process of developing and delivering software in short, frequent cycles using automation pipelines. This article shows you how to automate the build, test, and deployment of IoT Central application configuration, to enable development teams to deliver reliable releases more frequently.
-
-Continuous integration starts with a commit of your code to a branch in a source code repository. Each commit is merged with commits from other developers to ensure that no conflicts are introduced. Changes are further validated by creating a build and running automated tests against that build. This process ultimately results in an artifact, or deployment bundle, to deploy to a target environment, in this case an Azure IoT Central application.
+Continuous integration starts with a commit of your code to a branch in a source code repository. Each commit is merged with commits from other developers to ensure that no conflicts are introduced. Changes are further validated by creating a build and running automated tests against that build. This process ultimately results in an artifact, or deployment bundle, to deploy to a target environment. In this case, the target is an Azure IoT Central application.
Just as IoT Central is a part of your larger IoT solution, IoT Central is a part of your CI/CD pipeline. Your CI/CD pipeline should deploy your entire IoT solution and all configurations to each environment from development through to production. IoT Central is an *application platform as a service* that has different deployment requirements from *platform as a service* components. For IoT Central, you deploy configurations and device templates. These configurations and device templates are managed and integrated into your release pipeline by using APIs.
By using the Azure IoT Central REST API, you can integrate IoT Central app confi
This guide walks you through the creation of a new pipeline that updates an IoT Central application based on configuration files managed in GitHub. This guide has specific instructions for integrating with [Azure Pipelines](/azure/devops/pipelines/?view=azure-devops&preserve-view=true), but could be adapted to include IoT Central in any release pipeline built using tools such as Tekton, Jenkins, GitLab, or GitHub Actions.
-In this guide, you create a pipeline that only applies an IoT Central configuration to a single instance of an IoT Central application. You should integrate the steps into a larger pipeline that deploys your entire solution and promotes it from *development* to *QA* to *pre-production* to *production*, performing all necessary testing along the way.
+In this guide, you create a pipeline that only applies an IoT Central configuration to a single instance of an IoT Central application. You should integrate the steps into a larger pipeline that deploys your entire solution and promotes it from *development* to *QA* to *preproduction* to *production*, performing all necessary testing along the way.
The scripts currently don't transfer the following settings between IoT Central instances: dashboards, views, custom settings in device templates, pricing plan, UX customizations, application image, rules, scheduled jobs, saved jobs, and enrollment groups.
To get started, fork the IoT Central CI/CD GitHub repository and then clone your
## Create a service principal
-While Azure Pipelines can integrate directly with a key vault, your pipeline needs a service principal for some of the dynamic key vault interactions such as fetching secrets for data export destinations.
+While Azure Pipelines can integrate directly with a key vault, a pipeline needs a service principal for some dynamic key vault interactions such as fetching secrets for data export destinations.
To create a service principal scoped to your subscription:
Now that you have a working pipeline you can manage your IoT Central instances d
## Next steps
-Now that know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
+Now that you know how to integrate IoT Central configurations into your CI/CD pipelines, a suggested next step is to learn how to [Manage and monitor IoT Central from the Azure portal](howto-manage-iot-central-from-portal.md).
iot-central Howto Upload File Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-upload-file-rest-api.md
Title: Configure uploads with the REST API in Azure IoT Central
description: How to use the IoT Central REST API to add an upload storage account configuration in an application Previously updated : 05/12/2022 Last updated : 06/12/2023
To test the file upload, install the following prerequisites in your local devel
## Add a file upload storage account configuration
+To add a file upload storage account configuration:
+
### Create a storage account

To use the Azure Storage REST API, you need a bearer token for the `management.azure.com` resource. To get a bearer token, you can use the Azure CLI:
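The CLI call typically looks like the following sketch (run `az login` first; the `--query`/`--output` flags simply extract the raw token from the JSON response):

```cmd/sh
az account get-access-token --resource https://management.azure.com/ --query accessToken --output tsv
```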
npm build
### Create the device template and import the model
-To test the file upload you run a sample device application. Create a device template for the sample device to use.
+To test the file upload, you run a sample device application. Create a device template for the sample device to use.
1. Open your application in IoT Central UI.
To add a device to your Azure IoT Central application:
1. Choose **Devices** on the left pane.
-1. Select the *File Upload Device Sample* device template which you created earlier.
+1. Select the *File Upload Device Sample* device template that you created earlier.
1. Select + **New** and select **Create**.
-1. Select the device which you created and Select **Connect**
+1. Select the device that you created and select **Connect**.
-Copy the values for `ID scope`, `Device ID`, and `Primary key`. You'll use these values in the device sample code.
+Copy the values for `ID scope`, `Device ID`, and `Primary key`. You use these values in the device sample code.
### Run the sample code
-Open the git repository you downloaded in VS code. Create an ".env" file at the root of your project and add the values you copied above. The file should look like the sample below with the values you made a note of previously.
+Open the git repository you downloaded in VS Code. Create an ".env" file at the root of your project and add the values you copied previously. The file should look like the following sample:
```cmd/sh
scopeId=<YOUR_SCOPE_ID>
deviceKey=<YOUR_PRIMARY_KEY>
modelId=dtmi:IoTCentral:IotCentralFileUploadDevice;1
```
-Open the git repository you downloaded in VS code. Press F5 to run/debug the sample. In your terminal window you see that the device is registered and is connected to IoT Central:
+Open the git repository you downloaded in VS Code. Press F5 to run/debug the sample. In your terminal window you see that the device is registered and is connected to IoT Central:
```cmd/sh
Starting IoT Central device...
Sending telemetry: {
```
-The sample project comes with a sample file named *datafile.json*. This is the file that's uploaded when you use the **Upload File** command in your IoT Central application.
+The sample project comes with a sample file named *datafile.json*. This file is uploaded when you use the **Upload File** command in your IoT Central application.
-To test this open your application and select the device you created. Select the **Command** tab and you see a button named **Run**. When you select that button the IoT Central app calls a direct method on your device to upload the file. You can see this direct method in the sample code in the /device.ts file. The method is named *uploadFileCommand*.
+To test the upload, open your application and select the device you created. Select the **Command** tab and you see a button named **Run**. When you select that button, the IoT Central app calls a direct method on your device to upload the file. You can see this direct method in the sample code in the /device.ts file. The method is named *uploadFileCommand*.
Select the **Raw data** tab to verify the file upload status.
iot-central Iot Central Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-requests.md
Title: Customer data request features in Azure IoT Central
description: This article describes identifying, deleting, and exporting customer data in an Azure IoT Central application. Previously updated : 06/03/2022 Last updated : 06/12/2023

# Azure IoT Central customer data request features
iot-central Iot Central Customer Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-customer-data-residency.md
Title: Customer data residency in Azure IoT Central
-description: This article describes customer data residency in Azure IoT Central applications and how it relates to Azure geopgraphies.
+description: This article describes customer data residency in Azure IoT Central applications and how it relates to Azure geographies.
Previously updated : 06/07/2022 Last updated : 06/12/2023
# Azure IoT Central customer data residency
-IoT Central does not store customer data outside of the customer specified geography except for the following scenarios:
+IoT Central doesn't store customer data outside of the customer specified geography except for the following scenarios:
- When a new user is added to an existing IoT Central application, the user's email ID may be stored outside of the geography until the invited user accesses the application for the first time.
- IoT Central dashboard map tiles use [Azure Maps](../../azure-maps/about-azure-maps.md). When you add a map tile to an existing IoT Central application, the location data may be processed or stored in accordance with the geolocation rules of the Azure Maps service.
-- IoT Central uses the Device Provisioning Service (DPS) internally. DPS uses the same device provisioning endpoint for all provisioning service instances, and performs traffic load balancing to the nearest available service endpoint. As a result, authentication secrets may be temporarily transferred outside of the region where the DPS instance was initially created. However, once the device is connected, the device data will flow directly to the original region of the DPS instance.
+- IoT Central uses the Device Provisioning Service (DPS) internally. DPS uses the same device provisioning endpoint for all provisioning service instances, and performs traffic load balancing to the nearest available service endpoint. As a result, authentication secrets may be temporarily transferred outside of the region where the DPS instance was initially created. However, once the device is connected, the device data flows directly to the original region of the DPS instance.
iot-central Iot Central Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/iot-central-supported-browsers.md
Title: Supported browsers for Azure IoT Central
description: Azure IoT Central can be accessed across modern desktops, tablets and browsers. This article outlines the list of supported browsers. Previously updated : 06/08/2022 Last updated : 06/12/2023

# This article applies to operators, builders, and administrators.
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
Title: Take a tour of the Azure IoT Central API
description: Become familiar with the key areas of the Azure IoT Central REST API. Use the API to create, manage, and use your IoT solution from client applications. Previously updated : 06/10/2022 Last updated : 06/12/2023
The REST API operations are grouped into the:
## Data plane operations
-Version 2022-05-31 of the data plane API lets you manage the following resources in your IoT Central application:
+Version 2022-07-31 of the data plane API lets you manage the following resources in your IoT Central application:
- API tokens
- Device groups
- Device templates
- Devices
+- Enrollment groups
- File uploads
+- Jobs
- Organizations
- Roles
+- Scheduled jobs
- Users
-The preview devices API also lets you [query telemetry and property values from your devices](howto-query-with-rest-api.md), [manage jobs](howto-manage-jobs-with-rest-api.md), and [manage data exports](howto-manage-data-export-with-rest-api.md).
+The preview devices API also lets you [manage dashboards](howto-manage-dashboards-with-rest-api.md), [manage deployment manifests](howto-manage-deployment-manifests-with-rest-api.md), and [manage data exports](howto-manage-data-export-with-rest-api.md).
To get started with the data plane APIs, see [Tutorial: Use the REST API to manage an Azure IoT Central application](tutorial-use-rest-api.md).
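For orientation, every data plane resource listed above is addressed through the same URL shape with the `api-version` query parameter. This illustrative Python sketch only constructs a request to list devices (the subdomain and token are placeholders, and no HTTP call is made so the sketch stays self-contained):

```python
# Build a data plane request to list devices with API version 2022-07-31.
# "myapp" and the Authorization value are placeholders for illustration.

def devices_url(subdomain, version):
    """Return the data plane URL for the devices collection."""
    return f"https://{subdomain}.azureiotcentral.com/api/devices?api-version={version}"

url = devices_url("myapp", "2022-07-31")
headers = {"Authorization": "Bearer <token>"}  # API token or AAD bearer token
print(url)  # https://myapp.azureiotcentral.com/api/devices?api-version=2022-07-31
```

The same pattern applies to the other collections (for example `/api/deviceTemplates` or `/api/users`) by swapping the path segment.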
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
Title: Azure IoT Central data integration guide
description: This guide describes how to integrate your IoT Central application with other services to extend its capabilities. Previously updated : 06/03/2022 Last updated : 06/12/2023
A typical IoT solution:
- Extracts business value from your device data.
- Is composed of multiple services and applications.

When you use IoT Central to create an IoT solution, tasks include:
For example, you can:
- Enrich the data streams with custom values and property values from the device. - [Transform the data](howto-transform-data-internally.md) streams to modify their shape and content.
-Currently, IoT Central export data to:
+Currently, IoT Central can export data to:
- [Azure Data Explorer](howto-export-to-azure-data-explorer.md)
- [Blob Storage](howto-export-to-blob-storage.md)
IoT Central provides a rich platform to help you extract business value from you
Built-in features of IoT Central you can use to extract business value include:

-- Configure dashboards and views:
+- Dashboards and views:
An IoT Central application can have one or more dashboards that operators use to view and interact with the application. You can customize the default dashboard and create specialized dashboards:
Built-in features of IoT Central you can use to extract business value include:
- When a device connects to an IoT Central application, the device is assigned to a device template for the device type. A device template has customizable views that an operator uses to manage individual devices. You can create and customize the available views for each device type. To learn more, see [Add views](howto-set-up-template.md#views).

-- Use built-in rules and analytics:
+- Built-in rules and analytics:
You can add rules to an IoT Central application that run customizable actions. Rules evaluate conditions, based on data coming from a device, to determine when to run an action. To learn more about rules, see:
Scenarios that process IoT data outside of IoT Central to extract business value
Use IoT data to calculate common business metrics such as *overall equipment effectiveness* (OEE) and *overall process effectiveness* (OPE). You can also use IoT data to enrich your existing AI and ML assets. For example, IoT Central can help to capture the data you need to build, train, and deploy your models.
- Use the IoT Central continuous data export feature to publish captured IoT data into an Azure data lake. Then use a connected to Azure Databricks workspace to compute OEE and OPE. Pipe the same data to Azure ML or Azure Synapse to use their machine learning capabilities.
 Use the IoT Central continuous data export feature to publish captured IoT data into an Azure data lake. Then use a connected Azure Databricks workspace to compute OEE and OPE. Pipe the same data to Azure Machine Learning or Azure Synapse to use their machine learning capabilities.
- Streaming computation, monitoring, and diagnostics
Scenarios that process IoT data outside of IoT Central to extract business value
- Analyze and visualize IoT data alongside business data
- IoT Central provides feature-rich dashboards and visualizations. However, business-specific reports may require you to merge IoT data with existing business data sourced from external systems. Use the IoT Central integration features to extract IoT data from IoT Central. Then merge the IoT data with existing business data to deliver a centralized solution for analyzing and visualizing you business processes.
+ IoT Central provides feature-rich dashboards and visualizations. However, business-specific reports may require you to merge IoT data with existing business data sourced from external systems. Use the IoT Central integration features to extract IoT data from IoT Central. Then merge the IoT data with existing business data to deliver a centralized solution for analyzing and visualizing your business processes.
For example, use the IoT Central continuous data export feature to continuously ingest your IoT data into an Azure Synapse store. Then use Azure Data Factory to bring data from external systems into the Azure Synapse store. Use the Azure Synapse store with Power BI to generate your business reports.
iot-central Overview Iot Central Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-tour.md
Title: Take a tour of the Azure IoT Central UI
description: Become familiar with the key areas of the Azure IoT Central UI that you use to create, manage, and use your IoT solution. Previously updated : 06/10/2022 Last updated : 06/12/2023

# Take a tour of the Azure IoT Central UI
This article introduces you to Azure IoT Central UI. You can use the UI to creat
The [IoT Central homepage](https://apps.azureiotcentral.com/) page is the place to learn more about the latest news and features available on IoT Central, create new applications, and see and launch your existing applications.

### Create an application

In the **Build** section you can browse the list of industry-relevant IoT Central templates, or start from scratch using a Custom application template. To learn more, see the [Create an Azure IoT Central application](quick-deploy-iot-central.md) quickstart.

### Launch your application
-You launch your IoT Central application by navigating to the URL you chose during app creation. You can also see a list of all the applications you have access to in the [IoT Central app manager](https://apps.azureiotcentral.com/myapps).
+You launch your IoT Central application by navigating to the URL you chose during app creation. You can see a list of all the applications you have access to in the [IoT Central app manager](https://apps.azureiotcentral.com/myapps).
## Navigate your application
Once you're inside your IoT application, use the left pane to access various fea
**Edge manifests** lets you import and manage deployment manifests for the IoT Edge devices that connect to your application.
- **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices.
+ **Data explorer** exposes rich capabilities to analyze historical trends and correlate various telemetry types from your devices.
**Dashboards** displays all application and personal dashboards.
Once you're inside your IoT application, use the left pane to access various fea
The top menu appears on every page:

* To search for devices, enter a **Search** value.
* To change the UI language or theme, choose the **Settings** icon. Learn more about [managing your application preferences](howto-manage-preferences.md).
You can choose between a light theme and a dark theme for the UI:
### Devices

This page shows the devices in your IoT Central application grouped by _device template_.
This page shows the devices in your IoT Central application grouped by _device t
### Device groups

This page lets you create and view device groups in your IoT Central application. You can use device groups to do bulk operations in your application or to analyze data. To learn more, see the [Use device groups in your Azure IoT Central application](tutorial-use-device-groups.md) article.

### Device templates

The device templates page is where you can view and create device templates in the application. To learn more, see the [Define a new device type in your Azure IoT Central application](howto-set-up-template.md) tutorial.

### Edge manifests

The edge manifests page is where you can import and manage IoT Edge deployment manifests in the application. To learn more, see [Connect Azure IoT Edge devices to an Azure IoT Central application](concepts-iot-edge.md).

### Data Explorer
-Data explorer exposes rich capabilities to analyze historical trends and correlate various telemetries from your devices. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
+Data explorer exposes rich capabilities to analyze historical trends and correlate various telemetry types from your devices. To learn more, see the [Create analytics for your Azure IoT Central application](howto-create-analytics.md) article.
### Dashboards

* Personal dashboards can also be created to monitor what you care about. To learn more, see the [Create Azure IoT Central personal dashboards](howto-manage-dashboards.md) how-to article.

### Jobs

This page lets you view and create jobs that can be used for bulk device management operations on your devices. You can update device properties and settings, and execute commands against device groups. To learn more, see the [Run a job](howto-manage-devices-in-bulk.md) article.

### Rules

This page lets you view and create rules based on device data. When a rule fires, it can trigger one or more actions such as sending an email or invoking a webhook. To learn more, see the [Configuring rules](tutorial-create-telemetry-rules.md) tutorial.

### Data export

Data export enables you to set up streams of data to external systems. To learn more, see the [Export your data in Azure IoT Central](./howto-export-to-blob-storage.md) article.

### Audit logs

Audit logs enable you to view a list of recent changes made in your IoT Central application. To learn more, see the [Use audit logs to track activity in your IoT Central application](howto-use-audit-logs.md) article.

### Permissions

This page lets you define a hierarchy that you use to manage which users can see which devices in your IoT Central application. To learn more, see [Manage IoT Central organizations](howto-create-organizations.md).

### Application

The application page allows you to configure your IoT Central application. Here you can change your application name, URL, theming, manage users and roles, create API tokens, and export your application. To learn more, see the [Administer your Azure IoT Central application](howto-administer.md) article.

### Customization

The customization page allows you to customize your IoT Central application. Here you can change your masthead logo, browser icon, and browser colors. To learn more, see the [How to customize the Azure IoT Central UI](howto-customize-ui.md) article.
iot-central Troubleshoot Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/troubleshoot-data-export.md
description: Troubleshoot data exports in IoT Central for issues such as managed
Previously updated : 06/10/2022 Last updated : 06/12/2023
This document helps you find out why the data your IoT Central application isn't
## Managed identity issues
-You're using a managed identity to authorize the connection to an export destination. Data is not arriving at the export destination.
+You're using a managed identity to authorize the connection to an export destination. Data isn't arriving at the export destination.
Before you configure or enable the export destination, make sure that you complete the following steps:

-- Enable the managed identity for the the application.
+- Enable the managed identity for the application.
- Configure the permissions for the managed identity.
- Configure any virtual networks, private endpoints, and firewall policies.
iot-central Tutorial Smart Meter App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-smart-meter-app.md
Title: Tutorial - Azure IoT smart-meter monitoring
description: This tutorial shows you how to deploy and use an application template for monitoring smart meters in Azure IoT Central. Previously updated : 06/14/2022 Last updated : 06/12/2023

# Tutorial: Deploy and walk through an application template for monitoring smart meters
-Smart meters enable not only automated billing, but also advanced metering use cases like real-time readings and bidirectional communication.
+Smart meters not only enable automated billing, but also support advanced metering use cases such as real-time readings and bidirectional communication.
-An application template enables utilities and partners to monitor the status and data of smart meters, along with defining alarms and notifications. The template provides sample commands, such as disconnecting a meter and updating software. You can set up the meter data to egress to other business applications, and to develop custom solutions.
+An application template enables utilities and partners to monitor smart meter status and telemetry, and to define alarms and notifications. The template provides sample commands, such as disconnecting a meter and updating software. You can export the meter data to other business applications and use the data to develop custom solutions.
-The application's key functionalities include:
+The application's key functionality includes:
- Sample device model for meters
- Meter info and live status
In this tutorial, you learn how to:
- Create an application for monitoring smart meters.
- Walk through the application.
-- Clean up resources.

## Application architecture
The architecture of the application consists of the following components. Some s
### Smart meters and connectivity
-A smart meter is one of the most important devices among all the energy assets. It records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response.
+A smart meter records and communicates energy consumption data to utilities for monitoring and other use cases, such as billing and demand response.
Typically, a meter uses a gateway or bridge to connect to an Azure IoT Central application. To learn more about bridges, see [Use the Azure IoT Central device bridge to connect other IoT clouds to Azure IoT Central](../core/howto-build-iotc-device-bridge.md). ### Azure IoT Central platform
-When you build an Internet of Things (IoT) solution, Azure IoT Central simplifies the build process and helps reduce the burden and costs of IoT management, operations, and development. With Azure IoT Central, you can easily connect, monitor, and manage your IoT assets at scale.
+When you build an Internet of Things (IoT) solution, Azure IoT Central simplifies the build process and helps reduce the burden and costs of IoT management, operations, and development. With Azure IoT Central, you can easily connect, monitor, and manage your IoT assets at scale.
-After you connect your smart meters to Azure IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the Azure IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+After you connect your smart meters to Azure IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the Azure IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualizations.
-### Extensibility options to build with Azure IoT Central
+### IoT Central extensibility options
-The Azure IoT Central platform provides two extensibility options: Continuous Data Export and APIs. Customers and partners can choose between these options to customize their solutions for their specific needs.
+The Azure IoT Central platform provides two extensibility options: data export and APIs. Customers and partners can choose between these options to customize their solutions for their specific needs.
-For example, a partner might configure Continuous Data Export with Azure Data Lake Storage. That partner can then use Data Lake Storage for long-term data retention and other scenarios for cold path storage, such batch processing, auditing, and reporting.
+For example, a partner might configure data export to continuously send data to Azure Data Lake Storage. That partner can then use Data Lake Storage for long-term data retention and other cold path storage scenarios, such as batch processing, auditing, and reporting.
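To illustrate the cold path, here's a minimal batch-processing sketch. It assumes the exported telemetry lands in Data Lake Storage as JSON Lines records that carry a `deviceId` and an `energy` reading; the real record shape depends on how you configure the data export, so treat these field names as hypothetical:

```python
import json
from collections import defaultdict


def total_energy_by_device(lines):
    """Aggregate exported telemetry (JSON Lines) into per-device energy totals.

    Assumes each exported record has a 'deviceId' and a 'telemetry.energy'
    reading -- illustrative names, not the export's guaranteed schema.
    """
    totals = defaultdict(float)
    for line in lines:
        record = json.loads(line)
        totals[record["deviceId"]] += record["telemetry"]["energy"]
    return dict(totals)


# Example batch over two exported records for a billing report.
sample = [
    '{"deviceId": "meter-01", "telemetry": {"energy": 1.5}}',
    '{"deviceId": "meter-01", "telemetry": {"energy": 2.0}}',
]
print(total_energy_by_device(sample))
```

A real cold-path job would read the exported files from Data Lake Storage (for example, on a nightly schedule) instead of an in-memory list.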
## Prerequisites
To complete this tutorial, you need an active Azure subscription. If you don't h
## Create an application for monitoring smart meters
-1. Go to the [Azure IoT Central build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account.
+1. Go to the [Azure IoT Central build](https://aka.ms/iotcentral) site. Then sign in with a Microsoft personal, work, or school account.
1. Select **Build** from the left menu, and then select the **Energy** tab.
After you deploy the application template, it comes with a sample smart meter, a
Adatum is a fictitious energy company that monitors and manages smart meters. The dashboard for monitoring smart meters shows properties, data, and sample commands for meters. The dashboard enables operators and support teams to proactively perform the following activities before issues become support incidents:
-* Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map.
-* Proactively check the meter network and connection status.
-* Monitor minimum and maximum voltage readings for network health.
-* Review the energy, power, and voltage trends to catch any anomalous patterns.
-* Track the total energy consumption for planning and billing purposes.
-* Perform command and control operations, such as reconnecting a meter and updating a firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
+- Review the latest meter info and its installed [location](../core/howto-use-location-data.md) on the map.
+- Proactively check the meter network and connection status.
+- Monitor minimum and maximum voltage readings for network health.
+- Review the energy, power, and voltage trends to catch any anomalous patterns.
+- Track the total energy consumption for planning and billing purposes.
+- Perform command and control operations, such as reconnecting a meter and updating a firmware version. In the template, the command buttons show the possible functionalities and don't send real commands.
:::image type="content" source="media/tutorial-iot-central-smart-meter/smart-meter-dashboard.png" alt-text="Screenshot that shows the dashboard for monitoring smart meters." lightbox="media/tutorial-iot-central-smart-meter/smart-meter-dashboard.png":::
iot-central Tutorial Solar Panel App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/energy/tutorial-solar-panel-app.md
Title: Tutorial - Azure IoT solar panel monitoring
description: This tutorial shows you how to deploy and use the solar panel monitoring application template for IoT Central. Previously updated : 06/14/2022 Last updated : 06/12/2023
# Tutorial: Deploy and walk through the solar panel monitoring application template
-The solar panel monitoring app enables utilities and partners to monitor solar panels, such as their energy generation and connection status in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware and other properties. The solar panel data can be set up to egress to other business applications and to develop custom solutions.
+The solar panel monitoring app enables utilities and partners to monitor solar panels, such as their energy generation and connection status in near real time. It can send notifications based on defined threshold criteria. It provides sample commands, such as update firmware. You can export the solar panel data to other business applications.
Key application functionality:
This architecture consists of the following components. Some applications may no
### Solar panels and connectivity
-Solar panels are one of the significant sources of renewable energy. Typically, a solar panel uses a gateway to connect to an IoT Central application. You might need to build IoT Central device bridge to connect devices, which can't be connected directly. The IoT Central device bridge is an open-source solution and you can find the complete details [here](../core/howto-build-iotc-device-bridge.md).
+Solar panels are a source of renewable energy. Typically, a solar panel uses a gateway to connect to an IoT Central application. You might need to build an IoT Central device bridge to connect devices that can't connect directly. The [IoT Central device bridge](../core/howto-build-iotc-device-bridge.md) is an open-source solution.
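A bridge's main job is translating a vendor-specific payload into the flat `device`/`measurements` shape the IoT Central device bridge forwards. Here's a minimal translation sketch; the vendor payload fields (`serial`, `watts`, `temp_c`) and the telemetry names are hypothetical examples, not any panel vendor's real contract:

```python
def to_central_message(vendor_payload):
    """Translate a (hypothetical) solar inverter payload into the
    device-bridge request shape: a device identity plus measurements.

    Field names on the vendor side are illustrative only.
    """
    return {
        "device": {"deviceId": vendor_payload["serial"].lower()},
        "measurements": {
            "power": vendor_payload["watts"],
            "panelTemperature": vendor_payload["temp_c"],
        },
    }


# Example: one reading from a hypothetical panel gateway.
message = to_central_message({"serial": "SP-0042", "watts": 310, "temp_c": 41.2})
```

In a deployed bridge, this translation runs inside the function that receives the vendor cloud's webhook and forwards the result to IoT Central.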
### IoT Central platform
-When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your Internet of Things (IoT) assets at scale. After you connect your solar panels to IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
+When you build an IoT solution, Azure IoT Central simplifies the build process and helps to reduce the burden and costs of IoT management, operations, and development. With IoT Central, you can easily connect, monitor, and manage your IoT assets at scale. After you connect your solar panels to IoT Central, the application template uses built-in features such as device models, commands, and dashboards. The application template also uses the IoT Central storage for warm path scenarios such as near real-time meter data monitoring, analytics, rules, and visualization.
### Extensibility options to build with IoT Central
-The IoT Central platform provides two extensibility options: Continuous Data Export (CDE) and APIs. The customers and partners can choose between these options based to customize their solutions for specific needs. For example, one of our partners configured CDE with Azure Data Lake Storage (ADLS). They're using ADLS for long-term data retention and other cold path storage scenarios, such batch processing, auditing, and reporting purposes.
+The IoT Central platform provides two extensibility options: data export and APIs. Customers and partners can choose between these options to customize their solutions for specific needs. For example, use data export to send telemetry to Azure Data Lake Storage (ADLS). Use ADLS for long-term data retention and other cold path storage scenarios, such as batch processing, auditing, and reporting.
In this tutorial, you learn how to:
The following sections walk you through the key features of the application:
### Dashboard
-After you deploy the application template, you'll want to explore the app a bit more. Notice that it comes with sample smart meter device, device model, and dashboard.
+After you deploy the application template, you can explore the application. The application comes with a sample smart meter device, a device template, and a dashboard.
-Adatum is a fictitious energy company that monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. This dashboard allows you or your support team to perform the following activities proactively, before any problems require additional support:
+Adatum is a fictitious energy company that monitors and manages solar panels. On the solar panel monitoring dashboard, you see solar panel properties, data, and sample commands. This dashboard allows you or your support team to complete the following tasks, before any issues require extra support resources:
-* Review the latest panel info and its installed [location](../core/howto-use-location-data.md) on the map.
-* Check the panel status and connection status.
-* Review the energy generation and temperature trends to catch any anomalous patterns.
-* Track the total energy generation for planning and billing purposes.
-* Activate a panel and update the firmware version, if necessary. In the template, the command buttons show the possible functionalities, and don't send real commands.
+- Review the latest panel info and its installed location on the map.
+- Check the panel status and connection status.
+- Review the energy generation and temperature trends to catch any anomalous patterns.
+- Track the total energy generation for planning and billing purposes.
+- Activate a panel and update the firmware version, if necessary. In the template, the command buttons show the possible functionalities, and don't send real commands.
:::image type="content" source="media/tutorial-iot-central-solar-panel/solar-panel-dashboard.png" alt-text="Screenshot of Solar Panel Monitoring Template Dashboard.":::
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
Title: Tutorial - Deploy an Azure IoT in-store analytics app
description: This tutorial shows how to create and deploy an in-store analytics retail application in IoT Central. Previously updated : 06/14/2022 Last updated : 06/12/2023
The application template comes with a set of device templates and uses a set of
:::image type="content" source="media/tutorial-in-store-analytics-create-app/store-analytics-architecture-frame.png" alt-text="Diagram of the in-store analytics application architecture." border="false":::
-As shown in the preceding application architecture diagram, you can use the application template to:
+As shown in the previous application architecture diagram, you can use the application template to:
* **1**. Connect various IoT sensors to an IoT Central application instance.
- An IoT solution starts with a set of sensors that capture meaningful signals from within a retail store environment. The sensors are represented by the various icons at the far left of the architecture diagram.
+ An IoT solution starts with a set of sensors that capture meaningful signals from within a retail store environment. The various icons at the far left of the architecture diagram represent the sensors.
* **2**. Monitor and manage the health of the sensor network and any gateway devices in the environment.
As shown in the preceding application architecture diagram, you can use the appl
The IoT data can be used to power different kinds of business applications deployed within a retail environment. A retail store manager or staff member can use these applications to visualize business insights and take meaningful action in real time. To learn how to build a real-time Power BI dashboard for your retail team, see the [tutorial](./tutorial-in-store-analytics-customize-dashboard.md).
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
> - Use the Azure IoT Central *In-store analytics - checkout* template to create a retail store application
> - Customize the application settings
An active Azure subscription. If you don't have an Azure subscription, [create a
## Create an in-store analytics application
-Create the application by doing the following:
+Create the application by completing the following steps:
1. Sign in to the [Azure IoT Central](https://aka.ms/iotcentral) build site with a Microsoft personal, work, or school account.
The following sections describe the key features of the application.
### Customize the application settings
-You can change several settings to customize the user experience in your application. In this section, you select a predefined application theme. Optionally, you'll learn how to create a custom theme and update the application image. A custom theme enables you to set the application browser colors, the browser icon, and the application logo that appears in the masthead.
+You can change several settings to customize the user experience in your application. In this section, you select a predefined application theme. You can also learn how to create a custom theme and update the application image. A custom theme enables you to set the application browser colors, the browser icon, and the application logo that appears in the masthead.
To select a predefined application theme:
To select a predefined application theme:
3. Select **Save**.
-Alternatively, you can create a custom theme. If you want to use a set of sample images to customize the application and complete the tutorial, download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/retail).
+To create a custom theme, you can use a set of sample images to customize the application and complete the tutorial. Download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/retail).
To create a custom theme:
To create a custom theme:
1. Select **Change**, and then select an image to upload as the masthead logo. Optionally, enter a value for **Logo alt text**.
-1. Select **Change**, and then select a **Browser icon** image that will appear on browser tabs.
+1. Select **Change**, and then select a **Browser icon** image to appear on browser tabs.
1. Optionally, replace the default **Browser colors** by adding HTML hexadecimal color codes:

   a. For **Header**, enter **#008575**.
To update the application image:
The image appears on the application tile on the **My Apps** page of the [Azure IoT Central application manager](https://aka.ms/iotcentral) site.

### Create the device templates
-By creating device templates, you and the application operators can configure and manage devices. You can build a custom template, import an existing template file, or import a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application.
+By creating device templates, you and the application operators can configure and manage devices. You can build a custom template, import an existing template file, or import a template from the Azure IoT device catalog. After you create and customize a device template, use it to connect real devices to your application.
Optionally, you can use a device template to generate simulated devices for testing.
-The *In-store analytics - checkout* application template has device templates for several devices, including templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the *In-store analytics - checkout* application template.
+The *In-store analytics - checkout* application template has device templates for several devices, including templates for two of the three devices you use in the application. The RuuviTag device template isn't included in the *In-store analytics - checkout* application template.
In this section, you add a device template for RuuviTag sensors to your application. To do so:
In this section, you add a device template for RuuviTag sensors to your applicat
1. Select **Next: Review**.
-1. Select **Create**.
+1. Select **Create**.
The application adds the RuuviTag device template.
-1. On the left pane, select **Device templates**.
+1. On the left pane, select **Device templates**.
The page displays all the device templates in the application template and the RuuviTag device template you just added.
In this section, you add a device template for RuuviTag sensors to your applicat
### Customize the device templates
-You can customize the device templates in your application in three ways:
+You can customize the device templates in your application in three ways:
* Customize the native built-in interfaces in your devices by changing the device capabilities.
For the **RelativeHumidity** telemetry type, make the following changes:
1. Update the **Display Name** value from **RelativeHumidity** to a custom value such as **Humidity**.
-1. Change the **Semantic Type** option from **Relative humidity** to **Humidity**.
+1. Change the **Semantic Type** option from **Relative humidity** to **Humidity**.
Optionally, set schema values for the humidity telemetry type in the expanded schema view. By setting schema values, you can create detailed validation requirements for the data that your sensors track. For example, you could set minimum and maximum operating range values for a specified interface.
For the **RelativeHumidity** telemetry type, make the following changes:
Specify the following values to create a custom property to store the location of each device:
-1. For **Display Name**, enter the **Location** value.
+1. For **Display Name**, enter the **Location** value.
   This value, which is a friendly name for the property, is automatically copied to the **Name**. You can use the copied value or change it.

1. For **Cloud Property**, select **Capability Type**.
-1. In the **Schema** dropdown list, select **String**.
+1. In the **Schema** dropdown list, select **String**.
By specifying a string type, you can associate a location name string with any device that's based on the template. For instance, you could associate an area in a store with each device.
For this tutorial, you use the following set of real and simulated devices to bu
- A real Rigado C500 gateway.
- Two real RuuviTag sensors.
-- A simulated *Occupancy* sensor. This simulated sensor is included in the application template, so you don't need to create it.
+- A simulated *Occupancy* sensor. This simulated sensor is included in the application template, so you don't need to create it.
> [!NOTE]
> If you don't have real devices, you can still complete this tutorial by creating simulated RuuviTag sensors. The following directions include steps to create a simulated RuuviTag. You don't need to create a simulated gateway.
Complete the steps in the following two articles to connect a real Rigado gatewa
### Add rules and actions
-As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met.
+As part of using sensors in your Azure IoT Central application to monitor conditions, you can create rules to run actions when certain conditions are met.
A rule is associated with a device template and one or more devices, and it contains conditions that must be met based on device telemetry or events. A rule also has one or more associated actions. The actions might include sending email notifications, or triggering a webhook action to send data to other services. The *In-store analytics - checkout* application template includes some predefined rules for the devices in the application.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
Previously updated : 06/14/2022 Last updated : 06/12/2023

# Tutorial: Customize the dashboard and manage devices in Azure IoT Central

In this tutorial, you learn how to customize the dashboard in your Azure IoT Central in-store analytics application. Application operators can use the customized dashboard to run the application and manage the attached devices.
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"]
> * Customize image tiles on the dashboard
> * Arrange tiles to modify the layout
Before you begin, complete the following tutorial:
## Change the dashboard name
-After you've created your condition-monitoring application, you can edit its default dashboard. You can also create additional dashboards.
+After you've created your condition-monitoring application, you can edit its default dashboard. You can also create more dashboards.
The first step in customizing the application dashboard is to change the name:
The first step in customizing the application dashboard is to change the name:
## Customize image tiles on the dashboard
-An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you can drag, drop, and resize tiles to customize the dashboard layout.
+An Azure IoT Central application dashboard consists of one or more tiles. A tile is a rectangular container for displaying content on a dashboard. You associate various types of content with tiles, and you can drag, drop, and resize tiles to customize the dashboard layout.
There are several types of tiles for displaying content:

* **Image** tiles contain images, and you can add a URL that lets you select the image.
To customize the image tile that displays a map of the sensor zones in the store
:::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png" alt-text="Screenshot that shows the in-store analytics application dashboard store map tile." lightbox="media/tutorial-in-store-analytics-customize-dashboard/store-map-save.png":::
-The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli.
-
-In this tutorial, you'll associate sensors with these zones to provide telemetry.
+The example Contoso store map shows four zones: two checkout zones, a zone for apparel and personal care, and a zone for groceries and deli.
+
+In this tutorial, you associate sensors with these zones to provide telemetry.
## Arrange tiles to modify the layout
To remove tiles that you don't plan to use in your application:
1. Select **Save**.

Removing unused tiles frees space on the edit page, and it simplifies the dashboard view for operators.
-After you've removed the unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles that you'll add later.
+After you've removed the unused tiles, rearrange the remaining tiles to create an organized layout. The new layout includes space for tiles that you add later.
To rearrange the remaining tiles:
To edit the **People traffic** tile to show telemetry for only two checkout zone
## Add command tiles to run commands
-Application operators also use the dashboard to manage devices by running commands. You can add command tiles to the dashboard that will execute predefined commands on a device. In this section, you add a command tile to enable operators to reboot the Rigado gateway.
+Application operators also use the dashboard to manage devices by running commands. You can add command tiles to the dashboard that execute predefined commands on a device. In this section, you add a command tile to enable operators to reboot the Rigado gateway.
To add a command tile to reboot the gateway:
iot-central Tutorial In Store Analytics Export Data Visualize Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-export-data-visualize-insights.md
Title: Tutorial - Visualize data from Azure IoT Central
description: In this tutorial, learn how to export data from IoT Central, and visualize insights in a Power BI dashboard. Previously updated : 06/07/2022 Last updated : 06/12/2023
The data export may take a few minutes to start sending telemetry to your event
## Create the Power BI datasets
-Your Power BI dashboard will display data from your retail monitoring application. In this solution, you use Power BI streaming datasets as the data source for the Power BI dashboard. In this section, you define the schema of the streaming datasets so that the logic app can forward data from the event hub. The following steps show you how to create two streaming datasets for the environmental sensors and one streaming dataset for the occupancy sensor:
+Your Power BI dashboard displays data from your retail monitoring application. In this solution, you use Power BI streaming datasets as the data source for the Power BI dashboard. In this section, you define the schema of the streaming datasets so that the logic app can forward data from the event hub. The following steps show you how to create two streaming datasets for the environmental sensors and one streaming dataset for the occupancy sensor:
1. Sign in to your **Power BI** account.
1. Select **Workspaces**, and then select **Create a workspace**.
Your Power BI dashboard will display data from your retail monitoring applicatio
1. Select **Create** and then **Done**.
1. Create another streaming dataset called **Zone 2 sensor** with the same schema and settings as the **Zone 1 sensor** streaming dataset.
-You now have two streaming datasets. The logic app will route telemetry from the two environmental sensors connected to your **In-store analytics - checkout** application to these two datasets:
+You now have two streaming datasets. The logic app routes telemetry from the two environmental sensors connected to your **In-store analytics - checkout** application to these two datasets:
:::image type="content" source="media/tutorial-in-store-analytics-visualize-insights/dataset-1.png" alt-text="Screenshot that shows the zone one sensor dataset definition in Power B I.":::
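Whatever forwards the rows (here, the logic app) ultimately does a POST of a JSON array to the streaming dataset's push URL. The following sketch shows that request shape; the URL is a placeholder for the push URL you copy from the dataset's API Info pane, and the column names must match the streaming dataset schema you defined:

```python
import json
import urllib.request

# Placeholder -- copy the real push URL from the streaming dataset's API Info pane.
POWER_BI_PUSH_URL = "https://api.powerbi.com/beta/<workspace>/datasets/<dataset>/rows?key=<key>"


def build_push_request(push_url, rows):
    """Build the POST that appends rows to a Power BI streaming dataset.

    Each row's keys must match the column names defined in the streaming
    dataset schema (for example Timestamp, Humidity, Temperature).
    """
    body = json.dumps(rows).encode("utf-8")
    return urllib.request.Request(
        push_url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# To send for real: urllib.request.urlopen(build_push_request(POWER_BI_PUSH_URL, rows))
```

The logic app performs the equivalent request with its Power BI connector, so you don't write this code yourself; the sketch only clarifies what the connector sends.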
The following steps show you how to create the logic app in the Azure portal:
1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource** at the top left of the screen.
1. In **Search the Marketplace**, enter _Logic App_, and then press **Enter**.
1. On the **Logic App** page, select **Create**.
-1. On the **Logic App** create page:
+1. On the **Create** page:
* Enter a unique name for your logic app such as _yourname-retail-store-analysis_.
* Select the same **Subscription** you used to create your IoT Central application.
* Select the **retail-store-analysis** resource group.
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
Previously updated : 06/13/2022 Last updated : 06/12/2023
Last updated 06/13/2022
Global logistics spending is expected to reach $10.6 trillion in 2020. Transportation of goods accounts for most of this spending, and shipping providers are under intense competitive pressure and constraints.
-You can use IoT sensors to collect and monitor ambient conditions such as temperature, humidity, tilt, shock, light, and the location of a shipment. You can combine telemetry gathered from IoT sensors and devices with other data sources such as weather and traffic information in cloud-based business intelligence systems.
+You can use IoT sensors to collect and monitor ambient conditions such as temperature, humidity, tilt, shock, light, and the location of a shipment. In cloud-based business intelligence systems, you can combine telemetry gathered from sensors and devices with other data sources such as weather and traffic information.
The benefits of a connected logistics solution include:
Azure IoT Central is a solution development platform that simplifies IoT device
The IoT Central platform provides rich extensibility options through _data export and APIs (3)_. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred _line-of-business application (4,5)_.
-This tutorial shows you how to get started with the IoT Central *connected logistics* application template. You'll learn how to deploy and use the template.
+This tutorial shows you how to get started with the IoT Central *connected logistics* application template. You learn how to deploy and use the template.
In this tutorial, you learn how to:
Create the application using the following steps:
1. **Create app** opens the **New application** form. Enter the following details:
- * **Application name**: you can use default suggested name or enter your friendly application name.
- * **URL**: you can use suggested default URL or enter your friendly unique memorable URL.
- * **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources.
- * **Create**: Select create at the bottom of the page to deploy your application.
+ - **Application name**: you can use the default suggested name or enter a friendly application name.
+ - **URL**: you can use the suggested default URL or enter a friendly, unique, memorable URL.
+ - **Billing Info**: The directory, Azure subscription, and region details are required to provision the resources.
+ - **Create**: Select **Create** at the bottom of the page to deploy your application.
## Walk through the application
The following sections walk you through the key features of the application.
After you deploy the application, your default dashboard is a connected logistics operator focused portal. Northwind Trader is a fictitious logistics provider managing a cargo fleet at sea and on land. In this dashboard, you see two different gateways providing telemetry from shipments, along with associated commands, jobs, and actions.
-This dashboard is pre-configured to show the critical logistics device operations activity.
+This preconfigured dashboard shows the critical logistics device operations activity.
The dashboard enables two different gateway device management operations:
-* View the logistics routes for truck shipments and the [location](../core/howto-use-location-data.md) details of ocean shipments.
-* View the gateway status and other relevant information.
-* You can track the total number of gateways, active, and unknown tags.
-* You can do device management operations such as: update firmware, disable and enable sensors, update a sensor threshold, update telemetry intervals, and update device service contracts.
-* View device battery consumption.
+- View the logistics routes for truck shipments and the details of ocean shipments.
+- View the gateway status and other relevant information.
+- Track the total number of gateways, and of active and unknown tags.
+- Do device management operations such as updating firmware, disabling and enabling sensors, updating sensor thresholds, updating telemetry intervals, and updating device service contracts.
+- View device battery consumption.
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-dashboard.png" alt-text="Screenshot showing the connected logistics application dashboard." lightbox="media/tutorial-iot-central-connected-logistics/connected-logistics-dashboard.png":::

#### Device Template
-Select **Device templates** to see the gateway capability model. A capability model is structured around the **Gateway Telemetry & Property** and **Gateway Commands** interfaces.
+Select **Device templates** to see the gateway capability model. A capability model is structured around two interfaces:
-**Gateway Telemetry & Property** - This interface defines all the telemetry related to sensors, location, and device information. The interface also defines device twin property capabilities such as sensor thresholds and update intervals.
-
-**Gateway Commands** - This interface organizes all the gateway command capabilities.
+- **Gateway Telemetry & Property** - This interface defines all the telemetry related to sensors, location, and device information. The interface also defines device twin property capabilities such as sensor thresholds and update intervals.
+- **Gateway Commands** - This interface organizes all the gateway command capabilities.
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-device-template.png" alt-text="Screenshot showing the connected logistics application device template." lightbox="media/tutorial-iot-central-connected-logistics/connected-logistics-device-template.png":::
Select **Device templates** to see the gateway capability model. A capability mo
Select the **Rules** tab to see the rules in this application template. These rules are configured to send email notifications to the operators for further investigation:
-**Gateway theft alert**: This rule triggers when there's unexpected light detection by the sensors during the journey. Operators must be notified immediately to investigate potential theft.
-
-**Lost gateway alert**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery, loss of connectivity, or device damage.
+- **Gateway theft alert**: This rule triggers when there's unexpected light detection by the sensors during the journey. Operators must be notified immediately to investigate potential theft.
+- **Lost gateway alert**: This rule triggers if the gateway doesn't report to the cloud for a prolonged period. The gateway could be unresponsive because of low battery, loss of connectivity, or device damage.
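The lost-gateway condition above boils down to a time-based check. The following Python sketch illustrates that logic only; it is not how IoT Central evaluates rules internally, and the 30-minute threshold is an assumed example value:

```python
from datetime import datetime, timedelta

# Assumed threshold: how long a gateway may stay silent before it's "lost".
LOST_THRESHOLD = timedelta(minutes=30)

def is_gateway_lost(last_report: datetime, now: datetime) -> bool:
    """Return True when the gateway hasn't reported within the threshold."""
    return now - last_report > LOST_THRESHOLD

# Example: a gateway last seen 45 minutes ago is considered lost.
now = datetime(2023, 6, 10, 12, 0)
print(is_gateway_lost(now - timedelta(minutes=45), now))  # True
print(is_gateway_lost(now - timedelta(minutes=5), now))   # False
```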
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-rules.png" alt-text="Screenshot showing the connected logistics application rules." lightbox="media/tutorial-iot-central-connected-logistics/connected-logistics-rules.png"::: ### Jobs
-Select the **Jobs** tab to create the jobs in this application. The following screenshot shows an example of jobs created.
+Select the **Jobs** tab to create the jobs in this application. The following screenshot shows an example of created jobs:
:::image type="content" source="media/tutorial-iot-central-connected-logistics/connected-logistics-jobs.png" alt-text="Screenshot showing the connected logistics application job." lightbox="media/tutorial-iot-central-connected-logistics/connected-logistics-jobs.png"::: You can use jobs to do application-wide operations. The jobs in this application use device commands and twin capabilities to do tasks such as disabling specific sensors across all the gateways or modifying the sensor threshold depending on the shipment mode and route:
-* It's a standard operation to disable shock sensors during ocean shipment to conserve battery or lower temperature threshold during cold chain transportation.
-
-* Jobs enable you to do system-wide operations such as updating firmware on the gateways or updating service contract to stay current on maintenance activities.
+- It's a standard operation to disable shock sensors during ocean shipment to conserve battery, or to lower the temperature threshold during cold chain transportation.
+- Jobs enable you to do system-wide operations such as updating firmware on the gateways or updating service contracts to stay current on maintenance activities.
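Jobs like these can also be created through the IoT Central REST API. The sketch below only builds a plausible request body; the device group ID, device template ID, and command name are hypothetical placeholders, and the exact payload schema can differ by API version, so check the jobs API reference before using it:

```python
def build_disable_sensor_job(device_group_id: str, command_name: str) -> dict:
    """Build a hypothetical job payload that runs a command on a device group."""
    return {
        "displayName": "Disable shock sensors for ocean shipment",
        "group": device_group_id,
        "data": [
            {
                "type": "command",
                "target": "dtmi:example:gateway;1",  # assumed device template ID
                "path": command_name,
            }
        ],
    }

job = build_disable_sensor_job("ocean-gateways", "DisableShockSensor")
print(job["data"][0]["path"])  # DisableShockSensor
```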
## Clean up resources
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
Previously updated : 06/14/2022 Last updated : 06/12/2023 # Tutorial: Deploy and walk through the digital distribution center application template
-As manufacturers and retailers establish worldwide presences, their supply chains branch out and become more complex. Consumers now expect large selections of products to be available, and for those goods to arrive within one or two days of purchase. Distribution centers must adapt to these trends while overcoming existing inefficiencies.
+As manufacturers and retailers establish worldwide presences, their supply chains branch out and become more complex. Consumers now expect a large selection of products, and for those goods to arrive within one or two days of purchase. Distribution centers must adapt to these trends while overcoming existing inefficiencies.
-Today, reliance on manual labor means that picking and packing accounts for 55-65% of distribution center costs. Manual picking and packing are also typically slower than automated systems, and rapidly fluctuating staffing needs make it even harder to meet shipping volumes. This seasonal fluctuation results in high staff turnover and increase the likelihood of costly errors.
+Today, reliance on manual labor means that picking and packing accounts for 55-65% of distribution center costs. Manual picking and packing are also typically slower than automated systems, and rapidly fluctuating staffing needs make it even harder to meet shipping volumes. This seasonal fluctuation results in high staff turnover and increases the likelihood of costly errors.
Solutions based on IoT enabled cameras can deliver transformational benefits by enabling a digital feedback loop. Data from across the distribution center leads to actionable insights that, in turn, results in better data.
The benefits of a digital distribution center include:
### Video cameras (1)
-Video cameras are the primary sensors in this digitally connected enterprise-scale ecosystem. Advancements in machine learning and artificial intelligence that allow video to be turned into structured data and process it at edge before sending to cloud. We can use IP cameras to capture images, compress them on the camera, and then send the compressed data over edge compute for video analytics pipeline or use GigE vision cameras to capture images on the sensor and then send these images directly to the Azure IoT Edge, which then compresses before processing in video analytics pipeline.
+Video cameras are the primary sensors in this example application. Machine learning and artificial intelligence enable video to be turned into structured data that you can process at the edge before sending it to the cloud. Use IP cameras to capture images, compress them on the camera, and then send the compressed data to edge compute resources for video analytics.
### Azure IoT Edge gateway (2)
-The "cameras-as-sensors" and edge workloads are managed locally by Azure IoT Edge and the camera stream is processed by analytics pipeline. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response time, low-bandwidth consumption, which results in low latency for rapid data processing. Only the most essential metadata, insights, or actions are sent to the cloud for further action or investigation.
+Azure IoT Edge manages the "cameras-as-sensors" and edge workloads locally, and a video analytics pipeline processes the data stream from the camera. The video analytics processing pipeline at Azure IoT Edge brings many benefits, including decreased response times and low-bandwidth consumption. The IoT Edge device sends only the most essential metadata, insights, or actions to the cloud.
### Device management with IoT Central
Azure IoT Central is a solution development platform that simplifies IoT device
### Business insights and actions using data egress (5,6)
-IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights that are based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. It can be achieved through webhook, Service Bus, event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
+IoT Central platform provides rich extensibility options through data export and APIs. Business insights based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application. Export destinations include webhooks, Azure Service Bus, an event hub, or blob storage.
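To make the webhook path concrete, the sketch below parses a telemetry message shaped like an IoT Central export payload. The message shape used here is an assumption for the example; check the data export documentation for the exact schema your application version emits:

```python
import json

def parse_export_message(raw: str) -> tuple[str, dict]:
    """Extract the device ID and telemetry values from an exported message."""
    message = json.loads(raw)
    return message["deviceId"], message.get("telemetry", {})

# Hypothetical exported telemetry message.
sample = json.dumps({
    "applicationId": "00000000-0000-0000-0000-000000000000",
    "deviceId": "gateway-01",
    "enqueuedTime": "2023-06-10T12:00:00Z",
    "telemetry": {"validPackages": 42, "invalidPackages": 3},
})
device_id, telemetry = parse_export_message(sample)
print(device_id, telemetry["validPackages"])  # gateway-01 42
```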
In this tutorial, you learn how to:
The following sections walk you through the key features of the application:
The default dashboard is a distribution center operator focused portal. Northwind Trader is a fictitious distribution center solution provider managing conveyor systems.
-In this dashboard, you'll see one gateway and one camera acting as an IoT device. Gateway is providing telemetry about packages such as valid, invalid, unidentified, and size along with associated device twin properties. All downstream commands are executed at IoT devices, such as a camera. This dashboard is pre-configured to showcase the critical distribution center device operations activity.
+In this dashboard, you see one gateway and one camera acting as an IoT device. The gateway provides telemetry about packages such as valid, invalid, unidentified, and size along with associated device twin properties. All downstream commands are executed at IoT devices. This dashboard is preconfigured to show the critical distribution center device operations activity.
The dashboard is logically organized to show the device management capabilities of the Azure IoT gateway and IoT device. You can:
-* Complete gateway command and control tasks.
-* Manage all the cameras in the solution.
+- Complete gateway command and control tasks.
+- Manage all the cameras in the solution.
:::image type="content" source="media/tutorial-iot-central-ddc/ddc-dashboard.png" alt-text="Screenshot showing the digital distribution center dashboard." lightbox="media/tutorial-iot-central-ddc/ddc-dashboard.png":::
The dashboard is logically organized to show the device management capabilities
Navigate to **Device templates**. The application has two device templates:
-* **Camera** - Organizes all the camera-specific command capabilities.
+- **Camera** - Organizes all the camera-specific command capabilities.
-* **Digital Distribution Gateway** - Represents all the telemetry coming from camera, cloud defined device twin properties and gateway info.
+- **Digital Distribution Gateway** - Represents all the telemetry coming from camera, cloud defined device twin properties and gateway info.
:::image type="content" source="media/tutorial-iot-central-ddc/ddc-devicetemplate.png" alt-text="Screenshot showing the digital distribution gateway device template." lightbox="media/tutorial-iot-central-ddc/ddc-devicetemplate.png"::: ### Rules
-Select the rules tab to see two different rules that exist in this application template. These rules are configured to email notifications to the operators for further investigations.
+Select the **Rules** tab to see the two rules in this application template. These rules send email notifications to the operators for further investigation:
-**Too many invalid packages alert** - This rule is triggered when the camera detects a high number of invalid packages flowing through the conveyor system.
+- **Too many invalid packages alert** - This rule triggers when the camera detects a high number of invalid packages flowing through the conveyor system.
-**Large package** - This rule will trigger if the camera detects huge package that can't be inspected for the quality.
+- **Large package** - This rule triggers if the camera detects a huge package that can't be inspected for quality.
:::image type="content" source="media/tutorial-iot-central-ddc/ddc-rules.png" alt-text="Screenshot showing the list of rules in the digital distribution center application." lightbox="media/tutorial-iot-central-ddc/ddc-rules.png":::
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
Previously updated : 06/13/2022 Last updated : 06/12/2023 # Tutorial: Deploy a smart inventory-management application template Inventory is the stock of goods that a retail business holds. As a retailer, you must balance the costs of storing too much inventory against the costs of having insufficient inventory to meet customer demand. It's critical to deploy smart inventory-management practices to ensure that the right products are in stock and in the right place at the right time.
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Create a smart inventory-management application
-> * Walk through the application
+> * Use the application
The benefits of smart inventory management include:
The benefits of smart inventory management include:
IoT data that you generate from radio-frequency identification (RFID) tags, beacons, and cameras gives you opportunities to improve inventory-management processes. You can combine telemetry that you've gathered from IoT sensors and devices with other data sources, such as weather and traffic information, in cloud-based business intelligence systems.
-The application template that you'll create focuses on device connectivity, and it helps you configure and manage the RFID and Bluetooth low energy (BLE) reader devices.
+The application template that you create focuses on device connectivity, and it helps you configure and manage the RFID and Bluetooth low energy (BLE) reader devices.
## Smart inventory-management architecture
The preceding architecture diagram illustrates the smart inventory-management ap
* (**1**) RFID tags
- RFID tags transmit data about an item through radio waves. RFID tags ordinarily don't have a battery, unless specified. Tags receive energy from radio waves that are generated by the reader and then transmit a signal back to the RFID reader.
+ RFID tags transmit data about an item through radio waves. RFID tags ordinarily don't have a battery, unless specified. Tags receive energy from radio waves that the RFID reader generates and then transmit a signal back to the reader.
* (**1**) BLE tags
- An energy beacon broadcasts packets of data at regular intervals. Beacon data is detected by BLE readers or installed services on smartphones and then transmitted to the cloud.
+ An energy beacon broadcasts packets of data at regular intervals. BLE readers or installed services on smartphones detect beacon data and then transmit it to the cloud.
* (**1**) RFID and BLE readers
The preceding architecture diagram illustrates the smart inventory-management ap
BLE readers, also known as Access Points (AP), are similar to RFID readers. They're used to detect nearby Bluetooth signals and relay them to a local Azure IoT Edge instance or the cloud via JSON-RPC 2.0 over MQTT.
- Many readers can read RFID and beacon signals and provide additional sensor capability that's related to temperature and humidity, via accelerometer and gyroscope.
+ Many readers can read RFID and beacon signals and provide other sensor capabilities.
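As an illustration of the JSON-RPC 2.0 transport mentioned above, the sketch below builds the kind of notification a reader might publish over MQTT. The method name and parameter fields are hypothetical; only the `jsonrpc`, `method`, and `params` envelope comes from the JSON-RPC 2.0 specification (a notification omits `id`, so no response is expected):

```python
import json

def build_tag_notification(tag_id: str, rssi: int) -> str:
    """Serialize a JSON-RPC 2.0 notification carrying one beacon reading."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "tag_detected",  # hypothetical method name
        "params": {"tagId": tag_id, "rssi": rssi},
    })

payload = build_tag_notification("ble-0042", -67)
print(json.loads(payload)["jsonrpc"])  # 2.0
```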
* (**2**) Azure IoT Edge gateway
- Azure IoT Edge server provides a place to preprocess the data locally before sending it on to the cloud. We can also deploy cloud workloads artificial intelligence, Azure and third-party services, and business logic by using standard containers.
+ Azure IoT Edge server provides a place to preprocess the data locally before sending it on to the cloud. You can also deploy cloud workloads such as artificial intelligence, Azure and third-party services, and business logic by using standard containers.
* Device management with IoT Central
The preceding architecture diagram illustrates the smart inventory-management ap
* (**3**) Business insights and actions using data egress
- The IoT Central platform provides rich extensibility options through Continuous Data Export (CDE) and APIs. Business insights that are based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application.
+ The IoT Central platform provides rich extensibility options through data export and APIs. Business insights that are based on telemetry data processing or raw telemetry are typically exported to a preferred line-of-business application.
You can use a webhook, service bus, event hub, or blob storage to build, train, and deploy machine learning models and further enrich insights.
An active Azure subscription. If you don't have an Azure subscription, [create a
## Create a smart inventory-management application
-Create the application by doing the following:
+Create the application by completing the following steps:
1. Sign in to the [Azure IoT Central Build](https://aka.ms/iotcentral) site with a Microsoft personal, work, or school account. On the left pane, select **Build**, and then select the **Retail** tab.
The following sections describe the key features of the application.
### Dashboard
-After you deploy the application, your default dashboard is a smart, operator-focused, inventory-management portal. Northwind Trader is a fictitious smart inventory provider that manages its warehouse with Bluetooth low energy (BLE) and its retail store with RFID.
+After you deploy the application, your default dashboard is a smart, operator-focused, inventory-management portal. Northwind Trader is a fictitious smart inventory provider that manages its warehouse with Bluetooth low energy (BLE) beacons and its retail store with RFID tags.
On this dashboard are two different gateways, each providing telemetry about inventory, along with associated commands, jobs, and actions that you can perform. This dashboard is preconfigured to display the activity of the critical smart inventory-management device. It's logically divided between two separate gateway device-management operations: * The warehouse is deployed with a fixed BLE gateway and BLE tags on pallets to track and trace inventory at a larger facility. * The retail store is implemented with a fixed RFID gateway and RFID tags at the item level to track and trace the inventory in a store outlet.
-* View the [gateway location](../core/howto-use-location-data.md), status, and related details.
+* View the gateway location, status, and related details.
* You can easily track the total number of gateways, active tags, and unknown tags. * You can perform device management operations, such as: * Update firmware * Enable or disable sensors * Update sensor threshold * Update telemetry intervals
- * Update device service contracts
+ * Update device service contracts
* Gateway devices can perform on-demand inventory management with a complete or incremental scan.
This dashboard is preconfigured to display the activity of the critical smart in
Select the **Device templates** tab to display the gateway capability model. A capability model is structured around two separate interfaces:
-* **Gateway Telemetry and Property**: This interface displays the telemetry that's related to sensors, location, device info, and device twin property capability, such as gateway thresholds and update intervals.
+* **Gateway Telemetry and Property**: This interface displays the telemetry that's related to sensors, location, device info, and device twin properties such as gateway thresholds and update intervals.
* **Gateway Commands**: This interface organizes all the gateway command capabilities.
key-vault Move Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-region.md
Title: Move a key vault to a different region - Azure Key Vault | Microsoft Docs
+ Title: Move a key vault to a different region - Azure Key Vault
description: This article offers guidance on moving a key vault to a different region.
# Customer intent: As a key vault administrator, I want to move my vault to another region.
-# Move an Azure key vault across regions
+# Move a key vault across regions
Azure Key Vault does not allow you to move a key vault from one region to another. You can, however, create a key vault in the new region, manually copy each individual key, secret, or certificate from your existing key vault to the new key vault, and then remove the original key vault.
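Because there's no built-in move operation, the copy step is a read-from-source, write-to-destination loop. The Python sketch below assumes clients shaped like the `azure-keyvault-secrets` `SecretClient` (with `list_properties_of_secrets`, `get_secret`, and `set_secret`); it covers secrets only, and keys and certificates need their own handling. In-memory fakes stand in for real vault clients so the loop can be exercised without Azure credentials:

```python
def copy_secrets(source_client, destination_client) -> list[str]:
    """Copy every secret from the source vault to the destination vault."""
    copied = []
    for prop in source_client.list_properties_of_secrets():
        secret = source_client.get_secret(prop.name)
        destination_client.set_secret(secret.name, secret.value)
        copied.append(secret.name)
    return copied

# In-memory stand-ins so the loop can run without a real vault.
class FakeSecret:
    def __init__(self, name, value):
        self.name, self.value = name, value

class FakeVault:
    def __init__(self, secrets=None):
        self._secrets = dict(secrets or {})
    def list_properties_of_secrets(self):
        return [FakeSecret(n, None) for n in self._secrets]
    def get_secret(self, name):
        return FakeSecret(name, self._secrets[name])
    def set_secret(self, name, value):
        self._secrets[name] = value

source = FakeVault({"db-password": "s3cret", "api-key": "abc123"})
destination = FakeVault()
print(copy_secrets(source, destination))  # ['db-password', 'api-key']
```

In a real migration you'd also preserve content types and tags and handle throttling; this sketch shows only the core loop.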
load-testing Overview What Is Azure Load Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/overview-what-is-azure-load-testing.md
For more advanced load testing scenarios, you can [create a load test by reusing
If your application is hosted on Azure, Azure Load Testing collects detailed resource metrics to help you [identify performance bottlenecks](#identify-performance-bottlenecks-by-using-high-scale-load-tests) across your Azure application components.
-To capture application performance regressions early, add your load test in your [continuous integration and continuous deployment (CI/CD) workflow](#enable-automated-load-testing). Leverage test fail criteria to define and validate your application quality requirements.
+To capture application performance regressions early, add your load test in your [continuous integration and continuous deployment (CI/CD) workflow](./quickstart-add-load-test-cicd.md). Leverage test fail criteria to define and validate your application quality requirements.
Azure Load Testing enables you to test private application endpoints or applications that you host on-premises. For more information, see the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md).
You might also [download the test results](./how-to-export-test-results.md) for
You can integrate Azure Load Testing in your CI/CD pipeline at meaningful points during the development lifecycle. For example, you could automatically run a load test at the end of each sprint or in a staging environment to validate a release candidate build.
-Get started with [adding load testing to your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md) to quickly identify performance degradation of your application under load.
+Get started with [adding load testing to your CI/CD workflow](./quickstart-add-load-test-cicd.md) to quickly identify performance degradation of your application under load.
In the test configuration, [specify test fail criteria](./how-to-define-test-criteria.md) to catch application performance or stability regressions early in the development cycle. For example, get alerted when the average response time or the number of errors exceed a specific threshold.
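In the YAML test configuration, fail criteria look roughly like the fragment below. The metric expressions follow the patterns documented for Azure Load Testing, but the thresholds and identifiers here are example values, so verify the exact syntax against the current test configuration reference:

```yaml
version: v0.1
testId: sample-load-test
testPlan: sampleApp.jmx
engineInstances: 1
failureCriteria:
  - avg(response_time_ms) > 500   # fail if average response time exceeds 500 ms
  - percentage(error) > 5         # fail if more than 5% of requests error
```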
Azure Load Testing doesn't store or process customer data outside the region you
Start using Azure Load Testing: - [Quickstart: Load test an existing web application](./quickstart-create-and-run-load-test.md).
+- [Quickstart: Automate load tests with CI/CD](./quickstart-add-load-test-cicd.md).
- [Tutorial: Use a load test to identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md).-- [Tutorial: Set up automated load testing](./tutorial-identify-performance-regression-with-cicd.md). - Learn about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
load-testing Quickstart Add Load Test Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-add-load-test-cicd.md
+
+ Title: 'Quickstart: Add load test to CI/CD'
+
+description: 'This quickstart shows how to run your load tests with Azure Load Testing in CI/CD. Learn how to add a load test to GitHub Actions or Azure Pipelines.'
++++ Last updated : 06/05/2023++
+# Quickstart: Automate a load test with CI/CD in GitHub Actions or Azure Pipelines
+
+Get started with automating load tests in Azure Load Testing by adding a load test to a CI/CD pipeline. After running a load test in the Azure portal, you export the configuration files and configure a CI/CD pipeline in GitHub Actions or Azure Pipelines.
+
+After you complete this quickstart, you have a CI/CD workflow that is configured to run a load test with Azure Load Testing.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Load Testing test. Create a [URL-based load test](./quickstart-create-and-run-load-test.md) or [use an existing JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md) to create a load test.
+
+# [Azure Pipelines](#tab/pipelines)
+- An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true). If you need help with getting started with Azure Pipelines, see [Create your first pipeline](/azure/devops/pipelines/create-first-pipeline?preserve-view=true&view=azure-devops&tabs=java%2Ctfs-2018-2%2Cbrowser).
+
+# [GitHub Actions](#tab/github)
+- A GitHub account. If you don't have a GitHub account, you can [create one for free](https://github.com/).
+- A GitHub repository to store the load test input files and create a GitHub Actions workflow. To create one, see [Creating a new repository](https://docs.github.com/github/creating-cloning-and-archiving-repositories/creating-a-new-repository).
+++
+## Configure service authentication
+
+To run a load test in your CI/CD workflow, grant the workflow permission to access your load testing resource. Create a service principal for the CI/CD workflow and assign it the Load Test Contributor Azure RBAC role.
+
+# [Azure Pipelines](#tab/pipelines)
+
+### Create a service connection in Azure Pipelines
+
+In Azure Pipelines, you create a *service connection* in your Azure DevOps project to access resources in your Azure subscription. When you create the service connection, Azure DevOps creates an Azure Active Directory service principal object.
+
+1. Sign in to your Azure DevOps organization (`https://dev.azure.com/<your-organization>`), and select your project.
+
+ Replace the `<your-organization>` text placeholder with your project identifier.
+
+1. Select **Project settings** > **Service connections** > **+ New service connection**.
+
+1. In the **New service connection** pane, select **Azure Resource Manager**, and then select **Next**.
+
+1. Select the **Service Principal (automatic)** authentication method, and then select **Next**.
+
+1. Enter the service connection details, and then select **Save** to create the service connection.
+
+ | Field |