Updates from: 03/29/2023 01:12:19
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Previously updated : 03/27/2023 Last updated : 03/28/2023
Applications and systems that support customization of the attribute list includ
> [!NOTE]
-> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure Portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the attribute list as described [above](#editing-the-list-of-supported-attributes).
+> Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems and have first-hand knowledge of how their custom attributes are defined, or when a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes).
> [!NOTE]
> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory.
When you're editing the list of supported attributes, the following properties a
- **Multi-value?** - Whether the attribute supports multiple values.
- **Exact case?** - Whether the attribute's values are evaluated in a case-sensitive way.
- **API Expression** - Don't use, unless instructed to do so by the documentation for a specific provisioning connector (such as Workday).
-- **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and currently can't be edited using the Azure portal, but can be edited using the [Microsoft Graph API](/graph/api/resources/synchronization-configure-with-custom-target-attributes).
+- **Referenced Object Attribute** - If it's a Reference type attribute, then this menu lets you select the table and attribute in the target application that contains the value associated with the attribute. For example, if you have an attribute named "Department" whose stored value references an object in a separate "Departments" table, you would select "Departments.Name". The reference tables and the primary ID fields supported for a given application are preconfigured and can't be edited using the Azure portal. However, you can edit them using the [Microsoft Graph API](/graph/api/resources/synchronization-configure-with-custom-target-attributes).
#### Provisioning a custom extension attribute to a SCIM compliant application

The SCIM RFC defines a core user and group schema, while also allowing for extensions to the schema to meet your application's needs. To add a custom attribute to a SCIM application:
For SCIM applications, the attribute name must follow the pattern shown in the e
These instructions are only applicable to SCIM-enabled applications. Applications such as ServiceNow and Salesforce aren't integrated with Azure AD using SCIM, and therefore they don't require this specific namespace when adding a custom attribute.
-Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example because it isn't sent in requests from the Azure AD SCIM client. This issue will be fixed in the future and the header will be sent in the request.
+Custom attributes can't be referential attributes, multi-value or complex-typed attributes. Custom multi-value and complex-typed extension attributes are currently supported only for applications in the gallery. The custom extension schema header is omitted in the example because it isn't sent in requests from the Azure AD SCIM client.
**Example representation of a user with an extension attribute:**
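The payload itself is elided in this digest. A minimal sketch, following SCIM 2.0 (RFC 7643) extension conventions with a hypothetical `CustomExtensionName` extension and `CustomAttribute` attribute, and keeping in mind that, per the preceding paragraph, the Azure AD SCIM client currently omits the extension entry from the `schemas` array:

```json
{
  "schemas": [
    "urn:ietf:params:scim:schemas:core:2.0:User",
    "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User"
  ],
  "userName": "bjensen@contoso.com",
  "active": true,
  "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
    "CustomAttribute": "701984"
  }
}
```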
Custom attributes can't be referential attributes, multi-value or complex-typed
## Provisioning a role to a SCIM app
-Use the steps in the example to provision roles for a user to your application. Note that the description is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the predefined role mappings. The bullets describe how to transform the AppRoleAssignments attribute to the format your application expects.
+Use the steps in the example to provision roles for a user to your application. The description is specific to custom SCIM applications. For gallery applications such as Salesforce and ServiceNow, use the predefined role mappings. The bullets describe how to transform the AppRoleAssignments attribute to the format your application expects.
- Mapping an appRoleAssignment in Azure AD to a role in your application requires that you transform the attribute using an [expression](../app-provisioning/functions-for-customizing-application-data.md). The appRoleAssignment attribute **shouldn't be mapped directly** to a role attribute without using an expression to parse the role details.
- **SingleAppRoleAssignment**
  - **When to use:** Use the SingleAppRoleAssignment expression to provision a single role for a user and to specify the primary role.
- - **How to configure:** Use the steps described above to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from (`roles[primary eq "True"].display`, `roles[primary eq "True"].type`, and `roles[primary eq "True"].value`). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
+ - **How to configure:** Use the steps described to navigate to the attribute mappings page and use the SingleAppRoleAssignment expression to map to the roles attribute. There are three role attributes to choose from (`roles[primary eq "True"].display`, `roles[primary eq "True"].type`, and `roles[primary eq "True"].value`). You can choose to include any or all of the role attributes in your mappings. If you would like to include more than one, just add a new mapping and include it as the target attribute.
![Add SingleAppRoleAssignment](./media/customize-application-attributes/edit-attribute-singleapproleassignment.png)
  - **Things to consider**
- - Ensure that multiple roles aren't assigned to a user. We can't guarantee which role will be provisioned.
+ - Ensure that multiple roles aren't assigned to a user. There is no guarantee which role is provisioned.
    - SingleAppRoleAssignments isn't compatible with setting scope to "Sync All users and groups."
  - **Example request (POST)**
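The example body is elided in this digest. A hedged sketch of the POST a single-role mapping might produce, assuming the expression `SingleAppRoleAssignment([appRoleAssignments])` is mapped to the `roles[primary eq "True"].value` attribute and a hypothetical role named "Admin":

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "userName": "bjensen@contoso.com",
  "active": true,
  "roles": [
    {
      "primary": true,
      "display": "Admin",
      "value": "Admin",
      "type": "WindowsAzureActiveDirectoryRole"
    }
  ]
}
```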
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
- **AppRoleAssignmentsComplex**
  - **When to use:** Use the AppRoleAssignmentsComplex expression to provision multiple roles for a user.
- - **How to configure:** Edit the list of supported attributes as described above to include a new attribute for roles:
+ - **How to configure:** Edit the list of supported attributes as described to include a new attribute for roles:
![Add roles](./media/customize-application-attributes/add-roles.png)<br>
The request formats in the PATCH and POST differ. To ensure that POST and PATCH
![Add AppRoleAssignmentsComplex](./media/customize-application-attributes/edit-attribute-approleassignmentscomplex.png)<br>
  - **Things to consider**
- - All roles will be provisioned as primary = false.
+ - All roles are provisioned as primary = false.
    - The POST contains the role type. The PATCH request doesn't contain type. We're working on sending the type in both POST and PATCH requests.
    - AppRoleAssignmentsComplex isn't compatible with setting scope to "Sync All users and groups."
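As a hedged illustration of the preceding bullets, a mapping such as `AppRoleAssignmentsComplex([appRoleAssignments])` flowing to a custom roles attribute might emit a fragment like the following in the POST, with every role carrying `primary = false` (the role names are hypothetical):

```json
"roles": [
  {
    "primary": false,
    "display": "Admin",
    "value": "Admin",
    "type": "WindowsAzureActiveDirectoryRole"
  },
  {
    "primary": false,
    "display": "User",
    "value": "User",
    "type": "WindowsAzureActiveDirectoryRole"
  }
]
```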
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
You can use scoping filters to define attribute-based rules that determine which
### B2B (guest) users
-It's possible to use the Azure AD user provisioning service to provision B2B (guest) users in Azure AD to SaaS applications.
-However, for B2B users to sign in to the SaaS application using Azure AD, the SaaS application must have its SAML-based single sign-on capability configured in a specific way. For more information on how to configure SaaS applications to support sign-ins from B2B users, see [Configure SaaS apps for B2B collaboration](../external-identities/configure-saas-apps.md).
+It's possible to use the Azure AD user provisioning service to provision B2B (guest) users in Azure AD to SaaS applications. However, for B2B users to sign in to the SaaS application using Azure AD, you must manually configure the SaaS application to use Azure AD as a Security Assertion Markup Language (SAML) identity provider.
+
+Follow these general guidelines when configuring SaaS apps for B2B (guest) users:
+- For most apps, user setup must happen manually. Users must also be created manually in the app.
+- For apps that support automatic setup, such as Dropbox, separate invitations are created from the apps, and users must accept each invitation.
+- In the user attributes, to mitigate any issues with a mangled userPrincipalName (UPN) for guest users, always set the user identifier to **user.mail**.
> [!NOTE]
> The userPrincipalName for a B2B user represents the external user's email address alias@theirdomain as "alias_theirdomain#EXT#@yourdomain". When the userPrincipalName attribute is included in your attribute mappings as a source attribute, and a B2B user is being provisioned, the #EXT# and your domain are stripped from the userPrincipalName, so only their original alias@theirdomain is used for matching or provisioning. If you require the full user principal name including #EXT# and your domain to be present, replace userPrincipalName with originalUserPrincipalName as the source attribute. <br />
active-directory Concept Certificate Based Authentication Certificateuserids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md
To update certificate user IDs for federated users, configure Azure AD Connect t
### Synchronize X509:\<PN>PrincipalNameValue
-To synchronize X509:\<PN>PrincipalNameValue, create an outbound synchronization rule, and choose **Expression** in the flow type. Choose the target attribute as \<certificateUserIds>, and in the source field, add the expression <"X509:\<PN>"&[userPrincipalName]>. If your source attribute isn't userPrincipalName, you can change the expression accordingly.
+To synchronize X509:\<PN>PrincipalNameValue, create an outbound synchronization rule, and choose **Expression** in the flow type. Choose the target attribute as **certificateUserIds**, and in the source field, add the following expression. If your source attribute isn't userPrincipalName, you can change the expression accordingly.
+
+```
+"X509:\<PN>"&[userPrincipalName]
+```
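With userPrincipalName as the source attribute, this expression produces a certificateUserIds value of the form `X509:<PN>jdoe@contoso.com` (the user shown is hypothetical).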
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/pnexpression.png" alt-text="Screenshot of how to sync x509."::: ### Synchronize X509:\<RFC822>RFC822Name
-To synchronize X509:\<RFC822>RFC822Name, create an outbound synchronization rule, choose **Expression** in the flow type. Choose the target attribute as \<certificateUserIds>, and in the source field, add the expression <"X509:\<RFC822>"&[userPrincipalName]>. If your source attribute isn't userPrincipalName, you can change the expression accordingly.
+To synchronize X509:\<RFC822>RFC822Name, create an outbound synchronization rule, and choose **Expression** in the flow type. Choose the target attribute as **certificateUserIds**, and in the source field, add the following expression. If your source attribute isn't userPrincipalName, you can change the expression accordingly.
+
+```
+"X509:\<RFC822>"&[userPrincipalName]
+```
:::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/rfc822expression.png" alt-text="Screenshot of how to sync RFC822Name.":::
AlternativeSecurityId isn't part of the default attributes. An administrator nee
1. Create an inbound synchronization rule to transform from the altSecurityIdentities attribute to the alternateSecurityId attribute.
+ In the inbound rule, use the following options.
+
+ |Option | Value |
+ |-|-|
+ |Name | Descriptive name of the rule, such as: In from AD - altSecurityIdentities |
+ |Connected System | Your on-premises AD domain |
+ |Connected System Object Type | user |
+ |Metaverse Object Type | person |
+ |Precedence | Choose a high number that isn't used by another rule |
+
+ Then proceed to the Transformations tab and create a direct mapping from the source attribute **altSecurityIdentities** to the target attribute **alternativeSecurityId**, as shown below.
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/alt-security-identity-inbound.png" alt-text="Screenshot of how to transform from altSecurityIdentities to alternateSecurityId attribute.":::
1. Create an outbound synchronization rule to transform from the alternateSecurityId attribute to certificateUserIds.
+ |Option | Value |
+ |-|-|
+ |Name | Descriptive name of the rule, such as: Out to AAD - certificateUserIds |
+ |Connected System | Your Azure AD domain |
+ |Connected System Object Type | user |
+ |Metaverse Object Type | person |
+ |Precedence | Choose a high number that isn't used by another rule |
+
+ Then proceed to the Transformations tab, change your FlowType option to *Expression*, set the target attribute to **certificateUserIds**, and then enter the expression into the Source field.
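The expression itself isn't reproduced in this digest. As an illustrative sketch only (not the documented expression), assuming the inbound rule above has already populated alternativeSecurityId with values in a supported certificateUserIds pattern, a guarded pass-through could look like:

```
IIF(IsPresent([alternativeSecurityId]), [alternativeSecurityId], NULL)
```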
+ :::image type="content" border="true" source="./media/concept-certificate-based-authentication-certificateuserids/alt-security-identity-outbound.png" alt-text="Screenshot of outbound synchronization rule to transform from alternateSecurityId attribute to certificateUserIds.":::

To map the pattern supported by certificateUserIds, administrators must use expressions to set the correct value.
active-directory Howto Authentication Passwordless Security Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key.md
Registration features for passwordless authentication methods rely on the combin
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication method policy**.
-1. Under the method **FIDO2 Security Key**, click **All users**, or click **Add groups** to select specific groups.
+1. Under the method **FIDO2 Security Key**, click **All users**, or click **Add groups** to select specific groups. *Only security groups are supported*.
1. **Save** the configuration.

> [!NOTE]
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
If your users are [Having trouble with two-step verification](https://support.mi
### Health check script
-The [Azure AD MFA NPS Extension health check script](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/) performs a basic health check when troubleshooting the NPS extension. Run the script and choose option 3.
+The [Azure AD MFA NPS Extension health check script](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/) performs a basic health check when troubleshooting the NPS extension. Run the script and choose option **1** to isolate the cause of the potential issue.
### Contact Microsoft support

If you need additional help, contact a support professional through [Azure Multi-Factor Authentication Server support](https://support.microsoft.com/oas/default.aspx?prid=14947). When contacting us, it's helpful if you can include as much information about your issue as possible. Information you can supply includes the page where you saw the error, the specific error code, the specific session ID, the ID of the user who saw the error, and debug logs.
-To collect debug logs for support diagnostics, use the following steps on the NPS server:
+To collect debug logs for support diagnostics, run the [Azure AD MFA NPS Extension health check script](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/) on the NPS server and choose option **4** to collect logs.
-1. Open Registry Editor and browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa set **VERBOSE_LOG** to **TRUE**
-2. Open an Administrator command prompt and run these commands:
-
- ```
- Mkdir c:\NPS
- Cd c:\NPS
- netsh trace start Scenario=NetConnection capture=yes tracefile=c:\NPS\nettrace.etl
- logman create trace "NPSExtension" -ow -o c:\NPS\NPSExtension.etl -p {7237ED00-E119-430B-AB0F-C63360C8EE81} 0xffffffffffffffff 0xff -nb 16 16 -bs 1024 -mode Circular -f bincirc -max 4096 -ets
- logman update trace "NPSExtension" -p {EC2E6D3A-C958-4C76-8EA4-0262520886FF} 0xffffffffffffffff 0xff -ets
- ```
-
-3. Reproduce the issue
-
-4. Stop the tracing with these commands:
-
- ```
- logman stop "NPSExtension" -ets
- netsh trace stop
- wevtutil epl AuthNOptCh C:\NPS\%computername%_AuthNOptCh.evtx
- wevtutil epl AuthZOptCh C:\NPS\%computername%_AuthZOptCh.evtx
- wevtutil epl AuthZAdminCh C:\NPS\%computername%_AuthZAdminCh.evtx
- Start .
- ```
-
-5. Open Registry Editor and browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\AzureMfa set **VERBOSE_LOG** to **FALSE**
-6. Zip the contents of the C:\NPS folder and attach the zipped file to the support case.
+When the script finishes, zip the contents of the C:\NPS folder and attach the zipped file to the support case.
active-directory Howto Mfa Nps Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md
You can choose to create this key and set it to *FALSE* while your users are onb
### NPS extension health check script
-The following script is available to perform basic health check steps when troubleshooting the NPS extension.
-
-[MFA_NPS_Troubleshooter.ps1](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/)
+The [Azure AD MFA NPS Extension health check script](/samples/azure-samples/azure-mfa-nps-extension-health-check/azure-mfa-nps-extension-health-check/) performs a basic health check when troubleshooting the NPS extension. Run the script and choose one of the available options.
### How to fix the error "Service principal was not found" while running the `AzureMfaNpsExtnConfigSetup.ps1` script?
active-directory Howto Mfaserver Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfaserver-deploy.md
Now that you have downloaded the server you can install and configure it. Be sur
1. Double-click the executable.
2. On the Select Installation Folder screen, make sure that the folder is correct and click **Next**.
-3. Once the installation is complete, click **Finish**. The configuration wizard launches.
-4. On the configuration wizard welcome screen, check **Skip using the Authentication Configuration Wizard** and click **Next**. The wizard closes and the server starts.
-
- ![Skip using the Authentication Configuration Wizard](./media/howto-mfaserver-deploy/skip2.png)
-
+ The following libraries are installed:
+ * [Visual C++ Redistributable for Visual Studio 2017 (x64)](https://go.microsoft.com/fwlink/?LinkId=746572)
+ * [Visual C++ Redistributable for Visual Studio 2017 (x86)](https://go.microsoft.com/fwlink/?LinkId=746571)
+3. When the installation finishes, select **Finish**. The configuration wizard starts.
5. Back on the page that you downloaded the server from, click the **Generate Activation Credentials** button. Copy this information into the Azure MFA Server in the boxes provided and click **Activate**.

> [!NOTE]
Once you have upgraded to or installed MFA Server version 8.x or higher, it is r
- Set up and configure the Azure MFA Server with [Active Directory Federation Service](multi-factor-authentication-get-started-adfs.md), [RADIUS Authentication](howto-mfaserver-dir-radius.md), or [LDAP Authentication](howto-mfaserver-dir-ldap.md).
- Set up and configure [Remote Desktop Gateway and Azure Multi-Factor Authentication Server using RADIUS](howto-mfaserver-nps-rdg.md).
- [Deploy the Azure Multi-Factor Authentication Server Mobile App Web Service](howto-mfaserver-deploy-mobileapp.md).
-- [Advanced scenarios with Azure Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
+- [Advanced scenarios with Azure Multi-Factor Authentication and third-party VPNs](howto-mfaserver-nps-vpn.md).
active-directory Tutorial Enable Cloud Sync Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md
With password writeback enabled in Azure AD Connect cloud sync, now verify, and
To verify and enable password writeback in SSPR, complete the following steps:

1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account.
1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
-1. Check the option for **Write back passwords to your on-premises directory** .
+1. Check the option for **Enable password write back for synced users**.
1. (Optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**.
1. Set the option for **Allow users to unlock accounts without resetting their password** to *Yes*.
If you no longer want to use the SSPR writeback functionality you have configure
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**.
-1. Uncheck the option for **Write back passwords to your on-premises directory**.
+1. Uncheck the option for **Enable password write back for synced users**.
1. Uncheck the option for **Write back passwords with Azure AD Connect cloud sync**.
1. Uncheck the option for **Allow users to unlock accounts without resetting their password**.
1. When ready, select **Save**.
active-directory Location Condition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md
Named locations defined by IPv4/IPv6 address ranges are subject to the following
- Configure up to 195 named locations.
- Configure up to 2000 IP ranges per named location.
- Both IPv4 and IPv6 ranges are supported.
-- Private IP ranges can't be configured.
- The number of IP addresses contained in a range is limited. Only CIDR masks greater than /8 are allowed when defining an IP range.

#### Trusted locations
active-directory Active Directory Optional Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-optional-claims.md
Previously updated : 12/28/2022 Last updated : 03/28/2023
While optional claims are supported in both v1.0 and v2.0 format tokens and SAML
## v1.0 and v2.0 optional claims set
-The set of optional claims available by default for applications to use are listed below. You can use custom data in extension attributes and directory extensions to add optional claims for your application. To use directory extensions, see [Directory Extensions](#configuring-directory-extension-optional-claims), below. When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
+The set of optional claims available by default for applications is listed in the following table. You can use custom data in extension attributes and directory extensions to add optional claims for your application. To use directory extensions, see [Directory Extensions](#configuring-directory-extension-optional-claims). When adding claims to the **access token**, the claims apply to access tokens requested *for* the application (a web API), not claims requested *by* the application. No matter how the client accesses your API, the right data is present in the access token that is used to authenticate against your API.
> [!NOTE]
> The majority of these claims can be included in JWTs for v1.0 and v2.0 tokens, but not SAML tokens, except where noted in the Token Type column. Consumer accounts support a subset of these claims, marked in the "User Type" column. Many of the claims listed do not apply to consumer users (they have no tenant, so `tenant_ctry` has no value).
The set of optional claims available by default for applications to use are list
| `acct` | User's account status in tenant | JWT, SAML | | If the user is a member of the tenant, the value is `0`. If they're a guest, the value is `1`. |
| `auth_time` | Time when the user last authenticated. See OpenID Connect spec. | JWT | | |
| `ctry` | User's country/region | JWT | | Azure AD returns the `ctry` optional claim if it's present and the value of the field is a standard two-letter country/region code, such as FR, JP, SZ, and so on. |
-| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or pre-fill in your UX. |
+| `email` | The reported email address for this user | JWT, SAML | MSA, Azure AD | This value is included by default if the user is a guest in the tenant. For managed users (the users inside the tenant), it must be requested through this optional claim or, on v2.0 only, with the OpenID scope. This value isn't guaranteed to be correct, and is mutable over time - never use it for authorization or to save data for a user. For more information, see [Validate the user has permission to access this data](access-tokens.md). If you require an addressable email address in your app, request this data from the user directly, using this claim as a suggestion or prefill in your UX. |
| `fwd` | IP address.| JWT | | Adds the original IPv4 address of the requesting client (when inside a VNET) |
-| `groups`| Optional formatting for group claims |JWT, SAML| |For details see [Group claims](#configuring-groups-optional-claims) below. For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well.
+| `groups`| Optional formatting for group claims |JWT, SAML| |For details, see [Group claims](#configuring-groups-optional-claims). For more information about group claims, see [How to configure group claims](../hybrid/how-to-connect-fed-group-claims.md). Used with the GroupMembershipClaims setting in the [application manifest](reference-app-manifest.md), which must be set as well.
| `idtyp` | Token type | JWT access tokens | Special: only in app-only access tokens | Value is `app` when the token is an app-only token. This claim is the most accurate way for an API to determine if a token is an app token or an app+user token.|
-| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim that's base64 encoded. Do not modify this value. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects on a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however that it exposed. |
-| `sid` | Session ID, used for per-session user sign-out. | JWT | Personal and Azure AD accounts. | |
+| `login_hint` | Login hint | JWT | MSA, Azure AD | An opaque, reliable login hint claim that's base64 encoded. Don't modify this value. This claim is the best value to use for the `login_hint` OAuth parameter in all flows to get SSO. It can be passed between applications to help them silently SSO as well - application A can sign in a user, read the `login_hint` claim, and then send the claim and the current tenant context to application B in the query string or fragment when the user selects a link that takes them to application B. To avoid race conditions and reliability issues, the `login_hint` claim *doesn't* include the current tenant for the user, and defaults to the user's home tenant when used. If you're operating in a guest scenario where the user is from another tenant, you must provide a tenant identifier in the sign-in request, and pass the same to apps you partner with. This claim is intended for use with your SDK's existing `login_hint` functionality, however it's exposed. |
+| `sid` | Session ID, used for per-session user sign out. | JWT | Personal and Azure AD accounts. | |
| `tenant_ctry` | Resource tenant's country/region | JWT | | Same as `ctry` except set at a tenant level by an admin. Must also be a standard two-letter value. |
| `tenant_region_scope` | Region of the resource tenant | JWT | | |
| `upn` | UserPrincipalName | JWT, SAML | | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). Instead, use the user object ID (`oid`) as a database key. For more information, see [Validate the user has permission to access this data](access-tokens.md). Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following ID token claims for displaying sign-in state to the user: `preferred_username` or `unique_name` for v1 tokens and `preferred_username` for v2 tokens. Although this claim is automatically included, you can specify it as an optional claim to attach additional properties to modify its behavior in the guest user case. You should use the `login_hint` claim for `login_hint` use - human-readable identifiers like UPN are unreliable. |
These claims are always included in v1.0 Azure AD tokens, but not included in v2
| `in_corp` | Inside Corporate Network | Signals if the client is logging in from the corporate network. If they're not, the claim isn't included. | Based off of the [trusted IPs](../authentication/howto-mfa-mfasettings.md#trusted-ips) settings in MFA. |
| `family_name` | Last Name | Provides the last name, surname, or family name of the user as defined in the user object. <br>"family_name":"Miller" | Supported in MSA and Azure AD. Requires the `profile` scope. |
| `given_name` | First name | Provides the first or "given" name of the user, as set on the user object.<br>"given_name": "Frank" | Supported in MSA and Azure AD. Requires the `profile` scope. |
-| `upn` | User Principal Name | An identifer for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identity user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) below for configuration of the claim. Requires the `profile` scope.|
+| `upn` | User Principal Name | An identifier for the user that can be used with the username_hint parameter. Not a durable identifier for the user and shouldn't be used for authorization or to uniquely identify user information (for example, as a database key). For more information, see [Validate the user has permission to access this data](access-tokens.md). Instead, use the user object ID (`oid`) as a database key. Users signing in with an [alternate login ID](../authentication/howto-authentication-use-email-signin.md) shouldn't be shown their User Principal Name (UPN). Instead, use the following `preferred_username` claim for displaying sign-in state to the user. | See [additional properties](#additional-properties-of-optional-claims) for configuration of the claim. Requires the `profile` scope.|
## v1.0-specific optional claims set
Some optional claims can be configured to change the way the claim is returned.
|-|--|-|
| `upn` | | Can be used for both SAML and JWT responses, and for v1.0 and v2.0 tokens. |
| | `include_externally_authenticated_upn` | Includes the guest UPN as stored in the resource tenant. For example, `foo_hometenant.com#EXT#@resourcetenant.com` |
-| | `include_externally_authenticated_upn_without_hash` | Same as above, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
+| | `include_externally_authenticated_upn_without_hash` | Same as listed previously, except that the hash marks (`#`) are replaced with underscores (`_`), for example `foo_hometenant.com_EXT_@resourcetenant.com`|
| `aud` | | In v1 access tokens, this claim is used to change the format of the `aud` claim. This claim has no effect in v2 tokens or either version's ID tokens, where the `aud` claim is always the client ID. Use this configuration to ensure that your API can more easily perform audience validation. Like all optional claims that affect the access token, the resource in the request must set this optional claim, since resources own the access token. |
| | `use_guid` | Emits the client ID of the resource (API) in GUID format as the `aud` claim always instead of it being runtime dependent. For example, if a resource sets this flag, and its client ID is `bb0a297b-6a42-4a55-ac40-09a501456577`, any app that requests an access token for that resource will receive an access token with `aud` : `bb0a297b-6a42-4a55-ac40-09a501456577`. </br></br> Without this claim set, an API could get tokens with an `aud` claim of `api://MyApi.com`, `api://MyApi.com/`, `api://myapi.com/AdditionalRegisteredField` or any other value set as an app ID URI for that API, and the client ID of the resource. |
You can configure optional claims for your application through the UI or applica
1. Go to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
1. Search for and select **Azure Active Directory**.
1. Under **Manage**, select **App registrations**.
-1. Select the application you want to configure optional claims for in the list.
+1. Choose the application for which you want to configure optional claims based on your scenario and desired outcome.
**Configuring optional claims through the UI:**
You can configure optional claims for your application through the UI or applica
} ```
-2. When finished, select **Save**. Now the specified optional claims will be included in the tokens for your application.
+2. When finished, select **Save**. Now the specified optional claims are included in the tokens for your application.
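For reference, a minimal manifest snippet of the shape this step produces might look like the following (the `auth_time` claim is just an example choice):

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "auth_time",
            "essential": false
        }
    ],
    "accessToken": [],
    "saml2Token": []
}
```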
### OptionalClaims type
Directory extensions are an Azure AD-only feature. If your application manifest
When configuring directory extension optional claims using the application manifest, use the full name of the extension (in the format: `extension_<appid>_<attributename>`). The `<appid>` is the stripped version of the **appId** (or Client ID) of the application requesting the claim.
-Within the JWT, these claims will be emitted with the following name format: `extn.<attributename>`.
+Within the JWT, these claims are emitted with the following name format: `extn.<attributename>`.
-Within the SAML tokens, these claims will be emitted with the following URI format: `http://schemas.microsoft.com/identity/claims/extn.<attributename>`
+Within the SAML tokens, these claims are emitted with the following URI format: `http://schemas.microsoft.com/identity/claims/extn.<attributename>`
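For example, assuming a hypothetical application with appId `ab603c56-068c-4e41-a7ab-25dee1ccd94d` and a directory extension attribute named `skypeId`, the manifest entry uses the full extension name, and the claim then surfaces in a JWT as `extn.skypeId`:

```json
"optionalClaims": {
    "idToken": [
        {
            "name": "extension_ab603c56068c4e41a7ab25dee1ccd94d_skypeId",
            "source": "user",
            "essential": false
        }
    ]
}
```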
## Configuring groups optional claims
-This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest. Group optional claims are only emitted in the JWT for **user principals**. **Service principals** _will not_ have group optional claims emitted in the JWT.
+This section covers the configuration options under optional claims for changing the group attributes used in group claims from the default group objectID to attributes synced from on-premises Windows Active Directory. You can configure groups optional claims for your application through the UI or application manifest. Group optional claims are only emitted in the JWT for **user principals**. **Service principals** _won't_ have group optional claims emitted in the JWT.
> [!IMPORTANT]
> Azure AD limits the number of groups emitted in a token to 150 for SAML assertions and 200 for JWT, including nested groups. For more information on group limits and important caveats for group claims from on-premises attributes, see [Configure group claims for applications with Azure AD](../hybrid/how-to-connect-fed-group-claims.md).
This section covers the configuration options under optional claims for changing
 In additionalProperties, only one of "sam_account_name", "dns_domain_and_sam_account_name", or "netbios_domain_and_sam_account_name" is required. If more than one is present, the first is used and any others are ignored. Additionally, you can add "cloud_displayname" to emit the display name of the cloud group. This option works only when `groupMembershipClaims` is set to `ApplicationGroup`.
- Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
+ Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values are emitted in the role claim.
If "emit_as_roles" is used, any application roles configured that the user is assigned won't appear in the role claim.
There are multiple options available for updating the properties on an applicati
**Example:**
-In the example below, you'll use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims will be added to each type of token that the application can receive:
+In the following example, you'll use the **Token configuration** UI and **Manifest** to add optional claims to the access, ID, and SAML tokens intended for your application. Different optional claims are added to each type of token that the application can receive:
- The ID tokens will now contain the UPN for federated users in the full form (`<upn>_<homedomain>#EXT#@<resourcedomain>`).
- The access tokens that other clients request for this application will now include the auth_time claim.
In the example below, you'll use the **Token configuration** UI and **Manifest**
1. Search for and select **Azure Active Directory**.
1. Find the application you want to configure optional claims for in the list and select it.
1. Under **Manage**, select **Manifest** to open the inline manifest editor.
-1. You can directly edit the manifest using this editor. The manifest follows the schema for the [Application entity](./reference-app-manifest.md), and automatically formats the manifest once saved. New elements will be added to the `OptionalClaims` property.
+1. You can directly edit the manifest using this editor. The manifest follows the schema for the [Application entity](./reference-app-manifest.md), and automatically formats the manifest once saved. New elements are added to the `OptionalClaims` property.
```json "optionalClaims": {
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
- Title: "Quickstart: Add user sign-in to a Node.js web app"
-description: In this quickstart, you learn how to implement authentication in a Node.js web application using OpenID Connect.
------- Previously updated : 11/22/2021----
-#Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
--
-# Quickstart: Add sign in using OpenID Connect to a Node.js web app
-
-> [!div renderon="docs"]
-> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
->
-> > [Quickstart: Add user sign-in to a Node.js web app built with the Express framework ](web-app-quickstart.md?pivots=devlang-nodejs-passport)
->
-> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-
-> [!div renderon="portal" class="sxs-lookup"]
-> In this quickstart, you download and run a code sample that demonstrates how to set up OpenID Connect authentication in a web application built using Node.js with Express. The sample is designed to run on any platform.
->
-> ## Prerequisites
->
-> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> - [Node.js](https://nodejs.org/en/download/).
->
-> ## Register your application
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `MyWebApp`. Users of your app might see this name, and you can change it later.
-> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
->
-> If there are more than one redirect URIs, add these from the **Authentication** tab later after the app has been successfully created.
->
-> 1. Select **Register** to create the app.
-> 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this > value to configure the application later in this project.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Web**.
-> 1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`.
-> 1. Enter a **Front-channel logout URL** `https://localhost:3000`.
-> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens** as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign-in the user.
-> 1. Select **Configure**.
-> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-> 1. Enter a key description (for instance app secret).
-> 1. Select a key duration of either **In 1 year, In 2 years,** or **Never Expires**.
-> 1. Select **Add**. The key value will be displayed. Copy the key value and save it in a safe location for later use.
->
->
-> ## Download the sample application and modules
->
-> Next, clone the sample repo and install the NPM modules.
->
-> From your shell or command line:
->
-> `$ git clone git@github.com:AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
->
-> or
->
-> `$ git clone https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
->
-> From the project root directory, run the command:
->
-> `$ npm install`
->
-> ## Configure the application
->
-> Provide the parameters in `exports.creds` in config.js as instructed.
->
-> * Update `<tenant_name>` in `exports.identityMetadata` with the Azure AD tenant name of the format \*.onmicrosoft.com.
-> * Update `exports.clientID` with the Application ID noted from app registration.
-> * Update `exports.clientSecret` with the Application secret noted from app registration.
-> * Update `exports.redirectUrl` with the Redirect URI noted from app registration.
->
-> **Optional configuration for production apps:**
->
-> * Update `exports.destroySessionUrl` in config.js, if you want to use a different `post_logout_redirect_uri`.
->
-> * Set `exports.useMongoDBSessionStore` in config.js to true, if you want to use [mongoDB](https://www.mongodb.com) or other [compatible session stores](https://github.com/expressjs/session#compatible-session-stores).
-> The default session store in this sample is `express-session`. The default session store is not suitable for production.
->
-> * Update `exports.databaseUri`, if you want to use mongoDB session store and a different database URI.
->
-> * Update `exports.mongoDBSessionMaxAge`. Here you can specify how long you want to keep a session in mongoDB. The unit is second(s).
->
-> ## Build and run the application
->
-> Start mongoDB service. If you are using mongoDB session store in this app, you have to [install mongoDB](http://www.mongodb.org/) and start the service first. If you are using the default session store, you can skip this step.
->
-> Run the app using the following command from your command line.
->
-> ```
-> $ node app.js
-> ```
->
-> **Is the server output hard to understand?:** We use `bunyan` for logging in this sample. The console won't make much sense to you unless you also install bunyan and run the server like above but pipe it through the bunyan binary:
->
-> ```
-> $ npm install -g bunyan
->
-> $ node app.js | bunyan
-> ```
->
-> ### You're done!
->
-> You will have a server successfully running on `http://localhost:3000`.
->
-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
->
-> ## Next steps
-> Learn more about the web app scenario that the Microsoft identity platform supports:
-> > [!div class="nextstepaction"]
-> > [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
active-directory Scenario Protected Web Api App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-protected-web-api-app-registration.md
If you're following along with the web API scenario described in this set of art
- **User consent description**: _Accesses the TodoListService web API as a user_
- **State**: _Enabled_
+> [!TIP]
+> For the **Application ID URI**, you have the option to set it to the physical authority of the API, for example `https://graph.microsoft.com`. This can be useful if the URL of the API that needs to be called is known.
+
### If your web API is called by a service or daemon app

Expose _application permissions_ instead of delegated permissions if your API should be accessed by daemons, services, or other non-interactive (by a human) applications. Because daemon- and service-type applications run unattended and authenticate with their own identity, there is no user to "delegate" their permission.
active-directory Scenario Web Api Call Api App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-api-call-api-app-configuration.md
using Microsoft.Identity.Web;
builder.Services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(Configuration, "AzureAd")
    .EnableTokenAcquisitionToCallDownstreamApi()
- .AddDownstreamWebApi("MyApi", Configuration.GetSection("GraphBeta"))
+ .AddDownstreamApi("MyApi", Configuration.GetSection("GraphBeta"))
    .AddInMemoryTokenCaches();
// ...
```
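After this registration, a controller can call the configured downstream API through the `IDownstreamApi` abstraction. A minimal sketch; the controller, route, and relative path are assumptions, and `"MyApi"` is carried over from the snippet above:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Abstractions;

public class TodoListController : Controller
{
    private readonly IDownstreamApi _downstreamApi;

    public TodoListController(IDownstreamApi downstreamApi)
    {
        _downstreamApi = downstreamApi;
    }

    public async Task<IActionResult> Index()
    {
        // "MyApi" matches the service name passed to AddDownstreamApi above;
        // the relative path is a hypothetical endpoint on the downstream API.
        using var response = await _downstreamApi.CallApiForUserAsync(
            "MyApi",
            options => options.RelativePath = "api/todolist");
        return Content(await response.Content.ReadAsStringAsync());
    }
}
```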
For more information about the OBO protocol, see the [Microsoft identity platfor
## Next steps

Move on to the next article in this scenario,
-[Acquire a token for the app](scenario-web-api-call-api-acquire-token.md).
+[Acquire a token for the app](scenario-web-api-call-api-acquire-token.md).
active-directory Web App Quickstart Portal Node Js Passport https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-passport.md
- Title: "Quickstart: Add user sign-in to a Node.js web app"
-description: In this quickstart, you learn how to implement authentication in a Node.js web application using OpenID Connect.
------- Previously updated : 08/16/2022---
-#Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
--
-# Quickstart: Add sign in using OpenID Connect to a Node.js web app
-
-> [!div renderon="docs"]
-> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
->
-> > [Quickstart: Add user sign-in to a Node.js web app built with the Express framework](web-app-quickstart.md?pivots=devlang-nodejs-passport)
->
-> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
-
-> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
-> # Quickstart: Add sign in using OpenID Connect to a Node.js web app
->
-> In this quickstart, you download and run a code sample that demonstrates how to set up OpenID Connect authentication in a web application built using Node.js with Express. The sample is designed to run on any platform.
->
-> ## Prerequisites
->
-> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-> - [Node.js](https://nodejs.org/en/download/).
->
-> ## Register your application
->
-> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-> 1. Search for and select **Azure Active Directory**.
-> 1. Under **Manage**, select **App registrations** > **New registration**.
-> 1. Enter a **Name** for your application, for example `MyWebApp`. Users of your app might see this name, and you can change it later.
-> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
->
-> If there are more than one redirect URIs, add these from the **Authentication** tab later after the app has been successfully created.
->
-> 1. Select **Register** to create the app.
-> 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this > value to configure the application later in this project.
-> 1. Under **Manage**, select **Authentication**.
-> 1. Select **Add a platform** > **Web**.
-> 1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`.
-> 1. Enter a **Front-channel logout URL** `https://localhost:3000`.
-> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens** as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign-in the user.
-> 1. Select **Configure**.
-> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-> 1. Enter a key description (for instance app secret).
-> 1. Select a key duration of either **In 1 year, In 2 years,** or **Never Expires**.
-> 1. Select **Add**. The key value will be displayed. Copy the key value and save it in a safe location for later use.
->
->
-> ## Download the sample application and modules
->
-> Next, clone the sample repo and install the NPM modules.
->
-> From your shell or command line:
->
-> `$ git clone git@github.com:AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
->
-> or
->
-> `$ git clone https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
->
-> From the project root directory, run the command:
->
-> `$ npm install`
->
-> ## Configure the application
->
-> Provide the parameters in `exports.creds` in config.js as instructed.
->
-> * Update `<tenant_name>` in `exports.identityMetadata` with the Azure AD tenant name of the format \*.onmicrosoft.com.
-> * Update `exports.clientID` with the Application ID noted from app registration.
-> * Update `exports.clientSecret` with the Application secret noted from app registration.
-> * Update `exports.redirectUrl` with the Redirect URI noted from app registration.
->
-> **Optional configuration for production apps:**
->
-> * Update `exports.destroySessionUrl` in config.js, if you want to use a different `post_logout_redirect_uri`.
->
-> * Set `exports.useMongoDBSessionStore` in config.js to `true` if you want to use [MongoDB](https://www.mongodb.com) or another [compatible session store](https://github.com/expressjs/session#compatible-session-stores).
-> The default session store in this sample is `express-session`, which isn't suitable for production.
->
-> * Update `exports.databaseUri` if you want to use the MongoDB session store with a different database URI.
->
-> * Update `exports.mongoDBSessionMaxAge` to specify how long (in seconds) to keep a session in MongoDB.
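-> 
-> A hedged sketch of these optional settings (the values shown are illustrative only):
-> 
-> ```
-> exports.useMongoDBSessionStore = true;                    // use MongoDB instead of the default in-memory store
-> exports.databaseUri = 'mongodb://localhost/OIDCStrategy'; // hypothetical MongoDB connection URI
-> exports.mongoDBSessionMaxAge = 24 * 60 * 60;              // session lifetime in seconds (24 hours)
-> exports.destroySessionUrl = 'http://localhost:3000';      // post_logout_redirect_uri
-> ```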
->
-> ## Build and run the application
->
-> Start the MongoDB service. If you're using the MongoDB session store in this app, you must [install MongoDB](http://www.mongodb.org/) and start the service first (a sketch follows). If you're using the default session store, skip this step.
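-> 
-> For example, a minimal way to start a local MongoDB instance (the data directory path is a hypothetical example):
-> 
-> ```
-> $ mongod --dbpath /data/db
-> ```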
->
-> Run the app using the following command from your command line.
->
-> ```
-> $ node app.js
-> ```
->
-> **Is the server output hard to understand?** This sample uses `bunyan` for logging. The console output won't make much sense unless you also install bunyan and pipe the server output through the bunyan binary:
->
-> ```
-> $ npm install -g bunyan
->
-> $ node app.js | bunyan
-> ```
->
-> ### You're done!
->
-> The server is now running on `http://localhost:3000`.
->
-> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
->
-> ## Next steps
-> Learn more about the web app scenario that the Microsoft identity platform supports:
-> > [!div class="nextstepaction"]
-> > [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
zone_pivot_groups: web-app-quickstart
[!INCLUDE [node.js-msal](./includes/web-app/quickstart-nodejs-msal.md)] ::: zone-end - ::: zone pivot="devlang-java" [!INCLUDE [java](./includes/web-app/quickstart-java.md)] ::: zone-end
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Previously updated : 01/05/2023 Last updated : 03/27/2023
-# Log in to a Windows virtual machine in Azure by using Azure AD
+# Log in to a Windows virtual machine in Azure by using Azure AD, including passwordless authentication
Organizations can improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform to RDP into *Windows Server 2019 Datacenter edition* and later, or *Windows 10 1809* and later. You can then centrally control and enforce Azure role-based access control (RBAC) and Conditional Access policies that allow or deny access to the VMs.
This article shows you how to create and configure a Windows VM and log in by us
There are many security benefits of using Azure AD-based authentication to log in to Windows VMs in Azure. They include: -- Use Azure AD credentials to log in to Windows VMs in Azure. The result is federated and managed domain users.
+- Use Azure AD authentication, including passwordless, to log in to Windows VMs in Azure.
- Reduce reliance on local administrator accounts. - Password complexity and password lifetime policies that you configure for Azure AD also help secure Windows VMs. - With Azure RBAC: - Specify who can log in to a VM as a regular user or with administrator privileges. - When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. - When employees leave your organization and their user accounts are disabled or removed from Azure AD, they no longer have access to your resources.-- Configure Conditional Access policies to require multifactor authentication (MFA) and other signals, such as user sign-in risk, before you can RDP into Windows VMs.
+- Configure Conditional Access policies to require phishing-resistant MFA through the authentication strength (preview) grant control, or to require multifactor authentication (MFA) and other signals, such as user sign-in risk, before you can RDP into Windows VMs.
- Use Azure Policy to deploy and audit policies to require Azure AD login for Windows VMs and to flag the use of unapproved local accounts on the VMs. - Use Intune to automate and scale Azure AD join with mobile device management (MDM) auto-enrollment of Azure Windows VMs that are part of your virtual desktop infrastructure (VDI) deployments.
This feature currently supports the following Windows distributions:
- Windows Server 2019 Datacenter and later - Windows 10 1809 and later-
-> [!IMPORTANT]
-> Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
+- Windows 11 21H2 and later
This feature is now available in the following Azure clouds:
For more information about how to use Azure RBAC to manage access to your Azure
- [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles by using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md)
-## Enforce Conditional Access policies
+## Log in by using Azure AD credentials to a Windows VM
+
+You can log in over RDP using one of two methods:
+1. Passwordless, using any of the supported Azure AD credentials (recommended)
+1. Password/limited passwordless, using Windows Hello for Business deployed with the certificate trust model
+
+### Log in using passwordless authentication with Azure AD
-You can enforce Conditional Access policies, such as multifactor authentication or user sign-in risk check, before you authorize access to Windows VMs in Azure that are enabled with Azure AD login. To apply a Conditional Access policy, you must select the **Azure Windows VM Sign-In** app from the cloud apps or actions assignment option. Then use sign-in risk as a condition and/or require MFA as a control for granting access.
+To use passwordless authentication for your Windows VMs in Azure, the Windows client machine and the session host (VM) must run one of the following operating systems:
+
+- Windows 11 with [2022-10 Cumulative Updates for Windows 11 (KB5018418)](https://support.microsoft.com/kb/KB5018418) or later installed.
+- Windows 10, version 20H2 or later with [2022-10 Cumulative Updates for Windows 10 (KB5018410)](https://support.microsoft.com/kb/KB5018410) or later installed.
+- Windows Server 2022 with [2022-10 Cumulative Update for Microsoft server operating system (KB5018421)](https://support.microsoft.com/kb/KB5018421) or later installed.
+
+> [!IMPORTANT]
+> There's no requirement for the Windows client machine to be Azure AD registered, Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM. Additionally, to RDP by using Azure AD credentials, users must belong to one of the two Azure roles: Virtual Machine Administrator Login or Virtual Machine User Login.
+
+To connect to the remote computer:
+
+- Launch **Remote Desktop Connection** from Windows Search, or by running `mstsc.exe`.
+- On the **Advanced** tab, select the **Use a web account to sign in to the remote computer** option. This option is equivalent to the `enablerdsaadauth` RDP property (see the sample `.rdp` file after these steps). For more information, see [Supported RDP properties with Remote Desktop Services](/windows-server/remote/remote-desktop-services/clients/rdp-files).
+- Specify the name of the remote computer and select **Connect**.
> [!NOTE]
-> If you require MFA as a control for granting access to the Azure Windows VM Sign-In app, then you must supply an MFA claim as part of the client that initiates the RDP session to the target Windows VM in Azure. The only way to achieve this on a Windows 10 or later client is to use a Windows Hello for Business PIN or biometric authentication with the RDP client. Support for biometric authentication was added to the RDP client in Windows 10 version 1809.
->
-> Remote desktop using Windows Hello for Business authentication is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
+> An IP address can't be used when the **Use a web account to sign in to the remote computer** option is selected.
+> The name must match the hostname of the remote device in Azure AD and be network addressable, resolving to the IP address of the remote device.
-## Log in by using Azure AD credentials to a Windows VM
+- When prompted for credentials, specify your user name in `user@domain.com` format.
+- You're then prompted to allow the remote desktop connection when connecting to a new PC. Azure AD remembers up to 15 hosts for 30 days before prompting again. If you see this dialog, select **Yes** to connect.
+
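A minimal `.rdp` file sketch that captures these settings (the host name `myvm.contoso.com` is a hypothetical example; `enablerdsaadauth` is the property that the **Use a web account to sign in to the remote computer** option sets):

```
full address:s:myvm.contoso.com
enablerdsaadauth:i:1
```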
+> [!IMPORTANT]
+> If your organization has configured and is using [Azure AD Conditional Access](/azure/active-directory/conditional-access/overview), your device must satisfy the conditional access requirements to allow connection to the remote computer. Conditional Access policies may be applied to the application **Microsoft Remote Desktop (a4a365df-50f1-4397-bc59-1a1564b8bb9c)** for controlled access.
+
+> [!NOTE]
+> The Windows lock screen in the remote session doesn't support Azure AD authentication tokens or passwordless authentication methods like FIDO keys. The lack of support for these authentication methods means that users can't unlock their screens in a remote session. When you try to lock a remote session, either through user action or system policy, the session is instead disconnected and the service sends a message to the user explaining they've been disconnected. Disconnecting the session also ensures that when the connection is relaunched after a period of inactivity, Azure AD reevaluates the applicable conditional access policies.
+
+### Log in using password/limited passwordless authentication with Azure AD
> [!IMPORTANT] > Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are either Azure AD registered (minimum required build is 20H1) or Azure AD joined or hybrid Azure AD joined to the *same* directory as the VM. Additionally, to RDP by using Azure AD credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login. > > If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/connect-native-client-windows.md). + To log in to your Windows Server 2019 virtual machine by using Azure AD: 1. Go to the overview page of the virtual machine that has been enabled with Azure AD login.
You're now logged in to the Windows Server 2019 Azure virtual machine with the r
> [!NOTE] > You can save the .rdp file locally on your computer to start future remote desktop connections to your virtual machine, instead of going to the virtual machine overview page in the Azure portal and using the connect option.
+## Enforce Conditional Access policies
+
+You can enforce Conditional Access policies, such as requiring phishing-resistant MFA through the authentication strength (preview) grant control, requiring multifactor authentication, or checking user sign-in risk, before you authorize access to Windows VMs in Azure that are enabled with Azure AD login. To apply a Conditional Access policy, you must select the **Azure Windows VM Sign-In** app from the cloud apps or actions assignment option. Then use sign-in risk as a condition and/or an authentication strength or MFA requirement as a control for granting access.
+
+> [!NOTE]
+> If you require MFA as a control for granting access to the Azure Windows VM Sign-In app, then you must supply an MFA claim as part of the client that initiates the RDP session to the target Windows VM in Azure. You can satisfy this by using a passwordless authentication method for RDP that meets the Conditional Access policies. However, if you're using the limited passwordless method for RDP, the only way to satisfy this on a Windows 10 or later client is to use a Windows Hello for Business PIN or biometric authentication with the RDP client. Support for biometric authentication was added to the RDP client in Windows 10 version 1809. Remote desktop using Windows Hello for Business authentication is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
+ ## Use Azure Policy to meet standards and assess compliance Use Azure Policy to:
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
The following table is a summary of the custom settings wizard pages, the creden
> > For more information, see [Azure AD Connect: Configure AD DS Connector account permission](how-to-connect-configure-ad-ds-connector-account.md).
-The account you specify on the **Connect your directories** page must be created in Windows Server AD before installation. Azure AD Connect version 1.1.524.0 and later has the option to let the Azure AD Connect wizard create the AD DS Connector account that's used to connect to Windows Server AD.
+The account you specify on the **Connect your directories** page must be created in Windows Server AD as a normal user object (VSA, MSA, or gMSA aren't supported) before installation. Azure AD Connect version 1.1.524.0 and later has the option to let the Azure AD Connect wizard create the AD DS Connector account that's used to connect to Windows Server AD.
The account you specify also must have the required permissions. The installation wizard doesn't verify the permissions, and any issues are found only during the sync process.
active-directory Configure Permission Classifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md
Previously updated : 2/24/2023 Last updated : 3/28/2023
You can use the latest [Azure AD PowerShell](/powershell/module/azuread/?preserv
Run the following command to connect to Azure AD PowerShell. To consent to the required scopes, sign in with one of the roles listed in the prerequisite section of this article. ```powershell
-Connect-AzureAD -Scopes "Policy.ReadWrite.PermissionGrant".
+Connect-AzureAD
``` ### List the current permission classifications
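After connecting, you can list the classifications configured for an API. The following is a hedged sketch that assumes the AzureAD module's delegated permission classification cmdlets, with Microsoft Graph as an example API:

```powershell
# Look up the service principal for the API whose permission classifications you want to see.
$sp = Get-AzureADServicePrincipal -Filter "displayName eq 'Microsoft Graph'"

# List the delegated permission classifications configured on that service principal.
Get-AzureADMSServicePrincipalDelegatedPermissionClassification -ServicePrincipalId $sp.ObjectId
```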
active-directory Manage Application Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md
Previously updated : 03/16/2023 Last updated : 03/28/2023 zone_pivot_groups: enterprise-apps-all
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Previously updated : 03/24/2023 Last updated : 03/28/2023
Sign-ins are aggregated in the non-interactive users when the following data mat
- Status - Resource ID
-The IP address of non-interactive sign-ins doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
+> [!NOTE]
+> The IP address of non-interactive sign-ins performed by [confidential clients](../develop/msal-client-applications.md) doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance.
### Service principal sign-ins
active-directory Alinto Protect Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/alinto-protect-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
![Provisioning tab automatic](common/provisioning-automatic.png)
-1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.eu/api/v3/scim2 ` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
+1. In the **Admin Credentials** section, input your Alinto Protect Tenant URL as `https://cloud.cleanmail.{Domain}/api/v3/scim2` and corresponding Secret Token obtained from Step 2. Click **Test Connection** to ensure Azure AD can connect to Alinto Protect. If the connection fails, ensure your Alinto Protect account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
+ >[!NOTE]
+ >In the Tenant URL, **{Domain}** will be the country code top-level domain. For example, if the country is US, then the Tenant URL will be `https://cloud.cleanmail.com/api/v3/scim2`
+ 1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ![Notification Email](common/provisioning-notification-email.png)
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true).
- ## Configure and test Azure AD SSO for AWS IAM Identity Center Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
active-directory Bis Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bis-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure BIS for automatic user provisioning with Azure Active Directory'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to BIS.
++
+writer: twimmers
+
+ms.assetid: d76e2482-4228-4907-8b4c-c75aa495a2ae
++++ Last updated : 03/24/2023+++
+# Tutorial: Configure BIS for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both BIS and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users to [BIS](https://www.trainanddevelop.c).
++
+## Supported capabilities
+> [!div class="checklist"]
+> * Create users in BIS.
+> * Remove users in BIS when they no longer require access.
+> * Keep user attributes synchronized between Azure AD and BIS.
+> * [Single sign-on](bis-tutorial.md) to BIS (recommended).
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md)
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator).
+* An administrator account with BIS.
+* Region and country values must be passed as a 2- or 3-letter code, not the full name.
+* Make sure all existing accounts in BIS have data in sync with Azure AD to avoid duplicate account creation (for example, the email in Azure AD should match the email in BIS).
+
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and BIS](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure BIS to support provisioning with Azure AD
+To get your credentials for authorization, contact [BIS Support](mailto:help@bistrainer.com) or your account manager.
+
+## Step 3. Add BIS from the Azure AD application gallery
+
+Add BIS from the Azure AD application gallery to start managing provisioning to BIS. If you have previously set up BIS for SSO, you can use the same application. However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
++
+## Step 5. Configure automatic user provisioning to BIS
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in BIS based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for BIS in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png)
+
+1. In the applications list, select **BIS**.
+
+ ![Screenshot of the BIS link in the Applications list.](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Screenshot of Provisioning tab.](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Screenshot of setting Provisioning Mode to automatic.](common/provisioning-automatic.png)
+
+1. Under the **Admin Credentials** section, input your BIS Tenant URL as `https://www.bistrainer.com/scim` and corresponding Secret Token. Click **Test Connection** to ensure Azure AD can connect to BIS. If the connection fails, ensure your BIS account has Admin permissions and try again.
+
+ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png)
+
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Screenshot of Notification Email.](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to BIS**.
+
+1. Review the user attributes that are synchronized from Azure AD to BIS in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in BIS for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the BIS API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by BIS|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |active|Boolean||&check;
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |addresses[type eq "work"].streetAddress|String||
+ |addresses[type eq "work"].locality|String||
+ |addresses[type eq "work"].region|String||&check;
+ |addresses[type eq "work"].postalCode|String||
+ |addresses[type eq "work"].country|String||&check;
+ |phoneNumbers[type eq "work"].value|String||
+ |phoneNumbers[type eq "mobile"].value|String||
+ |externalId|String||
+ |title|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String||&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String||
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ |name.middleName|String||
+ |urn:ietf:params:scim:schemas:extension:BIS:2.0:User:location|String||
+ |urn:ietf:params:scim:schemas:extension:BIS:2.0:User:startdate|DateTime||
+ |urn:ietf:params:scim:schemas:extension:BIS:2.0:User:terminationdate|DateTime||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for BIS, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png)
+
+1. Define the users that you would like to provision to BIS by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png)
+
+1. When you're ready to provision, click **Save**.
+
+ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully.
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion.
+* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Teamzskill Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/teamzskill-tutorial.md
Title: "Tutorial: Azure Active Directory single sign-on (SSO) integration with TeamzSkill"
-description: Learn how to configure single sign-on between Azure Active Directory and TeamzSkill.
+ Title: Azure Active Directory SSO integration with RevSpace
+description: Learn how to configure single sign-on between Azure Active Directory and RevSpace.
Previously updated : 11/21/2022 Last updated : 03/28/2023
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with TeamzSkill
+# Tutorial: Azure Active Directory SSO integration with RevSpace
-In this tutorial, you'll learn how to integrate TeamzSkill with Azure Active Directory (Azure AD). When you integrate TeamzSkill with Azure AD, you can:
+In this tutorial, you learn how to integrate RevSpace with Azure Active Directory (Azure AD). When you integrate RevSpace with Azure AD, you can:
-- Control in Azure AD who has access to TeamzSkill.-- Enable your users to be automatically signed-in to TeamzSkill with their Azure AD accounts.-- Manage your accounts in one central location - the Azure portal.
+* Control in Azure AD who has access to RevSpace.
+* Enable your users to be automatically signed-in to RevSpace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To get started, you need the following items: -- An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).-- TeamzSkill single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* RevSpace single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment. -- TeamzSkill supports **SP and IDP** initiated SSO-- TeamzSkill supports **Just In Time** user provisioning
+* RevSpace supports **SP and IDP** initiated SSO.
+* RevSpace supports **Just In Time** user provisioning.
-## Adding TeamzSkill from the gallery
+## Adding RevSpace from the gallery
-To configure the integration of TeamzSkill into Azure AD, you need to add TeamzSkill from the gallery to your list of managed SaaS apps.
+To configure the integration of RevSpace into Azure AD, you need to add RevSpace from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **TeamzSkill** in the search box.
-1. Select **TeamzSkill** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **RevSpace** in the search box.
+1. Select **RevSpace** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+ Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
-## Configure and test Azure AD SSO for TeamzSkill
+## Configure and test Azure AD SSO for RevSpace
-Configure and test Azure AD SSO with TeamzSkill using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TeamzSkill.
+Configure and test Azure AD SSO with RevSpace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RevSpace.
-To configure and test Azure AD SSO with TeamzSkill, perform the following steps:
+To configure and test Azure AD SSO with RevSpace, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure TeamzSkill SSO](#configure-teamzskill-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create TeamzSkill test user](#create-teamzskill-test-user)** - to have a counterpart of B.Simon in TeamzSkill that is linked to the Azure AD representation of user.
+1. **[Configure RevSpace SSO](#configure-revspace-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RevSpace test user](#create-revspace-test-user)** - to have a counterpart of B.Simon in RevSpace that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **TeamzSkill** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **RevSpace** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings. ![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
+1. On the **Basic SAML Configuration** section, perform the following steps:
a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<CUSTOMER_SUBDOMAIN>.teamzskill.com/login/callback`
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<CUSTOMER_SUBDOMAIN>.teamzskill.com/login/callback`
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<CUSTOMER_SUBDOMAIN>.teamzskill.com/login/callback`
+ In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<CUSTOMER_SUBDOMAIN>.revspace.io/login/callback`
> [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [TeamzSkill Client support team](mailto:support@teamzskill.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [RevSpace Client support team](mailto:support@revspace.io) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. TeamzSkill application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. RevSpace application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
![image](common/default-attributes.png)
+1. In addition to the above, the RevSpace application expects a few more attributes to be passed back in the SAML response, as shown below. These attributes are also prepopulated, but you can review them per your requirements.
+1. In addition to above, RevSpace application expects few more attributes to be passed back in SAML response, which are shown below. These attributes are also pre populated but you can review them as per your requirements.
| Name | Source Attribute | | - | |
Follow these steps to enable Azure AD SSO in the Azure portal.
| role | user.assignedroles | > [!NOTE]
- > TeamzSkill expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
+ > RevSpace expects roles for users assigned to the application. Please set up these roles in Azure AD so that users can be assigned the appropriate roles. To understand how to configure roles in Azure AD, see [here](../develop/howto-add-app-roles-in-azure-ad-apps.md#app-roles-ui).
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![The Certificate download link](common/metadataxml.png)
-1. On the **Set up TeamzSkill** section, copy the appropriate URL(s) based on your requirement.
+1. On the **Set up RevSpace** section, copy the appropriate URL(s) based on your requirement.
![Copy configuration URLs](common/copy-configuration-urls.png) ### Create an Azure AD test user
-In this section, you'll create a test user in the Azure portal called B.Simon.
+In this section, you create a test user in the Azure portal called B.Simon.
1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**. 1. Select **New user** at the top of the screen.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TeamzSkill.
+In this section, you enable B.Simon to use Azure single sign-on by granting access to RevSpace.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **TeamzSkill**.
+1. In the applications list, select **RevSpace**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you have set up the roles as explained above, you can select the appropriate role from the **Select a role** dropdown. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure TeamzSkill SSO
+## Configure RevSpace SSO
-1. In a different web browser window, sign into TeamzSkill as an administrator.
+1. In a different web browser window, sign into RevSpace as an administrator.
-1. click on user Profile Icon, then select **Company settings**.
+1. Click the user profile icon, then select **Company settings**.
- ![Company settings in Teamzskill](./media/teamzskill-tutorial/settings.png)
+ ![Screenshot of company settings in RevSpace.](./media/teamzskill-tutorial/settings.png)
1. Perform the following steps on the **Settings** page.
- ![settings in Teamzskill](./media/teamzskill-tutorial/metadata.png)
+ ![Screenshot of settings in RevSpace.](./media/teamzskill-tutorial/metadata.png)
a. Navigate to **Company > Single Sign-On**, then select the **Metadata Upload** tab.
- b. Paste the **Federation Metadata XML** Value, which you have copied from the Azure portal into **XML Metadata** field.
+ b. Paste the **Federation Metadata XML** value, which you copied from the Azure portal, into the **XML Metadata** field.
c. Then click **Save**.
-### Create TeamzSkill test user
+### Create RevSpace test user
-In this section, a user called B.Simon is created in TeamzSkill. TeamzSkill supports just-in-time provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in TeamzSkill, a new one is created when you attempt to access TeamzSkill.
+In this section, a user called B.Simon is created in RevSpace. RevSpace supports just-in-time provisioning, which is enabled by default. There's no action item for you in this section. If a user doesn't already exist in RevSpace, a new one is created when you attempt to access RevSpace.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated: -- Click on **Test this application** in Azure portal. This will redirect to TeamzSkill Sign on URL where you can initiate the login flow.
+* Click **Test this application** in the Azure portal. This redirects to the RevSpace sign-on URL, where you can initiate the login flow.
-- Go to TeamzSkill Sign-on URL directly and initiate the login flow from there.
+* Go to RevSpace Sign-on URL directly and initiate the login flow from there.
#### IDP initiated: -- Click on **Test this application** in Azure portal and you should be automatically signed in to the TeamzSkill for which you set up the SSO
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the RevSpace instance for which you set up SSO.
-You can also use Microsoft Access Panel to test the application in any mode. When you click the TeamzSkill tile in the Access Panel, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the TeamzSkill for which you set up the SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+You can also use Microsoft My Apps to test the application in any mode. When you click the RevSpace tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the RevSpace instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
## Next steps
-Once you configure TeamzSkill you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
+Once you configure RevSpace you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-any-app).
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
A persistent volume represents a piece of storage that has been provisioned for
This article shows you how to:
-* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disk to attach to a pod.
-* Work with a static PV by creating one or more Azure managed disk, or use an existing one and attach it to a pod.
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disks to attach to a pod.
+* Work with a static PV by creating one or more Azure managed disks, or use an existing one and attach it to a pod.
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage]. ## Before you begin -- An Azure [storage account][azure-storage-account].
+* You need an Azure [storage account][azure-storage-account].
+* Make sure you have Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
-- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].-
-The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
-
-```console
-kubectl get CSINode <nodename> -o yaml
-```
+ ```console
+ kubectl get CSINode <nodename> -o yaml
+ ```
## Dynamically provision a volume
This section provides guidance for cluster administrators who want to provision
### Built-in storage classes
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes storage classes][kubernetes-storage-classes].
Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:
-* The *default* storage class provisions a standard SSD Azure Disk.
+1. The *default* storage class provisions a standard SSD Azure Disk.
* Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
-* The *managed-csi-premium* storage class provisions a premium Azure Disk.
+1. The *managed-csi-premium* storage class provisions a premium Azure Disk.
* Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. When you use the Azure Disks CSI driver on AKS, you can also use the `managed-csi` storage class, which is backed by Standard SSD locally redundant storage (LRS).
-It's not supported to reduce the size of a PVC (to prevent data loss). You can edit an existing storage class by using the `kubectl edit sc` command, or you can create your own custom storage class.
-
-For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting].
-
-For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
+Reducing the size of a PVC isn't supported (to prevent data loss). You can edit an existing storage class using the `kubectl edit sc` command, or you can create your own custom storage class. For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting]. A sketch of such a class follows. For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
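As a hedged sketch, a custom storage class that disables host caching might look like the following (the class name is hypothetical; `skuName` and `cachingmode` are Azure Disk CSI driver parameters):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-nocache        # hypothetical class name
provisioner: disk.csi.azure.com    # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS             # disk SKU to provision
  cachingmode: None                # required for disks 4 TiB and larger
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```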
Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-created storage classes available within an AKS cluster:
managed-csi disk.csi.azure.com 1h
A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
-Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: azure-managed-disk
-spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: managed-csi
- resources:
- requests:
- storage: 5Gi
-```
+1. Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
-> [!TIP]
-> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azure-managed-disk
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: managed-csi
+ resources:
+ requests:
+ storage: 5Gi
+ ```
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
+ > [!TIP]
+ > To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
-```bash
-kubectl apply -f azure-pvc.yaml
-```
+2. Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
-The output of the command resembles the following example:
+ ```bash
+ kubectl apply -f azure-pvc.yaml
+ ```
-```console
-persistentvolumeclaim/azure-managed-disk created
-```
+ The output of the command resembles the following example:
+
+ ```console
+ persistentvolumeclaim/azure-managed-disk created
+ ```
### Use the persistent volume Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-managed-disk
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```console
-kubectl apply -f azure-pvc-disk.yaml
-```
+1. Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest:
-The output of the command resembles the following example:
-
-```console
-pod/mypod created
-```
-
-You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command, as shown in the following condensed example:
-
-```bash
-kubectl describe pod mypod
-```
-
-The output of the command resembles the following example:
-
-```console
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: azure-managed-disk
- ReadOnly: false
- default-token-smm2n:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-smm2n
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- - - - -
- Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
-[...]
-```
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: azure-managed-disk
+ ```
+
+2. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+ ```console
+ kubectl apply -f azure-pvc-disk.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ pod/mypod created
+ ```
+
+3. You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command, as shown in the following condensed example:
+
+ ```bash
+ kubectl describe pod mypod
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ [...]
+ Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: azure-managed-disk
+ ReadOnly: false
+ default-token-smm2n:
+ Type: Secret (a volume populated by a Secret)
+ SecretName: default-token-smm2n
+ Optional: false
+ [...]
+ Events:
+ Type Reason Age From Message
+ - - - -
+ Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
+ Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
+ Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
+ [...]
+ ```
### Use Azure ultra disks
To use Azure ultra disk, see [Use ultra disks on Azure Kubernetes Service (AKS)]
To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.
-First, get the volume name with the [kubectl get][kubectl-get] command, such as for the PVC named *azure-managed-disk*:
+1. Get the volume name with the [kubectl get][kubectl-get] command, such as for the PVC named *azure-managed-disk*:
-```bash
-kubectl get pvc azure-managed-disk
-```
+ ```bash
+ kubectl get pvc azure-managed-disk
+ ```
-The output of the command resembles the following example:
+ The output of the command resembles the following example:
-```console
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-premium 3m
-```
+ ```console
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-premium 3m
+ ```
-This volume name forms the underlying Azure disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
+2. This volume name forms the underlying Azure disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
-```azurecli
-az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
-
-/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/MicrosoftCompute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
+ ```azurecli
+ az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
-Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster *MC_myResourceGroup_myAKSCluster_eastus*. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.
+ /subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+ ```
-```azurecli
-az snapshot create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name pvcSnapshot \
- --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/MicrosoftCompute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
+3. Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster *MC_myResourceGroup_myAKSCluster_eastus*. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to. Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
-Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
+ ```azurecli
+ az snapshot create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name pvcSnapshot \
+ --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+ ```
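Before restoring, you can optionally confirm that the snapshot finished provisioning with `az snapshot show`. A minimal sketch using the names from the previous step:

```azurecli
az snapshot show \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name pvcSnapshot \
    --query provisioningState -o tsv
```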
### Restore and use a snapshot
-To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
-
-```azurecli
-az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
-```
-
-To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
-
-```azurecli
-az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
-```
-
-Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypodrestored
-spec:
- containers:
- - name: mypodrestored
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- azureDisk:
- kind: Managed
- diskName: pvcRestored
- diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```bash
-kubectl apply -f azure-restored.yaml
-```
-
-The output of the command resembles the following example:
-
-```console
-pod/mypodrestored created
-```
-
-You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
-
-```bash
-kubectl describe pod mypodrestored
-```
-
-The output of the command resembles the following example:
-
-```console
-[...]
-Volumes:
- volume:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: pvcRestored
- DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
-[...]
-```
+1. To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
+
+ ```azurecli
+ az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
+ ```
+
+2. To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
+
+ ```azurecli
+ az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
+ ```
+
+3. Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
+
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypodrestored
+ spec:
+ containers:
+ - name: mypodrestored
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ azureDisk:
+ kind: Managed
+ diskName: pvcRestored
+ diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+ ```
+
+4. Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+ ```bash
+ kubectl apply -f azure-restored.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ pod/mypodrestored created
+ ```
+
+5. You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
+
+ ```bash
+ kubectl describe pod mypodrestored
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ [...]
+ Volumes:
+ volume:
+ Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
+ DiskName: pvcRestored
+ DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+ Kind: Managed
+ FSType: ext4
+ CachingMode: ReadWrite
+ ReadOnly: false
+ [...]
+ ```
### Using Azure tags
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Last updated 03/02/2023
It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which Image Cleaner can mitigate via automatic image identification and removal. > [!NOTE]
-> Image Cleaner is a feature based on [Eraser](https://github.com/Azure/eraser).
+> Image Cleaner is a feature based on [Eraser](https://azure.github.io/eraser).
> On an AKS cluster, the feature name and property name is `Image Cleaner` while the relevant Image Cleaner pods' names contain `Eraser`. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
aks Scale Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-cluster.md
Title: Scale an Azure Kubernetes Service (AKS) cluster description: Learn how to scale the number of nodes in an Azure Kubernetes Service (AKS) cluster. Previously updated : 06/29/2022 Last updated : 03/27/2023 # Scale the node count in an Azure Kubernetes Service (AKS) cluster
If the resource needs of your applications change, your cluster performance may
### [Azure CLI](#tab/azure-cli)
-First, get the *name* of your node pool using the [az aks show][az-aks-show] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
-
-```azurecli-interactive
-az aks show --resource-group myResourceGroup --name myAKSCluster --query agentPoolProfiles
-```
-
-The following example output shows that the *name* is *nodepool1*:
-
-```output
-[
- {
- "count": 1,
- "maxPods": 110,
- "name": "nodepool1",
- "osDiskSizeGb": 30,
- "osType": "Linux",
- "storageProfile": "ManagedDisks",
- "vmSize": "Standard_DS2_v2"
- }
-]
-```
-
-Use the [az aks scale][az-aks-scale] command to scale the cluster nodes. The following example scales a cluster named *myAKSCluster* to a single node. Provide your own `--nodepool-name` from the previous command, such as *nodepool1*:
-
-```azurecli-interactive
-az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 1 --nodepool-name <your node pool name>
-```
-
-The following example output shows the cluster has successfully scaled to one node, as shown in the *agentPoolProfiles* section:
-
-```json
-{
- "aadProfile": null,
- "addonProfiles": null,
- "agentPoolProfiles": [
+1. Get the *name* of your node pool using the [`az aks show`][az-aks-show] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
+
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query agentPoolProfiles
+ ```
+
+ The following example output shows that the *name* is *nodepool1*:
+
+ ```output
+ [
+ {
+ "count": 1,
+ "maxPods": 110,
+ "name": "nodepool1",
+ "osDiskSizeGb": 30,
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "vmSize": "Standard_DS2_v2"
+ }
+ ]
+ ```
+
+2. Scale the cluster nodes using the [`az aks scale`][az-aks-scale] command. The following example scales a cluster named *myAKSCluster* to a single node. Provide your own `--nodepool-name` from the previous command, such as *nodepool1*:
+
+ ```azurecli-interactive
+ az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 1 --nodepool-name <your node pool name>
+ ```
+
+ The following example output shows the cluster has successfully scaled to one node, as shown in the *agentPoolProfiles* section:
+
+ ```json
{
- "count": 1,
- "maxPods": 110,
- "name": "nodepool1",
- "osDiskSizeGb": 30,
- "osType": "Linux",
- "storageProfile": "ManagedDisks",
- "vmSize": "Standard_DS2_v2",
- "vnetSubnetId": null
+ "aadProfile": null,
+ "addonProfiles": null,
+ "agentPoolProfiles": [
+ {
+ "count": 1,
+ "maxPods": 110,
+ "name": "nodepool1",
+ "osDiskSizeGb": 30,
+ "osType": "Linux",
+ "storageProfile": "ManagedDisks",
+ "vmSize": "Standard_DS2_v2",
+ "vnetSubnetId": null
+ }
+ ],
+ [...]
}
- ],
- [...]
-}
-```
+ ```
### [Azure PowerShell](#tab/azure-powershell)
-First, get the *name* of your node pool using the [Get-AzAksCluster][get-azakscluster] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
-
-```azurepowershell-interactive
-Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
- Select-Object -ExpandProperty AgentPoolProfiles
-```
-
-The following example output shows that the *name* is *nodepool1*:
-
-```Output
-Name : nodepool1
-Count : 1
-VmSize : Standard_D2_v2
-OsDiskSizeGB : 128
-VnetSubnetID :
-MaxPods : 30
-OsType : Linux
-MaxCount :
-MinCount :
-Mode : System
-EnableAutoScaling :
-Type : VirtualMachineScaleSets
-OrchestratorVersion : 1.23.3
-ProvisioningState : Succeeded
-...
-```
-
-Use the [Set-AzAksCluster][set-azakscluster] command to scale the cluster nodes. The following example scales a cluster named *myAKSCluster* to a single node. Provide your own `-NodeName` from the previous command, such as *nodepool1*:
-
-```azurepowershell-interactive
-Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -NodeName <your node pool name>
-```
-
-The following example output shows the cluster has successfully scaled to one node, as shown in the *AgentPoolProfiles* property:
-
-```Output
-Name : nodepool1
-Count : 1
-VmSize : Standard_D2_v2
-OsDiskSizeGB : 128
-VnetSubnetID :
-MaxPods : 30
-OsType : Linux
-MaxCount :
-MinCount :
-Mode : System
-EnableAutoScaling :
-Type : VirtualMachineScaleSets
-OrchestratorVersion : 1.23.3
-ProvisioningState : Succeeded
-...
-```
+1. Get the *name* of your node pool using the [`Get-AzAksCluster`][get-azakscluster] command. The following example gets the node pool name for the cluster named *myAKSCluster* in the *myResourceGroup* resource group:
+
+ ```azurepowershell-interactive
+ Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster |
+ Select-Object -ExpandProperty AgentPoolProfiles
+ ```
+
+ The following example output shows that the *name* is *nodepool1*:
+
+ ```output
+ Name : nodepool1
+ Count : 1
+ VmSize : Standard_D2_v2
+ OsDiskSizeGB : 128
+ VnetSubnetID :
+ MaxPods : 30
+ OsType : Linux
+ MaxCount :
+ MinCount :
+ Mode : System
+ EnableAutoScaling :
+ Type : VirtualMachineScaleSets
+ OrchestratorVersion : 1.23.3
+ ProvisioningState : Succeeded
+ ...
+ ```
+
+2. Scale the cluster nodes using the [Set-AzAksCluster][set-azakscluster] command. The following example scales a cluster named *myAKSCluster* to a single node. Provide your own `-NodeName` from the previous command, such as *nodepool1*:
+
+ ```azurepowershell-interactive
+ Set-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -NodeName <your node pool name>
+ ```
+
+ The following example output shows the cluster has successfully scaled to one node, as shown in the *AgentPoolProfiles* property:
+
+ ```output
+ Name : nodepool1
+ Count : 1
+ VmSize : Standard_D2_v2
+ OsDiskSizeGB : 128
+ VnetSubnetID :
+ MaxPods : 30
+ OsType : Linux
+ MaxCount :
+ MinCount :
+ Mode : System
+ EnableAutoScaling :
+ Type : VirtualMachineScaleSets
+ OrchestratorVersion : 1.23.3
+ ProvisioningState : Succeeded
+ ...
+ ```
Unlike `System` node pools that always require running nodes, `User` node pools
### [Azure CLI](#tab/azure-cli)
-To scale a user pool to 0, you can use the [az aks nodepool scale][az-aks-nodepool-scale] in alternative to the above `az aks scale` command, and set 0 as your node count.
+* To scale a user pool to 0, you can use the [az aks nodepool scale][az-aks-nodepool-scale] command as an alternative to the above `az aks scale` command, and set 0 as your node count.
-```azurecli-interactive
-az aks nodepool scale --name <your node pool name> --cluster-name myAKSCluster --resource-group myResourceGroup --node-count 0
-```
+ ```azurecli-interactive
+ az aks nodepool scale --name <your node pool name> --cluster-name myAKSCluster --resource-group myResourceGroup --node-count 0
+ ```
-You can also autoscale `User` node pools to 0 nodes, by setting the `--min-count` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0.
+* You can also autoscale `User` node pools to 0 nodes by setting the `--min-count` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0, as shown in the sketch below.
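  A minimal sketch of enabling the autoscaler on an existing user node pool with a minimum of zero nodes (the resource group, cluster, and pool names are assumptions):

  ```azurecli-interactive
  az aks nodepool update \
      --resource-group myResourceGroup \
      --cluster-name myAKSCluster \
      --name <your node pool name> \
      --enable-cluster-autoscaler \
      --min-count 0 \
      --max-count 3
  ```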
### [Azure PowerShell](#tab/azure-powershell)
-To scale a user pool to 0, you can use the [Update-AzAksNodePool][update-azaksnodepool] in alternative to the above `Set-AzAksCluster` command, and set 0 as your node count.
+* To scale a user pool to 0, you can use the [Update-AzAksNodePool][update-azaksnodepool] cmdlet as an alternative to the above `Set-AzAksCluster` command, and set 0 as your node count.
-```azurepowershell-interactive
-Update-AzAksNodePool -Name <your node pool name> -ClusterName myAKSCluster -ResourceGroupName myResourceGroup -NodeCount 0
-```
+ ```azurepowershell-interactive
+ Update-AzAksNodePool -Name <your node pool name> -ClusterName myAKSCluster -ResourceGroupName myResourceGroup -NodeCount 0
+ ```
-You can also autoscale `User` node pools to 0 nodes, by setting the `-NodeMinCount` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0.
+* You can also autoscale `User` node pools to 0 nodes by setting the `-NodeMinCount` parameter of the [Cluster Autoscaler](cluster-autoscaler.md) to 0.
In this article, you manually scaled an AKS cluster to increase or decrease the
[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ <!-- LINKS - internal -->
-[aks-tutorial]: ./tutorial-kubernetes-prepare-app.md
[az-aks-show]: /cli/azure/aks#az_aks_show [get-azakscluster]: /powershell/module/az.aks/get-azakscluster [az-aks-scale]: /cli/azure/aks#az_aks_scale
aks Workload Identity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md
This article helps you understand this new authentication feature, and reviews t
- You can only have 20 federated identity credentials per managed identity. - It takes a few seconds for the federated identity credential to be propagated after being initially added.
+## Language SDK examples
+ - [Azure Identity SDK](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/azure-identity-sdk.html)
+ - [MSAL](https://azure.github.io/azure-workload-identity/docs/topics/language-specific-examples/msal.html)
+ ## How it works In this security model, the AKS cluster acts as the token issuer. Azure Active Directory uses OpenID Connect to discover public signing keys and verify the authenticity of the service account token before exchanging it for an Azure AD token. Your workload can exchange a service account token projected to its volume for an Azure AD token using the Azure Identity client library or the Microsoft Authentication Library.
The following diagram summarizes the authentication sequence using OpenID Connec
:::image type="content" source="media/workload-identity-overview/aks-workload-identity-oidc-authentication-model.png" alt-text="Diagram of the AKS workload identity OIDC authentication sequence.":::
+### Webhook Certificate Auto Rotation
+
+Like other webhook add-ons, the webhook certificate is rotated by the cluster certificate [auto rotation](https://learn.microsoft.com/azure/aks/certificate-rotation#certificate-auto-rotation) operation.
+ ## Service account labels and annotations Azure AD workload identity supports the following mappings related to a service account:
analysis-services Analysis Services Create Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-create-terraform.md
Title: 'Quickstart: Create an Azure Analysis Services server using Terraform' description: 'In this article, you create an Azure Analysis Services server using Terraform' -+ Last updated 3/10/2023
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The following table compares features available in the managed gateway versus th
| [CA root certificates](api-management-howto-ca-certificates.md) for certificate validation | ✔️ | ❌ | ✔️<sup>3</sup> | | [Managed domain certificates](configure-custom-domain.md?tabs=managed#domain-certificate-options) | ✔️ | ✔️ | ❌ | | [TLS settings](api-management-howto-manage-protocols-ciphers.md) | ✔️ | ✔️ | ✔️ |
+| **HTTP/2** (Client-to-gateway) | ❌ | ❌ | ✔️ |
+| **HTTP/2** (Gateway-to-backend) | ❌ | ❌ | ✔️ |
<sup>1</sup> Depends on how the gateway is deployed, but is the responsibility of the customer.<br/> <sup>2</sup> Connectivity to the self-hosted gateway v2 [configuration endpoint](self-hosted-gateway-overview.md#fqdn-dependencies) requires DNS resolution of the default endpoint hostname; custom domain name is currently not supported.<br/>
The following table compares features available in the managed gateway versus th
| [Service Fabric](../service-fabric/service-fabric-api-management-overview.md) | Developer, Premium | ❌ | ❌ | | [Passthrough GraphQL](graphql-api.md) | ✔️ | ✔️<sup>1</sup> | ❌ | | [Synthetic GraphQL](graphql-schema-resolve-api.md) | ✔️ | ❌ | ❌ |
-| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ❌ |
+| [Passthrough WebSocket](websocket-api.md) | ✔️ | ❌ | ✔️ |
<sup>1</sup> GraphQL subscriptions aren't supported in the Consumption tier.
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
We provide a variety of container images for self-hosted gateways to meet your n
| `v{major}` | Use this tag to always run a major version of the gateway with every new feature and patch. |`v2` | ✔️ | ❌ | | `v{major}-preview` | Use this tag if you always want to run our latest preview container image. | `v2-preview` | ✔️ | ❌ | | `latest` | Use this tag if you want to evaluate the self-hosted gateway. | `latest` | ✔️ | ❌ |
+| `beta`<sup>1</sup> | Use this tag if you want to evaluate preview versions of the self-hosted gateway. | `beta` | ✔️ | ❌ |
You can find a full list of available tags [here](https://mcr.microsoft.com/product/azure-api-management/gateway/tags).
+<sup>1</sup>Preview versions are not officially supported and are for experimental purposes only.<br/>
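For example, evaluating a preview build is as simple as pulling the image by its `beta` tag. A minimal sketch (the image path is an assumption based on the registry page linked above):

```bash
docker pull mcr.microsoft.com/azure-api-management/gateway:beta
```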
+ ### Use of tags in our official deployment options Our deployment options in the Azure portal use the `v2` tag that allows customers to use the most recent version of the self-hosted gateway v2 container image with all feature updates and patches.
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
Use this option unless you need to create an app registration separately. It mak
1. Select **Authentication** in the menu on the left. Click **Add identity provider**. 1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration or the supported account types.
- A client secret will be created and stored as a slot-sticky [application setting](./configure-common.md#configure-app-settings) named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
+ A client secret will be created and stored as a slot-sticky [application setting] named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
To register the app, perform the following steps:
|Issuer Url| Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with the **Directory (tenant) ID** in which the app registration was created. This value is used to redirect users to the correct Azure AD tenant, as well as to download the appropriate metadata to determine the appropriate token signing keys and token issuer claim value for example. For applications that use Azure AD v1, omit `/v2.0` in the URL.| |Allowed Token Audiences| The configured **Application (client) ID** is *always* implicitly considered to be an allowed audience. If this is a cloud or server app and you want to accept authentication tokens from a client App Service app (the authentication token can be retrieved in the [X-MS-TOKEN-AAD-ID-TOKEN](configure-authentication-oauth-tokens.md#retrieve-tokens-in-app-code)) header, add the **Application (client) ID** of the client app here. |
- The client secret will be stored as a slot-sticky [application setting](./configure-common.md#configure-app-settings) named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
+ The client secret will be stored as a slot-sticky [application setting] named `MICROSOFT_PROVIDER_AUTHENTICATION_SECRET`. You can update that setting later to use [Key Vault references](./app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
1. If this is the first identity provider configured for the application, you will also be prompted with an **App Service authentication settings** section. Otherwise, you may move on to the next step.
To register the app, perform the following steps:
You're now ready to use the Microsoft identity platform for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
-## Add customized authorization policy
+## Authorize requests
+
+By default, App Service Authentication only handles authentication, determining if the caller is who they say they are. Authorization, determining if that caller should have access to some resource, is an additional step beyond authentication. You can learn more about these concepts from [Microsoft identity platform authorization basics](../active-directory/develop/authorization-basics.md).
+
+Your app can [make authorization decisions in code](#perform-validations-from-application-code). App Service Authentication does provide some [built-in checks](#use-a-built-in-authorization-policy) that can help, but they may not be sufficient on their own to cover the authorization needs of your app.
+
+> [!TIP]
+> Multi-tenant applications should validate the issuer and tenant ID of the request as part of this process to make sure the values are allowed. When App Service Authentication is configured for a multi-tenant scenario, it does not validate which tenant the request comes from. For example, an app may need to be limited to specific tenants based on whether the organization has signed up for the service. See the [Microsoft identity platform multi-tenant guidance](../active-directory/develop/howto-convert-app-to-be-multi-tenant.md#update-your-code-to-handle-multiple-issuer-values).
+
+### Perform validations from application code
+
+When you perform authorization checks in your app code, you can leverage the [claims information that App Service Authentication makes available](./configure-authentication-user-identities.md#access-user-claims-in-app-code). The injected `x-ms-client-principal` header contains a Base64-encoded JSON object with the claims asserted about the caller. By default, these claims go through a claims mapping, so the claim names may not always match what you would see in the token. For example, the `tid` claim is mapped to `http://schemas.microsoft.com/identity/claims/tenantid` instead.
+
+You can also work directly with the underlying access token from the injected `x-ms-token-aad-access-token` header.
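As a quick illustration, you can decode a captured header value from a shell to inspect the mapped claims. A minimal sketch, assuming `base64` and `jq` are available and the raw header value is stored in a variable:

```bash
# CLIENT_PRINCIPAL holds the raw x-ms-client-principal header value
# Print every claim, then filter for the mapped tenant ID claim
echo "$CLIENT_PRINCIPAL" | base64 --decode | jq '.claims[]'
echo "$CLIENT_PRINCIPAL" | base64 --decode | jq '.claims[] | select(.typ | endswith("tenantid"))'
```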
+
+### Use a built-in authorization policy
The created app registration authenticates incoming requests for your Azure AD tenant. By default, it also lets anyone within the tenant access the application, which is fine for many applications. However, some applications need to restrict access further by making authorization decisions. Your application code is often the best place to handle custom authorization logic. However, for common scenarios, the Microsoft identity platform provides built-in checks that you can use to limit access.
Within the API object, the Azure Active Directory identity provider configuratio
| `allowedPrincipals` | A grouping of checks that determine if the principal represented by the incoming request may access the app. Satisfaction of `allowedPrincipals` is based on a logical `OR` over its configured properties. | | `identities` (under `allowedPrincipals`) | An allowlist of string **object IDs** representing users or applications that have access. When this property is configured as a nonempty array, the `allowedPrincipals` requirement can be satisfied if the user or application represented by the request is specified in the list.<br/><br/>This policy evaluates the `oid` claim of the incoming token. See the [Microsoft Identity Platform claims reference]. |
+Additionally, some checks can be configured through an [application setting], regardless of the API version being used. The `WEBSITE_AUTH_AAD_ALLOWED_TENANTS` application setting can be configured with a comma-separated list of up to 10 tenant IDs (e.g., "559a2f9c-c6f2-4d31-b8d6-5ad1a13f8330,5693f64a-3ad5-4be7-b846-e9d1141bcebc") to require that the incoming token is from one of the specified tenants, as specified by the `tid` claim. The `WEBSITE_AUTH_AAD_REQUIRE_CLIENT_SERVICE_PRINCIPAL` application setting can be configured to "true" or "1" to require the incoming token to include an `oid` claim. This setting is ignored and treated as true if `allowedPrincipals.identities` has been configured (since the `oid` claim is checked against this provided list of identities).
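For example, a minimal sketch of setting the tenant allowlist with the Azure CLI (the app name and resource group are assumptions; the tenant ID is the sample value above):

```azurecli
az webapp config appsettings set \
    --resource-group myResourceGroup \
    --name myWebApp \
    --settings WEBSITE_AUTH_AAD_ALLOWED_TENANTS="559a2f9c-c6f2-4d31-b8d6-5ad1a13f8330"
```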
+ Requests that fail these built-in checks are given an HTTP `403 Forbidden` response. [Microsoft Identity Platform claims reference]: ../active-directory/develop/access-tokens.md#payload-claims
Regardless of the configuration you use to set up authentication, the following
<!-- URLs. --> [Azure portal]: https://portal.azure.com/
+[application setting]: ./configure-common.md#configure-app-settings
app-service Configure Authentication User Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-user-identities.md
Last updated 03/29/2021
# Work with user identities in Azure App Service authentication
-This article shows you how to work with user identities when using the the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
+This article shows you how to work with user identities when using the built-in [authentication and authorization in App Service](overview-authentication-authorization.md).
## Access user claims in app code For all language frameworks, App Service makes the claims in the incoming token (whether from an authenticated end user or a client application) available to your code by injecting them into the request headers. External requests aren't allowed to set these headers, so they are present only if set by App Service. Some example headers include:
-* X-MS-CLIENT-PRINCIPAL-NAME
-* X-MS-CLIENT-PRINCIPAL-ID
+| Header | Description |
+|--|--|
+| `X-MS-CLIENT-PRINCIPAL` | A Base64 encoded JSON representation of available claims. See [Decoding the client principal header](#decoding-the-client-principal-header) for more information. |
+| `X-MS-CLIENT-PRINCIPAL-ID` | An identifier for the caller set by the identity provider. |
+| `X-MS-CLIENT-PRINCIPAL-NAME` | A human-readable name for the caller set by the identity provider. |
+| `X-MS-CLIENT-PRINCIPAL-IDP` | The name of the identity provider used by App Service Authentication. |
-Code that is written in any language or framework can get the information that it needs from these headers.
+Provider tokens are also exposed through similar headers. For example, the Microsoft Identity Provider also sets `X-MS-TOKEN-AAD-ACCESS-TOKEN` and `X-MS-TOKEN-AAD-ID-TOKEN` as appropriate.
> [!NOTE] > Different language frameworks may present these headers to the app code in different formats, such as lowercase or title case.
+Code that is written in any language or framework can get the information that it needs from these headers. [Decoding the client principal header](#decoding-the-client-principal-header) covers this process. For some frameworks, the platform also provides additional options that may be more convenient.
+
+### Decoding the client principal header
+
+`X-MS-CLIENT-PRINCIPAL` contains the full set of available claims as Base64 encoded JSON. These claims go through a default claims-mapping process, so some may have different names than you would see if processing the token directly. The decoded payload is structured as follows:
+
+```json
+{
+ "auth_typ": "",
+ "claims": [
+ {
+ "typ": "",
+ "val": ""
+ }
+ ],
+ "name_typ": "",
+ "role_typ": ""
+}
+```
+
+| Property | Type | Description |
+|--|--|--|
+| `auth_typ` | string | The name of the identity provider used by App Service Authentication. |
+| `claims` | array of objects | An array of objects representing the available claims. Each object contains `typ` and `val` properties. |
+| `typ` | string | The name of the claim. This may have been subject to default claims mapping and could be different from the corresponding claim contained in a token. |
+| `val` | string | The value of the claim. |
+| `name_typ` | string | The name claim type, which is typically a URI providing scheme information about the `name` claim if one is defined. |
+| `role_typ` | string | The role claim type, which is typically a URI providing scheme information about the `role` claim if one is defined. |
+
+To process this header, your app will need to decode the payload and iterate through the `claims` array to find the claims of interest. It may be convenient to convert these into a representation used by the app's language framework. Here is an example of this process in C# that constructs a [ClaimsPrincipal](/dotnet/api/system.security.claims.claimsprincipal) type for the app to use:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Security.Claims;
+using System.Text;
+using System.Text.Json;
+using System.Text.Json.Serialization;
+using Microsoft.AspNetCore.Http;
+
+public static class ClaimsPrincipalParser
+{
+ private class ClientPrincipalClaim
+ {
+ [JsonPropertyName("typ")]
+ public string Type { get; set; }
+ [JsonPropertyName("val")]
+ public string Value { get; set; }
+ }
+
+ private class ClientPrincipal
+ {
+ [JsonPropertyName("auth_typ")]
+ public string IdentityProvider { get; set; }
+ [JsonPropertyName("name_typ")]
+ public string NameClaimType { get; set; }
+ [JsonPropertyName("role_typ")]
+ public string RoleClaimType { get; set; }
+ [JsonPropertyName("claims")]
+ public IEnumerable<ClientPrincipalClaim> Claims { get; set; }
+ }
+
+ public static ClaimsPrincipal Parse(HttpRequest req)
+ {
+ var principal = new ClientPrincipal();
+
+ if (req.Headers.TryGetValue("x-ms-client-principal", out var header))
+ {
+ var data = header[0];
+ var decoded = Convert.FromBase64String(data);
+ var json = Encoding.UTF8.GetString(decoded);
+ principal = JsonSerializer.Deserialize<ClientPrincipal>(json, new JsonSerializerOptions { PropertyNameCaseInsensitive = true });
+ }
+
+ /**
+ * At this point, the code can iterate through `principal.Claims` to
+ * check claims as part of validation. Alternatively, we can convert
+ * it into a standard object with which to perform those checks later
+ * in the request pipeline. That object can also be leveraged for
+ * associating user data, etc. The rest of this function performs such
+ * a conversion to create a `ClaimsPrincipal` as might be used in
+ * other .NET code.
+ */
+
+ var identity = new ClaimsIdentity(principal.IdentityProvider);
+ identity.AddClaims(principal.Claims.Select(c => new Claim(c.Type, c.Value)));
+
+ return new ClaimsPrincipal(identity);
+ }
+}
+```
+
+### Framework-specific alternatives
+ For ASP.NET 4.6 apps, App Service populates [ClaimsPrincipal.Current](/dotnet/api/system.security.claims.claimsprincipal.current) with the authenticated user's claims, so you can follow the standard .NET code pattern, including the `[Authorize]` attribute. Similarly, for PHP apps, App Service populates the `_SERVER['REMOTE_USER']` variable. For Java apps, the claims are [accessible from the Tomcat servlet](configure-language-java.md#authenticate-users-easy-auth). For [Azure Functions](../azure-functions/functions-overview.md), `ClaimsPrincipal.Current` is not populated for .NET code, but you can still find the user claims in the request headers, or get the `ClaimsPrincipal` object from the request context or even through a binding parameter. See [working with client identities in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md#working-with-client-identities) for more information.
If the [token store](overview-authentication-authorization.md#token-store) is en
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
+> [Tutorial: Authenticate and authorize users end-to-end](tutorial-auth-aad.md)
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_id
| Provider | Sign-in endpoint | How-To guidance | | - | - | - |
-| [Microsoft Identity Platform](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` | [App Service Microsoft Identity Platform login](configure-authentication-provider-aad.md) |
+| [Microsoft identity platform](../active-directory/fundamentals/active-directory-whatis.md) | `/.auth/login/aad` | [App Service Microsoft Identity Platform login](configure-authentication-provider-aad.md) |
| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [App Service Facebook login](configure-authentication-provider-facebook.md) | | [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [App Service Google login](configure-authentication-provider-google.md) | | [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [App Service Twitter login](configure-authentication-provider-twitter.md) |
App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_id
| [Sign in with Apple](https://developer.apple.com/sign-in-with-apple/) | `/.auth/login/apple` | [App Service Sign in With Apple login (Preview)](configure-authentication-provider-apple.md) | | Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [App Service OpenID Connect login](configure-authentication-provider-openid-connect.md) |
-When you enable authentication and authorization with one of these providers, its sign-in endpoint is available for user authentication and for validation of authentication tokens from the provider. You can provide your users with any number of these sign-in options.
+When you configure this feature with one of these providers, its sign-in endpoint is available for user authentication and for validation of authentication tokens from the provider. You can provide your users with any number of these sign-in options.
## Considerations for using built-in authentication
Enabling this feature will cause all requests to your application to be automa
App Service can be used for authentication with or without restricting access to your site content and APIs. To restrict app access only to authenticated users, set **Action to take when request is not authenticated** to log in with one of the configured identity providers. To authenticate but not restrict access, set **Action to take when request is not authenticated** to "Allow anonymous requests (no action)."
-> [!NOTE]
+> [!IMPORTANT]
> You should give each app registration its own permission and consent. Avoid permission sharing between environments by using separate app registrations for separate deployment slots. When testing new code, this practice can help prevent issues from affecting the production app. ## How it works
For client browsers, App Service can automatically direct all unauthenticated us
### Authorization behavior
+> [!IMPORTANT]
+> By default, this feature only provides authentication, not authorization. Your application may still need to make authorization decisions, in addition to any checks you configure here.
+ In the [Azure portal](https://portal.azure.com), you can configure App Service with a number of behaviors when incoming request is not authenticated. The following headings describe the options. **Allow unauthenticated requests**
With this option, you don't need to write any authentication code in your app. F
> Restricting access in this way applies to all calls to your app, which may not be desirable for apps wanting a publicly available home page, as in many single-page applications. > [!NOTE]
-> By default, any user in your Azure AD tenant can request a token for your application from Azure AD. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users.
+> When using the Microsoft identity provider for users in your organization, the default behavior is that any user in your Azure AD tenant can request a token for your application. You can [configure the application in Azure AD](../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) if you want to restrict access to your app to a defined set of users. App Service also offers some [basic built-in authorization checks](.\configure-authentication-provider-aad.md#authorize-requests) which can help with some validations. To learn more about authorization in the Microsoft identity platform, see [Microsoft identity platform authorization basics](../active-directory/develop/authorization-basics.md).
### Token store
application-gateway Ssl Certificate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ssl-certificate-management.md
From the list view, you can select the certificate name or three-dot menu option
* Changing the key vault association of a certificate – You can change a certificate's reference from one key vault resource to another. When doing so, ensure the User-Assigned Managed Identity of your application gateway has sufficient access controls on the new key vault.
-* Renewal of an uploaded certificate – When an existing uploaded certificate is due for renewal, you can upload a new PFX file to update your application gateway.
+* Renewal of an uploaded certificate – When an existing uploaded certificate is due for renewal, you can upload a new PFX file to the existing certificate object of your application gateway.
* Changing the certificate type from "key vault" to "uploaded" (or vice-versa) – You can easily transition your certificate provisioning from the one stored on your Application Gateway to the purpose-built Key Vault service.
-> A change in certificate associated with multiple listeners would reflect on all the listeners.
+> A change in certificate associated with multiple listeners would reflect on all the listeners. You can view the individual listener information to identify the related listeners.
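For example, a renewed PFX can also be pushed to an existing certificate object with the Azure CLI. A minimal sketch (the resource group, gateway, certificate, and file names are assumptions):

```azurecli
az network application-gateway ssl-cert update \
    --resource-group myResourceGroup \
    --gateway-name myAppGateway \
    --name mySslCert \
    --cert-file ./renewed-cert.pfx \
    --cert-password "<pfx-password>"
```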
### Deletion of an SSL certificate
There are two primary scenarios when deleting a certificate from portal:
| Port | The port associated with the listener gets updated to reflect the new state. | | Frontend IP | The frontend IP of the gateway gets updated to reflect the new state. |
+### Bulk update
+The bulk operation feature is helpful for large gateways that have multiple SSL certificates for separate listeners. As with individual certificate management, this option allows you to change the type from "Uploaded" to "Key Vault" or vice-versa. This utility is also helpful for recovering a gateway when multiple certificate objects are misconfigured at the same time.
+
+To use the Bulk update option:
+1. Choose the certificates to be updated using the checkboxes and select the "Bulk update" menu option.
+
+1. On the next page, modify the settings for each certificate as needed. The options shown for Steps 2 and 3 depend on your selection in Step 1, so work through each certificate row in order. The certificates listed here reflect your selection; you may use the three-dot menu option to remove a wrongly selected certificate from the list.
+
+1. Once all the settings are updated, select **Save**.
+
+> [!NOTE]
+> Be aware of the listeners associated with each certificate when making a bulk change. Depending on your configuration, this single operation could update multiple certificates and many more listeners. Refer to the individual certificate information blade to identify the related listeners.
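The bulk update experience is portal-only, but a comparable change can be scripted if needed. A rough sketch that repoints several certificate objects to Key Vault secrets in a loop (all names and the secret URIs are assumptions):

```bash
# Hypothetical certificate object names on the gateway
for cert in listener1-cert listener2-cert listener3-cert; do
    az network application-gateway ssl-cert update \
        --resource-group myResourceGroup \
        --gateway-name myAppGateway \
        --name "$cert" \
        --key-vault-secret-id "https://myvault.vault.azure.net/secrets/$cert"
done
```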
+ #### Caveats 1. You can't delete a certificate object if its associated listener is a redirection target for another listener. Any attempt to do so will return the following error. You can either remove the redirection or delete the dependent listener first to resolve this problem.
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 03/15/2022 Last updated : 03/27/2023
Enabling the Azure Firewall on [Azure Storage](../storage/common/storage-network
## Plan runbook job behavior
-Azure Automation handles jobs on Hybrid Runbook Workers differently from jobs run in Azure sandboxes. If you have a long-running runbook, make sure that it's resilient to possible restart. For details of the job behavior, see [Hybrid Runbook Worker jobs](automation-hybrid-runbook-worker.md#hybrid-runbook-worker-jobs).
+Azure Automation handles jobs on Hybrid Runbook Workers differently from jobs run in cloud sandboxes. If you have a long-running runbook, make sure that it's resilient to possible restart. For details of the job behavior, see [Hybrid Runbook Worker jobs](automation-hybrid-runbook-worker.md#hybrid-runbook-worker-jobs).
## Service accounts
-### Windows
+### Windows Hybrid Worker
Jobs for Hybrid Runbook Workers run under the local **System** account.
->[!NOTE]
-> To run PowerShell 7.x on a Windows Hybrid Runbook Worker, see [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
-> We support [Hybrid worker extension based](./extension-based-hybrid-runbook-worker-install.md) and [agent based](./automation-windows-hrw-install.md) onboarding.
-> For agent based onboarding, ensure the Windows Hybrid Runbook worker version is 7.3.1296.0 or above.
-Make sure the path where the *pwsh.exe* executable is located and is added to the PATH environment variable. Restart the Hybrid Runbook Worker after installation completes.
+> [!NOTE]
>- PowerShell 5.1, PowerShell 7.1 (preview), Python 2.7, and Python 3.8 (preview) runbooks are supported on both extension-based and agent-based Windows Hybrid Runbook Workers. For agent-based workers, ensure the Windows Hybrid worker version is 7.3.1296.0 or above.
+>- PowerShell 7.2 (preview) and Python 3.10 (preview) runbooks are supported on extension-based Windows Hybrid Workers only. Ensure the Windows Hybrid worker extension version is 1.1.11 or above.
-### Linux
+#### [Extension-based Hybrid Workers](#tab/win-extn-hrw)
->[!NOTE]
-> To run PowerShell 7.x on a Linux Hybrid Runbook Worker, see [Installing PowerShell on Linux](/powershell/scripting/install/installing-powershell-on-linux).
-> We support [Hybrid worker extension based](./extension-based-hybrid-runbook-worker-install.md) and [agent based](./automation-linux-hrw-install.md) onboarding.
-> For agent based onboarding, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above.
+> [!NOTE]
> To create an environment variable on Windows systems, follow these steps:
> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
> 1. In **System Properties**, select **Environment variables**.
> 1. In **System variables**, select **New**.
> 1. Provide the **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM, or log out of the current user session and log back in, to apply the environment variable changes.
+
+**PowerShell 7.2**
+
+To run PowerShell 7.2 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+
+After the PowerShell 7.2 installation is complete, create an environment variable with **Variable name** as *powershell_7_2_path* and **Variable value** as the location of the *PowerShell* executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+**PowerShell 7.1**
+
+To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+Make sure to add the location of the *PowerShell* executable to the PATH environment variable, and restart the Hybrid Runbook Worker after the installation.
+
+**Python 3.10**
+
+To run Python 3.10 runbooks on a Windows Hybrid Worker, install *Python* on the Hybrid Worker. See [Installing Python on Windows](https://docs.python.org/3/using/windows.html).
+
+After the Python 3.10 installation is complete, create an environment variable with **Variable name** as *python_3_10_path* and **Variable value** as the location of the *Python* executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+**Python 3.8**
+
+To run Python 3.8 runbooks on a Windows Hybrid Worker, install Python on the Hybrid Worker. See [Installing Python on Windows](https://docs.python.org/3/using/windows.html). Create the **environment variable** *PYTHON_3_PATH* for Python 3.8 runbooks, and make sure to add the location of the Python executable as the **Variable value**. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+If the *Python* executable file is at the default location *C:\WPy64-3800\python-3.8.0.amd64\python.exe*, then you do not have to create the environment variable.
++
+**Python 2.7**
+
+To run Python 2.7 runbooks on a Windows Hybrid Worker, install Python on the Hybrid Worker. See [Installing Python on Windows](https://docs.python.org/3/using/windows.html). Create the **environment variable** *PYTHON_2_PATH* for Python 2.7 runbooks, and make sure to add the location of the Python executable as the **Variable value**. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+If the *Python* executable file is at the default location *C:\Python27\python.exe*, then you do not have to create the environment variable.
+
+#### [Agent-based Hybrid Workers](#tab/win-agt-hrw)
+
+> [!NOTE]
> To create an environment variable on Windows systems, follow these steps:
> 1. Go to **Control Panel** > **System** > **Advanced System Settings**.
> 1. In **System Properties**, select **Environment variables**.
> 1. In **System variables**, select **New**.
> 1. Provide the **Variable name** and **Variable value**, and then select **OK**.
> 1. Restart the VM, or log out of the current user session and log back in, to apply the environment variable changes.
+
+**PowerShell 7.1**
+
+To run PowerShell 7.1 runbooks on a Windows Hybrid Worker, install *PowerShell* on the Hybrid Worker. See [Installing PowerShell on Windows](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.3).
+Make sure to add the location of the *PowerShell* executable to the PATH environment variable, and restart the Hybrid Runbook Worker after the installation.
+
+**Python 3.8**
+
+To run Python 3.8 runbooks on a Windows Hybrid Worker, install Python on the Hybrid Worker. See [Installing Python on Windows](https://docs.python.org/3/using/windows.html). Create the **environment variable** *PYTHON_3_PATH* for Python 3.8 runbooks, and make sure to add the location of the Python executable as the **Variable value**. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+If the *Python* executable file is at the default location *C:\WPy64-3800\python-3.8.0.amd64\python.exe*, then you do not have to create the environment variable.
-Service accounts **nxautomation** and **omsagent** are created. The creation and permission assignment script can be viewed at [https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md). If you try to install the worker, and the account is not present or doesn't have the appropriate permissions, the installation fails. Do not change the permissions of the `sudoers.d` folder or its ownership. Sudo permission is required for the accounts and the permissions shouldn't be removed. Restricting this to certain folders or commands may result in a breaking change. The **nxautomation** user enabled as part of Update Management executes only signed runbooks.
+
+**Python 2.7**
+
+To run Python 2.7 runbooks on a Windows Hybrid Worker, install Python on the Hybrid Worker. See [Installing Python on Windows](https://docs.python.org/3/using/windows.html). Create the **environment variable** *PYTHON_2_PATH* for Python 2.7 runbooks, and make sure to add the location of the Python executable as the **Variable value**. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+If the *Python* executable file is at the default location *C:\Python27\python.exe*, then you do not have to create the environment variable.
+++
+### Linux Hybrid Worker
+
+> [!NOTE]
>- PowerShell 5.1, PowerShell 7.1 (preview), Python 2.7, and Python 3.8 (preview) runbooks are supported on both extension-based and agent-based Linux Hybrid Runbook Workers. For agent-based workers, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above.
+>- PowerShell 7.2 (preview) and Python 3.10 (preview) runbooks are supported on extension-based Linux Hybrid Workers only. Ensure the Linux Hybrid worker extension version is 1.1.11 or above.
+
+#### [Extension-based Hybrid Workers](#tab/Lin-extn-hrw)
+
+> [!NOTE]
> To create an environment variable on Linux systems, follow these steps:
> 1. Open /etc/environment.
> 1. Add a new line of the form VARIABLE_NAME="variable_value" to /etc/environment, where VARIABLE_NAME is the name of the new environment variable and variable_value is the value to assign.
> 1. Restart the VM, or log out and log back in, after saving the changes to /etc/environment to apply them.
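As a concrete sketch of the process in the preceding note, here's how the *powershell_7_2_path* variable described below could be appended (the *pwsh* path is an assumption; check where your distribution installed it):

```bash
# Append the variable to /etc/environment (requires sudo)
echo 'powershell_7_2_path="/usr/bin/pwsh"' | sudo tee -a /etc/environment

# Log out and back in (or restart the VM), then restart the Hybrid Runbook Worker
```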
+
+**PowerShell 7.2**
+
+To run PowerShell 7.2 runbooks on a Linux Hybrid Worker, install *PowerShell* on the Hybrid Worker. For more information, see [Installing PowerShell on Linux](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-linux?view=powershell-7.3).
+
+After the PowerShell 7.2 installation is complete, create an environment variable with **Variable name** as *powershell_7_2_path* and **Variable value** as the location of the *PowerShell* executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+**Python 3.10**
+
+To run Python 3.10 runbooks on a Linux Hybrid Worker, install *Python* on the Hybrid Worker. For more information, see [Installing Python 3.10 on Linux](https://docs.python.org/3/using/unix.html).
+
+After the Python 3.10 installation is complete, create an environment variable with **Variable name** as *python_3_10_path* and **Variable value** as the location of the *Python* executable. Restart the Hybrid Runbook Worker after the environment variable is created successfully.
+
+**Python 3.8**
+
+To run Python 3.8 runbooks on a Linux Hybrid Worker, install Python 3.8 on the Hybrid Worker.
+Add the Python 3.8 executable to the PATH environment variable and restart the Hybrid Runbook Worker after the installation, as shown in the sketch that follows.
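
For example, a sketch that assumes Python 3.8 was built and installed to */usr/local/bin* (adjust the paths for your install); the same approach applies to Python 2.7:

```console
# Make the interpreter visible on the default PATH via a symlink in /usr/bin.
sudo ln -s /usr/local/bin/python3.8 /usr/bin/python3.8
# Verify that it now resolves on PATH.
command -v python3.8
```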
+
+**Python 2.7**
+
+To run Python 2.7 runbooks on a Linux Hybrid Worker, install Python 2.7 on the Hybrid Worker.
+Add the Python 2.7 executable to the PATH environment variable and restart the Hybrid Runbook Worker after the installation.
+
+#### [Agent-based Hybrid Workers](#tab/Lin-agt-hrw)
+
+Create service accounts **nxautomation** and **omsagent** for agent-based Hybrid Workers. The creation and permission assignment script can be viewed at [linux.data](https://github.com/microsoft/OMS-Agent-for-Linux/blob/master/installer/datafiles/linux.data). The accounts, with the corresponding sudo permissions, must be present during [installation of a Linux Hybrid Runbook worker](automation-linux-hrw-install.md).
+
+If you try to install the worker and the account isn't present or doesn't have the appropriate permissions, the installation fails. Don't change the permissions or ownership of the `sudoers.d` folder. Sudo permission is required for the accounts and shouldn't be removed; restricting it to certain folders or commands may result in a breaking change. The **nxautomation** user enabled as part of Update Management executes only signed runbooks.
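
Before installing, you can confirm that the accounts exist on the machine (a quick check; `id` returns an error if an account is missing):

```console
id nxautomation
id omsagent
```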
To ensure the service accounts have access to the stored runbook modules:
The Automation worker log is located at `/var/opt/microsoft/omsagent/run/automationworker/worker.log`.
-The service accounts are removed when the machine is removed as a Hybrid Runbook Worker.
+ ## Configure runbook permissions
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 01/11/2022 Last updated : 03/27/2023
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## March 2023
+
+### Retirement of Azure Automation Agent-based User Hybrid Runbook Worker
+
+**Type:** Plan for change
+
+On **31 August 2024**, Azure Automation will [retire](https://azure.microsoft.com/updates/retirement-azure-automation-agent-user-hybrid-worker/) the Agent-based User Hybrid Runbook Worker ([Windows](automation-windows-hrw-install.md) and [Linux](automation-linux-hrw-install.md)). You must migrate all Agent-based User Hybrid Workers to the [Extension-based User Hybrid Runbook Worker](extension-based-hybrid-runbook-worker-install.md) (Windows and Linux) before that date. Moreover, starting **1 October 2023**, you can no longer create **new** Agent-based User Hybrid Runbook Workers. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
++ ## January 2023 ### Public Preview of Automation extension for Visual Studio Code
azure-arc Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/private-link.md
Title: Private connectivity for Azure Arc-enabled Kubernetes clusters using private link (preview) Previously updated : 09/21/2021 Last updated : 09/21/2022 description: With Azure Arc, you can use a Private Link Scope model to allow multiple Kubernetes clusters to use a single private endpoint.
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
In addition, resource bridge (preview) requires connectivity to the [Arc-enabled
## SSL proxy configuration
-If using a proxy, Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. The proxy server endpoint can't be a .local domain. Proxy configuration of the management machine isn't configured by the Azure Arc resource bridge.
+If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. The proxy server endpoint can't be a .local domain. Proxy configuration of the management machine isn't configured by Arc resource bridge.
-There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
+There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the management machine and on-premises appliance VM trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
+
+In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, you may not be able to download the required images (~3 GB) within the allotted time (90 minutes).
## Exclusion list for no proxy
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
The following scenarios are supported in Azure Arc-enabled VMware vSphere (previ
You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions:
+- Australia East
+- Canada Central
- East US
+- Southeast Asia
+- UK South
- West Europe
+For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see the [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page.
## Data Residency
azure-cache-for-redis Cache How To Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-encryption.md
# Configure disk encryption for Azure Cache for Redis instances using customer managed keys (preview)
-In this article, you learn how to configure disk encryption using Customer Managed Keys (CMK). The Enterprise and Enterprise Flash tiers of Azure Cache for Redis offer the ability to encrypt the OS and data persistence disks with customer-managed key encryption. Platform-managed keys (PMKs), also know as Microsoft-managed keys (MMKs), are used to encrypt the data. However, customer managed keys (CMK) can also be used to wrap the MMKs to control access to these keys. This makes the CMK a _key encryption key_ or KEK. For more information, see [key management in Azure](/azure/security/fundamentals/key-management).
- Data in a Redis server is stored in memory by default. This data isn't encrypted. You can implement your own encryption on the data before writing it to the cache. In some cases, data can reside on-disk, either due to the operations of the operating system, or because of deliberate actions to persist data using [export](cache-how-to-import-export-data.md) or [data persistence](cache-how-to-premium-persistence.md).
-> [!NOTE]
-> Operating system disk encryption is more important on the Premium tier because open-source Redis can page cache data to disk. The Enterprise tiers does not do page cache data to disk, which is an advantage of the Enterprise and Enterprise Flash tiers.
->
+Azure Cache for Redis offers platform-managed keys (PMKs), also known as Microsoft-managed keys (MMKs), by default to encrypt data on disk in all tiers. The Enterprise and Enterprise Flash tiers of Azure Cache for Redis additionally offer the ability to encrypt the OS and data persistence disks with a customer-managed key (CMK). Customer managed keys can be used to wrap the MMKs to control access to these keys. This makes the CMK a _key encryption key_ or KEK. For more information, see [key management in Azure](/azure/security/fundamentals/key-management).
+ ## Scope of availability for CMK disk encryption
-|: Tier :| Basic, Standard, Premium | Enterprise, Enterprise Flash |
-|--|||
+| Tier | Basic, Standard, Premium | Enterprise, Enterprise Flash |
+|:-:|--|--|
|Microsoft managed keys (MMK) | Yes | Yes |
|Customer managed keys (CMK) | No | Yes (preview) |
-> [!NOTE]
+> [!WARNING]
> By default, all Azure Cache for Redis tiers use Microsoft managed keys to encrypt disks mounted to cache instances. However, in the Basic and Standard tiers, the C0 and C1 SKUs do not support any disk encryption. >
In the **Enterprise Flash** tier, keys and values are also partially stored on-d
### Other tiers
-In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted using MMK. There's no persistence disk mounted and Azure Storage is used instead.
+In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted by default using MMK. There's no persistence disk mounted and Azure Storage is used instead. The C0 and C1 SKUs do not use disk encryption.
## Prerequisites and limitations ### General prerequisites and limitations - Disk encryption isn't available in the Basic and Standard tiers for the C0 or C1 SKUs-- Only user assigned managed identity is supported to connect to Azure Key Vault
+- Only user assigned managed identity is supported to connect to Azure Key Vault. System assigned managed identity is not supported.
- Changing between MMK and CMK on an existing cache instance triggers a long-running maintenance operation. We don't recommend this for production use because a service disruption occurs. ### Azure Key Vault prerequisites and limitations
In the **Basic, Standard, and Premium** tiers, the OS disk is encrypted using MM
1. If using the **Select Azure key vault and key** input method, choose the Key Vault instance that holds your customer managed key. This instance must be in the same region as your cache. > [!NOTE]
- > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance.
+ > For instructions on how to set up an Azure Key Vault instance, see the [Azure Key Vault quickstart guide](../key-vault/secrets/quick-create-portal.md). You can also select the _Create a key vault_ link beneath the Key Vault selection to create a new Key Vault instance. Remember that both purge protection and soft delete must be enabled in your Key Vault instance.
1. Choose the specific key and version using the **Customer-managed key (RSA)** and **Version** drop-downs.
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
In this article, you use command-line tools to create a JavaScript function that
[!INCLUDE [functions-nodejs-model-pivot-description](../../includes/functions-nodejs-model-pivot-description.md)]
-Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Completion of this quickstart incurs a small cost of a few USD cents or less in your Azure account.
-There is also a [Visual Studio Code-based version](create-first-function-vs-code-node.md) of this article.
+There's also a [Visual Studio Code-based version](create-first-function-vs-code-node.md) of this article.
## Configure your local environment
-Before you begin, you must have the following:
+Before you begin, you must have the following prerequisites:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
Before you begin, you must have the following:
### Prerequisite check
-Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources:
+Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources:
# [Azure CLI](#tab/azure-cli)
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
```console func init LocalFunctionProj --model V4 ```
- You are then prompted to choose a worker runtime and a language - choose Node for the first and JavaScript for the second.
+ You're then prompted to choose a worker runtime and a language - choose Node for the first and JavaScript for the second.
2. Navigate into the project folder:
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
func new ```
- Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you'll be asked to confirm if your intention is to replace an existing function.
+ Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you're asked to confirm if your intention is to replace an existing function.
You can find the function you added in the *src/functions* directory.
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
az functionapp create --resource-group AzureFunctionsQuickstart-rg --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <APP_NAME> --storage-account <STORAGE_NAME> ```
- The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [az functionapp create](/cli/azure/functionapp#az-functionapp-create) command creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
# [Azure PowerShell](#tab/azure-powershell)
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime node -RuntimeVersion 18 -FunctionsVersion 4 -Location <REGION> ```
- The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It is recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
+ The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure. It's recommended that you use the latest version of Node.js, which is currently 18. You can specify the version by setting `--runtime-version` to `18`.
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
::: zone pivot="nodejs-model-v4" ## Update app settings
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
In this article, you use command-line tools to create a TypeScript function that
[!INCLUDE [functions-nodejs-model-pivot-description](../../includes/functions-nodejs-model-pivot-description.md)]
-Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Completion of this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a [Visual Studio Code-based version](create-first-function-vs-code-typescript.md) of this article. ## Configure your local environment
-Before you begin, you must have the following:
+Before you begin, you must have the following prerequisites:
+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
func new ```
- Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you'll be asked to confirm if your intention is to replace an existing function.
+ Choose the template for "HTTP trigger". You can keep the default name (*httpTrigger*) or give it a new name (*HttpExample*). Your function name must be unique, or you're asked to confirm if your intention is to replace an existing function.
You can find the function you added in the *src/functions* directory.
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
``` ::: zone-end
- Toward the end of the output, the following should appear:
+ Toward the end of the output, the following logs should appear:
![Screenshot of terminal window output when running function locally.](./media/functions-create-first-azure-function-azure-cli/functions-test-local-terminal.png) >[!NOTE]
- > If HttpExample doesn't appear as shown below, you likely started the host from outside the root folder of the project. In that case, use **Ctrl**+**C** to stop the host, navigate to the project's root folder, and run the previous command again.
+ > If HttpExample doesn't appear as shown in the logs, you likely started the host from outside the root folder of the project. In that case, use <kbd>Ctrl</kbd>+<kbd>c</kbd> to stop the host, navigate to the project's root folder, and run the previous command again.
1. Copy the URL of your `HttpExample` function from this output to a browser and append the query string `?name=<your-name>`, making the full URL like `http://localhost:7071/api/HttpExample?name=Functions`. The browser should display a message like `Hello Functions`:
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
The terminal in which you started your project also shows log output as you make requests.
-1. When you're ready, use **Ctrl**+**C** and choose `y` to stop the functions host.
+1. When you're ready, use <kbd>Ctrl</kbd>+<kbd>c</kbd> and choose <kbd>y</kbd> to stop the functions host.
[!INCLUDE [functions-create-azure-resources-cli](../../includes/functions-create-azure-resources-cli.md)]
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
In the previous example, replace `<STORAGE_NAME>` with the name of the account you used in the previous step, and replace `<APP_NAME>` with a globally unique name appropriate to you. The `<APP_NAME>` is also the default DNS domain for the function app.
- This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also provisions an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
+ This command creates a function app running in your specified language runtime under the [Azure Functions Consumption Plan](consumption-plan.md), which is free for the amount of usage you incur here. The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see [Monitor Azure Functions](functions-monitoring.md). The instance incurs no costs until you activate it.
::: zone pivot="nodejs-model-v4" ## Update app settings
Before you use Core Tools to deploy your project to Azure, you create a producti
npm run build ```
-1. With the necessary resources in place, you're now ready to deploy your local functions project to the function app in Azure by using the [func azure functionapp publish](functions-run-local.md#project-file-deployment) command. In the following example, replace `<APP_NAME>` with the name of your app.
+1. With the necessary resources in place, you're now ready to deploy your local functions project to the function app in Azure by using the [publish](functions-run-local.md#project-file-deployment) command. In the following example, replace `<APP_NAME>` with the name of your app.
```console func azure functionapp publish <APP_NAME>
azure-functions Create First Function Vs Code Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md
Use Visual Studio Code to create a JavaScript function that responds to HTTP req
[!INCLUDE [functions-nodejs-model-pivot-description](../../includes/functions-nodejs-model-pivot-description.md)]
-Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Completion of this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a [CLI-based version](create-first-function-cli-node.md) of this article.
Before you get started, make sure you have the following requirements in place:
## <a name="create-an-azure-functions-project"></a>Create your local project
-In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you'll publish your function code to Azure.
+In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you publish your function code to Azure.
1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
To enable your V4 programming model app to run in Azure, you need to add a new a
``` 1. [Rerun the function](#run-the-function-locally) app locally.
-1. In the prompt **Enter request body** change the request message body to { "name": "Tom","sport":"basketball" }. Press Enter to send this request message to your function.
+1. In the prompt **Enter request body**, change the request message body to `{ "name": "Tom", "sport": "basketball" }`. Press Enter to send this request message to your function.
1. View the response in the notification:
To enable your V4 programming model app to run in Azure, you need to add a new a
## Troubleshooting
-Use the table below to resolve the most common issues encountered when using this quickstart.
+Use the following table to resolve the most common issues encountered when using this quickstart.
|Problem|Solution| |--|--|
azure-functions Create First Function Vs Code Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-typescript.md
In this article, you use Visual Studio Code to create a TypeScript function that
[!INCLUDE [functions-nodejs-model-pivot-description](../../includes/functions-nodejs-model-pivot-description.md)]
-Note that completion will incur a small cost of a few USD cents or less in your Azure account.
+Completion of this quickstart incurs a small cost of a few USD cents or less in your Azure account.
There's also a [CLI-based version](create-first-function-cli-typescript.md) of this article.
Before you get started, make sure you have the following requirements in place:
## <a name="create-an-azure-functions-project"></a>Create your local project
-In this section, you use Visual Studio Code to create a local Azure Functions project in TypeScript. Later in this article, you'll publish your function code to Azure.
+In this section, you use Visual Studio Code to create a local Azure Functions project in TypeScript. Later in this article, you publish your function code to Azure.
1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
azure-functions Durable Functions Instance Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-instance-management.md
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
client = df.DurableOrchestrationClient(starter) reason = "Found a bug"
- return client.terminate(instance_id, reason)
+ return await client.terminate(instance_id, reason)
``` # [Java](#tab/java)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
client = df.DurableOrchestrationClient(starter) event_data = [1, 2 ,3]
- return client.raise_event(instance_id, 'MyEvent', event_data)
+ return await client.raise_event(instance_id, 'MyEvent', event_data)
``` # [Java](#tab/java)
async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
retry_interval_in_milliseconds = get_time_in_seconds(req, retry_interval) retry_interval_in_milliseconds = retry_interval_in_milliseconds if retry_interval_in_milliseconds != None else 1000
- return client.wait_for_completion_or_create_check_status_response(
+ return await client.wait_for_completion_or_create_check_status_response(
req, instance_id, timeout_in_milliseconds,
import azure.durable_functions as df
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.HttpResponse: client = df.DurableOrchestrationClient(starter)
- return client.purge_instance_history(instance_id)
+ return await client.purge_instance_history(instance_id)
``` # [Java](#tab/java)
async def main(req: func.HttpRequest, starter: str, instance_id: str) -> func.Ht
created_time_to = datetime.today() + timedelta(days = -30) runtime_statuses = [OrchestrationRuntimeStatus.Completed]
- return client.purge_instance_history_by(created_time_from, created_time_to, runtime_statuses)
+ return await client.purge_instance_history_by(created_time_from, created_time_to, runtime_statuses)
``` # [Java](#tab/java)
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
By assigning logged items to a category, you have more control over telemetry ge
### Custom telemetry data
-In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions), [JavaScript](functions-reference-node.md#log-custom-telemetry), and [Python](functions-reference-python.md#log-custom-telemetry), you can use an Application Insights SDK to write custom telemetry data.
+In [C#](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions), [JavaScript](functions-reference-node.md#track-custom-data), and [Python](functions-reference-python.md#log-custom-telemetry), you can use an Application Insights SDK to write custom telemetry data.
### Dependencies
Dependencies are written at the `Information` level. If you filter at `Warning`
In addition to automatic dependency data collection, you can also use one of the language-specific Application Insights SDKs to write custom dependency information to the logs. For an example how to write custom dependencies, see one of the following language-specific examples: + [Log custom telemetry in C# functions](functions-dotnet-class-library.md#log-custom-telemetry-in-c-functions)
-+ [Log custom telemetry in JavaScript functions](functions-reference-node.md#log-custom-telemetry)
++ [Log custom telemetry in JavaScript functions](functions-reference-node.md#track-custom-data) + [Log custom telemetry in Python functions](functions-reference-python.md#log-custom-telemetry) ### Performance Counters
azure-functions Functions Reference Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md
This guide is an introduction to developing Azure Functions using JavaScript or
As a JavaScript developer, you might also be interested in one of the following articles: | Getting started | Concepts| Guided learning |
-| -- | -- | -- |
+|--|--|--|
| <ul><li>[Node.js function using Visual Studio Code](./create-first-function-vs-code-node.md)</li><li>[Node.js function with terminal/command prompt](./create-first-function-cli-node.md)</li><li>[Node.js function using the Azure portal](functions-create-function-app-portal.md)</li></ul> | <ul><li>[Developer guide](functions-reference.md)</li><li>[Hosting options](functions-scale.md)</li><li>[TypeScript functions](#typescript)</li><li>[Performance&nbsp; considerations](functions-best-practices.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[Refactor Node.js and Express APIs to Serverless APIs](/training/modules/shift-nodejs-express-apis-serverless/)</li></ul> | [!INCLUDE [Programming Model Considerations](../../includes/functions-nodejs-model-considerations.md)]
A JavaScript (Node.js) function is an exported `function` that executes when tri
## Folder structure
-The required folder structure for a JavaScript project looks like the following. This default can be changed. For more information, see the [scriptFile](#using-scriptfile) section below.
+The required folder structure for a JavaScript project looks like the following. This default can be changed. For more information, see the [scriptFile](#using-scriptfile) section.
``` FunctionsProject
The main project folder, *<project_root>*, can contain the following files:
JavaScript functions must be exported via [`module.exports`](https://nodejs.org/api/modules.html#modules_module_exports) (or [`exports`](https://nodejs.org/api/modules.html#modules_exports)). Your exported function should be a JavaScript function that executes when triggered.
-By default, the Functions runtime looks for your function in `index.js`, where `index.js` shares the same parent directory as its corresponding `function.json`. In the default case, your exported function should be the only export from its file or the export named `run` or `index`. To configure the file location and export name of your function, read about [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point) below.
+By default, the Functions runtime looks for your function in `index.js`, where `index.js` shares the same parent directory as its corresponding `function.json`. In the default case, your exported function should be the only export from its file or the export named `run` or `index`. To configure the file location and export name of your function, see [configuring your function's entry point](functions-reference-node.md#configure-function-entry-point).
-Your exported function is passed a number of arguments on execution. The first argument it takes is always a `context` object.
+Your exported function is passed several arguments on execution. The first argument it takes is always a `context` object.
When using the [`async function`](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Statements/async_function) declaration or plain JavaScript [Promises](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise), you don't need to explicitly call the [`context.done`](#contextdone-method) callback to signal that your function has completed. Your function completes when the exported async function/Promise completes.
module.exports = async function (context) {
When exporting an async function, you can also configure an output binding to take the `return` value. This option is recommended if you only have one output binding.
-If your function is synchronous (doesn't return a Promise), you must pass the `context` object, as calling `context.done` is required for correct use. This option isn't recommended, for more information see [Use `async` and `await`](#use-async-and-await).
+If your function is synchronous (doesn't return a Promise), you must pass the `context` object, as calling `context.done` is required for correct use. This option isn't recommended; for more information on the alternative, see [Use `async` and `await`](#use-async-and-await).
```javascript // You should include `context`
module.exports = async function (context, req) {
``` ## Bindings
-In JavaScript, [bindings](functions-triggers-bindings.md) are configured and defined in a function's function.json. Functions interact with bindings a number of ways.
+In JavaScript, [bindings](functions-triggers-bindings.md) are configured and defined in a function's function.json. Functions interact with bindings in several ways.
### Inputs
-Input are divided into two categories in Azure Functions: one is the trigger input and the other is the additional input. Trigger and other input bindings (bindings of `direction === "in"`) can be read by a function in three ways:
+Inputs are divided into two categories in Azure Functions: one is the trigger input and the other is the secondary input. Trigger and other input bindings (bindings of `direction === "in"`) are used in the following ways:
- **_[Recommended]_ As parameters passed to your function.** They're passed to the function in the same order that they're defined in *function.json*. The `name` property defined in *function.json* doesn't need to match the name of your parameter, although it should. ```javascript module.exports = async function(context, myTrigger, myInput, myOtherInput) { ... }; ```
+ - **As members of the [`context.bindings`](#contextbindings-property) object.** Each member matches the `name` property defined in *function.json*.
```javascript module.exports = async function(context) {
Input are divided into two categories in Azure Functions: one is the trigger inp
``` ### Outputs
-Outputs (bindings of `direction === "out"`) can be written to by a function in a number of ways. In all cases, the `name` property of the binding as defined in *function.json* corresponds to the name of the object member written to in your function.
+Outputs (bindings of `direction === "out"`) can be set in several ways. In all cases, the `name` property of the binding as defined in *function.json* corresponds to the name of the object member written to in your function.
You can assign data to output bindings in one of the following ways (don't combine these methods): -- **_[Recommended for multiple outputs]_ Returning an object.** If you are using an async/Promise returning function, you can return an object with assigned output data. In the example below, the output bindings are named "httpResponse" and "queueOutput" in *function.json*.
+- **_[Recommended for multiple outputs]_ Returning an object.** If you're using an async/Promise returning function, you can return an object with assigned output data. In the following example, the output bindings are named "httpResponse" and "queueOutput" in *function.json*.
```javascript module.exports = async function(context) {
Options for `dataType` are: `binary`, `stream`, and `string`.
The programming model loads your functions based on the `main` field in your `package.json`. This field can be set to a single file like `src/index.js` or a [glob pattern](https://wikipedia.org/wiki/Glob_(programming)) specifying multiple files like `src/functions/*.js`.
-In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function will always be the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration is not necessary, you can pass the handler directly as the second argument instead of an `options` object.
+In order to register a function, you must import the `app` object from the `@azure/functions` npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an `options` object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an `options` object.
Registering a function can be done from any file in your project, as long as that file is loaded (directly or indirectly) based on the `main` field in your `package.json` file. The function should be registered at a global scope because you can't register functions once executions have started.
app.timer('timerTrigger1', {
### Extra inputs and outputs
-In addition to the trigger and return, you may specify extra inputs or outputs. You must configure these on the `options` argument when registering a function. The `input` and `output` objects exported from the `@azure/functions` module provide type-specific methods to help construct the configuration. During execution, you get or set the values with `context.extraInputs.get` or `context.extraOutputs.set`, passing in the original configuration object as the first argument.
+In addition to the trigger and return, you may specify extra inputs or outputs on the `options` argument when registering a function. The `input` and `output` objects exported from the `@azure/functions` module provide type-specific methods to help construct the configuration. During execution, you get or set the values with `context.extraInputs.get` or `context.extraOutputs.set`, passing in the original configuration object as the first argument.
The following example is a function triggered by a storage queue, with an extra blob input that is copied to an extra blob output.
context.bindings.myOutput = {
a_number: 1 }; ```
-In a synchronous function, you can choose to define output binding data using the `context.done` method instead of the `context.binding` object (see below).
- ## context.bindingData property ```js
Allows you to write to the streaming function logs at the default trace level, w
## Write trace output to logs
-In Functions, you use the `context.log` methods to write trace output to the logs and the console. When you call `context.log()`, your message is written to the logs at the default trace level, which is the _info_ trace level. Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application telemetry and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
+In Functions, you use the `context.log` methods to write trace output to the logs and the console. When you call `context.log()`, your message is written to the logs at the default trace level, which is the _info_ trace level. Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
The following example writes a log at the info trace level, including the invocation ID:
The following example writes a log at the info trace level, including the invoca
context.log("Something has happened. " + context.invocationId); ```
-All `context.log` methods support the same parameter format that's supported by the Node.js [util.format method](https://nodejs.org/api/util.html#util_util_format_format). Consider the following code, which writes function logs by using the default trace level:
+All `context.log` methods support the same parameter format supported by the Node.js [util.format method](https://nodejs.org/api/util.html#util_util_format_format). Consider the following code, which writes function logs by using the default trace level:
```javascript context.log('Node.js HTTP trigger function processed a request. RequestUri=' + req.originalUrl);
Because _error_ is the highest trace level, this trace is written to the output
### Configure the trace level for logging
-Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold settings depend on your version of the Functions runtime.
+Azure Functions lets you define the threshold trace level for writing to the logs or the console. The specific threshold settings depend on your version of the Functions runtime.
To set the threshold for traces written to the logs, use the `logging.logLevel` property in the host.json file. This JSON object lets you define a default threshold for all functions in your function app, plus you can define specific thresholds for individual functions. To learn more, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
-## Log custom telemetry
+## Track custom data
-By default, Functions writes output as traces to Application Insights. For more control, you can instead use the [Application Insights Node.js SDK](https://github.com/microsoft/applicationinsights-node.js) to send custom telemetry data to your Application Insights instance.
+By default, Azure Functions writes output as traces to Application Insights. For more control, you can instead use the [Application Insights Node.js SDK](https://github.com/microsoft/applicationinsights-node.js) to send custom data to your Application Insights instance.
```javascript const appInsights = require("applicationinsights");
const client = appInsights.defaultClient;
module.exports = async function (context, req) { context.log('JavaScript HTTP trigger function processed a request.');
- // Use this with 'tagOverrides' to correlate custom telemetry to the parent function invocation.
+ // Use this with 'tagOverrides' to correlate custom logs to the parent function invocation.
var operationIdOverride = {"ai.operation.id":context.traceContext.traceparent}; client.trackEvent({name: "my custom event", tagOverrides:operationIdOverride, properties: {customProperty2: "custom property value"}});
module.exports = async function (context, req) {
}; ```
-The `tagOverrides` parameter sets the `operation_Id` to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom telemetry for a given function invocation.
+The `tagOverrides` parameter sets the `operation_Id` to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom logs for a given function invocation.
::: zone-end
The `InvocationContext` class has the following properties:
## Logging
-In Azure Functions, you use the `context.log` method to write logs. When you call `context.log()`, your message is written with the default level "information". Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application telemetry and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
+In Azure Functions, you use the `context.log` method to write logs. When you call `context.log()`, your message is written with the default level "information". Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see [monitoring Azure Functions](functions-monitoring.md).
> [!NOTE] > If you use the alternative Node.js `console.log` method, those logs are tracked at the app-level and will *not* be associated with any specific function. It is *highly recommended* to use `context` for logging instead of `console` so that all logs are associated with a specific function.
The `context.res` (response) object has the following properties:
### Accessing the request and response
-When you work with HTTP triggers, you can access the HTTP request and response objects in a number of ways:
+When you work with HTTP triggers, you can access the HTTP request and response objects in several ways:
+ **From `req` and `res` properties on the `context` object.** In this way, you can use the conventional pattern to access HTTP data from the context object, instead of having to use the full `context.bindings.name` pattern. The following example shows how to access the `req` and `res` objects on the `context`:
The response can be set in multiple different ways.
## Scaling and concurrency
-By default, Azure Functions automatically monitors the load on your application and creates additional host instances for Node.js as needed. Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](event-driven-scaling.md).
+By default, Azure Functions automatically monitors the load on your application and creates more host instances for Node.js as needed. Azure Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see [How the Consumption and Premium plans work](event-driven-scaling.md).
This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes.
-By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This makes it less likely that a CPU-intensive function blocks other functions from running.
+By default, every Functions host instance has a single language worker process. You can increase the number of worker processes per host (up to 10) by using the [FUNCTIONS_WORKER_PROCESS_COUNT](functions-app-settings.md#functions_worker_process_count) application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This behavior makes it less likely that a CPU-intensive function blocks other functions from running.
-The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Functions creates when scaling out your application to meet demand.
+The FUNCTIONS_WORKER_PROCESS_COUNT applies to each host that Azure Functions creates when scaling out your application to meet demand.
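
For example, a sketch of raising the worker count with the Azure CLI (the app and resource group names are placeholders):

```console
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings FUNCTIONS_WORKER_PROCESS_COUNT=4
```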
## Node version
az functionapp config set --linux-fx-version "node|18" --name "<MY_APP_NAME>" --
-To learn more about Azure Functions runtime support policy, please refer to this [article](./language-support-policy.md).
+To learn more about Azure Functions runtime support policy, refer to this [article](./language-support-policy.md).
## Environment variables
Add your own environment variables to a function app, in both your local and clo
### In local development environment
-When running locally, your functions project includes a [`local.settings.json` file](./functions-run-local.md), where you store your environment variables in the `Values` object.
+When you run locally, your functions project includes a [`local.settings.json` file](./functions-run-local.md), where you store your environment variables in the `Values` object.
```json {
When running locally, your functions project includes a [`local.settings.json` f
### In Azure cloud environment
-When running in Azure, the function app lets you set and use [Application settings](functions-app-settings.md), such as service connection strings, and exposes these settings as environment variables during execution.
+When you run in Azure, the function app lets you set and use [Application settings](functions-app-settings.md), such as service connection strings, and exposes these settings as environment variables during execution.
[!INCLUDE [Function app settings](../../includes/functions-app-settings.md)]
The `function.json` for `myNodeFunction` should include a `scriptFile` property
### Using `entryPoint`
-In `scriptFile` (or `index.js`), a function must be exported using `module.exports` in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named `run`, or the export named `index`.
-
-This can be configured using `entryPoint` in `function.json`, as in the following example:
+In `scriptFile` (or `index.js`), a function must be exported using `module.exports` in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named `run`, or the export named `index`. The following example uses `entryPoint` in `function.json`:
```json {
To debug locally, add `"languageWorkers:node:arguments": "--inspect=5858"` under
When debugging using VS Code, the `--inspect` parameter is automatically added using the `port` value in the project's launch.json file.
-In runtime version 1.x, setting `languageWorkers:node:arguments` won't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools.
+In runtime version 1.x, setting `languageWorkers:node:arguments` doesn't work. The debug port can be selected with the [`--nodeDebugPort`](./functions-run-local.md#start) parameter on Azure Functions Core Tools.
> [!NOTE] > You can only configure `languageWorkers:node:arguments` when running the function app locally.
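
For example, a sketch for runtime version 1.x, where the host is started with Core Tools (the port number is illustrative):

```console
func host start --nodeDebugPort 5858
```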
A generated `.funcignore` file is used to indicate which files are excluded when
::: zone pivot="nodejs-model-v3"
-TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The output location is set by the template by using `outDir` parameter in the `tsconfig.json` file. If you change this setting or the name of the folder, the runtime isn't able to find the code to run.
+TypeScript files (.ts) are transpiled into JavaScript files (.js) in the `dist` output directory. TypeScript templates use the [`scriptFile` parameter](#using-scriptfile) in `function.json` to indicate the location of the corresponding .js file in the `dist` folder. The setting `outDir` in your `tsconfig.json` file controls the output location. If you change this setting or the name of the folder, the runtime isn't able to find the code to run.
::: zone-end
There are several ways in which a TypeScript project differs from a JavaScript p
#### Create project
-To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option when you create your function app. You can do this in one of the following ways:
+To create a TypeScript function app project using Core Tools, you must specify the TypeScript language option when you create your function app. You can create an app in one of the following ways:
::: zone pivot="nodejs-model-v3"
When you create a function app that uses the App Service plan, we recommend that
### Cold Start
-When developing Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the fact that when your function app starts for the first time after a period of inactivity, it takes longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use the run from package model by default, but if you're experiencing large cold starts and aren't running this way, this change can offer a significant improvement.
+When you develop Azure Functions in the serverless hosting model, cold starts are a reality. *Cold start* refers to the first time your function app starts after a period of inactivity, taking longer to start up. For JavaScript functions with large dependency trees in particular, cold start can be significant. To speed up the cold start process, [run your functions as a package file](run-functions-from-deployment-package.md) when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way.
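
For example, a sketch of opting in explicitly with the Azure CLI, assuming your deployment method supports running from a package (the names are placeholders):

```console
az functionapp config appsettings set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --settings WEBSITE_RUN_FROM_PACKAGE=1
```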
### Connection Limits
When writing Azure Functions in JavaScript, you should write code using the `asy
- Throwing uncaught exceptions that [crash the Node.js process](https://nodejs.org/api/process.html#process_warning_using_uncaughtexception_correctly), potentially affecting the execution of other functions. - Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that aren't properly awaited.
-In the example below, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues mentioned above. An exception that isn't explicitly caught in the correct scope crashed the entire process (issue #1). Calling the 1.x `context.done()` outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x `context.done()` too early results in missing log entries starting with `Data from file:`.
+In the following example, the asynchronous method `fs.readFile` is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Calling the 1.x `context.done()` outside of the scope of the callback function means that the function invocation may end before the file is read (issue #2). In this example, calling 1.x `context.done()` too early results in missing log entries starting with `Data from file:`.
```javascript // NOT RECOMMENDED PATTERN
module.exports = function (context) {
Using the `async` and `await` keywords helps avoid both of these errors. You should use the Node.js utility function [`util.promisify`](https://nodejs.org/api/util.html#util_util_promisify_original) to turn error-first callback-style functions into awaitable functions.
-In the example below, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised an exception. The `await` keyword means that steps following `readFileAsync` only execute after `readFile` is complete. With `async` and `await`, you also don't need to call the `context.done()` callback.
+In the following example, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised an exception. The `await` keyword means that steps following `readFileAsync` only execute after `readFile` is complete. With `async` and `await`, you also don't need to call the `context.done()` callback.
```javascript // Recommended pattern
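A sketch of this pattern under the v3 JavaScript programming model described above (the file path is a placeholder):

```javascript
const fs = require('fs');
const util = require('util');

// Turn the error-first callback API into an awaitable function.
const readFileAsync = util.promisify(fs.readFile);

module.exports = async function (context) {
    // An exception thrown here fails only this invocation.
    const data = await readFileAsync('./hello.txt');

    // This log only runs after the file has been read.
    context.log(`Data from file: ${data}`);
    // No context.done() call is needed; returning resolves the invocation.
};
```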
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
A vector tile source describes how to access a vector tile layer. Use the `Vecto
Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform: -- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile) | [data format details](https://developer.tomtom.com/maps-api/maps-api-documentation-vector/tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/vector-flow-tiles)
+- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)
+- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)
+- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)
- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile) > [!TIP]
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
A vector tile source describes how to access a vector tile layer. Use the `Vecto
Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform: -- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile) | [data format details](https://developer.tomtom.com/maps-api/maps-api-documentation-vector/tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/vector-flow-tiles)
+- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)
+- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)
+- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)
- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile) > [!TIP]
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
A vector tile source describes how to access a vector tile layer. Use the [Vecto
Azure Maps adheres to the [Mapbox Vector Tile Specification](https://github.com/mapbox/vector-tile-spec), an open standard. Azure Maps provides the following vector tiles services as part of the platform: -- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile) | [data format details](https://developer.tomtom.com/maps-api/maps-api-documentation-vector/tile)-- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-incidents/vector-incident-tiles)-- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile) | [data format details](https://developer.tomtom.com/traffic-api/traffic-api-documentation-traffic-flow/vector-flow-tiles)
+- Road tiles [documentation](/rest/api/maps/render-v2/get-map-tile)
+- Traffic incidents [documentation](/rest/api/maps/traffic/gettrafficincidenttile)
+- Traffic flow [documentation](/rest/api/maps/traffic/gettrafficflowtile)
- Azure Maps Creator also allows custom vector tiles to be created and accessed through the [Render V2-Get Map Tile API](/rest/api/maps/render-v2/get-map-tile) > [!TIP]
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
The `unitProperties` object contains a JSON array of unit properties.
| Property | Type | Required | Description |
|--|--|--|--|
|`unitName`|string|true|Name of unit to associate with this `unitProperty` record. This record is only valid when a label matching `unitName` is found in the `unitLabel` layers. |
-|`categoryName`|string|false|Purpose of the unit. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|`categoryName`|string|false|Purpose of the unit. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
|`occupants`|array of directoryInfo objects |false |List of occupants for the unit. |
|`nameAlt`|string|false|Alternate name of the unit. |
|`nameSubtitle`|string|false|Subtitle of the unit. |
The `zoneProperties` object contains a JSON array of zone properties.
| Property | Type | Required | Description |
|--|--|--|--|
|zoneName |string |true |Name of zone to associate with `zoneProperty` record. This record is only valid when a label matching `zoneName` is found in the `zoneLabel` layer of the zone. |
-|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is available [here](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
+|categoryName| string| false |Purpose of the zone. A list of values that the provided rendering styles can make use of is documented in [categories.json](https://atlas.microsoft.com/sdk/javascript/indoor/0.1/categories.json).|
|zoneNameAlt| string| false |Alternate name of the zone. |
|zoneNameSubtitle| string | false |Subtitle of the zone. |
|zoneSetId| string | false | Set ID to establish a relationship among multiple zones so that they can be queried or selected as a group. For example, zones that span multiple levels. |
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
The Map Control client-side JavaScript library allows you to render maps and embedded Azure Maps functionality into your web or mobile application.
-This documentation uses the Azure Maps Web SDK, however the Azure Maps services can be used with any map control. [Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugin's for.
+This article uses the Azure Maps Web SDK; however, the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects](open-source-projects.md#third-part-map-control-plugins).
## Prerequisites
You can embed a map in a web page by using the Map Control client-side JavaScrip
</script> ```
- For more information about authentication with Azure Maps, see the [Authentication with Azure Maps](azure-maps-authentication.md) document. Also, a list of samples showing how to integrate Azure Active Directory (AAD) with Azure Maps can be found [here](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples).
+ For more information about authentication with Azure Maps, see the [Authentication with Azure Maps](azure-maps-authentication.md) document. For a list of samples showing how to integrate Azure Active Directory (AAD) with Azure Maps, see [Azure Maps & Azure Active Directory Samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) in GitHub.
>[!TIP] >In this example, we've passed in the `id` of the map `<div>`. Another way to do this is to pass in the `HTMLElement` object by passing `document.getElementById('myMap')` as the first parameter.
Here is an example of Azure Maps with the language set to "fr-FR" and the region
![Map image showing labels in French](./media/how-to-use-map-control/websdk-localization.png)
-A complete list of supported languages and regional views is documented [here](supported-languages.md).
+For a list of supported languages and regional views, see [Localization support in Azure Maps](supported-languages.md).
## Azure Government cloud support
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
# Building an accessible application
-Upwards of 20% of internet users have a need for accessible web applications. As such, it's important to make sure your application is designed such that any user can easily use it. Rather than thinking of accessibility as a set of tasks to complete, think of it as part of your overall user experience. The more accessible your application, the more people who can use it.
+Upwards of 20% of internet users need accessible web applications. As such, it's important to make sure your application is designed so that any user can easily use it. Rather than thinking of accessibility as a set of tasks to complete, think of it as part of your overall user experience. The more accessible your application, the more people can use it.
When it comes to rich interactive content like a map, some common accessibility considerations are: - Support the screen reader for users who have difficulty seeing the web application. - Have multiple methods for interacting with and navigating the web application such as mouse, touch, and keyboard.-- Ensure color contrast is such that colors don't blend together and become hard to distinguish from each other.
+- Ensure color contrast is such that colors don't blend together and become hard to distinguish from each other.
The Azure Maps Web SDK comes prebuilt with many accessibility features such as: - Screen reader descriptions when the map moves and when the user focuses on a control or popup.
The Azure Maps Web SDK comes prebuilt with many accessibility features such as:
- Accessible color contrast support in the road map style. - High contrast support.
-Full accessibility conformance details for all Microsoft products can be found [here](https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/). Search for "Azure Maps web" to find the document specifically for the Azure Maps Web SDK.
+For accessibility conformance details for all Microsoft products, see [Accessibility Conformance Reports](https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/). Search for "Azure Maps web" to find the document specifically for the Azure Maps Web SDK.
## Navigating the map
There are several different ways in which the map can be zoomed, panned, rotated
**Rotate the map** -- Using a mouse, press down with the right mouse button on the map and drag left or right.
+- Using a mouse, press down with the right mouse button on the map and drag left or right.
- Using a touch screen, touch the map with two fingers and rotate. - With the map focused, use the shift key and the left or right arrow keys. - Using the rotation control with a mouse, touch or keyboard tab/enter keys. **Pitch the map** -- Using the mouse, press down with the right mouse button on the map and drag up or down.
+- Using the mouse, press down with the right mouse button on the map and drag up or down.
- Using a touch screen, touch the map with two fingers and drag them up or down together.-- With the map focused, use the shift key plus the up or down arrow keys.
+- With the map focused, use the shift key plus the up or down arrow keys.
- Using the pitch control with a mouse, touch or keyboard tab/enter keys. ## Change the Map Style
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
The Polygon layer only has a few styling options. Here is a tool to try them out
## Add a circle to the map
-Azure Maps uses an extended version of the GeoJSON schema that provides a definition for circles, as noted [here](extend-geojson.md#circle). A circle is rendered on the map by creating a `Point` feature. This `Point` has a `subType` property with a value of `"Circle"` and a `radius` property with a number that represents the radius in meters.
+Azure Maps uses an extended version of the GeoJSON schema that provides a [definition for circles](extend-geojson.md#circle). A circle is rendered on the map by creating a `Point` feature. This `Point` has a `subType` property with a value of `"Circle"` and a `radius` property with a number that represents the radius in meters.
```javascript {
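A minimal sketch of such a feature, assuming a `datasource` already created and connected to a polygon layer as in this article's other examples (coordinates and radius are placeholders):

```javascript
// A point feature that Azure Maps treats as a circle; radius is in meters.
var circleFeature = new atlas.data.Feature(new atlas.data.Point([-122.126, 47.639]), {
    subType: 'Circle',
    radius: 1000
});

datasource.add(circleFeature);
```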
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
A choropleth map can be rendered using the polygon extrusion layer. Set the `hei
## Add a circle to the map
-Azure Maps uses an extended version of the GeoJSON schema that provides a definition for circles as noted [here](./extend-geojson.md#circle). An extruded circle can be rendered on the map by creating a `point` feature with a `subType` property of `Circle` and a numbered `Radius` property representing the radius in **meters**. For example:
+Azure Maps uses an extended version of the GeoJSON schema that provides a [definition for circles](./extend-geojson.md#circle). An extruded circle can be rendered on the map by creating a `point` feature with a `subType` property of `Circle` and a `radius` property with a number representing the radius in **meters**. For example:
```javascript {
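A sketch under the same assumptions, using the polygon extrusion layer's `height` and `fillColor` options (the coordinates, radius, and height values are placeholders):

```javascript
map.events.add('ready', function () {
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    // A point feature with subType "Circle"; radius is in meters.
    datasource.add(new atlas.data.Feature(new atlas.data.Point([-122.126, 47.639]), {
        subType: 'Circle',
        radius: 1000
    }));

    // Extrude the circle 500 meters up from the ground.
    map.layers.add(new atlas.layer.PolygonExtrusionLayer(datasource, null, {
        height: 500,
        fillColor: 'purple'
    }));
});
```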
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
# Tutorial: Migrate a web app from Bing Maps
-Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery for display in your web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. In this tutorial, you will learn how to:
+Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure Maps Web SDK is the suitable Azure-based SDK to migrate to. The Azure Maps Web SDK lets you customize interactive maps with your own content and imagery for display in your web or mobile applications. This control makes use of WebGL, allowing you to render large data sets with high performance. Develop with this SDK using JavaScript or TypeScript. This tutorial demonstrates how to:
> [!div class="checklist"] >
Web apps that use Bing Maps often use the Bing Maps V8 JavaScript SDK. The Azure
> * Show traffic data > * Add a ground overlay
-If migrating an existing web application, check to see if it is using an open-source map control library such as Cesium, Leaflet, and OpenLayers. If it is and you would prefer to continue to use that library, you can connect it to the Azure Maps tile services ([road tiles](/rest/api/maps/render/getmaptile) \| [satellite tiles](/rest/api/maps/render/getmapimagerytile)). The links below provide details on how to use Azure Maps in some commonly used open-source map control libraries.
+If migrating an existing web application, check to see if it's using an open-source map control library such as Cesium, Leaflet, or OpenLayers. If it is, and you would prefer to continue using that library, you can connect it to the Azure Maps tile services ([road tiles] \| [satellite tiles]). The following links provide details on how to use Azure Maps in commonly used open-source map control libraries.
-* [Cesium](https://www.cesium.com/) - A 3D map control for the web. [Code samples](https://samples.azuremaps.com/?search=Cesium) \| [Plugin repo]()
-* [Leaflet](https://leafletjs.com/) ΓÇô Lightweight 2D map control for the web. [Code samples](https://samples.azuremaps.com/?search=leaflet) \| [Plugin repo]()
-* [OpenLayers](https://openlayers.org/) - A 2D map control for the web that supports projections. [Code samples](https://samples.azuremaps.com/?search=openlayers) \| [Plugin repo]()
+* [Cesium] - A 3D map control for the web. <!--[Cesium code samples] \|--> [Cesium plugin]
+* [Leaflet] – Lightweight 2D map control for the web. [Leaflet code samples] \| [Leaflet plugin]
+* [OpenLayers] - A 2D map control for the web that supports projections. <!--[OpenLayers code samples] \|--> [OpenLayers plugin]
If developing using a JavaScript framework, one of the following open-source projects may be useful:
-* [ng-azure-maps](https://github.com/arnaudleclerc/ng-azure-maps) - Angular 10 wrapper around Azure maps.
-* [AzureMapsControl.Components](https://github.com/arnaudleclerc/AzureMapsControl.Components) - An Azure Maps Blazor component.
-* [Azure Maps React Component](https://github.com/WiredSolutions/react-azure-maps) - A react wrapper for the Azure Maps control.
-* [Vue Azure Maps](https://github.com/rickyruiz/vue-azure-maps) - An Azure Maps component for Vue application.
+* [ng-azure-maps] - Angular 10 wrapper around Azure maps.
+* [AzureMapsControl.Components] - An Azure Maps Blazor component.
+* [Azure Maps React Component] - A React wrapper for the Azure Maps control.
+* [Vue Azure Maps] - An Azure Maps component for Vue applications.
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+If you don't have an Azure subscription, create a [free account] before you begin.
* An [Azure Maps account]
* A [subscription key]
If you don't have an Azure subscription, create a [free account](https://azure.m
The following table lists key API features in the Bing Maps V8 JavaScript SDK and the support of a similar API in the Azure Maps Web SDK.
-| Bing Maps feature | Azure Maps Web SDK support |
-|--|:--:|
-| Pushpins | Γ£ô |
-| Pushpin clustering | Γ£ô |
-| Polylines & Polygons | Γ£ô |
-| Ground Overlays | Γ£ô |
-| Heat maps | Γ£ô |
-| Tile Layers | Γ£ô |
-| KML Layer | Γ£ô |
-| Contour layer | [Samples](https://samples.azuremaps.com/?search=contour) |
-| Data binning layer | Included in the open-source Azure Maps [Gridded Data Source module](https://github.com/Azure-Samples/azure-maps-gridded-data-source) |
-| Animated tile layer | Included in the open-source Azure Maps [Animation module](https://github.com/Azure-Samples/azure-maps-animations) |
-| Drawing tools | Γ£ô |
-| Geocoder service | Γ£ô |
-| Directions service | Γ£ô |
-| Distance Matrix service | Γ£ô |
-| Spatial Data service | N/A |
-| Satellite/Aerial imagery | Γ£ô |
-| Birds eye imagery | N/A |
-| Streetside imagery | N/A |
-| GeoJSON support | Γ£ô |
-| GeoXML support | Γ£ô [Spatial IO module](how-to-use-spatial-io-module.md) |
-| Well-Known Text support | Γ£ô |
-| Custom map styles | Partial |
-
-Azure Maps also has many additional [open-source modules for the web SDK](open-source-projects.md#open-web-sdk-modules) that extend its capabilities.
+| Bing Maps feature | Azure Maps Web SDK support |
+|--|:--:|
+| Pushpins | ✓ |
+| Pushpin clustering | ✓ |
+| Polylines & Polygons | ✓ |
+| Ground Overlays | ✓ |
+| Heat maps | ✓ |
+| Tile Layers | ✓ |
+| KML Layer | ✓ |
+| Contour layer | [Contour layer code samples] |
+| Data binning layer | Included in the open-source Azure Maps [Gridded Data Source module] |
+| Animated tile layer | Included in the open-source Azure Maps [Animation module] |
+| Drawing tools | ✓ |
+| Geocoder service | ✓ |
+| Directions service | ✓ |
+| Distance Matrix service | ✓ |
+| Spatial Data service | N/A |
+| Satellite/Aerial imagery | ✓ |
+| Birds eye imagery | N/A |
+| Streetside imagery | N/A |
+| GeoJSON support | ✓ |
+| GeoXML support | ✓ [Spatial IO module] |
+| Well-Known Text support | ✓ |
+| Custom map styles | Partial |
+
+Azure Maps has more [open-source modules for the web SDK] that extend its capabilities.
## Notable differences in the web SDKs The following are some of the key differences between the Bing Maps and Azure Maps Web SDKs to be aware of:
-* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is also available for embedding the Web SDK into apps if preferred. For more information, see this [documentation](./how-to-use-map-control.md) for more information. This package also includes TypeScript definitions.
-* Bing Maps provides two hosted branches of their SDK; Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps there you can use the npm module and point to any previous minor version release.
+* In addition to providing a hosted endpoint for accessing the Azure Maps Web SDK, an npm package is available for embedding the Web SDK into apps if preferred. For more information, see [Use the Azure Maps map control] in the Web SDK documentation. This package also includes TypeScript definitions.
+* Bing Maps provides two hosted branches of its SDK: Release and Experimental. The Experimental branch may receive multiple updates a day when new development is taking place. Azure Maps only hosts a release branch, however experimental features are created as custom modules in the open-source Azure Maps code samples project. Bing Maps used to have a frozen branch as well that was updated less frequently, thus reducing the risk of breaking changes due to a release. In Azure Maps, you can use the npm module and point to any previous minor version release.
> [!TIP]
-> Azure Maps publishes both minified and unminified versions of the SDK. Simple remove `.min` from the file names. The unminified version is useful when debugging issues but be sure to use the minified version in production to take advantage of the smaller file size.
+> Azure Maps publishes both minified and unminified versions of the SDK. Simply remove `.min` from the file names. The unminified version is useful when debugging issues but be sure to use the minified version in production to take advantage of the smaller file size.
-* After creating an instance of the Map class in Azure Maps, your code should wait for the maps `ready` or `load` event to fire before interacting with the map. These events ensure that all the map resources have been loaded and are ready to be accessed.
-* Both platforms use a similar tiling system for the base maps, however the tiles in Bing Maps are 256 pixels in dimension while the tiles in Azure Maps are 512 pixels in dimension. As such, to get the same map view in Azure Maps as Bing Maps, a zoom level used in Bing Maps needs to be subtracted by one in Azure Maps.
+* Once an instance of the Map class is created in Azure Maps, your code should wait for the maps `ready` or `load` event to fire before interacting with the map. These events ensure that all the map resources are loaded and ready to be accessed.
+* Both platforms use a similar tiling system for the base maps; however, the tiles in Bing Maps are 256 pixels while the tiles in Azure Maps are 512 pixels. To get the same map view in Azure Maps as Bing Maps, subtract one from the Bing Maps zoom level.
* Coordinates in Bing Maps are referred to as `latitude, longitude` while Azure Maps uses `longitude, latitude`. This format aligns with the standard `[x, y]` that is followed by most GIS platforms.
-* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data namespace](/javascript/api/azure-maps-control/atlas.data). There is also the [atlas.Shape](/javascript/api/azure-maps-control/atlas.shape) class that can be used to wrap GeoJSON objects and make them easy to update and maintain in a data bindable way.
+* Shapes in the Azure Maps Web SDK are based on the GeoJSON schema. Helper classes are exposed through the [atlas.data namespace]. There's also the [atlas.Shape] class that can be used to wrap GeoJSON objects and make them easy to update and maintain in a data bindable way.
* Coordinates in Azure Maps are defined as Position objects that can be specified as a simple number array in the format `[longitude, latitude]` or `new atlas.data.Position(longitude, latitude)`. > [!TIP]
-> The Position class has a static helper function for importing coordinates that are in `latitude, longitude` format. The [atlas.data.Position.fromLatLng](/javascript/api/azure-maps-control/atlas.data.position)function can often be replace the `new Microsoft.Maps.Location` function in Bing Maps code.
+> The Position class has a static helper function for importing coordinates that are in `latitude, longitude` format. The [atlas.data.Position.fromLatLng] function can often replace the `new Microsoft.Maps.Location` function in Bing Maps code.
-* Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in data sources and is connected to rendering layers that Azure Maps code uses to render the data. This approach provides enhanced performance benefit. Additionally, many layers support data-driven styling where business logic can be added to layer style options that will change how individual shapes are rendered within a layer based on properties defined in the shape.
-* Azure Maps provides a bunch of useful spatial math functions in the `atlas.math` namespace, however these differ from those in the Bing Maps spatial math module. The primary difference is that Azure Maps doesnΓÇÖt provide built-in functions for binary operations such as union and intersection, however, since Azure Maps is based on GeoJSON that is an open standard, there are many open-source libraries available. One popular option that works well with Azure Maps and provides a ton of spatial math capabilities is [turf js](https://turfjs.org/).
+* Rather than specifying styling information on each shape that is added to the map, Azure Maps separates styles from the data. Data is stored in data sources and is connected to rendering layers that Azure Maps code uses to render the data. This approach provides an enhanced performance benefit. Many layers support data-driven styling, done by adding business logic to layer style options that change how individual shapes are rendered within a layer depending on their properties.
+* Azure Maps provides spatial math functions in the `atlas.math` namespace that differ from Bing Maps spatial math functions. The primary difference is that Azure Maps doesn't provide built-in functions for binary operations such as `union` and `intersection`. However, Azure Maps is based on the open GeoJSON standard and there are open-source libraries available. One popular option that works well with Azure Maps and provides spatial math capabilities is [turf js].
-See also the [Azure Maps Glossary](./glossary.md) for an in-depth list of terminology associated with Azure Maps.
+For more information on terminology related to Azure Maps, see the [Azure Maps Glossary].
## Web SDK side-by-side examples
-The following is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions as an additional option through an [npm module](./how-to-use-map-control.md).
+The following list is a collection of code samples for each platform that cover common use cases to help you migrate your web application from Bing Maps V8 JavaScript SDK to the Azure Maps Web SDK. Code samples related to web applications are provided in JavaScript; however, Azure Maps also provides TypeScript definitions in an npm module. For more information about TypeScript definitions, see [Use the Azure Maps map control].
**Topics**
-* [Load a map](#load-a-map)
-* [Localizing the map](#localizing-the-map)
-* [Setting the map view](#setting-the-map-view)
-* [Adding a pushpin](#adding-a-pushpin)
-* [Adding a custom pushpin](#adding-a-custom-pushpin)
-* [Adding a polyline](#adding-a-polyline)
-* [Adding a polygon](#adding-a-polygon)
-* [Display an infobox](#display-an-infobox)
-* [Pushpin clustering](#pushpin-clustering)
-* [Add a heat map](#add-a-heat-map)
-* [Overlay a tile layer](#overlay-a-tile-layer)
-* [Show traffic data](#show-traffic-data)
-* [Add a ground overlay](#add-a-ground-overlay)
-* [Add KML data to the map](#add-kml-data-to-the-map)
-* [Add drawing tools](#add-drawing-tools)
+* [Load a map]
+* [Localizing the map]
+* [Setting the map view]
+* [Adding a pushpin]
+* [Adding a custom pushpin]
+* [Adding a polyline]
+* [Adding a polygon]
+* [Display an infobox]
+* [Pushpin clustering]
+* [Add a heat map]
+* [Overlay a tile layer]
+* [Show traffic data]
+* [Add a ground overlay]
+* [Add KML data to the map]
+* [Add drawing tools]
### Load a map Loading a map in both SDKs follows the same set of steps; * Add a reference to the Map SDK.
-* Add a `div` tag to the body of the page that will act as a placeholder for the map.
+* Add a `div` tag to the body of the page that acts as a placeholder for the map.
* Create a JavaScript function that gets called when the page has loaded.
* Create an instance of the respective map class.
-**Some key differences**
+**Key differences**
-* Bing maps requires an account key to be specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class. This can be a subscription key or Azure Active Directory information.
+* Bing Maps requires an account key specified in the script reference of the API or as a map option. Authentication credentials for Azure Maps are specified as options of the map class as either [Shared Key authentication] or [Azure Active Directory].
* Bing Maps takes in a callback function in the script reference of the API that is used to call an initialization function to load the map. With Azure Maps, the onload event of the page should be used.
-* When using an ID to reference the `div` element that the map will be rendered in, Bing Maps uses an HTML selector (i.e.`#myMap`), whereas Azure Maps only uses the ID value (i.e. `myMap`).
+* When using an ID to reference the `div` element that the map is rendered in, Bing Maps uses an HTML selector (`#myMap`), whereas Azure Maps only uses the ID value (`myMap`).
* Coordinates in Azure Maps are defined as Position objects that can be specified as a simple number array in the format `[longitude, latitude]`.
* The zoom level in Azure Maps is one level lower than the Bing Maps example due to the difference in tiling system sizes between the platforms.
* By default, Azure Maps doesn't add any navigation controls to the map canvas, such as zoom buttons and map style buttons. There are however controls for adding a map style picker, zoom buttons, compass or rotation control, and a pitch control.
-* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This will fire when the map has finished loading the WebGL context and all resources needed. Any post load code can be added in this event handler.
+* An event handler is added in Azure Maps to monitor the `ready` event of the map instance. This fires when the map has finished loading the WebGL context and all needed resources. Any post-load code can be added in this event handler.
-The examples below show how to load a basic map such that is centered over New York at coordinates (longitude: -73.985, latitude: 40.747) and is at zoom level 12 in Bing Maps.
+The following examples demonstrate loading a basic map centered over New York at coordinates (longitude: -73.985, latitude: 40.747), at zoom level 12 in Bing Maps.
**Before: Bing Maps**
The following code is an example of how to display a Bing Map centered and zoome
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Bing Maps map](media/migrate-bing-maps-web-app/bing-maps-load-map.jpg)
The following code shows how to load a map with the same view in Azure Maps alon
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Azure Maps map](media/migrate-bing-maps-web-app/azure-maps-load-map.jpg)
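A minimal sketch of the Azure Maps side (the subscription key is a placeholder):

```javascript
// Create a map centered over New York; the zoom level is one lower than in Bing Maps.
var map = new atlas.Map('myMap', {
    center: [-73.985, 40.747],
    zoom: 11,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});

// Wait for the ready event before interacting with the map.
map.events.add('ready', function () {
    // Post-load code goes here.
});
```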
-Detailed documentation on how to set up and use the Azure Maps map control in a web app can be found [here](how-to-use-map-control.md).
+For more information on how to set up and use the Azure Maps map control in a web app, see [Use the Azure Maps map control].
> [!TIP] > Azure Maps publishes both minified and unminified versions of the SDK. Remove `.min` from the file names. The unminified version is useful when debugging issues but be sure to use the minified version in production to take advantage of the smaller file size.
-**Additional resources**
+**More resources**
-* Azure Maps also provides navigation controls for rotating and pitching the map view as documented [here](map-add-controls.md).
+* For more information on Azure Maps navigation controls for rotating and pitching a map, see [Add controls to a map].
### Localizing the map
in certain markets, as such the market of the user is specified using the `setMk
<script type="text/javascript" src="https://www.bing.com/api/maps/mapcontrol?callback=initMap&setLang={language-code}&setMkt={market}&UR={region-code}" async defer></script> ```
-Here is an example of Bing Maps with the language set to "fr-FR".
+Here's an example of Bing Maps with the language set to "fr-FR".
![Localized Bing Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg) **After: Azure Maps**
-Azure Maps only provides options for setting the language and regional view of the map. A market parameter is not used to limit features. There are two different ways of setting the language and regional view of the map. The first option is to add this information to the global `atlas` namespace that will result in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to `"Auto"`:
+Azure Maps only provides options for setting the language and regional view of the map. A market parameter isn't used to limit features. There are two different ways of setting the language and regional view of the map. The first option is to add this information to the global `atlas` namespace, which results in all map control instances in your app defaulting to these settings. The following sets the language to French ("fr-FR") and the regional view to `"Auto"`:
```javascript atlas.setLanguage('fr-FR');
map = new atlas.Map('myMap', {
``` > [!NOTE]
-> With Azure Maps it is possible to load multiple map instances on the same page with different language and region settings. Additionally, it is also possible to update these settings in the map after it has loaded. A detailed list of supported languages in Azure Maps can be found [here](./supported-languages.md).
+> Azure Maps can load multiple map instances on the same page with different language and region settings. It is also possible to update these settings in the map after it has loaded. For a list of supported languages in Azure Maps, see [Localization support in Azure Maps].
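A sketch of the second option, passing the same settings per map instance through the `language` and `view` map options (the subscription key is a placeholder):

```javascript
var map = new atlas.Map('myMap', {
    language: 'fr-FR',  // Language of the map labels.
    view: 'Auto',       // Regional view setting.
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});
```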
-Here is an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
+Here's an example of Azure Maps with the language set to "fr" and the user region set to "fr-FR".
![Localized Azure Maps map](media/migrate-bing-maps-web-app/bing-maps-localized-map.jpg) ### Setting the map view
-Dynamic maps in both Bing and Azure Maps can be programmatically moved to new geographic locations by calling the appropriate functions in JavaScript. The examples below show how to make the map display satellite aerial imagery, center the map over a location with coordinates (longitude: -111.0225, latitude: 35.0272) and change the zoom level to 15 in Bing Maps.
+Dynamic maps in both Bing and Azure Maps can be programmatically moved to new geographic locations by calling the appropriate functions in JavaScript. The following example demonstrates a map displaying satellite aerial imagery, centered over a location with coordinates (longitude: -111.0225, latitude: 35.0272), with the zoom level set to 15 in Bing Maps.
> [!NOTE] > Bing Maps uses tiles that are 256 pixels in dimensions while Azure Maps uses a larger 512-pixel tile. This reduces the number of network requests needed by Azure Maps to load the same map area as Bing Maps. However, due to the way tile pyramids work in map controls, the larger tiles in Azure Maps means that to achieve that same viewable area as a map in Bing Maps, you need to subtract the zoom level used in Bing Maps by 1 when using Azure Maps.
map.setView({
**After: Azure Maps**
-In Azure Maps, the map position can be changed programmatically by using the `setCamera` function of the map and the map style can be changed using the `setStyle` function. Note that the coordinates in Azure Maps are in "longitude, latitude" format, and the zoom level value is subtracted by 1.
+In Azure Maps, the map position can be changed programmatically by using the `setCamera` function of the map and the map style can be changed using the `setStyle` function. The coordinates in Azure Maps are in "longitude, latitude" format, and the zoom level value is subtracted by 1.
```javascript map.setCamera({
map.setStyle({
![Azure Maps set map view](media/migrate-bing-maps-web-app/azure-maps-set-map-view.jpg)
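A sketch of both calls together (zoom 14 is the Bing Maps zoom of 15 minus one):

```javascript
// Center the map over (longitude: -111.0225, latitude: 35.0272).
map.setCamera({
    center: [-111.0225, 35.0272],
    zoom: 14
});

// Switch the base map to satellite imagery.
map.setStyle({
    style: 'satellite'
});
```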
-**Additional resources**
+**More resources**
-* [Choose a map style](./choose-map-style.md)
-* [Supported map styles](./supported-map-styles.md)
+* [Choose a map style]
+* [Supported map styles]
### Adding a pushpin
-In Azure Maps there are multiple ways that point data can be rendered on the map;
+In Azure Maps, there are multiple ways that point data can be rendered on the map:
* HTML Markers – Renders points using traditional DOM elements. HTML Markers support dragging.
* Symbol Layer – Renders points with an icon and/or text within the WebGL context.
* Bubble Layer – Renders points as circles on the map. The radii of the circles can be scaled based on properties in the data.
-Both Symbol and Bubble layers are rendered within the WebGL context and are capable of rendering very large sets of points on the map. These layers require data to be stored in a data source. Data sources and rendering layers should be added to the map after the `ready` event has fired. HTML Markers are rendered as DOM elements within the page and donΓÇÖt use a data source. The more DOM elements a page has, the slower the page becomes. If rendering more than a few hundred points on a map, it is recommended to use one of the rendering layers instead.
+Both Symbol and Bubble layers are rendered within the WebGL context and are capable of rendering large sets of points on the map. These layers require data to be stored in a data source. Data sources and rendering layers should be added to the map after the `ready` event has fired. HTML Markers are rendered as DOM elements within the page and don't use a data source. The more DOM elements a page has, the slower the page becomes. If rendering more than a few hundred points on a map, it's recommended to use one of the rendering layers instead.
-The examples below add a marker to the map at (longitude: -0.2, latitude: 51.5) with the number 10 overlaid as a label.
+The following examples add a marker to the map at (longitude: -0.2, latitude: 51.5) with the number 10 overlaid as a label.
**Before: Bing Maps**
map.entities.add(pushpin);
**After: Azure Maps using HTML Markers**
-In Azure Maps, HTML markers can be used to easily display a point on the map and are recommended for simple apps that only need to display a small number of points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class, set the text and position options, and add the marker to the map using the `map.markers.add` function.
+In Azure Maps, HTML markers can be used to easily display a point on the map and are recommended for simple apps that only need to display a few points on the map. To use an HTML marker, create an instance of the `atlas.HtmlMarker` class, set the text and position options, and add the marker to the map using the `map.markers.add` function.
```javascript //Create a HTML marker and add it to the map.
map.markers.add(new atlas.HtmlMarker({
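A minimal sketch of the complete call, using the coordinates from the example above:

```javascript
// Create an HTML marker labeled "10" and add it to the map.
map.markers.add(new atlas.HtmlMarker({
    text: '10',
    position: [-0.2, 51.5]
}));
```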
**After: Azure Maps using a Symbol Layer**
-When using a Symbol layer, the data must be added to a data source, and the data source attached to the layer. Additionally, the data source and layer should be added to the map after the `ready` event has fired. To render a unique text value above a symbol, the text information needs to be stored as a property of the data point and that property referenced in the `textField` option of the layer. This is a bit more work than using HTML markers but provides many performance advantages.
+When using a Symbol layer, the data must be added to a data source, and the data source attached to the layer. Additionally, the data source and layer should be added to the map after the `ready` event has fired. To render a unique text value above a symbol, the text information needs to be stored as a property of the data point and that property referenced in the `textField` option of the layer. This is a bit more work than using HTML markers but provides performance advantages.
```html <!DOCTYPE html>
When using a Symbol layer, the data must be added to a data source, and the data
![Azure Maps add symbol layer](media/migrate-bing-maps-web-app/azure-maps-add-pushpin.jpg)
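A sketch of the core JavaScript from that page (the `label` property name is illustrative):

```javascript
map.events.add('ready', function () {
    // Create a data source and add it to the map.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    // Store the text to display as a property of the data point.
    datasource.add(new atlas.data.Feature(new atlas.data.Point([-0.2, 51.5]), {
        label: '10'
    }));

    // Reference the property in the textField option of the symbol layer.
    map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
        textOptions: {
            textField: ['get', 'label']
        }
    }));
});
```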
-**Additional resources**
+**More resources**
-* [Create a data source](./create-data-source-web-sdk.md)
-* [Add a Symbol layer](./map-add-pin.md)
-* [Add a Bubble layer](./map-add-bubble-layer.md)
-* [Cluster point data](./clustering-point-data-web-sdk.md)
-* [Add HTML Markers](./map-add-custom-html.md)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
-* [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)
-* [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)
-* [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)
-* [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Cluster point data]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a custom pushpin
map.layers.insert(layer);
**After: Azure Maps using HTML Markers**
-To customize an HTML marker in Azure Maps an HTML `string` or `HTMLElement` can be passed into the `htmlContent` option of the marker. In Azure Maps, an `anchor` option is used to specify the relative position of the marker relative to the position coordinate using one of nine defined reference points; "center", "top", "bottom", "left", "right", "top-left", "top-right", "bottom-left", "bottom-right". The content is anchored and is set to "bottom" by default, that is the bottom center of the html content. To make it easier to migrate code from Bing Maps, set the anchor to "top-left", and then use the `offset` option with the same offset used in Bing Maps. The offsets in Azure Maps move in the opposite direction of Bing Maps, so multiply them by minus one.
+To customize an HTML marker in Azure Maps, an HTML `string` or `HTMLElement` can be passed into the `htmlContent` option of the marker. An `anchor` option specifies the position of the marker relative to the position coordinate using one of nine defined reference points: "center", "top", "bottom", "left", "right", "top-left", "top-right", "bottom-left", "bottom-right". The content is anchored at "bottom" (the bottom center of the HTML content) by default. To make it easier to migrate code from Bing Maps, set the anchor to "top-left", and then use the `offset` option with the same offset used in Bing Maps. The offsets in Azure Maps move in the opposite direction of Bing Maps, so multiply them by minus one.
> [!TIP] > Add `pointer-events:none` as a style on the HTML content to disable the default drag behavior in MS Edge that will display an unwanted icon.
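A sketch of this migration, assuming a Bing Maps offset of (12, 39) and using the marker's `pixelOffset` option (the image path is a placeholder):

```javascript
map.markers.add(new atlas.HtmlMarker({
    // pointer-events:none disables the unwanted drag icon in MS Edge.
    htmlContent: '<img src="images/pin.png" style="pointer-events:none" />',
    anchor: 'top-left',
    pixelOffset: [-12, -39], // The Bing Maps offset multiplied by minus one.
    position: [-0.2, 51.5]
}));
```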
Symbol layers in Azure Maps support custom images as well, but the image needs t
> [!TIP] > To create advanced custom rendering of points, use multiple rendering layers together. For example, if you want to have multiple pushpins that have the same icon on different colored circles, instead of creating a bunch of images for each color overlay a symbol layer on top of a bubble layer and have them reference the same data source. This will be much more efficient than creating, and having the map maintain a bunch of different images.
-**Additional resources**
+**More resources**
-* [Create a data source](./create-data-source-web-sdk.md)
-* [Add a Symbol layer](./map-add-pin.md)
-* [Add HTML Markers](./map-add-custom-html.md)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
-* [Symbol layer icon options](/javascript/api/azure-maps-control/atlas.iconoptions)
-* [Symbol layer text option](/javascript/api/azure-maps-control/atlas.textoptions)
-* [HTML marker class](/javascript/api/azure-maps-control/atlas.htmlmarker)
-* [HTML marker options](/javascript/api/azure-maps-control/atlas.htmlmarkeroptions)
+* [Create a data source]
+* [Add a Symbol layer]
+* [Add HTML Markers]
+* [Use data-driven style expressions]
+* [Symbol layer icon options]
+* [Symbol layer text option]
+* [HTML marker class]
+* [HTML marker options]
### Adding a polyline
-Polylines are used to represent a line or path on the map. The examples below show how to create a dashed polyline on the map.
+Polylines are used to represent a line or path on the map. The following example demonstrates creating a dashed polyline on the map.
**Before: Bing Maps**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps line](media/migrate-bing-maps-web-app/azure-maps-line.jpg)
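A sketch of the Azure Maps side, assuming the line layer's `strokeDashArray` option (coordinates are placeholders):

```javascript
map.events.add('ready', function () {
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    // Add a line to the data source.
    datasource.add(new atlas.data.LineString([
        [-0.123, 51.514],
        [0.125, 51.514],
        [0.14, 51.5]
    ]));

    // Render it dashed: strokeDashArray alternates dash and gap lengths.
    map.layers.add(new atlas.layer.LineLayer(datasource, null, {
        strokeColor: 'red',
        strokeWidth: 4,
        strokeDashArray: [3, 3]
    }));
});
```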
-**Additional resources**
+**More resources**
-* [Add lines to the map](./map-add-line-layer.md)
-* [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
+* [Add lines to the map]
+* [Line layer options]
+* [Use data-driven style expressions]
### Adding a polygon
-Polygons are used to represent an area on the map. Azure Maps and Bing Maps provide very similar support for polygons. The examples below show how to create a polygon that forms a triangle based on the center coordinate of the map.
+Polygons are used to represent an area on the map. Azure Maps and Bing Maps provide similar support for polygons. The following example shows how to create a polygon that forms a triangle based on the center coordinate of the map.
**Before: Bing Maps**
map.layers.add(new atlas.layer.LineLayer(datasource, null, {
![Azure Maps polygon](media/migrate-bing-maps-web-app/azure-maps-polygon.jpg)
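A sketch of the Azure Maps side, building a triangle ring around the camera center (the 0.1 degree offset is illustrative):

```javascript
map.events.add('ready', function () {
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    // Build a triangle ring around the center of the map.
    var center = map.getCamera().center;
    var d = 0.1;
    datasource.add(new atlas.data.Polygon([[
        [center[0], center[1] + d],
        [center[0] - d, center[1] - d],
        [center[0] + d, center[1] - d],
        [center[0], center[1] + d] // Close the ring.
    ]]));

    // Render the filled area with a polygon layer.
    map.layers.add(new atlas.layer.PolygonLayer(datasource, null, {
        fillColor: 'rgba(0, 150, 50, 0.5)'
    }));
});
```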
-**Additional resources**
+**More resources**
-* [Add a polygon to the map](./map-add-shape.md#use-a-polygon-layer)
-* [Add a circle to the map](./map-add-shape.md#add-a-circle-to-the-map)
-* [Polygon layer options](/javascript/api/azure-maps-control/atlas.polygonlayeroptions)
-* [Line layer options](/javascript/api/azure-maps-control/atlas.linelayeroptions)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
+* [Add a polygon to the map]
+* [Add a circle to the map]
+* [Polygon layer options]
+* [Line layer options]
+* [Use data-driven style expressions]
### Display an infobox
-Additional information for an entity can be displayed on the map as an `Microsoft.Maps.Infobox` class in Bing Maps, in Azure Maps this can be achieved using the `atlas.Popup` class. The examples below add a pushpin/marker to the map, and when clicked, displays an infobox/popup.
+In Bing Maps, more information for an entity can be displayed on the map using the `Microsoft.Maps.Infobox` class; in Azure Maps, this is achieved using the `atlas.Popup` class. The following example adds a pushpin/marker to the map that, when selected, displays an infobox/popup.
**Before: Bing Maps**
Microsoft.Maps.Events.addHandler(pushpin, 'click', function () {
**After: Azure Maps**
-In Azure Maps, a popup can be used to display additional information for a location. An HTML `string` or `HTMLElement` object can be passed into the `content` option of the popup. Popups can be displayed independently of any shape if desired and thus require a `position` value to be specified. To display a popup, call the `open` function and pass in the map that the popup is to be displayed on.
+In Azure Maps, a popup can be used to display more information for a location. An HTML `string` or `HTMLElement` object can be passed into the `content` option of the popup. Popups can be displayed independently of any shape if desired and thus require a `position` value to be specified. To display a popup, call the `open` function and pass in the map that the popup is to be displayed on.
```javascript //Add a marker to the map to display a popup for.
map.events.add('click', marker, function () {
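A sketch of the marker-plus-popup wiring (the content and coordinates are placeholders):

```javascript
// Add a marker to the map to display a popup for.
var marker = new atlas.HtmlMarker({ position: [-0.2, 51.5] });
map.markers.add(marker);

// Create a popup anchored just above the marker.
var popup = new atlas.Popup({
    content: '<div style="padding:10px">Hello world</div>',
    position: [-0.2, 51.5],
    pixelOffset: [0, -35]
});

// Open the popup when the marker is clicked.
map.events.add('click', marker, function () {
    popup.open(map);
});
```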
> [!NOTE] > To do the same thing with a symbol, bubble, line or polygon layer, pass the layer into the maps event code instead of a marker.
-**Additional resources**
+**More resources**
-* [Add a popup](./map-add-popup.md)
-* [Popup with Media Content](https://samples.azuremaps.com/?sample=popup-with-media-content)
-* [Popups on Shapes](https://samples.azuremaps.com/?sample=popups-on-shapes)
-* [Reusing Popup with Multiple Pins](https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins)
-* [Popup class](/javascript/api/azure-maps-control/atlas.popup)
-* [Popup options](/javascript/api/azure-maps-control/atlas.popupoptions)
+* [Add a popup]
+* [Popup with Media Content]
+* [Popups on Shapes]
+* [Reusing Popup with Multiple Pins]
+* [Popup class]
+* [Popup options]
### Pushpin clustering When visualizing many data points on the map, points overlap each other, the map looks cluttered and it becomes difficult to see and use. Clustering of point data can be used to improve this user experience and also improve performance. Clustering point data is the process of combining point data that are near each other and representing them on the map as a single clustered data point. As the user zooms into the map, the clusters break apart into their individual data points.
-The examples below load a GeoJSON feed of earthquake data from the past week and add it to the map. Clusters are rendered as scaled and colored circles depending on the number of points they contain.
+The following example loads a GeoJSON feed of earthquake data from the past week and adds it to the map. Clusters are rendered as scaled and colored circles depending on the number of points they contain.
> [!NOTE]
-> There are several different algorithms used for pushpin clustering. Bing Maps uses a simple grid-based function, while Azure Maps uses a more advanced and visually appealing point-based clustering methods.
+> There are several different algorithms used for pushpin clustering. Bing Maps uses a simple grid-based function, while Azure Maps uses a more advanced and visually appealing point-based clustering method.
**Before: Bing Maps**
In Bing Maps, GeoJSON data can be loaded using the GeoJSON module. Pushpins can
In Azure Maps, data is added and managed by a data source. Layers connect to data sources and render the data in them. The `DataSource` class in Azure Maps provides several clustering options.
-* `cluster` ΓÇô Tells the data source to cluster point data.
+* `cluster` – Tells the data source to cluster point data.
* `clusterRadius` - The radius in pixels to cluster points together.
-* `clusterMaxZoom` - The maximum zoom level that clustering occurs. If you zoom in more than this, all points are rendered as symbols.
+* `clusterMaxZoom` - The maximum zoom level at which clustering occurs. Any additional zooming results in all points being rendered as symbols.
* `clusterProperties` - Defines custom properties that are calculated using expressions against all the points within each cluster and added to the properties of each cluster point.
-When clustering is enabled, the data source will send clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties on it:
+When clustering is enabled, the data source sends clustered and unclustered data points to layers for rendering. The data source is capable of clustering hundreds of thousands of data points. A clustered data point has the following properties on it:
| Property name | Type | Description |
|--|--|--|
| `cluster` | boolean | Indicates if feature represents a cluster. |
| `cluster_id` | string | A unique ID for the cluster that can be used with the `DataSource` classes `getClusterExpansionZoom`, `getClusterChildren`, and `getClusterLeaves` functions. |
| `point_count` | number | The number of points the cluster contains. |
-| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it is long. (for example, 4,000 becomes 4K) |
+| `point_count_abbreviated` | string | A string that abbreviates the `point_count` value if it's long. (for example, 4,000 becomes 4K) |
The `DataSource` class has the following helper functions for accessing additional information about a cluster using the `cluster_id`.

| Function | Return type | Description |
|--|--|--|
-| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and sub-clusters. The sub-clusters will be features with properties matching cluster properties. |
-| `getClusterExpansionZoom(clusterId: number)` | `Promise<number>` | Calculates a zoom level that the cluster will start expanding or break apart. |
+| `getClusterChildren(clusterId: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves the children of the given cluster on the next zoom level. These children may be a combination of shapes and subclusters. The subclusters are features with properties matching cluster properties. |
+| `getClusterExpansionZoom(clusterId: number)` | `Promise<number>` | Calculates the zoom level at which the cluster starts expanding or breaking apart. |
| `getClusterLeaves(clusterId: number, limit: number, offset: number)` | `Promise<Feature<Geometry, any> | Shape>` | Retrieves all points in a cluster. Set the `limit` to return a subset of the points and use the `offset` to page through the points. |
-When rendering clustered data on the map, it is often easiest to use two or more layers. The example below uses three layers, a bubble layer for drawing scaled colored circles based on the size of the clusters, a symbol layer to render the cluster size as text, and a second symbol layer for rendering the unclustered points. There are many other ways to render clustered data in Azure Maps highlighted in the [Cluster point data](./clustering-point-data-web-sdk.md) documentation.
+When rendering clustered data on the map, it's often easiest to use two or more layers. The following example uses three layers: a bubble layer for drawing scaled, colored circles based on the size of the clusters, a symbol layer to render the cluster size as text, and a second symbol layer for rendering the unclustered points. For more information on rendering clustered data in Azure Maps, see [Clustering point data in the Web SDK].
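Here's a sketch of that three-layer approach; the size and color breakpoints are illustrative, not prescriptive:

```javascript
// A bubble layer that draws scaled, colored circles for clusters only.
var clusterBubbleLayer = new atlas.layer.BubbleLayer(datasource, null, {
    // Scale the radius in steps with the number of points in the cluster.
    radius: ['step', ['get', 'point_count'], 20, 100, 30, 750, 40],

    // Change the color as clusters grow: green, then orange, then red.
    color: ['step', ['get', 'point_count'], 'green', 100, 'orange', 750, 'red'],
    strokeWidth: 0,

    // Only render clustered points in this layer.
    filter: ['has', 'point_count']
});

map.layers.add([
    clusterBubbleLayer,

    // A symbol layer that renders the cluster size as text over the bubbles.
    new atlas.layer.SymbolLayer(datasource, null, {
        iconOptions: { image: 'none' },
        textOptions: {
            textField: ['get', 'point_count_abbreviated'],
            offset: [0, 0.4]
        },
        filter: ['has', 'point_count']
    }),

    // A second symbol layer that renders the individual, unclustered points.
    new atlas.layer.SymbolLayer(datasource, null, {
        filter: ['!', ['has', 'point_count']]
    })
]);
```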
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl` function on the `DataSource` class.
GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl
![Azure Maps clustering](media/migrate-bing-maps-web-app/azure-maps-clustering.jpg)
-**Additional resources**
+**More resources**
-* [Add a Symbol layer](./map-add-pin.md)
-* [Add a Bubble layer](./map-add-bubble-layer.md)
-* [Cluster point data](./clustering-point-data-web-sdk.md)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
+* [Add a Symbol layer]
+* [Add a Bubble layer]
+* [Cluster point data]
+* [Use data-driven style expressions]
### Add a heat map Heat maps, also known as point density maps, are a type of data visualization used to represent the density of data using a range of colors. They're often used to show the data "hot spots" on a map and are a great way to render large point data sets.
-The examples below load a GeoJSON feed of all earthquakes over the past month from the USGS and renders them as a heat map.
+The following example loads a GeoJSON feed of all earthquakes over the past month from the USGS, rendered as a heat map.
**Before: Bing Maps**
In Azure Maps, load the GeoJSON data into a data source and connect the data sou
![Azure Maps heatmap](media/migrate-bing-maps-web-app/azure-maps-heatmap.jpg)
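A sketch of the Azure Maps side of this example follows; the USGS all-month earthquake feed URL is assumed from the public USGS GeoJSON feeds, and the weighting and radius values are arbitrary:

```javascript
map.events.add('ready', function () {
    // Create a data source and add it to the map.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);

    // Load the USGS GeoJSON feed of all earthquakes from the past month.
    datasource.importDataFromUrl('https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_month.geojson');

    // Render the points as a heat map, weighting each point by its magnitude.
    map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
        weight: ['get', 'mag'],
        radius: 10,
        opacity: 0.8
    }), 'labels'); // Insert below the label layer so place names stay readable.
});
```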
-**Additional resources**
+**More resources**
-* [Add a heat map layer](./map-add-heat-map-layer.md)
-* [Heat map layer class](/javascript/api/azure-maps-control/atlas.layer.heatmaplayer)
-* [Heat map layer options](/javascript/api/azure-maps-control/atlas.heatmaplayeroptions)
-* [Use data-driven style expressions](./data-driven-style-expressions-web-sdk.md)
+* [Add a heat map layer]
+* [Heat map layer class]
+* [Heat map layer options]
+* [Use data-driven style expressions]
### Overlay a tile layer
-Tile layers allow you to overlay large images that have been broken up into smaller tiled images that align with the maps tiling system. This is a common way to overlay large images or very large data sets.
+Tile layers allow you to overlay large images that have been broken up into smaller tiled images that align with the map's tiling system. This is a common way to overlay large images or large data sets.
-The examples below overlay a weather radar tile layer from Iowa Environmental Mesonet of Iowa State University that uses an X, Y, Zoom tiling naming schema.
+The following example overlays a weather radar tile layer from Iowa Environmental Mesonet of Iowa State University that uses an X, Y, Zoom tiling naming schema.
**Before: Bing Maps**
map.layers.insert(weatherTileLayer);
In Azure Maps, a tile layer can be added to the map in much the same way as any other layer. A formatted URL with x, y, zoom placeholders (`{x}`, `{y}`, `{z}` respectively) is used to tell the layer where to access the tiles. Azure Maps tile layers also support `{quadkey}`, `{bbox-epsg-3857}` and `{subdomain}` placeholders. > [!TIP]
-> In Azure Maps layers can easily be rendered below other layers, including base map layers. Often it is desirable to render tile layers below the map labels so that they are easy to read. The `map.layers.add` function takes in a second parameter that is the ID of a second layer to insert the new layer below. To insert a tile layer below the map labels the following code can be used:
+> In Azure Maps, layers can be rendered below other layers, including base map layers. Often it's desirable to render tile layers below the map labels so that they're easy to read. The `map.layers.add` function takes a second parameter that is the ID of a second layer to insert the new layer below. To insert a tile layer below the map labels, the following code can be used:
> > `map.layers.add(myTileLayer, "labels");`
map.layers.add(new atlas.layer.TileLayer({
> [!TIP] > Tile requests can be captured using the `transformRequest` option of the map. This will allow you to modify or add headers to the request if desired.
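Putting the pieces together, here's a sketch of adding a radar tile layer below the labels; the Iowa Environmental Mesonet tile URL is an assumption based on its publicly documented tile cache:

```javascript
map.events.add('ready', function () {
    // Add the weather radar tile layer below the map labels.
    map.layers.add(new atlas.layer.TileLayer({
        // A formatted URL with {z}, {x}, {y} placeholders for the tiling system.
        tileUrl: 'https://mesonet.agron.iastate.edu/cache/tile.py/1.0.0/nexrad-n0q-900913/{z}/{x}/{y}.png',
        opacity: 0.8,
        tileSize: 256
    }), 'labels');
});
```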
-**Additional resources**
+**More resources**
-* [Add tile layers](./map-add-tile-layer.md)
-* [Tile layer class](/javascript/api/azure-maps-control/atlas.layer.tilelayer)
-* [Tile layer options](/javascript/api/azure-maps-control/atlas.tilelayeroptions)
+* [Add tile layers]
+* [Tile layer class]
+* [Tile layer options]
### Show traffic data
map.setTraffic({
![Azure Maps traffic](media/migrate-bing-maps-web-app/azure-maps-traffic.jpg)
-If you click on one of the traffic icons in Azure Maps, additional information is displayed in a popup.
+If you select one of the traffic icons in Azure Maps, more information displays in a popup.
![Azure Maps traffic popup](media/migrate-bing-maps-web-app/azure-maps-traffic-popup.jpg)
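For reference, a minimal sketch of turning on traffic in Azure Maps:

```javascript
// Show traffic incidents, and color road segments by flow relative to free-flow speed.
// Other flow values are 'none', 'absolute', and 'relative-delay'.
map.setTraffic({
    incidents: true,
    flow: 'relative'
});
```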
-**Additional resources**
+**More resources**
-* [Show traffic on the map](./map-show-traffic.md)
-* [Traffic overlay options](https://samples.azuremaps.com/?sample=traffic-overlay-options)
-* [Traffic control](https://samples.azuremaps.com/?sample=traffic-controls)
+* [Show traffic on the map]
+* [Traffic overlay options]
+* [Traffic control]
### Add a ground overlay
-Both Bing and Azure maps support overlaying georeferenced images on the map so that they move and scale as you pan and zoom the map. In Bing Maps these are known as ground overlays while in Azure Maps they are referred to as image layers. These are great for building floor plans, overlaying old maps, or imagery from a drone.
+Both Bing and Azure maps support overlaying georeferenced images on the map so that they move and scale as you pan and zoom the map. In Bing Maps these are known as ground overlays; in Azure Maps they're referred to as image layers. Image layers are great for building floor plans, overlaying old maps, or rendering imagery from a drone.
**Before: Bing Maps**
-When creating a ground overlay in Bing Maps you need to specify the URL to the image to overlay and a bounding box to bind the image to on the map. This example overlays a map image of Newark New Jersey from 1922 on the map.
+To create a ground overlay in Bing Maps, you need to specify the URL to the overlay image and a bounding box that binds the image to the map. This example overlays a map image of Newark, New Jersey from 1922 on the map.
```html <!DOCTYPE html>
When creating a ground overlay in Bing Maps you need to specify the URL to the i
</html> ```
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
![Bing Maps ground overlay](media/migrate-bing-maps-web-app/bing-maps-ground-overlay.jpg)
Running this code in a browser will display a map that looks like the following
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.ImageLayer` class. This class requires a URL to an image and a set of coordinates for the four corners of the image. The image must be hosted either on the same domain or have CORS enabled. > [!TIP]
-> If you only have north, south, east, west and rotation information, and not coordinates for each corner of the image, you can use the static [atlas.layer.ImageLayer.getCoordinatesFromEdges](/javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-) function.
+> If you only have north, south, east, west and rotation information, and not coordinates for each corner of the image, you can use the static [atlas.layer.ImageLayer.getCoordinatesFromEdges] function.
```html <!DOCTYPE html>
In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image
![Azure Maps ground overlay](media/migrate-bing-maps-web-app/azure-maps-ground-overlay.jpg)
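A sketch of the image layer creation itself; the image URL and the 1922 Newark corner coordinates are assumptions carried over from the sample scenario:

```javascript
map.events.add('ready', function () {
    // Overlay a georeferenced image by specifying its four corner coordinates
    // in the order: top-left, top-right, bottom-right, bottom-left.
    map.layers.add(new atlas.layer.ImageLayer({
        url: 'newark_nj_1922.jpg',
        coordinates: [
            [-74.22655, 40.773941], // Top-left
            [-74.12544, 40.773941], // Top-right
            [-74.12544, 40.712216], // Bottom-right
            [-74.22655, 40.712216]  // Bottom-left
        ]
    }));
});
```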
-**Additional resources**
+**More resources**
-* [Overlay an image](./map-add-image-layer.md)
-* [Image layer class](/javascript/api/azure-maps-control/atlas.layer.imagelayer)
+* [Overlay an image]
+* [Image layer class]
### Add KML data to the map
Both Azure and Bing maps can import and render KML, KMZ, GeoRSS, GeoJSON and Wel
**Before: Bing Maps**
-Running this code in a browser will display a map that looks like the following image:
+Running this code in a browser displays a map that looks like the following image:
```html <!DOCTYPE html>
Running this code in a browser will display a map that looks like the following
**After: Azure Maps**
-In Azure Maps, GeoJSON is the main data format used in the web SDK, additional spatial data formats can be easily integrated in using the [spatial IO module](/javascript/api/azure-maps-spatial-io/). This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass in a URL, or raw data as string or blob into the `atlas.io.read` function. This will return all the parsed data from the file that can then be added to the map. KML is a bit more complex than most spatial data format as it includes a lot more styling information. The `SpatialDataLayer` class supports rendering majority of these styles, however icons images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORs enabled endpoint, or a proxy service should be passed in as an option into the read function.
+In Azure Maps, GeoJSON is the main data format used in the web SDK. Additional spatial data formats can be integrated using the [spatial IO module]. This module has functions for both reading and writing spatial data and also includes a simple data layer that can easily render data from any of these spatial data formats. To read the data in a spatial data file, pass a URL, or raw data as a string or blob, into the `atlas.io.read` function. This returns all the parsed data from the file, which can then be added to the map. KML is a bit more complex than most spatial data formats because it includes a lot more styling information. The `SpatialDataLayer` class supports rendering most of these styles; however, icon images have to be loaded into the map before loading the feature data, and ground overlays have to be added as layers to the map separately. When loading data via a URL, it should be hosted on a CORS-enabled endpoint, or a proxy service should be passed in as an option into the read function.
```html <!DOCTYPE html>
In Azure Maps, GeoJSON is the main data format used in the web SDK, additional s
![Azure Maps kml](media/migrate-bing-maps-web-app/azure-maps-kml.jpg)
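A sketch of the `atlas.io.read` pattern described above; the KML URL is a placeholder, and the spatial IO module script is assumed to already be referenced by the page:

```javascript
map.events.add('ready', function () {
    // Create a data source and a simple data layer that can render styled KML features.
    var datasource = new atlas.source.DataSource();
    map.sources.add(datasource);
    map.layers.add(new atlas.layer.SimpleDataLayer(datasource));

    // Read a KML file hosted on a CORS-enabled endpoint (placeholder URL).
    atlas.io.read('https://example.com/data.kml').then(function (r) {
        if (r) {
            // Add the parsed feature data to the data source.
            datasource.add(r);

            // If the file included bounding box information, bring the data into view.
            if (r.bbox) {
                map.setCamera({ bounds: r.bbox, padding: 50 });
            }
        }
    });
});
```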
-**Additional resources**
+**More resources**
-* [atlas.io.read function](/javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-)
-* [SimpleDataLayer](/javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer)
-* [SimpleDataLayerOptions](/javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions)
+* [atlas.io.read function]
+* [SimpleDataLayer]
+* [SimpleDataLayerOptions]
### Add drawing tools
-Both Bing and Azure Maps provide a module that adds the ability for the user to draw and edit shapes on the map using the mouse or other input device. They both support drawing pushpins, lines, and polygons. Azure Maps also provides options for drawing circles and rectangles.
+Both Bing and Azure Maps provide a module to enable the user to draw and edit shapes on the map using the mouse or other input devices. They both support drawing pushpins, lines, and polygons. Azure Maps also provides options for drawing circles and rectangles.
**Before: Bing Maps**
In Bing Maps the `DrawingTools` module is loaded using the `Microsoft.Maps.loadM
**After: Azure Maps**
-In Azure Maps the drawing tools module needs to be loaded by loading the JavaScript and CSS files need to be referenced in the app. Once the map has loaded, an instance of the `DrawingManager` class can be created and a `DrawingToolbar` instance attached.
+In Azure Maps, the drawing tools module is loaded by referencing the module's JavaScript and CSS files in the app. Once the map has loaded, an instance of the `DrawingManager` class can be created and a `DrawingToolbar` instance attached.
```html <!DOCTYPE html>
In Azure Maps the drawing tools module needs to be loaded by loading the JavaScr
> [!TIP] > In Azure Maps, the drawing tools provide multiple ways that users can draw shapes. For example, when drawing a polygon the user can click to add each point, or hold the left mouse button down and drag the mouse to draw a path. This can be modified using the `interactionType` option of the `DrawingManager`.
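A sketch of that setup; the toolbar position and style are arbitrary choices:

```javascript
map.events.add('ready', function () {
    // Create the drawing manager and attach a toolbar to the map.
    var drawingManager = new atlas.drawing.DrawingManager(map, {
        toolbar: new atlas.control.DrawingToolbar({
            position: 'top-right',
            style: 'light'
        })
    });
});
```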
-**Additional resources**
+**More resources**
-* [Documentation](./set-drawing-options.md)
-* [Code samples](https://samples.azuremaps.com/#drawing-tools-module)
+* [Use the drawing tools module]
+* [Drawing tools module code samples]
## Additional resources
-Take a look at the [open-source Azure Maps Web SDK modules](open-source-projects.md#open-web-sdk-modules). These modules provide a ton of additional functionality and are fully customizable.
+Take a look at the [open-source Azure Maps Web SDK modules]. These modules provide more functionality and are fully customizable.
Review code samples related to migrating other Bing Maps features:
Learn more about the Azure Maps Web SDK.
> [!div class="nextstepaction"] > [Azure Maps Web SDK Service API reference documentation](/javascript/api/azure-maps-control/)
-## Clean up resources
-
-No resources to be cleaned up.
- ## Next steps Learn more about migrating from Bing Maps to Azure Maps.
Learn more about migrating from Bing Maps to Azure Maps.
> [!div class="nextstepaction"] > [Migrate a web service](migrate-from-bing-maps-web-services.md)
+<!-- End Links -->
+[road tiles]: /rest/api/maps/render/getmaptile
+[satellite tiles]: /rest/api/maps/render/getmapimagerytile
+[Cesium]: https://www.cesium.com/?azure-portal=true
+<!--[Cesium code samples]: https://samples.azuremaps.com/?search=Cesium&azure-portal=true-->
+[Cesium plugin]: /samples/azure-samples/azure-maps-cesium/azure-maps-cesium-js-plugin
+[Leaflet]: https://leafletjs.com/?azure-portal=true
+[Leaflet code samples]: https://samples.azuremaps.com/?search=leaflet&azure-portal=true
+[Leaflet plugin]: /samples/azure-samples/azure-maps-leaflet/azure-maps-leaflet-plugin
+[OpenLayers]: https://openlayers.org/?azure-portal=true
+<!--[OpenLayers code samples]: https://samples.azuremaps.com/?search=openlayers&azure-portal=true-->
+[OpenLayers plugin]: /samples/azure-samples/azure-maps-OpenLayers/azure-maps-OpenLayers-plugin?azure-portal=true
+
+<!-- If developing using a JavaScript framework, one of the following open-source projects may be useful -->
+[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps?azure-portal=true
+[AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components?azure-portal=true
+[Azure Maps React Component]: https://github.com/WiredSolutions/react-azure-maps?azure-portal=true
+[Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps?azure-portal=true
+
+<!-- Key features support -->
+[Contour layer code samples]: https://samples.azuremaps.com/?search=contour&azure-portal=true
+[Gridded Data Source module]: https://github.com/Azure-Samples/azure-maps-gridded-data-source?azure-portal=true
+[Animation module]: https://github.com/Azure-Samples/azure-maps-animations?azure-portal=true
+[Spatial IO module]: how-to-use-spatial-io-module.md
+[open-source modules for the web SDK]: open-source-projects.md#open-web-sdk-modules
+
+<!-- Topics -->
+[Load a map]: #load-a-map
+[Localizing the map]: #localizing-the-map
+[Setting the map view]: #setting-the-map-view
+[Adding a pushpin]: #adding-a-pushpin
+[Adding a custom pushpin]: #adding-a-custom-pushpin
+[Adding a polyline]: #adding-a-polyline
+[Adding a polygon]: #adding-a-polygon
+[Display an infobox]: #display-an-infobox
+[Pushpin clustering]: #pushpin-clustering
+[Add a heat map]: #add-a-heat-map
+[Overlay a tile layer]: #overlay-a-tile-layer
+[Show traffic data]: #show-traffic-data
+[Add a ground overlay]: #add-a-ground-overlay
+[Add KML data to the map]: #add-kml-data-to-the-map
+[Add drawing tools]: #add-drawing-tools
+
+<!-- Additional resources -->
+[Add a heat map layer]: map-add-heat-map-layer.md
+[Heat map layer class]: /javascript/api/azure-maps-control/atlas.layer.heatmaplayer
+[Heat map layer options]: /javascript/api/azure-maps-control/atlas.heatmaplayeroptions
+[Use data-driven style expressions]: data-driven-style-expressions-web-sdk.md
+[Choose a map style]: choose-map-style.md
+[Supported map styles]: supported-map-styles.md
+
+[Create a data source]: create-data-source-web-sdk.md
+[Add a Symbol layer]: map-add-pin.md
+[Add a Bubble layer]: map-add-bubble-layer.md
+[Cluster point data]: clustering-point-data-web-sdk.md
+[Symbol layer icon options]: /javascript/api/azure-maps-control/atlas.iconoptions
+[Symbol layer text option]: /javascript/api/azure-maps-control/atlas.textoptions
+[HTML marker class]: /javascript/api/azure-maps-control/atlas.htmlmarker
+[HTML marker options]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions
+[Add HTML Markers]: map-add-custom-html.md
+
+[Add lines to the map]: map-add-line-layer.md
+[Line layer options]: /javascript/api/azure-maps-control/atlas.linelayeroptions
+
+[Add a polygon to the map]: map-add-shape.md#use-a-polygon-layer
+[Add a circle to the map]: map-add-shape.md#add-a-circle-to-the-map
+[Polygon layer options]: /javascript/api/azure-maps-control/atlas.polygonlayeroptions
+
+[Add a popup]: map-add-popup.md
+[Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content&azure-portal=true
+[Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes&azure-portal=true
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins&azure-portal=true
+[Popup class]: /javascript/api/azure-maps-control/atlas.popup
+[Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions
+
+[Add tile layers]: map-add-tile-layer.md
+[Tile layer class]: /javascript/api/azure-maps-control/atlas.layer.tilelayer
+[Tile layer options]: /javascript/api/azure-maps-control/atlas.tilelayeroptions
+
+[Show traffic on the map]: map-show-traffic.md
+[Traffic overlay options]: https://samples.azuremaps.com/?sample=traffic-overlay-options&azure-portal=true
+[Traffic control]: https://samples.azuremaps.com/?sample=traffic-controls&azure-portal=true
+
+[Overlay an image]: map-add-image-layer.md
+[Image layer class]: /javascript/api/azure-maps-control/atlas.layer.imagelayer
+
+[atlas.io.read function]: /javascript/api/azure-maps-spatial-io/atlas.io#read-stringarraybufferblob--spatialdatareadoptions-
+[SimpleDataLayer]: /javascript/api/azure-maps-spatial-io/atlas.layer.simpledatalayer
+[SimpleDataLayerOptions]: /javascript/api/azure-maps-spatial-io/atlas.simpledatalayeroptions
+
+[Use the drawing tools module]: set-drawing-options.md
+[Drawing tools module code samples]: https://samples.azuremaps.com?azure-portal=true#drawing-tools-module
+
+<!-- -->
+
+[free account]: https://azure.microsoft.com/free/?azure-portal=true
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
+[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
+[Azure Active Directory]: azure-maps-authentication.md#azure-ad-authentication
+[Use the Azure Maps map control]: how-to-use-map-control.md
+[atlas.data namespace]: /javascript/api/azure-maps-control/atlas.data
+[atlas.Shape]: /javascript/api/azure-maps-control/atlas.shape
+[atlas.data.Position.fromLatLng]: /javascript/api/azure-maps-control/atlas.data.position
+[turf js]: https://turfjs.org?azure-portal=true
+[Azure Maps Glossary]: glossary.md
+[Add controls to a map]: map-add-controls.md
+[Localization support in Azure Maps]: supported-languages.md
+[open-source Azure Maps Web SDK modules]: open-source-projects.md#open-web-sdk-modules
+[Clustering point data in the Web SDK]: clustering-point-data-web-sdk.md
+[atlas.layer.ImageLayer.getCoordinatesFromEdges]: /javascript/api/azure-maps-control/atlas.layer.imagelayer#getcoordinatesfromedges-number--number--number--number--number-
azure-maps Migrate From Bing Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md
If you don't have an Azure subscription, create a [free account] before you begi
## Azure Maps platform overview
-Azure Maps provides developers from all industries powerful geospatial capabilities, packed with the freshest mapping data available to provide geographic context for web and mobile applications. Azure Maps is an Azure One API compliant set of REST APIs for Maps, Search, Routing, Traffic, Time Zones, Geofencing, Map Data, Weather Data, and many more services accompanied by both Web and Android SDKs to make development easy, flexible, and portable across multiple platforms. [Azure Maps is also available in Power BI](power-bi-visual-get-started.md).
+Azure Maps provides developers from all industries with powerful geospatial capabilities, packed with the freshest mapping data available to provide geographic context for web and mobile applications. Azure Maps is an Azure One API compliant set of REST APIs for Maps, Search, Routing, Traffic, Time Zones, Geofencing, Map Data, Weather Data, and many more services, accompanied by both Web and Android SDKs to make development easy, flexible, and portable across multiple platforms. [Azure Maps is also available in Power BI].
## High-level platform comparison
-The following table provides a high-level list of Bing Maps features and the relative support for those features in Azure Maps. This list doesnΓÇÖt include additional Azure Maps features such as accessibility, geofencing APIs, traffic services, spatial operations, direct map tile access, and batch services.
+The following table provides a high-level list of Bing Maps features and the relative support for those features in Azure Maps. This list doesn't include other Azure Maps features such as accessibility, geofencing APIs, traffic services, spatial operations, direct map tile access, and batch services.
| Bing Maps feature | Azure Maps support | ||::|
Bing Maps provides basic key-based authentication. Azure Maps provides both basi
## Licensing considerations
-When migrating to Azure Maps from Bing Maps, the following information should be considered with regard to licensing.
+When migrating to Azure Maps from Bing Maps, the following information should be considered regarding licensing.
* Azure Maps charges for the usage of interactive maps based on the number of map tiles loaded, whereas Bing Maps charges for the loading of the map control (sessions). To reduce costs for developers, Azure Maps automatically caches map tiles. One Azure Maps transaction is generated for every 15 map tiles that are loaded. The interactive Azure Maps SDKs use 512-pixel tiles, and on average generates one or less transactions per page view.
-* Azure Maps allows data from its platform to be stored in Azure. Caching and storing results locally is only permitted when the purpose of caching is to reduce latency times of CustomerΓÇÖs application, see [Microsoft Azure terms of use](https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31) for more information.
+* Azure Maps allows data from its platform to be stored in Azure. Caching and storing results locally is only permitted when the purpose of caching is to reduce latency times of the Customer's application. For more information, see [Microsoft Azure terms of use].
Here are some licensing-related resources for Azure Maps:
-* [Azure Maps pricing page](https://azure.microsoft.com/pricing/details/azure-maps/)
-* [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=azure-maps)
-* [Azure Maps term of use](https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA) (Scroll down to the Azure Maps section)
-* [Choose the right pricing tier in Azure Maps](./choose-pricing-tier.md)
+* [Azure Maps pricing page]
+* [Azure pricing calculator]
+* [Azure Maps term of use] (Scroll down to the Azure Maps section)
+* [Choose the right pricing tier in Azure Maps]
## Suggested migration plan Here's an example of a high-level migration plan. 1. Take inventory of what Bing Maps SDKs and services your application is using and verify that Azure Maps provides alternative SDKs and services for you to migrate to.
-2. Create an Azure subscription (if you donΓÇÖt already have one) at [azure.com](https://azure.com).
-3. Create an Azure Maps account ([documentation](./how-to-manage-account-keys.md))
- and authentication key or Azure Active Directory ([documentation](./how-to-manage-authentication.md)).
-4. Migrate your application code.
-5. Test your migrated application.
-6. Deploy your migrated application to production.
+1. Create an Azure subscription (if you don't already have one) at [azure.com].
+1. Create an [Azure Maps account].
+1. Set up authentication using an Azure Maps [subscription key] or [Azure Active Directory authentication].
+1. Migrate your application code.
+1. Test your migrated application.
+1. Deploy your migrated application to production.
## Create an Azure Maps account To create an Azure Maps account and get access to the Azure Maps platform, follow these steps:
-1. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-2. Sign in to the [Azure portal](https://portal.azure.com/).
-3. Create an [Azure Maps account](./how-to-manage-account-keys.md).
-4. [Get your Azure Maps subscription key](./how-to-manage-authentication.md#view-authentication-details) or setup Azure Active Directory authentication for enhanced security.
+1. If you don't have an Azure subscription, create a [free Azure account] before you begin.
+2. Sign in to the [Azure portal].
+3. Create an [Azure Maps account].
+4. Get your Azure Maps [subscription key] or set up [Azure Active Directory authentication] for enhanced security.
## Azure Maps technical resources
-Here is a list of useful technical resources for Azure Maps.
+Here's a list of useful technical resources for Azure Maps.
-* Overview: <https://azure.com/maps>
-* Documentation: <https://aka.ms/AzureMapsDocs>
-* Web SDK Code Samples: <https://aka.ms/AzureMapsSamples>
-* Developer Forums: <https://aka.ms/AzureMapsForums>
-* Videos: <https://aka.ms/AzureMapsVideos>
-* Blog: <https://aka.ms/AzureMapsBlog>
-* Azure Maps Feedback (UserVoice): <https://aka.ms/AzureMapsFeedback>
+* [Azure Maps product page]
+* [Azure Maps product documentation]
+* [Azure Maps code samples]
+* [Azure Maps developer forums]
+* [Microsoft learning center shows]
+* [Azure Maps Blog]
+* [Azure Maps Feedback (UserVoice)]
## Migration support
-Developers can seek migration support through the [forums](/answers/topics/azure-maps.html) or through one of the many [Azure support options](https://azure.microsoft.com/support/options/).
+Developers can seek migration support through the [Azure Maps Q&A] or through one of the many [Azure support options].
## New terminology
Learn the details of how to migrate your Bing Maps application with these articl
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[free account]: https://azure.microsoft.com/free/
+[free Azure account]: https://azure.microsoft.com/free/
+[Azure portal]: https://portal.azure.com/
[manage authentication in Azure Maps]: how-to-manage-authentication.md
+[Azure Maps is also available in Power BI]: power-bi-visual-get-started.md
+[Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31
+[Azure Maps pricing page]: https://azure.microsoft.com/pricing/details/azure-maps/
+[Azure pricing calculator]: https://azure.microsoft.com/pricing/calculator/?service=azure-maps
+[Azure Maps term of use]: https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA
+[Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md
+[azure.com]: https://azure.com
+[Azure Active Directory authentication]: azure-maps-authentication.md#azure-ad-authentication
+[Azure Maps Q&A]: /answers/topics/azure-maps.html
+[Azure support options]: https://azure.microsoft.com/support/options/
+[Azure Maps product page]: https://azure.com/maps
+[Azure Maps product documentation]: https://aka.ms/AzureMapsDocs
+[Azure Maps code samples]: https://aka.ms/AzureMapsSamples
+[Azure Maps developer forums]: https://aka.ms/AzureMapsForums
+[Microsoft learning center shows]: https://aka.ms/AzureMapsVideos
+[Azure Maps Blog]: https://aka.ms/AzureMapsBlog
+[Azure Maps Feedback (UserVoice)]: https://aka.ms/AzureMapsFeedback
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
The following is a list of open-source projects that extend the capabilities of
| [Azure Maps Docs](https://github.com/MicrosoftDocs/azure-docs/tree/master/articles/azure-maps) | Source for all Azure Location Based Services documentation. | | [Azure Maps Creator Tools](https://github.com/Azure-Samples/AzureMapsCreator) | Python tools for Azure Maps Creator Tools. |
-A longer list of open-source projects for Azure Maps that includes community created projects is available [here](https://github.com/microsoft/Maps/blob/master/AzureMaps.md)
+For a more complete list of open-source projects for Azure Maps that includes community-created projects, see [Azure Maps Open Source Projects](https://github.com/microsoft/Maps/blob/master/AzureMaps.md) on GitHub.
## Supportability of open-source projects
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
You might want to target older browsers that don't support WebGL or that have on
(<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>. </iframe>
-Additional code samples using Azure Maps in Leaflet can be found [here](https://samples.azuremaps.com/?search=leaflet).
+For code samples using Azure Maps in Leaflet, see [Azure Maps Samples](https://samples.azuremaps.com/?search=leaflet).
-[Here](open-source-projects.md#third-part-map-control-plugins) are some popular open-source map controls that the Azure Maps team has created plugin's for.
+For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects](open-source-projects.md#third-part-map-control-plugins).
## Next steps
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
In this tutorial, you'll learn how to:
## Sample code
-In this tutorial, you'll create a store locator for a fictional company named *Contoso Coffee*. Also, this tutorial includes some tips to help you learn about extending the store locator with other optional functionality.
+This tutorial demonstrates how to create a store locator for a fictional company named *Contoso Coffee*, along with tips to extend the store locator with additional functionality.
To see a live sample of what you're creating in this tutorial, see [Simple Store Locator] on the **Azure Maps Code Samples** site.
This section describes how to create a dataset of the stores that you want to di
:::image type="content" source="./media/tutorial-create-store-locator/store-locator-data-spreadsheet.png" alt-text="Screenshot of the store locator data in an Excel workbook.":::
-The excel file containing the full dataset for the Contoso Coffee locator sample application can be downloaded from the [data] folder of the _Azure Maps code samples_ repository in GitHub.
+Download the Excel file containing the full dataset for the Contoso Coffee locator sample application from the [data] folder of the _Azure Maps code samples_ repository in GitHub.
From the above screenshot of the data, we can make the following observations:
From the above screenshot of the data, we can make the following observations:
## Load Contoso Coffee shop locator dataset
- The Contoso Coffee shop locator dataset is small, so we'll convert the Excel worksheet into a tab-delimited text file. This file can then be downloaded by the browser when the application loads.
+ The Contoso Coffee shop locator dataset is small, so it can be converted into a tab-delimited text file that the browser downloads when the application loads.
> [!TIP] > If your dataset is too large for client download, or is updated frequently, you might consider storing your dataset in a database. After your data is loaded into a database, you can then set up a web service that accepts queries for the data, then sends the results to the user's browser.
To create the HTML:
</main> ```
-After you finish, *https://docsupdatetracker.net/index.html* should look like _[Simple Store Locator.html]_ in the tutorial sample code.
+Once completed, *https://docsupdatetracker.net/index.html* should look like [Simple Store Locator.html] in the tutorial sample code.
## Define the CSS styles
The next step is to define the CSS styles. CSS styles define how the application
} ```
-Run the application. You'll see the header, search box, and search button. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. We need to add the JavaScript logic described in the next section. This logic accesses all the functionality of the store locator.
+If you run the application at this point, the header, search box, and search button appear. However, the map isn't visible because it hasn't been loaded yet. If you try to do a search, nothing happens. The next section describes adding the JavaScript logic needed to access all the functionality of the store locator.
## Add JavaScript code
The JavaScript code in the Contoso Coffee shop locator app enables the following
2. When the user selects the search button, or types a location in the search box then presses enter, a fuzzy search against the user's query begins. The code passes in an array of country/region ISO 2 values to the `countrySet` option to limit the search results to those countries/regions. Limiting the countries/regions to search helps increase the accuracy of the results that are returned.
-3. Once the search completes, the first location result is used as the center focus of the map. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. After retrieving the location, the code centers the map over the user's location.
+3. Once the search completes, the first location result is used as the center focus of the map. When the user selects the My Location button, the code retrieves the user's location using the *HTML5 Geolocation API* that's built into the browser. Once the location is retrieved, the code centers the map over the user's location.
To add the JavaScript:
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
import aiohttp
## Import weather data
-For the sake of this tutorial, we'll use weather data readings from sensors installed at four different wind turbines. The sample data consists of 30 days of weather readings. These readings are gathered from weather data centers near each turbine location. The demo data contains data readings for temperature, wind speed and, direction. You can download the demo data from [here](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data). The script below imports demo data to the Azure Notebook.
+For the sake of this tutorial, we'll use weather data readings from sensors installed at four different wind turbines. The sample data consists of 30 days of weather readings. These readings are gathered from weather data centers near each turbine location. The demo data contains data readings for temperature, wind speed, and direction. You can download the demo data contained in [weather_dataset_demo.csv](https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data) from GitHub. The following script imports the demo data to the Azure Notebook.
```python df = pd.read_csv("./data/weather_dataset_demo.csv")
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
For information about pricing for supported countries/regions, see [Azure Monito
Webhook action groups use the following rules: -- A webhook call is attempted at most three times.-- The first call waits 10 seconds for a response.-- Between the first and second call, it waits 20 seconds for a response.-- Between the second and third call, it waits 40 seconds for a response.-- The call is retried if any of the following conditions are met:
+The following retry logic assumes that the failure is retriable. The status codes 408, 429, 503, and 504, and the exceptions `HttpRequestException`, `WebException`, and `TaskCancellationException`, are considered retriable.
- - A response isn't received within the timeout period.
- - One of the following HTTP status codes is returned: 408, 429, 503, 504, or `TaskCancellationException`.
- - If any one of the preceding errors is encountered, wait an additional 5 seconds for the response.
+When a webhook is invoked, if the first call fails, it's retried up to five more times at the following delay intervals:
-- If three attempts to call the webhook fail, no action group calls the endpoint for 15 minutes.
+- The delay between the 1st and 2nd attempts is 5 seconds
+- The delay between the 2nd and 3rd attempts is 20 seconds
+- The delay between the 3rd and 4th attempts is 5 seconds
+- The delay between the 4th and 5th attempts is 40 seconds
+- The delay between the 5th and 6th attempts is 5 seconds
+
+- If all retry attempts fail, no action group calls the endpoint for 15 minutes.
For source IP address ranges, see [Action group IP addresses](../app/ip-addresses.md).
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
For more advanced use cases, you can modify telemetry by adding spans, updating
### Enable distributed tracing for Java function apps
-On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**.
+1. **Option 1**: On the function app **Overview** pane, go to **Application Insights**. Under **Collection Level**, select **Recommended**.
-> [!div class="mx-imgBorder"]
-> ![Screenshot that shows the New application setting option.](./media//functions/create-new-setting.png)
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows how to enable the AppInsights Java Agent.](./media//functions/collection-level.jpg)
-Add application settings with the following values and select **Save**.
+2. **Option 2**: On the function app **Overview** pane, go to **Configuration**. Under **Application settings**, select **New application setting**.
-```
-APPLICATIONINSIGHTS_ENABLE_AGENT: true
-```
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot that shows the New application setting option.](./media//functions/create-new-setting.png)
+
+ Add an application setting with the following values and select **Save**.
+
+ ```
+ APPLICATIONINSIGHTS_ENABLE_AGENT: true
+ ```
### Troubleshooting
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
OpenTelemetry offerings are available for .NET, Node.js, Python and Java applica
- <a name="PREVIEW"> :warning: 2</a>: OpenTelemetry is available as a public preview. [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) > [!NOTE]
-> For a feature-by-feature release status, see the [FAQ](../faq.yml#what-is-the-current-release-state-of-features-within-each-opentelemetry-offering-).
+> For a feature-by-feature release status, see the [FAQ](../faq.yml#what-s-the-current-release-state-of-features-within-each-opentelemetry-offering-).
## Get started
Depending on your language and signal type, there are different ways to collect
The following table represents the currently supported custom telemetry types:
-| | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
+| Custom Telemetry Types | Custom Events | Custom Metrics | Dependencies | Exceptions | Page Views | Requests | Traces |
|-||-|--|||-|--| | **.NET** | | | | | | | | | &nbsp;&nbsp;&nbsp;OpenTelemetry API | | | Yes | Yes | | Yes | |
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description | |:|:|
+| Change to Workspace-based Application Insights | Ensure that your Application Insights resources are Workspace-based so that they can leverage new cost-saving tools such as Basic Logs, Commitment Tiers, Retention by data type, and Data Archive. |
| Use sampling to tune the amount of data collected. | [Sampling](app/sampling.md) is the primary tool you can use to tune the amount of data collected by Application Insights. Use sampling to reduce the amount of telemetry that's sent from your applications with minimal distortion of metrics. | | Limit the number of Ajax calls. | [Limit the number of Ajax calls](app/javascript.md#configuration) that can be reported in every page view or disable Ajax reporting. If you disable Ajax calls, you'll be disabling [JavaScript correlation](app/javascript.md#enable-distributed-tracing) too. | | Disable unneeded modules. | [Edit ApplicationInsights.config](app/configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data aren't required. |
Since Azure Monitor charges for the collection of data, your goal should be to c
| Limit the use of custom metrics. | The Application Insights option to [Enable alerting on custom metric dimensions](app/pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can increase costs. Using this option can result in the creation of more pre-aggregation metrics. | | Ensure use of updated SDKs. | Earlier versions of the ASP.NET Core SDK and Worker Service SDK [collect many counters by default](app/eventcounters.md#default-counters-collected), which were collected as custom metrics. Use later versions to specify [only required counters](app/eventcounters.md#customizing-counters-to-be-collected). |
+#### All log data collection
+
+| Recommendation | Description |
+|:|:|
+| Remove unnecessary data during data ingestion | After following all of the previous recommendations, consider using Azure Monitor data collection transformations to reduce the size of your data during ingestion. |
## Monitor workspace and analyze usage
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
Previously updated : 01/19/2023 Last updated : 03/28/2023 # Manage an Azure Monitor workspace
resource workspace 'microsoft.monitor/accounts@2021-06-03-preview' = {
} ```-
-To connect your Azure Monitor managed service for Prometheus to your Azure Monitor workspace, see [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
+When you create an Azure Monitor workspace, a new resource group is created. The resource group name has the following format: `MA_<azure monitor workspace resource name>_<location code>_managed`, where the tokenized elements are in lower case. The resource group contains a data collection endpoint, and a data collection rule with the same name as the workspace. The resource group and its resources are automatically deleted when you delete the workspace.
+
+To connect your Azure Monitor managed service for Prometheus to your Azure Monitor workspace, see [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
## Delete an Azure Monitor workspace
To set up an Azure monitor workspace as a data source for Grafana using a Resour
-If your Grafana Instance is self managed see [Use Azure Monitor managed service for Prometheus (preview) as data source for self-managed Grafana using managed system identity](./prometheus-self-managed-grafana-azure-active-directory.md)
+If your Grafana instance is self-managed, see [Use Azure Monitor managed service for Prometheus (preview) as data source for self-managed Grafana using managed system identity](./prometheus-self-managed-grafana-azure-active-directory.md)
azure-monitor Data Collection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md
The following example is a DCR for Azure Monitor Agent that sends data to the `S
"streams": [ "Microsoft-Syslog" ],
- "transformKql": "source | where message contains 'error'",
+ "transformKql": "source | where message has 'error'",
"destinations": [ "centralWorkspace" ]
The following example is a DCR for data from the Logs Ingestion API that sends d
"destinations": [ "clv2ws1" ],
- "transformKql": "source | where (AdditionalContext contains 'malicious traffic!' | project TimeGenerated = Time, Computer, Subject = AdditionalContext",
+ "transformKql": "source | where (AdditionalContext has 'malicious traffic!' | project TimeGenerated = Time, Computer, Subject = AdditionalContext",
"outputStream": "Microsoft-SecurityEvent" } ]
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
This section discusses collecting and monitoring data.
As soon as you create an Azure resource, Azure Monitor is enabled and starts collecting metrics and activity logs. With some configuration, you can gather more monitoring data and enable other features. The Azure Monitor data platform is made up of Metrics and Logs. Each feature collects different kinds of data and enables different Azure Monitor features. - [Azure Monitor Metrics](../essentials/data-platform-metrics.md) stores numeric data from monitored resources into a time-series database. The metric database is automatically created for each Azure subscription. Use [Metrics Explorer](../essentials/tutorial-metrics.md) to analyze data from Azure Monitor Metrics.
+- [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) collects and stores numeric data from Azure Kubernetes Service, in a Prometheus compatible time-series database. Onboard to managed Prometheus using remote write, or the Azure Kubernetes Service add-on. Analyze the data using a Prometheus explorer workbook in your [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md), or [Grafana](../visualize/grafana-plugin.md).
- [Azure Monitor Logs](../logs/data-platform-logs.md) collects logs and performance data where they can be retrieved and analyzed in different ways by using log queries. You must create a Log Analytics workspace to collect log data. Use [Log Analytics](../logs/log-analytics-tutorial.md) to analyze data from Azure Monitor Logs. ### <a id="monitoring-data-from-azure-resources"></a> Monitor data from Azure resources
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
To use the HTTP Data Collector API, you create a POST request that includes the
|: |: | | Authorization |The authorization signature. Later in the article, you can read about how to create an HMAC-SHA256 header. | | Log-Type |Specify the record type of the data that's being submitted. It can contain only letters, numbers, and the underscore (_) character, and it can't exceed 100 characters. |
-| x-ms-date |The date that the request was processed, in RFC 7234 format. |
+| x-ms-date |The date that the request was processed, in [RFC 1123](/dotnet/api/system.globalization.datetimeformatinfo.rfc1123pattern) format. |
| x-ms-AzureResourceId | The resource ID of the Azure resource that the data should be associated with. It populates the [_ResourceId](./log-standard-columns.md#_resourceid) property and allows the data to be included in [resource-context](manage-access.md#access-mode) queries. If this field isn't specified, the data won't be included in resource-context queries. | | time-generated-field | The name of a field in the data that contains the timestamp of the data item. If you specify a field, its contents are used for **TimeGenerated**. If you don't specify this field, the default for **TimeGenerated** is the time that the message is ingested. The contents of the message field should follow the ISO 8601 format YYYY-MM-DDThh:mm:ssZ. The Time Generated value cannot be older than 2 days before received time or more than a day in the future. In such case, the time that the message is ingested will be used.| | | |
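To make the signature construction concrete, here's a hedged Node.js sketch of building these headers for a POST to the Data Collector API; `WORKSPACE_ID`, `SHARED_KEY`, and the record body are placeholders:

```javascript
const crypto = require('crypto');

const workspaceId = 'WORKSPACE_ID'; // Placeholder: Log Analytics workspace ID.
const sharedKey = 'SHARED_KEY';     // Placeholder: workspace primary/secondary key (base64).
const body = JSON.stringify([{ Computer: 'web01', Status: 'error' }]);
const date = new Date().toUTCString(); // RFC 1123 format for the x-ms-date header.

// String to sign: verb, content length, content type, x-ms-date header, resource path.
const stringToSign =
    `POST\n${Buffer.byteLength(body, 'utf8')}\napplication/json\nx-ms-date:${date}\n/api/logs`;

// HMAC-SHA256 over the string to sign, keyed with the base64-decoded shared key.
const signature = crypto
    .createHmac('sha256', Buffer.from(sharedKey, 'base64'))
    .update(stringToSign, 'utf8')
    .digest('base64');

const headers = {
    'Authorization': `SharedKey ${workspaceId}:${signature}`,
    'Log-Type': 'MyCustomLog', // Records land in the MyCustomLog_CL table.
    'x-ms-date': date,
    'Content-Type': 'application/json'
};

// POST the body with these headers to:
// https://<workspaceId>.ods.opinsights.azure.com/api/logs?api-version=2016-04-01
```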
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
The [number of supported event hubs in Basic and Standard namespace tiers is 10]
> - The Basic Event Hubs namespace tier is limited. It supports [lower event size](../../event-hubs/event-hubs-quotas.md#basic-vs-standard-vs-premium-vs-dedicated-tiers) and no [Auto-inflate](../../event-hubs/event-hubs-auto-inflate.md) option to automatically scale up and increase the number of throughput units. Because data volume to your workspace increases over time and as a consequence event hub scaling is required, use Standard, Premium, or Dedicated Event Hubs tiers with the **Auto-inflate** feature enabled. For more information, see [Automatically scale up Azure Event Hubs throughput units](../../event-hubs/event-hubs-auto-inflate.md). > - Data export can't reach Event Hubs resources when virtual networks are enabled. You have to select the **Allow Azure services on the trusted services list to access this storage account** checkbox to bypass this firewall setting in an event hub to grant access to your event hubs.
+## Query exported data
+
+Exporting data from workspaces to Storage Accounts helps satisfy various scenarios mentioned in the [overview](#overview), and the exported data can be consumed by tools that read blobs from Storage Accounts. The following methods let you query the data using the Log Analytics query language, which is the same language used by Azure Data Explorer.
+1. Use Azure Data Explorer to [query data in Azure Data Lake](/azure/data-explorer/data-lake-query-data.md).
+2. Use Azure Data Explorer to [ingest data from a Storage Account](/azure/data-explorer/ingest-from-container.md).
+3. Use a Log Analytics workspace to query [data ingested using the Logs Ingestion API](./logs-ingestion-api-overview.md). Ingested data goes to a custom log table, not to the original table.
+
+ ## Enable data export The following steps must be performed to enable Log Analytics data export. For more information on each, see the following sections:
azure-monitor Tutorial Logs Ingestion Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-portal.md
Before you can send data to the workspace, you need to create the custom table w
:::image type="content" source="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" lightbox="media/tutorial-logs-ingestion-portal/new-data-collection-rule.png" alt-text="Screenshot that shows the new DCR.":::
-1. Select the DCE that you created, and then select **Next**.
+1. Select the DCR that you created, and then select **Next**.
:::image type="content" source="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" lightbox="media/tutorial-logs-ingestion-portal/custom-log-table-name.png" alt-text="Screenshot that shows the custom log table name.":::
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Add, remove, and arrange favorites in Azure portal
-description: Learn how to add or remove items from the favorites list and rearrange the order of items
-keywords: favorites,portal
+ Title: Manage favorites in Azure portal
+description: Learn how to add or remove services from the favorites list.
Last updated 02/17/2022
-# Add, remove, and rearrange favorites
+# Manage favorites
-Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you'll likely want to customize it. You're the only one who sees the changes you make to **Favorites**.
+Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you may want to customize it. You're the only one who sees the changes you make to **Favorites**.
-## Add a favorite
+## Add a favorite service
Items that are listed under **Favorites** are selected from **All services**. Hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears on the **Favorites** list. Select the star icon to add a service to the **Favorites** list.
-In this example, we'll add Cost Management + Billing to the **Favorites** list.
+In this example, we'll add **Cost Management + Billing** to the **Favorites** list.
1. Select **All services** from the Azure portal menu.
You can now remove an item directly from the **Favorites** list.
2. On the information card, select the star so that it changes from filled to unfilled. The service is removed from the **Favorites** list.
-## Rearrange favorites
-
-You can change the order in which your favorite services are listed. Just select an item, then drag and drop it to another location under **Favorites**.
- ## Next steps - To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
azure-portal Azure Portal Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-keyboard-shortcuts.md
Title: Azure portal keyboard shortcuts description: The Azure portal supports global keyboard shortcuts to help you perform actions, navigate, and go to locations in the Azure portal. Previously updated : 11/04/2021 Last updated : 03/23/2023
The letters that appear below represent letter keys on your keyboard. For exampl
## Next steps -- [Turn on high contrast or change theme](set-preferences.md#choose-a-theme-or-enable-high-contrast)-- [Learn about supported browsers and devices](azure-portal-supported-browsers-devices.md)
+- [Turn on high contrast or change theme](set-preferences.md#choose-a-theme-or-enable-high-contrast) in the Azure portal.
+- Learn about [supported browsers and devices](azure-portal-supported-browsers-devices.md).
azure-portal Azure Portal Markdown Tile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-markdown-tile.md
Title: Use a custom markdown tile on Azure dashboards description: Learn how to add a markdown tile to an Azure dashboard to display static content Previously updated : 01/11/2022 Last updated : 03/27/2023
You can add a markdown tile to your Azure dashboards to display custom, static c
1. In the dashboard view, select the dashboard where the custom markdown tile should appear, then select **Edit**.
- ![Screenshot showing dashboard edit view](./media/azure-portal-markdown-tile/azure-portal-dashboard-edit.png)
+ :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-dashboard-edit.png" alt-text="Screenshot showing the dashboard edit option in the Azure portal.":::
1. In the **Tile Gallery**, locate the tile called **Markdown** and select **Add**. The tile is added to the dashboard and the **Edit Markdown** pane opens. 1. Enter values for **Title** and **Subtitle**, which display on the tile after you move to another field.
- ![Screenshot showing results of entering title and subtitle](./media/azure-portal-markdown-tile/azure-portal-dashboard-enter-title.png)
+ :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-dashboard-enter-title.png" alt-text="Screenshot showing how to add a title and subtitle to a markdown tile.":::
1. Select one of the options for including markdown content: **Inline editing** or **Insert content using URL**.
You can add a markdown tile to your Azure dashboards to display custom, static c
![Screenshot showing entering URL](./media/azure-portal-markdown-tile/azure-portal-dashboard-markdown-url.png) > [!NOTE]
- > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (_https://portal.azure.com/_) can access the markdown file in the blob.
+ > For added security, create a markdown file and store it in an [Azure storage account blob where encryption is enabled](../storage/common/storage-service-encryption.md). For additional control, configure the encryption with [customer-managed keys stored in Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md?tabs=portal). You can then point to the file using the **Insert content using URL** option. Only users with permissions to the file can see the markdown content on the dashboard. You might need to set a [cross-origin resource sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) rule on the storage account so that the Azure portal (`https://portal.azure.com/`) can access the markdown file in the blob.
1. Select **Done** to dismiss the **Edit Markdown** pane. Your content appears on the Markdown tile, which you can resize by dragging the handle in the lower right-hand corner.
- ![Screenshot showing custom markdown tile](./media/azure-portal-markdown-tile/azure-portal-custom-markdown-tile.png)
+ :::image type="content" source="media/azure-portal-markdown-tile/azure-portal-custom-markdown-tile.png" alt-text="Screenshot showing the custom markdown tile on a dashboard.":::
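As noted earlier, the Azure portal may need a CORS rule on the storage account before it can read a markdown file from a blob. The following is a minimal Azure CLI sketch; the storage account name is a placeholder, and your environment may need different allowed headers or a different max age:

```
# Sketch only: allow the Azure portal origin to issue GET requests against the blob service.
# <storage-account> is a placeholder; authentication arguments are omitted for brevity.
az storage cors add --services b --methods GET \
  --origins "https://portal.azure.com" --allowed-headers "*" --max-age 3600 \
  --account-name <storage-account>
```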
## Markdown content capabilities and limitations
azure-portal Azure Portal Quickstart Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-quickstart-center.md
Title: Get started with the Azure Quickstart Center description: Use the Azure Quickstart Center guided experience to get started with Azure. Learn to set up, migrate, and innovate. Previously updated : 10/01/2021 Last updated : 03/23/2023
Azure Quickstart Center is a guided experience in the Azure portal available to
1. In the search bar, type "Quickstart Center", and then select it.
- Or, select **All services** from the Azure portal menu, then select **General** > **Quickstart Center**.
+ Or, select **All services** from the Azure portal menu, then select **General** > **Get Started** > **Quickstart Center**.
For an in-depth look at what Azure Quickstart Center can do for you, check out this video: > [!VIDEO https://www.youtube.com/embed/0bSA7RXrbAg]
You can also select **Browse our full Azure catalog** to see all Azure learning
## Next steps * Learn more about Azure setup and migration in the [Microsoft Cloud Adoption Framework for Azure](/azure/architecture/cloud-adoption/).
-* Unlock your cloud skills with more [Learn modules](/training/azure/).
+* Unlock your cloud skills with free [Learn modules](/training/azure/).
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
The URL endpoints to allow for the Azure portal are specific to the Azure cloud
### [Public Cloud](#tab/public-cloud)
+> [!TIP]
+> The service tags required to access the Azure portal (including authentication and resource listing) are **AzureActiveDirectory**, **AzureResourceManager**, and **AzureFrontDoor.Frontend**. Access to other services may require additional permissions, as described below.
+> However, allowing these service tags may permit communication beyond what's needed to access the portal. If granular control is required, use FQDN-based filtering instead, such as with Azure Firewall.
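To illustrate the service-tag approach, the following sketch creates an outbound network security group rule for one of these tags with Azure CLI. The resource group and NSG names are placeholders, and each service tag needs its own rule:

```
# Sketch only: allow outbound HTTPS to the AzureActiveDirectory service tag.
# myResourceGroup and myNsg are placeholder names; repeat for the other tags listed above.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myNsg \
  --name AllowAadOutbound \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443 \
  --destination-address-prefixes AzureActiveDirectory
```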
+ #### Azure portal authentication ```
azure-portal Manage Filter Resource Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/manage-filter-resource-views.md
Title: View and filter Azure resource information description: Filter information and use different views to better understand your Azure resources. Previously updated : 03/16/2021 Last updated : 03/27/2023 # View and filter Azure resource information The Azure portal enables you to browse detailed information about resources across your Azure subscriptions. This article shows you how to filter information and use different views to better understand your resources.
-The article focuses on the **All resources** screen shown in the following screenshot. Screens for individual resource types, such as virtual machines, have different options, such as starting and stopping a VM.
-
+This article focuses on filtering information in the **All resources** screen. Screens for individual resource types, such as virtual machines, may have different options.
## Filter resources
-Start exploring **All resources** by using filters to focus on a subset of your resources. The following screenshot shows filtering on resource groups, selecting two of the six resource groups in a subscription.
+Start exploring **All resources** by using filters to focus on a subset of your resources. The following screenshot shows filtering on resource groups, selecting two of the four resource groups in a subscription.
:::image type="content" source="media/manage-filter-resource-views/filter-resource-group.png" alt-text="Filter view based on resource groups":::
-You can combine filters, including those based on text searches, as shown in the following screenshot. In this case the results are scoped to resources that contain "SimpleWinVM" in one of the two resource groups already selected.
-
+You can combine filters, including those based on text searches. For example, after selecting specific resource groups, you can enter text in the filter box, or select a different filter option.
-To change which columns are included in a view, select **Manage view** then **Edit columns**.
+To change which columns are included in a view, select **Manage view**, then select **Edit columns**.
:::image type="content" source="media/manage-filter-resource-views/edit-columns.png" alt-text="Edit columns shown in view":::
To change which columns are included in a view, select **Manage view** then **Ed
You can save views that include the filters and columns you've selected. To save and use a view:
-1. Select **Manage view** then **Save view**.
+1. Select **Manage view**, then select **Save view**.
-1. Enter a name for the view then select **OK**. The saved view now appears in the **Manage view** menu.
+1. Enter a name for the view, then select **OK**. The saved view now appears in the **Manage view** menu.
:::image type="content" source="media/manage-filter-resource-views/simple-view.png" alt-text="Saved view":::
-1. To use a view, switch between **Default** and one of your own views to see how that affects the list of resources displayed.
+Try switching between **Default** and one of your own views to see how that affects the list of resources displayed.
-To delete a view:
+You can also select **Choose favorite view** to use one of your views as the default view for **All resources**.
-1. Select **Manage view** then **Browse all views**.
+To delete a view you've created:
-1. In the **Saved views** pane, select the view then select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png).
+1. Select **Manage view**, then select **Browse all views for "All resources"**.
+
+1. In the **Saved views** pane, select the view, then select the **Delete** icon ![Delete view icon](media/manage-filter-resource-views/icon-delete.png). Select **OK** to confirm the deletion.
## Export information from a view
You can export the resource information from a view. To export information in CS
:::image type="content" source="media/manage-filter-resource-views/export-csv.png" alt-text="Screenshot of exporting to CSV format":::
-1. Save the file locally, then open in Excel or another application that supports the CSV format.
+1. Save the file locally, then open the file in Excel or another application that supports the CSV format.
As you move around the portal, you'll see other areas where you can export information, such as an individual resource group.
To save and use a summary view:
:::image type="content" source="media/manage-filter-resource-views/type-summary-bar-chart.png" alt-text="Type summary showing a bar chart":::
-1. Select **Manage view** then **Save view** to save this view like you did with the list view.
+1. Select **Manage view**, then select **Save view** to save this view, just like you did with the list view.
1. In the summary view, under **Type summary**, select a bar in the chart. Selecting the bar provides a list filtered down to one type of resource.
To run a Resource Graph query:
## Next steps
-[Azure portal overview](azure-portal-overview.md)
-
-[Create and share dashboards in the Azure portal](azure-portal-dashboards.md)
+- Read an [overview of the Azure portal](azure-portal-overview.md).
+- Learn how to [create and share dashboards in the Azure portal](azure-portal-dashboards.md).
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md
Title: Create an Azure portal dashboard with Azure CLI
description: "Quickstart: Learn how to create a dashboard in the Azure portal using the Azure CLI. A dashboard is a focused and organized view of your cloud resources." Previously updated : 01/13/2022 Last updated : 03/27/2023 # Quickstart: Create an Azure portal dashboard with Azure CLI
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article shows you how to use Azure CLI to create a dashboard. In this example, the dashboard shows the performance of a virtual machine (VM), as well as some static information and links.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article shows you how to use Azure CLI to create a dashboard. In this example, the dashboard shows the performance of a virtual machine (VM) that you create, as well as some static information and links.
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+In addition to the prerequisites below, you'll need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
az vm create --resource-group myResourceGroup --name myVM1 --image win2016datace
--admin-username azureuser --admin-password 1StrongPassword$ ```
-> [!Note]
+> [!NOTE]
> This is a new username and password (not the account you use to sign in to Azure). The password must be complex. For more information, see [username requirements](../virtual-machines/windows/faq.yml#what-are-the-username-requirements-when-creating-a-vm-) and [password requirements](../virtual-machines/windows/faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
azure-portal Quickstart Portal Dashboard Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-powershell.md
Title: Create an Azure portal dashboard with PowerShell
description: Learn how to create a dashboard in the Azure portal using Azure PowerShell. Previously updated : 01/13/2022 Last updated : 03/27/2023 # Quickstart: Create an Azure portal dashboard with PowerShell
-A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article focuses on the process of using the Az.Portal PowerShell module to create a dashboard. The dashboard shows the performance of a virtual machine (VM), as well as some static information
-and links.
+A dashboard in the Azure portal is a focused and organized view of your cloud resources. This article focuses on the process of using the Az.Portal PowerShell module to create a dashboard. The dashboard shows the performance of a virtual machine (VM) that you create, as well as some static information and links.
-## Requirements
+## Prerequisites
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
The following example shows the rules that are available for configuration.
}, "simplify-json-null": { "level": "warning"
- }
+ },
"use-parent-property": { "level": "warning" },
azure-video-indexer Audio Effects Detection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/audio-effects-detection-overview.md
+
+ Title: Introduction to Azure Video Indexer audio effects detection
+
+description: An introduction to the Azure Video Indexer audio effects detection component and how to use it responsibly.
++++ Last updated : 06/15/2022+++
+# Audio effects detection
+
+Audio effects detection is an Azure Video Indexer feature that detects insights on various acoustic events and classifies them into acoustic categories. Audio effects detection can detect and classify different categories such as laughter, crowd reactions, alarms, and sirens.
+
+When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes the category ID, type, name, and instances per category together with the specific timeframes and confidence score.
+
+## Prerequisites
+
+Review the [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## General principles
+
+This article discusses audio effects detection and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+* Does this feature perform well in my scenario? Before deploying audio effects detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+* Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+To see the instances on the website, do the following:
+
+1. When uploading the media file, go to Video + Audio Indexing, or go to Audio Only or Video + Audio and select Advanced.
+1. After the file is uploaded and indexed, go to Insights and scroll to audio effects.
+
+To display the JSON file, do the following:
+
+1. Select Download -> Insights (JSON).
+1. Copy the `audioEffects` element, under `insights`, and paste it into an online JSON viewer.
+
+ ```json
+ "audioEffects": [
+ {
+ "id": 1,
+ "type": "Silence",
+ "instances": [
+ {
+ "confidence": 0,
+ "adjustedStart": "0:01:46.243",
+ "adjustedEnd": "0:01:50.434",
+ "start": "0:01:46.243",
+ "end": "0:01:50.434"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "type": "Speech",
+ "instances": [
+ {
+ "confidence": 0,
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:01:43.06",
+ "start": "0:00:00",
+ "end": "0:01:43.06"
+ }
+ ]
+ }
+ ],
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
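If you prefer to script the download, the following sketch calls the Get Video Index API with curl and extracts the audio effects with jq. The account ID, video ID, and access token are placeholders you must supply, and `trial` applies only to trial accounts (use your Azure region otherwise):

```
# Sketch only: fetch the video index and keep only the audioEffects insight.
# Replace <accountId>, <videoId>, and <accessToken> with your own values.
curl -s "https://api.videoindexer.ai/trial/Accounts/<accountId>/Videos/<videoId>/Index?accessToken=<accessToken>" \
  | jq '.insights.audioEffects'
```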
+
+## Audio effects detection components
+
+During the audio effects detection procedure, audio in a media file is processed, as follows:
+
+|Component|Definition|
+|||
+|Source file | The user uploads the source file for indexing. |
+|Segmentation| The audio is analyzed, nonspeech audio is identified and then split into short overlapping intervals. |
+|Classification| An AI process analyzes each segment and classifies its contents into event categories such as crowd reaction or laughter. A probability list is then created for each event category according to department-specific rules. |
+|Confidence level| The estimated confidence level of each audio effect is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+
+## Example use cases
+
+- Companies with a large video archive can improve accessibility for hearing-impaired audiences by transcribing nonspeech effects to offer more context.
+- Improved efficiency when creating raw data for content creators. Important moments in promos and trailers such as laughter, crowd reactions, gunshots, or explosions can be identified, for example, in Media and Entertainment.
+- Detecting and classifying gunshots, explosions, and glass shattering in a smart-city system or in other public environments that include cameras and microphones to offer fast and accurate detection of violence incidents.
+
+## Considerations and limitations when choosing a use case
+
+- Avoid using short or low-quality audio; audio effects detection provides probabilistic and partial data on detected nonspeech audio events. For accuracy, audio effects detection requires at least 2 seconds of clear nonspeech audio. Voice commands and singing aren't supported.
+- Avoid using audio with loud background music, or music with repetitive and/or linearly scanned frequencies; audio effects detection is designed for nonspeech audio only and therefore can't classify events in loud music. Music with repetitive and/or linearly scanned frequencies may be incorrectly classified as an alarm or siren.
+- Carefully consider the methods of usage in law enforcement and similar institutions. To promote more accurate probabilistic data, carefully review the following:
+
+ - Audio effects can be detected in nonspeech segments only.
+ - The duration of a nonspeech section should be at least 2 seconds.
+ - Low quality audio might impact the detection results.
+ - Events in loud background music aren't classified.
+ - Music with repetitive and/or linearly scanned frequency might be incorrectly classified as an alarm or siren.
+ - Knocking on a door or slamming a door might be labeled as a gunshot or explosion.
+ - Prolonged shouting or sounds of physical human effort might be incorrectly classified.
+ - A group of people laughing might be classified as both laughter and crowd.
+ - Natural and nonsynthetic gunshot and explosion sounds are supported.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
+
+- Always respect an individual’s right to privacy, and only ingest audio for lawful and justifiable purposes.  
+- Don't purposely disclose inappropriate audio of young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.  
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed audio.  
+- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using audio from unknown sources.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing audio containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Face detection](face-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched faces](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Concepts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/concepts-overview.md
Last updated 12/01/2022
-# Azure Video Indexer terminology & concepts
+# Azure Video Indexer terminology & concepts
-This article gives a brief overview of Azure Video Indexer terminology and concepts.
+This article gives a brief overview of Azure Video Indexer terminology and concepts. Also, review the [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
## Artifact files
azure-video-indexer Face Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-detection.md
+
+ Title: Azure Video Indexer face detection overview
+
+description: This article gives an overview of Azure Video Indexer face detection.
++++ Last updated : 06/15/2022+++
+# Face detection
+
+> [!IMPORTANT]
+> Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
+
+Face detection is an Azure Video Indexer AI feature that automatically detects faces in a media file and aggregates instances of similar faces into the same group. The celebrity recognition module is then run to recognize celebrities. This module covers approximately one million faces and is based on commonly requested data sources. Faces that aren't recognized by Azure Video Indexer are still detected but are left unnamed. Customers can build their own custom [Person modules](/azure/azure-video-indexer/customize-person-model-overview) whereby Azure Video Indexer recognizes faces that aren't recognized by default.
+
+The resulting insights are generated in a categorized list in a JSON file that includes a thumbnail and either the name or the ID of each face. Clicking a face's thumbnail displays information like the name of the person (if they were recognized), the percentage of their appearances in the video, and their biography if they're a celebrity. It also enables scrolling between the instances in the video.
+
+## Prerequisites
+
+Review the [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## General principles
+
+This article discusses faces detection and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying faces detection into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## Key terms
+
+|Term|Definition|
+|||
+|Insight |The information and knowledge derived from the processing and analysis of video and audio files that generate different types of insights and can include detected objects, people, faces, animated characters, keyframes and translations or transcriptions. |
+|Face recognition |The analysis of images to identify the faces that appear in the images. This process is implemented via the Azure Cognitive Services Face API. |
+|Template |Enrolled images of people are converted to templates, which are then used for facial recognition. Machine-interpretable features are extracted from one or more images of an individual to create that individual's template. The enrollment or probe images aren't stored by Face API and the original images can't be reconstructed based on a template. Template quality is a key determinant on the accuracy of your results. |
+|Enrollment |The process of enrolling images of individuals for template creation so they can be recognized. When a person is enrolled to a verification system used for authentication, their template is also associated with a primary identifier that is used to determine which template to compare with the probe template. High-quality images and images representing natural variations in how a person looks (for instance wearing glasses, not wearing glasses) generate high-quality enrollment templates. |
+|Deep search |The ability to retrieve only relevant video and audio files from a video library by searching for specific terms within the extracted insights.|
+
+## View the insight
+
+To see the instances on the website, do the following:
+
+1. When uploading the media file, go to Video + Audio Indexing, or go to Audio Only or Video + Audio and select Advanced.
+1. After the file is uploaded and indexed, go to Insights and scroll to People.
+
+To see face detection insight in the JSON file, do the following:
+
+1. Select Download -> Insights (JSON).
+1. Copy the `faces` element, under `insights`, and paste it into a JSON viewer.
+
+ ```json
+ "faces": [
+ {
+ "id": 1785,
+ "name": "Emily Tran",
+ "confidence": 0.7855,
+ "description": null,
+ "thumbnailId": "fd2720f7-b029-4e01-af44-3baf4720c531",
+ "knownPersonId": "92b25b4c-944f-4063-8ad4-f73492e42e6f",
+ "title": null,
+ "imageUrl": null,
+ "thumbnails": [
+ {
+ "id": "4d182b8c-2adf-48a2-a352-785e9fcd1fcf",
+ "fileName": "FaceInstanceThumbnail_4d182b8c-2adf-48a2-a352-785e9fcd1fcf.jpg",
+ "instances": [
+ {
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:00.033",
+ "start": "0:00:00",
+ "end": "0:00:00.033"
+ }
+ ]
+ },
+ {
+ "id": "feff177b-dabf-4f03-acaf-3e5052c8be57",
+ "fileName": "FaceInstanceThumbnail_feff177b-dabf-4f03-acaf-3e5052c8be57.jpg",
+ "instances": [
+ {
+ "adjustedStart": "0:00:05",
+ "adjustedEnd": "0:00:05.033",
+ "start": "0:00:05",
+ "end": "0:00:05.033"
+ }
+ ]
+ }
+ ]
+ }
+ ]
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+
+## Face detection components
+
+During the face detection procedure, images in a media file are processed, as follows:
+
+|Component|Definition|
+|||
+|Source file | The user uploads the source file for indexing. |
+|Detection and aggregation |The face detector identifies the faces in each frame. The faces are then aggregated and grouped. |
+|Recognition |The celebrities module runs over the aggregated groups to recognize celebrities. If the customer has created their own **persons** module, it's also run to recognize people. When people aren't recognized, they're labeled Unknown1, Unknown2 and so on. |
+|Confidence value |Where applicable for well-known faces or faces identified in the customizable list, the estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+
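To work with these confidence values programmatically, you can filter the downloaded insights. The following jq sketch assumes the index JSON was saved locally as insights.json; the file name and the 0.75 threshold are illustrative:

```
# Sketch only: list the name and confidence of each face detected with confidence of at least 0.75.
jq '.insights.faces[] | select(.confidence >= 0.75) | {name, confidence}' insights.json
```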
+## Example use cases
+
+* Summarizing where an actor appears in a movie or reusing footage by deep searching for specific faces in organizational archives for insight on a specific celebrity.
+* Improved efficiency when creating feature stories at a news or sports agency, for example deep searching for a celebrity or football player in organizational archives.
+* Using faces appearing in the video to create promos, trailers or highlights. Azure Video Indexer can assist by adding keyframes, scene markers, timestamps and labeling so that content editors invest less time reviewing numerous files.  
+
+## Considerations when choosing a use case
+
+* Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the video; low-quality video might impact the detected insights.
+* Carefully consider the implications when using this feature for law enforcement. People might not be detected if they're small, sitting, crouching, or obstructed by objects or other people. To ensure fair and high-quality decisions, combine face detection-based automation with human oversight.
+* Don't use face detection for decisions that may have serious adverse impacts. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+
+When used responsibly and carefully, face detection is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+* Always respect an individual’s right to privacy, and only ingest videos for lawful and justifiable purposes.  
+* Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.  
+* Commit to respecting and promoting human rights in the design and deployment of your analyzed media.  
+* When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+* Always seek legal advice when using content from unknown sources.
+* Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+* Provide a feedback channel that allows users and individuals to report issues with the service.
+* Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+* Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+* Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/insights-overview.md
When a video is indexed, Azure Video Indexer analyzes the video and audio content by running 30+ AI models, generating rich insights. Insights contain an aggregated view of the data: transcripts, optical character recognition elements (OCRs), face, topics, emotions, etc. Once the video is indexed and analyzed, Azure Video Indexer produces a JSON content that contains details of the video insights. For example, each insight type includes instances of time ranges that show when the insight appears in the video.
+Read details about the following insights here:
+
+- [Audio effects detection](audio-effects-detection-overview.md)
+- [Faces detection](face-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation, language](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched faces](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
+ For information about features and other insights, see: - [Azure Video Indexer overview](video-indexer-overview.md)
azure-video-indexer Keywords https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/keywords.md
+
+ Title: Azure Video Indexer keywords extraction overview
+
+description: An introduction to the Azure Video Indexer keywords extraction component and how to use it responsibly.
++++ Last updated : 06/15/2022+++
+# Keywords extraction
+
+Keywords extraction is an Azure Video Indexer AI feature that automatically detects insights on the different keywords discussed in media files. Keywords extraction can extract insights in both single language and multi-language media files. The total number of extracted keywords and their categories are listed in the Insights tab, where clicking a Keyword and then clicking Play Previous or Play Next jumps to the keyword in the media file.
+
+## Prerequisites
+
+Review the [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## General principles
+
+This article discusses Keywords and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying Keywords Extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+When working on the website, the insights are displayed in the **Insights** tab. They can also be generated in a categorized list in a JSON file that includes each keyword's ID and text, together with each keyword's specific start and end time and confidence score.
+
+To display the instances in a JSON file, do the following:
+
+1. Click Download and then Insights (JSON).
+1. Copy the text and paste it into an online JSON viewer.
+
+ ```json
+ "keywords": [
+ {
+ "id": 1,
+ "text": "office insider",
+ "confidence": 1,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:05.75",
+ "start": "0:00:00",
+ "end": "0:00:05.75"
+ },
+ {
+ "adjustedStart": "0:01:21.82",
+ "adjustedEnd": "0:01:24.7",
+ "start": "0:01:21.82",
+ "end": "0:01:24.7"
+ },
+ {
+ "adjustedStart": "0:01:31.32",
+ "adjustedEnd": "0:01:32.76",
+ "start": "0:01:31.32",
+ "end": "0:01:32.76"
+ },
+ {
+ "adjustedStart": "0:01:35.8",
+ "adjustedEnd": "0:01:37.84",
+ "start": "0:01:35.8",
+ "end": "0:01:37.84"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "text": "insider tip",
+ "confidence": 0.9975,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:01:14.91",
+ "adjustedEnd": "0:01:19.51",
+ "start": "0:01:14.91",
+ "end": "0:01:19.51"
+ }
+ ]
+ },
+
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
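For scripted scenarios, a small jq sketch can list the extracted keyword texts with their confidence scores. It assumes the downloaded index was saved locally as insights.json, a hypothetical file name:

```
# Sketch only: print each extracted keyword with its confidence score.
jq -r '.insights.keywords[] | "\(.text): \(.confidence)"' insights.json
```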
+
+> [!NOTE]
+> Keywords extraction is language independent.
+
+## Keywords components
+
+During the Keywords procedure, audio and images in a media file are processed, as follows:
+
+|Component|Definition|
+|||
+|Source file | The user uploads the source file for indexing. |
+|Transcription API |The audio file is sent to Cognitive Services and the translated transcribed output is returned. If a language has been specified, it's processed.|
+|OCR of video |Images in a media file are processed using the Computer Vision Read API to extract text, its location, and other insights. |
+|Keywords extraction |An extraction algorithm processes the transcribed audio. The results are then combined with the insights detected in the video during the OCR process. The keywords, and where they appear in the media, are then detected and identified. |
+|Confidence level| The estimated confidence level of each keyword is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+
+## Example use cases
+
+- Personalization of keywords to match customer interests, for example websites about England posting promotions about English movies or festivals.
+- Deep-searching archives for insights on specific keywords to create feature stories about companies, personas or technologies, for example by a news agency.
+
+## Considerations and limitations when choosing a use case
+
+Below are some considerations to keep in mind when using keywords extraction:
+
+- When uploading a file always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 or more frames might delay the AI result.
+- When uploading a file always use high quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported. 
+
+When used responsibly and carefully, keywords extraction is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Always respect an individual’s right to privacy, and only ingest media for lawful and justifiable purposes.  
+- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual’s personal freedom.  
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.  
+- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using media from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded media is secured and has adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [OCR](ocr.md)
+- [Transcription, Translation & Language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Labels Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/labels-identification.md
+
+ Title: Azure Video Indexer labels identification overview
+
+description: This article gives an overview of Azure Video Indexer labels identification.
++++ Last updated : 06/15/2022+++
+# Labels identification
+
+Labels identification is an Azure Video Indexer AI feature that identifies visual objects, like sunglasses, or actions, like swimming, that appear in the video footage of a media file. There are many labels identification categories, and once extracted, labels identification instances are displayed in the Insights tab and can be translated into over 50 languages. Clicking a label opens the instance in the media file; select Play Previous or Play Next to see more instances.
+
+## Prerequisites
+
+Review the [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## General principles
+
+This article discusses labels identification and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Does this feature perform well in my scenario? Before deploying labels identification into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+When working on the website, the instances are displayed in the Insights tab. They can also be generated in a categorized list in a JSON file that includes each label's ID, category, and instances, together with each label's specific start and end times and confidence score, as follows:
+
+To display labels identification insights in a JSON file, do the following:
+
+1. Click Download and then Insights (JSON).
+1. Copy the text and paste it into a JSON viewer.
+
+ ```json
+ "labels": [
+ {
+ "id": 1,
+ "name": "human face",
+ "language": "en-US",
+ "instances": [
+ {
+ "confidence": 0.9987,
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:25.6",
+ "start": "0:00:00",
+ "end": "0:00:25.6"
+ },
+ {
+ "confidence": 0.9989,
+ "adjustedStart": "0:01:21.067",
+ "adjustedEnd": "0:01:41.334",
+ "start": "0:01:21.067",
+ "end": "0:01:41.334"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "name": "person",
+ "referenceId": "person",
+ "language": "en-US",
+ "instances": [
+ {
+ "confidence": 0.9959,
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:26.667",
+ "start": "0:00:00",
+ "end": "0:00:26.667"
+ },
+ {
+ "confidence": 0.9974,
+ "adjustedStart": "0:01:21.067",
+ "adjustedEnd": "0:01:41.334",
+ "start": "0:01:21.067",
+ "end": "0:01:41.334"
+ }
+ ]
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+
+## Labels components
+
+During the Labels procedure, objects in a media file are processed, as follows:
+
+|Component|Definition|
+|||
+|Source |The user uploads the source file for indexing. |
+|Tagging| Images are tagged and labeled. For example, door, chair, woman, headphones, jeans. |
+|Filtering and aggregation |Tags are filtered according to their confidence level and aggregated according to their category.|
+|Confidence level| The estimated confidence level of each label is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+
+## Example use cases
+
+- Extracting labels from frames for contextual advertising or branding. For example, placing an ad for beer following footage on a beach.
+- Creating a verbal description of footage to enhance accessibility for the visually impaired, for example a background storyteller in movies.
+- Deep searching media archives for insights on specific objects to create feature stories for the news.
+- Using relevant labels to create content for trailers, highlights reels, social media or new clips.
+
+## Considerations when choosing a use case
+
+- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the video; low-quality video might impact the detected insights.
+- Carefully consider, when using this feature for law enforcement, that labels identification potentially can't detect parts of the video. To ensure fair and high-quality decisions, combine labels identification with human oversight.
+- Don't use labels identification for decisions that may have serious adverse impacts. Machine learning models can result in undetected or incorrect classification output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using content from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Do not use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, Translation & Language identification](transcription-translation-lid.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Named Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/named-entities.md
+
+ Title: Azure Video Indexer named entities extraction overview
+
+description: An introduction to the Azure Video Indexer named entities extraction component and how to use it responsibly.
++++ Last updated : 06/15/2022+++
+# Named entities extraction
+
+Named entities extraction is an Azure Video Indexer AI feature that uses Natural Language Processing (NLP) to extract insights on the locations, people, and brands appearing in audio and images in media files. Named entities extraction is automatically used with Transcription and OCR, and its insights are based on those extracted during these processes. The resulting insights are displayed in the **Insights** tab and are filtered into locations, people, and brand categories. Clicking a named entity displays its instance in the media file, along with a description of the entity and a Find on Bing link for recognizable entities.
+
+## Prerequisites
+
+Review the [Transparency Note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context).
+
+## General principles
+
+This article discusses named entities and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying named entities extraction into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+To see the insights in the website, do the following:
+
+1. Go to View and check Named Entities.
+1. Go to Insights and scroll to named entities.
+
+To display named entities extraction insights in a JSON file, do the following:
+
+1. Click Download and then Insights (JSON).
+2. Named entities are divided into three categories:
+
+ * Brands
+ * Location
+ * People
+3. Copy the text and paste it into a JSON viewer.
+
+ ```json
+ "namedPeople": [
+ {
+ "referenceId": "Satya_Nadella",
+ "referenceUrl": "https://en.wikipedia.org/wiki/Satya_Nadella",
+ "confidence": 1,
+ "description": "CEO of Microsoft Corporation",
+ "seenDuration": 33.2,
+ "id": 2,
+ "name": "Satya Nadella",
+ "appearances": [
+ {
+ "startTime": "0:01:11.04",
+ "endTime": "0:01:17.36",
+ "startSeconds": 71,
+ "endSeconds": 77.4
+ },
+ {
+ "startTime": "0:01:31.83",
+ "endTime": "0:01:37.1303666",
+ "startSeconds": 91.8,
+ "endSeconds": 97.1
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+
+## Named entities extraction components
+
+During the named entities extraction procedure, the media file is processed, as follows:
+
+|Component|Definition|
+|||
+|Source file | The user uploads the source file for indexing. |
+|Text extraction |- The audio file is sent to Speech Services API to extract the transcription.<br/>- Sampled frames are sent to the Computer Vision API to extract OCR. |
+|Analytics |The insights are then sent to the Text Analytics API to extract the entities. For example, Microsoft, Paris, or a person's name like Paul or Sarah.|
+|Processing and consolidation |The results are then processed. Where applicable, Wikipedia links are added and brands are identified via the Video Indexer built-in and customizable branding lists.|
+|Confidence value |The estimated confidence level of each named entity is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as an 0.82 score.|
+
+## Example use cases
+
+- Contextual advertising, for example, placing an ad for a Pizza chain following footage on Italy.
+- Deep searching media archives for insights on people or locations to create feature stories for the news.
+- Creating a verbal description of footage via OCR processing to enhance accessibility for the visually impaired, for example a background storyteller in movies.
+- Extracting insights on brand names.
+
+## Considerations and limitations when choosing a use case
+
+- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the audio and images; low-quality audio and images might impact the detected insights.
+- Named entities extraction only detects insights in audio and images. Logos in a brand name may not be detected.
+- Carefully consider, when using this feature for law enforcement, that named entities extraction may not always detect parts of the audio. To ensure fair and high-quality decisions, combine named entities with human oversight.
+- Don't use named entities for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using content from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Do not use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Observed people tracking & matched persons](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Observed Matched People https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/observed-matched-people.md
+
+ Title: Azure Video Indexer observed people tracking & matched faces overview
+
+description: An introduction to Azure Video Indexer observed people tracking & matched faces component responsibly.
++++ Last updated : 07/07/2022+++
+# Observed people tracking & matched faces
+
+> [!IMPORTANT]
+> Face identification, customization and celebrity recognition features access is limited based on eligibility and usage criteria in order to support our Responsible AI principles. Face identification, customization and celebrity recognition features are only available to Microsoft managed customers and partners. Use the [Face Recognition intake form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUQjA5SkYzNDM4TkcwQzNEOE1NVEdKUUlRRCQlQCN0PWcu) to apply for access.
+
+Observed people tracking and matched faces are Azure Video Indexer AI features that automatically detect and match people in media files. Observed people tracking and matched faces can be set to display insights on people, their clothing, and the exact timeframe of their appearance.
+
+The resulting insights are displayed in a categorized list in the Insights tab. The tab includes a thumbnail of each person and their ID. Selecting the thumbnail of a person displays the matched person (the corresponding face in the People insight). Insights are also generated in a categorized list in a JSON file that includes the thumbnail ID of the person, the percentage of time they appear in the file, a Wiki link (if they're a celebrity), and the confidence level.
+
+## Prerequisites
+
+Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+
+## General principles
+
+This article discusses observed people tracking and matched faces and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying observed people tracking and matched faces into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+When uploading the media file, go to Video + Audio Indexing and select Advanced.
+
+To display observed people tracking and matched faces insight on the website, do the following:
+
+1. After the file has been indexed, go to Insights and then scroll to observed people.
+
+To see the insights in a JSON file, do the following:
+
+1. Click Download and then Insights (JSON).
+1. Copy the `observedPeople` text and paste it into your JSON viewer.
+
+ The following section shows observed people and clothing. For the person with id 4 (`"id": 4`) there's also a matching face.
+
+ ```json
+ "observedPeople": [
+ {
+ "id": 1,
+ "thumbnailId": "4addcebf-6c51-42cd-b8e0-aedefc9d8f6b",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "long"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "long"
+ }
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:00.0667333",
+ "adjustedEnd": "0:00:12.012",
+ "start": "0:00:00.0667333",
+ "end": "0:00:12.012"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "thumbnailId": "858903a7-254a-438e-92fd-69f8bdb2ac88",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:23.2565666",
+ "adjustedEnd": "0:00:25.4921333",
+ "start": "0:00:23.2565666",
+ "end": "0:00:25.4921333"
+ },
+ {
+ "adjustedStart": "0:00:25.8925333",
+ "adjustedEnd": "0:00:25.9926333",
+ "start": "0:00:25.8925333",
+ "end": "0:00:25.9926333"
+ },
+ {
+ "adjustedStart": "0:00:26.3930333",
+ "adjustedEnd": "0:00:28.5618666",
+ "start": "0:00:26.3930333",
+ "end": "0:00:28.5618666"
+ }
+ ]
+ },
+ {
+ "id": 3,
+ "thumbnailId": "1406252d-e7f5-43dc-852d-853f652b39b6",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "long"
+ }
+ },
+ {
+ "id": 3,
+ "type": "skirtAndDress"
+ }
+ ],
+ "instances": [
+ {
+ "adjustedStart": "0:00:31.9652666",
+ "adjustedEnd": "0:00:34.4010333",
+ "start": "0:00:31.9652666",
+ "end": "0:00:34.4010333"
+ }
+ ]
+ },
+ {
+ "id": 4,
+ "thumbnailId": "d09ad62e-e0a4-42e5-8ca9-9a640c686596",
+ "clothing": [
+ {
+ "id": 1,
+ "type": "sleeve",
+ "properties": {
+ "length": "short"
+ }
+ },
+ {
+ "id": 2,
+ "type": "pants",
+ "properties": {
+ "length": "short"
+ }
+ }
+ ],
+ "matchingFace": {
+ "id": 1310,
+ "confidence": 0.3819
+ },
+ "instances": [
+ {
+ "adjustedStart": "0:00:34.8681666",
+ "adjustedEnd": "0:00:36.0026333",
+ "start": "0:00:34.8681666",
+ "end": "0:00:36.0026333"
+ },
+ {
+ "adjustedStart": "0:00:36.6699666",
+ "adjustedEnd": "0:00:36.7367",
+ "start": "0:00:36.6699666",
+ "end": "0:00:36.7367"
+ },
+ {
+ "adjustedStart": "0:00:37.2038333",
+ "adjustedEnd": "0:00:39.6729666",
+ "start": "0:00:37.2038333",
+ "end": "0:00:39.6729666"
+ }
+ ]
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
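+
+As a rough sketch, the request below shows the general shape of retrieving the raw index with the Get Video Index call; the location, account ID, video ID, and access token are placeholders that you obtain from the developer portal:
+
+```bash
+# Download the full index, including the observedPeople element, to a file.
+# <location>, <accountId>, <videoId>, and <accessToken> are placeholders.
+curl "https://api.videoindexer.ai/<location>/Accounts/<accountId>/Videos/<videoId>/Index?accessToken=<accessToken>" \
+  -o insights.json
+```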
+
+## Observed people tracking and matched faces components
+
+During the observed people tracking and matched faces procedure, images in a media file are processed, as follows:
+
+|Component|Definition|
+|||
+|Source file | The user uploads the source file for indexing. |
+|Detection | The media file is tracked to detect observed people and their clothing. For example, a shirt with long sleeves, a dress, or long pants. Note that to be detected, the full upper body of the person must appear in the media.|
+|Local grouping |The identified observed faces are filtered into local groups. If a person is detected more than once, additional observed face instances are created for this person. |
+|Matching and Classification |The observed people instances are matched to faces. If there is a known celebrity, the observed person will be given their name. Any number of observed people instances can be matched to the same face. |
+|Confidence value| The estimated confidence level of each observed person is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
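+
+Because a face match can be weak (the person with `"id": 4` above was matched with a confidence of 0.3819), it can help to filter out low-confidence matches before acting on them. A minimal sketch using `jq`, assuming you saved the copied `observedPeople` array to a local file:
+
+```bash
+# observedPeople.json holds the copied array; keep only people whose face
+# match clears a 0.5 confidence bar, with the id and thumbnail for lookup.
+jq 'map(select(.matchingFace.confidence >= 0.5) | {id, thumbnailId, matchingFace})' observedPeople.json
+```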
+
+## Example use cases
+
+- Tracking a person's movement, for example, in law enforcement for more efficiency when analyzing an accident or crime.
+- Improving efficiency by deep searching for matched people in organizational archives for insight on specific celebrities, for example when creating promos and trailers.
+- Improved efficiency when creating feature stories, for example, searching for people wearing a red shirt in the archives of a football game at a news or sports agency.
+
+## Considerations and limitations when choosing a use case
+
+Below are some considerations to keep in mind when using observed people and matched faces.
+
+- When uploading a file, always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 or more frames might delay the AI result. People and faces in videos recorded by cameras that are high-mounted, down-angled, or have a wide field of view (FOV) may have fewer pixels, which may result in lower accuracy of the generated insights.
+- Typically, small people or objects under 200 pixels and people who are seated may not be detected. People wearing similar clothes or uniforms might be detected as being the same person and will be given the same ID number. People or objects that are obstructed may not be detected. Tracks of people with front and back poses may be split into different instances.
+- An observed person must first be detected and appear in the people category before they're matched. Tracks are optimized to handle observed people who frequently appear in the front. Obstructions like overlapping people or faces may cause mismatches between matched people and observed people. Mismatching may occur when different people appear in the same relative spatial position in the frame within a short period.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Always respect an individualΓÇÖs right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using media from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Ocr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md
+
+ Title: Azure Video Indexer optical character recognition (OCR) overview
+
+description: An introduction to Azure Video Indexer optical character recognition (OCR) component responsibly.
++++ Last updated : 06/15/2022+++
+# Optical character recognition (OCR)
+
+Optical character recognition (OCR) is an Azure Video Indexer AI feature that extracts text from images like pictures, street signs and products in media files to create insights.
+
+OCR currently extracts insights from printed and handwritten text in over 50 languages, including from an image with text in multiple languages. For more information, see [OCR supported languages](/azure/cognitive-services/computer-vision/language-support#optical-character-recognition-ocr).
+
+## Prerequisites
+
+Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+
+## General principles
+
+This article discusses optical character recognition (OCR) and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying OCR into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+When working on the website, the insights are displayed in the **Timeline** tab. They can also be generated in a categorized list in a JSON file that includes the ID, transcribed text, duration, and confidence score.
+
+To see the instances on the website, do the following:
+
+1. Go to View and check OCR.
+1. Select Timeline to display the extracted text.
+
+Insights can also be generated in a categorized list in a JSON file that includes the ID, language, and text, together with each instance's confidence score.
+
+To see the insights in a JSON file, do the following:
+
+1. Select Download -> Insight (JSON).
+1. Copy the `ocr` element, under `insights`, and paste it into your online JSON viewer.
+
+ ```json
+ "ocr": [
+ {
+ "id": 1,
+ "text": "2017 Ruler",
+ "confidence": 0.4365,
+ "left": 901,
+ "top": 3,
+ "width": 80,
+ "height": 23,
+ "angle": 0,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:45.5",
+ "adjustedEnd": "0:00:46",
+ "start": "0:00:45.5",
+ "end": "0:00:46"
+ },
+ {
+ "adjustedStart": "0:00:55",
+ "adjustedEnd": "0:00:55.5",
+ "start": "0:00:55",
+ "end": "0:00:55.5"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "text": "2017 Ruler postppu - PowerPoint",
+ "confidence": 0.4712,
+ "left": 899,
+ "top": 4,
+ "width": 262,
+ "height": 48,
+ "angle": 0,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:44.5",
+ "adjustedEnd": "0:00:45",
+ "start": "0:00:44.5",
+ "end": "0:00:45"
+ }
+ ]
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+
+## OCR components
+
+During the OCR procedure, text images in a media file are processed, as follows:
+
+|Component|Definition|
+|||
+|Source file| The user uploads the source file for indexing.|
+|Read model |Images are detected in the media file and text is then extracted and analyzed by Azure Cognitive Services. |
+|Get read results model |The output of the extracted text is displayed in a JSON file.|
+|Confidence value| The estimated confidence level of each word is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
+
+For more information, see [OCR technology](/azure/cognitive-services/computer-vision/overview-ocr).
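+
+For example, to act only on reliably recognized text, you can filter the `ocr` element by its confidence score. A minimal sketch using `jq`, assuming you saved the copied `ocr` array to a local file:
+
+```bash
+# ocr.json holds the copied array; keep text recognized with a confidence of
+# at least 0.6, along with its bounding box for highlighting in the frame.
+jq 'map(select(.confidence >= 0.6) | {text, confidence, left, top, width, height})' ocr.json
+```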
+
+## Example use cases
+
+- Deep searching media footage for images with signposts, street names or car license plates, for example, in law enforcement.
+- Extracting text from images in media files and then translating it into multiple languages in labels for accessibility, for example in media or entertainment.
+- Detecting brand names in images and tagging them for translation purposes, for example in advertising and branding.
+- Extracting text in images that is then automatically tagged and categorized for accessibility and future usage, for example to generate content at a news agency.
+- Extracting text in warnings in online instructions and then translating the text to comply with local standards, for example, e-learning instructions for using equipment.
+
+## Considerations and limitations when choosing a use case
+
+- Carefully consider the accuracy of the results. To promote more accurate detections, check the quality of the image; low-quality images might impact the detected insights.
+- Carefully consider that, when used for law enforcement, OCR can potentially misread or fail to detect parts of the text. To ensure fair and high-quality decisions, combine OCR-based automation with human oversight.
+- When extracting handwritten text, avoid using the OCR results of signatures that are hard to read for both humans and machines. A better way to use OCR is to use it for detecting the presence of a signature for further analysis.
+- Don't use OCR for decisions that may have serious adverse impacts. Machine learning models that extract text can result in undetected or incorrect text output. Decisions based on incorrect output could have serious adverse impacts. Additionally, it's advisable to include human review of decisions that have the potential for serious impacts on individuals.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:  
+
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate content about young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using content from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched faces](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-video-indexer Topics Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/topics-inference.md
+
+ Title: Azure Video Indexer topics inference overview
+
+description: An introduction to Azure Video Indexer topics inference component responsibly.
++++ Last updated : 06/15/2022+++
+# Topics inference
+
+Topics inference is an Azure Video Indexer AI feature that automatically creates inferred insights derived from the transcribed audio, OCR content in visual text, and celebrities recognized in the video using the Video Indexer facial recognition model. The extracted Topics and categories (when available) are listed in the Insights tab. To jump to a topic in the media file, select a Topic, and then select Play Previous or Play Next.
+
+The resulting insights are also generated in a categorized list in a JSON file which includes the topic name, timeframe and confidence score.
+
+## Prerequisites
+
+Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+
+## General principles
+
+This article discusses topics and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying topics inference into your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+To display topics inference insights on the website, do the following:
+
+1. Go to Insights and scroll to Topics.
+
+To display the instances in a JSON file, do the following:
+
+1. Click Download -> Insight (JSON).
+1. Copy the `topics` text and paste it into your JSON viewer.
+
+ ```json
+ "topics": [
+ {
+ "id": 1,
+ "name": "Pens",
+ "referenceId": "Category:Pens",
+ "referenceUrl": "https://en.wikipedia.org/wiki/Category:Pens",
+ "referenceType": "Wikipedia",
+ "confidence": 0.6833,
+ "iabName": null,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:30",
+ "adjustedEnd": "0:01:17.5",
+ "start": "0:00:30",
+ "end": "0:01:17.5"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "name": "Musical groups",
+ "referenceId": "Category:Musical_groups",
+ "referenceUrl": "https://en.wikipedia.org/wiki/Category:Musical_groups",
+ "referenceType": "Wikipedia",
+ "confidence": 0.6812,
+ "iabName": null,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:01:10",
+ "adjustedEnd": "0:01:17.5",
+ "start": "0:01:10",
+ "end": "0:01:17.5"
+ }
+ ]
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
+
+For more information, see [about topics](https://azure.microsoft.com/blog/multi-modal-topic-inferencing-from-videos/).
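+
+To turn the element into a quick summary for review, a minimal sketch using `jq`, assuming you saved the copied `topics` array to a local file:
+
+```bash
+# topics.json holds the copied array; list each topic with its confidence and
+# Wikipedia reference, sorted from most to least confident.
+jq 'map({name, confidence, referenceUrl}) | sort_by(-.confidence)' topics.json
+```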
+
+## Topics components
+
+During the topics indexing procedure, topics are extracted, as follows:
+
+|Component|Definition|
+|||
+|Source language |The user uploads the source file for indexing.|
+|Pre-processing|Transcription, OCR and facial recognition AIs extract insights from the media file.|
+|Insights processing| Topics AI analyzes the transcription, OCR and facial recognition insights extracted during pre-processing: <br/>- Transcribed text: each line of transcribed text insight is examined using ontology-based AI technologies. <br/>- OCR and facial recognition insights are examined together using ontology-based AI technologies. |
+|Post-processing |- Transcribed text: insights are extracted and tied to a Topic category together with the line number of the transcribed text. For example, Politics in line 7.<br/>- OCR and facial recognition: each insight is tied to a Topic category together with the time of the topic's instance in the media file. For example, Freddie Mercury in the People and Music categories at 20.00. |
+|Confidence value |The estimated confidence level of each topic is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
+
+## Example use cases
+
+- Personalization using topics inference to match customer interests, for example websites about England posting promotions about English movies or festivals.
+- Deep-searching archives for insights on specific topics to create feature stories about companies, personas or technologies, for example by a news agency.
+- Monetization, increasing the worth of extracted insights. For example, industries like the news or social media that rely on ad revenue can deliver relevant ads by using the extracted insights as additional signals to the ad server.
+
+## Considerations and limitations when choosing a use case
+
+Below are some considerations to keep in mind when using topics:
+
+- When uploading a file, always use high-quality video content. The recommended maximum frame size is HD and frame rate is 30 FPS. A frame should contain no more than 10 people. When outputting frames from videos to AI models, only send around 2 or 3 frames per second. Processing 10 or more frames might delay the AI result.
+- When uploading a file, always use high-quality audio and video content. At least 1 minute of spontaneous conversational speech is required to perform analysis. Audio effects are detected in non-speech segments only. The minimal duration of a non-speech section is 2 seconds. Voice commands and singing aren't supported.
+- Typically, small people or objects under 200 pixels and people who are seated may not be detected. People wearing similar clothes or uniforms might be detected as being the same person and will be given the same ID number. People or objects that are obstructed may not be detected. Tracks of people with front and back poses may be split into different instances.
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third-party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using media from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [Keywords extraction](keywords.md)
+- [Transcription, translation & language identification](transcription-translation-lid.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched faces](observed-matched-people.md)
azure-video-indexer Transcription Translation Lid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/transcription-translation-lid.md
+
+ Title: Azure Video Indexer media transcription, translation and language identification overview
+
+description: An introduction to Azure Video Indexer media transcription, translation and language identification components responsibly.
++++ Last updated : 06/15/2022+++
+# Media transcription, translation and language identification
+
+Azure Video Indexer transcription, translation and language identification automatically detects, transcribes, and translates the speech in media files into over 50 languages.
+
+- Azure Video Indexer processes the speech in the audio file to extract the transcription that is then translated into many languages. When selecting to translate into a specific language, both the transcription and the insights like keywords, topics, labels or OCR are translated into the specified language. Transcription can be used as is or be combined with speaker insights that map and assign the transcripts into speakers. Multiple speakers can be detected in an audio file. An ID is assigned to each speaker and is displayed under their transcribed speech.
+- Azure Video Indexer language identification (LID) automatically recognizes the supported dominant spoken language in the video file. For more information, see [Applying LID](/azure/azure-video-indexer/language-identification-model).
+- Azure Video Indexer multi-language identification (MLID) automatically recognizes the spoken languages in different segments in the audio file and sends each segment to be transcribed in the identified languages. At the end of this process, all transcriptions are combined into the same file. For more information, see [Applying MLID](/azure/azure-video-indexer/multi-language-identification-transcription).
+
+The resulting insights are generated in a categorized list in a JSON file that includes the ID, language, transcribed text, duration, and confidence score.
+
+## Prerequisites
+
+Review [transparency note overview](/legal/azure-video-indexer/transparency-note?context=/azure/azure-video-indexer/context/context)
+
+## General principles
+
+This article discusses transcription, translation and language identification and the key considerations for making use of this technology responsibly. There are many things you need to consider when deciding how to use and implement an AI-powered feature:
+
+- Will this feature perform well in my scenario? Before deploying transcription, translation and language identification in your scenario, test how it performs using real-life data and make sure it can deliver the accuracy you need.
+- Are we equipped to identify and respond to errors? AI-powered products and features won't be 100% accurate, so consider how you'll identify and respond to any errors that may occur.
+
+## View the insight
+
+To view the insights on the website:
+
+1. Go to Insights and scroll to Transcription and Translation.
+
+To view language insights in `insights.json`, do the following:
+
+1. Select Download -> Insights (JSON).
+1. Copy the desired element, under `insights`, and paste it into your online JSON viewer.
+
+ ```json
+ "insights": {
+ "version": "1.0.0.0",
+ "duration": "0:01:50.486",
+ "sourceLanguage": "en-US",
+ "sourceLanguages": [
+ "en-US"
+ ],
+ "language": "en-US",
+ "languages": [
+ "en-US"
+ ],
+ "transcript": [
+ {
+ "id": 1,
+ "text": "Hi, I'm Doug from office. We're talking about new features that office insiders will see first and I have a program manager,",
+ "confidence": 0.8879,
+ "speakerId": 1,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:00",
+ "adjustedEnd": "0:00:05.75",
+ "start": "0:00:00",
+ "end": "0:00:05.75"
+ }
+ ]
+ },
+ {
+ "id": 2,
+ "text": "Emily Tran, with office graphics.",
+ "confidence": 0.8879,
+ "speakerId": 1,
+ "language": "en-US",
+ "instances": [
+ {
+ "adjustedStart": "0:00:05.75",
+ "adjustedEnd": "0:00:07.01",
+ "start": "0:00:05.75",
+ "end": "0:00:07.01"
+ }
+ ]
+ },
+ ```
+
+To download the JSON file via the API, use the [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/).
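+
+As a sketch, the `transcript` element can be flattened into readable lines with `jq`, assuming you saved the copied array to a local file; the speaker IDs, timestamps, and text come straight from the fields shown above:
+
+```bash
+# transcript.json holds the copied array; emit one line per sentence with its
+# start time and speaker ID.
+jq -r '.[] | "\(.instances[0].start) speaker \(.speakerId): \(.text)"' transcript.json
+```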
+
+## Transcription, translation and language identification components
+
+During the transcription, translation and language identification procedure, speech in a media file is processed, as follows:
+
+|Component|Definition|
+|||
+|Source language | The user uploads the source file for indexing, and either:<br/>- Specifies the video source language.<br/>- Selects auto detect single language (LID) to identify the language of the file. The output is saved separately.<br/>- Selects auto detect multi language (MLID) to identify multiple languages in the file. The output of each language is saved separately.|
+|Transcription API| The audio file is sent to Cognitive Services to get the transcribed and translated output. If a language has been specified, it's processed accordingly. If no language is specified, a LID or MLID process is run to identify the language after which the file is processed. |
+|Output unification |The transcribed and translated files are unified into the same file. The outputted data includes the speaker ID of each extracted sentence together with its confidence level.|
+|Confidence value |The estimated confidence level of each sentence is calculated as a range of 0 to 1. The confidence score represents the certainty in the accuracy of the result. For example, an 82% certainty is represented as a 0.82 score.|
+
+## Example use cases
+
+- Promoting accessibility by making content available for people with hearing disabilities using Azure Video Indexer to generate speech-to-text transcription and translation into multiple languages.
+- Improving content distribution to a diverse audience in different regions and languages by delivering content in multiple languages using Azure Video Indexer's transcription and translation capabilities.
+- Enhancing and improving manual closed captioning and subtitles generation by leveraging Azure Video Indexer's transcription and translation capabilities and by using the closed captions generated by Azure Video Indexer in one of the supported formats (see the sketch after this list).
+- Using language identification (LID) or multi-language identification (MLID) to transcribe videos in unknown languages to allow Azure Video Indexer to automatically identify the languages appearing in the video and generate the transcription accordingly.
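+
+A hedged sketch of downloading those captions via the REST API; the Get Video Captions route and its `format` values are assumptions based on the developer portal, so verify them there before relying on this:
+
+```bash
+# Download WebVTT captions for an indexed video; all bracketed values are
+# placeholders obtained from the developer portal.
+curl "https://api.videoindexer.ai/<location>/Accounts/<accountId>/Videos/<videoId>/Captions?format=Vtt&accessToken=<accessToken>" \
+  -o captions.vtt
+```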
+
+## Considerations and limitations when choosing a use case
+
+When used responsibly and carefully, Azure Video Indexer is a valuable tool for many industries. To respect the privacy and safety of others, and to comply with local and global regulations, we recommend the following:
+
+- Carefully consider the accuracy of the results. To promote more accurate data, check the quality of the audio; low-quality audio might impact the detected insights.
+- Always respect an individual's right to privacy, and only ingest videos for lawful and justifiable purposes.
+- Don't purposely disclose inappropriate media showing young children or family members of celebrities or other content that may be detrimental or pose a threat to an individual's personal freedom.
+- Commit to respecting and promoting human rights in the design and deployment of your analyzed media.
+- When using third party materials, be aware of any existing copyrights or permissions required before distributing content derived from them.
+- Always seek legal advice when using media from unknown sources.
+- Always obtain appropriate legal and professional advice to ensure that your uploaded videos are secured and have adequate controls to preserve the integrity of your content and to prevent unauthorized access.
+- Provide a feedback channel that allows users and individuals to report issues with the service.
+- Be aware of any applicable laws or regulations that exist in your area regarding processing, analyzing, and sharing media containing people.
+- Keep a human in the loop. Don't use any solution as a replacement for human oversight and decision-making.
+- Fully examine and review the potential of any AI model you're using to understand its capabilities and limitations.
+
+For more information, see the guidelines and limitations in [language detection and transcription](/azure/azure-video-indexer/multi-language-identification-transcription).
+
+## Next steps
+
+### Learn More about Responsible AI
+
+- [Microsoft Responsible AI principles](https://www.microsoft.com/ai/responsible-ai?activetab=pivot1%3aprimaryr6)
+- [Microsoft Responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on Responsible AI](/training/paths/responsible-ai-business-principles/)
+- [Microsoft Global Human Rights Statement](https://www.microsoft.com/corporate-responsibility/human-rights-statement?activetab=pivot_1:primaryr5)
+
+### Contact us
+
+`visupport@microsoft.com`
+
+## Azure Video Indexer insights
+
+- [Audio effects detection](audio-effects-detection.md)
+- [Face detection](face-detection.md)
+- [OCR](ocr.md)
+- [Keywords extraction](keywords.md)
+- [Labels identification](labels-identification.md)
+- [Named entities](named-entities.md)
+- [Observed people tracking & matched faces](observed-matched-people.md)
+- [Topics inference](topics-inference.md)
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
Refer to the table below to find details about resolution dates or possible work
| :- | : | :- | :- | | [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) | | After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/https://docsupdatetracker.net/index.html), the NSX-T Manager **DNS Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. | February 2023 |-
+| The cloudadmin user sees a message about the "Distributed Switch not being associated with the host" when viewing Host > Configure > Virtual switches. There *is no* actual problem; the cloudadmin user simply can't see the switch because of permissions. | March 2023 | We will look into adding read-only permissions for the Virtual Distributed Switch (VDS) to the cloudadmin account, which should make that message disappear. | |
In this article, you learned about the current known issues with the Azure VMware Solution. For more information, see [About Azure VMware Solution](introduction.md).
backup Azure Kubernetes Service Cluster Backup Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md
Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites
description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup. Previously updated : 03/20/2023 Last updated : 03/27/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- You need to install Backup Extension on both the source cluster to be backed up and the target cluster where the restore will happen. -- Backup Extension can be installed in the cluster from the *AKS portal* blade on the **Backup** tab under **Settings**. You can also use the Azure CLI commands to [manage the installation and other operations on the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#manage-operations).
+- Backup Extension can be installed in the cluster from the *AKS portal* blade on the **Backup** tab under **Settings**. You can also use the Azure CLI commands to [manage the installation and other operations on the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations).
-- Before you install an extension in an AKS cluster, you must register the `Microsoft.KubernetesConfiguration` resource provider at the subscription level. Learn how to [register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#register-the-resource-provider).
+- Before you install an extension in an AKS cluster, you must register the `Microsoft.KubernetesConfiguration` resource provider at the subscription level. Learn how to [register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#resource-provider-registrations).
-Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#manage-operations).
+Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations).
## Trusted Access
Your Azure resources access AKS clusters through the AKS regional gateway using
For AKS backup, the Backup vault accesses your AKS clusters via Trusted Access to configure backups and restores. The Backup vault is assigned a pre-defined role **Microsoft.DataProtection/backupVaults/backup-operator** in the AKS cluster, allowing it to only perform specific backup operations.
-Before you enable Trusted Access between a Backup vault and an AKS cluster, [enable a *feature flag* on the cluster's subscription](azure-kubernetes-service-cluster-manage-backups.md#enable-the-feature-flag).
+To enable Trusted Access between a Backup vault and an AKS cluster, you must register the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` at the subscription level. Learn more [to register the resource provider](azure-kubernetes-service-cluster-manage-backups.md#enable-the-feature-flag).
+
+Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access).
+
+>[!Note]
+>- You can install the Backup Extension on your AKS cluster directly from the Azure portal under the *Backup* section in AKS portal.
+>- You can also enable Trusted Access between Backup vault and AKS cluster during the backup or restore operations in the Azure portal.
-Learn [how to enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#enable-trusted-access).
## AKS Cluster
backup Azure Kubernetes Service Cluster Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md
Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 03/20/2023 Last updated : 03/27/2023
AKS backup is available in all the Azure public cloud regions, East US, North Eu
- Currently, the modification of backup policy and the modification of snapshot resource group (assigned to a backup instance during configuration of the AKS cluster backup) aren't supported.
+- The AKS cluster and Backup Extension pods should be in a running state for any backup and restore operations to be performed, including deletion of expired recovery points.
+ - For successful backup and restore operations, role assignments are required by the Backup vault's managed identity. If you don't have the required permissions, you may see permission issues during backup configuration or restore operations soon after assigning roles because the role assignments take a few minutes to take effect. Learn about the [role definitions](azure-kubernetes-service-cluster-backup-concept.md#required-roles-and-permissions). - AKS backup limits are:
backup Azure Kubernetes Service Cluster Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup.md
Title: Back up Azure Kubernetes Service (AKS) using Azure Backup
description: This article explains how to back up Azure Kubernetes Service (AKS) using Azure Backup. Previously updated : 03/20/2023 Last updated : 03/27/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) to configure backup and restore operations on an AKS cluster. Learn more [about Backup Extension](azure-kubernetes-service-cluster-backup-concept.md#backup-extension). -- Ensure that the `Microsoft.KubernetesConfiguration` and `Microsoft.DataProtection` providers are registered for your subscription before initiating backup configuration and restore operations.
+- Ensure that `Microsoft.KubernetesConfiguration`, `Microsoft.DataProtection`, and the `TrustedAccessPreview` feature flag on `Microsoft.ContainerService` are registered for your subscription before initiating the backup configuration and restore operations.
- Ensure to perform [all the prerequisites](azure-kubernetes-service-cluster-backup-concept.md) before initiating backup or restore operation for AKS backup.
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
Title: Manage Azure Kubernetes Service (AKS) backups using Azure Backup
description: This article explains how to manage Azure Kubernetes Service (AKS) backups using Azure Backup. Previously updated : 03/20/2023 Last updated : 03/27/2023 # Manage Azure Kubernetes Service backups using Azure Backup (preview)
-This article describes how to manage Azure Kubernetes Service (AKS) backups using Azure CLI commands.
+This article describes how to register resource providers on your subscriptions for using Backup Extension and Trusted Access. Also, it provides you with the Azure CLI commands to manage them.
-Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. Backup vault communicates with the cluster via this Backup Extension to perform backup and restore operations.
+Azure Backup now allows you to back up AKS clusters (cluster resources and persistent volumes attached to the cluster) using a backup extension, which must be installed in the cluster. The AKS cluster requires Trusted Access to be enabled with the Backup vault, so that the vault can communicate with the Backup Extension to perform backup and restore operations.
-## Manage operations
+## Resource provider registrations
-This section provides the set of Azure CLI commands to create, update, delete operations on the backup extension. You can use the *update* command to change the blob container where backups are stored along with compute limits for the underlying Backup Extension Pods.
+- You must register these resource providers on the subscription before initiating any backup and restore operation.
+- Once the registration is complete, you can perform backup and restore operations on all the clusters under the subscription.
-## Register the resource provider
+### Register the Backup Extension
-To register the resource provider, run the following command:
+To install the Backup Extension, you need to register the `Microsoft.KubernetesConfiguration` resource provider on the subscription. To perform the registration, run the following command:
```azurecli-interactive az provider register --namespace Microsoft.KubernetesConfiguration ```
->[!Note]
->Don't initiate extension installation before registering resource provider.
-
-### Monitor the registration process
- The registration may take up to *10 minutes*. To monitor the registration process, run the following command: ```azurecli-interactive az provider show -n Microsoft.KubernetesConfiguration -o table ```
-### Install Backup Extension
+### Register the Trusted Access
-To install the Backup Extension, use the following command:
+To enable Trusted Access between the Backup vault and AKS cluster, you must register the *TrustedAccessPreview* feature flag on *Microsoft.ContainerService* on the subscription. To perform the registration, run the following commands:
- ```azurecli-interactive
- az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid
- ```
+## Enable the feature flag
-### Update resources in Backup Extension
+To enable the feature flag, follow these steps:
-To update blob container, CPU, and memory in the Backup Extension, use the following command:
+1. Install the *aks-preview* extension:
```azurecli-interactive
- az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi]
-
- []: denotes the 3 different sub-groups of updates possible (discard the brackets while using the command)
-
+ az extension add --name aks-preview
```
-### Delete Backup Extension installation operation
-
-To stop the Backup Extension install operation, use the following command:
+1. Update to the latest version of the extension released:
```azurecli-interactive
- az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
+ az extension update --name aks-preview
```
-### Grant permission on storage account
-
-To provide *Storage Account Contributor Permission* to the Extension Identity on storage account, run the following command:
+1. Register the *TrustedAccessPreview* feature flag:
```azurecli-interactive
- az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
+ az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
```
+
+ It takes a few minutes for the status to show *Registered*.
-### View Backup Extension installation status
+1. Verify the registration status:
-To view the progress of Backup Extension installation, use the following command:
+ ```azurecli-interactive
+ az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+ ```
+
+1. When the status shows *Registered*, refresh the `Microsoft.ContainerService` resource provider registration:
```azurecli-interactive
- az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
+ az provider register --namespace Microsoft.ContainerService
```
-## Enable the feature flag
+## Backup Extension related operations
-To enable the feature flag follow these steps:
+This section provides the set of Azure CLI commands to perform create, update, or delete operations on the Backup Extension. You can use the update command to change compute limits for the underlying Backup Extension Pods.
+
+### Install Backup Extension
-1. To install the *aks-preview* extension, run the following command:
+To install the Backup Extension, run the following command:
```azurecli-interactive
- az extension add --name aks-preview
+ az k8s-extension create --name azure-aks-backup --extension-type Microsoft.DataProtection.Kubernetes --scope cluster --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid
```
-1. To update to the latest version of the extension released, run the following command:
+### View Backup Extension installation status
+
+To view the progress of Backup Extension installation, use the following command:
```azurecli-interactive
- az extension update --name aks-preview
+ az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
```
-1. To register the *TrustedAccessPreview* feature flag, run the `az feature register` command.
+### Update resources in Backup Extension
- **Example**
+To update blob container, CPU, and memory in the Backup Extension, use the following command:
```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
- ```
+ az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg --release-train stable --configuration-settings [blobContainer=containername storageAccount=storageaccountname storageAccountResourceGroup=storageaccountrg storageAccountSubscriptionId=subscriptionid] [cpuLimit=1] [memoryLimit=1Gi]
- It takes a few minutes for the status to show Registered.
+ []: denotes the 3 different sub-groups of updates possible (discard the brackets while using the command)
-1. To verify the registration status, run the `az feature show` command.
+ ```
- **Example**
+### Delete Backup Extension installation operation
+
+To stop the Backup Extension install operation, use the following command:
```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
+ az k8s-extension delete --name azure-aks-backup --cluster-type managedClusters --cluster-name aksclustername --resource-group aksclusterrg
```
-1. When the status shows as **Registered**, run the `az provider register` command to refresh the `Microsoft.ContainerService` resource provider registration.
+### Grant permission on storage account
- **Example**
+To provide *Storage Account Contributor Permission* to the Extension Identity on storage account, run the following command:
```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
+ az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name aksclustername --resource-group aksclusterresourcegroup --cluster-type managedClusters --query aksAssignedIdentity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname
```
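+
+To confirm that the assignment took effect, you can list the role assignments on the storage account scope. This is a minimal sketch; `<principalId>` stands for the Extension Identity principal ID returned by the `az k8s-extension show` query above, and role assignments can take a few minutes to propagate:
+
+   ```azurecli-interactive
+   az role assignment list --assignee <principalId> --scope /subscriptions/subscriptionid/resourceGroups/storageaccountresourcegroup/providers/Microsoft.Storage/storageAccounts/storageaccountname -o table
+   ```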
->[!Note]
->Don't initiate backup configuration before enabling the feature flag.
-## Enable Trusted Access
+## Trusted Access related operations
To enable Trusted Access between Backup vault and AKS cluster, use the following Azure CLI command:
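+
+A minimal sketch, assuming the *aks-preview* extension is installed; the role binding name and the Backup vault resource ID are placeholders:
+
+   ```azurecli-interactive
+   az aks trustedaccess rolebinding create --resource-group aksclusterrg --cluster-name aksclustername --name backupvaultbinding --source-resource-id <backupVaultResourceId> --roles Microsoft.DataProtection/backupVaults/backup-operator
+   ```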
backup Azure Kubernetes Service Cluster Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-restore.md
Title: Restore Azure Kubernetes Service (AKS) using Azure Backup
description: This article explains how to restore backed-up Azure Kubernetes Service (AKS) using Azure Backup. Previously updated : 03/03/2023 Last updated : 03/27/2023
Azure Backup now allows you to back up AKS clusters (cluster resources and persi
- AKS backup allows you to restore to original AKS cluster (that was backed up) and to an alternate AKS cluster. AKS backup allows you to perform a full restore and item-level restore. You can utilize [restore configurations](#restore-configurations) to define parameters based on the cluster resources that will be picked up during the restore. -- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#enable-trusted-access) between the Backup vault and the AKS cluster.
+- You must [install the Backup Extension](azure-kubernetes-service-cluster-manage-backups.md#install-backup-extension) in the target AKS cluster. Also, you must [enable Trusted Access](azure-kubernetes-service-cluster-manage-backups.md#register-the-trusted-access) between the Backup vault and the AKS cluster.
For more information on the limitations and supported scenarios, see the [support matrix](azure-kubernetes-service-cluster-backup-support-matrix.md).
cdn Cdn Change Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-change-provider.md
+
+ Title: Migrate between CDN providers
+
+description: Best practices of migrating between CDN providers
+ Last updated : 03/27/2023
+# Migrate between CDN providers
+
+Content Delivery Network (CDN) services can provide resiliency and add benefits for different types of workloads. Switching between CDN providers is a common practice when your web delivery requirements change or when a different service is better suited for your business needs.
+
+This article shares best practices for migrating from one CDN service to another. It covers the different Azure CDN services, how to compare these products, and best practices to consider when performing the migration.
+
+## Overview of Azure CDN profiles
+
+**Azure Front Door:** released two new tiers (Standard and Premium) on March 29, 2022, which make up the next generation of the Front Door service. It combines the capabilities of Azure Front Door (classic), Microsoft CDN (classic), and Web Application Firewall, with features such as Private Link integration, rules engine enhancements, diagnostics, and one-stop secure application acceleration for Azure customers. For more information about Azure Front Door, see [Front Door overview](../frontdoor/front-door-overview.md).
+
+**Azure CDN Standard/Premium from Verizon:** is an alternative to Azure Front Door for your general CDN and media solutions. Azure CDN from Verizon is optimized for large media streaming workloads. It has unique CDN features such as cache warmup, log delivery services, and reporting features.
+
+**Azure CDN Standard from Akamai (Retiring October 31, 2023)**: In May of 2016, Azure partnered with Akamai Technologies Inc to offer Azure CDN Standard from Akamai. Recently, Azure and Akamai Technologies Inc have decided not to renew this partnership. As a result, starting October 31, 2023, Azure CDN Standard from Akamai will no longer be supported.
+
+You'll still be able to manage your existing profiles until October 31. After October 31, you'll no longer be able to create new Azure CDN Standard from Akamai profiles or modify previously created profiles.
+
+If you don't migrate your workloads by October 31, we'll migrate your Azure CDN Standard from Akamai profile to another Azure CDN service with similar features and pricing starting November 1, 2023.
+
+## Pricing comparison
+
+Switching between CDN profiles may change your overall content delivery costs. For more information about service pricing, see [Azure Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor/) and [Azure CDN pricing](https://azure.microsoft.com/pricing/details/cdn/).
+
+## Compare Azure CDN profiles and features
+
+For a features comparison between the different Azure CDN services, see [Compare Azure CDN product features](cdn-features.md).
+
+## Guidance for migrating between CDN providers
+
+The following guidance covers considerations for scoping and tracking your CDN migration plan:
+
+### Prepare
+
+Review your existing CDN utilization and network architecture, including the following:
+
+* Create an inventory of your endpoints, custom domains, and their use cases.
+* Review your existing endpoint configurations and capture caching rules, compression rules, and other applicable settings, along with the scenarios they serve.
+
+### Proof of concept
+
+Create a small-scale proof of concept testing environment with your potential replacement CDN profile.
+
+* Define success criteria:
+ * Cost - does the new CDN profile meet your cost requirements?
+ * Performance - does the new CDN profile meet the performance requirements of your workload?
+* Create a new profile - for example, Azure CDN with Verizon.
+* Configure your new profile with similar configuration settings as your existing profile.
+* Fine tune caching and compression configuration settings to meet your requirements.
+
+### Implement
+
+Once you've completed your proof of concept testing, you can begin the migration process.
+
+* Set up the new CDN profile for production by performing validation before the change over.
+ * **Staging environment testing:**
+ * Test your workload and DNS configuration to see if it's working properly.
+ * Ensure caching is configured correctly; for example, verify the behavior of pages that shouldn't be cached, such as account pages.
+ * **A/B environment validation (if allowed):**
+ * Configure Traffic Manager to route traffic to the new CDN profile and compare performance and caching behavior.
+* **CDN service change over:** configure DNS change to point to the new CDN CNAME (see the sketch after this list).
+* **Post change monitoring:** monitor the CDN cache hit rate, origin traffic volume, any abnormal status codes, and top URLs.
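If the domain's DNS zone is hosted in Azure DNS, the change over can be a single CNAME update. A minimal sketch, assuming a hypothetical zone, record set, and new CDN endpoint hostname:
```azurecli-interactive
# Repoint the www record at the new CDN endpoint
az network dns record-set cname set-record --resource-group dnszonerg --zone-name contoso.com --record-set-name www --cname newendpoint.azureedge.net
```
Lowering the record's TTL ahead of the cutover helps clients pick up the new target quickly; keep the old CDN profile running until traffic drains.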
+
+> [!TIP]
+> Items to verify prior to migrating production workloads
+> 1. Verify configuration settings such as cache objects, TTLs and other potential custom settings at the CDN profile level are being accommodated.
+> 2. Origin application customizations are adjusted:
+> * Update Access Control List (ACL) if one is being used to allow CDN egress ranges.
+> * Traffic management tools such as load balancers have the correct policies and rules for the CDN.
+> 3. Validate origin workloads and CDN caching performance.
+> * Changing between CDNs can increase traffic to origin for a period of time until the new provider caches the content.
+
+## Improve migration with Azure Traffic Manager
+
+If you have multiple Azure CDN profiles, you can improve availability and performance using Azure Traffic Manager. You can use Traffic Manager to load balance among multiple Azure CDN endpoints for failover and geo-load balancing.
+
+In a typical failover scenario, all client requests are directed to the primary CDN profile. If the profile is unavailable, requests are sent to the secondary profile. Requests return to your primary profile when it becomes available again. Using Azure Traffic Manager in this manner ensures your web application is always available.
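As a sketch of the failover setup with the Azure CLI (profile, endpoint, and hostname values are hypothetical), each CDN profile is registered as an external endpoint and the lower `--priority` value is tried first:
```azurecli-interactive
# Priority routing sends all traffic to the highest-priority healthy endpoint
az network traffic-manager profile create --name cdn-failover --resource-group cdnrg --routing-method Priority --unique-dns-name cdn-failover-contoso

# Primary CDN profile
az network traffic-manager endpoint create --profile-name cdn-failover --resource-group cdnrg --name primary-cdn --type externalEndpoints --target primaryendpoint.azureedge.net --priority 1

# Secondary CDN profile, used only while the primary is unhealthy
az network traffic-manager endpoint create --profile-name cdn-failover --resource-group cdnrg --name secondary-cdn --type externalEndpoints --target secondaryendpoint.azureedge.net --priority 2
```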
+
+For more information, see [Failover CDN endpoints with Traffic Manager](cdn-traffic-manager.md).
+
+## Next steps
+
+* Create an [Azure Front Door](../frontdoor/create-front-door-portal.md) profile.
+* Create an [Azure CDN from Verizon](cdn-create-endpoint-how-to.md) profile.
cdn Cdn Dynamic Site Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-dynamic-site-acceleration.md
To configure a CDN endpoint to optimize delivery of dynamic files, you can eithe
**To configure an existing endpoint for DSA (Azure CDN from Akamai profiles only):**
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ 1. In the **CDN profile** page, select the endpoint you want to modify. 2. From the left pane, select **Optimization**.
cdn Cdn Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-features.md
Azure Content Delivery Network (CDN) includes four products:
* **Azure CDN Standard from Verizon** * **Azure CDN Premium from Verizon**.
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ The following table compares the features available with each product. | **Performance features and optimizations** | **Standard Microsoft** | **Standard Akamai** | **Standard Verizon** | **Premium Verizon** |
cdn Cdn Large File Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-large-file-optimization.md
Large file optimization features for **Azure CDN Standard from Verizon** and **A
## Optimize for delivery of large files with Azure CDN Standard from Akamai
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ **Azure CDN Standard from Akamai** profile endpoints offer a feature that delivers large files efficiently to users across the globe at scale. The feature reduces latencies because it reduces the load on the origin servers. The large file optimization type feature turns on network optimizations and configurations to deliver large files faster and more responsively. General web delivery with **Azure CDN Standard from Akamai** endpoints caches files only below 1.8 GB and can tunnel (not cache) files up to 150 GB. Large file optimization caches files up to 150 GB.
cdn Cdn Media Streaming Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-media-streaming-optimization.md
Partial cache sharing allows the CDN to serve partially cached content to new re
## Media streaming optimizations for Azure CDN from Akamai
-
+
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ **Azure CDN Standard from Akamai** offers a feature that delivers streaming media assets efficiently to users across the globe at scale. The feature reduces latencies because it reduces the load on the origin servers. This feature is available with the standard Akamai pricing tier. Media streaming optimization for **Azure CDN Standard from Akamai** is effective for live or video-on-demand streaming media that uses individual media fragments for delivery. This process is different from a single large asset transferred via progressive download or by using byte-range requests. For information on that style of media delivery, see [Large file optimization](cdn-large-file-optimization.md).
cdn Cdn Optimization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-optimization-overview.md
This article provides an overview of various optimization features and when you
**Azure CDN Standard from Akamai** profiles support the following optimizations:
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ * [General web delivery](#general-web-delivery) * [General media streaming](#general-media-streaming)
cdn Cdn Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-overview.md
# What is a content delivery network on Azure?
+> [!IMPORTANT]
+> Azure CDN from Akamai is scheduled to be retired on October 31, 2023. See [**Migrate CDN provider**](cdn-change-provider.md) for guidance on migrating to another Azure CDN provider.
+ A content delivery network (CDN) is a distributed network of servers that can efficiently deliver web content to users. A CDN stores cached content on edge servers in point-of-presence (POP) locations that are close to end users, to minimize latency. Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Azure CDN can also accelerate dynamic content, which can't be cached, by using various network optimizations through CDN POPs, such as route optimization to bypass Border Gateway Protocol (BGP).
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
When a new engine is available, you're prompted to update your neural voice mode
:::image type="content" source="media/custom-voice/cnv-engine-update-prompt.png" alt-text="Screenshot of displaying engine update message." lightbox="media/custom-voice/cnv-engine-update-prompt.png":::
-Go to the model details page, select **Update** at the top to display **Update** window.
+Go to the model details page and follow the on-screen instructions to install the latest engine.
-Then select **Update** to update your model to the latest engine version.
+Alternatively, select **Install the latest engine** later to update your model to the latest engine version.
You're not charged for engine update. The previous versions are still kept. You can check all engine versions for the model from **Engine version** drop-down list, or remove one if you don't need it anymore.
The updated version is automatically set as default. But you can change the defa
If you want to test each engine version of your voice model, you can select a version from the drop-down list, then select **DefaultTests** under **Testing** to listen to the sample audios. If you want to upload your own test scripts to further test your current engine version, first make sure the version is set as default, then follow the [testing steps above](#test-your-voice-model).
-After you've updated the engine version for your voice model, you need to [redeploy this new version](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product). You can only deploy the default version.
+Updating the engine will create a new version of the model at no additional cost. After you've updated the engine version for your voice model, you need to deploy the new version to [create a new endpoint](how-to-deploy-and-use-endpoint.md#add-a-deployment-endpoint). You can only deploy the default version.
++
+After you've created a new endpoint, you need to [transfer the traffic to the new endpoint in your product](how-to-deploy-and-use-endpoint.md#switch-to-a-new-voice-model-in-your-product).
For more information, [learn more about the capabilities and limits of this feature, and the best practice to improve your model quality](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
To upload training data, follow these steps:
Data files are automatically validated when you select **Submit**. Data validation includes a series of checks on the audio files to verify their file format, size, and sampling rate. If there are any errors, fix them and submit again.
-After you upload the data, you can check the details in the training set detail view. On the **Overview** tab, you can further check the pronunciation scores and the noise level for each of your data. The pronunciation score ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
+After you upload the data, you can check the details in the training set detail view. On the detail page, you can further check the pronunciation issues and the noise level for each of your data files. The pronunciation score at the sentence level ranges from 0-100. A score below 70 normally indicates a speech error or script mismatch. Utterances with an overall score lower than 70 will be rejected. A heavy accent can reduce your pronunciation score and affect the generated digital voice.
## Resolve data issues online
After upload, you can check the data details of the training set. Before continu
You can resolve data issues per utterance in Speech Studio.
-1. On the **Data details** page, select individual utterances you want to edit, then click **Edit**.
+1. On the detail page, go to the **Accepted data** or **Rejected data** page. Select individual utterances you want to change, then click **Edit**.
- :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the Data details page.":::
+ :::image type="content" source="media/custom-voice/cnv-edit-trainingset.png" alt-text="Screenshot of selecting edit button on the accepted data or rejected data details page.":::
+
+ You can choose which data issues are displayed based on your criteria.
+
+ :::image type="content" source="media/custom-voice/cnv-issues-display-criteria.png" alt-text="Screenshot of choosing which data issues to display.":::
1. The Edit window will be displayed.
You can resolve data issues per utterance in Speech Studio.
:::image type="content" source="media/custom-voice/cnv-edit-trainingset-upload-recording.png" alt-text="Screenshot that shows how to upload recording file on the Edit transcript and recording file window.":::
-1. After the data in a training set are updated, you need to check the data quality by clicking **Analyze data** before using this training set for training.
+1. After you've made changes to your data, you need to check the data quality by clicking **Analyze data** before using this dataset for training.
You can't select this training set for model training before the analysis is complete.
The issues are divided into three types. Refer to the following tables to check
**Auto-rejected**
-Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can resubmit the corrected data for training.
+Data with these errors won't be used for training. Imported data with errors will be ignored, so you don't need to delete them. You can [fix these data errors online](#resolve-data-issues-online) or upload the corrected data again for training.
| Category | Name | Description | | | -- | |
The following errors are fixed automatically, but you should review and confirm
| | -- | | | Mismatch |Silence auto fixed |The start silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it. | | Mismatch |Silence auto fixed | The end silence is detected to be shorter than 100 ms, and has been extended to 100 ms automatically. Download the normalized dataset and review it.|
+| Script | Text auto normalized|Text is automatically normalized for digits, symbols, and abbreviations. Review the script and audio to make sure they match.|
**Manual check required**
Unresolved errors listed in the next table affect the quality of training, but d
| Category | Name | Description | | | -- | |
-| Script | Non-normalized text|This script contains digits. Expand them to normalized words, and match with the audio. For example, normalize *123* to *one hundred and twenty-three*.|
-| Script | Non-normalized text|This script contains symbols. Normalize the symbols to match the audio. For example, normalize *50%* to *fifty percent*.|
+| Script | Non-normalized text |This script contains symbols. Normalize the symbols to match the audio. For example, normalize */* to *slash*.|
| Script | Not enough question utterances| At least 10 percent of the total utterances should be question sentences. This helps the voice model properly express a questioning tone.| | Script | Not enough exclamation utterances| At least 10 percent of the total utterances should be exclamation sentences. This helps the voice model properly express an excited tone.| | Script | No valid end punctuation| Add one of the following at the end of the line: full stop (half-width '.' or full-width '。'), exclamation point (half-width '!' or full-width '!' ), or question mark ( half-width '?' or full-width '?').|
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/sovereign-clouds.md
Available to US government entities and their partners only. See more informatio
- Speech translation - **Unsupported features:** - Custom Voice
+ - Custom Commands
- **Supported languages:** - See the list of supported languages [here](language-support.md)
Available to organizations with a business presence in China. See more informati
- Speech translator - **Unsupported features:** - Custom Voice
+ - Custom Commands
- **Supported languages:** - See the list of supported languages [here](language-support.md)
cosmos-db Analytical Store Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-change-data-capture.md
Last updated 03/23/2023
Change data capture (CDC) in [Azure Cosmos DB analytical store](analytical-store-introduction.md) allows you to efficiently consume a continuous and incremental feed of changed (inserted, updated, and deleted) data from analytical store. The change data capture feature of the analytical store is seamlessly integrated with Azure Synapse and Azure Data Factory, providing you with a scalable no-code experience for high data volume. As the change data capture feature is based on analytical store, it [doesn't consume provisioned RUs, doesn't affect your transactional workloads](analytical-store-introduction.md#decoupled-performance-for-analytical-workloads), provides lower latency, and has lower TCO.
+The following diagram shows change data capture (CDC) with Azure Cosmos DB analytical store. For more information on supported sink types in a mapping data flow, see [data flow supported sink types](../data-factory/data-flow-sink.md#supported-sinks).
+ :::image type="content" source="media\analytical-store-change-data-capture\overview-diagram.png" alt-text="Diagram of the analytical store in Azure Cosmos DB and how it, with change data capture, can write to various first and third-party target services."::: In addition to providing incremental data feed from analytical store to diverse targets, change data capture supports the following capabilities:
You can use analytical store change data capture, if you're currently using or p
### Incremental feed to analytical platform of your choice
-change data capture capability enables end-to-end analytical story providing you with the flexibility to use Azure Cosmos DB data on analytical platform of your choice seamlessly. It also enables you to bring Cosmos DB data into a centralized data lake and join with data from diverse data sources. For more information, see [supported sink types](../data-factory/data-flow-sink.md#supported-sinks). You can flatten the data, apply more transformations either in Azure Synapse Analytics or Azure Data Factory.
+Change data capture capability enables an end-to-end analytical solution providing you with the flexibility to use Azure Cosmos DB data with any of the supported sink types. For more information on supported sink types, see [data flow supported sink types](../data-factory/data-flow-sink.md#supported-sinks). Change data capture also enables you to bring Azure Cosmos DB data into a centralized data lake and join the data with data from other diverse sources. You can flatten the data, partition it, and apply more transformations either in Azure Synapse Analytics or Azure Data Factory.
## Change data capture on Azure Cosmos DB for MongoDB containers
cosmos-db Tutorial Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/tutorial-query.md
# Tutorial: Query Azure Cosmos DB for Gremlin by using Gremlin [!INCLUDE[Gremlin](../includes/appliesto-gremlin.md)]
-The Azure Cosmos DB [API for Gremlin](introduction.md) supports [Gremlin](https://github.com/tinkerpop/gremlin/wiki) queries. This article provides sample documents and queries to get you started. A detailed Gremlin reference is provided in the [Gremlin support](support.md) article.
+The Azure Cosmos DB [API for Gremlin](introduction.md) supports [Gremlin](https://tinkerpop.apache.org/gremlin.html) queries. This article provides sample documents and queries to get you started. A detailed Gremlin reference is provided in the [Gremlin support](support.md) article.
This article covers the following tasks:
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-go.md
The following snippets are all taken from the `todo.go` file.
clientOptions := options.Client().ApplyURI(mongoDBConnectionString).SetDirect(true)
- c, err := mongo.NewClient(clientOptions)
- err = c.Connect(ctx)
+ c, err := mongo.Connect(ctx, clientOptions)
if err != nil { log.Fatalf("unable to initialize connection %v", err) }
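The snippet assumes `mongoDBConnectionString` is already set. One way to retrieve it is with the Azure CLI; a sketch with hypothetical account and resource group names:
```azurecli-interactive
az cosmosdb keys list --name mycosmosaccount --resource-group myresourcegroup --type connection-strings --query "connectionStrings[0].connectionString" --output tsv
```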
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
Title: Allocate Azure costs
description: This article explains how to create cost allocation rules to distribute costs of subscriptions, resource groups, or tags to others. Previously updated : 02/16/2023 Last updated : 03/28/2023
Allocated costs appear in cost analysis. They appear as other items associated w
## Prerequisites -- Cost allocation currently only supports customers with a [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) or an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/).
+- Cost allocation currently only supports customers with:
+ - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) (MCA) in the Enterprise motion where you buy Azure services through a Microsoft representative. It's also called an MCA enterprise agreement.
+ - A [Microsoft Customer Agreement](https://azure.microsoft.com/pricing/purchase-options/microsoft-customer-agreement/) that you bought through the Azure website. It's also called an MCA individual agreement.
+ - An [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/).
- To create or manage a cost allocation rule, you must use an Enterprise Administrator account for [Enterprise Agreements](../manage/understand-ea-roles.md). Or you must be a [Billing account](../manage/understand-mca-roles.md) owner for Microsoft Customer Agreements. ## Create a cost allocation rule
cost-management-billing Synapse Analytics Pre Purchase Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/synapse-analytics-pre-purchase-plan.md
Previously updated : 12/06/2022 Last updated : 03/28/2023
You can save on your Azure Synapse Analytics costs when you prepurchase Azure Synapse commit units (SCU) for one year. You can use the prepurchased SCUs at any time during the purchase term. Unlike VMs, the prepurchased units don't expire on an hourly basis and you use them at any time during the term of the purchase.
-Any Azure Synapse Analytics use deducts from the prepurchased SCUs automatically. You don't need to redeploy or assign a Pre-Purchased Plan to your Azure Synapse Analytics workspaces to get the prepurchase discounts.
+Any Azure Synapse Analytics usage deducts from the prepurchased SCUs automatically. You don't need to redeploy or assign a Pre-Purchased Plan to your Azure Synapse Analytics workspaces to get the prepurchase discounts.
## Determine the right size to buy A synapse prepurchase applies to all Synapse workloads and tiers. You can think of the Pre-Purchase Plan as a pool of prepaid Synapse commit units. Usage is deducted from the pool, regardless of the workload or tier. Integrated services such as VMs for SHIR, Azure Storage accounts, and networking components are charged separately.
+There's no ratio on which the SCUs are applied. SCUs are equivalent to the purchase currency value and are deducted at retail prices. Like other reservations, the benefit of a pre-purchase plan is discounted pricing by committing to a purchase term. The more you buy, the larger the discount you receive.
+
+For example, if you want to use your SCUs for Data Warehousing - Dedicated SQL pool in West US 2, and you consume 1 hour of SQL Dedicated Pool DWU100 that has a retail price of $1.20, then 1.2 SCUs are consumed.
+ The Synapse prepurchase discount applies to usage from the following products: - Azure Synapse Analytics Dedicated SQL Pool
The Synapse prepurchase discount applies to usage from the following products:
- Azure Synapse Analytics Data Flow - Basic - Azure Synapse Analytics Data Flow - Standard
-For more information about available SCU tiers and pricing discounts, you'll use the reservation purchase experience in the following section.
+For more information about available SCU tiers and pricing discounts, you use the reservation purchase experience in the following section.
## Purchase Synapse commit units
You buy Synapse plans in the [Azure portal](https://portal.azure.com). To buy a
- **Single subscription scope** - Applies the reservation discount to the matching resources in the selected subscription. - **Shared scope** - Applies the reservation discount to matching resources in eligible subscriptions that are in the billing context. For Enterprise Agreement customers, the billing context is the enrollment. - **Management group** - Applies the reservation discount to the matching resource in the list of subscriptions that are a part of both the management group and billing scope.
-1. Select how many Azure Synapse commit units you want to purchase and then complete the purchase.
+1. Select how many SCUs you want to purchase and then complete the purchase.
:::image type="content" source="./media/synapse-analytics-pre-purchase-plan/buy-synapse-analytics-pre-purchase-plan.png" alt-text="Screenshot showing the Select the product experience for the Azure Synapse Analytics Pre-Purchase Plan." lightbox="./media/synapse-analytics-pre-purchase-plan/buy-synapse-analytics-pre-purchase-plan.png" ::: ## Change scope and ownership
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
You can install any provider package by editing the airflow environment from the
:::image type="content" source="media/concept-managed-airflow/airflow-integration.png" lightbox="media/concept-managed-airflow/airflow-integration.png" alt-text="Screenshot shows airflow integration.":::
+## Limitations
+
+* Managed Airflow in other regions will be available by GA (tentative GA is Q2 2023).
+* Data sources connecting through Airflow should be publicly accessible.
+* Blob Storage behind a virtual network isn't supported during the public preview (tentative GA is Q2 2023).
+* DAGs inside a Blob Storage account that's in a virtual network or behind a firewall aren't currently supported.
+* Azure Key Vault isn't supported in LinkedServices for importing DAGs (tentative GA is Q2 2023).
+* Airflow officially supports Blob Storage and ADLS, with some limitations.
+ ## Next steps - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
data-factory How Does Managed Airflow Work https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-does-managed-airflow-work.md
The following steps describe how to import DAGs into Managed Airflow.
### Prerequisites
-You'll need to upload a sample DAG onto an accessible Storage account.
+You'll need to upload a sample DAG to an accessible storage account (it must be placed under the dags folder).
> [!NOTE]
-> Blob Storage behind VNet are not supported during the preview.
+> Blob Storage behind a virtual network isn't supported during the preview.<br>
+> KeyVault configuration in storageLinkedServices isn't supported for importing DAGs.
[Sample Apache Airflow v2.x DAG](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/fundamentals.html). [Sample Apache Airflow v1.10 DAG](https://airflow.apache.org/docs/apache-airflow/1.10.11/_modules/airflow/example_dags/tutorial.html).
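As a sketch of the upload step (the storage account and container names are hypothetical), the sample DAG can be placed under the required `dags` folder with the Azure CLI:
```azurecli-interactive
# Upload the sample DAG into the dags folder of the container used by Managed Airflow
az storage blob upload --account-name storageaccountname --container-name airflowcontainer --name dags/tutorial.py --file tutorial.py --auth-mode login
```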
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
This article describes security alerts and notifications in Microsoft Defender for Cloud. ## What are security alerts?
-Security alerts are the notifications generated by Defender for Cloud and Defender for Cloud plans when threats are identified in your cloud, hybrid, or on-premises environment.
--- Security alerts are triggered by advanced detections in Defender for Cloud, and are available when you enable Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).-- Each alert provides details of affected resources, issues, and remediation recommendations.-- Defender for Cloud classifies alerts and prioritizes them by severity in the Defender for Cloud portal.-- Alerts are displayed for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated. -- Alerts can be exported to CSV format, or directly injected into Microsoft Sentinel.
+Security alerts are the notifications generated by Defender for Cloud's workload protection plans when threats are identified in your Azure, hybrid, or multi-cloud environments.
+
+- Security alerts are triggered by advanced detections available when you enable [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads) for specific resource types.
+- Each alert provides details of affected resources, issues, and remediation steps.
+- Defender for Cloud classifies alerts and prioritizes them by severity.
+- Alerts are displayed in the portal for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated.
+- Alerts can be exported to CSV format.
+- Alerts can also be streamed directly to a Security Information and Event Management (SIEM) such as Microsoft Sentinel, Security Orchestration Automated Response (SOAR), or IT Service Management (ITSM) solution.
- Defender for Cloud leverages the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/) to associate alerts with their perceived intent, helping formalize security domain knowledge. ### How are alerts classified?
-Defender for Cloud assigns a severity to alerts to help you prioritize how you attend to each alert. Severity is based on how confident Defender for Cloud is in the:
+Alerts have a severity level assigned to help prioritize how to attend to each alert. Severity is based on:
-- Finding/analytic used to issue the alert-- Confidence level that there was malicious intent behind the activity that led to the alert
+- The specific trigger
+- The confidence level that there was malicious intent behind the activity that led to the alert
| Severity | Recommended response |
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
description: This article lists the security alerts visible in Microsoft Defende
Previously updated : 03/05/2023 Last updated : 03/26/2023 # Security alerts - a reference guide
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Fileless attack behavior detected**<br>(VM_FilelessAttackBehavior.Windows) | The memory of the process specified contains behaviors commonly used by fileless attacks. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion | Low | | **Fileless attack technique detected**<br>(VM_FilelessAttackTechnique.Windows) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion, Execution | High | | **Fileless attack toolkit detected**<br>(VM_FilelessAttackToolkit.Windows) | The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory. | Defense Evasion, Execution | Medium |
-| **High risk software detected** | Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. Upon using these tools, the malware can be silently installed in the background. | - | Medium |
+| **High risk software detected** | Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. When you use these tools, the malware can be silently installed in the background. | - | Medium |
| **Local Administrators group members were enumerated** | Machine logs indicate a successful enumeration on group %{Enumerated Group Domain Name}\%{Enumerated Group Name}. Specifically, %{Enumerating User Domain Name}\%{Enumerating User Name} remotely enumerated the members of the %{Enumerated Group Domain Name}\%{Enumerated Group Name} group. This activity could either be legitimate activity, or an indication that a machine in your organization has been compromised and used to reconnaissance %{vmname}. | - | Informational | | **Malicious firewall rule created by ZINC server implant [seen multiple times]** | A firewall rule was created using techniques that match a known actor, ZINC. The rule was possibly used to open a port on %{Compromised Host} to allow for Command & Control communications. This behavior was seen [x] times today on the following machines: [Machine names] | - | High | | **Malicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious. | - | High |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Attempt to run high privilege command detected**<br>(AppServices_HighPrivilegeCommand) | Analysis of App Service processes detected an attempt to run a command that requires high privileges.<br>The command ran in the web application context. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities.<br>(Applies to: App Service on Windows) | - | Medium | | **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium | | **Connection to web page from anomalous IP address detected**<br>(AppServices_AnomalousPageAccess) | Azure App Service activity log indicates an anomalous connection to a sensitive web page from the listed source IP address. This might indicate that someone is attempting a brute force attack into your web app administration pages. It might also be the result of a new IP address being used by a legitimate user. If the source IP address is trusted, you can safely suppress this alert for this resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md). <br>(Applies to: App Service on Windows and App Service on Linux) | Initial Access | Low |
-| **Dangling DNS record for an App Service resource detected**<br>(AppServices_DanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This leaves you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organizationΓÇÖs domain to a site performing malicious activity.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
+| **Dangling DNS record for an App Service resource detected**<br>(AppServices_DanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This leaves you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity.<br>(Applies to: App Service on Windows and App Service on Linux) | - | High |
| **Detected encoded executable in command line data**<br>(AppServices_Base64EncodedExecutableInCommandLineParams) | Analysis of host data on {Compromised host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host.<br>(Applies to: App Service on Windows) | Defense Evasion, Execution | High | | **Detected file download from a known malicious source**<br>(AppServices_SuspectDownload) | Analysis of host data has detected the download of a file from a known malware source on your host.<br>(Applies to: App Service on Linux) | Privilege Escalation, Execution, Exfiltration, Command and Control | Medium | | **Detected suspicious file download**<br>(AppServices_SuspectDownloadArtifacts) | Analysis of host data has detected suspicious download of remote file.<br>(Applies to: App Service on Linux) | Persistence | Medium |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **PHP file in upload folder**<br>(AppServices_PhpInUploadFolder) | Azure App Service activity log indicates an access to a suspicious PHP page located in the upload folder.<br>This type of folder does not usually contain PHP files. The existence of this type of file might indicate an exploitation taking advantage of arbitrary file upload vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | Medium | | **Possible Cryptocoinminer download detected**<br>(AppServices_CryptoCoinMinerDownload) | Analysis of host data has detected the download of a file normally associated with digital currency mining.<br>(Applies to: App Service on Linux) | Defense Evasion, Command and Control, Exploitation | Medium | | **Possible data exfiltration detected**<br>(AppServices_DataEgressArtifacts) | Analysis of host/device data detected a possible data egress condition. Attackers will often egress data from machines they have compromised.<br>(Applies to: App Service on Linux) | Collection, Exfiltration | Medium |
-| **Potential dangling DNS record for an App Service resource detected**<br>(AppServices_PotentialDanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This might leave you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organizationΓÇÖs domain to a site performing malicious activity. In this case, a text record with the Domain Verification ID was found. Such text records prevent subdomain takeover but we still recommend removing the dangling domain. If you leave the DNS record pointing at the subdomain youΓÇÖre at risk if anyone in your organization deletes the TXT file or record in the future.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
+| **Potential dangling DNS record for an App Service resource detected**<br>(AppServices_PotentialDanglingDomain) | A DNS record that points to a recently deleted App Service resource (also known as "dangling DNS" entry) has been detected. This might leave you susceptible to a subdomain takeover. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity. In this case, a text record with the Domain Verification ID was found. Such text records prevent subdomain takeover but we still recommend removing the dangling domain. If you leave the DNS record pointing at the subdomain you're at risk if anyone in your organization deletes the TXT file or record in the future.<br>(Applies to: App Service on Windows and App Service on Linux) | - | Low |
| **Potential reverse shell detected**<br>(AppServices_ReverseShell) | Analysis of host data detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns.<br>(Applies to: App Service on Linux) | Exfiltration, Exploitation | Medium | | **Raw data download detected**<br>(AppServices_DownloadCodeFromWebsite) | Analysis of App Service processes detected an attempt to download code from raw-data websites such as Pastebin. This action was run by a PHP process. This behavior is associated with attempts to download web shells or other malicious components to the App Service.<br>(Applies to: App Service on Windows) | Execution | Medium | | **Saving curl output to disk detected**<br>(AppServices_CurlToDisk) | Analysis of App Service processes detected the running of a curl command in which the output was saved to the disk. While this behavior can be legitimate, in web applications this behavior is also observed in malicious activities such as attempts to infect websites with web shells.<br>(Applies to: App Service on Windows) | - | Low |
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
| **Suspicious process name detected**<br>(AppServices_ProcessWithKnownSuspiciousExtension) | Analysis of host data on {NAME} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised.<br>(Applies to: App Service on Windows) | Persistence, Defense Evasion | Medium | | **Suspicious SVCHOST process executed**<br>(AppServices_SVCHostFromInvalidPath) | The system process SVCHOST was observed running in an abnormal context. Malware often use SVCHOST to mask its malicious activity.<br>(Applies to: App Service on Windows) | Defense Evasion, Execution | High | | **Suspicious User Agent detected**<br>(AppServices_UserAgentInjection) | Azure App Service activity log indicates requests with suspicious user agent. This behavior can indicate on attempts to exploit a vulnerability in your App Service application.<br>(Applies to: App Service on Windows and App Service on Linux) | Initial Access | Medium |
-| **Suspicious WordPress theme invocation detected**<br>(AppServices_WpThemeInjection) | Azure App Service activity log indicates a possible code injection activity on your App Service resource.<br>The suspicious activity detected resembles that of a manipulation of WordPress theme to support server side execution of code, followed by a direct web request to invoke the manipulated theme file.<br>This type of activity was seen in the past as part of an attack campaign over WordPress.<br>If your App Service resource isnΓÇÖt hosting a WordPress site, it isnΓÇÖt vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High |
-| **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS).<br>If your App Service resource isnΓÇÖt hosting a Drupal site, it isnΓÇÖt vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows) | PreAttack | Medium |
-| **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications.<br>If your App Service resource isnΓÇÖt hosting a Joomla site, it isnΓÇÖt vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
-| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications.<br>If your App Service resource isnΓÇÖt hosting a WordPress site, it isnΓÇÖt vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
+| **Suspicious WordPress theme invocation detected**<br>(AppServices_WpThemeInjection) | Azure App Service activity log indicates a possible code injection activity on your App Service resource.<br>The suspicious activity detected resembles that of a manipulation of WordPress theme to support server side execution of code, followed by a direct web request to invoke the manipulated theme file.<br>This type of activity was seen in the past as part of an attack campaign over WordPress.<br>If your App Service resource isn't hosting a WordPress site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | Execution | High |
+| **Vulnerability scanner detected**<br>(AppServices_DrupalScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting a content management system (CMS).<br>If your App Service resource isn't hosting a Drupal site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows) | PreAttack | Medium |
+| **Vulnerability scanner detected**<br>(AppServices_JoomlaScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting Joomla applications.<br>If your App Service resource isn't hosting a Joomla site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
+| **Vulnerability scanner detected**<br>(AppServices_WpScanner) | Azure App Service activity log indicates that a possible vulnerability scanner was used on your App Service resource.<br>The suspicious activity detected resembles that of tools targeting WordPress applications.<br>If your App Service resource isn't hosting a WordPress site, it isn't vulnerable to this specific code injection exploit and you can safely suppress this alert for the resource. To learn how to suppress security alerts, see [Suppress alerts from Microsoft Defender for Cloud](alerts-suppression-rules.md).<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium |
| **Web fingerprinting detected**<br>(AppServices_WebFingerprinting) | Azure App Service activity log indicates a possible web fingerprinting activity on your App Service resource.<br>The suspicious activity detected is associated with a tool called Blind Elephant. The tool fingerprint web servers and tries to detect the installed applications and version.<br>Attackers often use this tool for probing the web application to find vulnerabilities.<br>(Applies to: App Service on Windows and App Service on Linux) | PreAttack | Medium | | **Website is tagged as malicious in threat intelligence feed**<br>(AppServices_SmartScreen) | Your website as described below is marked as a malicious site by Windows SmartScreen. If you think this is a false positive, contact Windows SmartScreen via report feedback link provided.<br>(Applies to: App Service on Windows and App Service on Linux) | Collection | Medium |
Microsoft Defender for Containers provides security alerts on the cluster level
| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium | | **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation which isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium | | **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt using a SOCKS protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected pod deployment which is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relations to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert’s extended properties. | Execution | Medium |
+| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The features monitored include the container image registry used, the account performing the deployment, the day of the week, how often this account performs pod deployments, the user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request that is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation relate to one another. The features monitored by this analysis include the user name used, the name of the secret, the name of the namespace, the user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium | | **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational | | **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low | | **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low | | **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount for gaining access to the node. | Privilege Escalation | Medium |
-| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster’s DNS server and poison it. | Lateral Movement | Low |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks to modify requests (in the case of MutatingAdmissionWebhook) or to inspect requests and gain sensitive information (in the case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low | | **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium | | **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low | | **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium | | **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
-| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn’t among the containers that normally run in this namespace. The kube-system namespaces should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace should not contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user/group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low | | **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium | | **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium | | **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium | | **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
-| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node’s resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
+| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium | | **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a manner similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low | | **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
| **Possible malicious web shell detected.**<br>(K8S.NODE_Webshell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected a possible web shell. Attackers will often upload a web shell to a compute resource they have compromised to gain persistence or for further exploitation. | Persistence, Exploitation | Medium | | **Burst of multiple reconnaissance commands could indicate initial activity after compromise**<br>(K8S.NODE_ReconnaissanceArtifactsBurst) <sup>[1](#footnote1)</sup> | Analysis of host/device data detected execution of multiple reconnaissance commands related to gathering system or host details performed by attackers after initial compromise. | Discovery, Collection | Low | | **Suspicious Download Then Run Activity**<br>(K8S.NODE_DownloadAndRunCombo) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a file being downloaded then run in the same command. While this isn't always malicious, this is a very common technique attackers use to get malicious files onto victim machines. | Execution, CommandAndControl, Exploitation | Medium |
-| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low |
+| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low |
| **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to the kubeconfig file on the host. The kubeconfig file, normally used by the Kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools which check if the file is accessible. | CredentialAccess | Medium | | **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service for acquiring an identity token. The container doesn't normally perform such an operation. While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium | | **MITRE Caldera agent detected**<br>(K8S.NODE_MitreCalderaTools) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
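Two of the Kubernetes alerts above, K8S_PrivilegedContainer and K8S_KubeSystemContainer, boil down to checks you can also approximate yourself when triaging a cluster. The following is a minimal, hedged sketch using the official `kubernetes` Python client, assuming a working kubeconfig; the `SYSTEM_PREFIXES` allowlist is a hypothetical baseline you would replace with your own, and this is not the detection logic Defender for Containers itself runs.

```python
# A minimal sketch, assuming the official `kubernetes` Python client and a
# reachable cluster. SYSTEM_PREFIXES is a hypothetical allowlist, not a
# Defender for Cloud API.
from kubernetes import client, config

SYSTEM_PREFIXES = ("coredns", "kube-proxy", "metrics-server")  # assumption: your baseline

def audit_cluster() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        # Roughly mirrors K8S_PrivilegedContainer: any container requesting privileged mode.
        for c in pod.spec.containers:
            sc = c.security_context
            if sc is not None and sc.privileged:
                print(f"privileged: {pod.metadata.namespace}/{pod.metadata.name}/{c.name}")
        # Roughly mirrors K8S_KubeSystemContainer: unexpected workloads in kube-system.
        if pod.metadata.namespace == "kube-system" and not pod.metadata.name.startswith(SYSTEM_PREFIXES):
            print(f"unexpected kube-system pod: {pod.metadata.name}")

if __name__ == "__main__":
    audit_cluster()
```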
| **Suspected brute force attack using a valid user**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A potential brute force attack has been detected on your resource. The attacker is using the valid user (username), which has permissions to log in. | PreAttack | High | | **Suspected brute force attack**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A potential brute force attack has been detected on your resource. | PreAttack | High | | **Suspected successful brute force attack**<br>(SQL.DB_BruteForce<br>SQL.VM_BruteForce<br>SQL.DW_BruteForce<br>SQL.MI_BruteForce<br>Synapse.SQLPool_BruteForce) | A successful login occurred after an apparent brute force attack on your resource. | PreAttack | High |
-| **SQL Server potentially spawned a Windows command shell and accessed an abnormal external source**<br>(SQL.DB_ShellExternalSourceAnomaly<br>SQL.VM_ShellExternalSourceAnomaly<br>SQL.DW_ShellExternalSourceAnomaly<br>SQL.MI_ShellExternalSourceAnomaly<br>Synapse.SQLPool_ShellExternalSourceAnomaly) | A suspicious SQL statement potentially spawned a Windows command shell with an external source that hasn’t been seen before. Executing a shell that accesses an external source is a method used by attackers to download malicious payload and then execute it on the machine and compromise it. This enables an attacker to perform malicious tasks under remote direction. Alternatively, accessing an external source can be used to exfiltrate data to an external destination. | Execution | High |
+| **SQL Server potentially spawned a Windows command shell and accessed an abnormal external source**<br>(SQL.DB_ShellExternalSourceAnomaly<br>SQL.VM_ShellExternalSourceAnomaly<br>SQL.DW_ShellExternalSourceAnomaly<br>SQL.MI_ShellExternalSourceAnomaly<br>Synapse.SQLPool_ShellExternalSourceAnomaly) | A suspicious SQL statement potentially spawned a Windows command shell with an external source that hasn't been seen before. Executing a shell that accesses an external source is a method used by attackers to download malicious payload and then execute it on the machine and compromise it. This enables an attacker to perform malicious tasks under remote direction. Alternatively, accessing an external source can be used to exfiltrate data to an external destination. | Execution | High |
| **Unusual payload with obfuscated parts has been initiated by SQL Server**<br>(SQL.VM_PotentialSqlInjection) | Someone has initiated a new payload utilizing the layer in SQL Server that communicates with the operating system while concealing the command in the SQL query. Attackers commonly hide impactful commands that are widely monitored, such as xp_cmdshell, sp_add_job, and others. Obfuscation techniques abuse legitimate constructs such as string concatenation, casting, and base changing to avoid regex detection and hurt the readability of the logs. | Execution | High |
## <a name="alerts-osrdb"></a>Alerts for open-source relational databases
| **PREVIEW - Suspicious invocation of a high-risk 'Initial Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Initial access | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'Lateral Movement Access' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise more resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Lateral movement | Medium | | **PREVIEW - Suspicious invocation of a high-risk 'persistence' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Persistence | Medium |
-| **PREVIEW - Suspicious invocation of a high-risk 'Privilege Escalation' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent.. | Privilege escalation | Medium |
+| **PREVIEW - Suspicious invocation of a high-risk 'Privilege Escalation' operation by a service principal detected**<br>(ARM_AnomalousServiceOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the service principal is compromised and is being used with malicious intent. | Privilege escalation | Medium |
| **PREVIEW - Suspicious management session using an inactive account detected**<br>(ARM_UnusedAccountPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal not in use for a long period of time is now performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW - Suspicious management session using PowerShell detected**<br>(ARM_UnusedAppPowershellPersistence) | Subscription activity logs analysis has detected suspicious behavior. A principal that doesn't regularly use PowerShell to manage the subscription environment is now using PowerShell, and performing actions that can secure persistence for an attacker. | Persistence | Medium | | **PREVIEW – Suspicious management session using Azure portal detected**<br>(ARM_UnusedAppIbizaPersistence) | Analysis of your subscription activity logs has detected suspicious behavior. A principal that doesn't regularly use the Azure portal (Ibiza) to manage the subscription environment (hasn't used the Azure portal to manage in the last 45 days, or a subscription that it is actively managing), is now using the Azure portal and performing actions that can secure persistence for an attacker. | Persistence | Medium |
| **Usage of NetSPI techniques to maintain persistence in your Azure environment**<br>(ARM_NetSPI.MaintainPersistence) | Usage of NetSPI persistence technique to create a webhook backdoor and maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure exploitation toolkit to run an arbitrary code or exfiltrate Azure Automation account credentials**<br>(ARM_PowerZure.RunCodeOnBehalf) | PowerZure exploitation toolkit detected attempting to run code or exfiltrate Azure Automation account credentials. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High | | **Usage of PowerZure function to maintain persistence in your Azure environment**<br>(ARM_PowerZure.MaintainPersistence) | PowerZure exploitation toolkit detected creating a webhook backdoor to maintain persistence in your Azure environment. This was detected by analyzing Azure Resource Manager operations in your subscription. | - | High |
-| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to an more user account under their control. |  Lateral Movement, Defense Evasion | High |
+| **Suspicious classic role assignment detected (Preview)**<br>(ARM_AnomalousClassicRoleAssignment) | Microsoft Defender for Resource Manager identified a suspicious classic role assignment in your tenant which might indicate that an account in your organization was compromised. The identified operations are designed to provide backward compatibility with classic roles that are no longer commonly used. While this activity may be legitimate, a threat actor might utilize such assignment to grant permissions to another user account under their control. |  Lateral Movement, Defense Evasion | High |
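All of the Resource Manager alerts above are derived from subscription activity logs. As a hedged triage aid (not Defender for Resource Manager's own analytics), the sketch below pulls recent Microsoft.Authorization operations, such as the classic administrator assignments behind ARM_AnomalousClassicRoleAssignment, with the azure-mgmt-monitor Python SDK; the seven-day window and the operation-name prefix are assumptions for the example.

```python
# A hedged sketch, assuming azure-identity and azure-mgmt-monitor are installed
# and AZURE_SUBSCRIPTION_ID is set. It only lists events for manual review.
import os
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

monitor = MonitorManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

# Assumption: a 7-day lookback is enough for triage.
start = (datetime.utcnow() - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")

for event in monitor.activity_logs.list(filter=f"eventTimestamp ge '{start}'"):
    op = event.operation_name.value if event.operation_name else ""
    # Classic role assignments surface under Microsoft.Authorization/classicAdministrators.
    if op.startswith("Microsoft.Authorization/classicAdministrators"):
        print(event.event_timestamp, event.caller, op)
```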
## <a name="alerts-dns"></a>Alerts for DNS
| **Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Pre Attack | High/Medium/Low | | **Phishing content hosted on a storage account**<br>(Storage.Blob_PhishingContent<br>Storage.Files_PhishingContent) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High | | **Storage account identified as source for distribution of malware**<br>(Storage.Files_WidespreadeAm) | Antimalware alerts indicate that one or more infected files are stored in an Azure file share that is mounted to multiple VMs. If attackers gain access to a VM with a mounted Azure file share, they can use it to spread malware to other VMs that mount the same share.<br>Applies to: Azure Files | Execution | Medium |
-| **Storage account with potentially sensitive data has been detected with a publicly exposed container**<br>(Storage.Blob_OpenACL) | The access policy of a container in your storage account was modified to allow anonymous access. This might lead to a data breach if the container holds any sensitive data. This alert is based on analysis of Azure activity log.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Collection | Medium |
+| **The access level of a potentially sensitive storage blob container was changed to allow unauthenticated public access**<br>(Storage.Blob_OpenACL) | The alert indicates that someone has changed the access level of a blob container in the storage account, which may contain sensitive data, to the 'Container' level, which allows unauthenticated (anonymous) public access. The change was made through the Azure portal.<br>The blob container is flagged as possibly containing sensitive data because, statistically, blob containers or storage accounts with similar names have low public exposure.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts. | Collection | Medium |
| **Authenticated access from a Tor exit node**<br>(Storage.Blob_TorAnomaly<br>Storage.Files_TorAnomaly) | One or more storage container(s) / file share(s) in your storage account were successfully accessed from an IP address known to be an active exit node of Tor (an anonymizing proxy). Threat actors use Tor to make it difficult to trace the activity back to them. Authenticated access from a Tor exit node is a likely indication that a threat actor is trying to hide their identity.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access / Pre Attack | High/Medium | | **Access from an unusual location to a storage account**<br>(Storage.Blob_GeoAnomaly<br>Storage.Files_GeoAnomaly) | Indicates that there was a change in the access pattern to an Azure Storage account. Someone has accessed this account from an IP address considered unfamiliar when compared with recent activity. Either an attacker has gained access to the account, or a legitimate user has connected from a new or unusual geographic location. An example of the latter is remote maintenance from a new application or developer.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | High/Medium/Low | | **Unusual unauthenticated access to a storage container**<br>(Storage.Blob_AnonymousAccessAnomaly) | This storage account was accessed without authentication, which is a change in the common access pattern. Read access to this container is usually authenticated. This might indicate that a threat actor was able to exploit public read access to storage container(s) in this storage account.<br>Applies to: Azure Blob Storage | Initial Access | High/Low |
-| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Azure's hash reputation analysis for malware](defender-for-storage-introduction.md#what-kind-of-alerts-does-microsoft-defender-for-storage-provide).<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Lateral Movement | High |
+| **Potential malware uploaded to a storage account**<br>(Storage.Blob_MalwareHashReputation<br>Storage.Files_MalwareHashReputation) | Indicates that a blob containing potential malware has been uploaded to a blob container or a file share in a storage account. This alert is based on hash reputation analysis leveraging the power of Microsoft threat intelligence, which includes hashes for viruses, trojans, spyware and ransomware. Potential causes may include an intentional malware upload by an attacker, or an unintentional upload of a potentially malicious blob by a legitimate user.<br>Applies to: Azure Blob Storage, Azure Files (Only for transactions over REST API)<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). | Lateral Movement | High |
| **Publicly accessible storage containers successfully discovered**<br>(Storage.Blob_OpenContainersScanning.SuccessfulDiscovery) | A successful discovery of publicly open storage container(s) in your storage account was performed in the last hour by a scanning script or tool.<br><br> This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | High/Medium | | **Publicly accessible storage containers unsuccessfully scanned**<br>(Storage.Blob_OpenContainersScanning.FailedAttempt) | A series of failed attempts to scan for publicly open storage containers was performed in the last hour. <br><br>This usually indicates a reconnaissance attack, where the threat actor tries to list blobs by guessing container names, in the hope of finding misconfigured open storage containers with sensitive data in them.<br><br> The threat actor may use their own script or use known scanning tools like Microburst to scan for publicly open containers.<br><br> ✔ Azure Blob Storage<br> ✖ Azure Files<br> ✖ Azure Data Lake Storage Gen2 | Collection | High/Low | | **Unusual access inspection in a storage account**<br>(Storage.Blob_AccessInspectionAnomaly<br>Storage.Files_AccessInspectionAnomaly) | Indicates that the access permissions of a storage account have been inspected in an unusual way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Discovery | High/Medium |
| **Unusual application accessed a storage account**<br>(Storage.Blob_ApplicationAnomaly<br>Storage.Files_ApplicationAnomaly) | Indicates that an unusual application has accessed this storage account. A potential cause is that an attacker has accessed your storage account by using a new application.<br>Applies to: Azure Blob Storage, Azure Files | Execution | High/Medium | | **Unusual data exploration in a storage account**<br>(Storage.Blob_DataExplorationAnomaly<br>Storage.Files_DataExplorationAnomaly) | Indicates that blobs or containers in a storage account have been enumerated in an abnormal way, compared to recent activity on this account. A potential cause is that an attacker has performed reconnaissance for a future attack.<br>Applies to: Azure Blob Storage, Azure Files | Execution | High/Medium | | **Unusual deletion in a storage account**<br>(Storage.Blob_DeletionAnomaly<br>Storage.Files_DeletionAnomaly) | Indicates that one or more unexpected delete operations have occurred in a storage account, compared to recent activity on this account. A potential cause is that an attacker has deleted data from your storage account.<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Exfiltration | High/Medium |
+| **Unusual unauthenticated public access to a sensitive blob container (Preview)**<br>(Storage.Blob_AnonymousAccessAnomaly.Sensitive) | The alert indicates that someone accessed a blob container with sensitive data in the storage account without authentication, using an external (public) IP address. This access is suspicious since the blob container is open to public access and is typically only accessed with authentication from internal networks (private IP addresses). This access could indicate that the blob container's access level is misconfigured, and a malicious actor may have exploited the public access. The security alert includes the discovered sensitive information context (scanning time, classification label, information types, and file types). Learn more about sensitive data threat detection. <br> Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Initial Access | High |
+| **Unusual amount of data extracted from a sensitive blob container (Preview)**<br>(Storage.Blob_DataExfiltration.AmountOfDataAnomaly.Sensitive) | The alert indicates that someone has extracted an unusually large amount of data from a blob container with sensitive data in the storage account.<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | Medium |
+| **Unusual number of blobs extracted from a sensitive blob container (Preview)**<br>(Storage.Blob_DataExfiltration.NumberOfBlobsAnomaly.Sensitive) | The alert indicates that someone has extracted an unusually large number of blobs from a blob container with sensitive data in the storage account. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Exfiltration | Medium |
+| **Access from a known suspicious application to a sensitive blob container (Preview)**<br>(Storage.Blob_SuspiciousApp.Sensitive) | The alert indicates that someone used a known suspicious application to access a blob container with sensitive data in the storage account and perform authenticated operations. <br>The access may indicate that a threat actor obtained credentials to access the storage account by using a known suspicious application. However, the access could also indicate a penetration test carried out in the organization. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Initial Access | High |
+| **Access from a known suspicious IP address to a sensitive blob container (Preview)**<br>(Storage.Blob_SuspiciousIp.Sensitive) | The alert indicates that someone accessed a blob container with sensitive data in the storage account from a known suspicious IP address identified by Microsoft Threat Intelligence. Since the access was authenticated, it's possible that the credentials allowing access to this storage account were compromised. <br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684). <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Pre-Attack | High |
+| **Access from a Tor exit node to a sensitive blob container (Preview)**<br>(Storage.Blob_TorAnomaly.Sensitive) | The alert indicates that someone with an IP address known to be a Tor exit node accessed a blob container with sensitive data in the storage account with authenticated access. Authenticated access from a Tor exit node strongly indicates that the actor is attempting to remain anonymous for possible malicious intent. Since the access was authenticated, it's possible that the credentials allowing access to this storage account were compromised. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Pre-Attack | High |
+| **Access from an unusual location to a sensitive blob container (Preview)**<br>(Storage.Blob_GeoAnomaly.Sensitive) | The alert indicates that someone has accessed a blob container with sensitive data in the storage account with authentication from an unusual location. Since the access was authenticated, it's possible that the credentials allowing access to this storage account were compromised. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Initial Access | Medium |
+| **The access level of a sensitive storage blob container was changed to allow unauthenticated public access (Preview)**<br>(Storage.Blob_OpenACL.Sensitive) | The alert indicates that someone has changed the access level of a blob container in the storage account, which contains sensitive data, to the 'Container' level, which allows unauthenticated (anonymous) public access. The change was made through the Azure portal. <br>The access level change may compromise the security of the data. We recommend taking immediate action to secure the data and prevent unauthorized access in case this alert is triggered. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the data sensitivity threat detection feature enabled. | Collection | High |
+| **Suspicious external access to an Azure storage account with overly permissive SAS token (Preview)**<br>(Storage.Blob_AccountSas.InternalSasUsedExternally) | The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. This type of access is considered suspicious because the SAS token is typically only used in internal networks (from private IP addresses). <br>The activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium |
+| **Suspicious external operation to an Azure storage account with overly permissive SAS token (Preview)**<br>(Storage.Blob_AccountSas.UnusualOperationFromExternalIp) | The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. The access is considered suspicious because operations invoked outside your network (not from private IP addresses) with this SAS token are typically used for a specific set of Read/Write/Delete operations, but other operations occurred. <br>This activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium |
+| **Unusual SAS token was used to access an Azure storage account from a public IP address (Preview)**<br>(Storage.Blob_AccountSas.UnusualExternalAccess) | The alert indicates that someone with an external (public) IP address has accessed the storage account using an account SAS token. The access is highly unusual and considered suspicious, as access to the storage account using SAS tokens typically comes only from internal (private) IP addresses. <br>It's possible that a SAS token was leaked or generated by a malicious actor either from within your organization or externally to gain access to this storage account. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Low |
+| **Malicious file uploaded to storage account (Preview)**<br>(Storage.Blob_AM.MalwareFound) | The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High |
+
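The three SAS token alerts above all flag the same anti-pattern: account-level SAS tokens with broad permissions and long expiration dates. As a hedged illustration of the least-privilege alternative, the sketch below uses the azure-storage-blob Python SDK to issue a read-only, single-blob SAS that expires within an hour; the account, container, blob, and key values are placeholders for this example. The same posture applies to the Blob_OpenACL alerts: keep container public access disabled unless it is explicitly required.

```python
# A minimal sketch, assuming the azure-storage-blob package. ACCOUNT, KEY, and
# the container/blob names are hypothetical placeholders for the example.
from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

ACCOUNT = "mystorageaccount"  # assumption: your storage account name
KEY = "<account-key>"         # assumption: retrieved from a secure store, never hardcoded

sas = generate_blob_sas(
    account_name=ACCOUNT,
    container_name="reports",
    blob_name="q1.pdf",
    account_key=KEY,
    permission=BlobSasPermissions(read=True),       # least privilege: read only
    expiry=datetime.utcnow() + timedelta(hours=1),  # short-lived, unlike the flagged tokens
)
print(f"https://{ACCOUNT}.blob.core.windows.net/reports/q1.pdf?{sas}")
```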
| **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium | | **Extraction of Azure Cosmos DB account keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High | | **Suspicious extraction of Azure Cosmos DB account keys**<br>(AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this source isn't legitimate, this may be a high-impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | High |
-| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn’t authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
-| **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won’t succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it’s an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low |
+| **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts cannot work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium |
+| **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries (see the sketch after this table). | Pre-attack | Low |
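As the fuzzing alert above notes, parameterized queries are the main mitigation for SQL injection against Azure Cosmos DB. The hedged sketch below shows the pattern with the azure-cosmos Python SDK; the endpoint, key, database, container, and `user_input` values are placeholders for the example.

```python
# A minimal sketch, assuming the azure-cosmos package. The endpoint, key, and
# database/container names are hypothetical placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("appdb").get_container_client("orders")

user_input = "customer-42"  # untrusted value taken from a request

# The value is bound as a parameter rather than concatenated into the query
# text, so injected SQL fragments are treated as data, not query syntax.
items = container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": user_input}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"])
```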
## <a name="alerts-azurenetlayer"></a>Alerts for Azure network layer
The following tables include the Defender for Servers security alerts [to be dep
| **Alert Type** | **Alert Display Name** | **Severity** |
|--|--|--|
-VM.Windows_KnownCredentialAccessTools | Suspicious process executed | High
+VM.Windows_KnownCredentialAccessTools | Suspicious process executed | High
VM.Windows_SuspiciousAccountCreation | Suspicious Account Creation Detected | Medium
-VM_AbnormalDaemonTermination | Abnormal Termination | Low
-VM_BinaryGeneratedFromCommandLine | Suspicious binary detected | Medium
-VM_CommandlineSuspectDomain Suspicious | domain name reference | Low
+VM_AbnormalDaemonTermination | Abnormal Termination | Low
+VM_BinaryGeneratedFromCommandLine | Suspicious binary detected | Medium
+VM_CommandlineSuspectDomain | Suspicious domain name reference | Low
VM_CommonBot | Behavior similar to common Linux bots detected | Medium
VM_CompCommonBots | Commands similar to common Linux bots detected | Medium
VM_CompSuspiciousScript | Shell Script Detected | Medium
VM_ThreatIntelCommandLineSuspectDomain | A possible connection to malicious loca
VM_ThreatIntelSuspectLogon | A logon from a malicious IP has been detected | High
VM_VbScriptHttpObjectAllocation | VBScript HTTP object allocation detected | High

## Next steps

To learn more about Microsoft Defender for Cloud security alerts, see the following:

- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)
- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md)
-- [Continuously export Defender for Cloud data](continuous-export.md)
+- [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Attack Path Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md
description: This article lists the attack paths in Microsoft Defender for Cloud, organized by resource. Previously updated : 01/24/2023 Last updated : 03/22/2023
# Reference list of attack paths and cloud security graph components
-This article lists the attack paths, connections, and insights you might see in Microsoft Defender for Cloud related to Defender for Cloud Security Posture Management (CSPM). What you are shown in your environment depends on the resources you're protecting and your customized configuration. You'll need to [enable Defender for CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view your attack paths. Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
+This article lists the attack paths, connections, and insights used in Defender for Cloud Security Posture Management (CSPM).
+
+- You need to [enable Defender for CSPM](enable-enhanced-security.md#enable-defender-plans-to-get-the-enhanced-security-features) to view attack paths (see the sketch after this introduction).
+- What you see in your environment depends on the resources you're protecting, and your customized configuration.
-To learn about how to [Identify and remediate attack paths](how-to-manage-attack-path.md).
+Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md).
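Because attack paths require the Defender CSPM plan (as noted in the list above), here's a hedged sketch of enabling the plan programmatically through the documented Microsoft.Security/pricings ARM endpoint; the api-version value is an assumption and may need updating, and the portal or Azure CLI work equally well.

```python
# A hedged sketch, assuming azure-identity and requests are installed and
# AZURE_SUBSCRIPTION_ID is set. The api-version is an assumption; check the
# current Microsoft.Security/pricings API reference before relying on it.
import os

import requests
from azure.identity import DefaultAzureCredential

sub_id = os.environ["AZURE_SUBSCRIPTION_ID"]
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{sub_id}"
    "/providers/Microsoft.Security/pricings/CloudPosture"
    "?api-version=2023-01-01"  # assumption: a recent pricings api-version
)
resp = requests.put(
    url,
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard"}},  # turns on the Defender CSPM plan
)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```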
## Attack paths
Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m
|--|--| | Internet exposed VM has high severity vulnerabilities | A virtual machine is reachable from the internet and has high severity vulnerabilities. | | Internet exposed VM has high severity vulnerabilities and high permission to a subscription | A virtual machine is reachable from the internet, has high severity vulnerabilities, and identity and permission to a subscription. |
-| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). |
+| Internet exposed VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine is reachable from the internet, has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| Internet exposed VM has high severity vulnerabilities and read permission to a data store | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a data store. | | Internet exposed VM has high severity vulnerabilities and read permission to a Key Vault | A virtual machine is reachable from the internet and has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and high permission to a subscription | A virtual machine has high severity vulnerabilities and has high permission to a subscription. |
-| VM has high severity vulnerabilities and read permission to a data store with sensitive data | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). |
+| VM has high severity vulnerabilities and read permission to a data store with sensitive data (Preview) | A virtual machine has high severity vulnerabilities and read permission to a data store containing sensitive data. <br/>Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| VM has high severity vulnerabilities and read permission to a key vault | A virtual machine has high severity vulnerabilities and read permission to a key vault. | | VM has high severity vulnerabilities and read permission to a data store | A virtual machine has high severity vulnerabilities and read permission to a data store. |
Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl
| Internet exposed EC2 instance has high severity vulnerabilities and high permission to an account | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to an account. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to a DB | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has permission to a database. | | Internet exposed EC2 instance has high severity vulnerabilities and read permission to S3 bucket | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy.
-| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a S3 bucket with sensitive data | An AWS EC2 instance is reachable from the internet has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and bucket policy. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). |
+| Internet exposed EC2 instance has high severity vulnerabilities and read permission to an S3 bucket with sensitive data (Preview) | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an S3 bucket containing sensitive data via an IAM policy, or via a bucket policy, or via both an IAM policy and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| Internet exposed EC2 instance has high severity vulnerabilities and read permission to a KMS | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has an IAM role attached with permission to an AWS Key Management Service (KMS) via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM policy and an AWS KMS policy.| | Internet exposed EC2 instance has high severity vulnerabilities | An AWS EC2 instance is reachable from the internet and has high severity vulnerabilities. | | EC2 instance with high severity vulnerabilities has high privileged permissions to an account | An AWS EC2 instance has high severity vulnerabilities and has permissions to an account. | | EC2 instance with high severity vulnerabilities has read permissions to a data store |An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket via an IAM policy or via a bucket policy, or via both an IAM policy and a bucket policy. |
-| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. |
+| EC2 instance with high severity vulnerabilities has read permissions to a data store with sensitive data (Preview) | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an S3 bucket containing sensitive data via an IAM policy or via a bucket policy, or via both an IAM and bucket policy. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
| EC2 instance with high severity vulnerabilities has read permissions to a KMS key | An AWS EC2 instance has high severity vulnerabilities and has an IAM role attached which is granted with permissions to an AWS Key Management Service (KMS) key via an IAM policy, or via an AWS Key Management Service (KMS) policy, or via both an IAM and AWS KMS policy. | ### Azure data
-Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md).
- | Attack Path Display Name | Attack Path Description | |--|--|
-| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. |
-| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). |
-| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. |
-| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). |
-
-### AWS Data
-
-Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md).
+| Internet exposed SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| Internet exposed SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) |
+| SQL on VM has a user account with commonly used username and allows code execution on the VM | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying VM. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
+| SQL on VM has a user account with commonly used username and known vulnerabilities | SQL on VM has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md)|
+| Managed database with excessive internet exposure allows basic (local user/password) authentication | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+| Internet exposed VM has high severity vulnerabilities and a hosted database installed | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution. |
+| Private Azure blob storage container replicates data to internet exposed and publicly accessible Azure blob storage container (Preview) | An internal Azure storage container replicates its data to another Azure storage container which is reachable from the internet and allows public access, putting this data at risk. |
+| Internet exposed Azure Blob Storage container with sensitive data is publicly accessible (Preview) | A blob storage account container with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md).|
+| Internet exposed managed database allows basic (local user/password) authentication (Preview) | A database can be accessed through the internet and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+
+### AWS data
| Attack Path Display Name | Attack Path Description | |--|--|
-| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). |
+| Internet exposed AWS S3 Bucket with sensitive data is publicly accessible | An S3 bucket with sensitive data is reachable from the internet and allows public read access without authorization required. <br/> Prerequisite: [Enable data-aware security for S3 buckets in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). |
+|Internet exposed SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
+|Internet exposed SQL on EC2 instance has a user account with commonly used username and known vulnerabilities | SQL on EC2 instance is reachable from the internet, has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
+|SQL on EC2 instance has a user account with commonly used username and allows code execution on the underlying compute | SQL on EC2 instance has a local user account with a commonly used username (which is prone to brute force attacks), and has vulnerabilities allowing code execution and lateral movement to the underlying compute. <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
+| SQL on EC2 instance has a user account with commonly used username and known vulnerabilities | SQL on EC2 instance has a local user account with a commonly used username (which is prone to brute force attacks), and has known vulnerabilities (CVEs). <br/> Prerequisite: [Enable Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md). |
+|Managed database with excessive internet exposure allows basic (local user/password) authentication | Database can be accessed through the internet from any public IP and allows authentication using username and password (basic authentication mechanism) which exposes the DB to brute force attacks. |
+|Internet exposed EC2 instance has high severity vulnerabilities and a hosted database installed | An attacker with network access to the DB machine can exploit the vulnerabilities and gain remote code execution. |
+| Private AWS S3 bucket replicates data to internet exposed and publicly accessible AWS S3 bucket (Preview) | An internal AWS S3 bucket replicates its data to another S3 bucket which is reachable from the internet and allows public access, and poses this data at risk. |
+| RDS snapshot is publicly available to all AWS accounts (Preview) | A snapshot of an RDS instance or cluster is publicly accessible by all AWS accounts. |
+| Internet exposed managed database allows basic (local user/password) authentication (Preview) | A database can be accessed through the internet and allows user/password authentication only which exposes the DB to brute force attacks. |
+| Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket (Preview) | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket|
### Azure containers
Prerequisite: [Enable Defender for DevOps](defender-for-devops-introduction.md).
| Attack Path Display Name | Attack Path Description | |--|--|
-| Internet exposed GitHub repository with plaintext secret is publicly accessible (Preview) | A GitHub repositorie is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. |
+| Internet exposed GitHub repository with plaintext secret is publicly accessible (Preview) | A GitHub repository is reachable from the internet, allows public read access without authorization required, and holds plaintext secrets. |
## Cloud security graph components list
This section lists all of the cloud security graph components (connections and
| Insight | Description | Supported entities | |--|--|--|
-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod |
-| Contains sensitive data | Indicates that a resource contains sensitive data based on Microsoft Purview scan and applicable only if Microsoft Purview is enabled. For more details, you can learn how to [prioritize security actions by data sensitivity](./information-protection.md). | Azure SQL Server, Azure Storage Account, AWS S3 bucket |
+| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance |
+| Allows basic authentication | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |
+| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender for CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts |
+| Moves data to | Indicates that a resource moves its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
+| Gets data from | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |
| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources |
-| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
+| Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |
| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required | Azure storage account, AWS S3 bucket, GitHub repository |
-| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | AAD User account, IAM user |
-| Is external user | Indicates that the user account is outside the organization's domain | AAD User account |
+| Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user |
+| Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account |
| Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity |
-| Contains common usernames | Indicates that a SQL server has user accounts with common usernames which are prone to brute force attacks. | SQL on VM |
-| Can execute code on the host | Indicates that a SQL server allows executing code on the underlying VM using a built-in mechanism such as xp_cmdshell. | SQL on VM |
-| Has vulnerabilities | Indicates that the resource SQL server has vulnerabilities detected | SQL on VM |
+| Contains common usernames | Indicates that a SQL server has user accounts with common usernames which are prone to brute force attacks. | SQL VM, Arc-Enabled SQL VM |
+| Can execute code on the host | Indicates that a SQL server allows executing code on the underlying VM using a built-in mechanism such as xp_cmdshell. | SQL VM, Arc-Enabled SQL VM |
+| Has vulnerabilities | Indicates that the resource SQL server has vulnerabilities detected | SQL VM, Arc-Enabled SQL VM |
| DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod | | Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Kubernetes image | | Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Kubernetes image | | Public IP metadata | Lists the metadata of a Public IP | Public IP |
-| Identity metadata | Lists the metadata of an identity | AAD Identity |
+| Identity metadata | Lists the metadata of an identity | Azure AD Identity |
### Connections | Connection | Description | Source entity types | Destination entity types | |--|--|--|--|
-| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | AAD managed identity |
-| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | AAD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization | All Azure & AWS resources, All Kubernetes entities, All DevOps entities |
+| Can authenticate as | Indicates that an Azure resource can authenticate to an identity and use its privileges | Azure VM, Azure VMSS, Azure Storage Account, Azure App Services, SQL Servers | Azure AD managed identity |
+| Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|
+| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database |
| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service |
-| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, Kubernetes container | SQL, Kubernetes image, Kubernetes pod |
-| Member of | Indicates that the source identity is a member of the target identities group | AAD group, AAD user | AAD group |
+| Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Kubernetes image, Kubernetes pod |
+| Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group |
| Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod | ## Next steps -- [What are the cloud security graph, attack path analysis, and the cloud security explorer?](concept-attack-path.md)
+- [Identify and analyze risks across your environment](concept-attack-path.md)
- [Identify and remediate attack paths](how-to-manage-attack-path.md)-- [Cloud security explorer](how-to-manage-cloud-security-explorer.md)
+- [Cloud security explorer](how-to-manage-cloud-security-explorer.md)
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Previously updated : 02/28/2023 Last updated : 03/21/2023 # Automatically configure vulnerability assessment for your machines
Defender for Cloud collects data from your machines using agents and extensions.
To assess your machines for vulnerabilities, you can use one of the following solutions: -- Microsoft Defender Vulnerability Management available in Microsoft Defender for Endpoint (included with Microsoft Defender for Servers)-- An integrated Qualys agent (included with Microsoft Defender for Servers)
+- Microsoft Defender Vulnerability Management solution (included with Microsoft Defender for Servers)
+- Built-in Qualys agent (included with Microsoft Defender for Servers)
- A Qualys or Rapid7 scanner that you've licensed separately and configured within Defender for Cloud (this scenario is called the Bring Your Own License, or BYOL, scenario) > [!NOTE]
To assess your machines for vulnerabilities, you can use one of the following so
:::image type="content" source="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png" alt-text="Screenshot showing where to turn on deployment of vulnerability assessment for machines." lightbox="media/auto-deploy-vulnerability-assessment/turn-on-deploy-vulnerability-assessment.png"::: > [!TIP]
- > Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
+ > If you select the "Microsoft Defender for Cloud built-in Qualys solution", Defender for Cloud enables the following policy: [(Preview) Configure machines to receive a vulnerability assessment provider](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f13ce0167-8ca6-4048-8e6b-f996402e3c1b).
-1. Select **Apply** and **Save**.
+1. Select **Apply** and then select **Save**.
1. To view the findings for **all** supported vulnerability assessment solutions, see the **Machines should have vulnerability findings resolved** recommendation.
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
Title: Overview of Cloud Security Posture Management (CSPM)
description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 03/08/2023 Last updated : 03/26/2023 # Cloud Security Posture Management (CSPM)
One of Microsoft Defender for Cloud's main pillars for cloud security is Cloud S
Defender for Cloud continually assesses your resources, subscriptions and organization for security issues. Defender for Cloud shows your security posture in secure score. The secure score is an aggregated score of the security findings that tells you your current security situation. The higher the score, the lower the identified risk level.
-## Availability
+## Prerequisites
-|Aspect|Details|
-|-|:-|
-|Release state:| Foundational CSPM capabilities: GA <br> Defender Cloud Security Posture Management (CSPM): Preview |
-| Prerequisites | - **Foundational CSPM capabilities** - None <br> <br> - **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't populate with vulnerabilities because the agentless scanner is disabled. |
-|Clouds:| **Foundational CSPM capabilities** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. <br> <br> **Defender Cloud Security Posture Management (CSPM)** <br> :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br> <br> For Connected AWS accounts and GCP projects availability, see the [feature availability](#defender-cspm-plan-options) table. |
+- **Foundational CSPM capabilities** - None
+- **Defender Cloud Security Posture Management (CSPM)** - Agentless scanning requires the **Subscription Owner** to enable the plan. Anyone with a lower level of authorization can enable the Defender CSPM plan but the agentless scanner won't be enabled by default due to lack of permissions. Attack path analysis and security explorer won't be populated with vulnerabilities because the agentless scanner is disabled.
+
+For commercial and national cloud coverage, see the [features supported in different Azure cloud environments](support-matrix-defender-for-cloud.md#features-supported-in-different-azure-cloud-environments).
## Defender CSPM plan options
-Defender for cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md) which measure the current status of your organizationΓÇÖs posture.
+Defender for Cloud offers foundational multicloud CSPM capabilities for free. These capabilities are automatically enabled by default on any subscription or account that has onboarded to Defender for Cloud. The foundational CSPM includes asset discovery, continuous assessment and security recommendations for posture hardening, compliance with Microsoft Cloud Security Benchmark (MCSB), and a [Secure score](secure-score-access-and-track.md) which measures the current status of your organization's posture.
The optional Defender CSPM plan provides advanced posture management capabilities such as [Attack path analysis](how-to-manage-attack-path.md), [Cloud security explorer](how-to-manage-cloud-security-explorer.md), advanced threat hunting, and [security governance capabilities](concept-regulatory-compliance.md), as well as tools to assess your [security compliance](review-security-recommendations.md) with a wide range of benchmarks, regulatory standards, and any custom security policies required in your organization, industry, or region.
+### Plan pricing
+
+> [!NOTE]
+> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on May 1, 2023. Billing will apply to compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines. When billing starts, existing Microsoft Defender for Cloud customers will receive automatically applied discounts for Defender CSPM.
+
+ Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Databases and Storage accounts at $15/billable resource/month. If you have one of the following plans enabled, you will receive a discount.
+
+Current Microsoft Defender for Cloud customers receive automatically applied discounts (a 5-25% discount per billed workload, based on the highest applicable discount).
+
+Refer to the following table:
+
+| Current Defender for Cloud Customer | Automatic Discount | Defender CSPM Price |
+|--|--|--|
+|Defender for Servers P2 | 25% | **$11.25** / Compute or Data workload / month |
+|Defender for Containers | 10% | **$13.50** / Compute or Data workload / month |
+|Defender for DBs / Defender for Storage | 5% | **$14.25** / Compute or Data workload / month |
+
+## Plan availability
+
+Learn more about [Defender CSPM pricing](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
The following table summarizes each plan and its cloud availability. | Feature | Foundational CSPM capabilities | Defender CSPM | Cloud availability | |--|--|--|--|
-| Continuous assessment of the security configuration of your cloud resources | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Security recommendations to fix misconfigurations and weaknesses](review-security-recommendations.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png":::| Azure, AWS, GCP, on-premises |
+| Asset inventory | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
| [Secure score](secure-score-security-controls.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Data visualization and reporting with Azure Workbooks | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Data exporting | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Workflow automation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Remediation tracking | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |
+| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
| [Governance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | | [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| Agentless discovery for Kubernetes | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| Agentless vulnerability assessments for container images, including registry scanning (\* Up to 20 unique images per billable resource) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |
+| Sensitive data discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| Data flows discovery | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
+| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS |
> [!NOTE] > If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis to the artifacts that arrive through those connectors. >
-> To enable Governance for for DevOps related recommendations, the Defender CSPM plan needs to be enabled on the Azure subscription that hosts the DevOps connector.
+> To enable Governance for DevOps related recommendations, the Defender CSPM plan needs to be enabled on the Azure subscription that hosts the DevOps connector.
## Next steps
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
+
+ Title: Support and prerequisites for data-aware security posture - Microsoft Defender for Cloud
+description: Learn about the requirements for data-aware security posture in Microsoft Defender for Cloud
++++ Last updated : 03/23/2023++
+# Support and prerequisites for data-aware security posture
+
+Review the requirements on this page before setting up [data-aware security posture](concept-data-security-posture.md) in Microsoft Defender for Cloud.
+
+## Enabling sensitive data discovery
+
+Sensitive data discovery is available in the Defender CSPM and Defender for Storage plans.
+
+- When you enable one of the plans, the sensitive data discovery extension is turned on as part of the plan.
+- If you have existing plans running, the extension is available, but turned off by default.
+- Existing plan status shows as "Partial" rather than "Full" until the feature is turned on manually.
+- The feature is turned on at the subscription level.
++
+## What's supported
+
+The table summarizes support for data-aware posture management.
+
+**Support** | **Details**
+ |
+What Azure data resources can I scan? | Azure storage accounts v1, v2<br/><br/> Azure Data Lake Storage Gen1/Gen2<br/><br/>Accounts are supported behind private networks but not behind private endpoints.<br/><br/> Defender for Cloud can discover data encrypted by KMB or a customer-managed key. <br/><br/>Page blobs aren't scanned.
+What AWS data resources can I scan? | AWS S3 buckets<br/><br/> Defender for Cloud can scan encrypted data, but not data encrypted with a customer-managed key.
+What permissions do I need for scanning? | Storage account: Subscription Owner or Microsoft.Storage/storageaccounts/{read/write} and Microsoft.Authorization/roleAssignments/{read/write/delete}<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).
+What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.
+What Azure regions are supported? | You can scan Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West; Jio India West; North Central US; North Europe; Norway East; South Africa North; South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West; West Central US; West Europe; West US; West US 3.<br/><br/> Scanning is done locally in the region.
+What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/> Scanning is done locally in the region.
+Do I need to install an agent? | No, scanning is agentless.
+What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't incur other costs beyond the respective plan costs.
+
+## Scanning
+
+- It takes up to 24 hours to see the results for a first scan.
+- Refreshed results for a resource that's previously been scanned take up to eight days.
+- A new Azure storage account that's added to an already scanned subscription is scanned within 24 hours or less.
+- A new AWS S3 bucket that's added to an already scanned AWS account is scanned within 48 hours or less.
+++
+## Configuring data sensitivity settings
+
+The main steps for configuring data sensitivity settings include:
+- [Import custom sensitive info types/labels from Microsoft Purview compliance portal](data-sensitivity-settings.md#import-custom-sensitive-info-typeslabels-from-microsoft-purview-compliance-portal)
+- [Customize sensitive data categories/types](data-sensitivity-settings.md#customize-sensitive-data-categoriestypes)
+- [Set the threshold for sensitivity labels](data-sensitivity-settings.md#set-the-threshold-for-sensitive-data-labels)
+
+[Learn more](/microsoft-365/compliance/create-sensitivity-labels) about sensitivity labels in Microsoft Purview.
+
+## Discovery and scanning
+
+Defender for Cloud starts discovering and scanning data immediately after enabling a plan, or after turning on the feature in plans that are already running.
+
+- After you onboard the feature, results appear in the Defender for Cloud portal within 24 hours.
+- After files are updated in the scanned resources, data is refreshed within eight days.
+
+## Scanning AWS storage
+
+In order to protect AWS resources in Defender for Cloud, you set up an AWS connector, using a CloudFormation template to onboard the AWS account.
+
+- To scan AWS data resources, Defender for Cloud updates the CloudFormation template.
+- The CloudFormation template creates a new role in AWS IAM, to allow permission for the Defender for Cloud scanner to access data in the S3 buckets.
+- To connect AWS accounts, you need Administrator permissions on the account.
+- The role allows these permissions: S3 read only; KMS decrypt.
+++++
+## Next steps
+
+[Enable](data-security-posture-enable.md) data-aware security posture.
+
defender-for-cloud Concept Data Security Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture.md
+
+ Title: Data-aware security posture in Microsoft Defender for Cloud
+description: Learn how Defender for Cloud helps improve data security posture in a multicloud environment.
++++ Last updated : 03/09/2023+
+# About data-aware security posture (preview)
+
+As digital transformation accelerates, organizations move data to the cloud at an exponential rate using multiple data stores such as object stores and managed/hosted databases. The dynamic and complex nature of the cloud has increased data threat surfaces and risk. This causes challenges for security teams around data visibility and protecting the cloud data estate.
+
+Data-aware security in Microsoft Defender for Cloud helps you to reduce data risk, and respond to data breaches. Using data-aware security posture you can:
+
+- Automatically discover sensitive data resources across multiple clouds.
+- Evaluate data sensitivity, data exposure, and how data flows across the organization.
+- Proactively and continuously uncover risks that might lead to data breaches.
+- Detect suspicious activities that might indicate ongoing threats to sensitive data resources.
+
+## Automatic discovery
+
+Data-aware security posture automatically and continuously discovers managed and shadow data resources across clouds, including different types of object stores and databases.
+
+- You can discover sensitive data using the sensitive data discovery extension that's included in the Defender Cloud Security Posture Management (CSPM) and Defender for Storage plans.
+- Discovery of hosted databases and data flows is available in Cloud Security Explorer and Attack Paths. This functionality is available in the Defender for CSPM plan, and isn't dependent on the extension.
+
+## Data security in Defender CSPM
+
+Defender CSPM provides visibility and contextual insights into your organizational security posture. The addition of data-aware security posture to the Defender CSPM plan enables you to proactively identify and prioritize critical data risks, distinguishing them from less risky issues.
+
+### Attack paths
+
+Attack path analysis helps you to address security issues that pose immediate threats, and have the greatest potential for exploit in your environment. Defender for Cloud analyzes which security issues are part of potential attack paths that attackers could use to breach your environment. It also highlights the security recommendations that need to be resolved in order to mitigate the risks.
+
+You can discover risk of data breaches by attack paths of internet-exposed VMs that have access to sensitive data stores. Hackers can exploit exposed VMs to move laterally across the enterprise to access these stores. Review [attack paths](attack-path-reference.md#attack-paths).
+
+### Cloud Security Explorer
+
+Cloud Security Explorer helps you identify security risks in your cloud environment by running graph-based queries on Cloud Security Graph (Defender for Cloud's context engine). You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account.
+
+You can leverage Cloud Security Explorer query templates, or build your own queries, to find insights about misconfigured data resources that are publicly accessible and contain sensitive data, across multicloud environments. You can run queries to examine security issues, and to get environment context into your asset inventory, exposure to the internet, access controls, data flows, and more. Review [cloud graph insights](attack-path-reference.md#cloud-security-graph-components-list).
++
+## Data security in Defender for Storage
+
+Defender for Storage monitors Azure storage accounts with advanced threat detection capabilities. It detects potential data breaches by identifying harmful attempts to access or exploit data, and by identifying suspicious configuration changes that could lead to a breach.
+
+When early suspicious signs are detected, Defender for Storage generates security alerts, allowing security teams to quickly respond and mitigate.
+
+By applying sensitivity information types and Microsoft Purview sensitivity labels on storage resources, you can easily prioritize the alerts and recommendations that focus on sensitive data.
++
+## Scanning with smart sampling
+
+Defender for Cloud uses smart sampling to scan a selected number of files in your cloud datastores. The sampling results discover evidence of sensitive data issues, while saving on scanning costs and time.
+
+## Data sensitivity settings
+
+Data sensitivity settings define what's considered sensitive data in your organization. Data sensitivity values in Defender for Cloud are based on:
+
+- **Predefined sensitive information types**: Defender for Cloud uses the built-in sensitive information types in [Microsoft Purview](/microsoft-365/compliance/sensitive-information-type-learn-about). This ensures consistent classification across services and workloads. Some of these types are enabled by default in Defender for Cloud. You can modify these defaults.
+- **Custom information types/labels**: You can optionally import custom sensitive information types and [labels](/microsoft-365/compliance/sensitivity-labels) that you've defined in the Microsoft Purview compliance portal.
+- **Sensitive data thresholds**: In Defender for Cloud you can set the threshold for sensitive data labels. The threshold determines the minimum confidence level for a label to be marked as sensitive in Defender for Cloud. Thresholds make it easier to explore sensitive data.
+
+When scanning resources for data sensitivity, scan results are based on these settings.
+
+When you enable data-aware security capabilities with the sensitive data discovery component in the Defender CSPM or Defender for Storage plans, Defender for Cloud uses algorithms to identify storage resources that appear to contain sensitive data. Resources are labeled in accordance with data sensitivity settings.
+
+Changes in sensitivity settings take effect the next time that resources are scanned.
++
+## Next steps
+
+[Prepare and review requirements](concept-data-security-posture-prepare.md) for data-aware security posture management.
defender-for-cloud Create Custom Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/create-custom-recommendations.md
+
+ Title: Create Custom Recommendations in Microsoft Defender for Cloud
+description: This article explains how to create custom recommendations in Microsoft Defender for Cloud to secure your environment based on your organization's internal needs and requirements.
++ Last updated : 03/26/2023+
+# Create custom recommendations and security standards
+
+Recommendations give you suggestions on how to better secure your resources.
+
+Security standards contain comprehensive sets of security recommendations to help secure your cloud environments.
+
+Security teams can use the readily available recommendations and regulatory standards, and can also create their own custom recommendations and standards to meet specific internal requirements in their organization.
+
+Microsoft Defender for Cloud provides the option of creating custom recommendations and standards for AWS and GCP using KQL queries. You can use a query editor to build and test queries over your data.
+
+There are three types of resources for creating and managing custom recommendations:
+
+- **Recommendations** - contains:
+ - Recommendation details (name, description, severity, remediation logic, etc.)
+ - Recommendation logic in KQL.
+ - The standard it belongs to.
+- **Standard** - defines a set of recommendations.
+- **Standard assignment** - defines the scope that the standard evaluates (for example, specific AWS accounts).
+
+## Prerequisites
+
+|Aspect|Details|
+|-|:-|
+|Required/Preferred Environmental Requirements| This preview includes only AWS and GCP recommendations. <br> This feature will be part of the Defender CSPM bundle in the future. |
+| Required Roles & Permissions | Subscription Owner / Contributor |
+|Clouds:| :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
+
+## Create a custom recommendation
+
+1. In Microsoft Defender for Cloud, select **Environment settings**.
+
+1. Select the relevant account/project.
+
+1. Select **Standards**.
+
+1. Select **Create** and then select **Recommendation**.
+
+ :::image type="content" source="./media/create-custom-recommendations/select-create-recommendation.png" alt-text="Screenshot showing where to select Create and then Recommendation." lightbox="./media/create-custom-recommendations/select-create-recommendation.png":::
+
+1. Fill in the recommendation details (for example: name, severity) and select the standard(s) you'd like to add this recommendation to.
+
+ :::image type="content" source="./media/create-custom-recommendations/fill-info-recommendation.png" alt-text="Screenshot showing where to fill in description details of recommendation." lightbox="./media/create-custom-recommendations/fill-info-recommendation.png":::
+
+1. Write a KQL query that defines the recommendation logic. You can write the query in the "recommendation query" text box or [use the query editor](#create-new-queries-using-the-query-editor).
+
+ :::image type="content" source="./media/create-custom-recommendations/open-query-editor.png" alt-text="Screenshot showing where to open the query editor." lightbox="./media/create-custom-recommendations/open-query-editor.png":::
+
+1. Select **Next** and review the recommendation details.
+
+ :::image type="content" source="./media/create-custom-recommendations/review-recommendation.png" alt-text="Screenshot showing where to review the recommendation details." lightbox="./media/create-custom-recommendations/review-recommendation.png":::
+
+1. Select **Save**.
+
+## Create a custom standard
+
+1. In Microsoft Defender for Cloud, select **Environment settings**.
+
+1. Select the relevant account/project.
+
+1. Select **Standards**.
+
+1. Select **Add** and then select **Standard**.
+
+ :::image type="content" source="./media/create-custom-recommendations/select-add-standard.png" alt-text="Screenshot showing where to select Add and then Standard." lightbox="./media/create-custom-recommendations/select-add-standard.png":::
+
+1. Fill in a name and description and select the recommendations you want to include in this standard.
+
+ :::image type="content" source="./media/create-custom-recommendations/fill-name-description.png" alt-text="Screenshot showing where to fill in your custom recommendation's name and description." lightbox="./media/create-custom-recommendations/fill-name-description.png":::
+
+1. Select **Save**. The new standard is assigned to the account/project you created it in. You can assign the same standard to other accounts/projects on which you have Contributor or higher access.
+
+## Create new queries using the query editor
+
+In the query editor, you can run queries over your raw data (native API calls).
+To create a new query, select the **Open query editor** button. The editor contains data on all the native APIs we support, to help you build your queries. The data appears in the same structure as in the API. You can view the results of your query in the **Results** pane. The [**How to**](#steps-for-building-a-query) tab gives you step-by-step instructions for building your query.
++
+### Steps for building a query
+
+1. The first row of the query should include the environment and resource type. For example: `| where Environment == 'AWS' and Identifiers.Type == 'ec2.instance'`
+1. The query must contain an `iff` statement that defines the healthy or unhealthy conditions. Use this template and edit only the condition: `| extend HealthStatus = iff(condition, 'UNHEALTHY', 'HEALTHY')`
+1. The last row should return all the original columns: `| project Id, Name, Environment, Identifiers, AdditionalData, Record, HealthStatus`. A complete sketch that combines these steps follows the note below.
+
+ >[!Note]
+ >The Record field contains the data structure as it is returned from the AWS / GCP API. Use this field to define conditions that determine whether the resource is healthy or unhealthy. <br> You can access internal properties of the Record field using dot notation, for example: `| extend EncryptionType = Record.Encryption.Type`.
+
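+The following is a minimal sketch that assembles the three steps above into one query. The `ec2.instance` resource type and the `Record.Encryption.Type` property are illustrative assumptions; substitute the fields your own check needs.
+
+```kusto
+// Hypothetical check: flag EC2 instances whose storage isn't KMS-encrypted.
+| where Environment == 'AWS' and Identifiers.Type == 'ec2.instance'
+// Edit only the condition inside iff(); the output column must be named HealthStatus.
+| extend HealthStatus = iff(Record.Encryption.Type == 'aws:kms', 'HEALTHY', 'UNHEALTHY')
+// Return all the original columns plus the computed health status.
+| project Id, Name, Environment, Identifiers, AdditionalData, Record, HealthStatus
+```
+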
+#### Additional instructions
+
+- You don't need to filter records by timespan. The assessment service filters the most recent records on each run.
+- You don't need to filter by resource ARN, unless intended. The assessment service runs the query on assigned resources.
+- If a specific scope is filtered in the assessment query (for example, a specific account ID), it applies to all resources assigned to this query.
+- Currently, it isn't possible to create one recommendation for multiple environments.
+
+## Next steps
+
+You can use the following links to learn more about Kusto queries:
+
+- [KQL Quick Reference](/azure/data-explorer/kql-quick-reference)
+- [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/)
+- [Must Learn KQL Part 1: Tools and Resources](https://azurecloudai.blog/2021/11/17/must-learn-kql-part-1-tools-and-resources/)
+- [What are security policies, initiatives, and recommendations?](security-policy-concept.md)
defender-for-cloud Data Security Posture Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-posture-enable.md
+
+ Title: Enable data-aware security posture for Azure datastores - Microsoft Defender for Cloud
+description: Learn how to enable data-aware security posture in Defender for Cloud
+ Last updated : 03/14/2023
+# Enable data-aware security posture
+
+This article describes how to enable [data-aware security posture](concept-data-security-posture.md) in Microsoft Defender for Cloud.
+
+## Before you start
+
+- Before you enable data-aware security posture, [review support and prerequisites](concept-data-security-posture-prepare.md).
+- When you enable Defender CSPM or Defender for Storage plans, the sensitive data discovery extension is automatically enabled. You can disable this setting if you don't want to use data-aware security posture, but we recommend that you use the feature to get the most value from Defender for Cloud.
+- Sensitive data is identified based on the data sensitivity settings in Defender for Cloud. You can [customize the data sensitivity settings](data-sensitivity-settings.md) to identify the data that your organization considers sensitive.
+- It takes up to 24 hours to see the results of a first scan after enabling the feature.
+
+## Enable in Defender CSPM (Azure)
+
+Follow these steps to enable data-aware security posture. Don't forget to review [required permissions](concept-data-security-posture-prepare.md#whats-supported) before you start.
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+1. Select the relevant Azure subscription.
+1. For the Defender CSPM plan, select the **On** status.
+
+    If Defender CSPM is already on, select **Settings** in the Monitoring coverage column of the Defender CSPM plan and make sure that the **Sensitive data discovery** component is set to the **On** status.
+
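+If you prefer to script this step, here's a hedged sketch using the same `az security pricing` command family used for other Defender plans. It assumes `CloudPosture` is the pricing name of the Defender CSPM plan; verify the plan name in your environment before relying on it:
+
+```azurecli
+# Select the subscription to configure
+az account set --subscription "<subscriptionId>"
+
+# Enable the Defender CSPM plan (assumed pricing name: CloudPosture)
+az security pricing create -n CloudPosture --tier "standard"
+```
+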
+## Enable in Defender CSPM (AWS)
+
+### Before you start
+
+- Review the [requirements](concept-data-security-posture-prepare.md#scanning-aws-storage) for AWS scanning and the [required permissions](concept-data-security-posture-prepare.md#whats-supported).
+- Check that there's no policy that blocks the connection to your Amazon S3 buckets.
+
+### Enable for AWS resources
+
+1. Enable data-aware security posture as described above.
+1. Follow the instructions to download the CloudFormation template and run it in AWS.
+
+Discovery of S3 buckets in the AWS account then starts automatically. The Defender for Cloud scanner runs in your AWS account and connects to your S3 buckets.
+
+### Check for blocking policies
+
+If the enablement process didn't work because of a blocking policy, check the following:
+
+- Make sure that the S3 bucket policy doesn't block the connection. In the AWS S3 bucket, select the **Permissions** tab > **Bucket policy**. Check the policy details to make sure that the MDC scanner service running in the Microsoft account in AWS isn't blocked.
+- Make sure that there's no SCP policy that blocks the connection to the S3 bucket. For example, your SCP policy might block read API calls to the AWS Region where your S3 bucket is hosted.
+- Check that these required API calls are allowed by your SCP policy: `AssumeRole`, `GetBucketLocation`, `GetObject`, `ListBucket`, `GetBucketPublicAccessBlock` (see the sketch after this list).
+- Check that your SCP policy allows calls to the us-east-1 AWS Region, which is the default region for API calls.
+
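+For reference, here's a minimal sketch of an SCP statement that allows the API calls listed above. It's an illustration only, not a recommended policy; SCP structure, statement names, and scoping vary by organization:
+
+```json
+{
+  "Version": "2012-10-17",
+  "Statement": [
+    {
+      "Sid": "AllowDefenderForCloudScanning",
+      "Effect": "Allow",
+      "Action": [
+        "sts:AssumeRole",
+        "s3:GetBucketLocation",
+        "s3:GetObject",
+        "s3:ListBucket",
+        "s3:GetBucketPublicAccessBlock"
+      ],
+      "Resource": "*"
+    }
+  ]
+}
+```
+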
+## Enable data-aware monitoring in Defender for Storage
+
+Sensitive data threat detection is enabled by default when the sensitive data discovery component is enabled in the Defender for Storage plan. [Learn more](defender-for-storage-data-sensitivity.md)
+
+## Next steps
+
+[Review the security risks in your data](data-security-review-risks.md)
defender-for-cloud Data Security Review Risks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security-review-risks.md
+
+ Title: Explore risks to sensitive data in Microsoft Defender for Cloud
+description: Learn how to use attack paths and security explorer to find and remediate sensitive data risks.
+ Last updated : 03/14/2023
+# Explore risks to sensitive data
+
+After you [discover resources with sensitive data](data-security-posture-enable.md), Microsoft Defender for Cloud lets you explore sensitive data risk for those resources with these features:
+
+- **Attack paths**: When sensitive data discovery is enabled in the Defender Cloud Security Posture Management (CSPM) plan, you can use attack paths to discover risk of data breaches. [Learn more](concept-data-security-posture.md#data-security-in-defender-cspm).
+- **Security Explorer**: When sensitive data discovery is enabled in the Defender CSPM plan, you can use Cloud Security Explorer to find sensitive data insights. [Learn more](concept-data-security-posture.md#data-security-in-defender-cspm).
+- **Security alerts**: When sensitive data discovery is enabled in the Defender for Storage plan, you can prioritize and explore ongoing threats to sensitive data stores by applying sensitivity filters in the Security Alerts settings.
+
+## Explore risks through attack paths
+
+View predefined attack paths to discover data breach risks, and get remediation recommendations, as follows:
+
+1. In Defender for Cloud, open **Recommendations** > **Attack paths**.
+1. In **Risk category filter**, select **Data exposure** or **Sensitive data exposure** to filter the data-related attack paths.
+
+ :::image type="content" source="./media/data-security-review-risks/attack-paths.png" alt-text="Screenshot that shows attack paths for data risk.":::
+
+1. Review the data attack paths.
+1. To view sensitive information detected in data resources, select the resource name > **Insights**. Then, expand the **Contain sensitive data** insight.
+1. For risk mitigation steps, open **Active Recommendations**.
+
+Other examples of attack paths for sensitive data include:
+
+- "Internet exposed Azure Storage container with sensitive data is publicly accessible"
+- "VM has high severity vulnerabilities and read permission to a data store with sensitive data"
+- "Internet exposed AWS S3 Bucket with sensitive data is publicly accessible"
+- "Private AWS S3 bucket that replicates data to the internet is exposed and publicly accessible"
+
+Review the [full list of attack paths](attack-path-reference.md).
+
+## Explore risks with Cloud Security Explorer
+
+Explore data risks and exposure in cloud security graph insights using a query template, or by defining a manual query.
+
+1. In Defender for Cloud, open **Cloud Security Explorer**.
+1. Select a query template, or build your own query. Here's an example:
+
+ :::image type="content" source="./media/data-security-review-risks/query.png" alt-text="Screenshot that shows an Insights data query.":::
+
+## Explore sensitive data security alerts
+
+When sensitive data discovery is enabled in the Defender for Storage plan, you can prioritize and focus on the alerts that affect resources with sensitive data. [Learn more](defender-for-storage-data-sensitivity.md) about monitoring data security alerts in Defender for Storage.
+
+## Next steps
+
+- Learn more about [attack paths](concept-attack-path.md).
+- Learn more about [Cloud Security Explorer](how-to-manage-cloud-security-explorer.md).
defender-for-cloud Data Sensitivity Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-sensitivity-settings.md
+
+ Title: Customize data sensitivity settings in Microsoft Defender for Cloud
+description: Learn how to customize data sensitivity settings in Defender for Cloud
+ Last updated : 03/22/2023
+# Customize data sensitivity settings
+
+This article describes how to customize data sensitivity settings in Microsoft Defender for Cloud.
+
+Data sensitivity settings are used to identify and focus on managing the critical sensitive data in your organization.
+
+- Defender for Cloud uses the sensitive info types and sensitivity labels that come from the Microsoft Purview compliance portal, and lets you select which of them to use. By default, Defender for Cloud uses the [built-in sensitive information types](/microsoft-365/compliance/sensitive-information-type-learn-about) provided by the Microsoft Purview compliance portal. Some of the info types and labels are enabled by default; you can modify them as needed.
+- You can optionally allow the import of custom sensitive info types and [sensitivity labels](/microsoft-365/compliance/sensitivity-labels) that you've defined in Microsoft Purview.
+- If you import labels, you can set a sensitivity threshold that determines the minimum sensitivity level for a label to be marked as sensitive in Defender for Cloud.
+
+This configuration helps you focus on your critical sensitive resources and improve the accuracy of the sensitivity insights.
+
+## Before you start
+
+You need one of these roles to sign in and edit sensitivity settings: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, or Security Operator.
+
+- [Review the prerequisites](concept-data-security-posture-prepare.md#configuring-data-sensitivity-settings) for customizing data sensitivity settings.
+- In Defender for Cloud, enable sensitive data discovery capabilities in the [Defender CSPM](data-security-posture-enable.md) and/or [Defender for Storage](defender-for-storage-data-sensitivity.md) plans.
+
+Changes in sensitivity settings take effect the next time that resources are scanned.
+
+## Import custom sensitive info types/labels from Microsoft Purview compliance portal
+
+Defender for Cloud uses built-in sensitive info types. You can optionally import your own custom sensitive info types and labels from Microsoft Purview compliance portal to align with your organization's needs.
+
+Import as follows (you only need to import once):
+
+1. Sign in to the Microsoft Purview compliance portal.
+1. Navigate to Information Protection > [Labels](https://compliance.microsoft.com/informationprotection/labels).
+1. In the consent notice message, select **Turn on** and then select **Yes** to share your custom info types and sensitivity labels with Defender for Cloud.
+
+> [!NOTE]
+> - Imported labels appear in Defender for Cloud in the priority order set in Microsoft Purview.
+> - The two sensitivity labels with the highest priority in Microsoft Purview are turned on by default in Defender for Cloud.
+
+## Customize sensitive data categories/types
+
+To customize data sensitivity settings that appear in Defender for Cloud, review the [prerequisites](concept-data-security-posture-prepare.md#configuring-data-sensitivity-settings), and then follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+1. Select **Data sensitivity**.
+1. Select the info type category that you want to customize:
+    - The **Finance**, **PII**, and **Credentials** categories contain the built-in info types that attackers typically seek out.
+    - The **Custom** category contains custom info types from your Microsoft Purview compliance portal configuration.
+    - The **Other** category contains the rest of the available built-in info types.
+1. Select the info types that you want to be marked as sensitive.
+1. Select **Apply** and **Save**.
+
+ :::image type="content" source="./media/concept-data-security-posture/data-sensitivity.png" alt-text="Screenshot of the data sensitivity page, showing the sensitivity settings.":::
+
+## Set the threshold for sensitive data labels
+
+You can set a threshold to determine the minimum sensitivity level for a label to be marked as sensitive in Defender for Cloud.
+
+If you're using Microsoft Purview sensitivity labels, make sure that:
+
+- The label scope is set to **Items**, and [auto labeling for files and emails](/microsoft-365/compliance/apply-sensitivity-label-automatically#how-to-configure-auto-labeling-for-office-apps) is configured.
+- The labels are [published](/microsoft-365/compliance/create-sensitivity-labels#publish-sensitivity-labels-by-creating-a-label-policy) with a label policy that's in effect.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
+1. Select **Data sensitivity**.
+ The current minimum sensitivity threshold is shown.
+1. Select **Change** to see the list of sensitivity labels and select the lowest sensitivity label that you want marked as sensitive.
+1. Select **Apply** and **Save**.
+
+ :::image type="content" source="./media/concept-data-security-posture/sensitivity-threshold.png" alt-text="Screenshot of the data sensitivity page, showing the sensitivity label threshold.":::
+
+> [!NOTE]
+> - When you turn on the threshold, you select the label with the lowest setting that should be considered sensitive in your organization.
+> - Any resources with this minimum label or higher are presumed to contain sensitive data.
+> - For example, if you select **Confidential** as the minimum, then **Highly Confidential** is also considered sensitive. **General**, **Public**, and **Non-Business** aren't.
+> - You can't select a sublabel in the threshold. However, you can see the sublabel as the affected label on resources in attack path/Cloud Security Explorer, if the parent label is part of the threshold (part of the selected sensitive labels).
+
+## Next steps
+
+[Review risks](data-security-review-risks.md) to sensitive data.
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Defender for Cloud includes Foundational CSPM (Free) capabilities for free. You
| [Multicloud coverage](plan-multicloud-security-get-started.md) | Connect to your multicloud environments with agentless methods for CSPM insight and CWP protection. | Connect your [Amazon AWS](quickstart-onboard-aws.md) and [Google GCP](quickstart-onboard-gcp.md) cloud resources to Defender for Cloud | Foundational CSPM (Free) | | [Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) | Use the dashboard to see weaknesses in your security posture. | [Enable CSPM tools](enable-enhanced-security.md) | Foundational CSPM (Free) | | [Advanced Cloud Security Posture Management](concept-cloud-security-posture-management.md) | Get advanced tools to identify weaknesses in your security posture, including:</br>- Governance to drive actions to improve your security posture</br>- Regulatory compliance to verify compliance with security standards</br>- Cloud security explorer to build a comprehensive view of your environment | [Enable CSPM tools](enable-enhanced-security.md) | Defender CSPM |
+| [Data-aware Security Posture](concept-data-security-posture.md) | Data-aware security posture automatically discovers datastores containing sensitive data, and helps reduce risk of data breaches. | [Enable data-aware security posture](data-security-posture-enable.md) | Defender CSPM or Defender for Storage |
| [Attack path analysis](concept-attack-path.md#what-is-attack-path-analysis) | Model traffic on your network to identify potential risks before you implement changes to your environment. | [Build queries to analyze paths](how-to-manage-attack-path.md) | Defender CSPM | | [Cloud Security Explorer](concept-attack-path.md#what-is-cloud-security-explorer) | A map of your cloud environment that lets you build queries to find security risks. | [Build queries to find security risks](how-to-manage-cloud-security-explorer.md) | Defender CSPM | | [Security governance](governance-rules.md#building-an-automated-process-for-improving-security-with-governance-rules) | Drive security improvements through your organization by assigning tasks to resource owners and tracking progress in aligning your security state with your security policy. | [Define governance rules](governance-rules.md#defining-governance-rules-to-automatically-set-the-owner-and-due-date-of-recommendations) | Defender CSPM |
defender-for-cloud Defender For Storage Classic Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-enable.md
+
+ Title: Enable and configure Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud
+description: Learn about how to enable and configure Microsoft Defender for Storage (classic).
Last updated : 03/16/2023
+# Enable Microsoft Defender for Storage (classic)
+
+> [!NOTE]
+> Upgrade to the new [Microsoft Defender for Storage plan](defender-for-storage-introduction.md) and use advanced security capabilities, including Malware Scanning and sensitive data threat detection. Benefit from a more predictable and granular pricing structure that charges per storage account, with additional costs for high-volume transactions. This new pricing plan also encompasses all new security features and detections.
+> If you're using Defender for Storage (classic) with per-transaction or per-storage account pricing, you'll need to migrate to the new Defender for Storage plan to access these features and pricing. Learn about [migrating to the new Defender for Storage plan](defender-for-storage-classic-migrate.md).
+
+**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+
+Microsoft Defender for Storage continuously analyzes the transactions of [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), and [Azure Files](https://azure.microsoft.com/services/storage/files/) services. When potentially malicious activities are detected, security alerts are generated. Alerts are shown in Microsoft Defender for Cloud with the details of the suspicious activity, appropriate investigation steps, remediation actions, and security recommendations.
+
+Analyzed telemetry of Azure Blob Storage includes operation types such as Get Blob, Put Blob, Get Container ACL, List Blobs, and Get Blob Properties. Examples of analyzed Azure Files operation types include Get File, Create File, List Files, Get File Properties, and Put Range.
+
+Defender for Storage (classic) doesn't access the Storage account data and has no impact on its performance.
+
+Learn more about the [benefits, features, and limitations of Defender for Storage](defender-for-storage-introduction.md). You can also learn more about Defender for Storage in the [Defender for Storage episode](episode-thirteen.md) of the Defender for Cloud in the Field video series.
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|General availability (GA)|
+|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) and in the [Defender plans page](https://portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/pricingTier) in the Azure portal |
+|Protected storage types:|[Blob Storage](../storage/blobs/storage-blobs-introduction.md) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
+|Clouds:|:::image type="icon" source="media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="media/icons/yes-icon.png"::: Azure Government (Only for per-transaction plan)<br>:::image type="icon" source="media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="media/icons/no-icon.png"::: Connected AWS accounts|
+
+## Set up Microsoft Defender for Storage (classic)
+
+### Set up per-transaction pricing for a subscription
+
+For the Defender for Storage per-transaction pricing, we recommend that you enable Defender for Storage for each subscription so that all existing and new storage accounts are protected. If you want to only protect specific accounts, [configure Defender for Storage for each account](#set-up-per-transaction-pricing-for-a-storage-account).
+
+You can configure Microsoft Defender for Storage on your subscriptions in several ways:
+
+- [Terraform template](#terraform-template)
+- [Bicep template](#bicep-template)
+- [ARM template](#arm-template)
+- [PowerShell](#powershell)
+- [Azure CLI](#azure-cli)
+- [REST API](#rest-api)
+
+#### Terraform template
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using a Terraform template, add this code snippet to your template with your subscription ID as the `parent_id` value:
+
+```terraform
+resource "azapi_resource" "symbolicname" {
+ type = "Microsoft.Security/pricings@2022-03-01"
+ name = "StorageAccounts"
+ parent_id = "<subscriptionId>"
+ body = jsonencode({
+ properties = {
+ pricingTier = "Standard"
+ subPlan = "PerTransaction"
+ }
+ })
+}
+```
+
+To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` property.
+
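+For reference, here's a minimal sketch of the same resource with the plan disabled, following that guidance:
+
+```terraform
+resource "azapi_resource" "symbolicname" {
+  type      = "Microsoft.Security/pricings@2022-03-01"
+  name      = "StorageAccounts"
+  parent_id = "<subscriptionId>"
+  body = jsonencode({
+    properties = {
+      # subPlan is removed and the tier is set to Free to disable the plan
+      pricingTier = "Free"
+    }
+  })
+}
+```
+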
+Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
+
+#### Bicep template
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using [Bicep](../azure-resource-manager/bicep/overview.md), add the following to your Bicep template:
+
+```bicep
+resource symbolicname 'Microsoft.Security/pricings@2022-03-01' = {
+ name: 'StorageAccounts'
+ properties: {
+ pricingTier: 'Standard'
+ subPlan: 'PerTransaction'
+ }
+}
+```
+
+To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` property.
+
+Learn more about the [Bicep template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-bicep&source=docs).
+
+#### ARM template
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using an ARM template, add this JSON snippet to the resources section of your ARM template:
+
+```json
+{
+ "type": "Microsoft.Security/pricings",
+ "apiVersion": "2022-03-01",
+ "name": "StorageAccounts",
+ "properties": {
+ "pricingTier": "Standard",
+ "subPlan": "PerTransaction"
+ }
+}
+```
+
+To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` property.
+
+Learn more about the [ARM template AzAPI reference](/azure/templates/microsoft.security/pricings?pivots=deployment-language-arm-template).
+
+#### PowerShell
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using PowerShell:
+
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
+1. Use these commands to register your subscription to the Microsoft Defender for Cloud Resource Provider:
+
+ ```powershell
+ Set-AzContext -Subscription <subscriptionId>
+ Register-AzResourceProvider -ProviderNamespace 'Microsoft.Security'
+ ```
+
+ Replace `<subscriptionId>` with your subscription ID.
+
+1. Enable Microsoft Defender for Storage for your subscription with the `Set-AzSecurityPricing` cmdlet:
+
+ ```powershell
+ Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard"
+ ```
+
+> [!TIP]
+> You can use the [`Get-AzSecurityPricing`](/powershell/module/az.security/get-azsecuritypricing) cmdlet to see all of the Defender for Cloud plans that are enabled for the subscription.
+
+To disable the plan, set the `-PricingTier` parameter value to `Free`.
+
+Learn more about [using PowerShell with Microsoft Defender for Cloud](powershell-onboarding.md).
+
+#### Azure CLI
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using Azure CLI:
+
+1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli).
+1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Use this command to set the active subscription:
+
+    ```azurecli
+    az account set --subscription "<subscriptionId or name>"
+    ```
+
+    Replace `<subscriptionId or name>` with your subscription ID or name.
+
+1. Enable Microsoft Defender for Storage for your subscription with the `az security pricing create` command:
+
+ ```azurecli
+ az security pricing create -n StorageAccounts --tier "standard"
+ ```
+
+> [!TIP]
+> You can use the [`az security pricing show`](/cli/azure/security/pricing#az-security-pricing-show) command to see all of the Defender for Cloud plans that are enabled for the subscription.
+
+To disable the plan, set the `--tier` parameter value to `free`.
+
+Learn more about the [`az security pricing create`](/cli/azure/security/pricing#az-security-pricing-create) command.
+
+#### REST API
+
+To enable Microsoft Defender for Storage at the subscription level with per-transaction pricing using the Microsoft Defender for Cloud REST API, create a PUT request with this endpoint and body:
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01
+
+{
+  "properties": {
+    "pricingTier": "Standard",
+    "subPlan": "PerTransaction"
+  }
+}
+```
+
+Replace `{subscriptionId}` with your subscription ID.
+
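+If you'd rather issue the request from a shell, here's a hedged sketch using the Azure CLI's generic `az rest` command (assuming you're already signed in with sufficient permissions):
+
+```azurecli
+az rest --method put \
+  --url "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2022-03-01" \
+  --body '{"properties": {"pricingTier": "Standard", "subPlan": "PerTransaction"}}'
+```
+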
+To disable the plan, set the `pricingTier` property value to `Free` and remove the `subPlan` property.
+
+Learn more about [updating Defender plans with the REST API](/rest/api/defenderforcloud/pricings/update) in HTTP, Java, Go, and JavaScript.
+
+### Set up per-transaction pricing for a storage account
+
+You can configure Microsoft Defender for Storage with per-transaction pricing on your accounts in several ways:
+
+- [ARM template](#arm-template-1)
+- [PowerShell](#powershell-1)
+- [Azure CLI](#azure-cli-1)
+
+#### ARM template
+
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using an ARM template, use [the prepared Azure template](https://azure.microsoft.com/resources/templates/storage-advanced-threat-protection-create/).
+
+If you want to disable Defender for Storage on the account:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to your storage account.
+1. In the Security + networking section of the Storage account menu, select **Microsoft Defender for Cloud**.
+1. Select **Disable**.
+
+#### PowerShell
+
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using PowerShell:
+
+1. If you don't have it already, [install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
+1. Use the `Connect-AzAccount` cmdlet to sign in to your Azure account. Learn more about [signing in to Azure with Azure PowerShell](/powershell/azure/authenticate-azureps).
+1. Enable Microsoft Defender for Storage for the desired storage account with the [`Enable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection) cmdlet:
+
+ ```powershell
+ Enable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/"
+ ```
+
+    Replace `<subscription-id>`, `<resource-group>`, and `<storage-account>` with the values for your environment.
+
+If you want to disable per-transaction pricing for a specific storage account, use the [`Disable-AzSecurityAdvancedThreatProtection`](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection) cmdlet:
+
+```powershell
+Disable-AzSecurityAdvancedThreatProtection -ResourceId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/"
+```
+
+Learn more about [using PowerShell with Microsoft Defender for Cloud](powershell-onboarding.md).
+
+#### Azure CLI
+
+To enable Microsoft Defender for Storage for a specific storage account with per-transaction pricing using Azure CLI:
+
+1. If you don't have it already, [install the Azure CLI](/cli/azure/install-azure-cli).
+1. Use the `az login` command to sign in to your Azure account. Learn more about [signing in to Azure with Azure CLI](/cli/azure/authenticate-azure-cli).
+1. Enable Microsoft Defender for Storage for the desired storage account with the [`az security atp storage update`](/cli/azure/security/atp/storage) command:
+
+ ```azurecli
+ az security atp storage update \
+ --resource-group <resource-group> \
+ --storage-account <storage-account> \
+ --is-enabled true
+ ```
+
+> [!TIP]
+> You can use the [`az security atp storage show`](/cli/azure/security/atp/storage) command to see if Defender for Storage is enabled on an account.
+
+To disable Microsoft Defender for Storage for the storage account, use the [`az security atp storage update`](/cli/azure/security/atp/storage) command:
+
+```azurecli
+az security atp storage update \
+--resource-group <resource-group> \
+--storage-account <storage-account> \
+--is-enabled false
+```
+
+Learn more about the [az security atp storage](/cli/azure/security/atp/storage#az-security-atp-storage-update) command.
+
+## Exclude a storage account from a protected subscription in the per-transaction plan
+
+> [!NOTE]
+> Consider upgrading to the new Defender for Storage plan if you have storage accounts you would like to exclude from the Defender for Storage classic plan. Not only will you save on costs for transaction-heavy accounts, but you'll also gain access to enhanced security features. Learn more about the [benefits of migrating to the new plan](defender-for-storage-introduction.md).
+>
+> Storage accounts that are excluded in Defender for Storage (classic) aren't automatically excluded when you migrate to the new plan.
+
+When you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md) on a subscription for the per-transaction pricing, all current and future Azure Storage accounts in that subscription are protected. You can exclude specific storage accounts from the Defender for Storage protections using the Azure portal, PowerShell, or the Azure CLI.
+
+We recommend that you enable Defender for Storage on the entire subscription to protect all existing and future storage accounts in it. However, in some cases you might want to exclude specific storage accounts from Defender protection.
+
+Exclusion of storage accounts from protected subscriptions requires you to:
+
+1. Add a tag to block inheriting the subscription enablement.
+1. Disable Defender for Storage (classic).
+
+### Exclude an Azure Storage account protection on a subscription with per-transaction pricing
+
+To exclude an Azure Storage account from Microsoft Defender for Storage (classic), you can use:
+
+- [PowerShell](#use-powershell-to-exclude-an-azure-storage-account)
+- [Azure CLI](#use-azure-cli-to-exclude-an-azure-storage-account)
+
+#### Use PowerShell to exclude an Azure Storage account
+
+1. If you don't have the Azure Az PowerShell module installed, install it using [the instructions from the Azure PowerShell documentation](/powershell/azure/install-az-ps).
+
+1. Using an authenticated account, connect to Azure with the ``Connect-AzAccount`` cmdlet, as explained in [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+
+1. Define the `AzDefenderPlanAutoEnable` tag on the storage account with the ``Update-AzTag`` cmdlet (replace `<resourceID>` with the resource ID of the relevant storage account):
+
+ ```azurepowershell
+ Update-AzTag -ResourceId <resourceID> -Tag @{"AzDefenderPlanAutoEnable" = "off"} -Operation Merge
+ ```
+
+ If you skip this stage, your untagged resources continue receiving daily updates from the subscription level enablement policy. That policy enables Defender for Storage again on the account.
+
+ > [!TIP]
+ > Learn more about tags in [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
+
+1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the ``Disable-AzSecurityAdvancedThreatProtection`` cmdlet (using the same resource ID):
+
+ ```azurepowershell
+ Disable-AzSecurityAdvancedThreatProtection -ResourceId <resourceId>
+ ```
+
+ [Learn more about this cmdlet](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection).
+
+#### Use Azure CLI to exclude an Azure Storage account
+
+1. If you don't have Azure CLI installed, install it using [the instructions from the Azure CLI documentation](/cli/azure/install-azure-cli).
+
+1. Using an authenticated account, connect to Azure with the ``login`` command as explained in [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli) and enter your account credentials when prompted:
+
+ ```azurecli
+ az login
+ ```
+
+1. Define the `AzDefenderPlanAutoEnable` tag on the storage account with the ``tag update`` command (replace `MyResourceId` with the resource ID of the relevant storage account):
+
+ ```azurecli
+ az tag update --resource-id MyResourceId --operation merge --tags AzDefenderPlanAutoEnable=off
+ ```
+
+ If you skip this stage, your untagged resources continue receiving daily updates from the subscription level enablement policy. That policy enables Defender for Storage again on the account.
+
+ > [!TIP]
+ > Learn more about tags in [az tag](/cli/azure/tag).
+
+1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the `az security atp storage update` command (using the same resource ID):
+
+ ```azurecli
+ az security atp storage update --resource-group MyResourceGroup --storage-account MyStorageAccount --is-enabled false
+ ```
+
+ [Learn more about this command](/cli/azure/security/atp/storage).
+
+### Exclude an Azure Databricks Storage account
+
+#### Exclude an active Databricks workspace
+
+Microsoft Defender for Storage can exclude specific active Databricks workspace storage accounts when the plan is already enabled on a subscription.
+
+**To exclude an active Databricks workspace**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Azure Databricks** > **`Your Databricks workspace`** > **Tags**.
+1. In the Name field, enter `AzDefenderPlanAutoEnable`.
+1. In the Value field, enter `off`.
+1. Select **Apply**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/workspace-exclude.png" alt-text="Screenshot showing the location, and how to apply the tag to your Azure Databricks account.":::
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **`Your subscription`**.
+1. Toggle the Defender for Storage plan to **Off**.
+
+ :::image type="content" source="media/defender-for-storage-exclude/storage-off.png" alt-text="Screenshot showing how to switch the Defender for Storage plan to off.":::
+
+1. Select **Save**.
+1. Re-enable Defender for Storage (classic) using one of the supported methods (you can't enable Defender for Storage (classic) from the Azure portal).
+
+The tags are inherited by the Storage account of the Databricks workspace and prevent Defender for Storage from turning on.
+
+> [!NOTE]
+> Tags can't be added directly to the Databricks Storage account, or its Managed Resource Group.
+
+#### Prevent autoenabling on a new Databricks workspace storage account
+
+When you create a new Databricks workspace, you can add a tag that prevents Defender for Storage from being enabled automatically on the workspace's storage account.
+
+**To prevent auto-enabling on a new Databricks workspace storage account**:
+
+1. Follow [these steps](/azure/databricks/scenarios/quickstart-create-Databricks-workspace-portal?tabs=azure-portal) to create a new Azure Databricks workspace.
+
+1. In the Tags tab, enter a tag named `AzDefenderPlanAutoEnable`.
+
+1. Enter the value `off`.
+
+ :::image type="content" source="media/defender-for-storage-exclude/tag-off.png" alt-text="Screenshot that shows how to create a tag in the Databricks workspace.":::
+
+1. Continue following the instructions to create your new Azure Databricks workspace.
+
+The storage account of the Databricks workspace inherits the tag, which prevents Defender for Storage from turning on automatically.
+
+## FAQ - Microsoft Defender for Storage pricing
+
+### Can I switch from an existing per-transaction pricing to per-storage account pricing?
+
+Yes, you can migrate to per-storage account pricing in the Azure portal or using any of the other supported enablement methods. To migrate to per-storage account pricing, [enable per-storage account pricing at the subscription level](#set-up-microsoft-defender-for-storage-classic).
+
+### Can I return to per-transaction pricing after switching to per-storage account pricing?
+
+Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage-classic) to migrate back from per-storage account pricing using all enablement methods except for the Azure portal.
+
+### Will you continue supporting per-transaction pricing?
+
+Yes, you can [enable per-transaction pricing](#set-up-microsoft-defender-for-storage-classic) from all the enablement methods, except for the Azure portal.
+
+### Can I exclude specific storage accounts from protections in per-storage account pricing?
+
+No, you can only enable per-storage account pricing for each subscription. All storage accounts in the subscription are protected.
+
+### How long does it take for per-storage account pricing to be enabled?
+
+When you enable Microsoft Defender for Storage at the subscription level for per-storage account or per-transaction pricing, it takes up to 24 hours for the plan to be enabled.
+
+### Is there any difference in the feature set of per-storage account pricing compared to the legacy per-transaction pricing?
+
+No. Both per-storage account and per-transaction pricing include the same features. The only difference is the pricing.
+
+### How can I estimate the cost for each pricing?
+
+To estimate the cost according to each pricing for your environment, we created a [pricing estimation workbook](https://aka.ms/dfstoragecosttool) and a PowerShell script that you can run in your environment.
+
+## Next steps
+
+- Check out the [alerts for Azure Storage](alerts-reference.md#alerts-azurestorage)
+- Learn about the [features and benefits of Defender for Storage](defender-for-storage-introduction.md)
defender-for-cloud Defender For Storage Classic Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md
+
+ Title: Migrate from Defender for Storage (classic) - Microsoft Defender for Cloud
+description: Learn about how to migrate from Defender for Storage (classic) to the new Defender for Storage plan to take advantage of its enhanced capabilities and pricing.
Last updated : 03/16/2023
+# Migrate from Defender for Storage (classic) to the new plan
+
+The new Defender for Storage plan was launched on March 28, 2023. If you're currently using Microsoft Defender for Storage (classic) with the per-transaction or the per-storage account pricing plan, consider upgrading to the new Defender for Storage plan, which offers several new benefits that aren't included in the classic plan. The new plan includes advanced security capabilities to help protect against malicious file uploads, sensitive data exfiltration, and data corruption. It also provides a more predictable and flexible pricing structure for better control over coverage and costs.
+
+## Why move to the new plan?
+
+The new plan includes more advanced capabilities that can help improve the security of your data and help prevent malicious file uploads, sensitive data exfiltration, and data corruption:
+
+### Malware Scanning
+
+Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file.
+The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
+Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
+
+### Sensitive data threat detection
+
+The 'sensitive data threat detection' capability enables security teams to efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk, leading to better detection and preventing data breaches.
+'Sensitive data threat detection' is powered by the "Sensitive Data Discovery" engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
+The service integrates with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings.
+Sensitive data threat detection is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it at no extra cost.
+
+Learn more about [sensitive data threat detection](defender-for-storage-data-sensitivity.md).
+
+### Detection of entities without identities
+
+This expansion of the security alerts suite helps identify suspicious activities generated by entities without identities, such as those using misconfigured or overly permissive Shared Access Signatures (SAS tokens) that may have leaked or been compromised. By detecting and addressing these issues, you can improve the security of your storage accounts and reduce the risk of unauthorized access.
+
+The new plan also includes a pricing plan that charges based on the number of storage accounts you protect, which simplifies cost calculations and allows for easy scaling as your needs change. You can enable it at the subscription or resource level and can also exclude specific storage accounts from protected subscriptions, providing more granular control over your security coverage. Extra charges may apply to storage accounts with high-volume transactions that exceed a high monthly threshold.
+
+## Deprecation of Defender for Storage (classic)
+
+The classic plan will be deprecated in the future, and the deprecation will be announced three years in advance. All future capabilities will only be added to the new plan.
+
+> [!NOTE]
+> If you already have the legacy Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to proactively migrate to the new plan. You can migrate to the new plan with one click through the Azure portal, or use Azure Policy and IaC tools.
+
+## Migration scenarios
+
+Migrating from the classic Defender for Storage plan to the new Defender for Storage plan is a straightforward process, and there are several ways to do it. You'll need to proactively enable the new plan to access its enhanced capabilities and pricing.
+
+>[!NOTE]
+> To enable the new plan, make sure to disable the old Defender for Storage policies. Look for and disable policies named "Configure Azure Defender for Storage to be enabled", "Azure Defender for Storage should be enabled", or "Configure Microsoft Defender for Storage to be enabled (per-storage account plan)".
+
+### Migrating from the classic Defender for Storage plan enabled with per-transaction pricing
+
+If the classic Defender for Storage plan is enabled with per-transaction pricing, you can switch to the new plan at either the subscription or resource level. You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md) from protected subscriptions.
+
+Storage accounts that were previously excluded from protected subscriptions in the per-transaction plan will not remain excluded when you switch to the new plan. However, the exclusion tags will remain on the resource and can be removed. In most cases, storage accounts that were previously excluded from protected subscriptions will benefit the most from the new pricing plan.
+
+### Migrating from the classic Defender for Storage plan enabled with per-storage account pricing
+
+If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The pricing plan remains the same in the new Defender for Storage, except for extra charges for malware scanning, which are charged per GB scanned (free during preview).
+
+You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md) from protected subscriptions.
+
+## Identify active Microsoft Defender for Storage pricing plans on your subscriptions
+
+If you're looking to quickly identify which pricing plans are active on your subscriptions, use this [Coverage workbook](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Security%20Center/ConfigurationId/community-Workbooks%2FAzure%20Security%20Center%2FCoverage/Type/workbook/WorkbookTemplateName/Coverage), which is based on data from [Azure Resource Graph (ARG) Explorer](https://portal.azure.com/#view/HubsExtension/ArgQueryBlade) (the **securityresources** table). This tool lets you easily analyze your enablement status.
+
+>[!NOTE]
+>The Coverage workbook and ARG Explorer query only provide enablement status when Defender for Storage is enabled at the subscription level. For storage accounts with Defender for Storage enabled at the resource level, the enablement status can be found within the Defender for Cloud blade of the storage accounts in the Azure portal. Additionally, the enablement status can be queried with a PowerShell script.
+
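+If you prefer to query Azure Resource Graph directly, here's a minimal sketch of a query over the **securityresources** table. The projected property names are assumptions based on the shape of the pricing resource; adjust them if your results differ:
+
+```kusto
+securityresources
+// Pricing (plan) resources expose the enablement state per subscription
+| where type == "microsoft.security/pricings"
+| where name == "StorageAccounts"
+| project subscriptionId, pricingTier = properties.pricingTier, subPlan = properties.subPlan
+```
+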
+## Plan comparison
+
+To help you better understand the differences between the classic plan and the new plan, here's a comparison table:
+
+| Category | New Defender for Storage plan | Classic (per-transaction plan) | Classic (per-storage account plan) |
+| | | | |
+| Pricing structure | Cost is based on the number of storage accounts you protect\*. Add-on costs for GB scanned for malware, if enabled (free during preview) | Cost is based on the number of transactions processed | Cost is based on the number of storage accounts you protect\* |
+| Enablement options | Subscription and resource level | Subscription and resource level | Subscription only |
+| Exclusion of storage accounts from protected subscriptions | Yes | Yes | No |
+| Activity monitoring (security alerts) | Yes | Yes | Yes |
+| Malware scanning in uploaded Blobs | Yes (add-on) | No (only hash-reputation analysis) | No (only hash-reputation analysis) |
+| Sensitive data threat detection | Yes (add-on) | No | No |
+| Detection of leaked/compromised SAS tokens (entities without identities) | Yes | No | No |
+
+\* Extra charges may apply to storage accounts with high-volume transactions.
+
+The new plan offers a more comprehensive feature set designed to better protect your data. It also provides a more predictable pricing plan compared to the classic plan. We recommend you migrate to the new plan to take full advantage of its benefits.
+
+Learn more about how to [enable and configure Defender for Storage](../storage/common/azure-defender-storage-configure.md).
+
+## Next steps
+
+In this article, you learned about migrating from Defender for Storage (classic) to the new Microsoft Defender for Storage plan.
+
+> [!div class="nextstepaction"]
+> [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic.md
+
+ Title: Microsoft Defender for Storage (classic) - Microsoft Defender for Cloud
+description: Learn about the benefits and features of Microsoft Defender for Storage (classic).
Last updated : 03/16/2023
+# Overview of Microsoft Defender for Storage (classic)
+
+> [!NOTE]
+> Upgrade to the new [Microsoft Defender for Storage plan](defender-for-storage-introduction.md). It includes new features like Malware Scanning and Sensitive Data Threat Detection. This plan also provides a more predictable pricing structure for better control over coverage and costs. Additionally, all new Defender for Storage features will only be released in the new plan.
+> Migrating to the new plan is a simple process; read about [how to migrate from the classic plan](defender-for-storage-classic-migrate.md).
+> If you're using Defender for Storage (classic) with per-transaction or per-storage account pricing, you'll need to migrate to the new Defender for Storage plan to access these features and pricing. Learn about the benefits of [migrating to the new Defender for Storage plan](defender-for-storage-classic-migrate.md).
+
+**Microsoft Defender for Storage (classic)** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+
+You can [enable Microsoft Defender for Storage (classic)](../storage/common/azure-defender-storage-configure.md) at either the subscription level (recommended) or the resource level.
+
+Defender for Storage (classic) continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Files](https://azure.microsoft.com/products/storage/files/), and [Azure Data Lake Storage](https://azure.microsoft.com/products/storage/data-lake-storage) services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity, the relevant investigation steps, remediation actions, and security recommendations.
+
+Analyzed telemetry of Azure Blob Storage includes operation types such as `Get Blob`, `Put Blob`, `Get Container ACL`, `List Blobs`, and `Get Blob Properties`. Examples of analyzed Azure Files operation types include `Get File`, `Create File`, `List Files`, `Get File Properties`, and `Put Range`.
+
+Defender for Storage (classic) doesn't access the Storage account data and has no impact on its performance.
+
+You can learn more by watching this video from the Defender for Cloud in the Field video series:
+- [Defender for Storage (classic) in the field](episode-thirteen.md)
+
+For more clarification about Defender for Storage (classic), see the [commonly asked questions](#common-questionsmicrosoft-defender-for-storage-classic).
+
+## Availability
+
+|Aspect|Details|
+|-|:-|
+|Release state:|General availability (GA)|
+|Pricing:|**Microsoft Defender for Storage (classic)** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
+|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
+|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
+
+## What are the benefits of Microsoft Defender for Storage (classic)?
+
+Defender for Storage (classic) provides:
+
+- **Azure-native security** - With 1-click enablement, Defender for Storage (classic) protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage (classic) provides centralized security across all data assets that are managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.
+
+- **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage (classic) cover the top storage threats such as unauthenticated access, compromised credentials, social engineering attacks, data exfiltration, privilege abuse, and malicious content.
+
+- **Response at scale** - Defender for Cloud's automation tools make it easier to prevent and respond to identified threats. Learn more in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
+
+## Security threats in cloud-based storage services
+
+Microsoft security researchers have analyzed the attack surface of storage services. Storage accounts can be subject to data corruption, exposure of sensitive content, malicious content distribution, data exfiltration, unauthorized access, and more.
+
+The potential security risks are described in the [threat matrix for cloud-based storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/) and are based on the [MITRE ATT&CK® framework](https://attack.mitre.org/techniques/enterprise/), a knowledge base for the tactics and techniques employed in cyberattacks.
+
+## What kind of alerts does Microsoft Defender for Storage (classic) provide?
+
+Security alerts are triggered for the following scenarios (typically 1-2 hours after the event):
+
+|Type of threat | Description |
+|||
+|**Unusual access to an account** | For example, access from a TOR exit node, suspicious IP addresses, unusual applications, unusual locations, and anonymous access without authentication. |
+|**Unusual behavior in an account** | Behavior that deviates from a learned baseline, such as a change of access permissions in an account, unusual access inspection, unusual data exploration, unusual deletion of blobs/files, or unusual data extraction. |
+|**Hash reputation-based malware detection** | Detection of known malware based on the full blob/file hash. This can help detect ransomware, viruses, spyware, and other malware uploaded to an account, prevent it from entering the organization, and stop it from spreading to more users and resources. See also [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis). |
+|**Unusual file uploads** | Unusual cloud service packages and executable files that have been uploaded to an account. |
+| **Public visibility** | Potential break-in attempts by scanning containers and pulling potentially sensitive data from publicly accessible containers. |
+| **Phishing campaigns** | When content that's hosted on Azure Storage is identified as part of a phishing attack that's impacting Microsoft 365 users. |
+
+> [!TIP]
+> For a comprehensive list of all Defender for Storage (classic) alerts, see the [alerts reference page](alerts-reference.md#alerts-azurestorage). It is essential to review the prerequisites, as certain security alerts are only accessible under the new Defender for Storage plan. The information in the reference page is beneficial for workload owners seeking to understand detectable threats and enables Security Operations Center (SOC) teams to familiarize themselves with detections prior to conducting investigations. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
+
+Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel, any third-party SIEM, or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
+
+## Limitations of hash reputation analysis
+
+> [!TIP]
+> If you're looking to have your uploaded blobs scanned for malware in near real-time, we recommend that you upgrade to the new Defender for Storage plan. Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
+
+- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage (classic) uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather, they analyze the telemetry generated from the Blob Storage and Files services. Defender for Storage (classic) then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
+
+- **Hash reputation analysis isn't supported for all file protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file shares and blobs created with [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+
+## Common questions - Microsoft Defender for Storage (classic)
+
+- [Are there differences in features between the new Defender for Storage plan and the legacy Defender for Storage Classic plan?](#are-there-differences-in-features-between-the-new-defender-for-storage-plan-and-the-legacy-defender-for-storage-classic-plan)
+- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
+- [Can I exclude a specific Azure storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
+- [Can I switch from the per-transaction pricing in Defender for Storage (classic) to the new Defender for Storage plan?](#can-i-switch-from-the-per-transaction-pricing-in-defender-for-storage-classic-to-the-new-defender-for-storage-plan)
+- [Can I exclude specific storage accounts from protection in the new Defender for Storage plan?](#can-i-exclude-specific-storage-accounts-from-protection-in-the-new-defender-for-storage-plan)
+
+### Are there differences in features between the new Defender for Storage plan and the legacy Defender for Storage Classic plan?
+
+Yes. The new Defender for Storage plan offers additional security capabilities, such as near real-time malware scanning and sensitive data threat detection. This plan also provides a more predictable pricing structure for better control over coverage and costs. Learn more about the [benefits of migrating to the new plan](defender-for-storage-classic-migrate.md).
+
+### How do I estimate charges at the account level?
+
+To get an estimate of Defender for Storage (classic) costs, use the [Price Estimation Workbook](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Security%20Center/ConfigurationId/community-Workbooks%2FAzure%20Security%20Center%2FPrice%20Estimation/Type/workbook/WorkbookTemplateName/Price%20Estimation) in the Azure portal.
+
+### Can I exclude a specific Azure Storage account from a protected subscription?
+
+Yes, you can [exclude specific storage accounts](defender-for-storage-classic-enable.md#exclude-a-storage-account-from-a-protected-subscription-in-the-per-transaction-plan) from protected subscriptions in Defender for Storage (classic).
+
+### Can I switch from the per-transaction pricing in Defender for Storage (classic) to the new Defender for Storage plan?
+
+Yes, you can move to the new Defender for Storage plan with per-storage account pricing through the Azure portal or other supported methods. This change isn't automatic; you'll need to actively make the switch. Learn about how to [migrate to the new Defender for Storage](defender-for-storage-classic-migrate.md).
+
+### Can I exclude specific storage accounts from protection in the new Defender for Storage plan?
+
+Yes, the new Defender for Storage plan with per-storage account pricing allows you to exclude and configure specific storage accounts within protected subscriptions. However, you'll need to set up the exclusion again after you migrate to the new plan. Learn about how to [migrate to the new Defender for Storage](defender-for-storage-classic-migrate.md).
+
+## Next steps
+
+In this article, you learned about Microsoft Defender for Storage (classic).
+
+> [!div class="nextstepaction"]
+> [Enable Defender for Storage (classic)](defender-for-storage-classic-enable.md)
defender-for-cloud Defender For Storage Configure Malware Scan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md
+
+ Title: Setting up response to Malware Scanning - Microsoft Defender for Cloud
+description: Learn about how to configure response to malware scanning to prevent harmful files from being uploaded to Azure Storage.
Last updated : 03/16/2023+++++
+# Setting up response to Malware Scanning
+
+Set up automated responses to move or remove malicious files or to move/ingest clean files to another destination. Select the preferred response option that fits your scenario architecture.
+
+With Malware Scanning, you can build your automated response using the following scan result options:
+
+- Defender for Cloud security alerts
+- Event Grid events
+- Blob index tags
+
+Here are some options you can use to automate your response:
+
+## Delete or move a malicious blob
+
+You can use code or workflow automation to delete or move malicious files to quarantine.
+
+### Prepare your environment for delete or move
+
+- **Delete the malicious file** - Before setting up automated deletion, enabling [soft delete](../storage/blobs/soft-delete-blob-overview.md) on the storage account is recommended. It lets you "undelete" files if there are false positives or in cases where security professionals want to investigate the malicious files.
+
+- **Move the malicious file to quarantine** - You can move files to a dedicated storage container or storage account that is treated as "quarantine" (a minimal PowerShell sketch of the move follows this list).
+You may want only certain users, such as a security admin or a SOC analyst, to have permission to access this dedicated container or storage account.
+ - Using [Azure Active Directory (Azure AD) to control access to blob storage](../storage/blobs/authorize-access-azure-active-directory.md) is considered a best practice. To control access to the dedicated quarantine storage container, you can use [container-level role assignments using Azure AD role-based access control (RBAC)](../storage/blobs/authorize-access-azure-active-directory.md). Users with storage account-level permissions may still be able to access the "quarantine" container. You can either edit their permissions to be container-level or choose a different approach and move the malicious file to a dedicated storage account.
+ - If you must use other methods, such as [SAS (shared access signature)](../storage/common/storage-sas-overview.md) tokens on the protected storage account, it's best practice to move malicious files to another storage account (quarantine) and grant only Azure AD permission to access the quarantined storage account.
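+
+The following is a minimal sketch of the quarantine move using Azure PowerShell. The account, container, and blob names are hypothetical, and a cross-account server-side copy may additionally require a SAS on the source depending on your network and authentication setup:
+
+```azurepowershell
+# Assumes both accounts exist and the caller holds Storage Blob Data Contributor on them.
+$srcCtx  = New-AzStorageContext -StorageAccountName "contosodata" -UseConnectedAccount
+$destCtx = New-AzStorageContext -StorageAccountName "contosoquarantine" -UseConnectedAccount
+
+# Copy the flagged blob into the quarantine account.
+Start-AzStorageBlobCopy -SrcContainer "uploads" -SrcBlob "invoice.pdf" -Context $srcCtx `
+    -DestContainer "quarantine" -DestBlob "invoice.pdf" -DestContext $destCtx
+
+# Wait for the server-side copy to complete, then remove the original.
+Get-AzStorageBlobCopyState -Container "quarantine" -Blob "invoice.pdf" -Context $destCtx -WaitForComplete
+Remove-AzStorageBlob -Container "uploads" -Blob "invoice.pdf" -Context $srcCtx
+```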
+
+### Set up automation
+
+#### Option 1: Logic App based on Microsoft Defender for Cloud security alerts
+
+Logic App-based responses are a simple, no-code approach to setting up a response. However, the response time is slower than the event-driven, code-based approach.
+
+1. Deploy the [DeleteBlobLogicApp](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fstorageantimalwareprev.blob.core.windows.net%2Fworkflows%2FDeleteBlobLogicApp-template.json) Azure Resource Manager (ARM) template using the Azure portal.
+
+1. Add a role assignment to the Logic App to allow it to delete blobs from your storage account:
+ 1. Go to **Identity** in the side menu and select **Azure role assignments**.
+ :::image type="content" source="media/defender-for-storage-configure-malware-scan/storage-account-malware-response-1.png" alt-text="Screenshot showing how to set up a role assignment for workflow automation to respond to scan results.":::
+ 1. Add a role assignment at the subscription level with the **Storage Blob Data Contributor** role.
+ :::image type="content" source="media/defender-for-storage-configure-malware-scan/storage-account-malware-response-2.png" alt-text="Screenshot showing how to set up the required role assignment for workflow automation to respond to scan results.":::
+ 1. Create workflow automation for Microsoft Defender for Cloud alerts:
+
+ 1. Go to **Microsoft Defender for Cloud** in the Azure portal.
+
+ 1. Go to **Workflow automation** in the side menu.
+ 1. Add a new workflow. In the **Alert name contains** field, fill in **Malicious file uploaded to storage account** and choose your Logic app in the **Actions** section.
+
+ :::image type="content" source="media/defender-for-storage-configure-malware-scan/storage-account-malware-response-3.png" alt-text="Screenshot showing how to set up workflow automation to respond to scan results.":::
+
+#### Option 2: Function App based on Event Grid events
+
+A Function App provides high performance with a low-latency response time.
+
+1. Create a [Function App](../azure-functions/functions-overview.md) in the same resource group as your protected storage account.
+
+1. Add a role assignment for the Function App identity.
+
+ 1. Go to **Identity** in the side menu, make sure the **System assigned** identity status is **ON**, and select **Azure role assignments**.
+
+ 1. Add a role assignment at the subscription or storage account level with the **Storage Blob Data Contributor** role.
+
+1. Consume Event Grid events and connect an Azure Function as the endpoint type.
+
+1. When writing the Azure Function code, you can use our premade function sample - [MoveMaliciousBlobEventTrigger](https://storageantimalwareprev.blob.core.windows.net/samples/MoveMaliciousBlobEventTrigger.cs), or [write your own code](../storage/blobs/storage-blob-copy.md) to copy the blob elsewhere, then delete it from the source.
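+
+If you prefer PowerShell over the C# sample, the following is a minimal sketch of an Event Grid-triggered PowerShell function. The payload field names (`scanResultType`, `blobUri`) are assumptions about the scan-result event schema; validate them against a real event before relying on them:
+
+```powershell
+# run.ps1 - Event Grid trigger of a PowerShell Function App.
+param($eventGridEvent, $TriggerMetadata)
+
+$result  = $eventGridEvent.data.scanResultType   # assumed field, e.g. "Malicious"
+$blobUri = $eventGridEvent.data.blobUri          # assumed field carrying the blob URL
+
+if ($result -eq "Malicious") {
+    # Copy the blob to quarantine and delete it from the source,
+    # for example with the Az.Storage cmdlets shown earlier.
+    Write-Host "Quarantining $blobUri"
+}
+```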
+
+## Make your applications and data flows aware of Malware Scanning scan results
+
+Malware Scanning is near real-time, and there's usually a small time window between the upload and the scan.
+Because storage is noncompute, there's no risk that malicious files will execute in your storage. The risk is users or applications accessing malicious files and spreading them throughout the organization.
+
+There are a few methods to make your applications and data flows aware of Malware Scanning scan results and ensure there's no way to access/process a file before it has been scanned and its result has been consumed and acted on.
+
+### Applications ingest data based on the scan result
+
+#### Option 1: Apps checking "Index tag" before processing
+
+One way to ingest data safely is to update all the applications that access the storage account. Each application checks the scan result for each file, and if the blob **Index tag** scan result is **no threats found**, the application reads the blob.
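+
+The following is a minimal PowerShell sketch of that check. The index tag key and the account, container, and blob names are assumptions for illustration; inspect a scanned blob in your account to confirm the exact tag name and values:
+
+```azurepowershell
+$ctx  = New-AzStorageContext -StorageAccountName "contosodata" -UseConnectedAccount
+$tags = Get-AzStorageBlobTag -Container "uploads" -Blob "invoice.pdf" -Context $ctx
+
+# Only hand the blob to the downstream application if it was scanned clean.
+if ($tags["Malware scanning scan result"] -eq "No threats found") {
+    Get-AzStorageBlobContent -Container "uploads" -Blob "invoice.pdf" -Destination "./invoice.pdf" -Context $ctx
+}
+```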
+
+#### Option 2: Connect your application to a Webhook in Event Grid events
+
+You can connect your application to a Webhook in Event Grid events and use those events to trigger the relevant processes for files that have **no threats found** scan results.
+Learn more about using [Webhook event delivery and validating your endpoint](../event-grid/webhook-event-delivery.md).
+
+### Use an intermediary storage account as a DMZ
+
+You can set up an intermediary storage account for untrusted content (DMZ) and direct uploading traffic to the DMZ.
+On the untrusted storage account, enable Malware Scanning and connect Event Grid and a Function App to move only blobs scanned with the "no threats found" result to the destination storage account.
++
+## Next steps
+
+In this article, you learned about Microsoft Defender for Storage.
+
+> [!div class="nextstepaction"]
+> [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Data Sensitivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-data-sensitivity.md
+
+ Title: Detect threats to sensitive data - Microsoft Defender for Cloud
+description: Learn about using security alerts to protect your sensitive data from exposure.
Last updated : 03/16/2023+++++
+# Detect threats to sensitive data
+
+Sensitive data threat detection lets you efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk. By quickly identifying and addressing the most significant risks, this capability helps security teams reduce the likelihood of data breaches and enhances sensitive data protection by detecting exposure events and suspicious activities on resources containing sensitive data.
+
+This is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it at no additional cost.
+
+Learn more about [scope and limitations of sensitive data scanning](concept-data-security-posture-prepare.md).
+
+## How does Sensitive Data Discovery work?
+
+Sensitive Data Threat Detection is powered by the Sensitive Data Discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
+
+The service is integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings. This ensures that the detection and protection of sensitive data aligns with your established policies and procedures.
++
+Upon enablement, the Sensitive Data Discovery engine initiates an automatic scanning process across all supported storage accounts. Results are typically generated within 24 hours. Additionally, newly created storage accounts under protected subscriptions will be scanned within six hours of their creation. Recurring scans are scheduled to occur weekly after the enablement date. This is the same Sensitive Data Discovery engine used for sensitive data discovery in Defender CSPM.
+
+## Prerequisites
+
+Sensitive data threat detection is available for Blob storage accounts, including Standard general-purpose V1, Standard general-purpose V2, Azure Data Lake Storage Gen2, and Premium block blobs. Learn more about the [availability of Defender for Storage features](defender-for-storage-introduction.md#availability).
+
+To enable sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions.
+Learn more about the [roles and permissions](support-matrix-defender-for-storage.md) required for sensitive data threat detection.
+
+## Enabling sensitive data threat detection
+
+Sensitive data threat detection is enabled by default when you enable Defender for Storage. You can [enable it or disable it](../storage/common/azure-defender-storage-configure.md) in the Azure portal or with other at-scale methods at no additional cost.
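+
+As one at-scale option, you can toggle the capability programmatically. The following is a sketch using `Invoke-AzRestMethod` against the Microsoft.Security pricings API; the subplan name, the extension name, and the API version are assumptions to verify against the current API reference before use:
+
+```azurepowershell
+# Assumed payload shape for the Defender for Storage pricing with the
+# sensitive data discovery extension enabled.
+$body = @'
+{
+  "properties": {
+    "pricingTier": "Standard",
+    "subPlan": "DefenderForStorageV2",
+    "extensions": [
+      { "name": "SensitiveDataDiscovery", "isEnabled": "True" }
+    ]
+  }
+}
+'@
+Invoke-AzRestMethod -Method PUT -Payload $body `
+  -Path "/subscriptions/<subscriptionId>/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01"
+```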
+
+## Using the sensitivity context in the security alerts
+
+The sensitive data threat detection capability helps security teams prioritize security incidents and respond in a timely manner. Defender for Storage alerts include findings from sensitivity scanning and indications of operations that have been performed on resources containing sensitive data.
+
+In the alert's Extended Properties, you can find sensitivity scanning findings for a **blob container**:
+
+- Sensitivity scanning time UTC - when the last scan was performed
+- Top sensitivity label - the most sensitive label found in the blob container
+- Sensitive information types - information types that were found and whether they are based on custom rules
+- Sensitive file types - the file types of the sensitive data
++
+## Integrate with the organizational sensitivity settings in Microsoft Purview (optional)
+
+When you enable sensitive data threat detection, the sensitive data categories include the default list of built-in sensitive information types (SITs) from Microsoft Purview. This affects the alerts you receive from Defender for Storage: storage accounts or containers that are found to contain these SITs are marked as containing sensitive data.
+
+To customize Data Sensitivity Discovery for your organization, you can [create custom sensitive information types (SITs)](/microsoft-365/compliance/create-a-custom-sensitive-information-type) and connect to your organizational settings with a single-step integration. Learn more [here](episode-two.md).
+
+You also can create and publish sensitivity labels for your tenant in Microsoft Purview with a scope that includes Items and Schematized data assets and Auto-labeling rules (recommended). Learn more about [sensitivity labels](/microsoft-365/compliance/sensitivity-labels) in Microsoft Purview.
+
+## Next steps
+
+In this article, you learned about Microsoft Defender for Storage.
+
+> [!div class="nextstepaction"]
+> [Enable Defender for Storage](enable-enhanced-security.md)
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-exclude.md
- Title: Exclude storage accounts from Microsoft Defender for Storage
-description: Learn how to exclude specific Azure Storage accounts from Microsoft Defender for Storage protections.
Previously updated : 08/04/2022-----
-# Exclude a storage account from a protected subscription in the per-transaction plan
-
-When you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md) on a subscription for the per-transaction pricing, all current and future Azure Storage accounts in that subscription are protected. You can exclude specific storage accounts from the Defender for Storage protections using the Azure portal, PowerShell, or the Azure CLI.
-
-We don't recommend that you exclude storage accounts from Defender for Storage because attackers can use any opening in order to compromise your environment. If you want to optimize your Azure costs and remove storage accounts that you feel are low risk from Defender for Storage, you can use the [Price Estimation Workbook](https://portal.azure.com/#view/Microsoft_Azure_Security/SecurityMenuBlade/~/28) in the Azure portal to evaluate the cost savings.
-
-## Exclude an Azure Storage account protection on a subscription with per-transaction pricing
-
-To exclude an Azure Storage account from Microsoft Defender for Storage:
-
-### [**PowerShell**](#tab/enable-storage-protection-ps)
-
-### Use PowerShell to exclude an Azure Storage account
-
-1. If you don't have the Azure Az PowerShell module installed, install it using [the instructions from the Azure PowerShell documentation](/powershell/azure/install-az-ps).
-
-1. Using an authenticated account, connect to Azure with the ``Connect-AzAccount`` cmdlet, as explained in [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
-
-1. Define the AzDefenderPlanAutoEnable tag on the storage account with the ``Update-AzTag`` cmdlet (replace the ResourceId with the resource ID of the relevant storage account):
-
- ```azurepowershell
- Update-AzTag -ResourceId <resourceID> -Tag @{"AzDefenderPlanAutoEnable" = "off"} -Operation Merge
- ```
-
- If you skip this stage, your untagged resources will continue receiving daily updates from the subscription level enablement policy. That policy will enable Defender for Storage again on the account.
-
- > [!TIP]
- > Learn more about tags in [Use tags to organize your Azure resources and management hierarchy](../azure-resource-manager/management/tag-resources.md).
-
-1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the ``Disable-AzSecurityAdvancedThreatProtection`` cmdlet (using the same resource ID):
-
- ```azurepowershell
- Disable-AzSecurityAdvancedThreatProtection -ResourceId <resourceId>
- ```
-
- [Learn more about this cmdlet](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection).
--
-### [**Azure CLI**](#tab/enable-storage-protection-cli)
-
-### Use Azure CLI to exclude an Azure Storage account
-
-1. If you don't have Azure CLI installed, install it using [the instructions from the Azure CLI documentation](/cli/azure/install-azure-cli).
-
-1. Using an authenticated account, connect to Azure with the ``login`` command as explained in [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli) and enter your account credentials when prompted:
-
- ```azurecli
- az login
- ```
-
-1. Define the AzDefenderPlanAutoEnable tag on the storage account with the ``tag update`` command (replace the ResourceId with the resource ID of the relevant storage account):
-
- ```azurecli
- az tag update --resource-id MyResourceId --operation merge --tags AzDefenderPlanAutoEnable=off
- ```
-
- If you skip this stage, your untagged resources will continue receiving daily updates from the subscription level enablement policy. That policy will enable Defender for Storage again on the account.
-
- > [!TIP]
- > Learn more about tags in [az tag](/cli/azure/tag).
-
-1. Disable Microsoft Defender for Storage for the desired account on the relevant subscription with the `security atp storage` command (using the same resource ID):
-
- ```azurecli
- az security atp storage update --resource-group MyResourceGroup --storage-account MyStorageAccount --is-enabled false
- ```
-
- [Learn more about this command](/cli/azure/security/atp/storage).
--
-### [**Azure portal**](#tab/enable-storage-protection-portal)
-
-### Use the Azure portal to exclude an Azure Storage account
-
-1. Define the AzDefenderPlanAutoEnable tag on the storage account:
-
- 1. From the Azure portal, open the storage account and select the **Tags** page.
- 1. Enter the tag name **AzDefenderPlanAutoEnable** and set the value to **off**.
- 1. Select **Apply**.
-
- :::image type="content" source="media/defender-for-storage-exclude/define-tag-storage-account.png" alt-text="Screenshot of how to add a tag to a storage account in the Azure portal." lightbox="media/defender-for-storage-exclude/define-tag-storage-account.png":::
-
-1. Verify that the tag has been added successfully. It should look similar to this:
-
- :::image type="content" source="media/defender-for-storage-exclude/define-tag-storage-account-success.png" alt-text="Screenshot of a tag on a storage account in the Azure portal." lightbox="media/defender-for-storage-exclude/define-tag-storage-account-success.png":::
-
-1. Disable and then enable the Microsoft Defender for Storage on the subscription:
-
- 1. From the Azure portal, open **Microsoft Defender for Cloud**.
- 1. Open **Environment settings** > select the relevant subscription > **Defender plans** > toggle the Defender for Storage plan off > select **Save** > turn it back on > select **Save**.
-
- :::image type="content" source="media/defender-for-storage-exclude/defender-plan-toggle.png" alt-text="Screenshot of disabling and enabling the Microsoft Defender for Storage plan from Microsoft Defender for Cloud." lightbox="media/defender-for-storage-exclude/defender-plan-toggle.png":::
---
-## Exclude an Azure Databricks Storage account
-
-### Exclude an active Databricks workspace
-
-Microsoft Defender for Storage can exclude specific active Databricks workspace storage accounts, when the plan is already enabled on a subscription.
-
-**To exclude an active Databricks workspace**:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Azure Databricks** > **`Your Databricks workspace`** > **Tags**.
-1. In the Name field, enter `AzDefenderPlanAutoEnable`.
-1. In the Value field, enter `off`.
-1. Select **Apply**.
-
- :::image type="content" source="media/defender-for-storage-exclude/workspace-exclude.png" alt-text="Screenshot showing the location, and how to apply the tag to your Azure Databricks account.":::
-
-1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **`Your subscription`**.
-1. Toggle the Defender for Storage plan to **Off**.
-
- :::image type="content" source="media/defender-for-storage-exclude/storage-off.png" alt-text="Screenshot showing how to switch the Defender for Storage plan to off.":::
-
-1. Select **Save**.
-1. Toggle the Defender for Storage plan to **On**.
-1. Select **Save**.
-
-The tags will be inherited by the Storage account of the Databricks workspace and prevent Defender for Storage from turning on.
-
-> [!NOTE]
-> Tags can't be added directly to the Databricks Storage account, or its Managed Resource Group.
-
-### Prevent auto-enabling on a new Databricks workspace storage account
-
-When you create a new Databricks workspace, you have the ability to add a tag that will prevent your Microsoft Defender for Storage account from enabling automatically.
-
-**To prevent auto-enabling on a new Databricks workspace storage account**:
-
-1. Follow [these steps](/azure/databricks/scenarios/quickstart-create-Databricks-workspace-portal?tabs=azure-portal) to create a new Azure Databricks workspace.
-
-1. In the Tags tab, enter a tag named `AzDefenderPlanAutoEnable`.
-
-1. Enter the value `off`.
-
- :::image type="content" source="media/defender-for-storage-exclude/tag-off.png" alt-text="Screenshot that shows how to create a tag in the Databricks workspace.":::
-
-1. Continue following the instructions to create your new Azure Databricks workspace.
-
-The Microsoft Defender for Storage account will inherit the tag of the Databricks workspace, which will prevent Defender for Storage from turning on automatically.
-
-## Next steps
--- Explore the [Microsoft Defender for Storage – Price Estimation Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-storage-price-estimation-dashboard/ba-p/2429724)
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 07/12/2022 Last updated : 03/23/2023
# Overview of Microsoft Defender for Storage
-**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
+**Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects potential threats to your storage accounts.
+It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption.
-You can [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md) at either the subscription level (recommended) or the resource level.
+> [!Note]
+> This article is about the new Defender for Storage plan that was launched on March 28, 2023. It includes new features like Malware Scanning and Sensitive Data Threat Detection. This plan also provides a more predictable pricing structure for better control over coverage and costs. Additionally, all new Defender features will only be added to the new plan. Migrating to the new plan is a simple process; read about [how to migrate from the classic plan](defender-for-storage-classic-migrate.md).
-Defender for Storage continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+Microsoft Defender for Storage provides comprehensive security by analyzing the data plane and control plane telemetry generated by [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/), [Azure Files](https://azure.microsoft.com/products/storage/files/), and [Azure Data Lake Storage](https://azure.microsoft.com/products/storage/data-lake-storage) services. It uses advanced threat detection capabilities powered by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684), Microsoft Defender Antivirus, and [Sensitive Data Discovery](defender-for-storage-data-sensitivity.md) to help you discover and mitigate potential threats.
-Analyzed telemetry of Azure Blob Storage includes operation types such as `Get Blob`, `Put Blob`, `Get Container ACL`, `List Blobs`, and `Get Blob Properties`. Examples of analyzed Azure Files operation types include `Get File`, `Create File`, `List Files`, `Get File Properties`, and `Put Range`.
+Defender for Storage includes:
+- Activity Monitoring
+- Sensitive data threat detection (preview feature, new plan only)
+- Malware Scanning (preview feature, new plan only)
-Defender for Storage doesn't access the Storage account data and has no impact on its performance.
-You can learn more by watching this video from the Defender for Cloud in the Field video series:
-- [Defender for Storage in the field](episode-thirteen.md)
+## Getting started
+
+With a simple agentless setup at scale, you can [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.
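+
+For example, the following is a minimal sketch of subscription-level enablement with Azure PowerShell. The `Az.Security` module is required, and the subplan name is an assumption to confirm against the current plan naming:
+
+```azurepowershell
+# Point at the subscription you want to protect, then enable the plan.
+Set-AzContext -Subscription "<subscriptionId>"
+Set-AzSecurityPricing -Name "StorageAccounts" -PricingTier "Standard" -SubPlan "DefenderForStorageV2"
+```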
> [!NOTE]
-> Microsoft Defender for Storage customers can now choose to move to a new predictable pricing plan. The pricing model is per-storage account, where high-volume transactions may incur additional overage charges. This new pricing plan will also include all new security features and detections.
->
-> Customers using the legacy per-transaction pricing plan need to migrate to the new per-storage account plan to access these new features and pricing. The legacy per-transaction pricing plan charges are based on the number of analyzed transactions in the storage account.
->
-> For further details, please refer to the [Microsoft Defender for Storage FAQ](../storage/common/azure-defender-storage-configure.md#faqmicrosoft-defender-for-storage-pricing).
+> If you already have the Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to [migrate to the new pricing plan](defender-for-storage-classic-migrate.md).
## Availability |Aspect|Details| |-|:-| |Release state:|General availability (GA)|
-|Pricing:|**Microsoft Defender for Storage** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)|
-|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
+|Feature availability:|- Activity monitoring (security alerts) - General availability (GA)<br>- Malware Scanning ΓÇô Preview<br>- Sensitive data threat detection (Sensitive Data Discovery) ΓÇô Preview|
+|Pricing:|- Defender for Storage: $10/storage account/month\*<br>- Malware Scanning (add-on): Free during public preview\*\*<br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* In the future, Malware Scanning will be priced at $0.15/GB of data ingested. Billing for Malware Scanning is not enabled during public preview, and advance notice will be given before billing starts.|
+| Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/)ΓÇ»(Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring |
+|Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the [required permissions](support-matrix-defender-for-storage.md).|
+|Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Azure Government (Only for activity monitoring)<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
+
+\* Azure DNS Zone is not supported for Malware Scanning and sensitive data threat detection.
## What are the benefits of Microsoft Defender for Storage?
-Defender for Storage provides:
+
+Defender for Storage provides the following:
+
+- **Better protection against malware**: Malware Scanning scans all file types, including archives, in near real-time for every uploaded blob and provides fast and reliable results, helping you prevent your storage accounts from acting as an entry and distribution point for threats. Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
+
+- **Improved threat detection and protection of sensitive data**: The sensitive data threat detection capability enables security professionals to efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk, leading to better detection and protection against potential threats. By quickly identifying and addressing the most significant risks, this capability lowers the likelihood of data breaches and enhances sensitive data protection by detecting exposure events and suspicious activities on resources containing sensitive data. Learn more about [sensitive data threat detection](defender-for-storage-data-sensitivity.md).
+
+- **Detection of entities without identities**: Defender for Storage detects suspicious activities generated by entities without identities that access your data using misconfigured and overly permissive Shared Access Signatures (SAS tokens) that may have been leaked or compromised, so that you can improve security hygiene and reduce the risk of unauthorized access. This capability is an expansion of the Activity Monitoring security alerts suite.
+
+- **Coverage of the top cloud storage threats**: Detection is powered by Microsoft Threat Intelligence, behavioral models, and machine learning models that identify unusual and suspicious activities. The Defender for Storage security alerts cover the top cloud storage threats, such as sensitive data exfiltration, data corruption, and malicious file uploads.
+
+- **Comprehensive security without enabling logs**: When Microsoft Defender for Storage is enabled, it continuously analyzes both the data plane and control plane telemetry stream generated by Azure Blob Storage, Azure Files, and Azure Data Lake Storage services without the requirement of enabling diagnostic logs.
+
+- **Frictionless enablement at scale**: Microsoft Defender for Storage is an agentless solution, easy to deploy, and enables security protection at scale using a native solution to Azure with just a single click.
+
+## How does the service work?
+
+### Activity monitoring
+
+Defender for Storage continuously analyzes data and control plane logs from protected storage accounts when enabled. There's no need to turn on resource logs for security benefits. Using Microsoft Threat Intelligence, it identifies suspicious signatures such as malicious IP addresses, Tor exit nodes, and potentially dangerous apps. It also builds data models and uses statistical and machine-learning methods to spot baseline activity anomalies, which may indicate malicious behavior. You'll receive security alerts for suspicious activities, but Defender for Storage ensures you won't get too many similar alerts. Activity monitoring won't affect performance, ingestion capacity, or access to your data.
+ -- **Azure-native security** - With 1-click enablement, Defender for Storage protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage provides centralized security across all data assets that are managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.
+### Malware Scanning (powered by Microsoft Defender Antivirus)
-- **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage cover the top storage threats such as unauthenticated access, compromised credentials, social engineering attacks, data exfiltration, privilege abuse, and malicious content.
+Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automated responses at scale.
+This is a configurable feature in the new Defender for Storage plan that is priced per GB scanned.
+Learn more about [Malware Scanning](defender-for-storage-malware-scan.md).
-- **Response at scale** - Defender for Cloud's automation tools make it easier to prevent and respond to identified threats. Learn more in [Automate responses to Defender for Cloud triggers](workflow-automation.md).
+### Sensitive data threat detection (powered by Sensitive Data Discovery)
+The sensitive data threat detection capability enables security teams to efficiently prioritize and examine security alerts by considering the sensitivity of the data that could be at risk, leading to better detection and preventing data breaches.
+Sensitive data threat detection is powered by the Sensitive Data Discovery engine, an agentless engine that uses a smart sampling method to find resources with sensitive data.
+The service is integrated with Microsoft Purview's sensitive information types (SITs) and classification labels, allowing seamless inheritance of your organization's sensitivity settings.
+This is a configurable feature in the new Defender for Storage plan. You can choose to enable or disable it at no additional cost.
+For more details, visit [Sensitive data threat detection](defender-for-storage-data-sensitivity.md).
-## Security threats in cloud-based storage services
-Microsoft security researchers have analyzed the attack surface of storage services. Storage accounts can be subject to data corruption, exposure of sensitive content, malicious content distribution, data exfiltration, unauthorized access, and more.
+## Pricing and cost controls
-The potential security risks are described in the [threat matrix for cloud-based storage services](https://www.microsoft.com/security/blog/2021/04/08/threat-matrix-for-storage/) and are based on the [MITRE ATT&CK® framework](https://attack.mitre.org/techniques/enterprise/), a knowledge base for the tactics and techniques employed in cyber attacks.
+### Per storage account pricing
+The new Microsoft Defender for Storage plan has predictable pricing based on the number of storage accounts you protect. With the option to enable at the subscription or resource level and to exclude specific storage accounts from protected subscriptions, you have increased flexibility to manage your security coverage. The pricing plan simplifies the cost calculation process, allowing you to scale easily as your needs change. Additional charges may apply to storage accounts with high-volume transactions.
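+
+For example, under the overage terms in the availability table above, a single protected storage account that handles 80 million transactions in a month would cost about $10 plus 7 × $0.1492, or roughly $11.04 for that month.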
-## What kind of alerts does Microsoft Defender for Storage provide?
+### Malware Scanning - Billing per GB, monthly capping, and configuration
-Security alerts are triggered for the following scenarios (typically from 1-2 hours after the event):
+Malware Scanning is charged on a per-gigabyte basis for scanned data. To ensure cost predictability, a monthly cap can be set on each storage account's scanned data volume. The cap can be set subscription-wide, affecting all storage accounts within the subscription, or applied to individual storage accounts. Under protected subscriptions, you can configure specific storage accounts with different limits.
-|Type of threat | Description |
-|||
-|**Unusual access to an account** | For example, access from a TOR exit node, suspicious IP addresses, unusual applications, unusual locations, and anonymous access without authentication. |
-|**Unusual behavior in an account** | Behavior that deviates from a learned baseline, such as a change of access permissions in an account, unusual access inspection, unusual data exploration, unusual deletion of blobs/files, or unusual data extraction. |
-|**Hash reputation based Malware detection** | Detection of known malware based on full blob/file hash. This can help detect ransomware, viruses, spyware, and other malware uploaded to an account, prevent it from entering the organization, and spreading to more users and resources. See also [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis). |
-|**Unusual file uploads** | Unusual cloud service packages and executable files that have been uploaded to an account. |
-| **Public visibility** | Potential break-in attempts by scanning containers and pulling potentially sensitive data from publicly accessible containers. |
-| **Phishing campaigns** | When content that's hosted on Azure Storage is identified as part of a phishing attack that's impacting Microsoft 365 users. |
+By default, the limit is set to 5,000 GB per month per storage account. Once this threshold is exceeded, scanning will cease for the remaining blobs, with a 20 GB confidence interval. For configuration details, refer to [configure Defender for Storage](../storage/common/azure-defender-storage-configure.md).
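+
+The following is a sketch of setting the cap programmatically through the same Microsoft.Security pricings API used for plan enablement, pairing the malware-scanning extension with a cap property. The extension name, the `CapGBPerMonth` property, and the API version are assumptions to verify against the current API reference:
+
+```azurepowershell
+# Assumed payload shape for capping on-upload malware scanning at 5,000 GB/month.
+$body = @'
+{
+  "properties": {
+    "pricingTier": "Standard",
+    "subPlan": "DefenderForStorageV2",
+    "extensions": [
+      { "name": "OnUploadMalwareScanning", "isEnabled": "True",
+        "additionalExtensionProperties": { "CapGBPerMonth": "5000" } }
+    ]
+  }
+}
+'@
+Invoke-AzRestMethod -Method PUT -Payload $body `
+  -Path "/subscriptions/<subscriptionId>/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01"
+```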
-You can check out [the full list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage).
+### Enablement at scale with granular controls
-Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md).
+Microsoft Defender for Storage enables you to secure your data at scale with granular controls. You can apply consistent security policies across all your storage accounts within a subscription or customize them for specific accounts to suit your business needs. You can also control your costs by choosing the level of protection you need for each resource. To get started, visit [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md).
-> [!TIP]
-> For a comprehensive list of all Defender for Storage alerts, see the [alerts reference page](alerts-reference.md#alerts-azurestorage). This is useful for workload owners who want to know what threats can be detected and help SOC teams gain familiarity with detections before investigating them. Learn more about what's in a Defender for Cloud security alert, and how to manage your alerts in [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md).
-## Explore security anomalies
+## Malware Scanning and hash reputation analysis
-When storage activity anomalies occur, you receive an email notification with information about the suspicious security event. Details of the event include:
+**Malware Scanning** is a paid add-on feature to Defender for Storage, currently available for Azure Blob Storage. It uses MDAV (Microsoft Defender Antivirus) to do a full malware scan, with high efficacy. It's significantly more comprehensive than file hash reputation analysis alone.
+
+The Activity Monitoring feature in Defender for Storage includes blob/file hash reputation analysis.
-- The nature of the anomaly
-- The storage account name
-- The event time
-- The storage type
-- The potential causes
-- The investigation steps
-- The remediation steps
+### Limitations of hash reputation analysis
-The email also includes details on possible causes and recommended actions to investigate and mitigate the potential threat.
+- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather, they analyze the telemetry generated from the Blob Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
+- **Hash reputation analysis isn't supported for all file protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file shares and blobs created with [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
-You can review and manage your current security alerts from Microsoft Defender for Cloud's [Security alerts tile](managing-and-responding-alerts.md). Select an alert for details and actions for investigating the current threat and addressing future threats.
+For Blob storage, you can enable [Malware Scanning](defender-for-storage-malware-scan.md) to get fuller coverage and efficacy.
-## Limitations of hash reputation analysis
+## Common questions
-- **Hash reputation isn't deep file inspection** - Microsoft Defender for Storage uses hash reputation analysis supported by [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) to determine whether an uploaded file is suspicious. The threat protection tools don't scan the uploaded files; rather they analyze the telemetry generated from the Blobs Storage and Files services. Defender for Storage then compares the hashes of newly uploaded files with hashes of known viruses, trojans, spyware, and ransomware.
+### Is it possible to enable Defender for Storage on a resource level?
-- **Hash reputation analysis isn't supported for all files protocols and operation types** - Some, but not all, of the telemetry logs contain the hash value of the related blob or file. In some cases, the telemetry doesn't contain a hash value. As a result, some operations can't be monitored for known malware uploads. Examples of such unsupported use cases include SMB file-shares and when a blob is created using [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list).
+Yes, it's possible to enable Defender for Storage at the resource level and set up Malware Scanning and Sensitivity Scanning accordingly. Keep in mind that enabling it at the subscription level is the recommended approach, as it will automatically protect all new storage accounts.
-> [!TIP]
-> When a file is suspected to contain malware, Defender for Cloud displays an alert and can optionally email the storage owner for approval to delete the suspicious file. To set up this automatic removal of files that hash reputation analysis indicates contain malware, deploy a [workflow automation to trigger on alerts that contain "Potential malware uploaded to a storage account"](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-respond-to-potential-malware-uploaded-to-azure-storage/ba-p/1452005).
+### Can I exclude certain storage accounts from protection?
-## FAQ - Microsoft Defender for Storage
+Yes, you can exclude storage accounts from protection.
-- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
-- [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
-- [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
+### How long does it take for subscription-level enablement to take effect?
-### How do I estimate charges at the account level?
+Enabling Defender for Storage at the subscription level may take up to 24 hours to be fully enabled across all storage accounts.
-To optimize costs, you might want to exclude specific Storage accounts associated with high traffic from Defender for Storage protections. To get an estimate of Defender for Storage costs, use the [Price Estimation Workbook](https://portal.azure.com/#blade/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Security%20Center/ConfigurationId/community-Workbooks%2FAzure%20Security%20Center%2FPrice%20Estimation/Type/workbook/WorkbookTemplateName/Price%20Estimation) in the Azure portal.
+### Is there a difference in features between the new plan and Defender for Storage (classic)?
-### Can I exclude a specific Azure Storage account from a protected subscription?
+Yes, there is a difference in the capabilities of the two plans. New and future security capabilities will only be available in the new Defender for Storage plan. If you want to access these new capabilities, you'll need to enable the new plan.
-Excluding specific storage accounts from protection is only possible on the per-transaction pricing plan. The per-storage account pricing plan will support exclusion in the future. To exclude a storage account, follow instructions in [Exclude a storage account from a protected subscription in the per-transaction plan](defender-for-storage-exclude.md).
+### Will the Defender for Storage (classic) continue to be supported?
-### How do I configure automatic responses for security alerts?
+Defender for Storage (classic) will continue to be supported for three years after the new Defender for Storage plan reaches general availability (GA).
-Use [workflow automation](workflow-automation.md) to trigger automatic responses to Defender for Cloud security alerts.
+### Can I switch back to the Defender for Storage (classic)?
-For example, you can set up automation to open tasks or tickets for specific personnel or teams in an external task management system.
+Yes, you can use the REST API to return to Defender for Storage (classic).
-> [!TIP]
-> Explore the automations available from the Defender for Cloud community pages: [ServiceNow automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Create-SNOWIncfromASCAlert), [Jira automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Open-JIRA-Ticket), [Azure DevOps automation](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation