Updates from: 07/09/2022 01:09:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
Previously updated : 06/15/2022 Last updated : 06/27/2022
The following table summarizes the Security Assertion Markup Language (SAML) app
| Feature | Custom policy | Notes |
| - | :--: | -- |
-| [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | Preview | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app).|
+| [MFA using time-based one-time password (TOTP) with authenticator apps](multi-factor-authentication.md#verification-methods) | GA | Users can use any authenticator app that supports TOTP verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app).|
| [Phone factor authentication](phone-factor-technical-profile.md) | GA | |
| [Azure AD MFA authentication](multi-factor-auth-technical-profile.md) | Preview | |
| [One-time password](one-time-password-technical-profile.md) | GA | |
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md
Previously updated : 01/14/2022 Last updated : 06/27/2022
With [Conditional Access](conditional-access-identity-protection-overview.md) us
- **SMS or phone call** - During the first sign-up or sign-in, the user is asked to provide and verify a phone number. During subsequent sign-ins, the user is prompted to select either the **Send Code** or **Call Me** phone MFA option. Depending on the user's choice, a text message is sent or a phone call is made to the verified phone number to identify the user. The user either provides the OTP code sent via text message or approves the phone call.
- **Phone call only** - Works in the same way as the SMS or phone call option, but only a phone call is made.
- **SMS only** - Works in the same way as the SMS or phone call option, but only a text message is sent.
-- **Authenticator app - TOTP (preview)** - The user must install an authenticator app that supports time-based one-time password (TOTP) verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app), on a device that they own. During the first sign-up or sign-in, the user scans a QR code or enters a code manually using the authenticator app. During subsequent sign-ins, the user types the TOTP code that appears on the authenticator app. See [how to set up the Microsoft Authenticator app](#enroll-a-user-in-totp-with-an-authenticator-app-for-end-users).
+- **Authenticator app - TOTP** - The user must install an authenticator app that supports time-based one-time password (TOTP) verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app), on a device that they own. During the first sign-up or sign-in, the user scans a QR code or enters a code manually using the authenticator app. During subsequent sign-ins, the user types the TOTP code that appears on the authenticator app. See [how to set up the Microsoft Authenticator app](#enroll-a-user-in-totp-with-an-authenticator-app-for-end-users).
> [!IMPORTANT]
> Authenticator app - TOTP provides stronger security than SMS/phone, and email is the least secure. [SMS/Phone-based multi-factor authentication incurs separate charges from the normal Azure AD B2C MAU's pricing model](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
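To build intuition for what the authenticator app computes, here's a minimal sketch of TOTP derivation (RFC 6238) using only the Python standard library. The base32 secret, 30-second step, and 6-digit length are the common defaults, not Azure AD B2C-specific values, and the secret shown is a hypothetical test value.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a base32 shared secret (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step                # number of elapsed time steps
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical secret captured from the QR code
```

Because the app and the verifying service derive the same code independently from the shared secret and the clock, no code ever needs to be sent to the device.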
In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then
1. In the left menu, select **Users**.
1. Search for and select the user whose TOTP authenticator app enrollment you want to delete.
1. In the left menu, select **Authentication methods**.
-1. Under **Usable authentication methods**, find **Software OATH token (Preview)**, and then select the ellipsis menu next to it. If you don't see this interface, select the option to **"Switch to the new user authentication methods experience! Click here to use it now"** to switch to the new authentication methods experience.
+1. Under **Usable authentication methods**, find **Software OATH token**, and then select the ellipsis menu next to it. If you don't see this interface, select the option to **"Switch to the new user authentication methods experience! Click here to use it now"** to switch to the new authentication methods experience.
1. Select **Delete**, and then select **Yes** to confirm.
:::image type="content" source="media/multi-factor-authentication/authentication-methods.png" alt-text="User authentication methods":::
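The portal steps above can in principle also be scripted against Microsoft Graph. The sketch below assumes the `softwareOathMethods` endpoint of the authentication methods API and placeholder IDs; verify the path and required permissions (such as UserAuthenticationMethod.ReadWrite.All) against the current Graph reference before relying on it.

```python
import requests

ACCESS_TOKEN = "<token>"                    # placeholder bearer token
user_id = "<user-object-id>"                # placeholder user ID
method_id = "<software-oath-method-id>"     # placeholder enrollment ID

# Delete the user's software OATH (TOTP) enrollment.
resp = requests.delete(
    f"https://graph.microsoft.com/v1.0/users/{user_id}"
    f"/authentication/softwareOathMethods/{method_id}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()                     # expect 204 No Content on success
```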
active-directory-domain-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md
Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
active-directory Partner Driven Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/partner-driven-integrations.md
+
+ Title: 'Use partner-driven integrations to provision accounts into all your applications'
+description: Use partner-driven integrations to provision accounts into all your applications.
++++++ Last updated : 07/08/2022++++
+# Partner-driven provisioning integrations
+
+The Azure Active Directory Provisioning service allows you to provision users and groups into both [SaaS](user-provisioning.md) and [on-premises](on-premises-scim-provisioning.md) applications. There are four integration paths:
+
+**Option 1 - Azure AD Application Gallery:**
+Popular third-party applications, such as Dropbox, Snowflake, and Workplace by Facebook, are made available for customers through the Azure AD application gallery. New applications can easily be onboarded to the gallery using the [application network portal](../azuread-dev/howto-app-gallery-listing.md).
+
+**Option 2 - Implement a SCIM compliant API for your application:**
+If your line-of-business application supports the [SCIM](https://aka.ms/scimoverview) standard, it can easily be integrated with the [Azure AD SCIM client](use-scim-to-provision-users-and-groups.md) (a minimal endpoint sketch follows this list of options).
+
+**Option 3 - Use Microsoft Graph:**
+Many new applications use Microsoft Graph to retrieve users, groups and other resources from Azure Active Directory. You can learn more about which scenarios call for [SCIM and Graph](scim-graph-scenarios.md).
+
+**Option 4 - Use partner-driven connectors:**
+In cases where an application doesn't support SCIM, partners have built gateways between the Azure AD SCIM client and target applications. **This document serves as a place for partners to attest to integrations that are compatible with Azure Active Directory, and for customers to discover these partner-driven integrations.** These gateways are built, maintained, and owned by the third-party vendor.
+
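+For a sense of what Option 2 involves, the following is a minimal sketch (assuming Flask; any HTTP framework works) of the /Users creation endpoint a SCIM client such as Azure AD's calls, with attribute names following the SCIM core schema (RFC 7643). It's illustrative only, not a complete or hardened implementation.
+
+```python
+from flask import Flask, request, jsonify
+
+app = Flask(__name__)
+users = {}  # in-memory store, for illustration only
+
+@app.post("/scim/v2/Users")
+def create_user():
+    payload = request.get_json()
+    user_id = str(len(users) + 1)
+    users[user_id] = {
+        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
+        "id": user_id,
+        "userName": payload["userName"],        # required SCIM attribute
+        "active": payload.get("active", True),
+    }
+    return jsonify(users[user_id]), 201
+```
+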
+## Available partner-driven integrations
+The descriptions and lists of applications below are provided by the partners themselves. You can use the lists of supported applications to identify a partner that you may want to contact and learn more about.
+
+### IDMWORKS
+#### Description
+We Are Experts In Identity & Access Management and Data Center Management.
+The Azure AD platform integrates with IDMWORKS IdentityForge (IDF) Gateway for user lifecycle management for Mainframe systems (RACF, Top Secret, ACF2), Midrange system (AS400), Healthcare applications (EPIC/Cerner), Linux/Unix servers, Databases, and dozens of on-premises and cloud applications. IdentityForge provides a central, standardized integration engine and modern identity store that serves as a trusted source for all lifecycle management.
+The IDF Gateway for Azure AD provides lifecycle management for import sources and provisioning target systems that are not covered by the Azure AD connector portfolio like Mainframe systems (RACF, Top Secret, ACF2) or Healthcare applications (EPIC/Cerner). The IDF Gateway powers Azure AD identity lifecycle management (LCM) to continuously synchronize user account information from Mainframe/Healthcare sources and to automate the account provisioning lifecycle use cases like create, read (import), update, deactivate, delete user accounts and perform group management.
+
+#### Contact information
+* Company website: https://www.idmworks.com/identity-forge
+* Contact information: https://www.idmworks.com/contacts/
+
+#### Popular applications supported
+
+Leading provider of Mainframe, Healthcare and ERP integrations. More can be found at https://www.idmworks.com/identity-forge/
+
+* IBM RACF
+* CA Top Secret
+* CA ACF2
+* IBM i (AS/400)
+* HP NonStop
+* EPIC
+* SAP ECC
+
+### UNIFY Solutions
+#### Description
+
+UNIFY Solutions is the leading provider of Identity, Access, Security and Governance solutions.
+
+#### Contact information
+* Company website: https://unifysolutions.net/identity/unifyconnect
+* Contact information: https://unifysolutions.net/contact/
+
+#### Popular applications supported
+* Aurion People & Payroll
+* Frontier Software chris21
+* TechnologyOne HR
+* Ascender HCM
+* Fusion5 EmpowerHR
+* SAP ERP Human Capital Management
+
+## How to add partner-driven integrations to this document
+If you have built a SCIM Gateway and would like to add it to this list, follow the steps below.
+
+1. Review the Azure AD SCIM [documentation](use-scim-to-provision-users-and-groups.md) to understand the Azure AD SCIM implementation.
+1. Test compatibility between the Azure AD SCIM client and your SCIM gateway.
+1. Click the pencil at the top of this document to edit the article.
+1. Once you're redirected to GitHub, click the pencil at the top of the article to start making changes.
+1. Make changes in the article using the Markdown language and create a pull request. Make sure to provide a description for the pull request.
+1. An admin of the repository will review and merge your changes so that others can view them.
+
+## Guidelines
+* Add any new partners in alphabetical order.
+* Limit your entries to 500 words.
+* Ensure that you provide contact information for customers to learn more.
+* To avoid duplication, only include applications that don't already have out-of-the-box provisioning connectors in the [Azure AD application gallery](../saas-apps/tutorial-list.md).
+
+## Disclaimer
+For independent software vendors: The Microsoft Azure Active Directory Application Gallery Terms & Conditions, excluding Sections 2-4, apply to this Partner-Driven Integrations Catalog (https://aka.ms/PartnerDrivenProvisioning, the "Integrations Catalog"). References to the "Gallery" shall be read as the "Integrations Catalog" and references to an "App" shall be read as "Integration".
+
+If you don't agree with these terms, you shouldn't submit your Integration for listing in the Integrations Catalog. If you submit an Integration to the Integrations Catalog, you agree that you or the entity you represent ("YOU" or "YOUR") is bound by these terms.
+
+Microsoft reserves the right to accept or reject your proposed Integration in its sole discretion and reserves the right to determine the manner in which Apps are presented, promoted, or featured in this Integrations Catalog.
active-directory Active Directory Saml Claims Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-saml-claims-customization.md
Previously updated : 02/07/2022 Last updated : 06/28/2022
# Customize claims issued in the SAML token for enterprise applications
-Today, the Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery as well as custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, the Microsoft identity platform sends a token to the application (via an HTTP POST). And then, the application validates and uses the token to log the user in instead of prompting for a username and password. These SAML tokens contain pieces of information about the user known as *claims*.
+Today, the Microsoft identity platform supports single sign-on (SSO) with most enterprise applications, including both applications pre-integrated in the Azure AD app gallery and custom applications. When a user authenticates to an application through the Microsoft identity platform using the SAML 2.0 protocol, the Microsoft identity platform sends a token to the application (via an HTTP POST). And then, the application validates and uses the token to log the user in instead of prompting for a username and password. These SAML tokens contain pieces of information about the user known as *claims*.
A *claim* is information that an identity provider states about a user inside the token they issue for that user. In a [SAML token](https://en.wikipedia.org/wiki/SAML_2.0), this data is typically contained in the SAML Attribute Statement. The user's unique ID is typically represented in the SAML Subject, also called the Name Identifier.
-By default, the Microsoft identity platform issues a SAML token to your application that contains a `NameIdentifier` claim with a value of the user's username (also known as the user principal name) in Azure AD, which can uniquely identify the user. The SAML token also contains additional claims containing the user's email address, first name, and last name.
+By default, the Microsoft identity platform issues a SAML token to your application that contains a `NameIdentifier` claim with a value of the user's username (also known as the user principal name) in Azure AD, which can uniquely identify the user. The SAML token also contains other claims that include the user's email address, first name, and last name.
To view or edit the claims issued in the SAML token to the application, open the application in Azure portal. Then open the **User Attributes & Claims** section.
From the **Choose name identifier format** dropdown, you can select one of the f
| **Unspecified** | Microsoft identity platform will use Unspecified as the NameID format. |
|**Windows domain qualified name**| Microsoft identity platform will use the WindowsDomainQualifiedName format.|
-Transient NameID is also supported, but is not available in the dropdown and cannot be configured on Azure's side. To learn more about the NameIDPolicy attribute, see [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md).
+Transient NameID is also supported, but isn't available in the dropdown and can't be configured on Azure's side. To learn more about the NameIDPolicy attribute, see [Single Sign-On SAML protocol](single-sign-on-saml-protocol.md).
### Attributes
Select the desired source for the `NameIdentifier` (or NameID) claim. You can se
| employeeid | Employee ID of the user |
| Directory extensions | Directory extensions [synced from on-premises Active Directory using Azure AD Connect Sync](../hybrid/how-to-connect-sync-feature-directory-extensions.md) |
| Extension Attributes 1-15 | On-premises extension attributes used to extend the Azure AD schema |
+| pairwiseid | Persistent form of user identifier |
For more info, see [Table 3: Valid ID values per source](reference-claims-mapping-policy-type.md#table-3-valid-id-values-per-source).
-You can also assign any constant (static) value to any claims which you define in Azure AD. Please follow the below steps to assign a constant value:
+You can also assign a constant (static) value to any claim that you define in Azure AD. The following steps outline how to assign a constant value:
1. In the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, on the **User Attributes & Claims** section, click on the **Edit** icon to edit the claims.
1. Click on the claim that you want to modify.
To add application-specific claims:
To apply a transformation to a user attribute:

1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page.
-2. Select the function from the transformation dropdown. Depending on the function selected, you will have to provide parameters and a constant value to evaluate in the transformation. Refer to the table below for more information about the available functions.
-3. (preview) `Treat source as multivalued` is a checkbox indicating if the transform should be applied to all values or just the first. By default, transformations will only be applied to the first element in a multi value claim, by checking this box it ensures it is applied to all. This checkbox will only be enabled for multi valued attributes, for example `user.proxyaddresses`.
-4. To apply multiple transformation, click on **Add transformation**. You can apply a maximum of two transformation to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case.
+2. Select the function from the transformation dropdown. Depending on the function selected, you'll have to provide parameters and a constant value to evaluate in the transformation. Refer to the table below for more information about the available functions.
+3. (preview) `Treat source as multivalued` is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multivalued claim; checking this box ensures they're applied to all values. This checkbox is only enabled for multivalued attributes, for example `user.proxyaddresses`.
+4. To apply multiple transformations, click on **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of `user.mail` and then convert the string to uppercase.
![Multiple claims transformation](./media/active-directory-saml-claims-customization/sso-saml-multiple-claims-transformation.png)
You can use the following functions to transform claims.
| Function | Description |
|-|-|
| **ExtractMailPrefix()** | Removes the domain suffix from either the email address or the user principal name. This extracts only the first part of the user name being passed through (for example, "joe_smith" instead of joe_smith@contoso.com). |
-| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behaviour when the transformation input has a domain part. It will remove the domain part from input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com' and the separator is '@' and the parameter is 'fabrikam.com', this will result in joe_smith@fabrikam.com. |
+| **Join()** | Creates a new value by joining two attributes. Optionally, you can use a separator between the two attributes. For NameID claim transformation, the Join() function has specific behavior when the transformation input has a domain part. It will remove the domain part from the input before joining it with the separator and the selected parameter. For example, if the input of the transformation is 'joe_smith@contoso.com', the separator is '@', and the parameter is 'fabrikam.com', the result is joe_smith@fabrikam.com. |
| **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. |
| **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. |
| **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain "@contoso.com", otherwise you want to output the user principal name. To do this, you would configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |
You can use the following functions to transform claims.
| **ExtractNumeric() - Prefix** | Returns the prefix numerical part of the string.<br/>For example, if the input's value is "123_BSimon", then it returns "123". |
| **ExtractNumeric() - Suffix** | Returns the suffix numerical part of the string.<br/>For example, if the input's value is "BSimon_123", then it returns "123". |
| **IfEmpty()** | Outputs an attribute or constant if the input is null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is empty. To do this, you would configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1<br/>Parameter 3 (output if there's no match): user.employeeid |
-| **IfNotEmpty()** | Outputs an attribute or constant if the input is not null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is not empty. To do this, you would configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
-| **Substring() – Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim – PleaseExtractThisNow<br/>StartIndex – 6<br/>Length – 11<br/>Output: ExtractThis
-| **Substring() – EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index. <br/>SourceClaim - The claim source which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim – PleaseExtractThisNow<br/>StartIndex – 6<br/>Output: ExtractThisNow
+| **IfNotEmpty()** | Outputs an attribute or constant if the input isn't null or empty.<br/>For example, if you want to output an attribute stored in an extensionattribute if the employee ID for a given user is not empty. To do this, you would configure the following values:<br/>Parameter 1(input): user.employeeid<br/>Parameter 2 (output): user.extensionattribute1 |
+| **Substring() – Fixed Length** (Preview)| Extracts parts of a string claim type, beginning at the character at the specified position, and returns the specified number of characters.<br/>SourceClaim - The claim source on which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>Length - The length in characters of the substring.<br/>For example:<br/>sourceClaim – PleaseExtractThisNow<br/>StartIndex – 6<br/>Length – 11<br/>Output: ExtractThis |
+| **Substring() – EndOfString** (Preview) | Extracts parts of a string claim type, beginning at the character at the specified position, and returns the rest of the claim from the specified start index.<br/>SourceClaim - The claim source on which the transform should be executed.<br/>StartIndex - The zero-based starting character position of a substring in this instance.<br/>For example:<br/>sourceClaim – PleaseExtractThisNow<br/>StartIndex – 6<br/>Output: ExtractThisNow |
+| **RegexReplace()** (Preview) | The RegexReplace() transformation accepts the following input parameters:<br />- a user attribute as the regex input<br />- the regular expression itself<br />- additional input user attributes<br />- the replacement pattern. The replacement pattern may contain static text along with references to the regex output groups and the additional input parameters.<br /><br/>Instructions on how to use the RegexReplace() transformation are described below. |
If you need additional transformations, submit your idea in the [feedback forum in Azure AD](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789) under the *SaaS application* category.
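For intuition about how two of these functions behave, here's a rough Python emulation based on the examples in the table above; it's illustrative only, since the actual evaluation happens inside the Microsoft identity platform.

```python
def extract_mail_prefix(value: str) -> str:
    # ExtractMailPrefix(): keep only the local part of an email/UPN.
    return value.split("@", 1)[0]

def join_nameid(value: str, separator: str, parameter: str) -> str:
    # Join() in its NameID variant: strip the domain part before joining.
    local = value.split("@", 1)[0]
    return f"{local}{separator}{parameter}"

assert extract_mail_prefix("joe_smith@contoso.com") == "joe_smith"
assert join_nameid("joe_smith@contoso.com", "@", "fabrikam.com") == "joe_smith@fabrikam.com"
```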
+## How to use the RegexReplace() Transformation
+
+1. Use the edit button with the pen icon to open the claims transformation blade.
+1. The dropdown next to the "Transformation" label lets you select a transformation function. Select "RegexReplace()" to use the regex-based claims transformation method.
+1. "Parameter 1" is the source user attribute that serves as input to the regular expression transformation, for example user.mail, which holds a user email address such as admin@contoso.com.
+1. Some input user attributes can be multivalued. If the selected user attribute supports multiple values and you want to use all of them for the transformation, check the "Treat source as multivalued" checkbox. When checked, all values are used for the regex match; otherwise only the first value is used.
+1. The textbox next to the "Regex pattern" label accepts the regular expression that is evaluated against the value of the user attribute selected as "Parameter 1". For example, a regular expression to extract the user alias from the user's email address would be "(?'domain'^.*?)(?i)(\@contoso\.com)$".
+1. By using the "Add additional parameter" button, you can choose more user attributes to use in the transformation. The values of those attributes are then merged with the regex transformation output. Currently, up to five more parameters are supported.
+   <br />For example, if the user.country attribute is an input parameter and its value might be "US", refer to it as {country} inside the replacement pattern to merge it in. Once you select the user attribute for the parameter, an info balloon next to the parameter explains how the parameter can be used inside the replacement pattern.
+1. The textbox next to the "Replacement pattern" label accepts the replacement pattern. The replacement pattern is a text template that contains placeholders for regex output group names, input parameter group names, and static text. All group names must be wrapped inside curly braces, for example {group-name}. Say an administrator wants to use the user alias with some other domain name, for example xyz.com, and merge the country name with it; in this case the replacement pattern would be "{country}.{domain}@xyz.com", where {country} is the value of the input parameter and {domain} is the group output from the regular expression evaluation. In that case, the expected outcome is "US.swmal@xyz.com" (see the sketch after this list).
+
+1. The RegexReplace() transformation is evaluated only if the value of the selected user attribute for "Parameter 1" matches the regular expression provided in the "Regex pattern" textbox; otherwise the default claim value is added to the token. To validate the regular expression against the input parameter value, a test experience is available within the transform blade; it operates on dummy values only. When additional input parameters are used, the name of the parameter is added to the test result instead of the actual value. To access the test section, select the "Test transformation" button.
+
+1. Regex-based claims transformation can also be used as the second-level transformation. In that case, any other transformation method can be used as the first transformation.
+
+1. If RegexReplace() is selected as the second-level transformation, the output of the first-level transformation is used as its input. The second-level regex expression should match the output of the first transformation; otherwise the transformation won't be applied.
+
+1. As in step 5 above, "Regex pattern" is the regular expression for the second-level transformation.
+
+1. These are the input user attributes for the second-level transformation.
+
+1. You can delete a selected input parameter if it's no longer needed.
+
+1. Once you select the "Test transformation" button, the "Test transformation" section is displayed and the "Test transformation" button goes away.
+
+1. The cross (X) button hides the test section and re-renders the "Test transformation" button on the blade.
+
+1. The textbox next to the "Test regex input" label accepts the dummy input that is used for the test regular expression evaluation. If the regex-based claims transformation is configured as a second-level transformation, provide a dummy value that is the expected output of the first transformation.
+
+1. Once you provide the test regex input and configure the "Regex pattern", "Replacement pattern", and "Input parameters", you can evaluate the expression by selecting the "Run test" button.
+
+1. If the evaluation succeeds, the output of the test transformation is rendered next to the "Test transformation result" label.
+
+1. You can remove the second-level transformation by using the "Remove transformation" button.
+
+1. If the regex input value configured as "Parameter 1" doesn't match the "Regular expression", the transformation is skipped. In that case, you can configure an alternate user attribute, which is added to the token for the claim, by checking the checkbox next to the "Specify output if no match" label.
+
+1. If you want to return an alternate user attribute when there's no match and you checked the "Specify output if no match" checkbox, select the alternate user attribute by using the dropdown next to the "Parameter 3 (output if no match)" label.
+
+1. At the bottom of the blade, a full summary of the format is displayed that explains the meaning of the transformation in simple text.
+
+1. Once you've configured all settings for the transformation and are happy with the result, you can add it to the claims policy by selecting the "Add" button. Changes won't be saved until you select the "Save" toolbar button on the "Manage Claim" blade.
+
+The RegexReplace() transformation is also available for group claims transformations.
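+
+To make the walkthrough concrete, here's a rough Python emulation of the example above. Python's named-group syntax `(?P<name>...)` stands in for the .NET-style `(?'name'...)` pattern used by the platform, and all values are illustrative.
+
+```python
+import re
+
+user_mail = "admin@contoso.com"   # Parameter 1: user.mail
+user_country = "US"               # additional input parameter: user.country
+
+match = re.match(r"(?P<domain>^.*?)(?i:@contoso\.com)$", user_mail)
+if match:
+    # Replacement pattern "{country}.{domain}@xyz.com"
+    claim_value = f"{user_country}.{match.group('domain')}@xyz.com"
+else:
+    claim_value = user_mail       # no match: the default claim value is used
+
+print(claim_value)                # -> US.admin@xyz.com
+```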
+
+### RegexReplace() Transform Validations
+Input parameters with duplicate user attributes aren't allowed. If duplicate user attributes are selected, the following validation message is rendered after you select the "Add" or "Run test" button.
++
+When unused input parameters are found, the following message is rendered when you select the "Add" or "Run test" button. Defined input parameters must be used in the replacement pattern text.
++
+In the test experience, if the provided test regex input doesn't match the provided regular expression, the following message is displayed. This validation needs an input value, so it isn't applied when you select the "Add" button.
++
+In the test experience, when a source for the groups in the replacement pattern isn't found, you'll receive the following message. This validation isn't applied when you select the "Add" button.
++
## Add the UPN claim to SAML tokens
-The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you can not add it in the **User Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
+The `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` claim is part of the [SAML restricted claim set](reference-claims-mapping-policy-type.md#table-2-saml-restricted-claim-set), so you can't add it in the **User Attributes & Claims** section. As a workaround, you can add it as an [optional claim](active-directory-optional-claims.md) through **App registrations** in the Azure portal.
Open the app in **App registrations** and select **Token configuration** and then **Add optional claim**. Select the **SAML** token type, choose **upn** from the list, and click **Add** to get the claim in the token.
To add a claim condition:
3. Select the group(s) to which the user should belong. You can select up to 50 unique groups across all claims for a given application.
4. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim.
-The order in which you add the conditions are important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value which matches the expression will be emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like additional restrictions.
+The order in which you add the conditions is important. Azure AD first evaluates all conditions with source `Attribute` and then evaluates all conditions with source `Transformation` to decide which value to emit in the claim. Conditions with the same source are evaluated from top to bottom. The last value that matches the expression is emitted in the claim. Transformations such as `IsNotEmpty` and `Contains` act like additional restrictions.
-For example, Britta Simon is a guest user in the Contoso tenant. She belongs to another organization that also uses Azure AD. Given the below configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform will evaluate the conditions as follow.
+For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs to another organization that also uses Azure AD. Given the below configuration for the Fabrikam application, when Britta tries to sign in to Fabrikam, the Microsoft identity platform will evaluate the conditions as follows.
First, the Microsoft identity platform verifies if Britta's user type is **All guests**. Since this is true, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies if Britta's user type is **AAD guests**. Since this is also true, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with value `user.mail` for Britta. As another example, consider when Britta Simon tries to sign in and the following configuration is used. Azure AD first evaluates all conditions with source `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with value `user.othermail` for Britta.
+
+As a final example, let's consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim will fall back to `user.extensionattribute1` instead.
+
+## Advanced SAML Claims Options
+The following table lists advanced options that can be configured for an application.
+
+| Option | Description |
+|--|-|
+| Append application ID to issuer | Automatically adds the application ID to the issuer claim. This option ensures a unique claim value for each instance when there are multiple instances of the same application. This setting is ignored if a custom signing key isn't configured for the application. |
+| Override audience claim | Allows for the overriding of the audience claim sent to the application. The value provided must be a valid absolute URI. This setting is ignored if a custom signing key isn't configured for the application. |
+| Include attribute name format | If selected, Azure Active Directory adds an additional attribute called `NameFormat` that describes the format of the name to restricted, core, and optional claims for the application. For more information, see [Claims mapping policy type](reference-claims-mapping-policy-type.md#claim-sets). |
+
-As a final example, let's consider what happens if Britta has no `user.othermail` configured or it is empty. In both cases the condition entry is ignored, and the claim will fall back to `user.extensionattribute1` instead.
## Next steps
* [Application management in Azure AD](../manage-apps/what-is-application-management.md)
-* [Configure single sign-on on applications that are not in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
+* [Configure single sign-on on applications that aren't in the Azure AD application gallery](../manage-apps/configure-saml-single-sign-on.md)
* [Troubleshoot SAML-based single sign-on](../manage-apps/debug-saml-sso-issues.md)
active-directory Reference App Multi Instancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-app-multi-instancing.md
+
+ Title: Configure SAML app multi-instancing for an application
+description: Learn about SAML App Multi-Instancing, which is needed for the configuration of multiple instances of the same application within a tenant.
++++++++ Last updated : 06/28/2022++++
+# Configure SAML app multi-instancing for an application in Azure Active Directory
+App multi-instancing refers to configuring multiple instances of the same application within a tenant. For example, the organization has multiple Amazon Web Services accounts, each of which needs a separate service principal to handle instance-specific claims mapping (adding the AccountID claim for that AWS tenant) and role assignment. Or the customer has multiple instances of Box, which doesn't need special claims mapping but does need separate service principals for separate signing keys.
+
+## IDP versus SP initiated SSO    
+A user can sign in to an application in one of two ways: either through the application directly, which is known as service provider (SP) initiated single sign-on (SSO), or by going directly to the identity provider (IDP), known as IDP initiated SSO. Depending on which approach is used within your organization, follow the appropriate instructions below.
+
+## SP Initiated  
+In the SAML request of SP initiated SSO, the Issuer specified is usually the App ID URI. Utilizing the App ID URI doesn't allow the customer to distinguish which instance of an application is being targeted when using SP initiated SSO.
+
+## SP Initiated Configuration InstructionsΓÇ»
+Update the SAML single sign-on service URL configured within the service provider for each instance to include the service principal GUID as part of the URL. For example, where the general SSO sign-in URL for SAML would have been `https://login.microsoftonline.com/<tenantid>/saml2`, the URL can now be updated to target a specific service principal as follows: `https://login.microsoftonline.com/<tenantid>/saml2/<issuer>`.
+
+Only service principal identifiers in GUID format are accepted for the 'issuer' value. The service principal identifiers override the issuer in the SAML request and response, and the rest of the flow is completed as usual. There's one exception: if the application requires the request to be signed, the request is rejected even if the signature was valid. The rejection is done to avoid any security risks with functionally overriding values in a signed request.
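+
+A small sketch of composing the two URL forms described above (both GUIDs are placeholders):
+
+```python
+tenant_id = "00000000-0000-0000-0000-000000000000"             # placeholder
+service_principal_id = "11111111-1111-1111-1111-111111111111"  # placeholder 'issuer' GUID
+
+# General form: targets the tenant, ambiguous across app instances.
+general_url = f"https://login.microsoftonline.com/{tenant_id}/saml2"
+
+# Instance-specific form: the appended service principal GUID overrides the issuer.
+instance_url = f"{general_url}/{service_principal_id}"
+```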
+
+## IDP Initiated  
+The IDP initiated feature exposes two settings for each application.  
+
+- An "audience override" option is exposed for configuration by using claims mapping or the portal. The intended use case is applications that require the same audience for multiple instances. This setting is ignored if no custom signing key is configured for the application.
+
+- An "issuer with application ID" flag to indicate that the issuer should be unique for each application instead of unique for each tenant. This setting is ignored if no custom signing key is configured for the application.
+
+## IDP Initiated Configuration Instructions
+1. Open any SSO-enabled enterprise app and navigate to the SAML single sign-on blade.
+1. Select the 'Edit' button on the 'User Attributes & Claims' panel.
+![Edit Configuration](./media/reference-app-multi-instancing/userattributesclaimsedit.png)
+1. Open the advanced options blade.
+![Open Advanced Options](./media/reference-app-multi-instancing/advancedoptionsblade.png)
+1. Configure both options according to your preferences and select **Save**.
+![Configure Options](./media/reference-app-multi-instancing/advancedclaimsoptions.png)
+++
+## Next steps
+
+- To explore the claims mapping policy in Microsoft Graph, see [Claims mapping policy](/graph/api/resources/claimsMappingPolicy?view=graph-rest-1.0)
+- To learn more about how to configure this policy, see [Customize app SAML token claims](active-directory-saml-claims-customization.md)
active-directory Reference Claims Mapping Policy Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-claims-mapping-policy-type.md
Previously updated : 03/04/2022 Last updated : 06/28/2022
# Claims mapping policy type
-In Azure AD, a **Policy** object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they are assigned.
+In Azure AD, a **Policy** object represents a set of rules enforced on individual applications or on all applications in an organization. Each type of policy has a unique structure, with a set of properties that are then applied to objects to which they're assigned.
A claims mapping policy is a type of **Policy** object that [modifies the claims emitted in tokens](active-directory-claims-mapping.md) issued for specific applications.
There are certain sets of claims that define how and when they're used in tokens
|||
| Core claim set | Are present in every token regardless of the policy. These claims are also considered restricted, and can't be modified. |
| Basic claim set | Includes the claims that are emitted by default for tokens (in addition to the core claim set). You can [omit or modify basic claims](active-directory-claims-mapping.md#omit-the-basic-claims-from-tokens) by using the claims mapping policies. |
-| Restricted claim set | Can't be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. |
+| Restricted claim set | Can't be modified using policy. The data source can't be changed, and no transformation is applied when generating these claims. |
This section lists:
- [Table 1: JSON Web Token (JWT) restricted claim set](#table-1-json-web-token-jwt-restricted-claim-set)
The following table lists the SAML claims that are by default in the restricted
| `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn` |
| `http://schemas.microsoft.com/ws/2008/06/identity/claims/role` |
-These claims are restricted by default, but are not restricted if you [set the AcceptMappedClaims property](active-directory-claims-mapping.md#update-the-application-manifest) to `true` in your app manifest *or* have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+These claims are restricted by default, but aren't restricted if you [set the AcceptMappedClaims property](active-directory-claims-mapping.md#update-the-application-manifest) to `true` in your app manifest *or* have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid`
These claims are restricted by default, but are not restricted if you [set the A
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/sid`
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/x500distinguishedname`
-These claims are restricted by default, but are not restricted if you have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
+These claims are restricted by default, but aren't restricted if you have a [custom signing key](active-directory-claims-mapping.md#configure-a-custom-signing-key):
- `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`
- `http://schemas.microsoft.com/ws/2008/06/identity/claims/role`

## Claims mapping policy properties
-To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy is not set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
+To control what claims are emitted and where the data comes from, use the properties of a claims mapping policy. If a policy isn't set, the system issues tokens that include the core claim set, the basic claim set, and any [optional claims](active-directory-optional-claims.md) that the application has chosen to receive.
> [!NOTE]
> Claims in the core claim set are present in every token, regardless of what this property is set to.
To control what claims are emitted and where the data comes from, use the proper
**Summary:** This property determines whether the basic claim set is included in tokens affected by this policy.
- If set to True, all claims in the basic claim set are emitted in tokens affected by the policy.
-- If set to False, claims in the basic claim set are not in the tokens, unless they are individually added in the claims schema property of the same policy.
+- If set to False, claims in the basic claim set aren't in the tokens, unless they're individually added in the claims schema property of the same policy.
For each claim schema entry defined in this property, certain information is req
**Value:** The Value element defines a static value as the data to be emitted in the claim.
+**SAMLNameFormat:** The SAML Name Format property specifies the value for the "NameFormat" attribute for this claim. If present, the allowed values are:
+- urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified
+- urn:oasis:names:tc:SAML:2.0:attrname-format:uri
+- urn:oasis:names:tc:SAML:2.0:attrname-format:basic
+
**Source/ID pair:** The Source and ID elements define where the data in the claim is sourced from.

**Source/ExtensionID pair:** The Source and ExtensionID elements define the directory schema extension attribute where the data in the claim is sourced from. For more information, see [Using directory schema extension attributes in claims](active-directory-schema-extensions.md).
If the source is transformation, the **TransformationID** element must be includ
The ID element identifies which property on the source provides the value for the claim. The following table lists the values of ID valid for each value of Source.
+
> [!WARNING]
> Currently, the only available multi-valued claim sources on a user object are multi-valued extension attributes which have been synced from AADConnect. Other properties, such as OtherMails and tags, are multi-valued but only one value is emitted when selected as a source.
Based on the method chosen, a set of inputs and outputs is expected. Define the
|TransformationMethod|Expected input|Expected output|Description|
|--|--|--|--|
|Join|string1, string2, separator|outputClaim|Joins input strings by using a separator in between. For example: string1:"foo@bar.com" , string2:"sandbox" , separator:"." results in outputClaim:"foo@bar.com.sandbox"|
-|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other Schema Extensions which are storing a UPN or email address value for the user e.g. johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
+|ExtractMailPrefix|Email or UPN|extracted string|ExtensionAttributes 1-15 or any other schema extensions that store a UPN or email address value for the user, for example johndoe@contoso.com. Extracts the local part of an email address. For example: mail:"foo@bar.com" results in outputClaim:"foo". If no \@ sign is present, then the original input string is returned as is.|
**InputClaims:** Use an InputClaims element to pass the data from a claim schema entry to a transformation. It has three attributes: **ClaimTypeReferenceId**, **TransformationClaimType** and **TreatAsMultiValue** (Preview)
- **ClaimTypeReferenceId** is joined with the ID element of the claim schema entry to find the appropriate input claim.
- **TransformationClaimType** is used to give a unique name to this input. This name must match one of the expected inputs for the transformation method.
-- **TreatAsMultiValue** is a Boolean flag indicating if the transform should be applied to all values or just the first. By default, transformations will only be applied to the first element in a multi value claim, by setting this value to true it ensures it is applied to all. ProxyAddresses and groups are 2 examples for input claims that you would likely want to treat as a multi value.
+- **TreatAsMultiValue** is a Boolean flag indicating if the transform should be applied to all values or just the first. By default, transformations will only be applied to the first element in a multi value claim, by setting this value to true it ensures it's applied to all. ProxyAddresses and groups are two examples for input claims that you would likely want to treat as a multi value.
**InputParameters:** Use an InputParameters element to pass a constant value to a transformation. It has two attributes: **Value** and **ID**.
Based on the method chosen, a set of inputs and outputs is expected. Define the
| ExtractMailPrefix | None |
| Join | The suffix being joined must be a verified domain of the resource tenant. |
+### Issuer With Application ID
+**String:** issuerWithApplicationId
+**Data type:** Boolean (True or False)
+**Summary:** This property enables the addition of the application ID to the issuer claim. It ensures that multiple instances of the same application have a unique claim value for each instance. This setting is ignored if a custom signing key isn't configured for the application.
+- If set to `True`, the application ID is added to the issuer claim in tokens affected by the policy.
+- If set to `False`, the application ID isn't added to the issuer claim in tokens affected by the policy. (default)
+
+### Audience Override
+**String:** audienceOverride
+**Data type:** String
+**Summary:** This property enables the overriding of the audience claim sent to the application. The value provided must be a valid absolute URI. This setting is ignored if no custom signing key is configured for the application.
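+
+Putting several of the properties above together, here's a minimal sketch of a claims-mapping policy definition expressed as a Python dictionary ready to serialize to JSON. The claim values are illustrative, and the exact placement and casing of `issuerWithApplicationId` and `audienceOverride` should be verified against the current schema reference.
+
+```python
+import json
+
+policy_definition = {
+    "ClaimsMappingPolicy": {
+        "Version": 1,
+        "IncludeBasicClaimSet": "true",
+        "issuerWithApplicationId": "true",                       # see 'Issuer With Application ID'
+        "audienceOverride": "https://contoso.example/audience",  # see 'Audience Override'
+        "ClaimsSchema": [
+            {
+                "Source": "user",
+                "ID": "employeeid",
+                "SamlClaimType": "http://schemas.contoso.example/employeeid",
+                "SAMLNameFormat": "urn:oasis:names:tc:SAML:2.0:attrname-format:uri",
+            }
+        ],
+    }
+}
+print(json.dumps(policy_definition, indent=2))
+```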
++
## Next steps
- To learn how to customize the claims emitted in tokens for a specific application in their tenant using PowerShell, see [How to: Customize claims emitted in tokens for a specific app in a tenant](active-directory-claims-mapping.md)
active-directory Single Sign On Saml Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md
Title: Azure Single Sign On SAML Protocol
-description: This article describes the Single Sign-On (SSO) SAML protocol in Azure Active Directory
+ Title: Azure single sign-on SAML protocol
+description: This article describes the single sign-on (SSO) SAML protocol in Azure Active Directory
documentationcenter: .net
-# Single Sign-On SAML protocol
+# Single sign-on SAML protocol
-This article covers the SAML 2.0 authentication requests and responses that Azure Active Directory (Azure AD) supports for Single Sign-On (SSO).
+This article covers the SAML 2.0 authentication requests and responses that Azure Active Directory (Azure AD) supports for single sign-on (SSO).
The protocol diagram below describes the single sign-on sequence. The cloud service (the service provider) uses an HTTP Redirect binding to pass an `AuthnRequest` (authentication request) element to Azure AD (the identity provider). Azure AD then uses an HTTP post binding to post a `Response` element to the cloud service.
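To make the redirect binding concrete, here's a sketch of how a service provider could pack an `AuthnRequest` for the HTTP Redirect binding (deflate, then base64, then URL-encode). The XML is trimmed to the essentials, and the tenant ID and issuer are placeholders.

```python
import base64
import urllib.parse
import zlib

authn_request = (
    '<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
    'ID="id_abc123" Version="2.0" IssueInstant="2022-07-09T00:00:00Z">'
    '<Issuer xmlns="urn:oasis:names:tc:SAML:2.0:assertion">https://www.contoso.com</Issuer>'
    "</samlp:AuthnRequest>"
)

# Raw DEFLATE: strip the 2-byte zlib header and the 4-byte checksum.
deflated = zlib.compress(authn_request.encode("utf-8"))[2:-4]
saml_request = urllib.parse.quote_plus(base64.b64encode(deflated))
redirect_url = f"https://login.microsoftonline.com/<tenant-id>/saml2?SAMLRequest={saml_request}"
print(redirect_url)
```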
If provided, don't include the `ProxyCount` attribute, `IDPListOption` or `Reque
### Signature
-A `Signature` element in `AuthnRequest` elements is optional. Azure AD does not validate signed authentication requests if a signature is present. Requestor verification is provided for by only responding to registered Assertion Consumer Service URLs.
+A `Signature` element in `AuthnRequest` elements is optional. Azure AD can be configured (Preview) to enforce the requirement of signed authentication requests. If enabled, only signed authentication requests are accepted; otherwise, requestor verification is provided by responding only to registered Assertion Consumer Service URLs.
### Subject
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 06/22/2022 Last updated : 07/08/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID
>[!NOTE]
->This information last updated on June 22nd, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on July 8th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft 365 Business Voice (without Calling Plan) for US | BUSINESS_VOICE_DIRECTROUTING_MED | 8330dae3-d349-44f7-9cad-1b23c64baabe | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | MICROSOFT 365 DOMESTIC CALLING PLAN (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) | | Microsoft 365 Domestic Calling Plan for GCC | MCOPSTN_1_GOV | 923f58ab-fca1-46a1-92f9-89fda21238a8 | MCOPSTN1_GOV (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Domestic Calling for Government (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) |
-| Microsoft 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner 
(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee) |
+| Microsoft 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MDE_LITE (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Defender for Endpoint Plan 1 (292cc034-7b7c-4950-aaf5-943befd3f1d4)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Kaizala Pro 
(aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Common Data Service (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
|Microsoft 365 E3 - Unattended License | SPE_E3_RPA1 | c2ac2ee4-9bb1-47e4-8541-d689c7e83371 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft 
Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/> To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Microsoft 365 E3_USGOV_GCCHIGH | SPE_E3_USGOV_GCCHIGH | ca9d1dd9-dfe9-4fef-b97c-9bc1ea3c3658 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A 
(c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1(6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/> Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/> Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/> Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/> Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/> Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/> SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
-| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>WINDEFATP 
(871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search 
(94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) |
+| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>WINDEFATP 
(871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft Kaizala Pro (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management 
(65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
| Microsoft 365 E5 Developer (without Windows and Audio Conferencing) | DEVELOPERPACK_E5 | c42b9cae-ea4f-4ab7-9717-81576235ccac | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE 
(2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014) | Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Common Data Service for Teams (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Cloud App Security (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (Plan 3) (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype 
for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Common Data Service (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power Virtual Agents for Office 365 (ded3d325-1bdc-453e-8432-5bac26d7a014) | | Microsoft 365 E5 Compliance | INFORMATION_PROTECTION_COMPLIANCE | 184efa21-98c3-4e5d-95ab-d07053a96e67 | LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>INSIDER_RISK_MANAGEMENT (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data 
Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-Based Classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Insider Risk Management (9d0c4ee5-e4a1-4625-ab39-d82b619b1a34)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | Microsoft 365 E5 Security | IDENTITY_THREAT_PROTECTION | 26124093-3d78-432b-b5dc-48bf992543d5 | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Microsoft Defender for Cloud Apps (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f) |
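The string IDs (such as SPE_E3 and SPE_E5) and GUIDs in the table above are the values the directory APIs return and accept. As a quick way to check which of these SKUs and service plans a tenant actually holds, here is a minimal Python sketch against the Microsoft Graph v1.0 `subscribedSkus` endpoint; the `requests` package and the placeholder access token (which needs the `Organization.Read.All` permission) are assumptions, not part of the article.

```python
import requests

TOKEN = "<access-token>"  # placeholder: acquire via MSAL or another OAuth client

# List the tenant's subscribed SKUs. skuPartNumber/skuId correspond to the
# "String ID" and "GUID" columns above; servicePlans corresponds to the
# "Service plans included" column.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/subscribedSkus",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for sku in resp.json()["value"]:
    print(f'{sku["skuPartNumber"]} ({sku["skuId"]})')
    for plan in sku["servicePlans"]:
        print(f'  {plan["servicePlanName"]} ({plan["servicePlanId"]})')
```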
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 10 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
-| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) |
-| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
+| WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
+| Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3) |
| Windows 10 Enterprise E5 Commercial (GCC Compatible) | WINE5_GCC_COMPAT | 938fd547-d794-42a4-996c-1cc206619580 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118) |
+| Windows 10/11 Enterprise VDA | E3_VDA_only | d13ef257-988a-46f3-8fce-f47484dd4550 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows_Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Autopatch (9a6eeb79-0b4b-4bf0-9808-39d99a2cd5a3)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872) |
| Windows 365 Business 2 vCPU, 4 GB, 64 GB | CPC_B_2C_4RAM_64GB | 42e6818f-8966-444b-b7ac-0027c83fa8b5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_2C_4RAM_64GB (a790cd6e-a153-4461-83c7-e127037830b6) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 2 vCPU, 4 GB, 64 GB (a790cd6e-a153-4461-83c7-e127037830b6) | | Windows 365 Business 4 vCPU, 16 GB, 128 GB (with Windows Hybrid Benefit) | CPC_B_4C_16RAM_128GB_WHB | 439ac253-bfbc-49c7-acc0-6b951407b5ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_B_4C_16RAM_128GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Business 4 vCPU, 16 GB, 128 GB (1d4f75d3-a19b-49aa-88cb-f1ea1690b550) | | Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB | CPC_E_2C_4GB_64GB | 7bb14422-3b90-4389-a7be-f1b745fc037f | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_2C_4GB_64GB (23a25099-1b2f-4e07-84bd-b84606109438) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 4 GB, 64 GB (23a25099-1b2f-4e07-84bd-b84606109438) |
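When assigning any of these products programmatically, the APIs expect the GUIDs rather than the display names. Below is a minimal sketch using the Graph v1.0 `assignLicense` action; the user ID and token are placeholders, and the choice of the Microsoft 365 E3 SKU and the Sway service plan GUIDs from the tables above is purely illustrative.

```python
import requests

TOKEN = "<access-token>"          # placeholder
USER_ID = "user@contoso.example"  # placeholder; object ID or UPN

# Assign Microsoft 365 E3 (SPE_E3) by its SKU GUID from the table above,
# while disabling the Sway service plan by its service plan GUID.
# Note: the user must already have a usageLocation set.
body = {
    "addLicenses": [
        {
            "skuId": "05e9a617-0261-4cee-bb44-138d3ef5d965",
            "disabledPlans": ["a23b959c-7ce8-4e57-9140-b90eb88a9e97"],
        }
    ],
    "removeLicenses": [],
}
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{USER_ID}/assignLicense",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=body,
)
resp.raise_for_status()
print("license assigned to", resp.json()["displayName"])
```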
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
For information about Conditional Access and Teams, see [Overview of security an
Currently, B2B direct connect enables the Teams Connect shared channels feature. B2B direct connect users can access an external organization's Teams shared channel without having to switch tenants or sign in with a different account. The B2B direct connect user's access is determined by the shared channel's policies.
-In the resource organization, the Teams shared channel owner can search within Teams for users from an external organization and add them to the shared channel. After they're added, the B2B direct connect users can access the shared channel from within their home instance of Teams, where they collaborate using features such as chat, calls, file-sharing, and app-sharing. For details, see [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview).For details about the resources, files, and applications, that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
+In the resource organization, the Teams shared channel owner can search within Teams for users from an external organization and add them to the shared channel. After they're added, the B2B direct connect users can access the shared channel from within their home instance of Teams, where they collaborate using features such as chat, calls, file-sharing, and app-sharing. For details, see [Overview of teams and channels in Microsoft Teams](/microsoftteams/teams-channels-overview). For details about the resources, files, and applications that are available to the B2B direct connect user via the Teams shared channel, refer to [Chat, teams, channels, & apps in Microsoft Teams](/microsoftteams/deploy-chat-teams-channels-microsoft-teams-landing-page).
## B2B direct connect vs. B2B collaboration
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
When a user clicks the **Accept invitation** link in an [invitation email](invit
![Screenshot showing the redemption flow diagram](media/redemption-experience/invitation-redemption-flow.png)
-**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with.*
+**If the user's User Principal Name (UPN) matches with both an existing Azure AD and personal MSA account, the user will be prompted to choose which account they want to redeem with. If Email OTP is enabled, existing unmanaged "viral" Azure AD accounts will be ignored (See step #9).*
1. Azure AD performs user-based discovery to determine if the user exists in an [existing Azure AD tenant](./what-is-b2b.md#easily-invite-guest-users-from-the-azure-ad-portal).
If you see an error that requires admin consent while accessing an application,
- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md) - [How do information workers add B2B collaboration users to Azure Active Directory?](add-users-information-worker.md) - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)-- [Leave an organization as a guest user](leave-the-organization.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
active-directory Active Directory Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-architecture.md
Previously updated : 05/23/2019 Last updated : 07/08/2022
Azure AD operates across datacenters with the following characteristics:
* Authentication, Graph, and other AD services reside behind the Gateway service. The Gateway manages load balancing of these services. It will fail over automatically if any unhealthy servers are detected using transactional health probes. Based on these health probes, the Gateway dynamically routes traffic to healthy datacenters. * For *reads*, the directory has secondary replicas and corresponding front-end services in an active-active configuration operating in multiple datacenters. In case of a failure of an entire datacenter, traffic will be automatically routed to a different datacenter.
- *For *writes*, the directory will fail over primary (master) replica across datacenters via planned (new primary is synchronized to old primary) or emergency failover procedures. Data durability is achieved by replicating any commit to at least two datacenters.
+* For *writes*, the directory will fail over primary (master) replica across datacenters via planned (new primary is synchronized to old primary) or emergency failover procedures. Data durability is achieved by replicating any commit to at least two datacenters.
#### Data consistency
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
Many companies migrating from Active Directory (AD) to Azure Active Directory (A
* **Users and Groups**: Represent the human and non-human identities and attributes that access resources from different devices as specified.
+[ ![Architectural diagram depicting applications, devices, and users and groups layers, each containing common technologies found within each layer.](media/road-to-cloud-posture/road-to-the-cloud-start.png) ](media/road-to-cloud-posture/road-to-the-cloud-start.png#lightbox)
Microsoft has modeled five states of transformation that commonly align with the business goals of our customers. As the goals of customers mature, it's typical for them to shift from one state to the next at a pace that suits their resourcing and culture. This approach closely follows [Active Directory in Transition: Gartner Survey| Results and Analysis](https://www.gartner.com/en/documents/4006741).
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about users flows, see [User flow versions in Azure Active
In July 2020 we added the following 55 new applications in our App gallery with Federation support:
-[Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://appfusions.alohacloud.com/auth), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngage™](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md), [Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
+[Appreiz](https://microsoftteams.appreiz.com/), [Inextor Vault](https://inexto.com/inexto-suite/inextor), [Beekast](https://my.beekast.com/), [Templafy OpenID Connect](https://app.templafy.com/), [PeterConnects receptionist](https://msteams.peterconnects.com/), [AlohaCloud](https://www.alohacloud.com/), Control Tower, [Cocoom](https://start.cocoom.com/), [COINS Construction Cloud](https://sso.coinsconstructioncloud.com/#login/), [Medxnote MT](https://task.teamsmain.medx.im/authorization), [Reflekt](https://reflekt.konsolute.com/login), [Rever](https://app.reverscore.net/access), [MyCompanyArchive](https://login.mycompanyarchive.com/), [GReminders](https://app.greminders.com/o365-oauth), [Titanfile](../saas-apps/titanfile-tutorial.md), [Wootric](../saas-apps/wootric-tutorial.md), [SolarWinds Orion](https://support.solarwinds.com/SuccessCenter/s/orion-platform?language=en_US), [OpenText Directory Services](../saas-apps/opentext-directory-services-tutorial.md), [Datasite](../saas-apps/datasite-tutorial.md), [BlogIn](../saas-apps/blogin-tutorial.md), [IntSights](../saas-apps/intsights-tutorial.md), [kpifire](../saas-apps/kpifire-tutorial.md), [Textline](../saas-apps/textline-tutorial.md), [Cloud Academy - SSO](../saas-apps/cloud-academy-sso-tutorial.md), [Community Spark](../saas-apps/community-spark-tutorial.md), [Chatwork](../saas-apps/chatwork-tutorial.md), [CloudSign](../saas-apps/cloudsign-tutorial.md), [C3M Cloud Control](../saas-apps/c3m-cloud-control-tutorial.md), [SmartHR](https://smarthr.jp/), [NumlyEngage™](../saas-apps/numlyengage-tutorial.md), [Michigan Data Hub Single Sign-On](../saas-apps/michigan-data-hub-single-sign-on-tutorial.md), [Egress](../saas-apps/egress-tutorial.md), [SendSafely](../saas-apps/sendsafely-tutorial.md), [Eletive](https://app.eletive.com/), [Right-Hand Cybersecurity ADI](https://right-hand.ai/), [Fyde Enterprise Authentication](https://enterprise.fyde.com/), [Verme](../saas-apps/verme-tutorial.md), [Lenses.io](../saas-apps/lensesio-tutorial.md), [Momenta](../saas-apps/momenta-tutorial.md), [Uprise](https://app.uprise.co/sign-in), [Q](https://q.moduleq.com/login), [CloudCords](../saas-apps/cloudcords-tutorial.md), [TellMe Bot](https://tellme365liteweb.azurewebsites.net/), [Inspire](https://app.inspiresoftware.com/), [Maverics Identity Orchestrator SAML Connector](https://www.strata.io/identity-fabric/), [Smartschool (School Management System)](https://smartschoolz.com/login), [Zepto - Intelligent timekeeping](https://user.zepto-ai.com/signin), [Studi.ly](https://studi.ly/), [Trackplan](http://www.trackplanfm.com/), [Skedda](../saas-apps/skedda-tutorial.md), [WhosOnLocation](../saas-apps/whos-on-location-tutorial.md), [Coggle](../saas-apps/coggle-tutorial.md), [Kemp LoadMaster](https://kemptechnologies.com/cloud-load-balancer/), [BrowserStack Single Sign-on](../saas-apps/browserstack-single-sign-on-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Cloudflare Azure Ad Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/cloudflare-azure-ad-integration.md
+
+ Title: Secure hybrid access with Azure AD and Cloudflare
+description: In this tutorial, learn how to integrate Cloudflare with Azure AD for secure hybrid access
+++++++ Last updated : 6/27/2022++++
+# Tutorial: Configure Cloudflare with Azure Active Directory for secure hybrid access
+
+In this tutorial, learn how to integrate Azure Active Directory
+(Azure AD) with Cloudflare Zero Trust. Using this solution, you can build rules based on user identity and group membership. Users can authenticate with their Azure AD credentials and connect to Zero Trust protected applications.
+
+## Prerequisites
+
+To get started, you need:
+
+- An Azure AD subscription
+
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/).
+
+- An Azure AD tenant linked to your Azure AD subscription
+
+  - See [Quickstart: Create a new tenant in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/fundamentals/active-directory-access-create-new-tenant).
+
+- A Cloudflare Zero Trust account
+
+ - If you don't have one, go to [Get started with Cloudflare's Zero Trust
+   platform](https://dash.cloudflare.com/sign-up/teams).
+
+## Integrate organization identity providers with Cloudflare Access
+
+Cloudflare Zero Trust Access helps enforce default-deny, Zero Trust
+rules that limit access to corporate applications, private IP spaces,
+and hostnames. This feature connects users faster and more safely than a virtual private network (VPN).
+
+Organizations can use multiple Identity Providers (IdPs) simultaneously, reducing friction when working with partners
+or contractors.
+
+To add an IdP as a sign-in method, configure the [Cloudflare Zero Trust
+dashboard](https://dash.teams.cloudflare.com/) and Azure AD.
+
+The following architecture diagram shows the implementation.
+
+![Screenshot shows the architecture diagram of Cloudflare and Azure AD integration](./media/cloudflare-azure-ad-integration/cloudflare-architecture-diagram.png)
+
+## Integrate a Cloudflare Zero Trust account with Azure AD
+
+To integrate a Cloudflare Zero Trust account with an instance of Azure AD:
+
+1. On the [Cloudflare Zero Trust
+ dashboard](https://dash.teams.cloudflare.com/),
+ navigate to **Settings > Authentication**.
+
+2. For **Login methods**, select **Add new**.
+
+ ![Screenshot shows adding new login methods](./media/cloudflare-azure-ad-integration/login-methods.png)
+
+3. Under **Select an identity provider**, select **Azure AD.**
+
+ ![Screenshot shows selecting a new identity provider](./media/cloudflare-azure-ad-integration/idp-azure-ad.png)
+
+4. The **Add Azure AD** dialog appears. Enter credentials from your Azure AD instance and make the necessary selections.
+
+ ![Screenshot shows making selections to Azure AD dialog box](./media/cloudflare-azure-ad-integration/add-azure-ad-as-idp.png)
+
+5. Select **Save**.
+
+## Register Cloudflare with Azure AD
+
+Use the instructions in the following three sections to register Cloudflare with Azure AD.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+2. Under **Azure Services**, select **Azure Active Directory**.
+
+3. In the left menu, under **Manage**, select **App registrations**.
+
+4. Select **+ New registration**.
+
+5. Name your application and enter your [team
+ domain](https://developers.cloudflare.com/cloudflare-one/glossary#team-domain), with **callback** at the end of the path: /cdn-cgi/access/callback.
+ For example, `https://<your-team-name>.cloudflareaccess.com/cdn-cgi/access/callback`
+
+6. Select **Register**.
+
+ ![Screenshot shows registering an application](./media/cloudflare-azure-ad-integration/register-application.png)
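+
+If you prefer to script the registration, the same app can be created with the Azure CLI. This is a minimal sketch, assuming a recent Azure CLI version; the display name is a placeholder and the team name must be replaced with your own:
+
+```azurecli-interactive
+# Register an app whose redirect URI points at the Cloudflare Access callback path
+az ad app create \
+  --display-name "Cloudflare Access" \
+  --web-redirect-uris "https://<your-team-name>.cloudflareaccess.com/cdn-cgi/access/callback"
+```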
+
+### Certificates & secrets
+
+1. On the **Cloudflare Access** screen, under **Essentials**, copy and save the Application (client) ID and the Directory (tenant) ID.
+
+ [ ![Screenshot shows cloudflare access screen](./media/cloudflare-azure-ad-integration/cloudflare-access.png) ](./media/cloudflare-azure-ad-integration/cloudflare-access.png#lightbox)
++
+2. In the left menu, under **Manage**, select **Certificates &
+ secrets**.
+
+ ![Screenshot shows Azure AD certificates and secrets screen](./media/cloudflare-azure-ad-integration/add-client-secret.png)
+
+3. Under **Client secrets**, select **+ New client secret**.
+
+4. In **Description**, name the client secret.
+
+5. Under **Expires**, select an expiration.
+
+6. Select **Add**.
+
+7. Under **Client secrets**, from the **Value** field, copy the value. Consider the value an application password. In this example, the value is visible; Azure values appear in the Cloudflare Access configuration.
+
+ ![Screenshot shows cloudflare access configuration for Azure AD](./media/cloudflare-azure-ad-integration/cloudflare-access-configuration.png)
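+
+A client secret can also be created from the Azure CLI. This is a hedged sketch; the application (client) ID is a placeholder, and the returned password is displayed only once, so save it immediately:
+
+```azurecli-interactive
+# Append a new one-year client secret to the registration and print its value
+az ad app credential reset \
+  --id <application-client-id> \
+  --display-name "cloudflare-access" \
+  --years 1 \
+  --append \
+  --query password -o tsv
+```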
+
+### Permissions
+
+1. In the left menu, select **API permissions**.
+
+2. Select **+** **Add a permission**.
+
+3. Under **Select an API**, select **Microsoft Graph**.
+
+ ![Screenshot shows Azure AD API permissions using MS Graph](./media/cloudflare-azure-ad-integration/microsoft-graph.png)
+
+4. Select **Delegated permissions** for the following permissions:
+
+- `email`
+
+- `openid`
+
+- `profile`
+
+- `offline_access`
+
+- `User.Read`
+
+- `Directory.Read.All`
+
+- `Group.Read.All`
+
+5. Under **Manage**, select **+** **Add permissions**.
+
+ [ ![Screenshot shows Azure AD request API permissions screen](./media/cloudflare-azure-ad-integration/request-api-permissions.png) ](./media/cloudflare-azure-ad-integration/request-api-permissions.png#lightbox)
++
+6. Select **Grant Admin Consent for ...**.
+
+ [ ![Screenshot shows configured API permissions with granting admin consent](./media/cloudflare-azure-ad-integration/grant-admin-consent.png) ](./media/cloudflare-azure-ad-integration/grant-admin-consent.png#lightbox)
++
+7. On the [Cloudflare Zero Trust dashboard](https://dash.teams.cloudflare.com/),
+   navigate to **Settings > Authentication**.
+
+8. Under **Login methods**, select **Add new**.
+
+9. Select **Azure AD**.
+
+10. Enter the Application ID, Application secret, and Directory ID values.
+
+ >[!NOTE]
+ >For Azure AD groups, in **Edit your Azure AD identity provider**, for **Support Groups** select **On**.
+
+11. Select **Save**.
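+
+If you script the registration, delegated permissions can be added from the Azure CLI as well. The following sketch adds only the Microsoft Graph **User.Read** scope, whose well-known permission ID is `e1fe6dd8-ba31-4d61-89e7-88639da4683d`, and then grants admin consent; the application (client) ID is a placeholder, and the IDs for the remaining scopes can be looked up on the Microsoft Graph service principal before adding them the same way:
+
+```azurecli-interactive
+# 00000003-0000-0000-c000-000000000000 is the Microsoft Graph application ID
+az ad app permission add \
+  --id <application-client-id> \
+  --api 00000003-0000-0000-c000-000000000000 \
+  --api-permissions e1fe6dd8-ba31-4d61-89e7-88639da4683d=Scope
+
+# Grant tenant-wide admin consent for the configured permissions
+az ad app permission admin-consent --id <application-client-id>
+```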
+
+## Test the integration
+
+1. To test the integration on the Cloudflare Zero Trust dashboard,
+ navigate to **Settings** > **Authentication**.
+
+2. Under **Login methods**, for Azure AD select **Test**.
+
+ ![Screenshot shows Azure AD as the login method for test](./media/cloudflare-azure-ad-integration/login-methods-test.png)
+
+3. Enter Azure AD credentials.
+
+4. The **Your connection works** message appears.
+
+ ![Screenshot shows Your connection works screen](./media/cloudflare-azure-ad-integration/connection-success-screen.png)
+
+## Next steps
+
+- [Integrate single sign-on (SSO) with Cloudflare](https://developers.cloudflare.com/cloudflare-one/identity/idp-integration/)
+
+- [Cloudflare integration with Azure AD B2C](https://docs.microsoft.com/azure/active-directory-b2c/partner-cloudflare)
active-directory Howto Enforce Signed Saml Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md
+
+ Title: enforce signed SAML authentication requests
+description: Learn how to enforce signed SAML authentication requests.
+++++++ Last updated : 06/29/2022++++
+# SAML Request Signature Verification (Preview)
+
+SAML Request Signature Verification is a feature that validates the signature of signed authentication requests. An app administrator can enable or disable the enforcement of signed requests and upload the public keys that are used for the validation.
+
+If enabled, Azure Active Directory validates the requests against the configured public keys. Authentication requests can fail in the following scenarios:
+
+- Protocol not allowed for signed requests. Only SAML protocol is supported.
+- Request not signed, but verification is enabled.
+- No verification certificate configured for SAML request signature verification.
+- Signature verification failed.
+- Key identifier in request is missing, and the two most recently added certificates don't match the request signature.
+- Request signed but algorithm missing.
+- No certificate matches the provided key identifier.
+- Signature algorithm not allowed. Only RSA-SHA256 is supported.
+
+## Configure SAML Request Signature Verification in the Azure portal
+1. In the Azure portal, navigate to **Azure Active Directory** from the search bar or from Azure services.
+![Azure Active Directory inside Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation1.png)
+1. Navigate to **Enterprise applications** from the left menu.
+![Enterprise Application option inside Azure Portal Navigation](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation2.png)
+1. Select the application to which you want to apply the changes.
+1. Navigate to **Single sign-on.**
+1. In the **Single sign-on** screen, there is a new subsection called **Verification certificates** under **SAML Certificates.**
+![Verification certificates under SAML Certificates on the Enterprise Application page in the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation3.png)
+1. Click on **Edit.**
+1. In the new blade, you can enable the verification of signed requests and opt in to weak algorithm verification if your application still uses RSA-SHA1 to sign the authentication requests.
+1. To enable the verification of signed requests, click **Enable verification certificates** and upload a verification public key that matches with the private key used to sign the request.
+![Enable verification certificates in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation4.png)
+![Upload certificates in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation5.png)
+![Certificate upload success in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation6.png)
+1. Once you have your verification certificate uploaded, click **Save.**
+![Certificate verification save in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation7.png)
+![Certificate update success in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation8.png)
+1. When the verification of signed requests is enabled, the test experience is disabled because the requests are required to be signed by the service provider.
+![Testing disabled warning when signed requests enabled in Enterprise Application within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation9.png)
+1. If you want to see the current configuration of an enterprise application, navigate to the **Single Sign-on** screen and see the summary of your configuration under **SAML Certificates**. There you can see whether the verification of signed requests is enabled, and the count of active and expired verification certificates.
+![Enterprise application configuration in Single Sign-on screen within the Azure Portal](./media/howto-enforce-signed-saml-authentication/samlsignaturevalidation10.png)
+
+## Next steps
+
+* Find out [How Azure AD uses the SAML protocol](../develop/active-directory-saml-protocol-reference.md)
+* Learn the format, security characteristics, and contents of [SAML tokens in Azure AD](../develop/reference-saml-tokens.md)
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
The following partners offer pre-built solutions and detailed guidance for integ
- [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
+- [Cloudflare](../manage-apps/cloudflare-azure-ad-integration.md)
+
- [Fortinet](../saas-apps/fortigate-ssl-vpn-tutorial.md)
- [Palo Alto Networks Global Protect](../saas-apps/paloaltoadmin-tutorial.md)
active-directory Competencyiq Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/competencyiq-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<customer>.competencyiq.com/`

> [!NOTE]
- > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact [CompetencyIQ Client support team](https://www.competencyiq.com/) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Sign on URL value is not real. Update the value with the actual Sign on URL. Contact CompetencyIQ Client support team to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Federation Metadata XML** from the given options as per your requirement and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure CompetencyIQ SSO
-To configure single sign-on on **CompetencyIQ** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [CompetencyIQ support team](https://www.competencyiq.com/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **CompetencyIQ** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to CompetencyIQ support team. They set this setting to have the SAML SSO connection set properly on both sides.
### Create CompetencyIQ test user
-In this section, you create a user called Britta Simon in CompetencyIQ. Work with [CompetencyIQ support team](https://www.competencyiq.com/) to add the users in the CompetencyIQ platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in CompetencyIQ. Work with CompetencyIQ support team to add the users in the CompetencyIQ platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure CompetencyIQ you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure CompetencyIQ you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Spring Cm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/spring-cm-tutorial.md
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
To enable Azure Active Directory users to sign in to SpringCM, they must be provisioned into SpringCM. In the case of SpringCM, provisioning is a manual task. > [!NOTE]
-> For more information, see [Create and Edit a SpringCM User](http://community.springcm.com/s/article/Create-and-Edit-a-SpringCM-User-1619481053).
+> For more information, see [Create and Edit a SpringCM User](https://support.docusign.com/s/document-item?language=en_US&bundleId=fsk1642969066834&topicId=ynn1576609925288.html&_LANG=enus).
**To provision a user account to SpringCM, perform the following steps:**
active-directory Versal Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/versal-tutorial.md
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://versal.com/sso/saml/orgs/<organization_id>`

> [!NOTE]
- > The Reply URL value is not real. Update this value with the actual Reply URL. Contact [Versal Client support team](https://support.versal.com/hc/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update this value with the actual Reply URL. Contact Versal Client support team to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
1. The Versal application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes, where **nameidentifier** is mapped with **user.userprincipalname**. The Versal application expects **nameidentifier** to be mapped with **user.mail**, so you need to edit the attribute mapping by clicking the **Edit** icon and changing the attribute mapping.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
## Configure Versal SSO
-To configure single sign-on on **Versal** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Versal support team](https://support.versal.com/hc/). They set this setting to have the SAML SSO connection set properly on both sides.
+To configure single sign-on on **Versal** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to Versal support team. They set this setting to have the SAML SSO connection set properly on both sides.
### Create Versal test user
-In this section, you create a user called B.Simon in Versal. Follow the [Creating a SAML test user](https://support.versal.com/hc/articles/115011672887-Creating-a-SAML-test-user)
-support guide to create the user B.Simon within your organization. Users must be created and activated in Versal before you use single sign-on.
+In this section, you create a user called B.Simon in Versal. Follow the Creating a SAML test user support guide to create the user B.Simon within your organization. Users must be created and activated in Versal before you use single sign-on.
## Test SSO

In this section, you test your Azure AD single sign-on configuration using a Versal course embedded within your website.
-Please see the [Embedding Organizational Courses](https://support.versal.com/hc/articles/203271866-Embedding-organizational-courses) **SAML Single Sign-On**
+Please see the Embedding Organizational Courses **SAML Single Sign-On**
support guide for instructions on how to embed a Versal course with support for Azure AD single sign-on. You will need to create a course, share it with your organization, and publish it in order to test course embedding.
-Please see [Creating a course](https://support.versal.com/hc/articles/203722528-Create-a-course), [Publishing a course](https://support.versal.com/hc/articles/203753398-Publishing-a-course),
- and [Course and learner management](https://support.versal.com/hc/articles/206029467-Course-and-learner-management) for more information.
## Next steps
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Azure AD Verifiable Credentials architectu
[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verifiable Credentials service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
-If you don't have an Azure Key Vault instance available, follow [these steps](../../key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
+If you don't have an Azure Key Vault instance available, follow [these steps](/azure/key-vault/general/quick-create-portal) to create a key vault using the Azure portal.
> [!NOTE]
> By default, the account that creates a vault is the only one with access. The Verifiable Credentials service needs access to the key vault. You must configure the key vault with an access policy that allows the account used during configuration to create and delete keys. The account used during configuration also requires permission to sign to create the domain binding for Verifiable Credentials. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
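As a hedged sketch of that setup (the vault, resource group, location, and user principal name are placeholders), you can create a vault and grant the configuration account the required key and signing permissions with the Azure CLI:

```azurecli-interactive
# Create the key vault
az keyvault create --name <vault-name> --resource-group <resource-group> --location <location>

# Allow the configuration account to create, delete, and sign with keys
az keyvault set-policy --name <vault-name> --upn <user-principal-name> \
  --key-permissions get create delete sign
```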
Once you have successfully completed the verification steps, you are ready
## Next steps - [Learn how to issue Azure AD Verifiable Credentials from a web application](verifiable-credentials-configure-issuer.md).-- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
+- [Learn how to verify Azure AD Verifiable Credentials](verifiable-credentials-configure-verifier.md).
advisor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Advisor description: Sample Azure Resource Graph queries for Azure Advisor showing use of resource types and tables to access Azure Advisor related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
aks Auto Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-cluster.md
+
+ Title: Automatically upgrade an Azure Kubernetes Service (AKS) cluster
+description: Learn how to automatically upgrade an Azure Kubernetes Service (AKS) cluster to get the latest features and security updates.
++++ Last updated : 07/07/2022++
+# Automatically upgrade an Azure Kubernetes Service (AKS) cluster
+
+Part of the AKS cluster lifecycle involves performing periodic upgrades to the latest Kubernetes version. It's important you apply the latest security releases, or upgrade to get the latest features. Before learning about auto-upgrade, make sure you understand upgrade fundamentals by reading [Upgrade an AKS cluster][upgrade-aks-cluster].
+
+## Why use auto-upgrade
+
+Auto-upgrade provides a set-once-and-forget mechanism that yields tangible time and operational cost benefits. By enabling auto-upgrade, you can ensure your clusters are up to date and don't miss the latest AKS features or patches from AKS and upstream Kubernetes.
+
+AKS follows a strict versioning window with regard to supportability. With properly selected auto-upgrade channels, you can avoid clusters falling into an unsupported version. For more on the AKS support window, see [Supported Kubernetes versions][supported-kubernetes-versions].
+
+## Using auto-upgrade
+
+Automatically completed upgrades are functionally the same as manual upgrades. The timing of upgrades is determined by the selected channel.
+
+The following upgrade channels are available:
+
+|Channel| Action | Example |
+|---|---|---|
+| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes| Default setting if left unchanged|
+| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*|
+| `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*.|
+| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.|
+| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. |
+
+> [!NOTE]
+> Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
+
+Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-aks-cluster].
+
+To set the auto-upgrade channel when creating a cluster, use the *auto-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks create --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable --generate-ssh-keys
+```
+
+To set the auto-upgrade channel on existing cluster, update the *auto-upgrade-channel* parameter, similar to the following example.
+
+```azurecli-interactive
+az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
+```
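+
+To confirm which channel a cluster is using, query its auto-upgrade profile, similar to the following example.
+
+```azurecli-interactive
+az aks show --resource-group myResourceGroup --name myAKSCluster \
+  --query autoUpgradeProfile.upgradeChannel -o tsv
+```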
+
+## Using auto-upgrade with Planned Maintenance
+
+If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance].
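+
+As a brief sketch (resource names are placeholders, and depending on your Azure CLI version this command may require the *aks-preview* extension), a default weekly maintenance window could be created as follows:
+
+```azurecli-interactive
+# Allow maintenance on Mondays starting at 01:00
+az aks maintenanceconfiguration add \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --name default \
+  --weekday Monday \
+  --start-hour 1
+```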
+
+## Best practices for auto-upgrade
+
+The following best practices will help maximize your success when using auto-upgrade:
+
+- To keep your cluster always in a supported version (that is, within the N-2 rule), choose either the `stable` or `rapid` channel.
+- If you're interested in getting the latest patches as soon as possible, use the `patch` channel. The `node-image` channel is a good fit if you want your agent pools to always be running the most recent node images.
+- Follow [Operator best practices][operator-best-practices-scheduler].
+- Follow [PDB best practices][pdb-best-practices].
+
+<!-- INTERNAL LINKS -->
+[supported-kubernetes-versions]: supported-kubernetes-versions.md
+[upgrade-aks-cluster]: upgrade-cluster.md
+[planned-maintenance]: planned-maintenance.md
+[operator-best-practices-scheduler]: operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets
++
+<!-- EXTERNAL LINKS -->
+[pdb-best-practices]: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Like the temporary disk, an ephemeral OS disk is included in the price of the vi
When using ephemeral OS, the OS disk must fit in the VM cache. The sizes for VM cache are available in the [Azure documentation](../virtual-machines/dv3-dsv3-series.md) in parentheses next to IO throughput ("cache size in GiB").
-Using the AKS default VM size Standard_DS2_v2 with the default OS disk size of 100GB as an example, this VM size supports ephemeral OS but only has 86GB of cache size. This configuration would default to managed disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS, they would receive a validation error.
+Using the AKS default VM size [Standard_DS2_v2](/azure/virtual-machines/dv2-dsv2-series#dsv2-series) with the default OS disk size of 100GB as an example, this VM size supports ephemeral OS but only has 86GB of cache size. This configuration would default to managed disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS, they would receive a validation error.
-If a user requests the same Standard_DS2_v2 with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86GB.
+If a user requests the same [Standard_DS2_v2](/azure/virtual-machines/dv2-dsv2-series#dsv2-series) with a 60GB OS disk, this configuration would default to ephemeral OS: the requested size of 60GB is smaller than the maximum cache size of 86GB.
-Using Standard_D8s_v3 with 100GB OS disk, this VM size supports ephemeral OS and has 200GB of cache space. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
+Using [Standard_D8s_v3](/azure/virtual-machines/dv3-dsv3-series#dsv3-series) with 100GB OS disk, this VM size supports ephemeral OS and has 200GB of cache space. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
+
+The latest generation of VM series does not have a dedicated cache, but only temporary storage. Let's assume to use the [Standard_E2bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) VM size with the default OS disk size of 100 GiB as an example. This VM size supports ephemeral OS disks but only has 75 GiB of temporary storage. This configuration would default to managed OS disks if the user does not specify explicitly. If a user explicitly requested ephemeral OS disks, they would receive a validation error.
+
+If a user requests the same [Standard_E2bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) VM size with a 60 GiB OS disk, this configuration would default to ephemeral OS disks: the requested size of 60 GiB is smaller than the maximum temporary storage of 75 GiB.
+
+Using [Standard_E4bds_v5](/azure/virtual-machines/ebdsv5-ebsv5-series#ebdsv5-series) with 100 GiB OS disk, this VM size supports ephemeral OS and has 150 GiB of temporary storage. If a user does not specify the OS disk type, the node pool would receive ephemeral OS by default.
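+
+To make this concrete, the following is a minimal sketch of adding a node pool that explicitly requests an ephemeral OS disk small enough to fit the VM's cache or temporary storage; the resource, cluster, and node pool names are placeholders:
+
+```azurecli-interactive
+# Request an ephemeral OS disk of 48 GiB on a size whose cache can hold it
+az aks nodepool add \
+  --resource-group myResourceGroup \
+  --cluster-name myAKSCluster \
+  --name ephemeralnp \
+  --node-vm-size Standard_DS3_v2 \
+  --node-osdisk-type Ephemeral \
+  --node-osdisk-size 48
+```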
Ephemeral OS requires at least version 2.15.0 of the Azure CLI.
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
description: Learn how to create an RDP connection with Azure Kubernetes Service (AKS) cluster Windows Server nodes for troubleshooting and maintenance tasks. Previously updated : 06/04/2019 Last updated : 07/06/2022 #Customer intent: As a cluster operator, I want to learn how to use RDP to connect to nodes in an AKS cluster to perform maintenance or troubleshoot a problem.
Last updated 06/04/2019
# Connect with RDP to Azure Kubernetes Service (AKS) cluster Windows Server nodes for maintenance or troubleshooting
-Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS Windows Server node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access the AKS Windows Server nodes using RDP. Alternatively, if you want to use SSH to access the AKS Windows Server nodes and you have access to the same keypair that was used during cluster creation, you can follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps]. For security purposes, the AKS nodes are not exposed to the internet.
+Throughout the lifecycle of your Azure Kubernetes Service (AKS) cluster, you may need to access an AKS Windows Server node. This access could be for maintenance, log collection, or other troubleshooting operations. You can access the AKS Windows Server nodes using RDP. For security purposes, the AKS nodes aren't exposed to the internet.
+
+Alternatively, if you want to SSH to your AKS Windows Server nodes, you'll need access to the same key-pair that was used during cluster creation. Follow the steps in [SSH into Azure Kubernetes Service (AKS) cluster nodes][ssh-steps].
This article shows you how to create an RDP connection with an AKS node using their private IP addresses.
This article shows you how to create an RDP connection with an AKS node using th
This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure CLI][aks-quickstart-windows-cli]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
-If you need to reset the password you can use `az aks update` to change the password.
+If you need to reset the password, use `az aks update` to change the password.
```azurecli-interactive
az aks update -g myResourceGroup -n myAKSCluster --windows-admin-password $WINDOWS_ADMIN_PASSWORD
```
-If you need to reset both the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM
+If you need to reset the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM
](/troubleshoot/azure/virtual-machines/reset-rdp). You also need the Azure CLI version 2.0.61 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
You also need the Azure CLI version 2.0.61 or later installed and configured. Ru
This article assumes that you have an existing AKS cluster with a Windows Server node. If you need an AKS cluster, see the article on [creating an AKS cluster with a Windows container using the Azure PowerShell][aks-quickstart-windows-powershell]. You need the Windows administrator username and password for the Windows Server node you want to troubleshoot. You also need an RDP client such as [Microsoft Remote Desktop][rdp-mac].
-If you need to reset the password you can use `Set-AzAksCluster` to change the password.
+If you need to reset the password, use `Set-AzAksCluster` to change the password.
```azurepowershell-interactive
$cluster = Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster
$cluster.WindowsProfile.AdminPassword = $WINDOWS_ADMIN_PASSWORD
$cluster | Set-AzAksCluster
```
-If you need to reset both the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM
+If you need to reset the username and password, see [Reset Remote Desktop Services or its administrator password in a Windows VM
](/troubleshoot/azure/virtual-machines/reset-rdp). You also need the Azure PowerShell version 7.5.0 or later installed and configured. Run `Get-InstalledModule -Name Az` to find the version. If you need to install or upgrade, see [Install Azure PowerShell][install-azure-powershell].
The following example creates a virtual machine named *myVM* in the *myResourceG
### [Azure CLI](#tab/azure-cli)
-First, get the subnet used by your Windows Server node pool. To get the subnet ID, you need the name of the subnet. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+You'll need to get the subnet ID used by your Windows Server node pool. The commands below will query for the following information:
+* The cluster's node resource group
+* The virtual network
+* The subnet's name
+* The subnet ID
```azurecli-interactive CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
SUBNET_NAME=$(az network vnet subnet list -g $CLUSTER_RG --vnet-name $VNET_NAME
SUBNET_ID=$(az network vnet subnet show -g $CLUSTER_RG --vnet-name $VNET_NAME --name $SUBNET_NAME --query id -o tsv) ```
-Now that you have the SUBNET_ID, run the following command in the same Azure Cloud Shell window to create the VM:
+Now that you have the SUBNET_ID, run the following command in the same Azure Cloud Shell window to create the VM:
```azurecli-interactive
+PUBLIC_IP_ADDRESS="myVMPublicIP"
+
az vm create \
    --resource-group myResourceGroup \
    --name myVM \
    --image win2019datacenter \
    --admin-username azureuser \
- --admin-password myP@ssw0rd12 \
+ --admin-password {admin-password} \
--subnet $SUBNET_ID \
+ --nic-delete-option delete \
+ --os-disk-delete-option delete \
+ --nsg "" \
+ --public-ip-address $PUBLIC_IP_ADDRESS \
    --query publicIpAddress -o tsv
```
The following example output shows the VM has been successfully created and disp
13.62.204.18
```
-Record the public IP address of the virtual machine. You will use this address in a later step.
+Record the public IP address of the virtual machine. You'll use this address in a later step.
### [Azure PowerShell](#tab/azure-powershell)
-First, get the subnet used by your Windows Server node pool. You need the name of the subnet and its address prefix. To get the name of the subnet, you need the name of the VNet. Get the VNet name by querying your cluster for its list of networks. To query the cluster, you need its name. You can get all of these by running the following in the Azure Cloud Shell:
+You'll need to get the subnet ID used by your Windows Server node pool. The commands below will query for the following information:
+* The cluster's node resource group
+* The virtual network
+* The subnet's name and address prefix
+* The subnet ID
```azurepowershell-interactive $CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
$ipParams = @{
New-AzPublicIpAddress @ipParams

$vmParams = @{
- ResourceGroupName = 'myResourceGroup'
- Name = 'myVM'
- Image = 'win2019datacenter'
- Credential = Get-Credential azureuser
- VirtualNetworkName = $VNET_NAME
- AddressPrefix = $ADDRESS_PREFIX
- SubnetName = $SUBNET_NAME
- SubnetAddressPrefix = $SUBNET_ADDRESS_PREFIX
- PublicIpAddressName = 'myPublicIP'
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'myVM'
+ Image = 'win2019datacenter'
+ Credential = Get-Credential azureuser
+ VirtualNetworkName = $VNET_NAME
+ AddressPrefix = $ADDRESS_PREFIX
+ SubnetName = $SUBNET_NAME
+ SubnetAddressPrefix = $SUBNET_ADDRESS_PREFIX
+ PublicIpAddressName = 'myPublicIP'
+ OSDiskDeleteOption = 'Delete'
+ NetworkInterfaceDeleteOption = 'Delete'
+ DataDiskDeleteOption = 'Delete'
}

New-AzVM @vmParams
The following example output shows the VM has been successfully created and disp
13.62.204.18
```
-Record the public IP address of the virtual machine. You will use this address in a later step.
+Record the public IP address of the virtual machine. You'll use this address in a later step.
AKS node pool subnets are protected with NSGs (Network Security Groups) by defau
> [!NOTE] > The NSGs are controlled by the AKS service. Any change you make to the NSG will be overwritten at any time by the control plane.
->
### [Azure CLI](#tab/azure-cli)
NSG_NAME=$(az network nsg list -g $CLUSTER_RG --query [].name -o tsv)
Then, create the NSG rule:

```azurecli-interactive
-az network nsg rule create --name tempRDPAccess --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --priority 100 --destination-port-range 3389 --protocol Tcp --description "Temporary RDP access to Windows nodes"
+az network nsg rule create \
+ --name tempRDPAccess \
+ --resource-group $CLUSTER_RG \
+ --nsg-name $NSG_NAME \
+ --priority 100 \
+ --destination-port-range 3389 \
+ --protocol Tcp \
+ --description "Temporary RDP access to Windows nodes"
```

### [Azure PowerShell](#tab/azure-powershell)
aks-nodepool1-42485177-vmss000000 Ready agent 18h v1.12.7 10.240.0.4
aksnpwin000000 Ready agent 13h v1.12.7 10.240.0.67 <none> Windows Server Datacenter 10.0.17763.437
```
-Record the internal IP address of the Windows Server node you wish to troubleshoot. You will use this address in a later step.
+Record the internal IP address of the Windows Server node you wish to troubleshoot. You'll use this address in a later step.
## Connect to the virtual machine and node
After you've connected to your virtual machine, connect to the *internal IP addr
![Image of connecting to the Windows Server node using an RDP client](media/rdp/node-rdp.png)
-You are now connected to your Windows Server node.
+You're now connected to your Windows Server node.
![Image of cmd window in the Windows Server node](media/rdp/node-session.png)
You can now run any troubleshooting commands in the *cmd* window. Since Windows
When done, exit the RDP connection to the Windows Server node then exit the RDP session to the virtual machine. After you exit both RDP sessions, delete the virtual machine with the [az vm delete][az-vm-delete] command:

```azurecli-interactive
-az vm delete --resource-group myResourceGroup --name myVM
+# Delete the virtual machine
+az vm delete \
+ --resource-group myResourceGroup \
+ --name myVM
```
-And the NSG rule:
+Delete the public IP associated with the virtual machine:
```azurecli-interactive
-CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
-NSG_NAME=$(az network nsg list -g $CLUSTER_RG --query [].name -o tsv)
-```
+az network public-ip delete \
+ --resource-group myResourceGroup \
+ --name $PUBLIC_IP_ADDRESS
+ ```
+
+Delete the NSG rule:
```azurecli-interactive
-az network nsg rule delete --resource-group $CLUSTER_RG --nsg-name $NSG_NAME --name tempRDPAccess
+CLUSTER_RG=$(az aks show -g myResourceGroup -n myAKSCluster --query nodeResourceGroup -o tsv)
+NSG_NAME=$(az network nsg list -g $CLUSTER_RG --query [].name -o tsv)
+az network nsg rule delete \
+ --resource-group $CLUSTER_RG \
+ --nsg-name $NSG_NAME \
+ --name tempRDPAccess
```

### [Azure PowerShell](#tab/azure-powershell)
When done, exit the RDP connection to the Windows Server node then exit the RDP
Remove-AzVM -ResourceGroupName myResourceGroup -Name myVM
```
-And the NSG rule:
+Delete the public IP associated with the virtual machine:
+
+```azurepowershell-interactive
+Remove-AzPublicIpAddress -ResourceGroupName myResourceGroup -Name myPublicIP
+```
+
+Delete the NSG rule:
```azurepowershell-interactive
$CLUSTER_RG = (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
$NSG_NAME = (Get-AzNetworkSecurityGroup -ResourceGroupName $CLUSTER_RG).Name
+Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Remove-AzNetworkSecurityRuleConfig -Name tempRDPAccess | Set-AzNetworkSecurityGroup
```
+Delete the NSG created by default from New-AzVM:
+ ```azurepowershell-interactive
-Get-AzNetworkSecurityGroup -Name $NSG_NAME -ResourceGroupName $CLUSTER_RG | Remove-AzNetworkSecurityRuleConfig -Name tempRDPAccess | Set-AzNetworkSecurityGroup
+Remove-AzNetworkSecurityGroup -ResourceGroupName myResourceGroup -Name myVM
```
+## Connect with Azure Bastion
+
+Alternatively, you can use [Azure Bastion][azure-bastion] to connect to your Windows Server node.
+
+### Deploy Azure Bastion
+
+To deploy Azure Bastion, you'll need to find the virtual network your AKS cluster is connected to.
+
+1. In the Azure portal, go to **Virtual networks**. Select the virtual network your AKS cluster is connected to.
+1. Under **Settings**, select **Bastion**, then select **Deploy Bastion**. Wait until the process is finished before going to the next step.
+
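+
+If you'd rather script the deployment, the following Azure CLI sketch shows the general shape; the resource names and address prefix are placeholders, and Bastion requires a dedicated subnet named *AzureBastionSubnet* plus a Standard SKU public IP:
+
+```azurecli-interactive
+# Create the dedicated Bastion subnet in the cluster's virtual network
+az network vnet subnet create --resource-group <node-resource-group> --vnet-name <vnet-name> \
+  --name AzureBastionSubnet --address-prefixes 10.240.255.0/26
+
+# Create a Standard SKU public IP for the Bastion host
+az network public-ip create --resource-group <node-resource-group> --name myBastionIP --sku Standard
+
+# Deploy the Bastion host
+az network bastion create --resource-group <node-resource-group> --name myBastion \
+  --vnet-name <vnet-name> --public-ip-address myBastionIP
+```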
+### Connect to your Windows Server nodes using Azure Bastion
+
+Go to the node resource group of the AKS cluster. Run the command below in the Azure Cloud Shell to get the name of your node resource group:
+
+#### [Azure CLI](#tab/azure-cli)
+
+```azurecli-interactive
+az aks show -n myAKSCluster -g myResourceGroup --query 'nodeResourceGroup' -o tsv
+```
+
+#### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell-interactive
+(Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).nodeResourceGroup
+```
+++
+1. Select **Overview**, and select your Windows node pool virtual machine scale set.
+1. Under **Settings**, select **Instances**. Select a Windows Server node that you'd like to connect to.
+1. Under **Support + troubleshooting**, select **Bastion**.
+1. Enter the credentials you set up when the AKS cluster was created. Select **Connect**.
+
+You can now run any troubleshooting commands in the *cmd* window. Since Windows Server nodes use Windows Server Core, there's not a full GUI or other GUI tools when you connect to a Windows Server node over RDP.
+
+> [!NOTE]
+> If you close out of the terminal window, press **CTRL + ALT + End**, select **Task Manager**, select **More details**, select **File**, select **Run new task**, and enter **cmd.exe** to open another terminal. You can also logout and re-connect with Bastion.
+
+### Remove Bastion access
+
+When you're finished, exit the Bastion session and remove the Bastion resource.
+
+1. In the Azure portal, go to **Bastion** and select the Bastion resource you created.
+1. At the top of the page, select **Delete**. Wait until the process is complete before proceeding to the next step.
+1. In the Azure portal, go to **Virtual networks**. Select the virtual network that your AKS cluster is connected to.
+1. Under **Settings**, select **Subnets**, and delete the **AzureBastionSubnet** subnet that was created for the Bastion resource.
+ ## Next steps
-If you need additional troubleshooting data, you can [view the Kubernetes master node logs][view-master-logs] or [Azure Monitor][azure-monitor-containers].
+If you need more troubleshooting data, you can [view the Kubernetes primary node logs][view-primary-logs] or [Azure Monitor][azure-monitor-containers].
<!-- EXTERNAL LINKS --> [kubectl]: https://kubernetes.io/docs/user-guide/kubectl/
If you need additional troubleshooting data, you can [view the Kubernetes master
[install-azure-cli]: /cli/azure/install-azure-cli [install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md
-[view-master-logs]: view-master-logs.md
+[view-primary-logs]: ../azure-monitor/containers/container-insights-log-query.md#resource-logs
+[azure-bastion]: ../bastion/bastion-overview.md
aks Start Stop Nodepools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-nodepools.md
Title: Start and stop a node pool on Azure Kubernetes Service (AKS) (Preview)
+ Title: Start and stop a node pool on Azure Kubernetes Service (AKS)
description: Learn how to start or stop a node pool on Azure Kubernetes Service (AKS).
-# Start and stop an Azure Kubernetes Service (AKS) node pool (Preview)
+# Start and stop an Azure Kubernetes Service (AKS) node pool
Your AKS workloads may not need to run continuously, for example, a development cluster that has node pools running specific workloads. To optimize your compute costs, you can completely turn off (stop) the node pools in your AKS cluster.
Your AKS workloads may not need to run continuously, for example a development c
This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-### Install aks-preview CLI extension
--
-You also need the *aks-preview* Azure CLI extension. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
-
-```azurecli-interactive
-# Install the aks-preview extension
-az extension add --name aks-preview
-
-# Update the extension to make sure you have the latest version installed
-az extension update --name aks-preview
-```
-
-### Register the `PreviewStartStopAgentPool` preview feature
-
-To use the feature, you must also enable the `PreviewStartStopAgentPool` feature flag on your subscription.
-
-Register the `PreviewStartStopAgentPool` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
-
-```azurecli-interactive
-az feature register --namespace "Microsoft.ContainerService" --name "PreviewStartStopAgentPool"
-```
-
-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
-
-```azurecli-interactive
-az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/PreviewStartStopAgentPool')].{Name:name,State:properties.state}"
-```
-
-When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
-
-```azurecli-interactive
-az provider register --namespace Microsoft.ContainerService
-```
- ## Stop an AKS node pool > [!IMPORTANT]
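For reference, stopping and starting a node pool uses the `az aks nodepool stop` and `az aks nodepool start` commands; a minimal sketch with illustrative names:

```azurecli-interactive
# Stop all nodes in the pool to save compute costs (names are placeholders).
az aks nodepool stop \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name mynodepool

# Start the pool again when the workloads are needed.
az aks nodepool start \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --nodepool-name mynodepool
```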
aks Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md
az aks nodepool update -n mynodepool -g MyResourceGroup --cluster-name MyManaged
With a list of available versions for your AKS cluster, use the [az aks upgrade][az-aks-upgrade] command to upgrade. During the upgrade process, AKS will: - add a new buffer node (or as many nodes as configured in [max surge](#customize-node-surge-upgrade)) to the cluster that runs the specified Kubernetes version. -- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified).
+- [cordon and drain][kubernetes-drain] one of the old nodes to minimize disruption to running applications (if you're using max surge, it will [cordon and drain][kubernetes-drain] as many nodes at the same time as the number of buffer nodes specified).
- When the old node is fully drained, it will be reimaged to receive the new version and it will become the buffer node for the following node to be upgraded. - This process repeats until all nodes in the cluster have been upgraded. - At the end of the process, the last buffer node will be deleted, maintaining the existing agent node count and zone balance.
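As the max surge note in the list above suggests, the buffer size is tunable per node pool. A hedged sketch using the `--max-surge` setting (the 33% value is only an example):

```azurecli-interactive
# Allow up to a third of the pool to be surged and drained in parallel during upgrade.
az aks nodepool update \
    --resource-group MyResourceGroup \
    --cluster-name MyManagedCluster \
    --name mynodepool \
    --max-surge 33%
```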
default 9m22s Normal Surge node/aks-nodepool1-96663640-vmss000002 Created a surg
## Set auto-upgrade channel
-In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. The following upgrade channels are available:
-
-|Channel| Action | Example
-||||
-| `none`| disables auto-upgrades and keeps the cluster at its current version of Kubernetes| Default setting if left unchanged|
-| `patch`| automatically upgrade the cluster to the latest supported patch version when it becomes available while keeping the minor version the same.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.17.9*|
-| `stable`| automatically upgrade the cluster to the latest supported patch release on minor version *N-1*, where *N* is the latest supported minor version.| For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster is upgraded to *1.18.6*.
-| `rapid`| automatically upgrade the cluster to the latest supported patch release on the latest supported minor version.| In cases where the cluster is at a version of Kubernetes that is at an *N-2* minor version where *N* is the latest supported minor version, the cluster first upgrades to the latest supported patch version on *N-1* minor version. For example, if a cluster is running version *1.17.7* and versions *1.17.9*, *1.18.4*, *1.18.6*, and *1.19.1* are available, your cluster first is upgraded to *1.18.6*, then is upgraded to *1.19.1*.
-| `node-image`| automatically upgrade the node image to the latest version available.| Microsoft provides patches and new images for image nodes frequently (usually weekly), but your running nodes won't get the new images unless you do a node image upgrade. Turning on the node-image channel will automatically update your node images whenever a new version is available. |
-
-> [!NOTE]
-> Cluster auto-upgrade only updates to GA versions of Kubernetes and will not update to preview versions.
-
-Automatically upgrading a cluster follows the same process as manually upgrading a cluster. For more information, see [Upgrade an AKS cluster][upgrade-cluster].
-
-To set the auto-upgrade channel when creating a cluster, use the *auto-upgrade-channel* parameter, similar to the following example.
-
-```azurecli-interactive
-az aks create --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable --generate-ssh-keys
-```
-
-To set the auto-upgrade channel on existing cluster, update the *auto-upgrade-channel* parameter, similar to the following example.
-
-```azurecli-interactive
-az aks update --resource-group myResourceGroup --name myAKSCluster --auto-upgrade-channel stable
-```
-
-## Using Cluster Auto-Upgrade with Planned Maintenance
-
-If you're using Planned Maintenance and Auto-Upgrade, your upgrade will start during your specified maintenance window. For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)][planned-maintenance].
+In addition to manually upgrading a cluster, you can set an auto-upgrade channel on your cluster. For more information, see [Auto-upgrading an AKS cluster][aks-auto-upgrade].
## Special considerations for node pools that span multiple Availability Zones
This article showed you how to upgrade an existing AKS cluster. To learn more ab
[nodepool-upgrade]: use-multiple-node-pools.md#upgrade-a-node-pool [upgrade-cluster]: #upgrade-an-aks-cluster [planned-maintenance]: planned-maintenance.md
+[aks-auto-upgrade]: auto-upgrade-cluster.md
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
If `identity-type=jwt` is configured, a JWT token is required to be validated. T
| authorization-id | The authorization resource identifier. | Yes | | | context-variable-name | The name of the context variable to receive the [`Authorization` object](#authorization-object). | Yes | | | identity-type | Type of identity to be checked against the authorization access policy. <br> - `managed`: managed identity of the API Management service. <br> - `jwt`: JWT bearer token specified in the `identity` attribute. | No | managed |
-| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: https://azure-api.net/authorization-manager <br> - `oid`: Permission object id <br> - `tid`: Permission tenant id | No | |
+| identity | An Azure AD JWT bearer token to be checked against the authorization permissions. Ignored for `identity-type` other than `jwt`. <br><br>Expected claims: <br> - audience: `https://azure-api.net/authorization-manager` <br> - `oid`: Permission object ID <br> - `tid`: Permission tenant ID | No | |
| ignore-error | Boolean. If acquiring the authorization context results in an error (for example, the authorization resource is not found or is in an error state): <br> - `true`: the context variable is assigned a value of null. <br> - `false`: return `500` | No | false | ### Authorization object
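For illustration, the attributes described in the table above might be combined as follows (a hedged sketch; the provider and authorization IDs and the variable name are placeholders, not values from this article):

```xml
<!-- Sketch only: IDs and the variable name are illustrative placeholders. -->
<get-authorization-context
    provider-id="my-provider"
    authorization-id="my-authorization"
    context-variable-name="my-auth-context"
    identity-type="managed"
    ignore-error="false" />
```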
To understand the difference between rate limits and quotas, [see Rate limits an
> * This policy can be used only once per policy document. > * [Policy expressions](api-management-policy-expressions.md) cannot be used in any of the policy attributes for this policy.
-> [!CAUTION]
-> Due to the distributed nature of throttling architecture, rate limiting is never completely accurate. The difference between the configured number and the real number of allowed requests varies based on request volume and rate, backend latency, and other factors.
[!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)]
To understand the difference between rate limits and quotas, [see Rate limits an
For more information and examples of this policy, see [Advanced request throttling with Azure API Management](./api-management-sample-flexible-throttling.md).
-> [!CAUTION]
-> Due to the distributed nature of throttling architecture, rate limiting is never completely accurate. The difference between the configured number and the real number of allowed requests varies based on request volume and rate, backend latency, and other factors.
[!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
To understand the difference between rate limits and quotas, [see Rate limits an
> * This policy can be used only once per policy document. > * [Policy expressions](api-management-policy-expressions.md) cannot be used in any of the policy attributes for this policy. + [!INCLUDE [api-management-policy-generic-alert](../../includes/api-management-policy-generic-alert.md)] ### Policy statement
For more information and examples of this policy, see [Advanced request throttli
To understand the difference between rate limits and quotas, [see Rate limits and quotas.](./api-management-sample-flexible-throttling.md#rate-limits-and-quotas) ++ [!INCLUDE [api-management-policy-form-alert](../../includes/api-management-policy-form-alert.md)]
api-management Api Management Cross Domain Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-cross-domain-policies.md
Title: Azure API Management cross domain policies | Microsoft Docs
-description: Reference for the cross domain policies available for use in Azure API Management. Provides policy usage, settings, and examples.
+ Title: Azure API Management cross-domain policies | Microsoft Docs
+description: Reference for the policies in Azure API Management to enable cross-domain calls from various clients. Provides policy usage, settings, and examples.
Previously updated : 03/07/2022 Last updated : 07/05/2022
-# API Management cross domain policies
-This article provides a reference for API Management policies used to enable cross domain calls from different clients.
+# API Management cross-domain policies
+This article provides a reference for API Management policies used to enable cross-domain calls from different clients.
[!INCLUDE [api-management-policy-intro-links](../../includes/api-management-policy-intro-links.md)]
-## <a name="CrossDomainPolicies"></a> Cross domain policies
+## <a name="CrossDomainPolicies"></a> Cross-domain policies
- [Allow cross-domain calls](api-management-cross-domain-policies.md#AllowCrossDomainCalls) - Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. - [CORS](api-management-cross-domain-policies.md#CORS) - Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
This policy can be used in the following policy [sections](./api-management-howt
## <a name="CORS"></a> CORS The `cors` policy adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients.
-> [!NOTE]
-> If request matches an operation with an OPTIONS method defined in the API, pre-flight request processing logic associated with CORS policies will not be executed. Therefore, such operations can be used to implement custom pre-flight processing logic.
-> [!IMPORTANT]
-> If you configure the CORS policy at the product scope, and your API uses subscription key authentication, the policy will only work when requests include a subscription key as a query parameter.
+### About CORS
-CORS allows a browser and a server to interact and determine whether or not to allow specific cross-origin requests (i.e. XMLHttpRequests calls made from JavaScript on a web page to other domains). This allows for more flexibility than only allowing same-origin requests, but is more secure than allowing all cross-origin requests.
+[CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) is an HTTP header-based standard that allows a browser and a server to interact and determine whether or not to allow specific cross-origin requests (`XMLHttpRequest` calls made from JavaScript on a web page to other domains). This allows for more flexibility than only allowing same-origin requests, but is more secure than allowing all cross-origin requests.
-You need to apply the CORS policy to enable the interactive console in the developer portal. Refer to the [developer portal documentation](./developer-portal-faq.md#cors) for details.
+CORS specifies two types of [cross-origin requests](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#specifications):
+1. **Preflighted (or "preflight") requests** - The browser first sends an HTTP request using the `OPTIONS` method to the server, to determine whether the actual request is permitted. If the server response includes the `Access-Control-Allow-Origin` header that allows access, the browser follows with the actual request. (A sketch of this exchange follows the list.)
+
+1. **Simple requests** - These requests include one or more extra `Origin` headers but don't trigger a CORS preflight. Only requests using the `GET` and `HEAD` methods and a limited set of request headers are allowed.
+
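As a sketch of the preflight flow described above (the host name, path, and headers are hypothetical), the exchange looks roughly like this:

```
OPTIONS /api/orders HTTP/1.1
Origin: https://client.contoso.example
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: x-custom-header

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://client.contoso.example
Access-Control-Allow-Methods: PUT
Access-Control-Allow-Headers: x-custom-header
```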
+### `cors` policy scenarios
+
+Configure the `cors` policy in API Management for the following scenarios:
+
+* Enable the interactive test console in the developer portal. Refer to the [developer portal documentation](./developer-portal-faq.md#cors) for details.
+ > [!NOTE]
+ > When you enable CORS for the interactive console, by default API Management configures the `cors` policy at the global scope.
+
+* Enable API Management to reply to preflight requests or to pass through simple CORS requests when the backends don't provide their own CORS support.
+
+ > [!NOTE]
+ > If a request matches an operation with an `OPTIONS` method defined in the API, preflight request processing logic associated with the `cors` policy will not be executed. Therefore, such operations can be used to implement custom preflight processing logic - for example, to apply the `cors` policy only under certain conditions.
+
+### Common configuration issues
+
+* **Subscription key in header** - If you configure the `cors` policy at the *product* scope, and your API uses subscription key authentication, the policy won't work when the subscription key is passed in a header. As a workaround, modify requests to include a subscription key as a query parameter.
+* **API with header versioning** - If you configure the `cors` policy at the *API* scope, and your API uses a header-versioning scheme, the policy won't work because the version is passed in a header. You may need to configure an alternative versioning method such as a path or query parameter.
+* **Policy order** - You may experience unexpected behavior if the `cors` policy is not the first policy in the inbound section. Select **Calculate effective policy** in the policy editor to check the [policy evaluation order](set-edit-policies.md#use-base-element-to-set-policy-evaluation-order) at each scope. Generally, only the first `cors` policy is applied.
+* **Empty 200 OK response** - In some policy configurations, certain cross-origin requests complete with an empty `200 OK` response. This response is expected when `terminate-unmatched-request` is set to its default value of `true` and an incoming request has an `Origin` header that doesn't match an allowed origin configured in the `cors` policy.
### Policy statement
You need to apply the CORS policy to enable the interactive console in the devel
``` ### Example
-This example demonstrates how to support [pre-flight requests](https://developer.mozilla.org/docs/Web/HTTP/CORS#preflighted_requests), such as those with custom headers or methods other than GET and POST. To support custom headers and additional HTTP verbs, use the `allowed-methods` and `allowed-headers` sections as shown in the following example.
+This example demonstrates how to support [preflight requests](https://developer.mozilla.org/docs/Web/HTTP/CORS#preflighted_requests), such as those with custom headers or methods other than `GET` and `POST`. To support custom headers and other HTTP verbs, use the `allowed-methods` and `allowed-headers` sections as shown in the following example.
```xml <cors allow-credentials="true">
This example demonstrates how to support [pre-flight requests](https://developer
|cors|Root element.|Yes|N/A| |allowed-origins|Contains `origin` elements that describe the allowed origins for cross-domain requests. `allowed-origins` can contain either a single `origin` element that specifies `*` to allow any origin, or one or more `origin` elements that contain a URI.|Yes|N/A| |origin|The value can be either `*` to allow all origins, or a URI that specifies a single origin. The URI must include a scheme, host, and port.|Yes|If the port is omitted in a URI, port 80 is used for HTTP and port 443 is used for HTTPS.|
-|allowed-methods|This element is required if methods other than GET or POST are allowed. Contains `method` elements that specify the supported HTTP verbs. The value `*` indicates all methods.|No|If this section is not present, GET and POST are supported.|
+|allowed-methods|This element is required if methods other than `GET` or `POST` are allowed. Contains `method` elements that specify the supported HTTP verbs. The value `*` indicates all methods.|No|If this section isn't present, `GET` and `POST` are supported.|
|method|Specifies an HTTP verb.|At least one `method` element is required if the `allowed-methods` section is present.|N/A|
-|allowed-headers|This element contains `header` elements specifying names of the headers that can be included in the request.|No|N/A|
+|allowed-headers|This element contains `header` elements specifying names of the headers that can be included in the request.|Yes|N/A|
|expose-headers|This element contains `header` elements specifying names of the headers that will be accessible by the client.|No|N/A|
-|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or `expose-headers` if the section is present.|N/A|
+|header|Specifies a header name.|At least one `header` element is required in `allowed-headers` or in `expose-headers` if that section is present.|N/A|
> [!CAUTION] > Use the `*` wildcard with care in policy settings. This configuration may be overly permissive and may make an API more vulnerable to certain [API security threats](mitigate-owasp-api-threats.md#security-misconfiguration).
This example demonstrates how to support [pre-flight requests](https://developer
|Name|Description|Required|Default| |-|--|--|-| |allow-credentials|The `Access-Control-Allow-Credentials` header in the preflight response will be set to the value of this attribute and affect the client's ability to submit credentials in cross-domain requests.|No|false|
-|terminate-unmatched-request|This attribute controls the processing of cross-origin requests that don't match the CORS policy settings. When OPTIONS request is processed as a pre-flight request and doesn't match CORS policy settings: If the attribute is set to `true`, immediately terminate the request with an empty 200 OK response; If the attribute is set to `false`, check inbound for other in-scope CORS policies that are direct children of the inbound element and apply them. If no CORS policies are found, terminate the request with an empty 200 OK response. When GET or HEAD request includes the Origin header (and therefore is processed as a cross-origin request) and doesn't match CORS policy settings: If the attribute is set to `true`, immediately terminate the request with an empty 200 OK response; If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|true|
-|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache pre-flight response.|No|0|
+|preflight-result-max-age|The `Access-Control-Max-Age` header in the preflight response will be set to the value of this attribute and affect the user agent's ability to cache the preflight response.|No|0|
+|terminate-unmatched-request|Controls the processing of cross-origin requests that don't match the policy settings. When an `OPTIONS` request is processed as a preflight request and the `Origin` header doesn't match policy settings: If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response; If the attribute is set to `false`, check inbound for other in-scope `cors` policies that are direct children of the inbound element and apply them. If no `cors` policies are found, terminate the request with an empty `200 OK` response. <br/><br/>When a `GET` or `HEAD` request includes the `Origin` header (and therefore is processed as a simple cross-origin request), and doesn't match policy settings: If the attribute is set to `true`, immediately terminate the request with an empty `200 OK` response; If the attribute is set to `false`, allow the request to proceed normally and don't add CORS headers to the response.|No|true|
### Usage This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
This policy can be used in the following policy [sections](./api-management-howt
- **Policy sections:** inbound - **Policy scopes:** all scopes
+#### Usage notes
+ * You may configure the `cors` policy at more than one scope (for example, at the product scope and the global scope). Ensure that the `base` element is configured at the operation, API, and product scopes to inherit needed policies at the parent scopes. (A sketch follows these notes.)
+* Only the `cors` policy is evaluated on the `OPTIONS` request during preflight. Remaining configured policies are evaluated on the approved request.
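As a sketch of the first usage note above (scopes and contents are illustrative), an API-scope inbound section that inherits a parent-scope `cors` policy via `base` might look like:

```xml
<!-- Sketch only: base pulls in parent-scope policies (including cors) first. -->
<inbound>
    <base />
    <!-- other API-scope policies follow here -->
</inbound>
```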
+ ## <a name="JSONP"></a> JSONP The `jsonp` policy adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients. JSONP is a method used in JavaScript programs to request data from a server in a different domain. JSONP bypasses the limitation enforced by most web browsers where access to web pages must be in the same domain.
api-management Api Management Key Concepts Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts-experiment.md
+
+ Title: Azure API Management - Overview and key concepts | Microsoft Docs
+description: Introduction to key scenarios, capabilities, and concepts of the Azure API Management service. API Management supports the full API lifecycle.
+
+documentationcenter: ''
+
+editor: ''
+
+ Last updated : 06/27/2022
+# What is Azure API Management?
+
+This article provides an overview of common scenarios and key components of Azure API Management. Azure API Management is a hybrid, multicloud management platform for APIs across all environments. As a platform-as-a-service, API Management supports the complete API lifecycle.
+
+> [!TIP]
+> If you're already familiar with API Management and ready to start, see these resources:
+> * [Features and service tiers](api-management-features.md)
+> * [Create an API Management instance](get-started-create-service-instance.md)
+> * [Import and publish an API](import-and-publish.md)
+> * [API Management policies](api-management-howto-policies.md)
+
+## Scenarios
+
+APIs enable digital experiences, simplify application integration, underpin new digital products, and make data and services reusable and universally accessible. With the proliferation of APIs and the increasing dependency on them, organizations need to manage them as first-class assets throughout their lifecycle.
+++
+Azure API Management helps customers meet these challenges:
+
+* Abstract backend architecture diversity and complexity from API consumers
+* Securely expose services hosted on and outside of Azure as APIs
+* Protect, accelerate, and observe APIs
+* Enable API discovery and consumption by internal and external users
+
+Common scenarios include:
+
+* **Unlocking legacy assets** - APIs are used to abstract and modernize legacy backends and make them accessible from new cloud services and modern applications. APIs allow innovation without the risk, cost, and delays of migration.
+* **API-centric app integration** - APIs are easily consumable, standards-based, and self-describing mechanisms for exposing and accessing data, applications, and processes. They simplify and reduce the cost of app integration.
+* **Multi-channel user experiences** - APIs are frequently used to enable user experiences such as web, mobile, wearable, or Internet of Things applications. Reuse APIs to accelerate development and ROI.
+* **B2B integration** - APIs exposed to partners and customers lower the barrier to integrate business processes and exchange data between business entities. APIs eliminate the overhead inherent in point-to-point integration. Especially with self-service discovery and onboarding enabled, APIs are the primary tools for scaling B2B integration.
+
+## API Management components
+
+Azure API Management is made up of an API *gateway*, a *management plane*, and a *developer portal*. These components are Azure-hosted and fully managed by default. API Management is available in various [tiers](api-management-features.md) differing in capacity and features.
++
+## API gateway
+
+All requests from client applications first reach the API gateway, which then forwards them to respective backend services. The API gateway acts as a façade to the backend services, allowing API providers to abstract API implementations and evolve backend architecture without impacting API consumers. The gateway enables consistent configuration of routing, security, throttling, caching, and observability.
+
+The API gateway:
+
+ * Accepts API calls and routes them to configured backends
+ * Verifies API keys, JWT tokens, certificates, and other credentials
+ * Enforces usage quotas and rate limits
+ * Optionally transforms requests and responses as specified in [policy statements](#policies)
+ * If configured, caches responses to improve response latency and minimize the load on backend services
+ * Emits logs, metrics, and traces for monitoring, reporting, and troubleshooting
+
+### Self-hosted gateway
+With the [self-hosted gateway](self-hosted-gateway-overview.md), customers can deploy the API gateway to the same environments where they host their APIs, to optimize API traffic and ensure compliance with local regulations and guidelines. The self-hosted gateway enables customers with hybrid IT infrastructure to manage APIs hosted on-premises and across clouds from a single API Management service in Azure.
+
+The self-hosted gateway is packaged as a Linux-based Docker container and is commonly deployed to Kubernetes, including to Azure Kubernetes Service and [Azure Arc-enabled Kubernetes](how-to-deploy-self-hosted-gateway-azure-arc.md).
+
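As a sketch of what a container-based deployment can look like (the environment file and container name are illustrative; the configuration endpoint and auth token values come from the gateway's deployment settings in the portal):

```bash
# env.conf holds the config.service.endpoint and config.service.auth values
# provisioned for the gateway resource (illustrative file name).
docker run -d \
    -p 80:8080 -p 443:8081 \
    --name apim-self-hosted-gateway \
    --env-file env.conf \
    mcr.microsoft.com/azure-api-management/gateway:latest
```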
+## Management plane
+
+API providers interact with the service through the management plane, which provides full access to the API Management service capabilities.
+
+Customers interact with the management plane through Azure tools including the Azure portal, Azure PowerShell, Azure CLI, a [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-apimanagement&ssr=false#overview), or client SDKs in several popular programming languages.
+
+Use the management plane to:
+
+ * Provision and configure API Management service settings
+ * Define or import API schemas from a wide range of sources, including OpenAPI specifications, Azure compute services, or WebSocket or GraphQL backends
+ * Package APIs into products
+ * Set up [policies](#policies) like quotas or transformations on the APIs
+ * Get insights from analytics
+ * Manage users
++
+## Developer portal
+
+The open-source [developer portal][Developer portal] is an automatically generated, fully customizable website with the documentation of your APIs.
++
+API providers can customize the look and feel of the developer portal by adding custom content, customizing styles, and adding their branding. Extend the developer portal further by [self-hosting](developer-portal-self-host.md).
+
+App developers use the open-source developer portal to discover the APIs, onboard to use them, and learn how to consume them in applications. (APIs can also be exported to the [Power Platform](export-api-power-platform.md) for discovery and use by citizen developers.)
+
+Using the developer portal, developers can:
+
+ * Read API documentation
+ * Call an API via the interactive console
+ * Create an account and subscribe to get API keys
+ * Access analytics on their own usage
+ * Download API definitions
+ * Manage API keys
+
+## Integration with Azure services
+
+API Management integrates with many complementary Azure services to create enterprise solutions, including:
+
+* [Azure Key Vault](../key-vault/general/overview.md) for secure safekeeping and management of [client certificates](api-management-howto-mutual-certificates.md) and [secrets](api-management-howto-properties.md)
+* [Azure Monitor](api-management-howto-use-azure-monitor.md) for logging, reporting, and alerting on management operations, systems events, and API requests
+* [Application Insights](api-management-howto-app-insights.md) for live metrics, end-to-end tracing, and troubleshooting
+* [Virtual networks](virtual-network-concepts.md), [private endpoints](private-endpoint.md), and [Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) for network-level protection
+* Azure Active Directory for [developer authentication](api-management-howto-aad.md) and [request authorization](api-management-howto-protect-backend-with-aad.md)
+* [Event Hubs](api-management-howto-log-event-hubs.md) for streaming events
+* Several Azure compute offerings commonly used to build and host APIs on Azure, including [Functions](import-function-app-as-api.md), [Logic Apps](import-logic-app-as-api.md), [Web Apps](import-app-service-as-api.md), [Service Fabric](how-to-configure-service-fabric-backend.md), and others.
+
+**More information**:
+* [Basic enterprise integration](/azure/architecture/reference-architectures/enterprise-integration/basic-enterprise-integration?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+* [Landing zone accelerator](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
++
+## Key concepts
+
+### APIs
+
+APIs are the foundation of an API Management service instance. Each API represents a set of *operations* available to app developers. Each API contains a reference to the backend service that implements the API, and its operations map to backend operations.
+
+Operations in API Management are highly configurable, with control over URL mapping, query and path parameters, request and response content, and operation response caching.
+
+**More information**:
+* [Import and publish your first API][How to create APIs]
+* [Mock API responses][How to add operations to an API]
+
+### Products
+
+Products are how APIs are surfaced to developers. Products in API Management have one or more APIs, and can be *open* or *protected*. Protected products require a subscription key, while open products can be consumed freely.
+
+When a product is ready for use by developers, it can be published. Once published, it can be viewed or subscribed to by developers. Subscription approval is configured at the product level and can either require an administrator's approval or be automatic.
+
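For example, a product can be created and published from the CLI; a hedged sketch (the service and product names are placeholders):

```azurecli-interactive
# Create a published product that requires a subscription but no approval.
az apim product create \
    --resource-group myResourceGroup \
    --service-name myApimService \
    --product-id my-product \
    --product-name "My Product" \
    --state published \
    --subscription-required true \
    --approval-required false
```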
+**More information**:
+* [Create and publish a product][How to create and publish a product]
+* [Subscriptions in API Management](api-management-subscriptions.md)
+
+### Groups
+
+Groups are used to manage the visibility of products to developers. API Management has the following built-in groups:
+
+* **Administrators** - Manage API Management service instances and create the APIs, operations, and products that are used by developers.
+
+ Azure subscription administrators are members of this group.
+
+* **Developers** - Authenticated developer portal users that build applications using your APIs. Developers are granted access to the developer portal and build applications that call the operations of an API.
+
+* **Guests** - Unauthenticated developer portal users, such as prospective customers visiting the developer portal. They can be granted certain read-only access, such as the ability to view APIs but not call them.
+
+Administrators can also create custom groups or use external groups in an [associated Azure Active Directory tenant](api-management-howto-aad.md) to give developers visibility and access to API products. For example, create a custom group for developers in a partner organization to access a specific subset of APIs in a product. A user can belong to more than one group.
+
+**More information**:
+* [How to create and use groups][How to create and use groups]
+
+### Developers
+
+Developers represent the user accounts in an API Management service instance. Developers can be created or invited to join by administrators, or they can sign up from the [developer portal][Developer portal]. Each developer is a member of one or more groups, and can subscribe to the products that grant visibility to those groups.
+
+When developers subscribe to a product, they're granted the primary and secondary key for the product for use when calling the product's APIs.
+
+**More information**:
+* [How to manage user accounts][How to create or invite developers]
+
+### Policies
+
+With [policies][API Management policies], an API publisher can change the behavior of an API through configuration. Policies are a collection of statements that are executed sequentially on the request or response of an API. Popular statements include format conversion from XML to JSON and call-rate limiting to restrict the number of incoming calls from a developer. For a complete list, see [API Management policies][Policy reference].
+
+Policy expressions can be used as attribute values or text values in any of the API Management policies, unless the policy specifies otherwise. Some policies such as the [Control flow](./api-management-advanced-policies.md#choose) and [Set variable](./api-management-advanced-policies.md#set-variable) policies are based on policy expressions.
+
+Policies can be applied at different scopes, depending on your needs: global (all APIs), a product, a specific API, or an API operation.
+
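As an illustrative sketch (the values are arbitrary, not recommendations), a policy document combining a call-rate limit with XML-to-JSON conversion might look like:

```xml
<policies>
    <inbound>
        <base />
        <!-- Limit each subscription to 20 calls per 60-second window. -->
        <rate-limit calls="20" renewal-period="60" />
    </inbound>
    <outbound>
        <base />
        <!-- Convert XML backend responses to JSON for clients. -->
        <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
    </outbound>
</policies>
```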
+**More information**:
+
+* [Transform and protect your API][How to create and configure advanced product settings].
+* [Policy expressions](./api-management-policy-expressions.md)
+
+## Next steps
+
+Complete the following quickstart and start using Azure API Management:
+
+> [!div class="nextstepaction"]
+> [Create an Azure API Management instance by using the Azure portal](get-started-create-service-instance.md)
+
+[APIs and operations]: #apis
+[Products]: #products
+[Groups]: #groups
+[Developers]: #developers
+[Policies]: #policies
+[Developer portal]: #developer-portal
+
+[How to create APIs]: ./import-and-publish.md
+[How to add operations to an API]: ./mock-api-responses.md
+[How to create and publish a product]: api-management-howto-add-products.md
+[How to create and use groups]: api-management-howto-create-groups.md
+[How to associate groups with developers]: api-management-howto-create-groups.md#associate-group-developer
+[How to create and configure advanced product settings]: transform-api.md
+[How to create or invite developers]: api-management-howto-create-or-invite-developers.md
+[Policy reference]: ./api-management-policies.md
+[API Management policies]: api-management-howto-policies.md
+[Create an API Management service instance]: get-started-create-service-instance.md
api-management Api Management Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-key-concepts.md
Last updated 01/07/2022 +
+adobe-target: true
+adobe-target-activity: DocsExp–458741–A/B–Docs/APIManagement–Content–FY23Q1
+adobe-target-experience: Experience B
+adobe-target-content: ./api-management-key-concepts-experiment
# About API Management
API Management integrates with many complementary Azure services, including:
* [Application Insights](api-management-howto-app-insights.md) for live metrics, end-to-end tracing, and troubleshooting * [Virtual networks](virtual-network-concepts.md) and [Application Gateway](api-management-howto-integrate-internal-vnet-appgateway.md) for network-level protection * Azure Active Directory for [developer authentication](api-management-howto-aad.md) and [request authorization](api-management-howto-protect-backend-with-aad.md)
-* [Event Hub](api-management-howto-log-event-hubs.md) for streaming events
+* [Event Hubs](api-management-howto-log-event-hubs.md) for streaming events
* Several Azure compute offerings commonly used to build and host APIs on Azure, including [Functions](import-function-app-as-api.md), [Logic Apps](import-logic-app-as-api.md), [Web Apps](import-app-service-as-api.md), [Service Fabric](how-to-configure-service-fabric-backend.md), and others. ## Key concepts
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
> [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) and [Set usage quota by subscription](api-management-access-restriction-policies.md#SetUsageQuota) have a dependency on the subscription key. A subscription key isn't required when using other policies.
-## [Access restriction policies](api-management-access-restriction-policies.md)
+## Access restriction policies
- [Check HTTP header](api-management-access-restriction-policies.md#CheckHTTPHeader) - Enforces existence and/or value of an HTTP Header. - [Get authorization context](api-management-access-restriction-policies.md#GetAuthorizationContext) - Gets the authorization context of a specified [authorization](authorizations-overview.md) configured in the API Management instance. - [Limit call rate by subscription](api-management-access-restriction-policies.md#LimitCallRate) - Prevents API usage spikes by limiting call rate, on a per subscription basis.
More information about policies:
- [Validate JWT](api-management-access-restriction-policies.md#ValidateJWT) - Enforces existence and validity of a JWT extracted from either a specified HTTP Header or a specified query parameter. - [Validate client certificate](api-management-access-restriction-policies.md#validate-client-certificate) - Enforces that a certificate presented by a client to an API Management instance matches specified validation rules and claims.
-## [Advanced policies](api-management-advanced-policies.md)
+## Advanced policies
- [Control flow](api-management-advanced-policies.md#choose) - Conditionally applies policy statements based on the evaluation of Boolean expressions. - [Forward request](api-management-advanced-policies.md#ForwardRequest) - Forwards the request to the backend service. - [Limit concurrency](api-management-advanced-policies.md#LimitConcurrency) - Prevents enclosed policies from executing by more than the specified number of requests at a time.
More information about policies:
- [Trace](api-management-advanced-policies.md#Trace) - Adds custom traces into the [API Inspector](./api-management-howto-api-inspector.md) output, Application Insights telemetries, and Resource Logs. - [Wait](api-management-advanced-policies.md#Wait) - Waits for enclosed [Send request](api-management-advanced-policies.md#SendRequest), [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey), or [Control flow](api-management-advanced-policies.md#choose) policies to complete before proceeding.
-## [Authentication policies](api-management-authentication-policies.md)
+## Authentication policies
- [Authenticate with Basic](api-management-authentication-policies.md#Basic) - Authenticate with a backend service using Basic authentication. - [Authenticate with client certificate](api-management-authentication-policies.md#ClientCertificate) - Authenticate with a backend service using client certificates. - [Authenticate with managed identity](api-management-authentication-policies.md#ManagedIdentity) - Authenticate with a backend service using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md).
-## [Caching policies](api-management-caching-policies.md)
+## Caching policies
- [Get from cache](api-management-caching-policies.md#GetFromCache) - Perform cache lookup and return a valid cached response when available. - [Store to cache](api-management-caching-policies.md#StoreToCache) - Caches response according to the specified cache control configuration. - [Get value from cache](api-management-caching-policies.md#GetFromCacheByKey) - Retrieve a cached item by key. - [Store value in cache](api-management-caching-policies.md#StoreToCacheByKey) - Store an item in the cache by key. - [Remove value from cache](api-management-caching-policies.md#RemoveCacheByKey) - Remove an item in the cache by key.
-## [Cross domain policies](api-management-cross-domain-policies.md)
+## Cross-domain policies
- [Allow cross-domain calls](api-management-cross-domain-policies.md#AllowCrossDomainCalls) - Makes the API accessible from Adobe Flash and Microsoft Silverlight browser-based clients. - [CORS](api-management-cross-domain-policies.md#CORS) - Adds cross-origin resource sharing (CORS) support to an operation or an API to allow cross-domain calls from browser-based clients. - [JSONP](api-management-cross-domain-policies.md#JSONP) - Adds JSON with padding (JSONP) support to an operation or an API to allow cross-domain calls from JavaScript browser-based clients.
-## [Dapr integration policies](api-management-dapr-policies.md)
+## Dapr integration policies
- [Send request to a service](api-management-dapr-policies.md#invoke) - uses Dapr runtime to locate and reliably communicate with a Dapr microservice. - [Send message to Pub/Sub topic](api-management-dapr-policies.md#pubsub) - uses Dapr runtime to publish a message to a Publish/Subscribe topic. - [Trigger output binding](api-management-dapr-policies.md#bind) - uses Dapr runtime to invoke an external system via output binding.
-## [GraphQL API policies](graphql-policies.md)
+## GraphQL API policies
- [Validate GraphQL request](graphql-policies.md#validate-graphql-request) - Validates and authorizes a request to a GraphQL API. - [Set GraphQL resolver](graphql-policies.md#set-graphql-resolver) - Retrieves or sets data for a GraphQL field in an object type specified in a GraphQL schema.
-## [Transformation policies](api-management-transformation-policies.md)
+## Transformation policies
- [Convert JSON to XML](api-management-transformation-policies.md#ConvertJSONtoXML) - Converts request or response body from JSON to XML. - [Convert XML to JSON](api-management-transformation-policies.md#ConvertXMLtoJSON) - Converts request or response body from XML to JSON. - [Find and replace string in body](api-management-transformation-policies.md#Findandreplacestringinbody) - Finds a request or response substring and replaces it with a different substring.
More information about policies:
- [Rewrite URL](api-management-transformation-policies.md#RewriteURL) - Converts a request URL from its public form to the form expected by the web service. - [Transform XML using an XSLT](api-management-transformation-policies.md#XSLTransform) - Applies an XSL transformation to XML in the request or response body.
-## [Validation policies](validation-policies.md)
+## Validation policies
- [Validate content](validation-policies.md#validate-content) - Validates the size or JSON schema of a request or response body against the API schema. - [Validate parameters](validation-policies.md#validate-parameters) - Validates the request header, query, or path parameters against the API schema. - [Validate headers](validation-policies.md#validate-headers) - Validates the response headers against the API schema.
api-management Api Management Policy Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policy-expressions.md
Title: Azure API Management policy expressions | Microsoft Docs
-description: Learn about policy expressions in Azure API Management. See examples and view additional available resources.
+description: Learn about policy expressions in Azure API Management. See examples and view other available resources.
documentationcenter: ''
The following table lists the .NET Framework types and members allowed in policy
|Type|Supported members| |--|--|
-|Newtonsoft.Json.Formatting|All|
-|Newtonsoft.Json.JsonConvert|SerializeObject, DeserializeObject|
-|Newtonsoft.Json.Linq.Extensions|All|
-|Newtonsoft.Json.Linq.JArray|All|
-|Newtonsoft.Json.Linq.JConstructor|All|
-|Newtonsoft.Json.Linq.JContainer|All|
-|Newtonsoft.Json.Linq.JObject|All|
-|Newtonsoft.Json.Linq.JProperty|All|
-|Newtonsoft.Json.Linq.JRaw|All|
-|Newtonsoft.Json.Linq.JToken|All|
-|Newtonsoft.Json.Linq.JTokenType|All|
-|Newtonsoft.Json.Linq.JValue|All|
-|System.Array|All|
-|System.BitConverter|All|
-|System.Boolean|All|
-|System.Byte|All|
-|System.Char|All|
-|System.Collections.Generic.Dictionary<TKey, TValue>|All|
-|System.Collections.Generic.HashSet\<T>|All|
-|System.Collections.Generic.ICollection\<T>|All|
-|System.Collections.Generic.IDictionary<TKey, TValue>|All|
-|System.Collections.Generic.IEnumerable\<T>|All|
-|System.Collections.Generic.IEnumerator\<T>|All|
-|System.Collections.Generic.IList\<T>|All|
-|System.Collections.Generic.IReadOnlyCollection\<T>|All|
-|System.Collections.Generic.IReadOnlyDictionary<TKey, TValue>|All|
-|System.Collections.Generic.ISet\<T>|All|
-|System.Collections.Generic.KeyValuePair<TKey, TValue>|All|
-|System.Collections.Generic.List\<T>|All|
-|System.Collections.Generic.Queue\<T>|All|
-|System.Collections.Generic.Stack\<T>|All|
-|System.Convert|All|
-|System.DateTime|(Constructor), Add, AddDays, AddHours, AddMilliseconds, AddMinutes, AddMonths, AddSeconds, AddTicks, AddYears, Date, Day, DayOfWeek, DayOfYear, DaysInMonth, Hour, IsDaylightSavingTime, IsLeapYear, MaxValue, Millisecond, Minute, MinValue, Month, Now, Parse, Second, Subtract, Ticks, TimeOfDay, Today, ToString, UtcNow, Year|
-|System.DateTimeKind|Utc|
-|System.DateTimeOffset|All|
-|System.Decimal|All|
-|System.Double|All|
-|System.Exception|All|
-|System.Guid|All|
-|System.Int16|All|
-|System.Int32|All|
-|System.Int64|All|
-|System.IO.StringReader|All|
-|System.IO.StringWriter|All|
-|System.Linq.Enumerable|All|
-|System.Math|All|
-|System.MidpointRounding|All|
-|System.Net.IPAddress|All|
-|System.Net.WebUtility|All|
-|System.Nullable|All|
-|System.Random|All|
-|System.SByte|All|
-|System.Security.Cryptography.AsymmetricAlgorithm|All|
-|System.Security.Cryptography.CipherMode|All|
-|System.Security.Cryptography.HashAlgorithm|All|
-|System.Security.Cryptography.HashAlgorithmName|All|
-|System.Security.Cryptography.HMAC|All|
-|System.Security.Cryptography.HMACMD5|All|
-|System.Security.Cryptography.HMACSHA1|All|
-|System.Security.Cryptography.HMACSHA256|All|
-|System.Security.Cryptography.HMACSHA384|All|
-|System.Security.Cryptography.HMACSHA512|All|
-|System.Security.Cryptography.KeyedHashAlgorithm|All|
-|System.Security.Cryptography.MD5|All|
-|System.Security.Cryptography.Oid|All|
-|System.Security.Cryptography.PaddingMode|All|
-|System.Security.Cryptography.RNGCryptoServiceProvider|All|
-|System.Security.Cryptography.RSA|All|
-|System.Security.Cryptography.RSAEncryptionPadding|All|
-|System.Security.Cryptography.RSASignaturePadding|All|
-|System.Security.Cryptography.SHA1|All|
-|System.Security.Cryptography.SHA1Managed|All|
-|System.Security.Cryptography.SHA256|All|
-|System.Security.Cryptography.SHA256Managed|All|
-|System.Security.Cryptography.SHA384|All|
-|System.Security.Cryptography.SHA384Managed|All|
-|System.Security.Cryptography.SHA512|All|
-|System.Security.Cryptography.SHA512Managed|All|
-|System.Security.Cryptography.SymmetricAlgorithm|All|
-|System.Security.Cryptography.X509Certificates.PublicKey|All|
-|System.Security.Cryptography.X509Certificates.RSACertificateExtensions|All|
-|System.Security.Cryptography.X509Certificates.X500DistinguishedName|Name|
-|System.Security.Cryptography.X509Certificates.X509Certificate|All|
-|System.Security.Cryptography.X509Certificates.X509Certificate2|All|
-|System.Security.Cryptography.X509Certificates.X509ContentType|All|
-|System.Security.Cryptography.X509Certificates.X509NameType|All|
-|System.Single|All|
-|System.String|All|
-|System.StringComparer|All|
-|System.StringComparison|All|
-|System.StringSplitOptions|All|
-|System.Text.Encoding|All|
-|System.Text.RegularExpressions.Capture|Index, Length, Value|
-|System.Text.RegularExpressions.CaptureCollection|Count, Item|
-|System.Text.RegularExpressions.Group|Captures, Success|
-|System.Text.RegularExpressions.GroupCollection|Count, Item|
-|System.Text.RegularExpressions.Match|Empty, Groups, Result|
-|System.Text.RegularExpressions.Regex|(Constructor), IsMatch, Match, Matches, Replace, Unescape, Split|
-|System.Text.RegularExpressions.RegexOptions|All|
-|System.Text.StringBuilder|All|
-|System.TimeSpan|All|
-|System.TimeZone|All|
-|System.TimeZoneInfo.AdjustmentRule|All|
-|System.TimeZoneInfo.TransitionTime|All|
-|System.TimeZoneInfo|All|
-|System.Tuple|All|
-|System.UInt16|All|
-|System.UInt32|All|
-|System.UInt64|All|
-|System.Uri|All|
-|System.UriPartial|All|
-|System.Xml.Linq.Extensions|All|
-|System.Xml.Linq.XAttribute|All|
-|System.Xml.Linq.XCData|All|
-|System.Xml.Linq.XComment|All|
-|System.Xml.Linq.XContainer|All|
-|System.Xml.Linq.XDeclaration|All|
-|System.Xml.Linq.XDocument|All, except of: Load|
-|System.Xml.Linq.XDocumentType|All|
-|System.Xml.Linq.XElement|All|
-|System.Xml.Linq.XName|All|
-|System.Xml.Linq.XNamespace|All|
-|System.Xml.Linq.XNode|All|
-|System.Xml.Linq.XNodeDocumentOrderComparer|All|
-|System.Xml.Linq.XNodeEqualityComparer|All|
-|System.Xml.Linq.XObject|All|
-|System.Xml.Linq.XProcessingInstruction|All|
-|System.Xml.Linq.XText|All|
-|System.Xml.XmlNodeType|All|
+|`Newtonsoft.Json.Formatting`|All|
+|`Newtonsoft.Json.JsonConvert`|`SerializeObject`, `DeserializeObject`|
+|`Newtonsoft.Json.Linq.Extensions`|All|
+|`Newtonsoft.Json.Linq.JArray`|All|
+|`Newtonsoft.Json.Linq.JConstructor`|All|
+|`Newtonsoft.Json.Linq.JContainer`|All|
+|`Newtonsoft.Json.Linq.JObject`|All|
+|`Newtonsoft.Json.Linq.JProperty`|All|
+|`Newtonsoft.Json.Linq.JRaw`|All|
+|`Newtonsoft.Json.Linq.JToken`|All|
+|`Newtonsoft.Json.Linq.JTokenType`|All|
+|`Newtonsoft.Json.Linq.JValue`|All|
+|`System.Array`|All|
+|`System.BitConverter`|All|
+|`System.Boolean`|All|
+|`System.Byte`|All|
+|`System.Char`|All|
+|`System.Collections.Generic.Dictionary<TKey, TValue>`|All|
+|`System.Collections.Generic.HashSet<T>`|All|
+|`System.Collections.Generic.ICollection<T>`|All|
+|`System.Collections.Generic.IDictionary<TKey, TValue>`|All|
+|`System.Collections.Generic.IEnumerable<T>`|All|
+|`System.Collections.Generic.IEnumerator<T>`|All|
+|`System.Collections.Generic.IList<T>`|All|
+|`System.Collections.Generic.IReadOnlyCollection<T>`|All|
+|`System.Collections.Generic.IReadOnlyDictionary<TKey, TValue>`|All|
+|`System.Collections.Generic.ISet<T>`|All|
+|`System.Collections.Generic.KeyValuePair<TKey, TValue>`|All|
+|`System.Collections.Generic.List<T>`|All|
+|`System.Collections.Generic.Queue<T>`|All|
+|`System.Collections.Generic.Stack<T>`|All|
+|`System.Convert`|All|
+|`System.DateTime`|(Constructor), `Add`, `AddDays`, `AddHours`, `AddMilliseconds`, `AddMinutes`, `AddMonths`, `AddSeconds`, `AddTicks`, `AddYears`, `Date`, `Day`, `DayOfWeek`, `DayOfYear`, `DaysInMonth`, `Hour`, `IsDaylightSavingTime`, `IsLeapYear`, `MaxValue`, `Millisecond`, `Minute`, `MinValue`, `Month`, `Now`, `Parse`, `Second`, `Subtract`, `Ticks`, `TimeOfDay`, `Today`, `ToString`, `UtcNow`, `Year`|
+|`System.DateTimeKind`|`Utc`|
+|`System.DateTimeOffset`|All|
+|`System.Decimal`|All|
+|`System.Double`|All|
+|`System.Enum`|`Parse`, `TryParse`, `ToString`|
+|`System.Exception`|All|
+|`System.Guid`|All|
+|`System.Int16`|All|
+|`System.Int32`|All|
+|`System.Int64`|All|
+|`System.IO.StringReader`|All|
+|`System.IO.StringWriter`|All|
+|`System.Linq.Enumerable`|All|
+|`System.Math`|All|
+|`System.MidpointRounding`|All|
+|`System.Net.IPAddress`|`AddressFamily`, `Equals`, `GetAddressBytes`, `IsLoopback`, `Parse`, `TryParse`, `ToString`|
+|`System.Net.WebUtility`|All|
+|`System.Nullable`|All|
+|`System.Random`|All|
+|`System.SByte`|All|
+|`System.Security.Cryptography.AsymmetricAlgorithm`|All|
+|`System.Security.Cryptography.CipherMode`|All|
+|`System.Security.Cryptography.HashAlgorithm`|All|
+|`System.Security.Cryptography.HashAlgorithmName`|All|
+|`System.Security.Cryptography.HMAC`|All|
+|`System.Security.Cryptography.HMACMD5`|All|
+|`System.Security.Cryptography.HMACSHA1`|All|
+|`System.Security.Cryptography.HMACSHA256`|All|
+|`System.Security.Cryptography.HMACSHA384`|All|
+|`System.Security.Cryptography.HMACSHA512`|All|
+|`System.Security.Cryptography.KeyedHashAlgorithm`|All|
+|`System.Security.Cryptography.MD5`|All|
+|`System.Security.Cryptography.Oid`|All|
+|`System.Security.Cryptography.PaddingMode`|All|
+|`System.Security.Cryptography.RNGCryptoServiceProvider`|All|
+|`System.Security.Cryptography.RSA`|All|
+|`System.Security.Cryptography.RSAEncryptionPadding`|All|
+|`System.Security.Cryptography.RSASignaturePadding`|All|
+|`System.Security.Cryptography.SHA1`|All|
+|`System.Security.Cryptography.SHA1Managed`|All|
+|`System.Security.Cryptography.SHA256`|All|
+|`System.Security.Cryptography.SHA256Managed`|All|
+|`System.Security.Cryptography.SHA384`|All|
+|`System.Security.Cryptography.SHA384Managed`|All|
+|`System.Security.Cryptography.SHA512`|All|
+|`System.Security.Cryptography.SHA512Managed`|All|
+|`System.Security.Cryptography.SymmetricAlgorithm`|All|
+|`System.Security.Cryptography.X509Certificates.PublicKey`|All|
+|`System.Security.Cryptography.X509Certificates.RSACertificateExtensions`|All|
+|`System.Security.Cryptography.X509Certificates.X500DistinguishedName`|`Name`|
+|`System.Security.Cryptography.X509Certificates.X509Certificate`|All|
+|`System.Security.Cryptography.X509Certificates.X509Certificate2`|All|
+|`System.Security.Cryptography.X509Certificates.X509ContentType`|All|
+|`System.Security.Cryptography.X509Certificates.X509NameType`|All|
+|`System.Single`|All|
+|`System.String`|All|
+|`System.StringComparer`|All|
+|`System.StringComparison`|All|
+|`System.StringSplitOptions`|All|
+|`System.Text.Encoding`|All|
+|`System.Text.RegularExpressions.Capture`|`Index`, `Length`, `Value`|
+|`System.Text.RegularExpressions.CaptureCollection`|`Count`, `Item`|
+|`System.Text.RegularExpressions.Group`|`Captures`, `Success`|
+|`System.Text.RegularExpressions.GroupCollection`|`Count`, `Item`|
+|`System.Text.RegularExpressions.Match`|`Empty`, `Groups`, `Result`|
+|`System.Text.RegularExpressions.Regex`|(Constructor), `IsMatch`, `Match`, `Matches`, `Replace`, `Unescape`, `Split`|
+|`System.Text.RegularExpressions.RegexOptions`|All|
+|`System.Text.StringBuilder`|All|
+|`System.TimeSpan`|All|
+|`System.TimeZone`|All|
+|`System.TimeZoneInfo.AdjustmentRule`|All|
+|`System.TimeZoneInfo.TransitionTime`|All|
+|`System.TimeZoneInfo`|All|
+|`System.Tuple`|All|
+|`System.UInt16`|All|
+|`System.UInt32`|All|
+|`System.UInt64`|All|
+|`System.Uri`|All|
+|`System.UriPartial`|All|
+|`System.Xml.Linq.Extensions`|All|
+|`System.Xml.Linq.XAttribute`|All|
+|`System.Xml.Linq.XCData`|All|
+|`System.Xml.Linq.XComment`|All|
+|`System.Xml.Linq.XContainer`|All|
+|`System.Xml.Linq.XDeclaration`|All|
+|`System.Xml.Linq.XDocument`|All, except `Load`|
+|`System.Xml.Linq.XDocumentType`|All|
+|`System.Xml.Linq.XElement`|All|
+|`System.Xml.Linq.XName`|All|
+|`System.Xml.Linq.XNamespace`|All|
+|`System.Xml.Linq.XNode`|All|
+|`System.Xml.Linq.XNodeDocumentOrderComparer`|All|
+|`System.Xml.Linq.XNodeEqualityComparer`|All|
+|`System.Xml.Linq.XObject`|All|
+|`System.Xml.Linq.XProcessingInstruction`|All|
+|`System.Xml.Linq.XText`|All|
+|`System.Xml.XmlNodeType`|All|
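To make the table concrete, here's a minimal sketch of a multi-statement policy expression that uses one of the allowed `Newtonsoft.Json.Linq` types together with the `context` variable described in the next section (the `processedBy` property is purely illustrative):

```xml
<set-body>@{
    // Read the response as a Newtonsoft JObject (one of the allowed types above),
    // keeping the original body stream intact with preserveContent.
    var body = context.Response.Body.As<JObject>(preserveContent: true);
    // Add an illustrative property and return the modified JSON as the new body.
    body["processedBy"] = "apim-policy";
    return body.ToString();
}</set-body>
```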
## <a name="ContextVariables"></a> Context variable+ The `context` variable is implicitly available in every policy [expression](api-management-policy-expressions.md#Syntax). Its members: * Provide information relevant to the API [request](#ref-context-request) and [response](#ref-context-response), and related properties. * Are all read-only. |Context Variable|Allowed methods, properties, and parameter values| |-|-|
-|context|[Api](#ref-context-api): [IApi](#ref-iapi)<br /><br /> [Deployment](#ref-context-deployment)<br /><br /> Elapsed: TimeSpan - time interval between the value of Timestamp and current time<br /><br /> [LastError](#ref-context-lasterror)<br /><br /> [Operation](#ref-context-operation)<br /><br /> [Product](#ref-context-product)<br /><br /> [Request](#ref-context-request)<br /><br /> RequestId: Guid - unique request identifier<br /><br /> [Response](#ref-context-response)<br /><br /> [Subscription](#ref-context-subscription)<br /><br /> Timestamp: DateTime - point in time when request was received<br /><br /> Tracing: bool - indicates if tracing is on or off <br /><br /> [User](#ref-context-user)<br /><br /> [Variables](#ref-context-variables): IReadOnlyDictionary<string, object><br /><br /> void Trace(message: string)|
-|<a id="ref-context-api"></a>context.Api|Id: string<br /><br /> IsCurrentRevision: bool<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Revision: string<br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> Version: string |
-|<a id="ref-context-deployment"></a>context.Deployment|GatewayId: string (returns 'managed' for managed gateways)<br /><br /> Region: string<br /><br /> ServiceId: string<br /><br /> ServiceName: string<br /><br /> Certificates: IReadOnlyDictionary<string, X509Certificate2>|
-|<a id="ref-context-lasterror"></a>context.LastError|Source: string<br /><br /> Reason: string<br /><br /> Message: string<br /><br /> Scope: string<br /><br /> Section: string<br /><br /> Path: string<br /><br /> PolicyId: string<br /><br /> For more information about context.LastError, see [Error handling](api-management-error-handling-policies.md).|
-|<a id="ref-context-operation"></a>context.Operation|Id: string<br /><br /> Method: string<br /><br /> Name: string<br /><br /> UrlTemplate: string|
-|<a id="ref-context-product"></a>context.Product|Apis: IEnumerable<[IApi](#ref-iapi)\><br /><br /> ApprovalRequired: bool<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Name: string<br /><br /> State: enum ProductState {NotPublished, Published}<br /><br /> SubscriptionLimit: int?<br /><br /> SubscriptionRequired: bool|
-|<a id="ref-context-request"></a>context.Request|Body: [IMessageBody](#ref-imessagebody) or `null` if request does not have a body.<br /><br /> Certificate: System.Security.Cryptography.X509Certificates.X509Certificate2<br /><br /> [Headers](#ref-context-request-headers): IReadOnlyDictionary<string, string[]><br /><br /> IpAddress: string<br /><br /> MatchedParameters: IReadOnlyDictionary<string, string><br /><br /> Method: string<br /><br /> OriginalUrl: [IUrl](#ref-iurl)<br /><br /> Url: [IUrl](#ref-iurl)<br /><br /> PrivateEndpointConnection: [IPrivateEndpointConnection](#ref-iprivateendpointconnection) or `null` if request does not come from a private endpoint connection.|
-|<a id="ref-context-request-headers"></a>string context.Request.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated request header values or `defaultValue` if the header is not found.|
-|<a id="ref-context-response"></a>context.Response|Body: [IMessageBody](#ref-imessagebody)<br /><br /> [Headers](#ref-context-response-headers): IReadOnlyDictionary<string, string[]><br /><br /> StatusCode: int<br /><br /> StatusReason: string|
-|<a id="ref-context-response-headers"></a>string context.Response.Headers.GetValueOrDefault(headerName: string, defaultValue: string)|headerName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated response header values or `defaultValue` if the header is not found.|
-|<a id="ref-context-subscription"></a>context.Subscription|CreatedDate: DateTime<br /><br /> EndDate: DateTime?<br /><br /> Id: string<br /><br /> Key: string<br /><br /> Name: string<br /><br /> PrimaryKey: string<br /><br /> SecondaryKey: string<br /><br /> StartDate: DateTime?|
-|<a id="ref-context-user"></a>context.User|Email: string<br /><br /> FirstName: string<br /><br /> Groups: IEnumerable<[IGroup](#ref-igroup)\><br /><br /> Id: string<br /><br /> Identities: IEnumerable<[IUserIdentity](#ref-iuseridentity)\><br /><br /> LastName: string<br /><br /> Note: string<br /><br /> RegistrationDate: DateTime|
-|<a id="ref-iapi"></a>IApi|Id: string<br /><br /> Name: string<br /><br /> Path: string<br /><br /> Protocols: IEnumerable<string\><br /><br /> ServiceUrl: [IUrl](#ref-iurl)<br /><br /> SubscriptionKeyParameterNames: [ISubscriptionKeyParameterNames](#ref-isubscriptionkeyparameternames)|
-|<a id="ref-igroup"></a>IGroup|Id: string<br /><br /> Name: string|
-|<a id="ref-imessagebody"></a>IMessageBody|As<T\>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read either a request and response message body in specified type `T`. By default, the method:<br /><ul><li>Uses the original message body stream.</li><li>Renders it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
-|<a id="ref-iprivateendpointconnection"></a>IPrivateEndpointConnection|Name: string<br /><br /> GroupId: string<br /><br /> MemberName: string<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).|
-|<a id="ref-iurl"></a>IUrl|Host: string<br /><br /> Path: string<br /><br /> Port: int<br /><br /> [Query](#ref-iurl-query): IReadOnlyDictionary<string, string[]><br /><br /> QueryString: string<br /><br /> Scheme: string|
-|<a id="ref-iuseridentity"></a>IUserIdentity|Id: string<br /><br /> Provider: string|
-|<a id="ref-isubscriptionkeyparameternames"></a>ISubscriptionKeyParameterNames|Header: string<br /><br /> Query: string|
-|<a id="ref-iurl-query"></a>string IUrl.Query.GetValueOrDefault(queryParameterName: string, defaultValue: string)|queryParameterName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated query parameter values or `defaultValue` if the parameter is not found.|
-|<a id="ref-context-variables"></a>T context.Variables.GetValueOrDefault<T\>(variableName: string, defaultValue: T)|variableName: string<br /><br /> defaultValue: T<br /><br /> Returns variable value cast to type `T` or `defaultValue` if the variable is not found.<br /><br /> This method throws an exception if the specified type does not match the actual type of the returned variable.|
-|BasicAuthCredentials AsBasic(input: this string)|input: string<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization request header value, the method returns an object of type `BasicAuthCredentials`; otherwise the method returns null.|
-|bool TryParseBasic(input: this string, result: out BasicAuthCredentials)|input: string<br /><br /> result: out BasicAuthCredentials<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization value in the request header the method returns `true` and the result parameter contains a value of type `BasicAuthCredentials`; otherwise the method returns `false`.|
-|BasicAuthCredentials|Password: string<br /><br /> UserId: string|
-|Jwt AsJwt(input: this string)|input: string<br /><br /> If the input parameter contains a valid JWT token value, the method returns an object of type `Jwt`; otherwise the method returns `null`.|
-|bool TryParseJwt(input: this string, result: out Jwt)|input: string<br /><br /> result: out Jwt<br /><br /> If the input parameter contains a valid JWT token value, the method returns `true` and the result parameter contains a value of type `Jwt`; otherwise the method returns `false`.|
-|Jwt|Algorithm: string<br /><br /> Audiences: IEnumerable<string\><br /><br /> Claims: IReadOnlyDictionary<string, string[]><br /><br /> ExpirationTime: DateTime?<br /><br /> Id: string<br /><br /> Issuer: string<br /><br /> IssuedAt: DateTime?<br /><br /> NotBefore: DateTime?<br /><br /> Subject: string<br /><br /> Type: string|
-|string Jwt.Claims.GetValueOrDefault(claimName: string, defaultValue: string)|claimName: string<br /><br /> defaultValue: string<br /><br /> Returns comma-separated claim values or `defaultValue` if the header is not found.|
-|byte[] Encrypt(input: this byte[], alg: string, key:byte[], iv:byte[])|input - plaintext to be encrypted<br /><br />alg - name of a symmetric encryption algorithm<br /><br />key - encryption key<br /><br />iv - initialization vector<br /><br />Returns encrypted plaintext.|
-|byte[] Encrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm)|input - plaintext to be encrypted<br /><br />alg - encryption algorithm<br /><br />Returns encrypted plaintext.|
-|byte[] Encrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm, key:byte[], iv:byte[])|input - plaintext to be encrypted<br /><br />alg - encryption algorithm<br /><br />key - encryption key<br /><br />iv - initialization vector<br /><br />Returns encrypted plaintext.|
-|byte[] Decrypt(input: this byte[], alg: string, key:byte[], iv:byte[])|input - cypher text to be decrypted<br /><br />alg - name of a symmetric encryption algorithm<br /><br />key - encryption key<br /><br />iv - initialization vector<br /><br />Returns plaintext.|
-|byte[] Decrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm)|input - cypher text to be decrypted<br /><br />alg - encryption algorithm<br /><br />Returns plaintext.|
-|byte[] Decrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm, key:byte[], iv:byte[])|input - cypher text to be decrypted<br /><br />alg - encryption algorithm<br /><br />key - encryption key<br /><br />iv - initialization vector<br /><br />Returns plaintext.|
-|bool VerifyNoRevocation(input: this System.Security.Cryptography.X509Certificates.X509Certificate2)|Performs a X.509 chain validation without checking certificate revocation status.<br /><br />input - certificate object<br /><br />Returns `true` if the validation succeeds; `false` if the validation fails.|
+|`context`|[`Api`](#ref-context-api): [`IApi`](#ref-iapi)<br /><br /> [`Deployment`](#ref-context-deployment)<br /><br /> `Elapsed`: `TimeSpan` - time interval between the value of `Timestamp` and the current time<br /><br /> [`LastError`](#ref-context-lasterror)<br /><br /> [`Operation`](#ref-context-operation)<br /><br /> [`Product`](#ref-context-product)<br /><br /> [`Request`](#ref-context-request)<br /><br /> `RequestId`: `Guid` - unique request identifier<br /><br /> [`Response`](#ref-context-response)<br /><br /> [`Subscription`](#ref-context-subscription)<br /><br /> `Timestamp`: `DateTime` - point in time when the request was received<br /><br /> `Tracing`: `bool` - indicates if tracing is on or off<br /><br /> [`User`](#ref-context-user)<br /><br /> [`Variables`](#ref-context-variables): `IReadOnlyDictionary<string, object>`<br /><br /> `void Trace(message: string)`|
+|<a id="ref-context-api"></a>`context.Api`|`Id`: `string`<br /><br /> `IsCurrentRevision`: `bool`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Revision`: `string`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Version`: `string` |
+|<a id="ref-context-deployment"></a>`context.Deployment`|`GatewayId`: `string` (returns 'managed' for managed gateways)<br /><br /> `Region`: `string`<br /><br /> `ServiceId`: `string`<br /><br /> `ServiceName`: `string`<br /><br /> `Certificates`: `IReadOnlyDictionary<string, X509Certificate2>`|
+|<a id="ref-context-lasterror"></a>`context.LastError`|`Source`: `string`<br /><br /> `Reason`: `string`<br /><br /> `Message`: `string`<br /><br /> `Scope`: `string`<br /><br /> `Section`: `string`<br /><br /> `Path`: `string`<br /><br /> `PolicyId`: `string`<br /><br /> For more information about `context.LastError`, see [Error handling](api-management-error-handling-policies.md).|
+|<a id="ref-context-operation"></a>`context.Operation`|`Id`: `string`<br /><br /> `Method`: `string`<br /><br /> `Name`: `string`<br /><br /> `UrlTemplate`: `string`|
+|<a id="ref-context-product"></a>`context.Product`|`Apis`: `IEnumerable<`[`IApi`](#ref-iapi)`>`<br /><br /> `ApprovalRequired`: `bool`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `State`: `enum ProductState {NotPublished, Published}`<br /><br /> `SubscriptionLimit`: `int?`<br /><br /> `SubscriptionRequired`: `bool`|
+|<a id="ref-context-request"></a>`context.Request`|`Body`: [`IMessageBody`](#ref-imessagebody) or `null` if request doesn't have a body.<br /><br /> `Certificate`: `System.Security.Cryptography.X509Certificates.X509Certificate2`<br /><br /> [`Headers`](#ref-context-request-headers): `IReadOnlyDictionary<string, string[]>`<br /><br /> `IpAddress`: `string`<br /><br /> `MatchedParameters`: `IReadOnlyDictionary<string, string>`<br /><br /> `Method`: `string`<br /><br /> `OriginalUrl`: [`IUrl`](#ref-iurl)<br /><br /> `Url`: [`IUrl`](#ref-iurl)<br /><br /> `PrivateEndpointConnection`: [`IPrivateEndpointConnection`](#ref-iprivateendpointconnection) or `null` if request doesn't come from a private endpoint connection.|
+|<a id="ref-context-request-headers"></a>`string context.Request.Headers.GetValueOrDefault(headerName: string, defaultValue: string)`|`headerName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated request header values or `defaultValue` if the header isn't found.|
+|<a id="ref-context-response"></a>`context.Response`|`Body`: [`IMessageBody`](#ref-imessagebody)<br /><br /> [`Headers`](#ref-context-response-headers): `IReadOnlyDictionary<string, string[]>`<br /><br /> `StatusCode`: `int`<br /><br /> `StatusReason`: `string`|
+|<a id="ref-context-response-headers"></a>`string context.Response.Headers.GetValueOrDefault(headerName: string, defaultValue: string)`|`headerName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated response header values or `defaultValue` if the header isn't found.|
+|<a id="ref-context-subscription"></a>`context.Subscription`|`CreatedDate`: `DateTime`<br /><br /> `EndDate`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Key`: `string`<br /><br /> `Name`: `string`<br /><br /> `PrimaryKey`: `string`<br /><br /> `SecondaryKey`: `string`<br /><br /> `StartDate`: `DateTime?`|
+|<a id="ref-context-user"></a>`context.User`|`Email`: `string`<br /><br /> `FirstName`: `string`<br /><br /> `Groups`: `IEnumerable<`[`IGroup`](#ref-igroup)`>`<br /><br /> `Id`: `string`<br /><br /> `Identities`: `IEnumerable<`[`IUserIdentity`](#ref-iuseridentity)`>`<br /><br /> `LastName`: `string`<br /><br /> `Note`: `string`<br /><br /> `RegistrationDate`: `DateTime`|
+|<a id="ref-iapi"></a>`IApi`|`Id`: `string`<br /><br /> `Name`: `string`<br /><br /> `Path`: `string`<br /><br /> `Protocols`: `IEnumerable<string>`<br /><br /> `ServiceUrl`: [`IUrl`](#ref-iurl)<br /><br /> `SubscriptionKeyParameterNames`: [`ISubscriptionKeyParameterNames`](#ref-isubscriptionkeyparameternames)|
+|<a id="ref-igroup"></a>`IGroup`|`Id`: `string`<br /><br /> `Name`: `string`|
+|<a id="ref-imessagebody"></a>`IMessageBody`|`As<T>(preserveContent: bool = false): Where T: string, byte[],JObject, JToken, JArray, XNode, XElement, XDocument`<br /><br /> The `context.Request.Body.As<T>` and `context.Response.Body.As<T>` methods are used to read either a request and response message body in specified type `T`. By default, the method:<br /><ul><li>Uses the original message body stream.</li><li>Renders it unavailable after it returns.</li></ul> <br />To avoid that and have the method operate on a copy of the body stream, set the `preserveContent` parameter to `true`, as in [this example](api-management-transformation-policies.md#SetBody).|
+|<a id="ref-iprivateendpointconnection"></a>`IPrivateEndpointConnection`|`Name`: `string`<br /><br /> `GroupId`: `string`<br /><br /> `MemberName`: `string`<br /><br />For more information, see the [REST API](/rest/api/apimanagement/current-ga/private-endpoint-connection/list-private-link-resources).|
+|<a id="ref-iurl"></a>`IUrl`|`Host`: `string`<br /><br /> `Path`: `string`<br /><br /> `Port`: `int`<br /><br /> [`Query`](#ref-iurl-query): `IReadOnlyDictionary<string, string[]>`<br /><br /> `QueryString`: `string`<br /><br /> `Scheme`: `string`|
+|<a id="ref-iuseridentity"></a>`IUserIdentity`|`Id`: `string`<br /><br /> `Provider`: `string`|
+|<a id="ref-isubscriptionkeyparameternames"></a>`ISubscriptionKeyParameterNames`|`Header`: `string`<br /><br /> `Query`: `string`|
+|<a id="ref-iurl-query"></a>`string IUrl.Query.GetValueOrDefault(queryParameterName: string, defaultValue: string)`|`queryParameterName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated query parameter values or `defaultValue` if the parameter isn't found.|
+|<a id="ref-context-variables"></a>`T context.Variables.GetValueOrDefault<T>(variableName: string, defaultValue: T)`|`variableName`: `string`<br /><br /> `defaultValue`: `T`<br /><br /> Returns variable value cast to type `T` or `defaultValue` if the variable isn't found.<br /><br /> This method throws an exception if the specified type doesn't match the actual type of the returned variable.|
+|`BasicAuthCredentials AsBasic(input: this string)`|`input`: `string`<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization request header value, the method returns an object of type `BasicAuthCredentials`; otherwise the method returns `null`.|
+|`bool TryParseBasic(input: this string, result: out BasicAuthCredentials)`|`input`: `string`<br /><br /> `result`: `out BasicAuthCredentials`<br /><br /> If the input parameter contains a valid HTTP Basic Authentication authorization value in the request header, the method returns `true` and the result parameter contains a value of type `BasicAuthCredentials`; otherwise the method returns `false`.|
+|`BasicAuthCredentials`|`Password`: `string`<br /><br /> `UserId`: `string`|
+|`Jwt AsJwt(input: this string)`|`input`: `string`<br /><br /> If the input parameter contains a valid JWT token value, the method returns an object of type `Jwt`; otherwise the method returns `null`.|
+|`bool TryParseJwt(input: this string, result: out Jwt)`|`input`: `string`<br /><br /> `result`: `out Jwt`<br /><br /> If the input parameter contains a valid JWT token value, the method returns `true` and the result parameter contains a value of type `Jwt`; otherwise the method returns `false`.|
+|`Jwt`|`Algorithm`: `string`<br /><br /> `Audiences`: `IEnumerable<string>`<br /><br /> `Claims`: `IReadOnlyDictionary<string, string[]>`<br /><br /> `ExpirationTime`: `DateTime?`<br /><br /> `Id`: `string`<br /><br /> `Issuer`: `string`<br /><br /> `IssuedAt`: `DateTime?`<br /><br /> `NotBefore`: `DateTime?`<br /><br /> `Subject`: `string`<br /><br /> `Type`: `string`|
+|`string Jwt.Claims.GetValueOrDefault(claimName: string, defaultValue: string)`|`claimName`: `string`<br /><br /> `defaultValue`: `string`<br /><br /> Returns comma-separated claim values or `defaultValue` if the claim isn't found.|
+|`byte[] Encrypt(input: this byte[], alg: string, key:byte[], iv:byte[])`|`input` - plaintext to be encrypted<br /><br />`alg` - name of a symmetric encryption algorithm<br /><br />`key` - encryption key<br /><br />`iv` - initialization vector<br /><br />Returns the ciphertext.|
+|`byte[] Encrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm)`|`input` - plaintext to be encrypted<br /><br />`alg` - encryption algorithm<br /><br />Returns the ciphertext.|
+|`byte[] Encrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm, key:byte[], iv:byte[])`|`input` - plaintext to be encrypted<br /><br />`alg` - encryption algorithm<br /><br />`key` - encryption key<br /><br />`iv` - initialization vector<br /><br />Returns the ciphertext.|
+|`byte[] Decrypt(input: this byte[], alg: string, key:byte[], iv:byte[])`|`input` - ciphertext to be decrypted<br /><br />`alg` - name of a symmetric encryption algorithm<br /><br />`key` - encryption key<br /><br />`iv` - initialization vector<br /><br />Returns the plaintext.|
+|`byte[] Decrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm)`|`input` - ciphertext to be decrypted<br /><br />`alg` - encryption algorithm<br /><br />Returns the plaintext.|
+|`byte[] Decrypt(input: this byte[], alg: System.Security.Cryptography.SymmetricAlgorithm, key:byte[], iv:byte[])`|`input` - ciphertext to be decrypted<br /><br />`alg` - encryption algorithm<br /><br />`key` - encryption key<br /><br />`iv` - initialization vector<br /><br />Returns the plaintext.|
+|`bool VerifyNoRevocation(input: this System.Security.Cryptography.X509Certificates.X509Certificate2)`|Performs an X.509 chain validation without checking certificate revocation status.<br /><br />`input` - certificate object<br /><br />Returns `true` if the validation succeeds; `false` if the validation fails.|
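As an illustration of how these members appear inside policy expressions, here's a minimal sketch (not from the source article; the header name `X-Client-Id` and the echoed header are illustrative):

```xml
<inbound>
    <base />
    <!-- Read a request header, falling back to a default when it's absent (illustrative header name). -->
    <set-variable name="clientId" value="@(context.Request.Headers.GetValueOrDefault("X-Client-Id", "unknown"))" />
    <!-- Surface the API name from the context for downstream diagnostics (illustration only). -->
    <set-header name="X-Api-Name" exists-action="override">
        <value>@(context.Api.Name)</value>
    </set-header>
</inbound>
```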
## Next steps
api-management Api Management Sample Flexible Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-sample-flexible-throttling.md
Rate limits and quotas are used for different purposes.
### Rate limits

Rate limits are usually used to protect against short and intense volume bursts. For example, if you know your backend service has a bottleneck at its database with a high call volume, you could set a `rate-limit-by-key` policy to keep the call volume below that threshold.

### Quotas

Quotas are usually used for controlling call rates over a longer period of time. For example, they can set the total number of calls that a particular subscriber can make within a given month. For monetizing your API, quotas can also be set differently for tier-based subscriptions. For example, a Basic tier subscription might be able to make no more than 10,000 calls a month, but a Premium tier could go up to 100,000,000 calls each month.

Within Azure API Management, rate limits are typically propagated faster across the nodes to protect against spikes. In contrast, usage quota information is used over a longer term, and hence its implementation is different.
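As a minimal sketch of the two approaches (the limits, periods, and counter keys are illustrative values, not from this article):

```xml
<!-- Sketch: cap bursts at 100 calls per 60 seconds for each subscription (illustrative values). -->
<rate-limit-by-key calls="100" renewal-period="60" counter-key="@(context.Subscription.Id)" />

<!-- Sketch: cap total usage at 10,000 calls per 30 days for each subscription (illustrative values). -->
<quota-by-key calls="10000" renewal-period="2592000" counter-key="@(context.Subscription.Id)" />
```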
-> [!CAUTION]
-> Due to the distributed nature of throttling architecture, rate limiting is never completely accurate. The difference between the configured and the real number of allowed requests vary based on request volume and rate, backend latency, and other factors.
+
## Product-based throttling

Rate throttling capabilities that are scoped to a particular subscription are useful for the API provider to apply limits on the developers who have signed up to use their API. However, they don't help, for example, in throttling individual end users of the API. A single user of the developer's application could consume the entire quota and then prevent other customers of the developer from being able to use the application. Likewise, a few customers who generate a high volume of requests may limit access for occasional users.
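One way to address this, sketched below with illustrative values, is to key the limit on something finer-grained than the subscription, such as the caller's IP address:

```xml
<!-- Sketch: throttle each calling IP address independently (illustrative values). -->
<rate-limit-by-key calls="10" renewal-period="60" counter-key="@(context.Request.IpAddress)" />
```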
api-management Api Management Transformation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-transformation-policies.md
This article provides a reference for API Management policies used to transform
### Policy statement

```xml
-<json-to-xml apply="always | content-type-json" consider-accept-header="true | false" parse-date="true | false"/>
+<json-to-xml
+ apply="always | content-type-json"
+ consider-accept-header="true | false"
+ parse-date="true | false"
+ namespace-separator="separator character"
+ attribute-block-name="name" />
```

### Example
+Consider the following policy:
+
```xml
<policies>
    <inbound>
This article provides a reference for API Management policies used to transform
    </inbound>
    <outbound>
        <base />
- <json-to-xml apply="always" consider-accept-header="false" parse-date="false"/>
+ <json-to-xml apply="always" consider-accept-header="false" parse-date="false" namespace-separator=":" attribute-block-name="#attrs" />
    </outbound>
</policies>
```
+If the backend returns the following JSON:
+
+``` json
+{
+ "soapenv:Envelope": {
+ "xmlns:soapenv": "http://schemas.xmlsoap.org/soap/envelope/",
+ "xmlns:v1": "http://localdomain.com/core/v1",
+ "soapenv:Header": {},
+ "soapenv:Body": {
+ "v1:QueryList": {
+ "#attrs": {
+ "queryName": "test"
+ },
+ "v1:QueryItem": {
+ "name": "dummy text"
+ }
+ }
+ }
+ }
+}
+```
+
+The XML response to the client will be:
+
+``` xml
+<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://localdomain.com/core/v1">
+ <soapenv:Header />
+ <soapenv:Body>
+ <v1:QueryList queryName="test">
+ <name>dummy text</name>
+ </v1:QueryList>
+ </soapenv:Body>
+</soapenv:Envelope>
+```
+
### Elements

|Name|Description|Required|
This article provides a reference for API Management policies used to transform
|apply|The attribute must be set to one of the following values.<br /><br /> - always - always apply conversion.<br />- content-type-json - convert only if response Content-Type header indicates presence of JSON.|Yes|N/A|
|consider-accept-header|The attribute must be set to one of the following values.<br /><br /> - true - apply conversion if XML is requested in request Accept header.<br />- false - always apply conversion.|No|true|
|parse-date|When set to `false`, date values are simply copied during transformation.|No|true|
+|namespace-separator|The character to use as a namespace separator.|No|Underscore|
+|attribute-block-name|When set, properties inside the named object will be added to the element as attributes.|No|Not set|
### Usage

This policy can be used in the following policy [sections](./api-management-howto-policies.md#sections) and [scopes](./api-management-howto-policies.md#scopes).
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
App Service can also host web apps natively on Linux for supported application s
### Built-in languages and frameworks
-App Service on Linux supports a number of language specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (JRE 8 & JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container.
+App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include Node.js, Java (8, 11, and 17), Tomcat, PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires isn't supported in the built-in images, you can deploy it with a custom container.
Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful.
When an outdated runtime is hidden from the Portal, any of your existing sites u
If you need to create another web app with an outdated runtime version that is no longer shown on the Portal, see the language configuration guides for instructions on how to get the runtime version of your site. You can use the Azure CLI to create another site with the same runtime. Alternatively, you can use the **Export Template** button on the web app blade in the Portal to export an ARM template of the site. You can reuse this template to deploy a new site with the same runtime and configuration.
+#### Debian 9 End of Life
+
+On June 30th, 2022, Debian 9 (also known as "Stretch") will reach End-of-Life (EOL) status, which means security patches and updates will cease. As of June 2022, a platform update is rolling out to provide an upgrade path to Debian 11 (also known as "Bullseye"). The runtimes listed below are currently using Debian 9; if you're using one of them, follow the instructions below to upgrade your site to Bullseye.
+
+- Python 3.8
+- Python 3.7
+- .NET 3.1
+- PHP 7.4
+
+> [!NOTE]
+> To ensure customer applications are running on secure and supported Debian distributions, after February 2023 all Linux web apps still running on Debian 9 (Stretch) will be upgraded to Debian 11 (Bullseye) automatically.
+>
+
+##### Verify the platform update
+
+First, validate that the new platform update which contains Debian 11 has reached your site.
+
+1. Navigate to the SCM site (also known as the Kudu site) of your web app. You can browse to this site at `http://<your-site-name>.scm.azurewebsites.net/Env` (replace `<your-site-name>` with the name of your web app).
+1. Under "Environment Variables", search for `PLATFORM_VERSION`. The value of this environment variable is the current platform version of your web app.
+1. If the value of `PLATFORM_VERSION` starts with "99" or greater, then your site is on the latest platform update and you can continue to the section below. If the value does **not** show "99" or greater, then your site hasn't yet received the latest platform update. Check again at a later date.
+
+Next, create a deployment slot to test that your application works properly with Debian 11 before applying the change to production.
+
+1. [Create a deployment slot](deploy-staging-slots.md#add-a-slot) if you do not already have one, and clone your settings from the production slot. A deployment slot will allow you to safely test changes to your application (such as upgrading to Debian 11) and swap those changes into production after review.
+1. To upgrade to Debian 11 (Bullseye), create an app setting on your slot named `ORYX_DEFAULT_OS` with a value of `bullseye`.
+
+ ```bash
    # Replace <slot-name> with the name of your deployment slot; the setting targets the slot, not production.
    az webapp config appsettings set -g MyResourceGroup -n MyUniqueApp --slot <slot-name> --settings ORYX_DEFAULT_OS=bullseye
+ ```
+1. Deploy your application to the deployment slot using the tool of your choice (VS Code, Azure CLI, GitHub Actions, etc.)
+1. Confirm your application is functioning as expected in the deployment slot.
+1. [Swap your production and staging slots](deploy-staging-slots.md#swap-two-slots). This applies the `ORYX_DEFAULT_OS=bullseye` app setting to production. (A CLI sketch for the slot operations follows this list.)
+1. Delete the deployment slot if you are no longer using it.
+
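For reference, the slot operations above can also be scripted with the Azure CLI. A sketch using the same assumed resource names as the app-settings example (the slot name `staging` is an assumption):

```bash
# Create a staging slot that clones the production configuration (assumed names).
az webapp deployment slot create -g MyResourceGroup -n MyUniqueApp --slot staging --configuration-source MyUniqueApp

# After verifying the app on the slot, swap it into production.
az webapp deployment slot swap -g MyResourceGroup -n MyUniqueApp --slot staging --target-slot production
```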
+##### Resources
+
+- [Debian Long Term Support schedule](https://wiki.debian.org/LTS)
+- [Debian 11 (Bullseye) Release Notes](https://www.debian.org/releases/bullseye/)
+- [Debian 9 (Stretch) Release Notes](https://www.debian.org/releases/stretch/)
+ ### Limitations > [!NOTE]
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
app-service Samples Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-bicep.md
To learn about the Bicep syntax and properties for App Services resources, see [
|-|-| | [App Service plan and basic Linux app](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-linux) | Deploys an App Service app that is configured for Linux. | | [App Service plan and basic Windows app](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-basic-windows) | Deploys an App Service app that is configured for Windows. |
-| [Website with container](https://github.com/Azure/bicep/tree/main/docs/examples/101/website-with-container) | Deploys a website from a docker image configured for Linux. |
| **Configuring an app** | **Description** |
-| [App with conditional logging](https://github.com/Azure/bicep/tree/main/docs/examples/201/web-app-conditional-log)| Deploys an App Service app with a conditional for logging enablement. |
| [App with log analytics module](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-loganalytics)| Deploys an App Service app with log analytics. | | [App with regional VNet integration](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/app-service-regional-vnet-integration)| Deploys an App Service app with regional VNet integration enabled. | |**App with connected resources**| **Description** |
-| [App with CosmosDB](https://github.com/Azure/bicep/tree/main/docs/examples/101/cosmosdb-webapp)| Deploys an App Service app on Linux with CosmosDB. |
+| [App with CosmosDB](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.documentdb/cosmosdb-webapp)| Deploys an App Service app on Linux with CosmosDB. |
| [App with MySQL](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-managed-mysql)| Deploys an App Service app on Windows with Azure Database for MySQL. | | [App with a database in Azure SQL Database](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/web-app-sql-database)| Deploys an App Service app and a database in Azure SQL Database at the Basic service level. | | [App connected to a backend webapp](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection)| Deploys two web apps (frontend and backend) securely connected together with VNet injection and Private Endpoint. |
application-gateway Create Url Route Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/create-url-route-portal.md
Title: 'Tutorial: URL path-based routing rules using portal - Azure Application Gateway'
+ Title: 'Tutorial: Create an application gateway with URL path-based routing rules using Azure portal'
description: In this tutorial, you learn how to create URL path-based routing rules for an application gateway and virtual machine scale set using the Azure portal. Previously updated : 02/23/2021 Last updated : 07/08/2022 + #Customer intent: As an IT administrator, I want to use the Azure portal to set up an application gateway so I can route my app traffic based on path-based routing rules.
In this article, you learn how to:
> * Create a backend listener
> * Create a path-based routing rule
-![URL routing example](./media/application-gateway-create-url-route-portal/scenario.png)
- [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
## Prerequisites
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An Azure subscription
+
## Create virtual machines
In this example, you create three virtual machines to be used as backend servers
## Create an application gateway
-1. Select **Create a resource** on the left menu of the Azure portal. The **New** window appears.
+1. Select **Create a resource** on the left menu of the Azure portal.
2. Select **Networking** and then select **Application Gateway** in the **Featured** list.
In this example, you create three virtual machines to be used as backend servers
   - **Subscription**: Select your subscription.
   - **Resource group**: Select **myResourceGroupAG** for the resource group.
   - **Application gateway name**: Type *myAppGateway* for the name of the application gateway.
- - **Region** - Select **(US) East US**.
-
- ![Create new application gateway: Basics](./media/application-gateway-create-gateway-portal/application-gateway-create-basics.png)
+ - **Region** - Select **East US**.
2. Under **Configure virtual network**, select **myVNet** for the name of the virtual network.
3. Select **myAGSubnet** for the subnet.
4. Accept the default values for the other settings and then select **Next: Frontends**.
+ :::image type="content" source="./media/create-url-route-portal/application-gateway-create-basics.png" alt-text="Screenshot of Basics tab of Create application gateway page.":::
+
### Frontends tab

1. On the **Frontends** tab, verify **Frontend IP address type** is set to **Public**.
Review the settings on the **Review + create** tab, and then select **Create** t
The listener on port 8080 routes this request to the default backend pool.
-3. Change the URL to *http://&lt;ip-address&gt;:8080/images/test.htm*, replacing &lt;ip-address&gt; with your IP address, and you should see something like the following example:
+3. Change the URL to *http://&lt;ip-address&gt;:8080/images/test.htm*, replacing &lt;ip-address&gt; with the public IP address of **myAppGateway**, and you should see something like the following example:
   ![Test images URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest-images.png)

   The listener on port 8080 routes this request to the *Images* backend pool.
-4. Change the URL to *http://&lt;ip-address&gt;:8080/video/test.htm*, replacing &lt;ip-address&gt; with your IP address, and you should see something like the following example:
+4. Change the URL to *http://&lt;ip-address&gt;:8080/video/test.htm*, replacing &lt;ip-address&gt; with the public IP address of **myAppGateway**, and you should see something like the following example:
![Test video URL in application gateway](./media/application-gateway-create-url-route-portal/application-gateway-iistest-video.png)
When no longer needed, delete the resource group and all related resources. To d
## Next steps
+In this tutorial, you created an application gateway with a path-based routing rule.
+
+To learn more about path-based routing in Application Gateways, see [URL path-based routing overview](url-route-overview.md).
+
+To learn how to create and configure an Application Gateway to redirect web traffic using the Azure CLI, advance to the next tutorial.
+ > [!div class="nextstepaction"]
-> [Enable end to end TLS on Azure Application Gateway](./ssl-overview.md)
+> [Redirect web traffic](tutorial-url-redirect-cli.md)
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-arc Troubleshoot Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/troubleshoot-guide.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
This article identifies troubleshooting resources for Azure Arc-enabled data services.
+## Uploads
-## Logs Upload related errors
+### Logs upload related errors
If you deployed the Azure Arc data controller in the `direct` connectivity mode using `kubectl`, and have not created a secret for the Log Analytics workspace credentials, you may see the following error messages in the Data Controller CR (Custom Resource):
type: Opaque
```
-## Metrics upload related errors in direct connected mode
+### Metrics upload related errors in direct connected mode
If you configured automatic upload of metrics in the direct connected mode, and the permissions needed for the MSI have not been properly granted (as described in [Upload metrics](upload-metrics.md)), you might see an error in your logs as follows:
If you configured automatic upload of metrics, in the direct connected mode and
To resolve the above error, retrieve the MSI for the Azure Arc data controller extension, and grant the required roles as described in [Upload metrics](upload-metrics.md).
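As a sketch with the Azure CLI (the role name and scope shown are assumptions based on the metrics upload requirements; treat [Upload metrics](upload-metrics.md) as the authoritative reference):

```console
az role assignment create --assignee <extension-msi-object-id> --role "Monitoring Metrics Publisher" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>
```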
-## Usage upload related errors in direct connected mode
+### Usage upload related errors in direct connected mode
If you deployed your Azure Arc data controller in the direct connected mode, the permissions needed to upload your usage information are automatically granted for the Azure Arc data controller extension MSI. If the automatic upload process runs into permissions-related issues, you might see an error in your logs as follows:
identified that your data controller stopped uploading usage data to Azure. The
To resolve the permissions issue, retrieve the MSI and grant the required roles as described in [Upload metrics](upload-metrics.md).
+## Upgrades
+
+### Incorrect image tag
+
+If you're using the `az` CLI to upgrade and you pass in an incorrect image tag, you'll see an error within two minutes.
+
+```output
+Job Still Active : Failed to await bootstrap job complete after retrying for 2 minute(s).
+Failed to await bootstrap job complete after retrying for 2 minute(s).
+```
+
+When you view the pods, you'll see the bootstrap job status as `ErrImagePull`.
+
+```output
+STATUS
+ErrImagePull
+```
+
+When you describe the pod, you'll see:
+
+```output
+Failed to pull image "<registry>/<repository>/arc-bootstrapper:<incorrect image tag>": [rpc error: code = NotFound desc = failed to pull and unpack image
+```
+
+To resolve, find the correct image tag in the [Version log](version-log.md), and re-run the upgrade command with that tag.
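As a sketch, the retry with the `az arcdata` extension might look like the following (indirect mode shown; the tag placeholder must come from the version log):

```console
az arcdata dc upgrade --desired-version <tag-from-version-log> --k8s-namespace <namespace> --use-k8s
```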
+
+### Unable to connect to registry or repository
+
+If you're trying to upgrade and the upgrade job hasn't produced an error but runs for longer than fifteen minutes, you can view the progress of the upgrade by watching the pods. Run:
+
+```console
+kubectl get pods -n <namespace>
+```
+
+When you view the pods, you'll see the bootstrap job status as `ErrImagePull`.
+
+```output
+STATUS
+ErrImagePull
+```
+
+Describe the bootstrap job pod to view the Events.
+
+```console
+kubectl describe pod <pod name> -n <namespace>
+```
+
+When you describe the pod, you'll see an error that says:
+
+```output
+failed to resolve reference "<registry>/<repository>/arc-bootstrapper:<image tag>"
+```
+
+This error is common if your image was deployed from a private registry, you're using Kubernetes to upgrade via a YAML file, and the YAML file references mcr.microsoft.com instead of the private registry. To resolve, cancel the upgrade job. To find the registry you deployed from, run:
+
+```console
+kubectl describe pod <controller in format control-XXXXX> -n <namespace>
+```
+
+Look for `Containers.controller.Image`, where you'll see the registry and repository. Capture those values, enter them into your YAML file, and re-run the upgrade.
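Alternatively, a jsonpath query can pull the image reference directly (a sketch; the container name `controller` is an assumption here):

```console
kubectl get pod <controller pod name> -n <namespace> -o jsonpath='{.spec.containers[?(@.name=="controller")].image}'
```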
+
+### Not enough resources
+
+If you're trying to upgrade and the upgrade job hasn't produced an error but runs for longer than fifteen minutes, you can view the progress of the upgrade by watching the pods. Run:
+
+```console
+kubectl get pods -n <namespace>
+```
+
+Look for a pod where some, but not all, of the containers are ready. For example, this metricsdb-0 pod has only one of two containers ready:
+
+```output
+NAME READY STATUS RESTARTS AGE
+bootstrapper-848f8f44b5-7qxbx 1/1 Running 0 16m
+control-7qxw8 2/2 Running 0 16m
+controldb-0 2/2 Running 0 16m
+logsdb-0 3/3 Running 0 18d
+logsui-hvsrm 3/3 Running 0 18d
+metricsdb-0 1/2 Running 0 18d
+```
+
+Describe the pod to see Events.
+
+```console
+kubectl describe pod <pod name> -n <namespace>
+```
+
+If there are no events, get the container names and view the logs for the containers.
+
+```console
+kubectl get pods <pod name> -n <namespace> -o jsonpath='{.spec.containers[*].name}'
+
+kubectl logs <pod name> <container name> -n <namespace>
+```
+
+If you see a message about insufficient CPU or memory, you should add more nodes to your Kubernetes cluster, or add more resources to your existing nodes.
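To gauge how much headroom each node has before adding capacity, one option is the following sketch (output format varies by Kubernetes version):

```console
kubectl describe nodes | grep -A 8 "Allocated resources"
```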
## Resources by type
azure-arc Upgrade Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-cli.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
The upgrade is a two-part process. First the controller is upgraded, then the mo
Ready
```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Data Controller Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-direct-portal.md
Previously updated : 05/31/2022 Last updated : 07/07/2022
To view the status of your upgrade in the portal, go to the resource group of th
You will see a "Validate Deploy" option that shows the status.
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Data Controller Indirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-cli.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
When the upgrade is complete, the output will be:
Ready
```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
monitorstack Updating 41m
monitorstack Ready 41m ```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-cli.md
Previously updated : 11/03/2021 Last updated : 07/07/2022
Status:
State: Ready
```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-cli.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
Status:
State: Ready
```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-direct-portal.md
Previously updated : 05/27/2022 Last updated : 07/07/2022
To view the status of your upgrade in the portal, go to the resource group of th
A **Validate Deploy** option shows the status.
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Upgrade Sql Managed Instance Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-indirect-kubernetes-tools.md
Previously updated : 11/08/2021 Last updated : 07/07/2022
Status:
State: Ready
```
-## Troubleshoot upgrade problems
-
-If you encounter any troubles with upgrading, see the [troubleshooting guide](troubleshoot-guide.md).
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022 #
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled Kubernetes description: Sample Azure Resource Graph queries for Azure Arc-enabled Kubernetes showing use of resource types and tables to access Azure Arc-enabled Kubernetes related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
Title: VM extension management with Azure Arc-enabled servers description: Azure Arc-enabled servers can manage deployment of virtual machine extensions that provide post-deployment configuration and automation tasks with non-Azure VMs. Previously updated : 12/21/2021 Last updated : 07/01/2022
Arc-enabled servers support moving machines with one or more VM extensions insta
|Azure Monitor Agent |Microsoft.Azure.Monitor |AzureMonitorWindowsAgent |[Install the Azure Monitor agent](../../azure-monitor/agents/azure-monitor-agent-manage.md) |
|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute |HybridWorkerForWindows |[Deploy an extension-based User Hybrid Runbook Worker](../../automation/extension-based-hybrid-runbook-worker-install.md) to execute runbooks locally. |
|Azure Extension for SQL Server |Microsoft.AzureData |WindowsAgent.SqlServer |[Install Azure extension for SQL Server](/sql/sql-server/azure-arc/connect#initiate-the-connection-from-azure) to initiate SQL Server connection to Azure. |
+|Windows Admin Center (preview) |Microsoft.AdminCenter |Admin Center |[Manage Azure Arc-enabled Servers using Windows Admin Center in Azure](/windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines) |
### Linux extensions
The following extensions are available for Windows and Linux machines:
### Windows extension availability
-|Operating system |Azure Monitor agent |Log Analytics agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Connected Machine agent |
-|--|--|--|--|-|--|-||-||
-|Windows Server 2019 |X |X |X |X |X |X | |X |
-|Windows Server 2019 Core |X | | |X |X |X |X | |X |
+|Operating system |Azure Monitor agent |Log Analytics agent |Dependency VM Insights |Qualys |Custom Script |Key Vault |Hybrid Runbook |Antimalware Extension |Windows Admin Center |
+|--|--|--|--|--|--|--|--|--|--|
+|Windows Server 2022 |X |X |X |X |X | |X | |X |
+|Windows Server 2019 |X |X |X |X |X |X | | |X |
|Windows Server 2016 |X |X |X |X |X |X |X |Built-in |X |
-|Windows Server 2016 Core |X | | |X |X |X | |Built-in |X |
-|Windows Server 2012 R2 |X |X |X |X |X | |X |X |X |
-|Windows Server 2012 |X |X |X |X |X |X |X |X |X |
+|Windows Server 2012 R2 |X |X |X |X |X | |X |X | |
+|Windows Server 2012 |X |X |X |X |X |X |X |X | |
|Windows Server 2008 R2 SP1 |X |X |X |X |X | |X |X | |
-|Windows Server 2008 R2 | | | |X |X | |X |X |X |
+|Windows Server 2008 R2 | | | |X |X | |X |X | |
|Windows Server 2008 SP2 | |X | |X |X | |X | | |
|Windows 11 client OS |X | | |X | | | | | |
-|Windows 10 1803 (RS4) and higher |X | | |X |X | | | |X |
-|Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) |X |X |X |X |X | |X | |X |
+|Windows 10 1803 (RS4) and higher |X | | |X |X | | | | |
+|Windows 10 Enterprise (including multi-session) and Pro (Server scenarios only) |X |X |X |X |X | |X | | |
|Windows 8 Enterprise and Pro (Server scenarios only) | |X |X |X | | |X | | |
|Windows 7 SP1 (Server scenarios only) | |X |X |X | | |X | | |
-|Azure Stack HCI (Server scenarios only) | |X | |X | | |X | |X |
+|Azure Stack HCI (Server scenarios only) | |X | |X | | |X | | |
### Linux extension availability
azure-arc Onboard Ansible Playbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-ansible-playbooks.md
If you are onboarding machines to Azure Arc-enabled servers, copy the following
tasks:
  - name: Check if the Connected Machine Agent has already been downloaded on Linux servers
    stat:
- path: /usr/bin/azvmagent
+ path: /usr/bin/azcmagent
      get_attributes: False
      get_checksum: False
      get_mime: False
    register: azcmagent_downloaded
If you are onboarding machines to Azure Arc-enabled servers, copy the following
    when: (ansible_os_family == 'Windows') and (not azcmagent_downloaded.stat.exists)
  - name: Install the Connected Machine Agent on Linux servers
    become: yes
- shell: bash ~/install_linux_azcmagent.sh
+ command:
+ cmd: bash ~/install_linux_azcmagent.sh
    when: (ansible_system == 'Linux') and (not azcmagent_downloaded.stat.exists)
  - name: Install the Connected Machine Agent on Windows servers
    win_package:
If you are onboarding machines to Azure Arc-enabled servers, copy the following
  - name: Check if the Connected Machine Agent has already been connected
    become: true
    command:
- cmd: azcmagent show --join
+ cmd: azcmagent show
    register: azcmagent_connected
  - name: Connect the Connected Machine Agent on Linux servers to Azure Arc
    become: yes
- shell: sudo azcmagent connect --service-principal-id {{ azure.service_principal_id }} --service-principal-secret {{ azure.service_principal_secret }} --resource-group {{ azure.resource_group }} --tenant-id {{ azure.tenant_id }} --location {{ azure.location }} --subscription-id {{ azure.subscription_id }}
+ command:
+ cmd: azcmagent connect --service-principal-id {{ azure.service_principal_id }} --service-principal-secret {{ azure.service_principal_secret }} --resource-group {{ azure.resource_group }} --tenant-id {{ azure.tenant_id }} --location {{ azure.location }} --subscription-id {{ azure.subscription_id }}
    when: (azcmagent_connected.rc == 0) and (ansible_system == 'Linux')
  - name: Connect the Connected Machine Agent on Windows servers to Azure
    win_shell: '& $env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe connect --service-principal-id "{{ azure.service_principal_id }}" --service-principal-secret "{{ azure.service_principal_secret }}" --resource-group "{{ azure.resource_group }}" --tenant-id "{{ azure.tenant_id }}" --location "{{ azure.location }}" --subscription-id "{{ azure.subscription_id }}"'
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc-enabled servers description: Sample Azure Resource Graph queries for Azure Arc-enabled servers showing use of resource types and tables to access Azure Arc-enabled servers related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
azure-cache-for-redis Cache How To Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-import-export-data.md
Title: Import and Export data in Azure Cache for Redis description: Learn how to import and export data to and from blob storage with your premium Azure Cache for Redis instances - Last updated 06/07/2022
Import/Export is an Azure Cache for Redis data management operation. It allows y
Import/Export enables you to migrate between different Azure Cache for Redis instances or populate the cache with data before use.
-This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions.
+This article provides a guide for importing and exporting data with Azure Cache for Redis and provides the answers to commonly asked questions.
-> [!IMPORTANT]
-> Import/Export is only available for [Premium tier](cache-overview.md#service-tiers) caches.
+For information on which Azure Cache for Redis tiers support import and export, see [feature comparison](cache-overview.md#feature-comparison).
## Import
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-fluid-relay Azure Function Token Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/how-tos/azure-function-token-provider.md
fluid.url: https://fluidframework.com/docs/build/tokenproviders/
> [!NOTE] > This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
-In the [Fluid Framework](https://fluidframework.com/), TokenProviders are responsible for creating and signing tokens that the `@fluidframework/azure-client` uses to make requests to the Azure Fluid Relay service. The Fluid Framework provides a simple, insecure TokenProvider for development purposes, aptly named **InsecureTokenProvider**. Each Fluid service must implement a custom TokenProvider based on the particulars service's authentication and security considerations.
+In the [Fluid Framework](https://fluidframework.com/), TokenProviders are responsible for creating and signing tokens that the `@fluidframework/azure-client` uses to make requests to the Azure Fluid Relay service. The Fluid Framework provides a simple, insecure TokenProvider for development purposes, aptly named **InsecureTokenProvider**. Each Fluid service must implement a custom TokenProvider based on the particular service's authentication and security considerations.
Each Azure Fluid Relay resource you create is assigned a **tenant ID** and its own unique **tenant secret key**. The secret key is a **shared secret**. Your app/service knows it, and the Azure Fluid Relay service knows it. TokenProviders must know the secret key to sign requests, but the secret key cannot be included in client code.
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Delphi Technology Solutions](https://delphi-ts.com/)| |[Developing Today LLC](https://www.developingtoday.net/)| |[DevHawk, LLC](https://www.devhawk.io)|
-|[Diamond Capture Associates LLC]|
+|Diamond Capture Associates LLC|
|[Diffeo, Inc.](https://diffeo.com)| |[DirectApps, Inc. D.B.A. Direct Technology](https://directtechnology.com)| |[DominionTech Inc.](https://www.dominiontech.com)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Exbabylon IT Solutions](https://www.exbabylon.com)| |[Executive Information Systems, LLC](https://www.execinfosys.com)| |[FI Consulting](https://www.ficonsulting.com/)|
-|[Firstworld USA DBA Terminal](https://www.terminal.com/)|
+|Firstworld USA DBA Terminal|
|[FCN, Inc.](https://fcnit.com)| |[Federal Resources Corporation FRC](https://fedresources.com/)| |[FMT Consultants](https://www.fmtconsultants.com/)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[Nihilent Inc](https://nihilent.com)| |[Nimbus Logic LLC](https://www.nimbus-logic.com)| |[Norseman, Inc](https://www.norseman.com)|
+|[Nortec](https://www.nortec.com)|
|[Northrop Grumman](https://www.northropgrumman.com)| |[NTS Cloud](http://ntscloud.com/ )| |[NTT America, Inc.](https://www.us.ntt.net)|
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[General Dynamics Information Technology](https://www.gdit.com)| |[Golden Five LLC](https://www.goldenfiveconsulting.com/)| |[Hypori, Inc.](https://hypori.com/)|
-|[Imager Software, Inc dba ISC]|
+|Imager Software, Inc dba ISC|
|[Impact Networking, LLC](https://www.impactmybiz.com/)| |[IBM Corp.](https://www.ibm.com/industries/government)| |[Jackpine Technologies](https://www.jackpinetech.com)|
azure-monitor Data Retention Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/data-retention-privacy.md
When you install [Azure Application Insights][start] SDK in your app, it sends t
First, the short answer: * The standard telemetry modules that run "out of the box" are unlikely to send sensitive data to the service. The telemetry is concerned with load, performance and usage metrics, exception reports, and other diagnostic data. The main user data visible in the diagnostic reports are URLs; but your app shouldn't in any case put sensitive data in plain text in a URL.
-* You can write code that sends additional custom telemetry to help you with diagnostics and monitoring usage. (This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code so that it includes personal and other sensitive data. If your application works with such data, you should apply a thorough review process to all the code you write.
+* You can write code that sends more custom telemetry to help you with diagnostics and monitoring usage. (This extensibility is a great feature of Application Insights.) It would be possible, by mistake, to write this code so that it includes personal and other sensitive data. If your application works with such data, you should apply a thorough review process to all the code you write.
* While developing and testing your app, it's easy to inspect what's being sent by the SDK. The data appears in the debugging output windows of the IDE and browser. * You can select the location when you create a new Application Insights resource. Know more about Application Insights availability per region [here](https://azure.microsoft.com/global-infrastructure/services/?products=all).
-* Review the collected data, as this may include data that is allowed in some circumstances but not others. A good example of this is Device Name. The device name from a server has no privacy impact and is useful, but a device name from a phone or laptop may have a privacy impact and be less useful. An SDK developed primarily to target servers, would collect device name by default, and this may need to be overwritten in both normal events and exceptions.
+* Review the collected data, as it may include data that's allowed in some circumstances but not others. A good example is the device name: the device name from a server has no privacy impact and is useful, but a device name from a phone or laptop may have privacy implications and be less useful. An SDK developed primarily to target servers would collect the device name by default, and this default may need to be overridden in both normal events and exceptions.
The rest of this article elaborates more fully on these answers. It's designed to be self-contained, so that you can show it to colleagues who aren't part of your immediate team. ## What is Application Insights?
-[Azure Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you, for example, what times of day you get most users, how responsive the app is, and how well it is served by any external services that it depends on. If there are crashes, failures or performance issues, you can search through the telemetry data in detail to diagnose the cause. And the service will send you emails if there are any changes in the availability and performance of your app.
+[Azure Application Insights][start] is a service provided by Microsoft that helps you improve the performance and usability of your live application. It monitors your application all the time it's running, both during testing and after you've published or deployed it. Application Insights creates charts and tables that show you, for example, what times of day you get most users, how responsive the app is, and how well it's served by any external services that it depends on. If there are crashes, failures or performance issues, you can search through the telemetry data in detail to diagnose the cause. And the service will send you emails if there are any changes in the availability and performance of your app.
In order to get this functionality, you install an Application Insights SDK in your application, which becomes part of its code. When your app is running, the SDK monitors its operation and sends telemetry to the Application Insights service. This is a cloud service hosted by [Microsoft Azure](https://azure.com). (But Application Insights works for any applications, not just applications that are hosted in Azure.)
There are three sources of data:
* The SDK, which you integrate with your app either [in development](./asp-net.md) or [at run time](./status-monitor-v2-overview.md). There are different SDKs for different application types. There's also an [SDK for web pages](./javascript.md), which loads into the end user's browser along with the page.
- * Each SDK has a number of [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry.
+ * Each SDK has many [modules](./configuration-with-applicationinsights-config.md), which use different techniques to collect different types of telemetry.
* If you install the SDK in development, you can use its API to send your own telemetry, in addition to the standard modules. This custom telemetry can include any data you want to send. * In some web servers, there are also agents that run alongside the app and send telemetry about CPU, memory, and network occupancy. For example, Azure VMs, Docker hosts, and [Java EE servers](java-2x-agent.md) can have such agents. * [Availability tests](./monitor-web-app-availability.md) are processes run by Microsoft that send requests to your web app at regular intervals. The results are sent to the Application Insights service.
azure-monitor Monitor Web App Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-web-app-availability.md
The name *URL ping test* is a bit of a misnomer. These tests don't use Internet
To create an availability test, you need to use an existing Application Insights resource or [create an Application Insights resource](create-new-resource.md). > [!NOTE]
-> URL ping tests are categorized as classic tests. You can find them under **Add Classic Test** on the **Availability** pane. For more advanced features, see [Standard tests (preview)](availability-standard-tests.md).
+> URL ping tests are categorized as classic tests. You can find them under **Add Classic Test** on the **Availability** pane. For more advanced features, see [Standard tests](availability-standard-tests.md).
## Create a test
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-monitor Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Monitor description: Sample Azure Resource Graph queries for Azure Monitor showing use of resource types and tables to access Azure Monitor related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
azure-monitor Workbooks Commonly Used Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-commonly-used-components.md
+
+ Title: Sample Azure Workbooks components
+description: See sample Azure workbook components
+Last updated : 07/05/2022
+# Common Workbook use cases
+This article includes commonly used Azure Workbook components and instructions for how to implement them.
+
+## Traffic light icons
+
+You may want to summarize status using a simple visual indication instead of presenting the full range of data values. For example, you may want to categorize your computers by CPU utilization as Cold/Warm/Hot or categorize performance as satisfied/tolerating/frustrated. You can do this by showing an indicator or icon representing the status next to the underlying metric.
++
+The example below shows how to set up a traffic light icon per computer based on the CPU utilization metric.
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. [Add a parameter](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook), make it a [time range parameter](workbooks-time.md), and name it **TimeRange**.
+1. Select **Add query** to add a log query control to the workbook.
+1. Select the `log` query type, a `Log Analytics` resource type, and a Log Analytics workspace in your subscription that has VM performance data as a resource.
+1. In the Query editor, enter:
+ ```
+ Perf
+ | where ObjectName == 'Processor' and CounterName == '% Processor Time'
+ | summarize Cpu = percentile(CounterValue, 95) by Computer
+ | join kind = inner (Perf
+ | where ObjectName == 'Processor' and CounterName == '% Processor Time'
+ | make-series Trend = percentile(CounterValue, 95) default = 0 on TimeGenerated from {TimeRange:start} to {TimeRange:end} step {TimeRange:grain} by Computer
+ ) on Computer
+ | project-away Computer1, TimeGenerated
+ | order by Cpu desc
+ ```
+1. Set the visualization to `Grid`.
+1. Select **Column Settings**.
+1. In the **Columns** section:
+ - _Cpu -_ Column renderer: `Thresholds`, Custom number formatting: `checked`, Units: `Percentage`, Threshold settings (the last two must be in this order):
+ - Icon: `Success`, Operator: `Default`
+ - Icon: `Critical`, Operator: `>`, Value: `80`
+ - Icon: `Warning`, Operator: `>`, Value: `60`
+ - _Trend -_ Column renderer: `Spark line`, Color palette: `Green to Red`, Minimum value: `60`, Maximum value: `80`
+9. Select **Save and Close** to commit changes.
+++
+You can also pin this grid to a dashboard using the **Pin to dashboard** button in the toolbar. The pinned grid automatically binds to the time range in the dashboard.
++
+## Capturing user input to use in a query
+
+You may want to capture user input using drop-down lists and use the selection in your queries. For example, you can have a drop-down to accept a set of virtual machines and then filter your KQL to include just the selected machines. In most cases, this is as simple as including the parameter's value in the query:
+
+```kusto
+ Perf
+ | where Computer in ({Computers})
+ | take 5
+```
+
+In more advanced scenarios, you may need to transform the parameter results before they can be used in queries. Take this OData filter payload:
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'Android' or OSFamily eq 'OS X') and (ComplianceState eq 'Compliant')"
+}
+```
+
+The following example shows how to enable this scenario: let's say you want the values of the `OSFamily` and `ComplianceState` filters to come from drop-downs in the workbook. The filter could include multiple values, as in the `OSFamily` case above. It also needs to support the case where the user wants to include all dimension values, that is, with no filters.
+
+### Set up the parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md) and [add a parameter component](workbooks-create-workbook.md#add-a-parameter-to-an-azure-workbook).
+1. Select **Add parameter** to create a new parameter. Use the following settings:
+ - Parameter name: `OsFilter`
+ - Display name: `Operating system`
+ - Parameter type: `drop-down`
+ - Allow multiple selections: `Checked`
+ - Delimiter: `or` (with spaces before and after)
+ - Quote with: `<empty>`
+ - Get data from: `JSON`
+ - Json Input
+ ```json
+ [
+ { "value": "OSFamily eq 'Android'", "label": "Android" },
+ { "value": "OSFamily eq 'OS X'", "label": "OS X" }
+ ]
+ ```
+ - In the **Include in the drop-down** section:
+ - Select **All**
+ - Select All Value: `OSFamily ne '#@?'`
+ - Select **Save** in the toolbar to save this parameter.
+1. Add another parameter with these settings:
+ - Parameter name: `ComplianceStateFilter`
+ - Display name: `Compliance State`
+ - Parameter type: `drop-down`
+ - Allow multiple selections: `Checked`
+ - Delimiter: `or` (with spaces before and after)
+ - Quote with: `<empty>`
+ - Get data from: `JSON`
+ - Json Input
+ ```json
+ [
+ { "value": "ComplianceState eq 'Compliant'", "label": "Compliant" },
+ { "value": "ComplianceState eq 'Non-compliant'", "label": "Non compliant" }
+ ]
+ ```
+ - In the **Include in the drop-down** section:
+ - Select **All**
+ - Select All Value: `ComplianceState ne '#@?'`
+ - Select **Save** in the toolbar to save this parameter.
+
+1. Select **Add text** to add a text block. In the `Markdown text to display` block, add:
+ ```json
+ {
+ "name": "deviceComplianceTrend",
+ "filter": "({OsFilter}) and ({ComplianceStateFilter})"
+ }
+ ```
+
+ This screenshot shows the parameter settings:
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-odata-parameters-settings.png" alt-text="Screenshot showing parameter settings for drop-down lists with parameter values.":::
+
+### Single Filter Value
+The simplest case is the selection of a single filter value in each of the dimensions. The drop-down control uses the JSON input field's value as the parameter's value.
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X') and (ComplianceState eq 'Compliant')"
+}
+```
++
+### Multiple Filter Values
+If the user chooses multiple filter values (for example, both the Android and OS X operating systems), the parameter's `Delimiter` and `Quote with` settings kick in and produce this compound filter:
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X' or OSFamily eq 'Android') and (ComplianceState eq 'Compliant')"
+}
+```
++
+### No Filter Case
+Another common case is having no filter for that dimension, which is equivalent to including all values of the dimension in the result set. The way to enable it is by having an `All` option on the drop-down and having it return a filter expression that always evaluates to `true` (for example, _ComplianceState ne '#@?'_).
+
+```json
+{
+ "name": "deviceComplianceTrend",
+ "filter": "(OSFamily eq 'OS X' or OSFamily eq 'Android') and (ComplianceState ne '#@?')"
+}
+```
+
+## Reusing query data in different visualizations
+
+There are times when you want to visualize the underlying data set in different ways without having to pay the cost of the query each time. This sample shows you how to do so using the `Merge` option in the query control.
+
+### Set up the parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. Select **Add query** to create a query control, and enter these values:
+ - Data source: `Logs`
+ - Resource type: `Log Analytics`
+ - Log Analytics workspace: _Pick one of your workspaces that has performance data_
+ - Log Analytics workspace Logs Query
+ ```kusto
+ Perf
+ | where CounterName == '% Processor Time'
+ | summarize CpuAverage = avg(CounterValue), CpuP95 = percentile(CounterValue, 95) by Computer
+ | order by CpuAverage desc
+ ```
+1. Select **Run Query** to see the results.
+
+ This is the result data set that we want to reuse in multiple visualizations.
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-reuse-data-resultset.png" alt-text="Screenshot showing the result of a workbooks query." lightbox="media/workbooks-commonly-used-components/workbooks-reuse-data-resultset.png":::
+
+1. Go to the `Advanced settings` tab, and for the name, enter `Cpu data`.
+1. Select **Add query** to create another query control.
+1. For the **Data source**, select `Merge`.
+1. Select **Add Merge**.
+1. In the settings pop-up, set:
+ - Merge Type: `Duplicate table`
+ - Table: `Cpu data`
+1. Select **Run Merge** in the toolbar. You will get the same result as above:
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-reuse-data-duplicate.png" alt-text=" Screenshot showing duplicate query results in a workbook." lightbox="media/workbooks-commonly-used-components/workbooks-reuse-data-duplicate.png":::
+
+1. Set the table options:
+ - Use the `Name After Merge` column to set friendly names for your result columns. For example, you can rename `CpuAverage` to `CPU utilization (avg)`, and then use the `Run Merge` button to update the result set.
+ - Use the `Delete` button to remove a column.
+ - Select the `[Cpu data].CpuP95` row.
+ - Use the `Delete` button in the query control toolbar.
+ - Use the `Run Merge` button to see the result set without the CpuP95 column.
+1. Change the order of the columns using the `Move up` or `Move down` buttons in the toolbar.
+1. Add new columns based on values of other columns using the `Add new item` button in the toolbar.
+1. Style the table using the options in the `Column settings` to get the visualization you want.
+1. Add more query controls working against the `Cpu data` result set if needed.
+
+Here is an example that shows Average and P95 CPU utilization side by side.
++
+## Using Azure Resource Manager (ARM) to retrieve alerts in a subscription
+
+This sample shows you how to use the Azure Resource Manager query control to list all existing alerts in a subscription. This guide will also use JSON Path transformations to format the results. See the [list of supported ARM calls](/rest/api/azure/).
+### Set up the parameters
+
+1. [Create a new empty workbook](workbooks-create-workbook.md).
+1. Select **Add parameter**, and use the following settings:
+ - Parameter name: `Subscription`
+ - Parameter type: `Subscription picker`
+ - Required: `Checked`
+ - Get data from: `Default Subscriptions`
+1. Select **Save**.
+1. Select **Add query** to create a query control, and use these settings. For this example, we're using the [Alerts Get All REST call](/rest/api/monitor/alertsmanagement/alerts/getall) to get a list of existing alerts for a subscription. See the [Azure REST API Reference](/rest/api/azure/) for supported api-versions.
+ - Data source: `Azure Resource Manager (Preview)`
+ - Http Method: `GET`
+ - Path: `/subscriptions/{Subscription:id}/providers/Microsoft.AlertsManagement/alerts`
+ - Add the api-version parameter in the `Parameters` tab
+ - Parameter: `api-version`
+ - Value: `2018-05-05`
+1. Select a subscription from the created subscription parameter and select **Run Query** to see the results.
+
+ This is the raw JSON returned from Azure Resource Manager (ARM).
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-no-formatting.png" alt-text="Screenshot showing an alert data JSON response in workbooks using an ARM provider." lightbox="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-no-formatting.png":::
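+
+To sanity-check the same call outside the workbook, here's a hedged Azure CLI sketch (replace the subscription ID placeholder with your own; the endpoint and api-version come from the steps above):
+
+```azurecli
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.AlertsManagement/alerts?api-version=2018-05-05"
+```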
+
+### Format the response
+
+You may be satisfied with the information here. However, let's extract some interesting properties and format the response in an easy-to-read way.
+
+1. Go to the **Result settings** tab.
+1. Switch the Result Format from `Content` to `JSON Path`. [JSON path](workbooks-jsonpath.md) is a Workbook transformer.
+1. In the JSON Path settings, set the JSON Path Table to `$.value.[*].properties.essentials`. This extracts all "value.*.properties.essentials" fields from the returned JSON (see the abbreviated payload sketch after these steps).
+1. Select **Run Query** to see the grid.
+
+ :::image type="content" source="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-grid.png" alt-text="Screenshot showing alert data in a workbook in grid format using an ARM provider." lightbox="media/workbooks-commonly-used-components/workbooks-arm-alerts-query-grid.png":::
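+
+For intuition, here's an abbreviated sketch of the payload shape that path matches; each extracted `essentials` object becomes one grid row (the property values are illustrative):
+
+```json
+{
+  "value": [
+    { "properties": { "essentials": { "severity": "Sev3", "alertState": "New", "startDateTime": "2022-07-01T00:00:00Z" } } }
+  ]
+}
+```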
+
+### Filter the results
+
+JSON Path also allows you to pick and choose information from the generated table to show as columns.
+
+For example, if you would like to filter the results to these columns: `TargetResource`, `Severity`, `AlertState`, `Description`, `AlertRule`, `StartTime`, `ResolvedTime`, you could add the following rows in the columns table in JSON Path:
+
+| Column ID | Column JSON Path |
+| :- | :-: |
+| TargetResource | $.targetResource |
+| Severity | $.severity |
+| AlertState | $.alertState |
+| AlertRule | $.alertRule |
+| Description | $.description |
+| StartTime | $.startDateTime |
+| ResolvedTime | $.monitorConditionResolvedDateTime |
++
+## Next steps
+- [Getting started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Workbooks Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-configurations.md
Last updated 07/05/2022
-# Workbook Configuration Options
-There are several ways you can configure Workbooks to suit your needs.
+# Workbook configuration options
+There are several ways you can configure Workbooks to suit your needs using the settings in the **Settings** tab. When query or metrics steps display time-based data, more settings are available in the **Advanced settings** tab.
## Workbook settings The workbooks settings has these tabs to help you configure your workbook.
The workbooks settings has these tabs to help you configure your workbook.
|Resources|This tab contains the resources that appear as default selections in this workbook.<br>The resource marked as the **Owner** resource is where the workbook will be saved, and the location of the workbooks and templates you'll see when browsing. The owner resource can't be removed.<br> You can add a default resource by selecting **Add Resources**. You can remove resources by selecting a resource or several resources, and selecting **Remove Selected Resources**. When you're done adding and removing resources, select **Apply Changes**.| |Versions| This tab contains a list of all the available versions of this workbook. Select a version and use the toolbar to compare, view, or restore versions. Previous workbook versions are available for 90 days.<br><ul><li>**Compare**: Compare the JSON of the previous workbook to the most recently saved version.</li><li>**View**: Opens the selected version of the workbook in a context pane.</li><li>**Restore**: Saves a new copy of the workbook with the contents of the selected version and overwrites any existing current content. You'll be prompted to confirm this action.</li></ul><br>| |Style |In this tab, you can set a padding and spacing style for the whole workbook. The possible options are `Wide`, `Standard`, `Narrow`, `None`. `Standard` is the default style setting.|
-|Pin |While in pin mode, you can select **Pin Workbook** to pin an component from this workbook to a dashboard. Select **Link to Workbook**, to pin a static link to this workbook on your dashboard. You can choose a specific component in your workbook to pin.|
+|Pin |While in pin mode, you can select **Pin Workbook** to pin a component from this workbook to a dashboard. Select **Link to Workbook**, to pin a static link to this workbook on your dashboard. You can choose a specific component in your workbook to pin.|
|Trusted hosts |In this tab, you can enable a trusted source or mark this workbook as trusted in this browser. See [trusted hosts](#trusted-hosts) for detailed information. | > [!NOTE]
Enable trusted source or mark this workbook as trusted in this browser.
| Mark Workbook as trusted | If enabled, this Workbook will be able to call any endpoint, whether the host is marked as trusted or not. A workbook is trusted if it's a new workbook, an existing workbook is saved, or it's explicitly marked as a trusted workbook | | URL grid | A grid to explicitly add trusted hosts. |
+## Time brushing
+
+Time range brushing allows a user to "brush" or "scrub" a range on a chart, and have that range be output as a parameter value.
++
+You can also choose to only export a parameter when a range is explicitly brushed.
+ - If this setting is unchecked (default), the parameter always has a value. When the parameter is not brushed, the value is the full time range displayed in the chart.
+ - If this setting is checked, the parameter has no value before the user brushes the parameter, and is only set after a user brushes the parameter.
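+
+For example, here's a minimal sketch of consuming the exported range, assuming a brush parameter named `BrushRange` (the name is illustrative); a downstream query step can bind to it like any other time range parameter:
+
+```kusto
+// {BrushRange} is the hypothetical parameter exported by the brush.
+requests
+| where timestamp {BrushRange}
+| summarize Requests = count() by bin(timestamp, 5m)
+```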
+
+### Brushing in a metrics chart
+
+When time brushing is enabled on a metrics chart, the user can "brush" a time range by dragging the mouse on the time chart:
++
+Once the brush has stopped, the metrics chart zooms in to that range, and exports that range as a time range parameter.
+An icon in the toolbar in the upper-right corner becomes active; select it to reset the time range back to its original, un-zoomed state.
++
+### Brushing in a query chart
+
+When time brushing is enabled on a query chart, indicators appear that the user can drag, or the user can "brush" a range on the time chart:
++
+Once the brush has stopped, the query chart shows that range as a time range parameter, but won't zoom in. This behavior is different from the behavior of metrics charts. Because of the complexity of user-written queries, it may not be possible for workbooks to correctly update the range used by the query in the query content directly. If the query is using a time range parameter, it's possible to get this behavior by using a [global parameter](workbooks-parameters.md#global-parameters) instead.
+
+An icon in the toolbar in the upper-right corner becomes active; select it to reset the time range back to its original, un-zoomed state.
+ ## Interactivity There are several ways that you can create interactive reports and experiences in workbooks.
There are several ways that you can create interactive reports and experiences i
- **Field to export**: `Request` - **Parameter name**: `SelectedRequest` - **Default value**: `All requests`
-1. [Optional.]If you want to export the entire contents of the selected row instead of just a particular column, leave the `Field to export` property unset. The entire row contents is exported as json to the parameter. On the referencing KQL control, use the `todynamic` function to parse the json and access the individual columns.
-1. Select **Save**.
-
- :::image type="content" source="media/workbooks-configurations/workbooks-export-parameters-add.png" alt-text="Screenshot showing the advanced workbooks editor with settings for exporting fields as parameters.":::
+ :::image type="content" source="media/workbooks-configurations/workbooks-export-parameters-add.png" alt-text="Screenshot showing the advanced workbooks editor with settings for exporting fields as parameters.":::
+
+1. (Optional.) If you want to export the entire contents of the selected row instead of just a particular column, leave the `Field to export` property unset. The entire row contents are exported as JSON to the parameter. On the referencing KQL control, use the `todynamic` function to parse the JSON and access the individual columns (see the sketch after these steps).
+1. Select **Save**.
1. Select **Done Editing**.
1. Add another query control as in the steps above.
1. Use the Query editor to enter the KQL for your analysis.
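
As a sketch of that last analysis step, assuming the exported parameter is `SelectedRequest` and holds the selected row as JSON (the `name` column referenced below is illustrative):

```kusto
// Parse the exported row, then use one of its columns in a filter.
requests
| extend selectedRow = todynamic('{SelectedRequest}')
| where name == tostring(selectedRow.name)
| take 5
```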
The following image shows a more elaborate interactive report in read mode based
:::image type="content" source="media/workbooks-configurations/workbooks-grid-link-details.png" alt-text="Screenshot showing the detail pane of the sampled request in workbooks.":::

### Link Renderer Actions
-| Link action | Action on click |
-|:- |:-|
-|Generic Details| Shows the row values in a property grid context tab |
-|Cell Details| Shows the cell value in a property grid context tab. Useful when the cell contains a dynamic type with information (for example, json with request properties like location, role instance, etc.). |
-|Custom Event Details| Opens the Application Insights search details with the custom event ID (`itemId`) in the cell |
-|Details| Similar to Custom Event Details, except for dependencies, exceptions, page views, requests, and traces. |
-|Custom Event User Flows| Opens the Application Insights User Flows experience pivoted on the custom event name in the cell |
-|User Flows| Similar to Custom Event User Flows except for exceptions, page views and requests |
-|User Timeline| Opens the user timeline with the user ID (user_Id) in the cell |
-|Session Timeline| Opens the Application Insights search experience for the value in the cell (for example, search for text 'abc' where abc is the value in the cell) |
-|Resource overview| Open the resource's overview in the portal based on the resource ID value in the cell |
+Learn how [Link actions](workbooks-link-actions.md) work to enhance workbook interactivity.
### Set conditional visibility
azure-monitor Workbooks Create Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-create-workbook.md
For example, you can query Azure Resource Health to help you view any service pr
1. When you're sure you have the query you want in your workbook, select **Done editing**.
+### Best practices for using resource-centric log queries
+
+This video shows you how to use resource-level log queries in Azure Workbooks. It also has tips and tricks on how to enable advanced scenarios and improve performance.
+
+> [!VIDEO https://www.youtube.com/embed/8CvjM0VvOA80]
+
+#### Using a dynamic resource type parameter
+Dynamic resource type parameters use dynamic scopes for more efficient querying. The snippet below uses this heuristic:
+1. _Individual resources_: if the count of selected resources is less than or equal to 5
+2. _Resource groups_: if the number of resources is over 5 but the number of resource groups the resources belong to is less than or equal to 3
+3. _Subscriptions_: otherwise
+
+ ```
+ Resources
+ | take 1
+ | project x = dynamic(["microsoft.compute/virtualmachines", "microsoft.compute/virtualmachinescalesets", "microsoft.resources/resourcegroups", "microsoft.resources/subscriptions"])
+ | mvexpand x to typeof(string)
+ | extend jkey = 1
+ | join kind = inner (Resources
+ | where id in~ ({VirtualMachines})
+ | summarize Subs = dcount(subscriptionId), resourceGroups = dcount(resourceGroup), resourceCount = count()
+ | extend jkey = 1) on jkey
+ | project x, label = 'x',
+ selected = case(
+ x in ('microsoft.compute/virtualmachinescalesets', 'microsoft.compute/virtualmachines') and resourceCount <= 5, true,
+ x == 'microsoft.resources/resourcegroups' and resourceGroups <= 3 and resourceCount > 5, true,
+ x == 'microsoft.resources/subscriptions' and resourceGroups > 3 and resourceCount > 5, true,
+ false)
+ ```
+#### Using a static resource scope for querying multiple resource types
+
+```json
+[
+ { "value":"microsoft.compute/virtualmachines", "label":"Virtual machine", "selected":true },
+ { "value":"microsoft.compute/virtualmachinescalesets", "label":"Virtual machine scale set", "selected":true }
+]
+```
+#### Using resource parameters grouped by resource type
+```
+Resources
+| where type =~ 'microsoft.compute/virtualmachines' or type =~ 'microsoft.compute/virtualmachinescalesets'
+| where resourceGroup in~({ResourceGroups})
+| project value = id, label = id, selected = false,
+ group = iff(type =~ 'microsoft.compute/virtualmachines', 'Virtual machines', 'Virtual machine scale sets')
+```
+ ## Adding parameters You can collect input from consumers and reference it in other parts of the workbook using parameters. Often, you would use parameters to scope the result set or to set the right visual. Parameters help you build interactive reports and experiences.
To turn a larger template into multiple sub-templates:
1. If the individual components moved in step 3 had conditional visibilities, that will become the visibility of the outer group (like used in tabs). Remove them from the components inside the group and add that visibility setting to the group itself. Save here to avoid losing changes and/or export and save a copy of the json content.
1. If you want that group to be loaded from a template, you can use the **Edit** toolbar button in the group. This will open just the content of that group as a workbook in a new window. You can then save it as appropriate and close this workbook view (don't close the browser, just that view to go back to the previous workbook you were editing).
1. You can then change the group component to load from template and set the template ID field to the workbook/template you created in step 5. To work with workbooks IDs, the source needs to be the full Azure Resource ID of a shared workbook. Press *Load* and the content of that group will now be loaded from that sub-template instead of saved inside this outer workbook.
+## Next steps
+- [Common Workbook use cases](workbooks-commonly-used-components.md)
azure-monitor Workbooks Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-time.md
Last updated 07/05/2022
Time parameters allow users to set the time context of analysis and are used by almost all reports. They're relatively simple to set up and use, allowing authors to specify the time ranges to show in the drop-down, including the option for custom time ranges.
-## Creating a time parameter
+## Create a time parameter
+ 1. Start with an empty workbook in edit mode.
-2. Choose _Add parameters_ from the links within the workbook.
-3. Click on the blue _Add Parameter_ button.
-4. In the new parameter pane that pops up enter:
- 1. Parameter name: `TimeRange`
- 2. Parameter type: `Time range picker`
- 3. Required: `checked`
- 4. Available time ranges: Last hour, Last 12 hours, Last 24 hours, Last 48 hours, Last 3 days, Last 7 days and Allow custom time range selection
-5. Choose 'Save' from the toolbar to create the parameter.
+1. Choose **Add parameters** from the links within the workbook.
+1. Select **Add Parameter**.
+1. In the new parameter pane that pops up enter:
+ - Parameter name: `TimeRange`
+ - Parameter type: `Time range picker`
+ - Required: `checked`
+ - Available time ranges: Last hour, Last 12 hours, Last 24 hours, Last 48 hours, Last 3 days, Last 7 days and Allow custom time range selection
+1. Select **Save** to create the parameter.
:::image type="content" source="media/workbooks-time/time-settings.png" alt-text="Screenshot showing the creation of a workbooks time range parameter.":::
-This is how the workbook will look like in read-mode.
+This is what the workbook looks like in read-mode.
:::image type="content" source="media/workbooks-time/parameters-time.png" alt-text="Screenshot showing a time range parameter in read mode."::: ## Referencing a time parameter
-### Via Bindings
+### Referencing a time parameter with bindings
+ 1. Add a query control to the workbook and select an Application Insights resource.
-2. Most workbook controls support a _Time Range_ scope picker. Open the _Time Range_ drop-down and select the `{TimeRange}` in the time range parameters group at the bottom.
-3. This binds the time range parameter to the time range of the chart. The time scope of the sample query is now Last 24 hours.
-4. Run query to see the results
+1. Most workbook controls support a _Time Range_ scope picker. Open the _Time Range_ drop-down and select the `{TimeRange}` in the time range parameters group at the bottom.
+1. This binds the time range parameter to the time range of the chart. The time scope of the sample query is now Last 24 hours.
+1. Run the query to see the results.
:::image type="content" source="media/workbooks-time/time-binding.png" alt-text="Screenshot showing a workbooks time range parameter referenced via bindings.":::
-### In KQL
+### Referencing a time parameter with KQL
1. Add a query control to the workbook and select an Application Insights resource.
2. In the KQL, enter a time scope filter using the parameter: `| where timestamp {TimeRange}`.
3. This expands at query evaluation time to `| where timestamp > ago(1d)`, which is the time range value of the parameter.
This is how the workbook will look like in read-mode.
:::image type="content" source="media/workbooks-time/time-in-code.png" alt-text="Screenshot showing a time range referenced in KQL.":::
-### In Text
+### Referencing a time parameter in text
1. Add a text control to the workbook.
2. In the markdown, enter `The chosen time range is {TimeRange:label}`.
3. Choose _Done Editing_.
4. The text control shows the text: _The chosen time range is Last 24 hours_.

## Time parameter options

| Parameter | Explanation | Example |
| - |:-|:-|
| `{TimeRange}` | Time range label | Last 24 hours |
This is how the workbook will look like in read-mode.
### Using parameter options in a query

```kusto
requests
| make-series Requests = count() default = 0 on timestamp from {TimeRange:start} to {TimeRange:end} step {TimeRange:grain}
```
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-resource-manager Bicep Functions Any https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-any.md
publicIPAddress: any((pipId == '') ? null : {
For more complex uses of the `any()` function, see the following examples: * [Child resources that require specific names](https://github.com/Azure/bicep/blob/62eb8109ae51d4ee4a509d8697ef9c0848f36fe4/docs/examples/201/api-management-create-all-resources/main.bicep#L247)
-* [A resource property not defined in the resource's type, even though it exists](https://github.com/Azure/bicep/blob/main/docs/examples/201/log-analytics-with-solutions-and-diagnostics/main.bicep#L26)
+* [A resource property not defined in the resource's type, even though it exists](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.insights/log-analytics-with-solutions-and-diagnostics/main.bicep#L26)
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 07/01/2022 Last updated : 07/08/2022 # File functions for Bicep
In VS Code, the properties of the loaded object are available intellisense. For
This function requires **Bicep version 0.7.4 or later**.
+The maximum allowed size of the file is **1,048,576 characters**, including line endings.
+### Return value
+
+The contents of the file as an Any object.
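+
+A minimal usage sketch with `loadJsonContent`, assuming a small _settings.json_ next to the Bicep file (the file name and property are illustrative):
+
+```bicep
+// Load and parse the JSON file at compile time (Bicep 0.7.4 or later).
+var settings = loadJsonContent('settings.json')
+
+// Properties of the loaded object can be dereferenced directly.
+output skuName string = settings.sku
+```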
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
description: Use Azure Resource Manager and Azure CLI to deploy resources to Azu
Previously updated : 10/01/2021 Last updated : 07/08/2022
az deployment group create \
If you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, pass the array in the format: `exampleArray="['value1','value2']"`.
-You can also get the contents of file and provide that content as an inline parameter.
+You can also get the contents of a file and provide that content as an inline parameter. Preface the file name with **@**.
```azurecli-interactive az deployment group create \
az deployment group create --name addstorage --resource-group myResourceGroup \
Use double quotes around the JSON that you want to pass into the object.
+If you're using Azure CLI with Windows Command Prompt (CMD) or PowerShell, pass the object in the following format:
+
+```azurecli
+$tags="{'Owner':'Contoso','Cost Center':'2345-324'}"
+az deployment group create --name addstorage --resource-group myResourceGroup \
+--template-file $bicepFile \
+--parameters resourceName=abcdef4556 resourceTags=$tags
+```
+ You can use a variable to contain the parameter values. In Bash, set the variable to all of the parameter values and add it to the deployment command. ```azurecli-interactive
Rather than passing parameters as inline values in your script, you may find it
For more information about the parameter file, see [Create Resource Manager parameter file](./parameter-files.md).
-To pass a local parameter file, use `@` to specify a local file named _storage.parameters.json_.
+To pass a local parameter file, specify the path and file name. The following example shows a parameter file named _storage.parameters.json_. The file is in the same directory where the command is run.
```azurecli-interactive az deployment group create \ --name ExampleDeployment \ --resource-group ExampleGroup \ --template-file storage.bicep \
- --parameters @storage.parameters.json
+ --parameters storage.parameters.json
``` ## Preview changes
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
description: Shows how to create an Azure managed application that is intended f
Previously updated : 06/23/2022 Last updated : 07/08/2022 # Quickstart: Create and publish a managed application definition
-This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You can create and publish a managed application that is intended for members of your organization.
+This quickstart provides an introduction to working with [Azure Managed Applications](overview.md). You can create and publish a managed application that's intended for members of your organization.
To publish a managed application to your service catalog, you must: -- Create a template that defines the resources to deploy with the managed application.
+- Create an Azure Resource Manager template (ARM template) that defines the resources to deploy with the managed application.
- Define the user interface elements for the portal when deploying the managed application. - Create a _.zip_ package that contains the required template files. - Decide which user, group, or application needs access to the resource group in the user's subscription. - Create the managed application definition that points to the _.zip_ package and requests access for the identity.
+> [!NOTE]
+> Bicep files can't be used in a managed application. You must convert a Bicep file to ARM template JSON with the Bicep [build](../bicep/bicep-cli.md#build) command.
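+
+A minimal sketch of that conversion using the Azure CLI wrapper for the Bicep CLI, assuming your source file is named _mainTemplate.bicep_ (the file name is illustrative):
+
+```azurecli
+az bicep build --file mainTemplate.bicep
+```
+
+This emits _mainTemplate.json_ in the same directory, ready to include in the _.zip_ package described in this article.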
+ ## Create the ARM template Every managed application definition includes a file named _mainTemplate.json_. In it, you define the Azure resources to deploy. The template is no different than a regular ARM template. Create a file named _mainTemplate.json_. The name is case-sensitive.
-Add the following JSON to your file. It defines the parameters for creating a storage account, and specifies the properties for the storage account.
+Add the following JSON and save the file. It defines the parameters for creating a storage account, and specifies the properties for the storage account.
```json {
Add the following JSON to your file. It defines the parameters for creating a st
} ```
-Save the _mainTemplate.json_ file.
- ## Define your create experience As a publisher, you define the portal experience for creating the managed application. The _createUiDefinition.json_ file generates the portal interface. You define how users provide input for each parameter using [control elements](create-uidefinition-elements.md) including drop-downs, text boxes, and password boxes.
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-resource-manager Preview Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/preview-features.md
Title: Set up preview features in Azure subscription description: Describes how to list, register, or unregister preview features in your Azure subscription for a resource provider. Previously updated : 08/18/2021 Last updated : 07/08/2022 # Customer intent: As an Azure user, I want to use preview features in my subscription so that I can expose a resource provider's preview functionality.
Azure Feature Exposure Control (AFEC) is available through the [Microsoft.Featur
`Microsoft.Features/providers/{resourceProviderNamespace}/features/{featureName}`
+## Required access
+
+To list, register, or unregister preview features in your Azure subscription, you need access to the `Microsoft.Features/*` actions. This permission is granted through the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) and [Owner](../../role-based-access-control/built-in-roles.md#owner) built-in roles. You can also specify the required access through a [custom role](../../role-based-access-control/custom-roles.md).
+ ## List preview features You can list all the preview features and their registration states for an Azure subscription.
azure-resource-manager Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Resource Manager description: Sample Azure Resource Graph queries for Azure Resource Manager showing use of resource types and tables to access Azure Resource Manager related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
azure-signalr Signalr Concept Messages And Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-messages-and-connections.md
Azure SignalR Service supports the same formats as ASP.NET Core SignalR: [JSON](
## Message size
-Azure SignalR Service has no size limit for messages.
-
-Large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer efforts are needed.
+The following limits apply for Azure SignalR Service messages:
+
+* Client messages:
+ * For long polling or server-sent events, the client can't send messages larger than 1 MB.
+ * There's no size limit for WebSocket connections to the service.
+ * The app server can set a limit for client message size. The default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security#buffer-management).
+ * In serverless mode, the message size is limited by the upstream implementation; keeping messages under 1 MB is recommended.
+* Server messages:
+ * There's no limit to server message size, but keeping messages under 16 MB is recommended.
+ * The app server can set a limit for client message size. The default is 32 KB. For more information, see [Security considerations in ASP.NET Core SignalR](/aspnet/core/signalr/security#buffer-management).
+ * Serverless:
+ * REST API: 1 MB for the message body, 16 KB for headers.
+ * There's no size limit for WebSockets or [management SDK persistent mode](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md), but keeping messages under 16 MB is recommended.
+
+For WebSocket clients, large messages are split into smaller messages that are no more than 2 KB each and transmitted separately. SDKs handle message splitting and assembling. No developer effort is needed.
Large messages do negatively affect messaging performance. Use smaller messages whenever possible, and test to determine the optimal message size for each use-case scenario.
azure-video-indexer Language Identification Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md
Azure Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language.
-Currently LID supports: English, Spanish, French, German, Italian, Mandarin Chinese, Japanese, Russian, and Portuguese (Brazilian).
+See the list of languages supported by Azure Video Indexer in [supported languages](language-support.md).
Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section below.
When using portal, go to your **Account videos** on the [Azure Video Indexer](ht
## Model output
-Azure Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language cannot be identified with confidence, it assumes the spoken language is English.
+Azure Video Indexer transcribes the video according to the most likely language if the confidence for that language is `> 0.6`. If the language can't be identified with confidence, it assumes the spoken language is English.
Model dominant language is available in the insights JSON as the `sourceLanguage` attribute (under root/videos/insights). A corresponding confidence score is also available under the `sourceLanguageConfidence` attribute.
Model dominant language is available in the insights JSON as the `sourceLanguage
* Automatic language identification (LID) supports the following languages:
- English, Spanish, French, German, Italian, Mandarin Chines, Japanese, Russian, and Portuguese (Brazilian).
+ See the list of languages supported by Azure Video Indexer in [supported languages](language-support.md).
* Even though Azure Video Indexer supports Arabic (Modern Standard and Levantine), Hindi, and Korean, these languages are not supported in LID. * If the audio contains languages other than the supported list above, the result is unexpected.
-* If Azure Video Indexer cannot identify the language with a high enough confidence (`>0.6`), the fallback language is English.
-* There is no current support for file with mixed languages audio. If the audio contains mixed languages, the result is unexpected.
+* If Azure Video Indexer can't identify the language with a high enough confidence (`>0.6`), the fallback language is English.
+* Currently, files with mixed-language audio aren't supported. If the audio contains mixed languages, the result is unexpected.
* Low-quality audio may impact the model results. * The model requires at least one minute of speech in the audio. * The model is designed to recognize a spontaneous conversational speech (not voice commands, singing, etc.).
azure-video-indexer Limited Access Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/limited-access-features.md
The Azure Video Indexer service is made available to customers and partners unde
FAQ about Limited Access can be found [here](https://aka.ms/limitedaccesscogservices).
-If you need help with Azure Video Indexer, find support [here](../cognitive-services/cognitive-services-support-options.md).
+If you need help with Azure Video Indexer, find support [here](/azure/cognitive-services/cognitive-services-support-options).
[Report Abuse](https://msrc.microsoft.com/report/abuse) of Azure Video Indexer.
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
description: Learn about the platform updates to Azure VMware Solution.
Previously updated : 12/22/2021 Last updated : 07/07/2022 # Platform updates for Azure VMware Solution Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## July 7, 2022
+
+All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
+
+Any existing private clouds will be upgraded to those versions. For more information, see [VMware ESXi 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-70u3c-release-notes.html) and [VMware vCenter Server 7.0 Update 3c Release Notes](https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-vcenter-server-70u3c-release-notes.html).
+
+You'll receive a notification through Azure Service Health that includes the timeline of the upgrade. You can reschedule an upgrade as needed. This notification also provides details on the upgraded component, its effect on workloads, private cloud access, and other Azure services.
## June 7, 2022
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
Retention data is defined by the **-RetentionInDays** option in the command.
$diagname = <your-diagnostic-setting-name> $days = '30'
- $cdn = Get-AzCdnEndpoint -ResourceGroupName $rsg -ProfileName $cdnprofile -EndpointName $cdnendpoint
+ $cdn = Get-AzCdnProfile -ResourceGroupName $rsg -ProfileName $cdnprofile
$storage = Get-AzStorageAccount -ResourceGroupName $rsg -Name $storageacct
cognitive-services Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/faq.md
- Previously updated : 05/31/2022+ Last updated : 07/07/2022
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/21/2022 Last updated : 06/30/2022 ms.devlang: csharp, java, javascript, python
cognitive-services Integrate Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/tutorials/integrate-power-bi.md
In this tutorial, you'll learn how to:
- A Microsoft Azure account. [Create a free account](https://azure.microsoft.com/free/cognitive-services/) or [sign in](https://portal.azure.com/). - A Language resource. If you don't have one, you can [create one](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics). - The [Language resource key](../../../cognitive-services-apis-create-account.md#get-the-keys-for-your-resource) that was generated for you during sign-up.-- Customer comments. You can use [our example data](https://aka.ms/cogsvc/ta) or your own data. This tutorial assumes you're using our example data.
+- Customer comments. You can use our example data or your own data. This tutorial assumes you're using our example data.
## Load customer data
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-studio.md
Title: What is Language Studio
+ Title: "Quickstart: Get started with Language Studio"
description: Use this article to learn about Language Studio, and testing features of Azure Cognitive Service for Language
Previously updated : 11/02/2021 Last updated : 07/07/2022
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Previously updated : 02/01/2022 Last updated : 06/28/2022
cognitive-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Services description: Lists Azure Policy built-in policy definitions for Azure Cognitive Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
communication-services Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/access-tokens.md
Last updated 11/17/2021
-zone_pivot_groups: acs-js-csharp-java-python
+zone_pivot_groups: acs-azcli-js-csharp-java-python
Access tokens let Azure Communication Services SDKs [authenticate](../concepts/a
In this quickstart, you'll learn how to use the Azure Communication Services SDKs to create identities and manage your access tokens. For production use cases, we recommend that you generate access tokens on a [server-side service](../concepts/client-and-server-architecture.md). + ::: zone pivot="programming-language-csharp" [!INCLUDE [.NET](./includes/access-tokens/access-token-net.md)] ::: zone-end
In this quickstart, you'll learn how to use the Azure Communication Services SDK
[!INCLUDE [Java](./includes/access-tokens/access-token-java.md)] ::: zone-end
-The app's output describes each completed action:
-
-<!cSpell:disable >
-```console
-Azure Communication Services - Access Tokens Quickstart
-
-Created an identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
-
-Issued an access token with 'voip' scope that expires at 30/03/21 08:09 09 AM:
-<token signature here>
-
-Created an identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-1ce9-31b4-54b7-a43a0d006a52
-
-Issued an access token with 'voip' scope that expires at 30/03/21 08:09 09 AM:
-<token signature here>
-
-Successfully revoked all access tokens for identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
-
-Deleted the identity with ID: 8:acs:4ccc92c8-9815-4422-bddc-ceea181dc774_00000006-19e0-2727-80f5-8b3a0d003502
-```
-<!cSpell:enable >
- ## Use identity for monitoring and metrics The user ID is intended to act as a primary key for logs and metrics that are collected through Azure Monitor. To view all of a user's calls, for example, you can set up your authentication in a way that maps a specific Azure Communication Services identity (or identities) to a single user.
communication-services Create Communication Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/create-communication-resource.md
After navigating to your Communication Services resource, select **Keys** from t
:::image type="content" source="./media/key.png" alt-text="Screenshot of Communication Services Key page.":::
+### Access your connection strings and service endpoints using Azure CLI
You can also access key information using Azure CLI, like your resource group or the keys for a specific resource.
-Install [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to login. You will need to provide your credentials to connect with your Azure account.
+Install [Azure CLI](/cli/azure/install-azure-cli-windows?tabs=azure-cli) and use the following command to sign in. You'll need to provide your credentials to connect with your Azure account.
```azurepowershell-interactive az login ```
Now you can access important information about your resources.
```azurepowershell-interactive az communication list --resource-group "<resourceGroup>"
-az communication list-key --name "<communicationName>" --resource-group "<resourceGroup>"
+az communication list-key --name "<acsResourceName>" --resource-group "<resourceGroup>"
```
-If you would like to select a specific subscription you can also specify the ```--subscription``` flag and provide the subscription ID.
+If you would like to select a specific subscription, you can also specify the ```--subscription``` flag and provide the subscription ID.
```azurepowershell-interactive
-az communication list --resource-group "resourceGroup>" --subscription "<subscriptionID>"
+az communication list --resource-group "<resourceGroup>" --subscription "<subscriptionId>"
-az communication list-key --name "<communicationName>" --resource-group "resourceGroup>" --subscription "<subscriptionID>"
+az communication list-key --name "<acsResourceName>" --resource-group "<resourceGroup>" --subscription "<subscriptionId>"
``` ## Store your connection string
To configure an environment variable, open a console window and select your oper
Open a console window and enter the following command: ```console
-setx COMMUNICATION_SERVICES_CONNECTION_STRING "<yourconnectionstring>"
+setx COMMUNICATION_SERVICES_CONNECTION_STRING "<yourConnectionString>"
```
-After you add the environment variable, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you are using Visual Studio as your editor, restart Visual Studio before running the example.
+After you add the environment variable, you may need to restart any running programs that will need to read the environment variable, including the console window. For example, if you're using Visual Studio as your editor, restart Visual Studio before running the example.
#### [macOS](#tab/unix)
-Edit your **.zshrc**, and add the environment variable:
+Edit your **`.zshrc`**, and add the environment variable:
```bash
-export COMMUNICATION_SERVICES_CONNECTION_STRING="<yourconnectionstring>"
+export COMMUNICATION_SERVICES_CONNECTION_STRING="<yourConnectionString>"
``` After you add the environment variable, run `source ~/.zshrc` from your console window to make the changes effective. If you created the environment variable with your IDE open, you may need to close and reopen the editor, IDE, or shell in order to access the variable. #### [Linux](#tab/linux)
-Edit your **.bash_profile**, and add the environment variable:
+Edit your **`.bash_profile`**, and add the environment variable:
```bash
-export COMMUNICATION_SERVICES_CONNECTION_STRING="<yourconnectionstring>"
+export COMMUNICATION_SERVICES_CONNECTION_STRING="<yourConnectionString>"
``` After you add the environment variable, run `source ~/.bash_profile` from your console window to make the changes effective. If you created the environment variable with your IDE open, you may need to close and reopen the editor, IDE, or shell in order to access the variable.
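To confirm the variable is visible in a new shell (a quick optional check):

```bash
# Print the value to verify the environment variable is available.
echo "$COMMUNICATION_SERVICES_CONNECTION_STRING"
```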
After you add the environment variable, run `source ~/.bash_profile` from your c
## Clean up resources
-If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. [Deleting the resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#delete-resource-groups) also deletes any other resources associated with it.
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. You can delete your communication resource by running the command below.
+
+```azurecli-interactive
+az communication delete --name "<acsResourceName>" --resource-group "<resourceGroup>"
+```
+
+[Deleting the resource group](../../azure-resource-manager/management/manage-resource-groups-portal.md#delete-resource-groups) also deletes any other resources associated with it.
If you have any phone numbers assigned to your resource upon resource deletion, the phone numbers will be released from your resource automatically at the same time.
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
- devx-track-js - mode-other - kr2b-contr-experiment
-zone_pivot_groups: acs-js-csharp-java-python
+zone_pivot_groups: acs-azcli-js-csharp-java-python
# Quickstart: Send an SMS message
zone_pivot_groups: acs-js-csharp-java-python
<br/> >[!VIDEO https://www.youtube.com/embed/YEyxSZqzF4o] + ::: zone pivot="programming-language-csharp" [!INCLUDE [Send SMS with .NET SDK](./includes/send-sms-net.md)] ::: zone-end
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
-zone_pivot_groups: acs-azp-java-net-python-csharp-js
+zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
# Quickstart: Get and manage phone numbers
zone_pivot_groups: acs-azp-java-net-python-csharp-js
[!INCLUDE [Bulk Acquisition Instructions](../../includes/phone-number-special-order.md)] + ::: zone pivot="platform-azp" [!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end
communication-services Chat Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/chat-hero-sample.md
Below you'll find more information on prerequisites and steps to set up the samp
1. Open an instance of PowerShell, Windows Terminal, Command Prompt or equivalent and navigate to the directory that you'd like to clone the sample to. 2. `git clone https://github.com/Azure-Samples/communication-services-web-chat-hero.git`
-3. Get the `Connection String` and `Endpoint URL` from the Azure portal. For more information on connection strings, see [Create an Azure Communication Services resources](../quickstarts/create-communication-resource.md)
+3. Get the `Connection String` and `Endpoint URL` from the Azure portal or by using the Azure CLI.
+
+ ```azurecli-interactive
+ az communication list-key --name "<acsResourceName>" --resource-group "<resourceGroup>"
+ ```
+
+ For more information on connection strings, see [Create an Azure Communication Services resource](../quickstarts/create-communication-resource.md).
4. Once you get the `Connection String` and `Endpoint URL`, Add both values to the **Server/appsettings.json** file found under the Chat Hero Sample folder. Input your connection string in the variable: `ResourceConnectionString` and endpoint URL in the variable: `EndpointUrl`. ## Local run
container-apps Observability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/observability.md
Previously updated : 05/02/2022 Last updated : 07/05/2022
Azure Container Apps provides several built-in observability features that give
These features include: -- Log streaming-- Container console-- Azure Monitor metrics-- Azure Monitor Log Analytics-- Azure Monitor alerts
+- [Log streaming](#log-streaming)
+- [Container console](#container-console)
+- [Azure Monitor metrics](#azure-monitor-metrics)
+- [Azure Monitor Log Analytics](#azure-monitor-log-analytics)
+- [Azure Monitor alerts](#azure-monitor-alerts)
>[!NOTE] > While not a built-in feature, [Azure Monitor's Application Insights](../azure-monitor/app/app-insights-overview.md) is a powerful tool to monitor your web and background applications.
You can add more scopes to view metrics across multiple container apps.
## Azure Monitor Log Analytics
+Azure Container Apps is integrated with Azure Monitor Log Analytics to monitor and analyze your container app's logs. Each Container Apps environment includes a Log Analytics workspace that provides a common place to store the system and application log data from all container apps running in the environment.
+
+Log entries are accessible by querying Log Analytics tables through the Azure portal or a command shell using the [Azure CLI](/cli/azure/monitor/log-analytics).
+
+
+There are two types of logs for Container Apps.
+
+1. Console logs, which are emitted by your app.
+1. System logs, which are emitted by the Container Apps service.
++
+### Container Apps System Logs
-Application logs consist of messages written to each container's `stdout` and `stderr`. Additionally, if your container app is using Dapr, log entries from the Dapr sidecar are also collected.
+The Container Apps service provides system log messages at the container app level. System logs emit the following messages:
-Azure Monitor stores Container Apps log data in the `ContainerAppConsoleLogs_CL` table. Create queries using this table to view your container app log data.
+| Source | Type | Message |
+||||
+| Dapr | info | Successfully created dapr component \<component-name\> with scope \<dapr-component-scope\> |
+| Dapr | info | Successfully updated dapr component \<component-name\> with scope \<component-type\> |
+| Dapr | error | Error creating dapr component \<component-name\> |
+| Volume Mounts | info | Successfully mounted volume \<volume-name\> for revision \<revision-scope\> |
+| Volume Mounts | error | Error mounting volume \<volume-name\> |
+| Domain Binding | info | Successfully bound Domain \<domain\> to the container app \<container app name\> |
+| Authentication | info | Auth enabled on app. Creating authentication config |
+| Authentication | info | Auth config created successfully |
+| Traffic weight | info | Setting a traffic weight of \<percentage\>% for revision \<revision-name\> |
+| Revision Provisioning | info | Creating a new revision: \<revision-name\> |
+| Revision Provisioning | info | Successfully provisioned revision \<name\> |
+| Revision Provisioning | info | Deactivating Old revisions since 'ActiveRevisionsMode=Single' |
+| Revision Provisioning | error | Error provisioning revision \<revision-name\>. ErrorCode: \<[ErrImagePull]\|[Timeout]\|[ContainerCrashing]\> |
-You can create and run queries using Log Analytics in the Azure portal or run queries using Azure CLI commands.
+The system log data is accessible by querying the `ContainerAppSystemLogs_CL` table. The most commonly used Container Apps specific columns in the table are:
-The most used columns in ContainerAppConsoleLogs_CL include:
+| Column | Description |
+|||
+| `ContainerAppName_s` | Container app name |
+| `EnvironmentName_s` | Container Apps environment name |
+| `Log_s` | Log message |
+| `RevisionName_s` | Revision name |
+
+### Container Apps Console Logs
+
+Console logs originate from the `stderr` and `stdout` messages emitted by the containers in your container app and by Dapr sidecars. You can view console logs by querying the `ContainerAppConsoleLogs_CL` table.
+
+> [!TIP]
+> Instrumenting your code with well-defined log messages can help you to understand how your code is performing and to debug issues. To learn more about best practices, see [Design for operations](/azure/architecture/guide/design-principles/design-for-operations).
+
+The most commonly used Container Apps specific columns in ContainerAppConsoleLogs_CL include:
|Column |Description | |||
-| `ContainerAppName_s` | container app name |
-| `ContainerGroupName_g` | replica name |
-| `ContainerId` | container identifier |
-| `ContainerImage_s` | container image name |
+| `ContainerAppName_s` | Container app name |
+| `ContainerGroupName_g` | Replica name |
+| `ContainerId_s` | Container identifier |
+| `ContainerImage_s` | Container image name |
| `EnvironmentName_s` | Container Apps environment name |
-| `Message` | log message |
-| `RevisionName_s` | revision name |
+| `Log_s` | Log message |
+| `RevisionName_s` | Revision name |
### Use Log Analytics to query logs
-Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. Using Log Analytics, you can write simple or advanced queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
+Log Analytics is a tool in the Azure portal that you can use to view and analyze log data. Using Log Analytics, you can write Kusto queries and then sort, filter, and visualize the results in charts to spot trends and identify issues. You can work interactively with the query results or use them with other features such as alerts, dashboards, and workbooks.
Start Log Analytics from **Logs** in the sidebar menu on your container app page. You can also start Log Analytics from **Monitor>Logs**.
-You can query the logs using the columns listed in the **CustomLogs > ContainerAppConsoleLogs_CL** table in the **Tables** tab.
+You can query the logs using the tables listed under the **CustomLogs** category on the **Tables** tab. The tables in this category are the `ContainerAppSystemLogs_CL` and `ContainerAppConsoleLogs_CL` tables.
-Below is a simple query that displays log entries for the container app named *album-api*.
+Below is a Kusto query that displays console log entries for the container app named *album-api*.
```kusto ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api'
-| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Message
+| project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s
+| take 100
+```
+
+Below is a Kusto query that displays system log entries for the container app named *album-api*.
+
+```kusto
+ContainerAppSystemLogs_CL
+| where ContainerAppName_s == 'album-api'
+| project Time=TimeGenerated, EnvName=EnvironmentName_s, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s
| take 100 ``` For more information regarding Log Analytics and log queries, see the [Log Analytics tutorial](../azure-monitor/logs/log-analytics-tutorial.md).
-### Query logs via the Azure CLI and PowerShell
+### Query logs via the Azure CLI
Container Apps logs can be queried using the [Azure CLI](/cli/azure/monitor/log-analytics).
-Here's an example Azure CLI query to output a table containing five log records with the container app name "album-api". The table columns are specified by the parameters after the project operator. The $WORKSPACE_CUSTOMER_ID variable contains the GUID of the Log Analytics workspace.
+These example Azure CLI queries output a table containing log records for the container app named **album-api**. The table columns are specified by the parameters after the `project` operator. The `$WORKSPACE_CUSTOMER_ID` variable contains the GUID of the Log Analytics workspace.
++
+This example queries the `ContainerAppConsoleLogs_CL` table:
+
+```azurecli
+az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Log_s, LogLevel_s | take 5" --out table
+```
+
+This example queries the `ContainerAppSystemLogs_CL` table:
```azurecli
-az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppConsoleLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Container=ContainerName_s, Message=Message, LogLevel_s | take 5" --out table
+az monitor log-analytics query --workspace $WORKSPACE_CUSTOMER_ID --analytics-query "ContainerAppSystemLogs_CL | where ContainerAppName_s == 'album-api' | project Time=TimeGenerated, AppName=ContainerAppName_s, Revision=RevisionName_s, Message=Log_s, LogLevel_s | take 5" --out table
``` For more information about using Azure CLI to view container app logs, see [Viewing Logs](monitor.md#viewing-logs).
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
container-registry Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Container Registry description: Sample Azure Resource Graph queries for Azure Container Registry showing use of resource types and tables to access Azure Container Registry related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
cosmos-db Graph Visualization Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/graph-visualization-partners.md
The interactive interface of Linkurious Enterprise offers an easy way to investi
* [Product details](https://linkurio.us/product/) * [Documentation](https://doc.linkurio.us/) * [Demo](https://linkurious.com/demo/)
-* [Azure Marketplace](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/linkurious.linkurious001?tab=overview)
+* [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/linkurious.lke_st?tab=Overview)
## Cambridge Intelligence
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
Assign a role to an identity:
$resourceGroupName = "<myResourceGroup>" $accountName = "<myCosmosAccount>" $readOnlyRoleDefinitionId = "<roleDefinitionId>" # as fetched above
+# For Service Principals, make sure to use the Object ID as found in the Enterprise applications section of the Azure Active Directory portal blade.
$principalId = "<aadPrincipalId>" New-AzCosmosDBSqlRoleAssignment -AccountName $accountName ` -ResourceGroupName $resourceGroupName `
Assign a role to an identity:
resourceGroupName='<myResourceGroup>' accountName='<myCosmosAccount>' readOnlyRoleDefinitionId='<roleDefinitionId>' # as fetched above
+# For Service Principals, make sure to use the Object ID as found in the Enterprise applications section of the Azure Active Directory portal blade.
principalId='<aadPrincipalId>' az cosmosdb sql role assignment create --account-name $accountName --resource-group $resourceGroupName --scope "/" --principal-id $principalId --role-definition-id $readOnlyRoleDefinitionId ```
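If the identity is a service principal, here's a sketch for fetching that object ID with the Azure CLI; the application ID is a placeholder, and older CLI versions expose the property as `objectId` instead of `id`:

```azurecli
# Look up the service principal's object ID; "<appId>" is a placeholder.
az ad sp show --id "<appId>" --query id --output tsv
```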
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
For **all** containers, your partition key should:
If you need [multi-item ACID transactions](database-transactions-optimistic-concurrency.md#multi-item-transactions) in Azure Cosmos DB, you will need to use [stored procedures or triggers](how-to-write-stored-procedures-triggers-udfs.md#stored-procedures). All JavaScript-based stored procedures and triggers are scoped to a single logical partition.
+> [!NOTE]
+> If you only have one physical partition, the value of the partition key may not be relevant as all queries will target the same physical partition.
+ ## Partition keys for read-heavy containers For most containers, the above criteria is all you need to consider when picking a partition key. For large read-heavy containers, however, you might want to choose a partition key that appears frequently as a filter in your queries. Queries can be [efficiently routed to only the relevant physical partitions](how-to-query-container.md#in-partition-query) by including the partition key in the filter predicate.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
cosmos-db Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Cosmos DB description: Sample Azure Resource Graph queries for Azure Cosmos DB showing use of resource types and tables to access Azure Cosmos DB related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
Title: Troubleshoot slow requests in Azure Cosmos DB .NET SDK description: Learn how to diagnose and fix slow requests when you use Azure Cosmos DB .NET SDK.-+ Previously updated : 03/09/2022- Last updated : 07/08/2022+
If you need to verify that a database or container exists, don't do so by callin
* You aren't measuring latency while debugging the application (no debuggers attached). * The volume of operations is high; don't use bulk for fewer than 1,000 operations. Your provisioned throughput dictates how many operations per second you can process, and your goal with bulk is to utilize as much of it as possible. * Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is getting heavily throttled, the volume of data is larger than your provisioned throughput, and you need to either scale up the container or reduce the volume of data (for example, create smaller batches of data at a time).
-* You are correctly using the `async/await` pattern to process all concurrent Tasks and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
+* You are correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
## <a name="capture-diagnostics"></a>Capture the diagnostics
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
It's the same experience as the public portal, except with new improvements and
We encourage you to try out the preview features available in Cost Management Labs and share your feedback. It's your chance to influence the future direction of Cost Management. To provide feedback, use the **Report a bug** link in the Try preview menu. It's a direct way to communicate with the Cost Management engineering team.
-## Anomaly detection alerts
<a name="anomalyalerts"></a>
+## Anomaly detection alerts
+ Get notified by email when a cost anomaly is detected on your subscription. Anomaly detection is available for Azure global subscriptions in the cost analysis preview.
Here's an example of a cost anomaly shown in cost analysis:
:::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-anomaly-example.png" alt-text="Screenshot showing an example cost anomaly." lightbox="./media/enable-preview-features-cost-management-labs/cost-anomaly-example.png" ::: - To configure anomaly alerts: 1. Open the cost analysis preview.
For more information about anomaly detection and how to configure alerts, see [I
**Anomaly detection is now available by default in Azure global.**
-## Grouping SQL databases and elastic pools
<a name="aksnestedtable"></a>
+## Grouping SQL databases and elastic pools
+ Get an at-a-glance view of your total SQL costs by grouping SQL databases and elastic pools. They're shown under their parent server in the cost analysis preview. This feature is enabled by default. Understanding what you're being charged for can be complicated. The best place to start for many people is the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview. It shows resources that are incurring cost. But even a straightforward list of resources can be hard to follow when a single deployment includes multiple, related resources. To help summarize your resource costs, we're trying to group related resources together. So, we're changing cost analysis to show child resources.
In addition to SQL servers, you'll also see other services with child resources,
**Grouping SQL databases and elastic pools is available by default in the cost analysis preview.**
-## Average in the cost analysis preview
<a name="cav3average"></a>
+## Average in the cost analysis preview
+ Average in the cost analysis preview shows your average daily or monthly cost at the top of the view. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-preview-average.png" alt-text="Screenshot showing average cost in cost analysis." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-preview-average.png" ::: When the selected date range includes the current day, the average cost is calculated ending at yesterday's date. It doesn't include partial cost from the current day because data for the day isn't complete. Every service submits usage at different timelines that affects the average calculation. For more information about data latency and refresh processing, see [Understand Cost Management data](understand-cost-mgt-data.md).
-**Average in the cost analysis preview is available by default in the cost analysis preview.**
+**The Average KPI is available by default in the cost analysis preview.**
-## Budgets in the cost analysis preview
<a name="budgetsfeature"></a>
+## Budgets in the cost analysis preview
+ Budgets in the cost analysis preview help you quickly create and edit budgets directly from the cost analysis preview. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-budget.png" alt-text="Screenshot showing Budget in the cost analysis preview." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-budget.png" ::: If you don't have a budget yet, you'll see a link to create a new budget. Budgets created from the cost analysis preview are preconfigured with alerts. Thresholds are set for cost exceeding 50 percent, 80 percent, and 95 percent of your cost. Or, 100 percent of your forecast for the month. You can add other recipients or update alerts from the Budgets page.
-**Budgets in the cost analysis preview is available by default in the cost analysis preview.**
+**The Budget KPI is available by default in the cost analysis preview.**
++
+<a name="resourceparent"></a>
+
+## Group related resources in the cost analysis preview
+
+Group related resources, like disks under VMs or web apps under App Service plans, by adding a "cm-resource-parent" tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources will be grouped. Leave feedback to let us know how we can improve this experience further for you.
++
+Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and cannot group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You'll see a single row with the parent resource. When you expand the parent resource, you'll see each linked resource listed individually with their respective cost.
+
+As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
++
+Before you link resources together, think about how you'd like to see them grouped. You can only link a resource to one parent and cost analysis only supports one level of grouping today.
+
+Once you know which resources you'd like to group, use the following steps to tag your resources:
+
+1. Open the resource that you want to be the parent.
+2. Select **Properties** in the resource menu.
+3. Find the **Resource ID** property and copy its value.
+4. Open **All resources** or the resource group that has the resources you want to link.
+5. Select the checkboxes for every resource you want to link and click the **Assign tags** command.
+6. Specify a tag key of "cm-resource-parent" (make sure it is typed correctly) and paste the resource ID from step 3.
+7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.)
+8. Open the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview.
+
+Wait for the tags to load in the Resources view and you should now see your logical parent resource with its linked children. If you don't see them grouped yet, check the tags on the linked resources to ensure they're set. If not, check again in 24 hours.
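If you prefer scripting over the portal, here's a sketch of the same tagging with the Azure CLI; both resource IDs below are placeholder assumptions:

```azurecli
# Both resource IDs are placeholder assumptions.
parentId="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm>"
childId="/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk>"

# Merge the cm-resource-parent tag onto the child resource, leaving its other tags intact.
az tag update --resource-id "$childId" --operation Merge --tags cm-resource-parent="$parentId"
```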
+
+**Grouping related resources is available by default in the cost analysis preview.**
-## Charts in the cost analysis preview
<a name="chartsfeature"></a>
+## Charts in the cost analysis preview
+ Charts in the cost analysis preview include a chart of daily or monthly charges for the specified date range. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-charts.png" alt-text="Screenshot showing a chart in cost analysis preview." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-charts.png" ::: Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** Option at the bottom of the page to share feedback about the preview.
-## Streamlined menu
<a name="onlyinconfig"></a>
+## Streamlined menu
+ Cost Management includes a central management screen for all configuration settings. Some of the settings are also available directly from the Cost Management menu currently. Enabling the **Streamlined menu** option removes configuration settings from the menu. In the following image, the menu on the left is classic cost analysis. The menu on the right is the streamlined menu.
In the following image, the menu on the left is classic cost analysis. The menu
You can enable **Streamlined menu** on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Feel free to [share your feedback](https://feedback.azure.com/d365community/idea/5e0ea52c-1025-ec11-b6e6-000d3a4f07b8). As an experimental feature, we need your feedback to determine whether to release or remove the preview.
-## Open config items in the menu
<a name="configinmenu"></a>
+## Open config items in the menu
+ Cost Management includes a central management view for all configuration settings. Currently, selecting a setting opens the configuration page outside of the Cost Management menu. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-open-config-items-menu.png" alt-text="Screenshot showing configuration items after the Open config items in the menu option is selected." :::
You can enable **Open config items in the menu** on the [Try preview](https://ak
[Share your feedback](https://feedback.azure.com/d365community/idea/1403a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview.
-## Change scope from menu
<a name="changescope"></a>
+## Change scope from menu
+ If you manage many subscriptions and need to switch between subscriptions or resource groups often, you might want to include the **Change scope from menu** option. :::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-change-scope-menu.png" alt-text="Screenshot showing the Change scope option added to the menu after selecting the Change menu from scope preview option." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-change-scope-menu.png" :::
It allows changing the scope from the menu for quicker navigation. To enable the
[Share your feedback](https://feedback.azure.com/d365community/idea/e702a826-1025-ec11-b6e6-000d3a4f07b8) about the feature. As an experimental feature, we need your feedback to determine whether to release or remove the preview. + ## How to share feedback We're always listening and making constant improvements based on your feedback, so we welcome it. Here are a few ways to share your feedback with the team:
We're always listening and making constant improvements based on your feedback,
## Next steps
-Learn about [what's new in Cost Management](https://azure.microsoft.com/blog/tag/cost-management/).
+Learn about [what's new in Cost Management](https://azure.microsoft.com/blog/tag/cost-management/).
data-factory Connector Troubleshoot Ftp Sftp Http https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-ftp-sftp-http.md
Previously updated : 03/11/2022 Last updated : 07/08/2022
This article provides suggestions to troubleshoot common problems with the FTP,
- diffie-hellman-group14-sha1 - diffie-hellman-group1-sha1
+### Error Code: SftpInvalidHostKeyFingerprint
+
+- **Message**: `Host key finger-print validation failed. Expected fingerprint is '<value in linked service>', real finger-print is '<server real value>'`
+
+- **Cause**: Azure Data Factory now supports more secure host key algorithms in the SFTP connector. For the newly added algorithms, you need to get the corresponding fingerprint from the SFTP server.
+
+ The newly supported algorithms are:
+
+ - ssh-ed25519
+ - ecdsa-sha2-nistp256
+ - ecdsa-sha2-nistp384
+ - ecdsa-sha2-nistp521
+
+- **Recommendation**: Get a valid fingerprint for the host key algorithm shown as `real finger-print` in the error message from the SFTP server. You can run a command on your SFTP server to get the fingerprint, as in the sketch below. For example, run `ssh-keygen -E md5 -lf <keyFilePath>` on a Linux server. The command may vary among server types.
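For instance, on a Linux server (the key path below is an assumption; check `/etc/ssh/` for your server's actual host key files):

```bash
# Print the MD5 fingerprint of the server's ed25519 host public key.
ssh-keygen -E md5 -lf /etc/ssh/ssh_host_ed25519_key.pub
```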
+ ## HTTP ### Error code: HttpFileFailedToRead
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
This section shows you how to use the Python SDK to create, start, and monitor a
pipelines_to_run = [] pipeline_reference = PipelineReference('copyPipeline') pipelines_to_run.append(TriggerPipelineReference(pipeline_reference, pipeline_parameters))
- tr_properties = ScheduleTrigger(description='My scheduler trigger', pipelines = pipelines_to_run, recurrence=scheduler_recurrence)
+ tr_properties = TriggerResource(properties=ScheduleTrigger(description='My scheduler trigger', pipelines = pipelines_to_run, recurrence=scheduler_recurrence))
adf_client.triggers.create_or_update(rg_name, df_name, tr_name, tr_properties) # Start the trigger
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
Title: Managing Azure Data Factory Studio preview updates
+ Title: Managing Azure Data Factory studio preview experience
description: Learn more about the Azure Data Factory studio preview experience.
Previously updated : 06/28/2022 Last updated : 07/08/2022 # Manage Azure Data Factory studio preview experience
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Data Factory (Preview)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
Azure Data Factory is improved on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## June 2022
+<br>
+<table>
+<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr>
+<tr><td rowspan=3><b>Data flow</b></td><td>Fuzzy join supported for data flows</td><td>Fuzzy join is now supported in the Join transformation of data flows with a configurable similarity score on join conditions.<br><a href="data-flow-join.md#fuzzy-join">Learn more</a></td></tr>
+<tr><td>Editing capabilities in source projection</td><td>Editing capabilities in source projection are available in data flows to make schema modifications easy.<br><a href="data-flow-source.md#source-options">Learn more</a></td></tr>
+<tr><td>Cast transformation and assert error handling</td><td>Cast transformation and assert error handling are now supported in data flows for better transformations.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+<tr><td rowspan=2><b>Data Movement</b></td><td>Parameterization natively supported in 4 additional connectors</td><td>We added native UI support for parameterization of the following linked
+<tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="sap-change-data-capture-introduction-architecture.md">Learn more</a></td></tr>
+<tr><td><b>Integration Runtime</b></td><td>Time-To-Live in managed VNET (Public Preview)</td><td>Time-To-Live can be set to the provisioned computes in managed VNET.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879">Learn more</a></td></tr>
+<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
+</table>
+ ## May 2022 <br> <table>
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
na Previously updated : 06/29/2022 Last updated : 07/06/2022
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions for Microsoft Defender for Cloud description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Microsoft Defender for Cloud
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
defender-for-iot Dell Edge 5200 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-edge-5200.md
This article describes the Dell Edge 5200 appliance for OT sensors.
| Appliance characteristic |Details | |||
-|**Hardware profile** | SMB|
+|**Hardware profile** | L500|
|**Performance** | Max bandwidth: 60 Mbps<br>Max devices: 1,000 | |**Physical specifications** | Mounting: Wall Mount<br>Ports: 3x RJ45 | |**Status** | Supported, Not available preconfigured|
defender-for-iot Dell Poweredge R340 Xl Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r340-xl-legacy.md
Legacy appliances are certified but aren't currently offered as preconfigured ap
|Appliance characteristic | Description|
|||
-|**Hardware profile** | Enterprise|
+|**Hardware profile** | E1800|
|**Performance** | Max bandwidth: 1 Gbps<br>Max devices: 10,000 |
|**Physical Specifications** | Mounting: 1U<br>Ports: 8x RJ45 or 6x SFP (OPT)|
|**Status** | Supported, not available as a preconfigured appliance|
defender-for-iot Hpe Edgeline El300 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-edgeline-el300.md
Legacy appliances are certified but aren't currently offered as preconfigured ap
| Appliance characteristic |Details |
|||
-|**Hardware profile** | SMB|
+|**Hardware profile** | L500 |
|**Performance** |Max bandwidth: 100 Mbps<br>Max devices: 800 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
|**Status** | Supported, Not available pre-configured|
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
The HPE ProLiant DL20 Plus is also available for the on-premises management cons
| Appliance characteristic |Details |
|||
-|**Hardware profile** | Enterprise|
+|**Hardware profile** | E1800, E1000, E500 |
|**Performance** | Max bandwidth: 1 Gbps <br>Max devices: 10,000 |
|**Physical specifications** | Mounting: 1U <br> Ports: 8x RJ45 or 6x SFP (OPT)|
|**Status** | Supported, Available preconfigured |
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
Title: HPE ProLiant DL20/DL20 Plus for OT monitoring in SMB deployments- Microsoft Defender for IoT
+ Title: HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for OT monitoring in SMB deployments - Microsoft Defender for IoT
description: Learn about the HPE ProLiant DL20/DL20 Plus appliance when used in SMB deployments for OT monitoring with Microsoft Defender for IoT. Last updated 04/24/2022
-# HPE ProLiant DL20/DL20 Plus for SMB deployments
+# HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for SMB deployments
This article describes the **HPE ProLiant DL20** or **HPE ProLiant DL20 Plus** appliance for OT sensors in an SMB deployment.
The HPE ProLiant DL20 Plus is also available for the on-premises management cons
| Appliance characteristic |Details |
|||
-|**Hardware profile** | SMB|
+|**Hardware profile** | L500|
|**Performance** | Max bandwidth: 200 Mbps <br>Max devices: 1,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 4x RJ45|
|**Status** | Supported; Available as pre-configured |
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
This article describes the **HPE ProLiant DL360** appliance for OT sensors.
| Appliance characteristic |Details |
|||
-|**Hardware profile** | Corporate |
+|**Hardware profile** | C5600 |
|**Performance** | Max bandwidth: 3 Gbps <br> Max devices: 12,000 |
|**Physical specifications** | Mounting: 1U<br>Ports: 15x RJ45 or 8x SFP (OPT)|
|**Status** | Supported, Available preconfigured|
defender-for-iot Neousys Nuvo 5006Lp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/neousys-nuvo-5006lp.md
Legacy appliances are certified but aren't currently offered as pre-configured a
| Appliance characteristic |Details |
|||
-|**Hardware profile** | Office |
+|**Hardware profile** | L100 |
|**Performance** | Max bandwidth: 30 Mbps<br>Max devices: 400 |
|**Physical specifications** | Mounting: Mounting kit, Din Rail<br>Ports: 5x RJ45|
|**Status** | Supported, Not available pre-configured|
defender-for-iot Ys Techsystems Ys Fit2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/ys-techsystems-ys-fit2.md
This article describes the **YS-techsystems YS-FIT2** appliance deployment and i
| Appliance characteristic |Details |
|||
-|**Hardware profile** | Office|
+|**Hardware profile** | L100|
|**Performance** | Max bandwidth: 10 Mbps<br>Max devices: 100|
|**Physical specifications** | Mounting: DIN/VESA<br>Ports: 2x RJ45|
|**Status** | Supported; Available as pre-configured |
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
This article is designed to help you choose the right OT appliances for your sen
You can use either physical or virtual appliances.
-## Corporate IT/OT mixed environments
+## C5600: IT/OT mixed environments
Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks:

|Hardware profile |Max throughput |Max monitored assets |Deployment |
|||||
-|Corporate | 3 Gbps | 12 K |Physical / Virtual |
+|C5600 | 3 Gbps | 12 K |Physical / Virtual |
-## Enterprise monitoring at the site level
+## E1800, E1000, E500: Enterprise monitoring at the site level
Use the following hardware profiles for enterprise monitoring at the site level:

|Hardware profile |Max throughput |Max monitored assets |Deployment |
|||||
-|Enterprise |1 Gbps |10K |Physical / Virtual |
+|E1800 |1 Gbps |10K |Physical / Virtual |
+|E1000 |1 Gbps |10K |Physical / Virtual |
+|E500 |1 Gbps |10K |Physical / Virtual |
+ ## Production line monitoring
Use the following hardware profiles for production line monitoring:
|Hardware profile |Max throughput |Max monitored assets |Deployment |
|||||
-|SMB | 200 Mbps | 1,000 |Physical / Virtual |
-|Office | 60 Mbps | 800 | Physical / Virtual |
-|Rugged | 10 Mbps | 100 |Physical / Virtual|
+|L500 | 200 Mbps | 1,000 |Physical / Virtual |
+|L100 | 60 Mbps | 800 | Physical / Virtual |
+|L64 | 10 Mbps | 100 |Physical / Virtual|
## On-premises management console systems
On-premises management consoles allow you to manage and monitor large, multiple-
|Hardware profile |Max monitored sensors |Deployment |
||||
-|Enterprise |Up to 300 |Physical / Virtual |
+|E1800 |Up to 300 |Physical / Virtual |
## Next steps
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
You can order any of the following preconfigured appliances for monitoring your
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications |
|||||
-|Corporate | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|SMB | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|SMB | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60Mbp/s<br>**Max devices**: 1,000 <br> 8 Cores/32G RAM/100GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
-|Office | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|C5600 | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|E1800, E1000, E500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|L500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|L100 | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|L64 | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60 Mbps<br>**Max devices**: 1,000 <br> 8 Cores/32G RAM/100GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
> [!NOTE]
You can purchase any of the following appliances for your OT on-premises managem
|Hardware profile |Appliance |Max sensors |Physical specifications |
|||||
-|Enterprise | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|E1800, E1000, E500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
## Next steps
defender-for-iot Ot Virtual Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-virtual-appliances.md
For all deployments, bandwidth results for virtual machines may vary, depending
|Hardware profile |Performance / Monitoring |Physical specifications |
||||
-|**Corporate** | **Max bandwidth**: 2.5 Gb/sec <br>**Max monitored assets**: 12,000 | **vCPU**: 32 <br>**Memory**: 32 GB <br>**Storage**: 5.6 TB (600 IOPS) |
-|**Enterprise** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 1.8 TB (300 IOPS) |
-|**SMB** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) |
-|**Office** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
-|**Rugged** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
+|**C5600** | **Max bandwidth**: 2.5 Gb/sec <br>**Max monitored assets**: 12,000 | **vCPU**: 32 <br>**Memory**: 32 GB <br>**Storage**: 5.6 TB (600 IOPS) |
+|**E1800, E1000, E500** | **Max bandwidth**: 800 Mb/sec <br>**Max monitored assets**: 10,000 | **vCPU**: 8 <br>**Memory**: 32 GB <br>**Storage**: 1.8 TB (300 IOPS) |
+|**L500** | **Max bandwidth**: 160 Mb/sec <br>**Max monitored assets**: 1,000 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 500 GB (150 IOPS) |
+|**L100** | **Max bandwidth**: 100 Mb/sec <br>**Max monitored assets**: 800 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 100 GB (150 IOPS) |
+|**L64** | **Max bandwidth**: 10 Mb/sec <br>**Max monitored assets**: 100 | **vCPU**: 4 <br>**Memory**: 8 GB <br>**Storage**: 60 GB (150 IOPS) |
## On-premises management console VM requirements
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
For more information, see the [Microsoft Security Development Lifecycle practice
|Service area |Updates |
|||
|**Enterprise IoT networks** | - [Enterprise IoT purchase experience and Defender for Endpoint integration in GA](#enterprise-iot-purchase-experience-and-defender-for-endpoint-integration-in-ga) |
-|**OT networks** |**Sensor software version 22.2.3**:<br><br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
+|**OT networks** |**Sensor software version 22.2.3**:<br><br>- [OT appliance hardware profile updates](#ot-appliance-hardware-profile-updates)<br>- [PCAP access from the Azure portal](#pcap-access-from-the-azure-portal-public-preview)<br>- [Bi-directional alert synch between sensors and the Azure portal](#bi-directional-alert-synch-between-sensors-and-the-azure-portal-public-preview)<br>- [Support diagnostic log enhancements](#support-diagnostic-log-enhancements-public-preview)<br>- [Improved security for uploading protocol plugins](#improved-security-for-uploading-protocol-plugins)<br><br>To update to version 22.2.3:<br>- From version 22.1.x, update directly to version 22.2.3<br>- From version 10.x, first update to version 21.1.6, and then update again to 22.2.3<br><br>For more information, see [Update Defender for IoT OT monitoring software](update-ot-software.md). |
|**Cloud-only features** | - [Microsoft Sentinel incident synch with Defender for IoT alerts](#microsoft-sentinel-incident-synch-with-defender-for-iot-alerts) | ### Enterprise IoT purchase experience and Defender for Endpoint integration in GA
Defender for IoT's new purchase experience and the Enterprise IoT integration
> [!NOTE] > The Enterprise IoT network sensor and all detections remain in Public Preview.
+### OT appliance hardware profile updates
+
+We've refreshed the naming conventions for our OT appliance hardware profiles for greater transparency and clarity.
+
+The new names reflect both the *type* of profile (*Corporate*, *Enterprise*, or *Production line*) and the related disk storage size.
+
+Use the following table to understand the mapping between legacy hardware profile names and the current names used in the updated software installation:
+
+|Legacy name |New name | Description |
+||||
+|**Corporate** | **C5600** | A *Corporate* environment, with: <br>32 Cores<br>32 GB RAM<br>5.6 TB disk storage |
+|**Enterprise** | **E1800** | An *Enterprise* environment, with: <br>8 Cores<br>32 GB RAM<br>1.8 TB disk storage |
+|**SMB** | **L500** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>500 GB disk storage |
+|**Office** | **L100** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>100 GB disk storage |
+|**Rugged** | **L64** | A *Production line* environment, with: <br>4 Cores<br>8 GB RAM<br>64 GB disk storage |
+
+We also now support new enterprise hardware profiles, **E500** and **E1000**, for sensors with 500 GB and 1 TB disk sizes, respectively.
+
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md)
+ ### PCAP access from the Azure portal (Public preview) Now you can access the raw traffic files, known as packet capture files or PCAP files, directly from the Azure portal. This feature supports SOC or OT security engineers who want to investigate alerts from Defender for IoT or Microsoft Sentinel, without having to access each sensor separately.
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/kubernetes/features.md
Although Event Grid on Kubernetes and Azure Event Grid share many features and t
| Azure Relay's Hybrid Connections as a destination | ✘ | ✔ | | [Advanced filtering](filter-events.md) | ✔*** | ✔ | | [Webhook AuthN/AuthZ with AAD](../secure-webhook-delivery.md) | ✘ | ✔ |
-| [Event delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-06-01-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
+| [Event delivery with resource identity](/rest/api/eventgrid/controlplane-version2021-10-15-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
| Same set of data plane SDKs | ✔ | ✔ | | Same set of management SDKs | ✔ | ✔ | | Same Event Grid CLI | ✔ | ✔ |
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
event-hubs Resource Governance With App Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/resource-governance-with-app-groups.md
This article shows you how to perform the following tasks:
> Application groups are available only in **premium** and **dedicated** tiers. ## Create an application group
+This section shows you how to create an application group using the Azure portal, Azure CLI, Azure PowerShell, or an Azure Resource Manager (ARM) template.
-You can create an application group using the Azure portal as illustrated below. When you create the application group, you should associate it to either a shared access signatures (SAS) or Azure Active Directory(Azure AD) application ID, which is used by client applications.
+### [Azure portal](#tab/portal)
+You can create an application group using the Azure portal by following these steps.
+1. Navigate to your Event Hubs namespace.
+1. On the left menu, select **Application Groups** under **Settings**.
+1. On the **Application Groups** page, select **+ Application Group** on the command bar.
-For example, you can create application group `contosoAppGroup` associating it with SAS policy `contososaspolicy`.
+ :::image type="content" source="./media/resource-governance-with-app-groups/application-groups-page.png" alt-text="Screenshot of the Application Groups page in the Azure portal.":::
+1. On the **Add application group** page, follow these steps:
+ 1. Specify a **name** for the application group.
+ 1. Confirm that **Enabled** is selected. To create the application group in a disabled state, clear the **Enabled** option. This flag determines whether the clients of an application group can access Event Hubs.
+ 1. For **Security context type**, select **Shared access policy** or **AAD application**. When you create the application group, you associate it with either a shared access signature (SAS) policy or an Azure Active Directory (Azure AD) application ID, which is used by client applications.
+ 1. If you selected **Shared access policy**:
+ 1. For **SAS key name**, select the SAS policy that can be used as a security context for this application group. You can select **Add SAS Policy** to add a new policy and then associate it with the application group.
+ 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like.
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/add-app-group.png" alt-text="Screenshot of the Add application group page with Shared access policy option selected.":::
+ 1. If you selected **AAD application**:
+ 1. For **AAD Application (client) ID**, specify the Azure Active Directory (Azure AD) application or client ID.
+ 1. Review the auto-generated **Client group ID**, which is the unique ID associated with the application group. You can update it if you like.
-## Apply throttling policies
-You can add zero or more policies when you create an application group or to an existing application group.
+ :::image type="content" source="./media/resource-governance-with-app-groups/add-app-group-active-directory.png" alt-text="Screenshot of the Add application group page with Azure AD option.":::
+ 1. To add a policy, follow these steps:
+ 1. Enter a **name** for the policy.
+ 1. For **Type**, select **Throttling policy**.
+ 1. For **Metric ID**, select one of the following options: **Incoming messages**, **Outgoing messages**, **Incoming bytes**, **Outgoing bytes**. In the following example, **Incoming messages** is selected.
+ 1. For **Rate limit threshold**, enter the threshold value. In the following example, **10000** is specified as the threshold for the number of incoming messages.
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/app-group-policy.png" alt-text="Screenshot of the Add application group page with a policy for incoming messages.":::
+
+ Here's a screenshot of the page with another policy added.
-For example, you can add throttling policies related to `IncomingMessages`, `IncomingBytes` or `OutgoingBytes` to the `contosoAppGroup`. These policies will get applied to event streaming workloads of client applications that use the SAS policy `contososaspolicy`.
+ :::image type="content" source="./media/resource-governance-with-app-groups/app-group-policy-2.png" alt-text="Screenshot of the Add application group page with two policies.":::
+ 1. Now, on the **Add application group** page, select **Add**.
+1. Confirm that you see the application group in the list of application groups.
-## Publish or consume events
-Once you successfully add throttling policies to the application group, you can test the throttling behavior by either publishing or consuming events using client applications that are part of the `contosoAppGroup` application group. For that, you can use either an [AMQP client](event-hubs-dotnet-standard-getstarted-send.md) or a [Kafka client](event-hubs-quickstart-kafka-enabled-event-hubs.md) application and same SAS policy name or Azure AD application ID that's used to create the application group.
+ :::image type="content" source="./media/resource-governance-with-app-groups/application-group-list.png" alt-text="Screenshot of the Application groups page with the application group you created.":::
-> [!NOTE]
-> When your client applications are throttled, you should experience a slowness in publishing or consuming data.
+ You can delete an application group by selecting the trash icon next to it in the list.
-## Enable or disable application groups
-You can prevent client applications accessing your Event Hubs namespace by disabling the application group that contains those applications. When the application group is disabled, client applications won't be able to publish or consume data. Any established connections from client applications of that application group will also be terminated.
+### [Azure CLI](#tab/cli)
+Use the CLI command: [`az eventhubs namespace application-group create`](/cli/azure/eventhubs/namespace/application-group#az-eventhubs-namespace-application-group-create) to create an application group in an Event Hubs namespace.
+
+The following example creates an application group named `myAppGroup` in the namespace `mynamespace` in the Azure resource group `MyResourceGroup`. It uses the following configurations:
+
+- Shared access policy is used as the security context
+- Client app group ID is set to `SASKeyName=<NameOfTheSASkey>`.
+- First throttling policy for the `Incoming messages` metric with `10000` as the threshold.
+- Second throttling policy for the `Incoming bytes` metric with `20000` as the threshold.
+
+```azurecli-interactive
+az eventhubs namespace application-group create --namespace-name mynamespace \
+ -g MyResourceGroup \
+ --name myAppGroup \
+ --client-app-group-identifier SASKeyName=keyname \
+ --throttling-policy-config name=policy1 metric-id=IncomingMessages rate-limit-threshold=10000 \
+ --throttling-policy-config name=policy2 metric-id=IncomingBytes rate-limit-threshold=20000
+```
+
+To learn more about the CLI command, see [`az eventhubs namespace application-group create`](/cli/azure/eventhubs/namespace/application-group#az-eventhubs-namespace-application-group-create).
+
+### [Azure PowerShell](#tab/powershell)
+Use the [`New-AzEventHubApplicationGroup`](/powershell/module/az.eventhub/new-azeventhubapplicationgroup) PowerShell command to create an application group in an Event Hubs namespace.
-## Create application groups using Resource Manager templates
-You can also create an application group using the Azure Resource Manager (ARM) templates.
+The following example uses [`New-AzEventHubThrottlingPolicyConfig`](/powershell/module/az.eventhub/new-azeventhubthrottlingpolicyconfig) to create two policies that will be associated with the application group.
-The following example shows how to create an application group using an ARM template. In this exmaple, the application group is associated with an existing SAS policy name `contososaspolicy` by setting the client `AppGroupIdentifier` as `SASKeyName=contososaspolicy`. The application group policies are also defined in the ARM template.
+- First throttling policy for the `Incoming bytes` metric with `12345` as the threshold.
+- Second throttling policy for the `Incoming messages` metric with `23416` as the threshold.
+
+Then, it creates an application group named `myappgroup` in the namespace `mynamespace` in the Azure resource group `myresourcegroup` by specifying the throttling policies and shared access policy as the security context.
+
+```azurepowershell-interactive
+$policy1 = New-AzEventHubThrottlingPolicyConfig -Name policy1 -MetricId IncomingBytes -RateLimitThreshold 12345
+
+$policy2 = New-AzEventHubThrottlingPolicyConfig -Name policy2 -MetricId IncomingMessages -RateLimitThreshold 23416
+
+New-AzEventHubApplicationGroup -ResourceGroupName myresourcegroup -NamespaceName mynamespace -Name myappgroup `
+    -ClientAppGroupIdentifier SASKeyName=myauthkey -ThrottlingPolicyConfig $policy1, $policy2
+```
+
+To learn more about the PowerShell command, see [`New-AzEventHubApplicationGroup`](/powershell/module/az.eventhub/new-azeventhubapplicationgroup).
+
+### [ARM template](#tab/arm)
+The following example shows how to create an application group using an ARM template. In this example, the application group is associated with an existing SAS policy name `contososaspolicy` by setting the client `AppGroupIdentifier` as `SASKeyName=contososaspolicy`. The application group policies are also defined in the ARM template.
```json
The following example shows how to create an application group using an ARM temp
} } ```++
+## Enable or disable an application group
+You can prevent client applications accessing your Event Hubs namespace by disabling the application group that contains those applications. When the application group is disabled, client applications won't be able to publish or consume data. Any established connections from client applications of that application group will also be terminated.
+
+This section shows you how to enable or disable an application group using the Azure portal, Azure CLI, Azure PowerShell, or an ARM template.
+
+### [Azure portal](#tab/portal)
+
+1. On the **Event Hubs Namespace** page, select **Application Groups** on the left menu.
+1. Select the application group that you want to enable or disable.
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/select-application-group.png" alt-text="Screenshot showing the Application Groups page with an application group selected.":::
+1. On the **Edit application group** page, clear the **Enabled** checkbox to disable the application group, and then select **Update** at the bottom of the page. Similarly, select the checkbox to enable it.
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/disable-app-group.png" alt-text="Screenshot showing the Edit application group page with Enabled option deselected.":::
+
+### [Azure CLI](#tab/cli)
+Use the [`az eventhubs namespace application-group update`](/cli/azure/eventhubs/namespace/application-group#az-eventhubs-namespace-application-group-update) command with `--is-enabled` set to `false` to disable an application group. Similarly, to enable an application group, set this property to `true` and run the command.
+
+The following sample command disables the application group named `myappgroup` in the Event Hubs namespace `mynamespace` that's in the resource group `myresourcegroup`.
+
+```azurecli-interactive
+az eventhubs namespace application-group update --namespace-name mynamespace -g myresourcegroup --name myappgroup --is-enabled false
+```
+
+### [Azure PowerShell](#tab/powershell)
+Use the [Set-AzEventHubApplicationGroup](/powershell/module/az.eventhub/set-azeventhubapplicationgroup) command with `-IsEnabled` set to `false` to disable an application group. Similarly, to enable an application group, set this property to `true` and run the command.
+
+The following sample command disables the application group named `myappgroup` in the Event Hubs namespace `mynamespace` that's in the resource group `myresourcegroup`.
+
+```azurepowershell-interactive
+Set-AzEventHubApplicationGroup -ResourceGroupName myresourcegroup -NamespaceName mynamespace -Name myappgroup -IsEnabled false
+```
+
+### [ARM template](#tab/arm)
+The following ARM template shows how to update an existing namespace (`contosonamespace`) to disable an application group by setting the `isEnabled` property to `false`. The identifier for the app group is `SASKeyName=RootManageSharedAccessKey`.
+
+> [!NOTE]
+> The following sample also adds two throttling policies.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "namespace_name": {
+ "defaultValue": "contosonamespace",
+ "type": "String"
+ },
+ "client-app-group-identifier": {
+ "defaultValue": "SASKeyName=RootManageSharedAccessKey",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.EventHub/namespaces/applicationGroups",
+ "apiVersion": "2022-01-01-preview",
+ "name": "[concat(parameters('namespace_name'), '/contosoappgroup')]",
+ "properties": {
+ "clientAppGroupIdentifier": "[parameters('client-app-group-identifier')]",
+ "isEnabled": false,
+ "policies": [
+ {
+ "type": "ThrottlingPolicy",
+ "name": "incomingmsgspolicy",
+ "metricId": "IncomingMessages",
+ "rateLimitThreshold": 10000
+ },
+ {
+ "type": "ThrottlingPolicy",
+ "name": "incomingbytespolicy",
+ "metricId": "IncomingBytes",
+ "rateLimitThreshold": 20000
+ }
+ ]
+ }
+ }
+ ]
+}
+```
++
+## Apply throttling policies
+You can add zero or more throttling policies when you create an application group, or add them to an existing application group. For example, you can add throttling policies for `IncomingMessages`, `IncomingBytes`, or `OutgoingBytes` to `contosoAppGroup`. These policies apply to the event streaming workloads of client applications that use the SAS policy `contososaspolicy`.
+
+To learn how to add policies while creating an application group, see the [Create an application group](#create-an-application-group) section.
+
+You can also add policies after an application group is created.
+
+### [Azure portal](#tab/portal)
+1. On the **Event Hubs Namespace** page, select **Application Groups** on the left menu.
+1. Select the application group for which you want to add, update, or delete a policy.
+
+ :::image type="content" source="./media/resource-governance-with-app-groups/select-application-group.png" alt-text="Screenshot showing the Application Groups page with an application group selected.":::
+1. On the **Edit application group** page, you can do the following:
+ 1. Update settings (including threshold values) for existing policies
+ 1. Add a new policy
+
+### [Azure CLI](#tab/cli)
+Use the [`az eventhubs namespace application-group policy add`](/cli/azure/eventhubs/namespace/application-group/policy#az-eventhubs-namespace-application-group-policy-add) command to add a policy to an existing application group.
+
+**Example:**
+
+```azurecli-interactive
+az eventhubs namespace application-group policy add --namespace-name mynamespace -g MyResourceGroup --name myAppGroup --throttling-policy-config name=policy1 metric-id=OutgoingMessages rate-limit-threshold=10500 --throttling-policy-config name=policy2 metric-id=IncomingBytes rate-limit-threshold=20000
+```
+
+### [Azure PowerShell](#tab/powershell)
+Use the [Set-AzEventHubApplicationGroup](/powershell/module/az.eventhub/set-azeventhubapplicationgroup) command with `-ThrottlingPolicyConfig` set to appropriate values.
+
+**Example:**
+```azurepowershell-interactive
+$policyToBeAppended = New-AzEventHubThrottlingPolicyConfig -Name policy1 -MetricId IncomingBytes -RateLimitThreshold 12345
+
+$appGroup = Get-AzEventHubApplicationGroup -ResourceGroupName myresourcegroup -NamespaceName mynamespace -Name myappgroup
+
+$appGroup.ThrottlingPolicyConfig += $policyToBeAppended
+
+Set-AzEventHubApplicationGroup -ResourceGroupName myresourcegroup -NamespaceName mynamespace -Name myappgroup -ThrottlingPolicyConfig $appGroup.ThrottlingPolicyConfig
+```
+
+### [ARM template](#tab/arm)
+The following ARM template shows how to update an existing namespace (`contosonamespace`) to add throttling policies. The identifier for the app group is `SASKeyName=RootManageSharedAccessKey`.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "namespace_name": {
+ "defaultValue": "contosonamespace",
+ "type": "String"
+ },
+ "client-app-group-identifier": {
+ "defaultValue": "SASKeyName=RootManageSharedAccessKey",
+ "type": "String"
+ }
+ },
+ "resources": [
+ {
+ "type": "Microsoft.EventHub/namespaces/applicationGroups",
+ "apiVersion": "2022-01-01-preview",
+ "name": "[concat(parameters('namespace_name'), '/contosoappgroup')]",
+ "properties": {
+ "clientAppGroupIdentifier": "[parameters('client-app-group-identifier')]",
+ "isEnabled": true,
+ "policies": [
+ {
+ "type": "ThrottlingPolicy",
+ "name": "incomingmsgspolicy",
+ "metricId": "IncomingMessages",
+ "rateLimitThreshold": 10000
+ },
+ {
+ "type": "ThrottlingPolicy",
+ "name": "incomingbytespolicy",
+ "metricId": "IncomingBytes",
+ "rateLimitThreshold": 20000
+ }
+ ]
+ }
+ }
+ ]
+}
+
+```
++
+## Publish or consume events
+Once you successfully add throttling policies to the application group, you can test the throttling behavior by publishing or consuming events using client applications that are part of the `contosoAppGroup` application group. To test, you can use either an [AMQP client](event-hubs-dotnet-standard-getstarted-send.md) or a [Kafka client](event-hubs-quickstart-kafka-enabled-event-hubs.md) application and the same SAS policy name or Azure AD application ID that was used to create the application group.
+
+> [!NOTE]
+> When your client applications are throttled, you should experience a slowness in publishing or consuming data.
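+
+As an illustrative sketch (not part of the original article), the following minimal Python producer uses the [`azure-eventhub`](https://pypi.org/project/azure-eventhub/) package to publish a test event through the application group's security context. The namespace, event hub name, and key are hypothetical placeholders; the connection string must use the same SAS policy (for example, `contososaspolicy`) that the application group is associated with.
+
+```python
+from azure.eventhub import EventHubProducerClient, EventData
+from azure.eventhub.exceptions import EventHubError
+
+# Hypothetical connection string built from the SAS policy that the
+# application group is associated with (replace the placeholders).
+CONN_STR = (
+    "Endpoint=sb://<namespace>.servicebus.windows.net/;"
+    "SharedAccessKeyName=contososaspolicy;SharedAccessKey=<key>"
+)
+
+producer = EventHubProducerClient.from_connection_string(CONN_STR, eventhub_name="<event-hub-name>")
+
+try:
+    with producer:
+        batch = producer.create_batch()
+        batch.add(EventData("test event"))
+        producer.send_batch(batch)  # Throttling policies slow this call down.
+except EventHubError as err:
+    # If the application group is disabled, sends fail and established
+    # connections from its client applications are terminated.
+    print(f"Send failed: {err}")
+```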
++ ## Next steps
-For conceptual information on application groups, see [Resource governance with application groups](resource-governance-overview.md).
+
+- For conceptual information on application groups, see [Resource governance with application groups](resource-governance-overview.md).
+- See [Azure PowerShell reference for Event Hubs](/powershell/module/az.eventhub#event-hub)
+- See [Azure CLI reference for Event Hubs](/cli/azure/eventhubs)
firewall Firewall Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-performance.md
Previously updated : 01/24/2022 Last updated : 07/08/2022
Before deploying Azure Firewall, the performance needs to be tested and evaluate
## Performance data
-The following set of performance results demonstrates the maximal Azure Firewall throughput in various use cases. All use cases were measured while Threat intelligence mode was set to alert/deny.
+The following set of performance results demonstrates the maximum Azure Firewall throughput in various use cases. All use cases were measured while Threat intelligence mode was set to alert/deny. The Azure Firewall Premium performance boost feature is enabled by default on all Azure Firewall Premium deployments. This feature includes enabling Accelerated Networking on the underlying firewall virtual machines.
|Firewall type and use case |TCP/UDP bandwidth (Gbps) |HTTP/S bandwidth (Gbps) |
||||
|Standard |30|30|
-|Premium (no TLS/IDPS) |30|30|
-|Premium with TLS |-|30|
-|Premium with IDS |30|30|
+|Premium (no TLS/IDPS) |30|100|
+|Premium with TLS |-|100|
+|Premium with IDS |100|100|
|Premium with IPS |10|10|

> [!NOTE]
> IPS (Intrusion Prevention System) is in effect when one or more signatures are configured to *Alert and Deny* mode.
-Azure Firewall PremiumΓÇÖs new performance boost functionality is now in public preview and provides you with the following enhancements to the overall firewall performance:
+Azure Firewall also supports the following throughput for single connections:
-|Firewall use case |Without performance boost (Gbps) |With performance boost (Gbps) |
-||||
-|Standard<br>Max bandwidth for single TCP connection |1.3|-|
-|Premium<br>Max bandwidth for single TCP connection |2.6|9.5|
-|Premium max bandwidth with TLS/IDS|30|100|
-
-Performance values are calculated with Azure Firewall at full scale and with Premium performance boost enabled. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
-
-To enable the Azure Firewall Premium performance boost, see [Azure Firewall preview features](firewall-preview.md#azure-firewall-premium-performance-boost-preview).
+|Firewall use case |Throughput (Gbps)|
+|||
+|Standard<br>Max bandwidth for single TCP connection |1.3|
+|Premium<br>Max bandwidth for single TCP connection |9.5|
+|Premium max bandwidth with TLS/IDS|100|
+
+Performance values are calculated with Azure Firewall at full scale. Actual performance may vary depending on your rule complexity and network configuration. These metrics are updated periodically as performance continuously evolves with each release.
## Next steps
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
Previously updated : 05/25/2022 Last updated : 07/08/2022
Run the following Azure PowerShell command to turn off this feature:
Unregister-AzProviderFeature -FeatureName AFWEnableNetworkRuleNameLogging -ProviderNamespace Microsoft.Network ```
-### Azure Firewall Premium performance boost (preview)
-
-As more applications move to the cloud, the performance of the network elements can become a bottleneck. As the central piece of any network design, the firewall needs to support all the workloads. The Azure Firewall Premium performance boost feature allows more scalability for these deployments.
-
-This feature significantly increases the throughput of Azure Firewall Premium. For more information, see [Azure Firewall performance](firewall-performance.md).
-
-To enable the Azure Firewall Premium Performance boost feature, run the following commands in Azure PowerShell. Stop and start the firewall for the feature to take effect immediately. Otherwise, the firewall/s is updated with the feature within several days.
-
-The Premium performance boost feature can be enabled on both the [hub virtual network](../firewall-manager/vhubs-and-vnets.md) firewall and the [secured virtual hub](../firewall-manager/vhubs-and-vnets.md) firewall. This feature has no effect on Standard Firewalls.
-
-Run the following Azure PowerShell commands to configure the Azure Firewall Premium performance boost:
-
-```azurepowershell
-Connect-AzAccount
-Select-AzSubscription -Subscription "subscription_id or subscription_name"
-Register-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
-Register-AzResourceProvider -ProviderNamespace Microsoft.Network
-```
-
-Run the following Azure PowerShell command to turn off this feature:
-
-```azurepowershell
-Unregister-AzProviderFeature -FeatureName AFWEnableAccelnet -ProviderNamespace Microsoft.Network
-```
- ### IDPS Private IP ranges (preview) In Azure Firewall Premium IDPS, private IP address ranges are used to identify if traffic is inbound, outbound, or internal (East-West). Each signature is applied on specific traffic direction, as indicated in the signature rules table. By default, only ranges defined by IANA RFC 1918 are considered private IP addresses. So traffic sent from a private IP address range to a private IP address range is considered internal. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
frontdoor Create Front Door Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-bicep.md
+
+ Title: 'Quickstart: Create an Azure Front Door Standard/Premium using Bicep'
+description: This quickstart describes how to create an Azure Front Door Standard/Premium using Bicep.
+++ Last updated : 07/08/2022+++
+ na
+
+#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
++
+# Quickstart: Create a Front Door Standard/Premium using Bicep
+
+This quickstart describes how to use Bicep to create an Azure Front Door Standard/Premium with a Web App as origin.
++
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* IP or FQDN of a website or web application.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/front-door-standard-premium-app-service-public/).
+
+In this quickstart, you'll create a Front Door Standard/Premium and an App Service, and configure the App Service to validate that traffic has come through Front Door.
++
+Multiple Azure resources are defined in the Bicep file:
+
+* [**Microsoft.Cdn/profiles**](/azure/templates/microsoft.cdn/profiles) (Front Door Standard/Premium profile)
+* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms) (App service plan to host web apps)
+* [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites) (Web app origin servicing request for Front Door)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep
+ ```
+
+
+
+ When the deployment finishes, the output is similar to:
+
+ :::image type="content" source="./media/create-front-door-bicep/front-door-standard-premium-bicep-deployment-powershell-output.png" alt-text="Screenshot of Front Door Bicep PowerShell deployment output.":::
+
+## Validate the deployment
+
+Use Azure CLI or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+You can also use the Azure portal to validate the deployment.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Resource groups** from the left pane.
+
+1. Select the resource group that you created in the previous section.
+
1. Select the Front Door you created and you'll be able to see the endpoint hostname. Copy the hostname and paste it into the address bar of a browser. Press Enter, and your request will automatically be routed to the web app.
+
+ :::image type="content" source="./media/create-front-door-bicep/front-door-bicep-web-app-origin-success.png" alt-text="Screenshot of the message: Your web app is running and waiting for your content.":::
+
+## Clean up resources
+
+When you no longer need these resources, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group. This removes the Front Door and all the related resources.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you created the following resources:
+
+* Front Door
+* App Service plan
+* Web App
+
+To learn how to add a custom domain to your Front Door, continue to the Front Door tutorials.
+
+> [!div class="nextstepaction"]
+> [Front Door tutorials](front-door-custom-domain.md)
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/resource-graph-samples.md
Title: Azure Resource Graph sample queries for management groups description: Sample Azure Resource Graph queries for management groups showing use of resource types and tables to access management group details. Previously updated : 06/16/2022 Last updated : 07/07/2022
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 06/29/2022 Last updated : 07/06/2022
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 07/05/2022 Last updated : 07/06/2022
governance Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Policy description: Sample Azure Resource Graph queries for Azure Policy showing use of resource types and tables to access Azure Policy related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
governance Query Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md
This article covers the language components supported by Resource Graph:
- [Understanding the Azure Resource Graph query language](#understanding-the-azure-resource-graph-query-language) - [Resource Graph tables](#resource-graph-tables)
- - [Extended properties (preview)](#extended-properties-preview)
+ - [Extended properties](#extended-properties)
- [Resource Graph custom language elements](#resource-graph-custom-language-elements) - [Shared query syntax (preview)](#shared-query-syntax-preview) - [Supported KQL language elements](#supported-kql-language-elements)
Resources
> When limiting the `join` results with `project`, the property used by `join` to relate the two > tables, _subscriptionId_ in the above example, must be included in `project`.
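
For example, a minimal sketch along those lines (the resource types and the `subscriptionName` alias are illustrative):

```kusto
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| project name, subscriptionId
| join (
    ResourceContainers
    | where type =~ 'microsoft.resources/subscriptions'
    | project subscriptionName = name, subscriptionId
  ) on subscriptionId
```

Because both `project` clauses keep `subscriptionId`, the `join` can relate the two tables.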
-## Extended properties (preview)
+## Extended properties
Some of the resource types in Resource Graph have additional type-related properties available to query beyond the properties provided by Azure Resource Manager. This set of
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md
With Azure Resource Graph, you can:
> [!NOTE] > As a _preview_ feature, some `type` objects have additional non-Resource Manager properties > available. For more information, see
-> [Extended properties (preview)](./concepts/query-language.md#extended-properties-preview).
+> [Extended properties](./concepts/query-language.md#extended-properties).
## How Resource Graph is kept current
governance Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/advanced.md
Search-AzGraph -Query "Resources | where type =~ 'microsoft.network/networkinter
## <a name="vm-powerstate"></a>Summarize virtual machine by the power states extended property
-This query uses the [extended properties](../concepts/query-language.md#extended-properties-preview) on
+This query uses the [extended properties](../concepts/query-language.md#extended-properties) on
virtual machines to summarize by power states. ```kusto
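// A minimal sketch of such a query (the extended property path below is
// an assumption based on the commonly documented instanceView property):
Resources
| where type =~ 'microsoft.compute/virtualmachines'
| summarize count() by tostring(properties.extended.instanceView.powerState.code)
```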
governance Samples By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md
Title: List of sample Azure Resource Graph queries by category description: List sample queries for Azure Resource-Graph. Categories include Tags, Azure Advisor, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 06/16/2022 Last updated : 07/07/2022
governance Samples By Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md
Title: List of sample Azure Resource Graph queries by table description: List sample queries for Azure Resource-Graph. Tables include Resources, ResourceContainers, PolicyResources, and more. Previously updated : 06/16/2022 Last updated : 07/07/2022
hdinsight Hdinsight Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-availability-zones.md
HDInsight clusters can currently be created using availability zones in the foll
- US Gov Virginia - West Europe - West US 2
+ - Korea Central
## Overview of availability zones for HDInsight clusters
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
healthcare-apis Deploy Iot Connector In Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-iot-connector-in-azure.md
Previously updated : 07/06/2022 Last updated : 07/07/2022
Leave the **Device Mapping** and **Destination Mapping** fields with their defau
Select the **Review + create** button once the fields are filled out. After the validation has passed, select the **Create** button to begin the deployment.

After a successful deployment, you'll need to complete a few remaining configurations for a fully functional MedTech service:

* Provide a working device mapping file. For more information, see [How to use device mappings](how-to-use-device-mappings.md).
It's important that you have the following prerequisites completed before you be
1. Sign in to the [Azure portal](https://portal.azure.com), and then enter your Health Data Services workspace resource name in the **Search** bar.
- ![Screenshot of entering the workspace resource name in the search bar field.](media/select-workspace-resource-group.png#lightbox)
+ ![Screenshot of entering the workspace resource name in the search bar field.](media/iot-deploy-manual-in-portal/select-workspace-resource-group.png#lightbox)
2. Select **Deploy MedTech service**.
- ![Screenshot of MedTech services blade.](media/iot-connector-blade.png#lightbox)
+ ![Screenshot of MedTech services blade.](media/iot-deploy-manual-in-portal/iot-connector-blade.png#lightbox)
3. Next, select **Add MedTech service**.
- ![Screenshot of add MedTech services.](media/add-iot-connector.png#lightbox)
+ ![Screenshot of add MedTech services.](media/iot-deploy-manual-in-portal/add-iot-connector.png#lightbox)
## Configure the MedTech service to ingest data Under the **Basics** tab, complete the required fields under **Instance details**.
-![Screenshot of IoT configure instance details.](media/basics-instance-details.png#lightbox)
+![Screenshot of IoT configure instance details.](media/iot-deploy-manual-in-portal/basics-instance-details.png#lightbox)
1. Enter the **MedTech service name**.
Under the **Basics** tab, complete the required fields under **Instance details*
To find the Consumer Group name, use the **Search** bar to go to the Event Hubs instance that you've deployed, and then select the **Consumer groups** blade.
- ![Screenshot of Consumer group name.](media/consumer-group-name.png#lightbox)
+ ![Screenshot of Consumer group name.](media/iot-deploy-manual-in-portal/consumer-group-name.png#lightbox)
> [!IMPORTANT] > If you're going to allow access from multiple services to the device message event hub, it is highly recommended that each service has its own event hub consumer group.
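
For example, you could create a dedicated consumer group for the MedTech service with the Azure CLI (the resource names here are hypothetical placeholders):

```azurecli
az eventhubs eventhub consumer-group create \
  --resource-group <resource-group> \
  --namespace-name <event-hubs-namespace> \
  --eventhub-name <device-message-event-hub> \
  --name <medtech-consumer-group>
```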
Under the **Basics** tab, complete the required fields under **Instance details*
The **Fully Qualified Namespace** is the **Host name** located on your Event Hubs Namespace's **Overview** page.
- ![Screenshot of Fully qualified namespace.](media/event-hub-hostname.png#lightbox)
+ ![Screenshot of Fully qualified namespace.](media/iot-deploy-manual-in-portal/event-hub-hostname.png#lightbox)
For more information about Event Hubs Namespaces, see [Namespace](../../event-hubs/event-hubs-features.md?WT.mc_id=Portal-Microsoft_Healthcare_APIs#namespace) in the Features and terminology in Azure Event Hubs document.
Under the **Basics** tab, complete the required fields under **Instance details*
1. Under the **Device Mapping** tab, enter the Device mapping JSON code associated with your MedTech service.
- ![Screenshot of Configure device mapping.](media/configure-device-mapping.png#lightbox)
+ ![Screenshot of Configure device mapping.](media/iot-deploy-manual-in-portal/configure-device-mapping.png#lightbox)
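
For reference, here's a minimal device mapping sketch based on the open-source IoMT mapping format (the heart-rate type name and JSONPath expressions are hypothetical; adjust them to match your device messages):

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "JsonPathContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.endDate",
        "values": [
          {
            "required": "true",
            "valueExpression": "$.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```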
2. Select **Next: Destination >** to configure the destination properties associated with your MedTech service.
Under the **Basics** tab, complete the required fields under **Instance details*
Under the **Destination** tab, enter the destination properties associated with the MedTech service.
- ![Screenshot of Configure destination properties.](media/configure-destination-properties.png#lightbox)
+ ![Screenshot of Configure destination properties.](media/iot-deploy-manual-in-portal/configure-destination-properties.png#lightbox)
1. Enter the Azure Resource ID of the **FHIR service**. To find the **FHIR Server** name (also known as the **FHIR service**), use the **Search** bar to go to the FHIR service that you've deployed, and then select the **Properties** blade. Copy and paste the **Resource ID** string into the **FHIR Server** text field.
- ![Screenshot of Enter FHIR server name.](media/fhir-service-resource-id.png#lightbox)
+ ![Screenshot of Enter FHIR server name.](media/iot-deploy-manual-in-portal/fhir-service-resource-id.png#lightbox)
2. Enter the **Destination Name**.
Under the **Destination** tab, enter the destination properties associated with
3. Select **Create** or **Lookup** for the **Resolution Type**. > [!NOTE]
- > For the MedTech service destination to create a valid observation resource in the FHIR service, a device resource and patient resource **must** exist in the FHIR Server, so the observation can properly reference the device that created the data, and the patient the data was measured from. There are two modes the MedTech service can use to resolve the device and patient resources.
+ > For the MedTech service destination to create a valid observation resource in the FHIR service, a device resource and patient resource **must** exist in the FHIR service, so the observation can properly reference the device that created the data, and the patient the data was measured from. There are two modes the MedTech service can use to resolve the device and patient resources.
**Create**
- The MedTech service destination attempts to retrieve a device resource from the FHIR Server using the device identifier included in the event hub message. It also attempts to retrieve a patient resource from the FHIR Server using the patient identifier included in the event hub message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the event hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the IoT Connector destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR Server.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR service using the device identifier included in the event hub message. It also attempts to retrieve a patient resource from the FHIR service using the patient identifier included in the event hub message. If either resource isn't found, new resources will be created (device, patient, or both) containing just the identifier contained in the event hub message. When you use the **Create** option, both a device identifier and a patient identifier can be configured in the device mapping. In other words, when the MedTech service destination is in **Create** mode, it can function normally **without** adding device and patient resources to the FHIR service.
**Lookup**
- The MedTech service destination attempts to retrieve a device resource from the FHIR service using the device identifier included in the event hub message. If the device resource isn't found, an error will occur, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR Server before data can be processed.
+ The MedTech service destination attempts to retrieve a device resource from the FHIR service using the device identifier included in the event hub message. If the device resource isn't found, an error will occur, and the data won't be processed. For **Lookup** to function properly, a device resource with an identifier matching the device identifier included in the event hub message **must** exist and the device resource **must** have a reference to a patient resource that also exists. In other words, when the MedTech service destination is in the Lookup mode, device and patient resources **must** be added to the FHIR service before data can be processed.
For more information, see the open source documentation [FHIR destination mapping](https://github.com/microsoft/iomt-fhir/blob/master/docs/Configuration.md#fhir-mapping).
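To make the two modes concrete, here's a minimal sketch of a device message as it might arrive on the device message event hub. Every property name here (`deviceId`, `patientId`, `heartRate`, `measurementDateTime`) is an illustrative assumption; the actual names are whatever your device mapping extracts.

```python
import json

# Hypothetical device message; property names are assumptions, not a fixed schema.
sample_message = {
    "deviceId": "device-123",                       # resolved against a Device resource identifier
    "patientId": "patient-456",                     # resolved against a Patient resource identifier
    "heartRate": 78,                                # the measurement value itself
    "measurementDateTime": "2022-07-07T12:00:00Z",  # when the measurement was taken
}

# In Create mode, missing Device/Patient resources are created from these identifiers;
# in Lookup mode, both must already exist in the FHIR service before data is processed.
print(json.dumps(sample_message, indent=2))
```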
Tags are name and value pairs used for categorizing resources. For more informat
Under the **Tags** tab, enter the tag properties associated with the MedTech service.
- ![Screenshot of Tag properties.](media/tag-properties.png#lightbox)
+ ![Screenshot of Tag properties.](media/iot-deploy-manual-in-portal/tag-properties.png#lightbox)
1. Enter a **Name**.
2. Enter a **Value**.
Under the **Tags** tab, enter the tag properties associated with the MedTech ser
You should notice a **Validation success** message like what's shown in the image below.
- ![Screenshot of Validation success message.](media/iot-connector-validation-success.png#lightbox)
+ ![Screenshot of Validation success message.](media/iot-deploy-manual-in-portal/iot-connector-validation-success.png#lightbox)
> [!NOTE]
> If your MedTech service didn't validate, review the validation failure message, and troubleshoot the issue. It's recommended that you review the properties under each MedTech service tab that you've configured.
Under the **Tags** tab, enter the tag properties associated with the MedTech ser
The newly deployed MedTech service will display inside your Azure Resource groups page.
- ![Screenshot of Deployed MedTech service listed in the Azure Recent resources list.](media/azure-resources-iot-connector-deployed.png#lightbox)
+ ![Screenshot of Deployed MedTech service listed in the Azure Recent resources list.](media/iot-deploy-manual-in-portal/azure-resources-iot-connector-deployed.png#lightbox)
Now that your MedTech service has been deployed, we're going to walk through the steps of assigning permissions to access the event hub and FHIR service.
To ensure that your MedTech service works properly, it must have granted access
2. Select the **Access control (IAM)** blade, and then select **+ Add**.
- ![Screenshot of access control of Event Hubs Namespace.](media/access-control-blade-add.png#lightbox)
+ ![Screenshot of access control of Event Hubs Namespace.](media/iot-deploy-manual-in-portal/access-control-blade-add.png#lightbox)
3. Select **Add role assignment**.
- ![Screenshot of add role assignment.](media/event-hub-add-role-assignment.png#lightbox)
+ ![Screenshot of add role assignment.](media/iot-deploy-manual-in-portal/event-hub-add-role-assignment.png#lightbox)
4. Select the **Role**, and then select **Azure Event Hubs Data Receiver**.
- ![Screenshot of add role assignment required fields.](media/event-hub-add-role-assignment-fields.png#lightbox)
+ ![Screenshot of add role assignment required fields.](media/iot-deploy-manual-in-portal/event-hub-add-role-assignment-fields.png#lightbox)
The Azure Event Hubs Data Receiver role allows the MedTech service that's being assigned this role to receive data from this event hub.
To ensure that your MedTech service works properly, it must have granted access
`<your workspace name>/iotconnectors/<your MedTech service name>`
- When you deploy a MedTech service, it creates a system managed identity. The system managed identify name is a concatenation of the workspace name, resource type (that's the MedTech service), and the name of the MedTech service.
+ When you deploy a MedTech service, it creates a system-assigned managed identity. The system-assigned managed identity name is a concatenation of the workspace name, the resource type (that is, the MedTech service), and the name of the MedTech service.
7. Select **Save**. After the role assignment has been successfully added to the event hub, a notification will display a green check mark with the text "Add Role assignment." This message indicates that the MedTech service can now read from the event hub.
- ![Screenshot of added role assignment message.](media/event-hub-added-role-assignment.png#lightbox)
+ ![Screenshot of added role assignment message.](media/iot-deploy-manual-in-portal/event-hub-added-role-assignment.png#lightbox)
For more information about authorizing access to Event Hubs resources, see [Authorize access with Azure Active Directory](../../event-hubs/authorize-access-azure-active-directory.md).
For more information about authoring access to Event Hubs resources, see [Author
3. Select **Add role assignment**.
- ![Screenshot of add role assignment for the FHIR service.](media/fhir-service-add-role-assignment.png#lightbox)
+ ![Screenshot of add role assignment for the FHIR service.](media/iot-deploy-manual-in-portal/fhir-service-add-role-assignment.png#lightbox)
4. Select the **Role**, and then select **FHIR Data Writer**.
For more information about authoring access to Event Hubs resources, see [Author
6. Select **Save**.
- ![Screenshot of FHIR service added role assignment message.](media/fhir-service-added-role-assignment.png#lightbox)
+ ![Screenshot of FHIR service added role assignment message.](media/iot-deploy-manual-in-portal/fhir-service-added-role-assignment.png#lightbox)
For more information about assigning roles to the FHIR service, see [Configure Azure Role-based Access Control (RBAC)](.././configure-azure-rbac.md).
healthcare-apis How To Use Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-device-mappings.md
Previously updated : 02/16/2022 Last updated : 07/07/2022
This article describes how to configure the MedTech service using Device mappings.
-MedTech service requires two types of JSON-based mappings. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hub end point. It extracts types, device identifiers, measurement date time, and the measurement value(s).
+The MedTech service requires two types of JSON-based mappings. The first type, **Device mapping**, is responsible for mapping the device payloads sent to the `devicedata` Azure Event Hubs endpoint. It extracts types, device identifiers, measurement date time, and the measurement value(s).
The second type, **Fast Healthcare Interoperability Resources (FHIR&#174;) destination mapping**, controls the mapping to FHIR resources. It allows configuration of the length of the observation period, the FHIR data type used to store the values, and terminology code(s). A sketch of a Device mapping follows.
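As a rough sketch of the first mapping type, the Python dict below mirrors the CollectionContent/JsonPathContent template structure from the open-source project, written for a hypothetical heart-rate message. The type name, message shape, and JSONPath expressions are all assumptions; consult the open-source configuration documentation for the authoritative schema.

```python
# A minimal sketch of a Device mapping, expressed as a Python dict for readability.
# It assumes a message like {"deviceId": "...", "time": "...", "heartRate": 78}.
device_mapping = {
    "templateType": "CollectionContent",
    "template": [
        {
            "templateType": "JsonPathContent",
            "template": {
                "typeName": "heartrate",                      # measurement type
                "typeMatchExpression": "$..[?(@heartRate)]",  # which messages this template handles
                "deviceIdExpression": "$.deviceId",           # device identifier
                "timestampExpression": "$.time",              # measurement date time
                "values": [
                    {
                        "required": "true",
                        "valueExpression": "$.heartRate",     # the measurement value(s)
                        "valueName": "hr",
                    }
                ],
            },
        }
    ],
}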
The normalized data model has a few required properties that must be found and e
Below are conceptual examples of what happens during normalization and transformation process within the MedTech service:
-The content payload itself is an Azure Event Hub message, which is composed of three parts: Body, Properties, and SystemProperties. The `Body` is a byte array representing an UTF-8 encoded string. During template evaluation, the byte array is automatically converted into the string value. `Properties` is a key value collection for use by the message creator. `SystemProperties` is also a key value collection reserved by the Azure Event Hub framework with entries automatically populated by it.
+The content payload itself is an Azure Event Hubs message, which is composed of three parts: Body, Properties, and SystemProperties. The `Body` is a byte array representing an UTF-8 encoded string. During template evaluation, the byte array is automatically converted into the string value. `Properties` is a key value collection for use by the message creator. `SystemProperties` is also a key value collection reserved by the Azure Event Hubs framework with entries automatically populated by it.
```json {
The content payload itself is an Azure Event Hub message, which is composed of t
The five device content-mapping types supported today rely on JSONPath to both match the required mapping and extract values. More information on JSONPath can be found [here](https://goessner.net/articles/JsonPath/). All five template types use the [JSON .NET implementation](https://www.newtonsoft.com/json/help/html/QueryJsonSelectTokenJsonPath.htm) for resolving JSONPath expressions.
-You can define one or more templates within the Device mapping template. Each Event Hub device message received is evaluated against all device mapping templates.
+You can define one or more templates within the Device mapping template. Each Event Hubs device message received is evaluated against all device mapping templates.
A single inbound device message can be separated into multiple outbound messages that are later mapped to different observations in the FHIR service.
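As a sketch of that separation, a single message carrying two vitals could normalize into two measurements. The field names below only approximate the normalized model and are assumptions.

```python
# Hypothetical inbound message with two vitals; property names are assumptions.
inbound = {"deviceId": "device-123", "time": "2022-07-07T12:00:00Z",
           "heartRate": 78, "respiratoryRate": 16}

# Each matching template would emit its own normalized measurement,
# later mapped to a separate observation in the FHIR service.
normalized = [
    {"type": "heartrate", "deviceId": inbound["deviceId"],
     "occurrenceTimeUtc": inbound["time"],
     "properties": [{"name": "hr", "value": inbound["heartRate"]}]},
    {"type": "respiratoryrate", "deviceId": inbound["deviceId"],
     "occurrenceTimeUtc": inbound["time"],
     "properties": [{"name": "rr", "value": inbound["respiratoryRate"]}]},
]
```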
In this article, you learned how to use Device mappings. To learn how to use FHI
>[!div class="nextstepaction"] >[How to use FHIR destination mappings](how-to-use-fhir-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-fhir-mappings.md
Previously updated : 02/16/2022 Last updated : 07/07/2022
This article describes how to configure the MedTech service using the Fast Healt
Below is a conceptual example of what happens during the normalization and transformation process within the MedTech service: ## FHIR destination mappings
In this article, you learned how to use FHIR destination mappings. To learn how
>[!div class="nextstepaction"] >[How to use Device mappings](how-to-use-device-mappings.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-dps Concepts Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-service.md
The device provisioning endpoint is the single endpoint all devices use for auto
The Device Provisioning Service can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to an instance of the Device Provisioning Service gives the service read/write permissions to the IoT hub's device registry; with the link, a Device Provisioning Service can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your provisioning service.

## Allocation policy

The allocation policy is the service-level setting that determines how the Device Provisioning Service assigns devices to an IoT hub. There are four supported allocation policies:
There are two types of enrollments supported by Device Provisioning Service:
### Enrollment group
-An enrollment group is a group of devices that share a specific attestation mechanism. Enrollment groups support X.509 certificate or symmetric key attestation. Devices in an X.509 enrollment group present X.509 certificates that have been signed by the same root or intermediate Certificate Authority (CA). The common name (CN) of each device's end-entity (leaf) certificate becomes the registration ID for that device. Devices in a symmetric key enrollment group present SAS tokens derived from the group symmetric key. The name of the enrollment group as well as the registration IDs presented by devices must be case-insensitive strings (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). For devices in an enrollment group, the registration ID is also used as the device ID that is registered to IoT Hub.
+An enrollment group is a group of devices that share a specific attestation mechanism. Enrollment groups support X.509 certificate or symmetric key attestation. Devices in an X.509 enrollment group present X.509 certificates that have been signed by the same root or intermediate Certificate Authority (CA). The subject common name (CN) of each device's end-entity (leaf) certificate becomes the registration ID for that device. Devices in a symmetric key enrollment group present SAS tokens derived from the group symmetric key.
+
+The name of the enrollment group as well as the registration IDs presented by devices must be case-insensitive strings of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The enrollment group name can be up to 128 characters long. In symmetric key enrollment groups, the registration IDs presented by devices can be up to 128 characters long. However, in X.509 enrollment groups, because the maximum length of the subject common name in an X.509 certificate is 64 characters, the registration IDs are limited to 64 characters.
+
+For devices in an enrollment group, the registration ID is also used as the device ID that is registered to IoT Hub.
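Since devices in a symmetric key enrollment group sign with keys derived from the group key, a short sketch of the documented derivation may help: HMAC-SHA256 of the registration ID, keyed with the base64-decoded group key. The group key and registration ID below are placeholders.

```python
import base64
import hashlib
import hmac

def derive_device_key(group_symmetric_key: str, registration_id: str) -> str:
    """Derive a per-device key from an enrollment group's symmetric key."""
    key_bytes = base64.b64decode(group_symmetric_key)
    signed = hmac.new(key_bytes, registration_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signed.digest()).decode("utf-8")

# Placeholder key; normally this is the enrollment group's primary or secondary key.
group_key = base64.b64encode(b"example group primary key").decode("utf-8")
device_key = derive_device_key(group_key, "my-device-001")
```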
> [!TIP]
> We recommend using an enrollment group for a large number of devices that share a desired initial configuration, or for devices all going to the same tenant.

### Individual enrollment
-An individual enrollment is an entry for a single device that may register. Individual enrollments may use either X.509 leaf certificates or SAS tokens (from a physical or virtual TPM) as the attestation mechanisms. The registration ID in an individual enrollment is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). For X.509 individual enrollments, the certificate common name (CN) becomes the registration ID, so the common name must adhere to the registration ID string format. Individual enrollments may have the desired IoT hub device ID specified in the enrollment entry. If it's not specified, the registration ID becomes the device ID that's registered to IoT Hub.
+An individual enrollment is an entry for a single device that may register. Individual enrollments may use either X.509 leaf certificates or SAS tokens (from a physical or virtual TPM) as the attestation mechanisms. The registration ID in an individual enrollment is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). DPS supports registration IDs up to 128 characters long.
+
+For X.509 individual enrollments, the subject common name (CN) of the certificate becomes the registration ID, so the common name must adhere to the registration ID string format. The subject common name has a maximum length of 64 characters, so the registration ID is limited to 64 characters for X.509 enrollments.
+
+Individual enrollments may have the desired IoT hub device ID specified in the enrollment entry. If it's not specified, the registration ID becomes the device ID that's registered to IoT Hub.
> [!TIP]
> We recommend using individual enrollments for devices that require unique initial configurations, or for devices that can only authenticate using SAS tokens via TPM attestation.
The ID scope is assigned to a Device Provisioning Service when it is created by
A registration is the record of a device successfully registering/provisioning to an IoT Hub via the Device Provisioning Service. Registration records are created automatically; they can be deleted, but they cannot be updated.

## Registration ID
-The registration ID is used to uniquely identify a device registration with the Device Provisioning Service. The registration ID must be unique in the provisioning service [ID scope](#id-scope). Each device must have a registration ID. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+The registration ID is used to uniquely identify a device registration with the Device Provisioning Service. The registration ID must be unique in the provisioning service [ID scope](#id-scope). Each device must have a registration ID. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). DPS supports registration IDs up to 128 characters long.
* In the case of TPM, the registration ID is provided by the TPM itself.
-* In the case of X.509-based attestation, the registration ID is set to the common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format.
+* In the case of X.509-based attestation, the registration ID is set to the subject common name (CN) of the device certificate. For this reason, the common name must adhere to the registration ID string format. However, the registration ID is limited to 64 characters because that's the maximum length of the subject common name in an X.509 certificate. A sketch validating this format follows.
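As a quick aid, here's a sketch of a validator for the format described above; the regex and helper function are illustrative, not an official SDK API.

```python
import re

# Alphanumeric plus '-', '.', '_', ':'; last character alphanumeric or dash.
_REG_ID = re.compile(r"^[a-z0-9\-._:]*[a-z0-9\-]$", re.IGNORECASE)

def is_valid_registration_id(reg_id: str, x509: bool = False) -> bool:
    # X.509 registration IDs must fit the 64-character subject common name limit.
    max_len = 64 if x509 else 128
    return 0 < len(reg_id) <= max_len and bool(_REG_ID.match(reg_id))

assert is_valid_registration_id("my-device-001")
assert not is_valid_registration_id("bad-ending.")  # must end alphanumeric or '-'
```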
## Device ID
iot-dps Concepts X509 Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-x509-attestation.md
Imagine that Contoso is a large corporation with its own Public Key Infrastructu
The leaf certificate, or end-entity certificate, identifies the certificate holder. It has the root certificate in its certificate chain as well as zero or more intermediate certificates. The leaf certificate is not used to sign any other certificates. It uniquely identifies the device to the provisioning service and is sometimes referred to as the device certificate. During authentication, the device uses the private key associated with this certificate to respond to a proof of possession challenge from the service.
-Leaf certificates used with [Individual enrollment](./concepts-service.md#individual-enrollment) or [Enrollment group](./concepts-service.md#enrollment-group) entries must have the certificate common name (CN) set to the registration ID. The registration ID identifies the device registration with DPS and must be unique to the DPS instance (ID scope) where the device registers. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`).
+Leaf certificates used with [Individual enrollment](./concepts-service.md#individual-enrollment) or [Enrollment group](./concepts-service.md#enrollment-group) entries must have the subject common name (CN) set to the registration ID. The registration ID identifies the device registration with DPS and must be unique to the DPS instance (ID scope) where the device registers. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates.
-For enrollment groups, the certificate common name (CN) also sets the device ID that is registered with IoT Hub. The device ID will be shown in the **Registration Records** for the authenticated device in the enrollment group. For individual enrollments, the device ID can be set in the enrollment entry. If it's not set in the enrollment entry, then the certificate common name (CN) is used.
+For enrollment groups, the subject common name (CN) also sets the device ID that is registered with IoT Hub. The device ID will be shown in the **Registration Records** for the authenticated device in the enrollment group. For individual enrollments, the device ID can be set in the enrollment entry. If it's not set in the enrollment entry, then the subject common name (CN) is used.
To learn more, see [Authenticating devices signed with X.509 CA certificates](../iot-hub/iot-hub-x509ca-overview.md#authenticating-devices-signed-with-x509-ca-certificates).
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
Perform the steps in this section in your Git Bash prompt.
A public key certificate file (*device-cert.pem*) and private key file (*device-key.pem*) should now be generated in the directory where you ran the `openssl` command.
- The certificate file has its subject common name (CN) set to `my-x509-device`. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format.
+ The certificate file has its subject common name (CN) set to `my-x509-device`. For X.509-based enrollments, the [Registration ID](./concepts-service.md#registration-id) is set to the common name. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates.
5. The certificate file is Base64 encoded. To view the subject common name (CN) and other properties of the certificate file, enter the following command:
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
In this section you create the device certificates and the full chain device cer
1. Create the device certificate CSR.
- The subject common name (CN) of the device certificate must be set to the [Registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. For group enrollments, the registration ID is also used as the device ID in IoT Hub. The subject common name is set in the `-subj` parameter in the following command.
+ The subject common name (CN) of the device certificate must be set to the [Registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates. For group enrollments, the registration ID is also used as the device ID in IoT Hub.
+
+ The subject common name is set in the `-subj` parameter in the following command.
# [Windows](#tab/windows)
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-composition.md
The edgeHub module and custom modules also have three properties that tell the I
Startup order is helpful if some modules depend on others. For example, you may want the edgeHub module to start first so that it's ready to route messages when the other modules start. Or you may want to start a storage module before the modules that send data to it. However, you should always design your modules to handle failures of other modules. It's the nature of containers that they may stop and restart at any time, and any number of times.
+ > [!NOTE]
+ > Changes to a module's properties will result in that module restarting. For example, a restart will happen if you change properties for the:
+ > * module image
+ > * Docker create options
+ > * environment variables
+ > * restart policy
+ > * image pull policy
+ > * version
+ > * startup order
+ >
+ > If no module property is changed, the module will **not** restart.
+## Declare routes
+
+The IoT Edge hub manages communication between modules, IoT Hub, and any leaf devices. Therefore, the $edgeHub module twin contains a desired property called *routes* that declares how messages are passed within a deployment. You can have multiple routes within the same deployment. A sketch of a routes declaration follows.
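For illustration, the sketch below shows a routes declaration as it might appear in the $edgeHub desired properties; the module names `sensor` and `filter` are hypothetical.

```python
# Illustrative $edgeHub desired properties with two routes
# (FROM <source> [WHERE <condition>] INTO <sink>); module names are assumptions.
edge_hub_desired_properties = {
    "schemaVersion": "1.1",
    "routes": {
        # Module-to-module: sensor output feeds the filter module's input.
        "sensorToFilter": 'FROM /messages/modules/sensor/outputs/* INTO BrokeredEndpoint("/modules/filter/inputs/input1")',
        # Module-to-cloud: filtered messages go to IoT Hub.
        "filterToUpstream": "FROM /messages/modules/filter/outputs/* INTO $upstream",
    },
}
```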
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
key-vault Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Key Vault description: Sample Azure Resource Graph queries for Azure Key Vault showing use of resource types and tables to access Azure Key Vault related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022 ms.suite: integration
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
Specify the storage output location to any datastore and path. By default, batch
- SSL: enabled by default for endpoint invocation - VNET support: Batch endpoints support ingress protection. A batch endpoint with ingress protection will accept scoring requests only from hosts inside a virtual network but not from the public internet. A batch endpoint that is created in a private-link enabled workspace will have ingress protection. To create a private-link enabled workspace, see [Create a secure workspace](tutorial-create-secure-workspace.md).
+> [!NOTE]
+> Creating batch endpoints in a private-link enabled workspace is only supported in the following versions:
+> - CLI - version 2.15.1 or higher.
+> - REST API - version 2022-05-01 or higher.
+> - SDK V2 - version 0.1.0b3 or higher.
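As a minimal sketch, assuming the SDK v2 versions listed in the note above and placeholder identifiers, a batch endpoint can be created like this:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BatchEndpoint
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; in a private-link enabled workspace this requires
# the minimum versions listed in the note above.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

endpoint = BatchEndpoint(name="my-batch-endpoint", description="Example batch endpoint")
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
```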
+## Next steps
+
+- [How to deploy online endpoints with the Azure CLI](how-to-deploy-managed-online-endpoints.md)
machine-learning How To Configure Auto Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-features.md
The following table describes the data guardrails that are currently supported a
Guardrail|Status|Condition&nbsp;for&nbsp;trigger
---|---|---
**Missing feature values imputation** |Passed <br><br><br> Done| No missing feature values were detected in your training data. Learn more about [missing-value imputation.](./how-to-use-automated-ml-for-ml-models.md#customize-featurization) <br><br> Missing feature values were detected in your training data and were imputed.
-**High cardinality feature handling** |Passed <br><br><br> Done| Your inputs were analyzed, and no high-cardinality features were detected. <br><br> High-cardinality features were detected in your inputs and were handled.
+**High cardinality feature detection** |Passed <br><br><br> Done| Your inputs were analyzed, and no high-cardinality features were detected. <br><br> High-cardinality features were detected in your inputs and were handled.
**Validation split handling** |Done| The validation configuration was set to `'auto'` and the training data contained *fewer than 20,000 rows*. <br> Each iteration of the trained model was validated by using cross-validation. Learn more about [validation data](./how-to-configure-auto-train.md#training-validation-and-test-data). <br><br> The validation configuration was set to `'auto'`, and the training data contained *more than 20,000 rows*. <br> The input data has been split into a training dataset and a validation dataset for validation of the model. **Class balancing detection** |Passed <br><br><br><br>Alerted <br><br><br>Done | Your inputs were analyzed, and all classes are balanced in your training data. A dataset is considered to be balanced if each class has good representation in the dataset, as measured by number and ratio of samples. <br><br> Imbalanced classes were detected in your inputs. To fix model bias, fix the balancing problem. Learn more about [imbalanced data](./concept-manage-ml-pitfalls.md#identify-models-with-imbalanced-data).<br><br> Imbalanced classes were detected in your inputs and the sweeping logic has determined to apply balancing. **Memory issues detection** |Passed <br><br><br><br> Done |<br> The selected values (horizon, lag, rolling window) were analyzed, and no potential out-of-memory issues were detected. Learn more about time-series [forecasting configurations](./how-to-auto-train-forecast.md#configuration-settings). <br><br><br>The selected values (horizon, lag, rolling window) were analyzed and will potentially cause your experiment to run out of memory. The lag or rolling-window configurations have been turned off.
-**Frequency detection** |Passed <br><br><br><br> Done |<br> The time series was analyzed, and all data points are aligned with the detected frequency. <br> <br> The time series was analyzed, and data points that don't align with the detected frequency were detected. These data points were removed from the dataset.
+**Frequency detection** |Passed <br><br><br><br> Done |<br> The time series was analyzed, and all data points are aligned with the detected frequency. <br> <br> The time series was analyzed, and data points that don't align with the detected frequency were detected. These data points were removed from the dataset.
+**Cross validation** |Done| To accurately evaluate the model(s) trained by AutoML, we use a dataset that the model isn't trained on. So, if the user doesn't provide an explicit validation dataset, a part of the training dataset is used for this purpose. For smaller datasets (fewer than 20,000 samples), cross-validation is used; otherwise, a single hold-out set is split from the training data to serve as the validation dataset. For your input data, we use cross-validation with 10 folds if the number of training samples is fewer than 1,000, and 3 folds in all other cases.
+**Train-Test data split** |Done| To accurately evaluate the model(s) trained by AutoML, we use a dataset that the model isn't trained on. So, if the user doesn't provide an explicit validation dataset, a part of the training dataset is used for this purpose. For smaller datasets (fewer than 20,000 samples), cross-validation is used; otherwise, a single hold-out set is split from the training data to serve as the validation dataset. Your input data has been split into a training dataset and a holdout validation dataset.
+**Time Series ID detection** |Passed <br><br><br><br> Fixed | <br> The dataset was analyzed, and no duplicate time indexes were detected. <br> <br> Multiple time series were found in the dataset, and time series identifiers were automatically created for your dataset.
+**Time series aggregation** |Passed <br><br><br><br> Fixed | <br> The dataset frequency is aligned with the user-specified frequency. No aggregation was performed. <br> <br> The data was aggregated to comply with the user-provided frequency.
+**Short series handling** |Passed <br><br><br><br> Fixed | <br> Automated ML detected enough data points for each series in the input data to continue with training. <br> <br> Automated ML detected that some series did not contain enough data points to train a model. To continue with training, these short series have been dropped or padded.
## Customize featurization
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
In the **Package** field, type azureml-mlflow and then select install. Repeat th
![Azure DB install mlflow library](./media/how-to-use-mlflow-azure-databricks/install-libraries.png)
+## Track Azure Databricks runs with MLflow
+
+Azure Databricks can be configured to track experiments using MLflow in both the Azure Databricks workspace and the Azure Machine Learning workspace (dual-tracking), or exclusively in Azure Machine Learning. By default, dual-tracking is configured for you when you link your Azure Databricks workspace.
-## Connect your Azure Databricks and Azure Machine Learning workspaces
+### Dual-tracking on Azure Databricks and Azure Machine Learning
-Linking your ADB workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace.
+Linking your ADB workspace to your Azure Machine Learning workspace enables you to track your experiment data in the Azure Machine Learning workspace and the Azure Databricks workspace at the same time. This is referred to as dual-tracking.
To link your ADB workspace to a new or existing Azure Machine Learning workspace, 1. Sign in to [Azure portal](https://portal.azure.com).
To link your ADB workspace to a new or existing Azure Machine Learning workspace
![Link Azure DB and Azure Machine Learning workspaces](./media/how-to-use-mlflow-azure-databricks/link-workspaces.png)
-> [!NOTE]
-> MLflow Tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not supported.
-
-## Track Azure Databricks runs with MLflow
-
-Azure Databricks can be configured to track experiments using MLflow in both Azure Databricks workspace and Azure Machine Learning workspace (dual-tracking), or exclusively on Azure Machine Learning. By default, dual-tracking is configured for you when you linked your Azure Databricks workspace.
-
-### Dual-tracking on Azure Databricks and Azure Machine Learning
+> [!WARNING]
+> Dual-tracking in a [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md) is not currently supported. Configure [exclusive tracking with your Azure Machine Learning workspace](#tracking-exclusively-on-azure-machine-learning-workspace) instead.
After you link your Azure Databricks workspace with your Azure Machine Learning workspace, MLflow tracking is automatically configured in all of the following places: * The linked Azure Machine Learning workspace. * Your original ADB workspace.
-You can use then MLflow in Azure Databricks in the same way as you're used to. The following example sets the experiment name as it is usually done in Azure Databricks:
+You can then use MLflow in Azure Databricks in the same way you're used to. The following example sets the experiment name, as is usually done in Azure Databricks, and starts logging some parameters:
```python
import mlflow
-#Set MLflow experiment.
experimentName = "/Users/{user_name}/{experiment_folder}/{experiment_name}"
mlflow.set_experiment(experimentName)
-```
-In your training script, import `mlflow` to use the MLflow logging APIs, and start logging your run metrics. The following example, logs the epoch loss metric.
-```python
-import mlflow
-mlflow.log_metric('epoch_loss', loss.item())
+with mlflow.start_run():
+ mlflow.log_param('epochs', 20)
```

> [!NOTE]
mlflow.log_metric('epoch_loss', loss.item())
If you prefer to manage your tracked experiments in a centralized location, you can set MLflow tracking to **only** track in your Azure Machine Learning workspace. This configuration has the advantage of enabling an easier path to deployment using Azure Machine Learning deployment options.
+> [!WARNING]
+> For [private link enabled Azure Machine Learning workspace](how-to-configure-private-link.md), you have to [deploy Azure Databricks in your own network (VNet injection)](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject.md) to ensure proper connectivity.
You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as demonstrated in the following example: # [Using the Azure ML SDK v2](#tab/sdkv2)
After your model is trained, you can log it to the tracking server with the `mlf
mlflow.spark.log_model(model, artifact_path = "model") ```
-It's worth to mention that the flavor `spark` doesn't correspond to the fact that we are training a model in a Spark cluster but because of the training framework it was used (you can perfectly train a model using TensorFlow with Spark and hence the flavor to use would be `tensorflow`.
+It's worth mentioning that the flavor `spark` refers to the framework the model was trained with, not to the fact that the training ran on a Spark cluster (you could, for instance, train a TensorFlow model on Spark, in which case the flavor to use would be `tensorflow`).
Models are logged inside of the run being tracked. That means that models are available in either both Azure Databricks and Azure Machine Learning (default) or exclusively in Azure Machine Learning if you configured the tracking URI to point to it.
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
+
+ Title: MLflow Tracking for Azure Synapse Analytics experiments
+
+description: Set up MLflow with Azure Machine Learning to log metrics and artifacts from Azure Synapse Analytics workspace.
++++++ Last updated : 07/06/2022++++
+# Track Azure Synapse Analytics ML experiments with MLflow and Azure Machine Learning
+
+In this article, learn how to enable MLflow to connect to Azure Machine Learning while working in an Azure Synapse Analytics workspace. You can leverage this configuration for tracking, model management and model deployment.
+
+[MLflow](https://www.mlflow.org) is an open-source library for managing the life cycle of your machine learning experiments. MLFlow Tracking is a component of MLflow that logs and tracks your training run metrics and model artifacts. Learn more about [MLflow](concept-mlflow.md).
+
+If you have an MLflow Project to train with Azure Machine Learning, see [Train ML models with MLflow Projects and Azure Machine Learning (preview)](how-to-train-mlflow-projects.md).
+
+## Prerequisites
+
+* An [Azure Synapse Analytics workspace and cluster](/azure/synapse-analytics/quickstart-create-workspace.md).
+* An [Azure Machine Learning Workspace](quickstart-create-resources.md).
+
+## Install libraries
+
+To install libraries on your dedicated cluster in Azure Synapse Analytics:
+
+1. Create a `requirements.txt` file with the packages your experiments require, making sure it also includes the following packages:
+
+ __requirements.txt__
+
+ ```pip
+ mlflow
+ azureml-mlflow
+ azure-ai-ml
+ ```
+
+2. Navigate to your Azure Synapse Analytics workspace in the Azure portal.
+
+3. Navigate to the **Manage** tab and select **Apache Spark Pools**.
+
+4. Click the three dots next to the cluster name, and select **Packages**.
+
+ ![install mlflow packages in Azure Synapse Analytics](media/how-to-use-mlflow-azure/install-packages.png)
+
+5. In the **Requirements files** section, select **Upload**.
+
+6. Upload the `requirements.txt` file.
+
+7. Wait for your cluster to restart.
+
+## Track experiments with MLflow
+
+Azure Synapse Analytics can be configured to track experiments with MLflow in an Azure Machine Learning workspace. Azure Machine Learning provides a centralized repository to manage the entire lifecycle of experiments, models, and deployments. It also has the advantage of enabling an easier path to deployment using Azure Machine Learning deployment options.
+
+### Configuring your notebooks to use MLflow connected to Azure Machine Learning
+
+To use Azure Machine Learning as the centralized repository for your experiments, you can leverage MLflow. In each notebook you work in, you have to configure the tracking URI to point to the workspace you'll be using. The following example shows how it can be done:
+
+ # [Using the Azure ML SDK v2](#tab/sdkv2)
+
+ [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you're using:
+
+ ```python
+ from azure.ai.ml import MLClient
+ from azure.identity import DeviceCodeCredential
+
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ ml_client = MLClient(credential=DeviceCodeCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=aml_resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(aml_workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ # [Building the MLflow tracking URI](#tab/custom)
+
+ The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
+
+ ```python
+ import mlflow
+
+ aml_region = ""
+ subscription_id = ""
+ aml_resource_group = ""
+ aml_workspace_name = ""
+
+ azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{aml_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{aml_workspace_name}"
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!NOTE]
+ > You can also get this URL by:
+ > 1. Navigate to the [Azure ML Studio web portal](https://ml.azure.com).
+ > 2. Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+ > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
+
+
+
+### Experiment names in Azure Machine Learning
+
+By default, Azure Machine Learning tracks runs in a default experiment called `Default`. It's usually a good idea to set the experiment you're going to work on. Use the following syntax to set the experiment name:
+
+```python
+mlflow.set_experiment(experiment_name="experiment-name")
+```
+
+### Tracking parameters, metrics and artifacts
+
+You can then use MLflow in Azure Synapse Analytics in the same way you're used to, as the sketch below shows. For details, see [Log & view metrics and log files](how-to-log-view-metrics.md).
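A minimal sketch of the usual logging calls, assuming the tracking URI and experiment were configured as shown earlier:

```python
import mlflow

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)  # hyperparameters
    mlflow.log_metric("rmse", 0.87)          # evaluation metrics
    with open("notes.txt", "w") as f:
        f.write("trained on a Synapse Spark pool")
    mlflow.log_artifact("notes.txt")         # arbitrary files
```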
+
+## Registering models in the registry with MLflow
+
+Models can be registered in the Azure Machine Learning workspace, which offers a centralized repository to manage their lifecycle. The following example logs a model trained with Spark MLlib and also registers it in the registry.
+
+```python
+mlflow.spark.log_model(model,
+ artifact_path = "model",
+ registered_model_name = "model_name")
+```
+
+* **If a registered model with the name doesn't exist**, the method registers a new model, creates version 1, and returns a ModelVersion MLflow object.
+
+* **If a registered model with the name already exists**, the method creates a new model version and returns the version object.
+
+You can manage models registered in Azure Machine Learning using MLflow. See [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for more details.
+
+## Deploying and consuming models registered in Azure Machine Learning
+
+Models registered in Azure Machine Learning Service using MLflow can be consumed as:
+
+* An Azure Machine Learning endpoint (real-time and batch): This deployment allows you to leverage Azure Machine Learning deployment capabilities for both real-time and batch inference in Azure Container Instances (ACI), Azure Kubernetes Service (AKS), or managed endpoints.
+
+* MLflow model objects or pandas UDFs, which can be used in Azure Synapse Analytics notebooks in streaming or batch pipelines.
+
+### Deploy models to Azure Machine Learning endpoints
+You can leverage the `azureml-mlflow` plugin to deploy a model to your Azure Machine Learning workspace. See the [How to deploy MLflow models](how-to-deploy-mlflow-models.md) page for complete details about how to deploy models to the different targets. A minimal sketch follows the note below.
+
+> [!IMPORTANT]
+> Models need to be registered in Azure Machine Learning registry in order to deploy them. Deployment of unregistered models is not supported in Azure Machine Learning.
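As a hedged sketch, the MLflow deployments client (with the `azureml` target supplied by the `azureml-mlflow` plugin) can deploy a registered model; the endpoint name and model version below are placeholders.

```python
import mlflow
from mlflow.deployments import get_deploy_client

# The deployment client targets the same Azure ML workspace as the tracking URI.
deployment_client = get_deploy_client(mlflow.get_tracking_uri())

# "model_name"/version and the endpoint name are placeholders; the model must
# already be registered in the Azure ML registry (see the note above).
deployment = deployment_client.create_deployment(
    name="my-endpoint",
    model_uri="models:/model_name/1",
)
```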
+
+### Deploy models for batch scoring using UDFs
+
+You can use Azure Synapse Analytics clusters for batch scoring. The MLflow model is loaded and used as a Spark pandas UDF to score new data.
+
+```python
+from pyspark.sql.types import ArrayType, FloatType
+
+model_uri = f"runs:/{last_run_id}/{model_path}"
+
+#Create a Spark UDF for the MLflow model
+pyfunc_udf = mlflow.pyfunc.spark_udf(spark, model_uri)
+
+#Load Scoring Data into Spark Dataframe
+scoreDf = spark.table({table_name}).where({required_conditions})
+
+#Make Prediction
+preds = (scoreDf
+        .withColumn('target_column_name', pyfunc_udf('Input_column1', 'Input_column2', 'Input_column3', …))
+ )
+
+display(preds)
+```
+
+## Clean up resources
+
+If you wish to keep your Azure Synapse Analytics workspace but no longer need the Azure ML workspace, you can delete the Azure ML workspace. Logged metrics and artifacts can't be deleted individually at this time; if you don't plan to use them, delete the resource group that contains the storage account and workspace, so you don't incur any charges:
+
+1. In the Azure portal, select **Resource groups** on the far left.
+
+ ![Delete in the Azure portal](./media/how-to-use-mlflow-azure-databricks/delete-resources.png)
+
+1. From the list, select the resource group you created.
+
+1. Select **Delete resource group**.
+
+1. Enter the resource group name. Then select **Delete**.
++
+## Next steps
+* [Track experiment runs with MLflow and Azure Machine Learning](how-to-use-mlflow.md).
+* [Deploy MLflow models in Azure Machine Learning](how-to-deploy-mlflow-models.md).
+* [Manage your models with MLflow](how-to-manage-models-mlflow.md).
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Azure Database for MariaDB
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md
Follow the instruction in [Support for the commercial marketplace program in Par
- See [Marketplace metering service APIs](marketplace-metering-service-apis.md) for more information.
+**Video tutorial**
+
+- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)
marketplace Azure App Offer Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-setup.md
Make sure you update these connections whenever something has changed. You can s
## Next steps - [Configure Azure application properties](azure-app-properties.md)+
+**Video tutorial**
+
+- [Configuring Partner Center for Azure Managed Applications - Demo](https://go.microsoft.com/fwlink/?linkid=2196410)
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-partner-customer-usage-attribution.md
description: Get an overview of tracking customer usage for Azure Applications o
Previously updated : 10/04/2021++ Last updated : 07/07/2022
View step-by-step instructions with screenshots at [Using Technical Presales and
You will be contacted by a Microsoft Partner Technical Consultant to set up a call to scope your needs.
-## Report
-Reporting for Azure usage tracked via customer usage attribution is not available today for ISV partners. Adding reporting to the Commercial Marketplace Program in Partner Center to cover customer usage attribution is targeted for the second half of 2022.
- ## FAQ #### After a tracking ID is added, can it be changed?
marketplace Commercial Marketplace Get Customer Leads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-get-customer-leads.md
description: Learn about generating and receiving customer leads from your Micro
-+ Last updated 06/29/2022
marketplace Commercial Marketplace Lead Management Instructions Azure Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md
description: Learn how to use Azure Table storage to configure leads for Microso
-+ Last updated 12/02/2021
marketplace Commercial Marketplace Lead Management Instructions Dynamics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-dynamics.md
description: Learn how to set up Dynamics 365 Customer Engagement to manage lead
-+ Last updated 03/30/2020
marketplace Commercial Marketplace Lead Management Instructions Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-https.md
description: Learn how to use Power Automate and an HTTPS endpoint to manage lea
-+ Last updated 05/21/2021
marketplace Commercial Marketplace Lead Management Instructions Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-marketo.md
description: Learn how to use a Marketo CRM system to manage leads from Microsof
-+ Last updated 06/08/2022
marketplace Commercial Marketplace Lead Management Instructions Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-salesforce.md
description: Learn how to use Salesforce to configure leads for Microsoft AppSou
-+ Last updated 12/03/2021
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-managed-app.md
For each policy type you add, you must associate Standard or Free Policy SKU. Th
## Next steps - [Create an Azure application offer](azure-app-offer-setup.md)+
+**Video tutorial**
+
+- [Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196308)
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-application-offer.md
There are two kinds of Azure application plans: _solution template_ and _managed
- To plan a solution template, see [Plan a solution template for an Azure application offer](plan-azure-app-solution-template.md). - To plan an Azure managed application, see [Plan an Azure managed application for an Azure application offer](plan-azure-app-managed-app.md).+
+**Video tutorials and hands-on labs**
+
+- [Mastering Azure Managed Application offers](https://go.microsoft.com/fwlink/?linkid=2201395)
+- [Metered Billing for Azure Managed Applications - Demo](https://go.microsoft.com/fwlink/?linkid=2196412)
+- [Azure Managed Application Deployment Package Overview](https://go.microsoft.com/fwlink/?linkid=2196244)
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
In the configuration manager, select **Set up prerequisites**, and then complete
After the appliance is successfully registered, to see the registration details, select **View details**.
-1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*.
+1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance. The default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit*, as indicated in the *Installation instructions*.
Azure Migrate Server Migration uses the VDDK to replicate servers during migration to Azure.
migrate Migrate Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-services-overview.md
This article provides a quick overview of the Azure Migrate service.
-Azure Migrate provides a centralized hub to assess and migrate on-premises servers, infrastructure, applications, and data to Azure. It provides the following:
+Azure Migrate provides a simplified migration, modernization, and optimization service for Azure. All pre-migration steps such as discovery, assessments, and right-sizing of on-premises resources are included for infrastructure, data, and applications. Azure Migrate's extensible framework allows for integration of third-party tools, thus expanding the scope of supported use cases. It provides the following:
- **Unified migration platform**: A single portal to start, run, and track your migration to Azure. - **Range of tools**: A range of tools for assessment and migration. Azure Migrate tools include Azure Migrate: Discovery and assessment and Azure Migrate: Server Migration. Azure Migrate also integrates with other Azure services and tools, and with independent software vendor (ISV) offerings.-- **Assessment and migration**: In the Azure Migrate hub, you can assess and migrate:
+- **Assessment, migration, and modernization**: In the Azure Migrate hub, you can assess, migrate, and modernize:
- **Servers, databases and web apps**: Assess on-premises servers including web apps and SQL Server instances and migrate them to Azure virtual machines or Azure VMware Solution (AVS) (Preview). - **Databases**: Assess on-premises SQL Server instances and databases to migrate them to an SQL Server on an Azure VM or an Azure SQL Managed Instance or to an Azure SQL Database.
- - **Web applications**: Assess on-premises web applications and migrate them to Azure App Service.
+ - **Web applications**: Assess on-premises web applications and migrate them to Azure App Service and Azure Kubernetes Service.
- **Virtual desktops**: Assess your on-premises virtual desktop infrastructure (VDI) and migrate it to Azure Virtual Desktop. - **Data**: Migrate large amounts of data to Azure quickly and cost-effectively using Azure Data Box products.
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md
In the configuration manager, select **Set up prerequisites**, and then complete
After the appliance is successfully registered, to see the registration details, select **View details**.
-1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance, as indicated in the *Installation instructions*.
+1. **Install the VDDK**: The appliance checks that VMware vSphere Virtual Disk Development Kit (VDDK) is installed. If the VDDK isn't installed, download VDDK 6.7 from VMware. Extract the downloaded zip file contents to the specified location on the appliance. The default path is *C:\Program Files\VMware\VMware Virtual Disk Development Kit*, as indicated in the *Installation instructions*.
Azure Migrate Server Migration uses the VDDK to replicate servers during migration to Azure.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
Title: Migrate machines as physical servers to Azure with Azure Migrate. description: This article describes how to migrate physical machines to Azure with Azure Migrate.-+ ms.
Assign the Virtual Machine Contributor role to the Azure account. This provides
- Write to an Azure managed disk. ### Create an Azure network
+> [!IMPORTANT]
+> Virtual networks (VNets) are a regional service, so make sure you create your VNet in the desired target Azure region. For example, if you plan to replicate and migrate virtual machines from your on-premises environment to the East US region, your target VNet **must be created** in the East US region. To connect VNets in different regions, refer to the [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview) guide.
[Set up](../virtual-network/manage-virtual-network.md#create-a-virtual-network) an Azure virtual network (VNet). When you replicate to Azure, Azure VMs are created and joined to the Azure VNet that you specify when you set up migration.
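For illustration, a minimal Azure CLI sketch that creates a VNet and subnet in the target region (East US here); the resource group, VNet, and subnet names are hypothetical:

```azurecli-interactive
# Create a resource group in the target region (all names are illustrative).
az group create --name MigrateRG --location eastus

# Create the target VNet and a subnet in the same region.
az network vnet create \
  --resource-group MigrateRG \
  --name MigrateVNet \
  --location eastus \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name MigrateSubnet \
  --subnet-prefixes 10.0.0.0/24
```

Replicated VMs can then be joined to this VNet and subnet when you set up migration.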
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
One advantage of running your workload in Azure is its global reach. The flexibl
| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Canada East | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | China North 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | East Asia (Hong Kong) | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | East US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | East US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
-| France South | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| France South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. The flexibl
| Norway East | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Africa North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | South Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
+| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :x:
+| Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: | | UK South | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: | | UK West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
mysql Quickstart Create Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/quickstart-create-arm-template.md
Previously updated : 10/23/2020 Last updated : 07/07/2022 # Quickstart: Use an ARM template to create an Azure Database for MySQL - Flexible Server
Last updated 10/23/2020
## Prerequisites -- An Azure account with an active subscription.
+- An Azure account with an active subscription.
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] ## Create server with public access+ Create a _mysql-flexible-server-template.json_ file and copy this JSON script to create a server using the public access connectivity method, and also create a database on the server. ```json {
- "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "administratorLogin": {
- "type": "string"
- },
- "administratorLoginPassword": {
- "type": "securestring"
- },
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
- },
- "serverEdition": {
- "type": "string",
- "defaultValue": "Burstable",
- "metadata": {
- "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
- }
- },
- "skuName": {
- "type": "string",
- "defaultValue": "Standard_B1ms",
- "metadata": {
- "description": "The name of the sku, e.g. Standard_D32ds_v4."
- }
- },
- "storageSizeGB": {
- "type": "int"
- },
- "storageIops": {
- "type": "int"
- },
- "storageAutogrow": {
- "type": "string",
- "defaultValue": "Enabled"
- },
- "availabilityZone": {
- "type": "string",
- "metadata": {
- "description": "Availability Zone information of the server. (Leave blank for No Preference)."
- }
- },
- "version": {
- "type": "string"
- },
- "tags": {
- "type": "object",
- "defaultValue": {}
- },
- "haEnabled": {
- "type": "string",
- "defaultValue": "Disabled",
- "metadata": {
- "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
- }
- },
- "standbyAvailabilityZone": {
- "type": "string",
- "metadata": {
- "description": "Availability zone of the standby server."
- }
- },
- "firewallRules": {
- "type": "object",
- "defaultValue": {}
- },
- "backupRetentionDays": {
- "type": "int"
- },
- "geoRedundantBackup": {
- "type": "string"
- },
- "databaseName": {
- "type": "string"
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "administratorLogin": {
+ "type": "string"
+ },
+ "administratorLoginPassword": {
+ "type": "securestring"
+ },
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string",
+ "defaultValue": "Burstable",
+ "metadata": {
+ "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
+ },
+ "storageSizeGB": {
+ "type": "int"
+ },
+ "storageIops": {
+ "type": "int"
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled"
+ },
+ "availabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability Zone information of the server. (Leave blank for No Preference)."
+ }
+ },
+ "version": {
+ "type": "string"
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "haEnabled": {
+ "type": "string",
+ "defaultValue": "Disabled",
+ "metadata": {
+ "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
+ }
+ },
+ "standbyAvailabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability zone of the standby server."
+ }
+ },
+ "firewallRules": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "backupRetentionDays": {
+ "type": "int"
+ },
+ "geoRedundantBackup": {
+ "type": "string"
+ },
+ "databaseName": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "api": "2021-05-01",
+ "firewallRules": "[parameters('firewallRules').rules]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers",
+ "apiVersion": "[variables('api')]",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverName')]",
+ "sku": {
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('serverEdition')]"
+ },
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "availabilityZone": "[parameters('availabilityZone')]",
+ "highAvailability": {
+ "mode": "[parameters('haEnabled')]",
+ "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
+ },
+ "Storage": {
+ "storageSizeGB": "[parameters('storageSizeGB')]",
+ "iops": "[parameters('storageIops')]",
+ "autogrow": "[parameters('storageAutogrow')]"
+ },
+ "Backup": {
+ "backupRetentionDays": "[parameters('backupRetentionDays')]",
+ "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
}
+ },
+ "tags": "[parameters('tags')]"
},
- "variables": {
- "api": "2021-05-01",
- "firewallRules": "[parameters('firewallRules').rules]"
- },
- "resources": [
- {
- "type": "Microsoft.DBforMySQL/flexibleServers",
- "apiVersion": "[variables('api')]",
- "location": "[parameters('location')]",
- "name": "[parameters('serverName')]",
- "sku": {
- "name": "[parameters('skuName')]",
- "tier": "[parameters('serverEdition')]"
- },
- "properties": {
- "version": "[parameters('version')]",
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
- "availabilityZone": "[parameters('availabilityZone')]",
- "highAvailability": {
- "mode": "[parameters('haEnabled')]",
- "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
- },
- "Storage": {
- "storageSizeGB": "[parameters('storageSizeGB')]",
- "iops": "[parameters('storageIops')]",
- "autogrow": "[parameters('storageAutogrow')]"
- },
- "Backup": {
- "backupRetentionDays": "[parameters('backupRetentionDays')]",
- "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
- }
- },
- "tags": "[parameters('tags')]"
- },
- {
- "condition": "[greater(length(variables('firewallRules')), 0)]",
- "type": "Microsoft.Resources/deployments",
- "apiVersion": "2021-04-01",
- "name": "[concat('firewallRules-', copyIndex())]",
- "copy": {
- "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
- "mode": "Serial",
- "name": "firewallRulesIterator"
- },
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
- "properties": {
- "mode": "Incremental",
- "template": {
- "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "resources": [
- {
- "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
- "name": "[concat(parameters('serverName'),'/',variables('firewallRules')[copyIndex()].name)]",
- "apiVersion": "[variables('api')]",
- "properties": {
- "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
- "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
- }
- }
- ]
- }
- }
- },
- {
- "type": "Microsoft.DBforMySQL/flexibleServers/databases",
- "apiVersion": "[variables('api')]",
- "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
- "properties": {
- "charset": "utf8",
- "collation": "utf8_general_ci"
+ {
+ "condition": "[greater(length(variables('firewallRules')), 0)]",
+ "type": "Microsoft.Resources/deployments",
+ "apiVersion": "2021-04-01",
+ "name": "[concat('firewallRules-', copyIndex())]",
+ "copy": {
+ "count": "[if(greater(length(variables('firewallRules')), 0), length(variables('firewallRules')), 1)]",
+ "mode": "Serial",
+ "name": "firewallRulesIterator"
+ },
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "mode": "Incremental",
+ "template": {
+ "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "resources": [
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/firewallRules",
+ "name": "[concat(parameters('serverName'),'/',variables('firewallRules')[copyIndex()].name)]",
+ "apiVersion": "[variables('api')]",
+ "properties": {
+ "StartIpAddress": "[variables('firewallRules')[copyIndex()].startIPAddress]",
+ "EndIpAddress": "[variables('firewallRules')[copyIndex()].endIPAddress]"
+ }
}
+ ]
}
- ]
+ }
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/databases",
+ "apiVersion": "[variables('api')]",
+ "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "charset": "utf8",
+ "collation": "utf8_general_ci"
+ }
+ }
+ ]
} ``` ## Create a server with private access+ Create a _mysql-flexible-server-template.json_ file and copy this JSON script to create a server using the private access connectivity method inside a virtual network. ```json {
- "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "administratorLogin": {
- "type": "string"
- },
- "administratorLoginPassword": {
- "type": "securestring"
- },
- "location": {
- "type": "string"
- },
- "serverName": {
- "type": "string"
- },
- "serverEdition": {
- "type": "string",
- "defaultValue": "Burstable",
- "metadata": {
- "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
- }
- },
- "skuName": {
- "type": "string",
- "defaultValue": "Standard_B1ms",
- "metadata": {
- "description": "The name of the sku, e.g. Standard_D32ds_v4."
- }
- },
- "storageSizeGB": {
- "type": "int"
- },
- "storageIops": {
- "type": "int"
- },
- "storageAutogrow": {
- "type": "string",
- "defaultValue": "Enabled"
- },
- "availabilityZone": {
- "type": "string",
- "metadata": {
- "description": "Availability Zone information of the server. (Leave blank for No Preference)."
- }
- },
- "version": {
- "type": "string"
- },
- "tags": {
- "type": "object",
- "defaultValue": {}
- },
- "haEnabled": {
- "type": "string",
- "defaultValue": "Disabled",
- "metadata": {
- "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
- }
- },
- "standbyAvailabilityZone": {
- "type": "string",
- "metadata": {
- "description": "Availability zone of the standby server."
- }
- },
- "vnetName": {
- "type": "string",
- "defaultValue": "azure_mysql_vnet",
- "metadata": { "description": "Virtual Network Name" }
- },
- "subnetName": {
- "type": "string",
- "defaultValue": "azure_mysql_subnet",
- "metadata": { "description": "Subnet Name"}
- },
- "vnetAddressPrefix": {
- "type": "string",
- "defaultValue": "10.0.0.0/16",
- "metadata": { "description": "Virtual Network Address Prefix" }
- },
- "subnetPrefix": {
- "type": "string",
- "defaultValue": "10.0.0.0/24",
- "metadata": { "description": "Subnet Address Prefix" }
- },
- "backupRetentionDays": {
- "type": "int"
- },
- "geoRedundantBackup": {
- "type": "string"
- },
- "databaseName": {
- "type": "string"
- }
+ "$schema": "http://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "administratorLogin": {
+ "type": "string"
},
- "variables": {
- "api": "2021-05-01"
+ "administratorLoginPassword": {
+ "type": "securestring"
},
- "resources": [
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2021-05-01",
- "name": "[parameters('vnetName')]",
- "location": "[parameters('location')]",
- "properties": {
- "addressSpace": {
- "addressPrefixes": [
- "[parameters('vnetAddressPrefix')]"
- ]
- }
- }
- },
- {
- "type": "Microsoft.Network/virtualNetworks/subnets",
- "apiVersion": "2021-05-01",
- "name": "[concat(parameters('vnetName'),'/',parameters('subnetName'))]",
- "dependsOn": [
- "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
- ],
- "properties": {
- "addressPrefix": "[parameters('subnetPrefix')]",
- "delegations": [
- {
- "name": "MySQLflexibleServers",
- "properties": {
- "serviceName": "Microsoft.DBforMySQL/flexibleServers"
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.DBforMySQL/flexibleServers",
- "apiVersion": "[variables('api')]",
- "location": "[parameters('location')]",
- "name": "[parameters('serverName')]",
- "dependsOn": [
- "[resourceID('Microsoft.Network/virtualNetworks/subnets/', parameters('vnetName'), parameters('subnetName'))]"
- ],
- "sku": {
- "name": "[parameters('skuName')]",
- "tier": "[parameters('serverEdition')]"
- },
- "properties": {
- "version": "[parameters('version')]",
- "administratorLogin": "[parameters('administratorLogin')]",
- "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
- "availabilityZone": "[parameters('availabilityZone')]",
- "highAvailability": {
- "mode": "[parameters('haEnabled')]",
- "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
- },
- "Storage": {
- "storageSizeGB": "[parameters('storageSizeGB')]",
- "iops": "[parameters('storageIops')]",
- "autogrow": "[parameters('storageAutogrow')]"
- },
- "network": {
- "delegatedSubnetResourceId": "[resourceID('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
- },
- "Backup": {
- "backupRetentionDays": "[parameters('backupRetentionDays')]",
- "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
- }
- },
- "tags": "[parameters('tags')]"
- },
- {
- "type": "Microsoft.DBforMySQL/flexibleServers/databases",
- "apiVersion": "[variables('api')]",
- "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
- "dependsOn": [
- "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
- ],
+ "location": {
+ "type": "string"
+ },
+ "serverName": {
+ "type": "string"
+ },
+ "serverEdition": {
+ "type": "string",
+ "defaultValue": "Burstable",
+ "metadata": {
+ "description": "The tier of the particular SKU, e.g. Burstable, GeneralPurpose, MemoryOptimized. High Availability is available only for GeneralPurpose and MemoryOptimized sku."
+ }
+ },
+ "skuName": {
+ "type": "string",
+ "defaultValue": "Standard_B1ms",
+ "metadata": {
+ "description": "The name of the sku, e.g. Standard_D32ds_v4."
+ }
+ },
+ "storageSizeGB": {
+ "type": "int"
+ },
+ "storageIops": {
+ "type": "int"
+ },
+ "storageAutogrow": {
+ "type": "string",
+ "defaultValue": "Enabled"
+ },
+ "availabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability Zone information of the server. (Leave blank for No Preference)."
+ }
+ },
+ "version": {
+ "type": "string"
+ },
+ "tags": {
+ "type": "object",
+ "defaultValue": {}
+ },
+ "haEnabled": {
+ "type": "string",
+ "defaultValue": "Disabled",
+ "metadata": {
+ "description": "High availability mode for a server : Disabled, SameZone, or ZoneRedundant"
+ }
+ },
+ "standbyAvailabilityZone": {
+ "type": "string",
+ "metadata": {
+ "description": "Availability zone of the standby server."
+ }
+ },
+ "vnetName": {
+ "type": "string",
+ "defaultValue": "azure_mysql_vnet",
+ "metadata": { "description": "Virtual Network Name" }
+ },
+ "subnetName": {
+ "type": "string",
+ "defaultValue": "azure_mysql_subnet",
+ "metadata": { "description": "Subnet Name" }
+ },
+ "vnetAddressPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/16",
+ "metadata": { "description": "Virtual Network Address Prefix" }
+ },
+ "subnetPrefix": {
+ "type": "string",
+ "defaultValue": "10.0.0.0/24",
+ "metadata": { "description": "Subnet Address Prefix" }
+ },
+ "backupRetentionDays": {
+ "type": "int"
+ },
+ "geoRedundantBackup": {
+ "type": "string"
+ },
+ "databaseName": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "api": "2021-05-01"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/virtualNetworks",
+ "apiVersion": "2021-05-01",
+ "name": "[parameters('vnetName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "addressSpace": {
+ "addressPrefixes": [
+ "[parameters('vnetAddressPrefix')]"
+ ]
+ }
+ }
+ },
+ {
+ "type": "Microsoft.Network/virtualNetworks/subnets",
+ "apiVersion": "2021-05-01",
+ "name": "[concat(parameters('vnetName'),'/',parameters('subnetName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.Network/virtualNetworks/', parameters('vnetName'))]"
+ ],
+ "properties": {
+ "addressPrefix": "[parameters('subnetPrefix')]",
+ "delegations": [
+ {
+ "name": "MySQLflexibleServers",
"properties": {
- "charset": "utf8",
- "collation": "utf8_general_ci"
+ "serviceName": "Microsoft.DBforMySQL/flexibleServers"
}
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers",
+ "apiVersion": "[variables('api')]",
+ "location": "[parameters('location')]",
+ "name": "[parameters('serverName')]",
+ "dependsOn": [
+ "[resourceID('Microsoft.Network/virtualNetworks/subnets/', parameters('vnetName'), parameters('subnetName'))]"
+ ],
+ "sku": {
+ "name": "[parameters('skuName')]",
+ "tier": "[parameters('serverEdition')]"
+ },
+ "properties": {
+ "version": "[parameters('version')]",
+ "administratorLogin": "[parameters('administratorLogin')]",
+ "administratorLoginPassword": "[parameters('administratorLoginPassword')]",
+ "availabilityZone": "[parameters('availabilityZone')]",
+ "highAvailability": {
+ "mode": "[parameters('haEnabled')]",
+ "standbyAvailabilityZone": "[parameters('standbyAvailabilityZone')]"
+ },
+ "Storage": {
+ "storageSizeGB": "[parameters('storageSizeGB')]",
+ "iops": "[parameters('storageIops')]",
+ "autogrow": "[parameters('storageAutogrow')]"
+ },
+ "network": {
+ "delegatedSubnetResourceId": "[resourceID('Microsoft.Network/virtualNetworks/subnets', parameters('vnetName'), parameters('subnetName'))]"
+ },
+ "Backup": {
+ "backupRetentionDays": "[parameters('backupRetentionDays')]",
+ "geoRedundantBackup": "[parameters('geoRedundantBackup')]"
}
-
- ]
+ },
+ "tags": "[parameters('tags')]"
+ },
+ {
+ "type": "Microsoft.DBforMySQL/flexibleServers/databases",
+ "apiVersion": "[variables('api')]",
+ "name": "[concat(parameters('serverName'),'/',parameters('databaseName'))]",
+ "dependsOn": [
+ "[concat('Microsoft.DBforMySQL/flexibleServers/', parameters('serverName'))]"
+ ],
+ "properties": {
+ "charset": "utf8",
+ "collation": "utf8_general_ci"
+ }
+ }
+
+ ]
} ```
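To deploy either template, a minimal sketch using the Azure CLI; the resource group name and location are illustrative, and the CLI prompts for any required parameters not supplied inline:

```azurecli-interactive
# Create a resource group, then deploy the saved template file into it.
az group create --name exampleRG --location eastus

az deployment group create \
  --resource-group exampleRG \
  --template-file mysql-flexible-server-template.json
```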
read resourceGroupName &&
az group delete --name $resourceGroupName && echo "Press [ENTER] to continue ..." ```+ ## Next steps
For a step-by-step tutorial that guides you through the process of creating an A
For a step-by-step tutorial to build an app with App Service using MySQL, see: > [!div class="nextstepaction"]
->[Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
+> [Build a PHP (Laravel) web app with MySQL](tutorial-php-database-app.md)
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Azure Database for MySQL
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
orbital Space Partner Program Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/space-partner-program-overview.md
Our differentiated ecosystem of partners spans space operators, manufacturers, s
We believe in a better together story for Space and Spectrum partners running on Azure. By joining the community, you can gain access to various benefits such as: -- Azure Engineering Training & Adoption Resources -- Quarterly NDA roadmap reviews and newsletters-- Participation in Space and Spectrum focused Microsoft events-- Co-engineering for customer POCs-- Microsoft Product Integration or add-ins-- Joint PR & Marketing for POCs and co-investment-- Azure Sponsorship Credits -- Co-sell and joint GTM coordination-- Opportunities to be showcased in Microsoft customer presentations and sales trainings
+### Co-innovation and engineering
+The Azure Space Partner Community will have direct access to Azure engineering and specialist resources to turn our partnership vision into reality, including:
+- Participation in Azure Space training to learn about and onboard the latest Azure Space technologies.
+- Collaboration and innovation with our engineering and sales specialist teams for customer proof of concepts to demonstrate the value of our partnership.
+- Access to quarterly Azure Space Confidential roadmap reviews and newsletters, and the ability to directly influence the product roadmap.
+- Partner highlighting in reference architectures and training materials.
+
+### Go-to-market scale and support
+Members of the Azure Space Partner Community can increase their go-to-market opportunities and margins by participating in the following:
+- Opportunity for Microsoft first-party product integration or add-ins, such as in Teams, Power BI, or Outlook.
+- White-glove onboarding to the Microsoft Cloud Partner Program, to become a cloud solution provider or managed solution provider via direct or indirect channels.
+- Support for onboarding to the Azure Marketplace as an indirect or transactable offer, with access to a broad set of Azure sellers and customers.
+- Joint go-to-market coordination with a regular cadence of customer pipeline reviews.
+
+### Marketing and community involvement
+Azure Space provides a unique opportunity for partners to expand their reach through public outreach via our marketing channels, such as:
+- Opportunities to be showcased in Microsoft customer presentations and sales training.
+- Participation in space and spectrum focused Microsoft events, such as BUILD, Inspire, or sales readiness.
+- Joint public relations and marketing opportunities, such as press releases, blogs, and speaking events at conferences.
+
+### Product offering incentives
+The Space Partner Community will also have special access to our premier incentives for Azure Space product offerings:
+- Azure credits, sponsored accounts, and volume discounts in return for a Microsoft Azure Consumption Commitment.
+- EA programs, such as LSPs and AOSG, including rebates based on resell volume.
+- FastTrack dedicated migration and modernization architecture support for qualified opportunities.
+- Many other MPN benefits, such as credits for gold competencies and partner marketing benefits via co-sell programs.
:::image type="content" source="media/azure-space-program.png" alt-text="Benefits of the Azure Space Community" lightbox="media/azure-space-program.png":::
To join the community, we ask partners to commit to:
- [Analyze space data on Azure](/azure/architecture/example-scenario/data/geospatial-data-processing-analytics-azure) - [Drive insights with geospatial partners on Azure – ESRI and visualize with Power BI](https://azuremarketplace.microsoft.com/en/marketplace/apps/esri.arcgis-enterprise?tab=Overview) - [Use the Azure Software Radio Developer VM to jump start your software radio development](https://github.com/microsoft/azure-software-radio)-- [List your app on the Azure Marketplace](../marketplace/determine-your-listing-type.md#free-trial)
+- [List your app on the Azure Marketplace](../marketplace/determine-your-listing-type.md#free-trial)
peering-service Location Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md
The table in this article provides information on the Peering Service connectivi
| **Partners** | **Market**| |--||
-| [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) |North America, Europe|
+| [Lumen Technologies](https://www.ctl.io/microsoft-azure-peering-services/) |North America, Europe, Asia|
| [BBIX](https://www.bbix.net/en/service/) |Japan | | [CCL](https://concepts.co.nz/news/general-news/) |Oceania | | [Colt](https://www.colt.net/why-colt/strategic-alliances/microsoft-partnership/)|Europe, Asia|
The table in this article provides information on the Peering Service connectivi
| [CMC Networks](https://www.cmcnetworks.net/products/microsoft-azure-peering-services.html) |Africa| | [MainOne](https://www.mainone.net/connectivity-services/) |Africa| | [BICS](https://www.bics.com/services/capacity-solutions/cloud-connect/microsoft-azure-cloud-connect/) |Europe|
-| [Telstra International](https://www.telstra.com.sg/en/products/global-networks/global-internet/global-internet-direct) |Asia |
+| [Telstra International](https://www.telstra.com.sg/en/products/global-networks/global-internet/global-internet-direct) |Asia, Europe |
| [Atman](https://www.atman.pl/en/atman-internet-maps/) |Europe|
+| [LINX](https://www.linx.net/services/microsoft-azure-peering/) |Europe|
+| [Converge ICT](https://www.convergeict.com/enterprise/microsoft-azure-peering-service-maps/) |Asia|
++ > [!NOTE] > For more information about enlisting with the Peering Service Partner program, reach out to peeringservice@microsoft.com.
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
By default, pgAudit log statements are emitted along with your regular log state
To learn how to set up logging to Azure Storage, Event Hubs, or Azure Monitor logs, visit the resource logs section of the [server logs article](concepts-logging.md). ## Installing pgAudit
+Before you can install the pgAudit extension in Azure Database for PostgreSQL - Flexible Server, you need to allow-list the extension for use.
+Using the [Azure portal](https://portal.azure.com):
+
+ 1. Select your Azure Database for PostgreSQL - Flexible Server.
+ 2. On the sidebar, select **Server Parameters**.
+ 3. Search for the `azure.extensions` parameter.
 4. Select pgAudit as the extension you wish to allow-list.
 :::image type="content" source="./media/concepts-extensions/allow-list.png" alt-text="Screenshot showing Azure Database for PostgreSQL - allow-listing extensions for installation.":::
+
+Using [Azure CLI](/cli/azure/):
+
 You can allow-list extensions via the CLI parameter set [command](/cli/azure/postgres/flexible-server/parameter?view=azure-cli-latest&preserve-view=true).
+
+ ```bash
+az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --subscription <your subscription id> --name azure.extensions --value pgAudit
+ ```
+
+
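pgAudit must also be loaded through the server's `shared_preload_libraries` parameter, as described below; a minimal CLI sketch of that step, assuming the same `parameter set` command pattern and the flexible-server `restart` command:

```azurecli-interactive
# Load pgAudit into shared_preload_libraries (takes effect only after a restart).
az postgres flexible-server parameter set --resource-group <your resource group> --server-name <your server name> --name shared_preload_libraries --value pgaudit

# Restart the server so the preload change takes effect.
az postgres flexible-server restart --resource-group <your resource group> --name <your server name>
```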
To install pgAudit, you need to include it in the server's shared preload libraries. A change to Postgres's `shared_preload_libraries` parameter requires a server restart to take effect. You can change parameters using the [Azure portal](howto-configure-server-parameters-using-portal.md), [Azure CLI](howto-configure-server-parameters-using-cli.md), or [REST API](/rest/api/postgresql/singleserver/configurations/createorupdate). Using the [Azure portal](https://portal.azure.com):
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 06/29/2022 Last updated : 07/06/2022 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-link Create Private Endpoint Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-cli.md
Use the VM you created in the previous step to connect to the webapp across the
8. In the bastion connection to **myVM**, open the web browser.
-9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
+9. Enter the URL of your web app, ``https://mywebapp1979.azurewebsites.net``.
If your web app hasn't been deployed, you'll get the following default web app page:
private-link Create Private Endpoint Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-powershell.md
Use the VM you created in the previous step to connect to the webapp across the
8. In the bastion connection to **myVM**, open the web browser.
-9. Enter the URL of your web app, **https://mywebapp1979.azurewebsites.net**.
+9. Enter the URL of your web app, ``https://mywebapp1979.azurewebsites.net``.
If your web app hasn't been deployed, you'll get the following default web app page:
private-link Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-link-overview.md
Traffic between your virtual network and the service travels the Microsoft backb
## Key benefits Azure Private Link provides the following benefits: -- **Privately access services on the Azure platform**: Connect your virtual network to services in Azure without a public IP address at the source or destination. Service providers can render their services in their own virtual network and consumers can access those services in their local virtual network. The Private Link platform will handle the connectivity between the consumer and services over the Azure backbone network.
+- **Privately access services on the Azure platform**: Use private endpoints to connect your virtual network to all services that can be used as application components in Azure. Service providers can render their services in their own virtual network and consumers can access those services in their local virtual network. The Private Link platform will handle the connectivity between the consumer and services over the Azure backbone network.
- **On-premises and peered networks**: Access services running in Azure from on-premises over ExpressRoute private peering, VPN tunnels, and peered virtual networks using private endpoints. There's no need to configure ExpressRoute Microsoft peering or traverse the internet to reach the service. Private Link provides a secure way to migrate workloads to Azure.
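As a hedged illustration of the private endpoint workflow, the following CLI sketch creates a private endpoint for a web app; all names, including the web app from the examples above, are illustrative:

```azurecli-interactive
# Look up the web app's resource ID (mywebapp1979 matches the example above).
webappId=$(az webapp show --name mywebapp1979 --resource-group myResourceGroup --query id --output tsv)

# Create a private endpoint for the web app in an existing VNet and subnet.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myPrivateEndpoint \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id $webappId \
  --group-id sites \
  --connection-name myConnection
```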
purview Catalog Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-permissions.md
A collection is a tool that the Microsoft Purview Data Map uses to group assets,
The Microsoft Purview governance portal uses a set of predefined roles to control who can access what within the account. These roles are currently: - **Collection administrator** - a role for users that will need to assign roles to other users in the Microsoft Purview governance portal or manage collections. Collection admins can add users to roles on collections where they're admins. They can also edit collections, their details, and add subcollections.
+ A collection administrator on the [root collection](reference-azure-purview-glossary.md#root-collection) also automatically has permission to the Microsoft Purview governance portal. If your **root collection administrator** ever needs to be changed, you can [follow the steps in the section below](#administrator-change).
- **Data curators** - a role that provides access to the data catalog to manage assets, configure custom classifications, set up glossary terms, and view data estate insights. Data curators can create, read, modify, move, and delete assets. They can also apply annotations to assets. - **Data readers** - a role that provides read-only access to data assets, classifications, classification rules, collections and glossary terms. - **Data share contributor** - A role that can share data within an organization and with other organizations using data sharing capabilities in Microsoft Purview. Data share contributors can view, create, update, and delete sent and received shares.
For full instructions, see our [how-to guide for adding role assignments](how-to
## Administrator change
-There may be a time when your [root collection admin](#roles) needs to change. By default, the user who creates the account is automatically assigned collection admin to the root collection. To update the root collection admin, there are three options:
+There may be a time when your [root collection admin](#roles) needs to change. By default, the user who creates the account is automatically assigned collection admin to the root collection. To update the root collection admin, there are four options:
-- You can [assign permissions through the portal](how-to-create-and-manage-collections.md#add-role-assignments) as you have for any other role.
+- You can manage root collection administrators in the [Azure portal](https://portal.azure.com/):
+ 1. Sign in to the Azure portal and search for your Microsoft Purview account.
+ 1. Select **Root collection permission** from the left-side menu on your Microsoft Purview account page.
+ 1. Select **Add root collection admin** to add an administrator.
+ :::image type="content" source="./media/catalog-permissions/root-collection-admin.png" alt-text="Screenshot of a Microsoft Purview account page in the Azure portal with the Root collection permission page selected and the Add root collection admin option highlighted." border="true":::
+ 1. You can also select **View all root collection admins** to be taken to the root collection in the Microsoft Purview governance portal.
+
+- You can [assign permissions through the Microsoft Purview governance portal](how-to-create-and-manage-collections.md#add-role-assignments) as you have for any other role.
- You can use the REST API to add a collection administrator. Instructions to use the REST API to add a collection admin can be found in our [REST API for collections documentation.](tutorial-metadata-policy-collections-apis.md#add-the-root-collection-administrator-role) For additional information, you can see our [REST API reference](/rest/api/purview/accounts/add-root-collection-admin).
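The REST route can also be scripted with `az rest`; a hedged sketch follows, where the request body shape is an assumption to verify against the [REST API reference](/rest/api/purview/accounts/add-root-collection-admin):

```azurecli-interactive
# The body shape below is assumed; confirm it against the REST reference before use.
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Purview/accounts/<account-name>/addRootCollectionAdmin?api-version=2021-07-01" \
  --body '{"newRootCollectionAdmin": "<user-object-id>"}'
```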
purview Concept Best Practices Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-sensitivity-labels.md
The following sections walk you through the process of implementing labeling for
- When you configure sensitivity labels for the Microsoft Purview Data Map, you might define autolabeling rules for files, database columns, or both within the label properties. Microsoft Purview labels files within the Microsoft Purview Data Map. When the autolabeling rule is configured, Microsoft Purview automatically applies the label or recommends that the label is applied. > [!WARNING]
- > If you haven't configured autolabeling for files and emails on your sensitivity labels, users might be affected within your Office and Microsoft 365 environment. You can test autolabeling on database columns without affecting users.
+ > If you haven't configured autolabeling for items on your sensitivity labels, users might be affected within your Office and Microsoft 365 environment. You can test autolabeling on database columns without affecting users.
- If you're defining new autolabeling rules for files when you configure labels for the Microsoft Purview Data Map, make sure that you have the condition for applying the label set appropriately.-- You can set the detection criteria to **All of these** or **Any of these** in the upper right of the autolabeling for files and emails page of the label properties.
+- You can set the detection criteria to **All of these** or **Any of these** in the upper right of the autolabeling for items page of the label properties.
- The default setting for detection criteria is **All of these**. This setting means that the asset must contain all the specified sensitive information types for the label to be applied. While the default setting might be valid in some instances, many customers want to use **Any of these**. Then if at least one asset is found, the label is applied. :::image type="content" source="media/concept-best-practices/label-detection-criteria.png" alt-text="Screenshot that shows detection criteria for a label.":::
purview How To Automatically Label Your Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-automatically-label-your-content.md
Title: How to automatically apply sensitivity labels to your data in Microsoft Purview Data Map description: Learn how to create sensitivity labels and automatically apply them to your data during a scan.--++ Previously updated : 04/21/2021 Last updated : 07/07/2022 # How to automatically apply sensitivity labels to your data in the Microsoft Purview Data Map
After you've extended labeling to assets in the Microsoft Purview Data Map, all
1. Name the label. Then, under **Define the scope for this label**: - In all cases, select **Schematized data assets**.
- - To label files, also select **Files & emails**. This option isn't required to label schematized data assets only
+ - To label files, also select **Items**. This option isn't required to label schematized data assets only.
:::image type="content" source="media/how-to-automatically-label-your-content/create-label-scope-small.png" alt-text="Automatically label in the Microsoft Purview compliance center" lightbox="media/how-to-automatically-label-your-content/create-label-scope.png":::
For example:
### Step 4: Publish labels
+If the sensitivity label has been published previously, no further action is needed.
+
+If this is a new sensitivity label that hasn't been published before, the label must be published for the changes to take effect. Follow [these steps to publish the label](/microsoft-365/compliance/create-sensitivity-labels#publish-sensitivity-labels-by-creating-a-label-policy).
+ Once you create a label, you'll need to scan your data in the Microsoft Purview Data Map to automatically apply the labels you've created, based on the autolabeling rules you've defined. ## Scan your data to apply sensitivity labels automatically
purview Quickstart Bicep Create Azure Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/quickstart-bicep-create-azure-purview.md
+
+ Title: 'Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Bicep'
+description: This Quickstart describes how to create a Microsoft Purview (formerly Azure Purview) account using Bicep.
++ Last updated : 07/05/2022+++++
+# Quickstart: Create a Microsoft Purview (formerly Azure Purview) account using Bicep
+
+This quickstart describes the steps to deploy a Microsoft Purview (formerly Azure Purview) account using Bicep.
++
+After you've created the account, you can begin registering your data sources and using the Microsoft Purview governance portal to understand and govern your data landscape. By connecting to data across your on-premises, multi-cloud, and software-as-a-service (SaaS) sources, the Microsoft Purview Data Map creates an up-to-date map of your information. It identifies and classifies sensitive data, and provides end-to-end data lineage. Data consumers can discover data across your organization, and data administrators can audit, secure, and ensure the right use of your data.
+
+For more information about the governance capabilities of Microsoft Purview, formerly Azure Purview, [see our overview page](overview.md). For more information about deploying Microsoft Purview across your organization, [see our deployment best practices](deployment-best-practices.md).
+
+To deploy a Microsoft Purview account to your subscription, follow the prerequisites guide below.
++
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/data-share-share-storage-account/).
++
+The following resources are defined in the Bicep file:
+
+* [**Microsoft.Purview/accounts**](/azure/templates/microsoft.purview/accounts)
+
+The Bicep file performs the following tasks:
+
+* Creates a Microsoft Purview account in the specified resource group.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as `main.bicep` to your local computer.
+1. Deploy the Bicep file using Azure CLI or Azure PowerShell.
+
+ > [!NOTE]
+ > Replace **\<project-name\>** with a project name that will be used to generate resource names. Replace **\<invitation-email\>** with an email address for receiving data share invitations.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters projectName=<project-name> invitationEmail=<invitation-email>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```powershell-interactive
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -projectName "<project-name>" -invitationEmail "<invitation-email>"
+ ```
+
+
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
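To verify the new account from the command line, a minimal sketch, assuming the `purview` CLI extension is installed:

```azurecli-interactive
# Requires the purview extension: az extension add --name purview
az purview account show --name <account-name> --resource-group exampleRG
```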
+
+## Open Microsoft Purview governance portal
+
+After your Microsoft Purview account is created, you'll use the Microsoft Purview governance portal to access and manage it. There are two ways to open the Microsoft Purview governance portal:
+
+* Open your Microsoft Purview account in the [Azure portal](https://portal.azure.com). Select the "Open Microsoft Purview governance portal" tile on the overview page.
+ :::image type="content" source="media/create-catalog-portal/open-purview-studio.png" alt-text="Screenshot showing the Microsoft Purview account overview page, with the Microsoft Purview governance portal tile highlighted.":::
+
+* Alternatively, you can browse to [https://web.purview.azure.com](https://web.purview.azure.com), select your Microsoft Purview account, and sign in to your workspace.
+
+## Get started with your Purview resource
+
+After deployment, the first activities are usually:
+
+* [Create a collection](quickstart-create-collection.md)
+* [Register a resource](azure-purview-connector-overview.md)
+* [Scan the resource](concept-scans-and-ingestion.md)
+
+At this time, these actions can't be performed through a Bicep file. Follow the guides above to get started!
+
+## Clean up resources
+
+To clean up the resources deployed in this quickstart, delete the resource group, which deletes all resources in the group.
+
+You can delete the resources through the Azure portal, Azure CLI, or Azure PowerShell.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```powershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you learned how to create a Microsoft Purview (formerly Azure Purview) account using Bicep and how to access the Microsoft Purview governance portal.
+
+Next, you can create a user-assigned managed identity (UAMI) that will enable your new Microsoft Purview account to authenticate directly with resources using Azure Active Directory (Azure AD) authentication.
+
+To create a UAMI, follow our [guide to create a user-assigned managed identity](manage-credentials.md#create-a-user-assigned-managed-identity).
+
+Follow these next articles to learn how to navigate the Microsoft Purview governance portal, create a collection, and grant access to Microsoft Purview:
+
+> [!div class="nextstepaction"]
+> [Using the Microsoft Purview governance portal](use-azure-purview-studio.md)
+> [Create a collection](quickstart-create-collection.md)
+> [Add users to your Microsoft Purview account](catalog-permissions.md)
purview Tutorial Metadata Policy Collections Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-metadata-policy-collections-apis.md
Whether you're adding or removing a user, group, or service principal, you'll fo
} ``` ## Add the Root Collection Administrator role
-By default, the user who created the Microsoft Purview account is the Root Collection Administrator (that is, the administrator of the topmost level of the collection hierarchy). However, in some cases, an organization needs to change the Root Collection Administrator by using the API. For instance, it's possible that the current Root Collection Administrator no longer exists in the organization. In such a case, the Azure portal might be inaccessible to anyone in the organization. For this reason, using the API to assign a new Root Collection Administrator and manage collection permissions becomes the only way to regain access to the Microsoft Purview account.
+
+By default, the user who created the Microsoft Purview account is the Root Collection Administrator (that is, the administrator of the topmost level of the collection hierarchy). However, in some cases, an organization may want to change the Root Collection Administrator using the API.
```ruby POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Purview/accounts/{accountName}/addRootCollectionAdmin?api-version=2021-07-01
role-based-access-control Custom Roles Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-bicep.md
+
+ Title: Create or update Azure custom roles using Bicep - Azure RBAC
+description: Learn how to create or update Azure custom roles using Bicep and Azure role-based access control (Azure RBAC).
+++++ Last updated : 07/01/2022+++
+#Customer intent: As an IT admin, I want to create custom roles using Bicep so that I can start automating custom role processes.
++
+# Create or update Azure custom roles using Bicep
+
+If the [Azure built-in roles](built-in-roles.md) don't meet the specific needs of your organization, you can create your own [custom roles](custom-roles.md). This article describes how to create or update a custom role using Bicep.
++
+To create a custom role, you specify a role name, role permissions, and where the role can be used. In this article, you create a role named _Custom Role - RG Reader_ with resource permissions that can be assigned at a subscription scope or lower.
+
+## Prerequisites
+
+To create a custom role, you must have permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator).
+
+You also must have an active Azure subscription. If you don't have one, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this article is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/create-role-def). The Bicep file has four parameters and a resources section. The four parameters are:
+
+- Array of actions with a default value of `["Microsoft.Resources/subscriptions/resourceGroups/read"]`.
+- Array of `notActions` with an empty default value.
+- Role name with a default value of `Custom Role - RG Reader`.
+- Role description with a default value of `Subscription Level Deployment of a Role Definition`.
+
+The scope where this custom role can be assigned is set to the current subscription.
++
+The resource defined in the Bicep file is:
+
+- [Microsoft.Authorization/roleDefinitions](/azure/templates/Microsoft.Authorization/roleDefinitions)
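+
+Putting those pieces together, here's a minimal sketch of what **main.bicep** could look like, consistent with the parameters and resource described above (the published Quickstart template may differ in details such as the API version or how the role definition name is derived):
+
+```bicep
+targetScope = 'subscription'
+
+@description('Array of actions for the roleDefinition')
+param actions array = [
+  'Microsoft.Resources/subscriptions/resourceGroups/read'
+]
+
+@description('Array of notActions for the roleDefinition')
+param notActions array = []
+
+@description('Friendly name of the role definition')
+param roleName string = 'Custom Role - RG Reader'
+
+@description('Detailed description of the role definition')
+param roleDescription string = 'Subscription Level Deployment of a Role Definition'
+
+// Generate a stable, unique GUID for the role definition name at subscription scope.
+var roleDefName = guid(subscription().id, string(actions), string(notActions))
+
+resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
+  name: roleDefName
+  properties: {
+    roleName: roleName
+    description: roleDescription
+    type: 'customRole'
+    permissions: [
+      {
+        actions: actions
+        notActions: notActions
+      }
+    ]
+    assignableScopes: [
+      subscription().id
+    ]
+  }
+}
+```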
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ myActions='["Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read"]'
+
+ az deployment sub create --location eastus --name customRole --template-file main.bicep --parameters actions=$myActions
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ $myActions = @("Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read")
+
+ New-AzSubscriptionDeployment -Location eastus -Name customRole -TemplateFile ./main.bicep -actions $myActions
+ ```
+
+
+
+ > [!NOTE]
+ > Create a variable called **myActions** and then pass that variable. Replace the sample actions with the actions for the roleDefinition.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to verify that the custom role was created.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az role definition list --name "Custom Role - RG Reader"
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzRoleDefinition "Custom Role - RG Reader"
+```
+++
+## Update a custom role
+
+Similar to creating a custom role, you can update an existing custom role using Bicep. To update a custom role, you need to specify the role you want to update.
+
+Here are the changes you would need to make to the previous Bicep file to update the custom role.
+
+1. Include the role ID as a parameter.
+
+ ```bicep
+ ...
+ @description('ID of the role definition')
+ param roleDefName string
+ ...
+
+ ```
+
+2. Remove the roleDefName variable. You'll get a warning if you have a parameter and variable with the same name.
+3. Use Azure CLI or Azure PowerShell to get the roleDefName.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ az role definition list --name "Custom Role - RG Reader"
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ Get-AzRoleDefinition -Name "Custom Role - RG Reader"
+ ```
+++
+4. Use Azure CLI or Azure PowerShell to deploy the updated Bicep file, replacing **\<name-id\>** with the roleDefName, and replacing the sample actions with the updated actions for the roleDefinition.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli-interactive
+ myActions='["Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read"]'
+
+ az deployment sub create --location eastus --name customrole --template-file main.bicep --parameters actions=$myActions roleDefName="name-id" roleName="Custom Role - RG Reader"
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell-interactive
+ $myActions = @("Microsoft.Resources/resources/read","Microsoft.Resources/subscriptions/resourceGroups/read")
+
+ New-AzSubscriptionDeployment -Location eastus -Name customrole -TemplateFile ./main.bicep -actions $myActions -roleDefName "name-id" -roleName "Custom Role - RG Reader"
+ ```
+
+
+
+ > [!NOTE]
+ > It may take several minutes for the updated role definition to be propagated.
+
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to remove the custom role.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az role definition delete --name "Custom Role - RG Reader"
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzRoleDefinition -Name "Custom Role - RG Reader"
+```
+++
+## Next steps
+
+- [Understand Azure role definitions](role-definitions.md)
+- [Bicep documentation](../azure-resource-manager/bicep/overview.md)
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
security Log Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md
na Previously updated : 10/31/2019 Last updated : 07/08/2022
The following table lists the most important types of logs available in Azure:
|[Activity logs](../../azure-monitor/essentials/platform-logs-overview.md)|Control-plane events on Azure Resource Manager resources| Provides insight into the operations that were performed on resources in your subscription.| REST API, [Azure Monitor](../../azure-monitor/essentials/platform-logs-overview.md)| |[Azure Resource logs](../../azure-monitor/essentials/platform-logs-overview.md)|Frequent data about the operation of Azure Resource Manager resources in subscription| Provides insight into operations that your resource itself performed.| Azure Monitor| |[Azure Active Directory reporting](../../active-directory/reports-monitoring/overview-reports.md)|Logs and reports | Reports user sign-in activities and system activity information about users and group management.|[Graph API](../../active-directory/develop/microsoft-graph-intro.md)|
-|[Virtual machines and cloud services](../../azure-monitor/vm/monitor-virtual-machine.md)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using Windows Azure Diagnostics [[WAD](../../azure-monitor/agents/diagnostics-extension-overview.md)] storage) and Linux in Azure Monitor|
+|[Virtual machines and cloud services](../../azure-monitor/vm/monitor-virtual-machine.md)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using [Azure Diagnostics](../../azure-monitor/agents/diagnostics-extension-overview.md) storage) and Linux in Azure Monitor|
|[Azure Storage Analytics](/rest/api/storageservices/fileservices/storage-analytics)|Storage logging, provides metrics data for a storage account|Provides insight into trace requests, analyzes usage trends, and diagnoses issues with your storage account.| REST API or the [client library](/dotnet/api/overview/azure/storage)| |[Network security group (NSG) flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md)|JSON format, shows outbound and inbound flows on a per-rule basis|Displays information about ingress and egress IP traffic through a Network Security Group.|[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md)| |[Application insight](../../azure-monitor/app/app-insights-overview.md)|Logs, exceptions, and custom diagnostics| Provides an application performance monitoring (APM) service for web developers on multiple platforms.| REST API, [Power BI](https://powerbi.microsoft.com/documentation/powerbi-azure-and-power-bi/)|
The following table lists the most important types of logs available in Azure:
- [Auditing and logging](management-monitoring-overview.md): Protect data by maintaining visibility and responding quickly to timely security alerts. -- [Security logging and audit-log collection within Azure](https://azure.microsoft.com/resources/videos/security-logging-and-audit-log-collection/): Enforce these settings to ensure that your Azure instances are collecting the correct security and audit logs.- - [Configure audit settings for a site collection](https://support.office.com/article/Configure-audit-settings-for-a-site-collection-A9920C97-38C0-44F2-8BCB-4CF1E2AE22D2?ui=&rs=&ad=US): If you're a site collection administrator, retrieve the history of individual users' actions and the history of actions taken during a particular date range. - [Search the audit log in the Microsoft 365 Defender portal](/microsoft-365/compliance/search-the-audit-log-in-security-and-compliance): Use the Microsoft 365 Defender portal to search the unified audit log and view user and administrator activity in your organization.
security Tls Certificate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/tls-certificate-changes.md
# Azure TLS certificate changes
+> [!IMPORTANT]
+> This article was published concurrently with the TLS certificate change, and is not being updated. For up-to-date information about CAs, see [Azure Certificate Authority details](azure-ca-details.md).
+ Microsoft uses TLS certificates from the set of Root Certificate Authorities (CAs) that adhere to the CA/Browser Forum Baseline Requirements. All Azure TLS/SSL endpoints contain certificates chaining up to the Root CAs provided in this article. Changes to Azure endpoints began transitioning in August 2020, with some services completing their updates in 2022. All newly created Azure TLS/SSL endpoints contain updated certificates chaining up to the new Root CAs. All Azure services are impacted by this change. Details for some services are listed below:
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
Sample templates are available: [Service Fabric Stateless Node types template](h
[Azure Spot Virtual Machines on scale sets](../virtual-machine-scale-sets/use-spot.md) enables users to take advantage of unused compute capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict these Azure Spot Virtual Machine instances. Therefore, Spot VM node types are great for workloads that can handle interruptions and don't need to be completed within a specific time frame. Recommended workloads include development, testing, batch processing jobs, big data, or other large-scale stateless scenarios.
-To set one or more stateless node types to use Spot VM, set both **isStateless** and **IsSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless in the cluster. Stateless node types configured to use Spot VMs have Eviction Policy set to 'Delete'.
+To set one or more stateless node types to use Spot VMs, set both the **isStateless** and **IsSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, the cluster must have at least one primary node type, which isn't stateless. Stateless node types configured to use Spot VMs have the eviction policy set to 'Delete' by default. Customers can configure the 'evictionPolicy' to be 'Delete' or 'Deallocate', but this can only be defined at the time of node type creation.
-Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
+Sample templates are available: [Service Fabric Spot Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-Spot)
+
+* The Service Fabric managed cluster resource apiVersion should be **2022-06-01-preview** or later.
+
+```json
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ "isStateless": true,
+ "isPrimary": false,
+ "IsSpotVM": true,
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ "vmSize": "[parameters('nodeTypeSize')]",
+ "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+ "dataDiskSizeGB": "[parameters('nodeTypeDataDiskSizeGB')]"
+ }
+}
+```
+
+## Enabling Spot VMs with Try & Restore
-* The Service Fabric managed cluster resource apiVersion should be **2022-02-01-preview** or later.
+This configuration enables the platform to automatically try to restore the evicted Spot VMs. Refer to the virtual machine scale set doc for [details](../virtual-machine-scale-sets/use-spot.md#try--restore).
+This configuration can only be enabled on new Spot node types by specifying the **spotRestoreTimeout**, an ISO 8601 time duration with a value between 30 and 2880 minutes. After eviction, the platform tries to restore the VMs for this duration.
```json {
Sample templates are available: [Service Fabric Stateless Node types template](h
"isStateless": true, "isPrimary": false, "IsSpotVM": true,
+ "evictionPolicy": "deallocate",
+ "spotRestoreTimeout": "PT30M",
"vmImagePublisher": "[parameters('vmImagePublisher')]", "vmImageOffer": "[parameters('vmImageOffer')]", "vmImageSku": "[parameters('vmImageSku')]",
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Title: Built-in policy definitions for Azure Service Fabric description: Lists Azure Policy built-in policy definitions for Azure Service Fabric. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
service-health Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Service Health description: Sample Azure Resource Graph queries for Azure Service Health showing use of resource types and tables to access Azure Service Health related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
spring-cloud How To Configure Health Probes Graceful Termination https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-configure-health-probes-graceful-termination.md
+
+ Title: How to configure health probes and graceful termination period for apps hosted in Azure Spring Apps
+description: Shows you how to customize apps running in Azure Spring Apps with health probes and graceful termination period.
+++ Last updated : 07/02/2022++++
+# How to configure health probes and graceful termination periods for apps hosted in Azure Spring Apps
+
+**This article applies to:** ✔️ Java ✔️ C#
+
+**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to customize apps running in Azure Spring Apps with health probes and graceful termination periods.
+
+A probe is a diagnostic performed periodically by Azure Spring Apps on an app instance. To perform a diagnostic, Azure Spring Apps either executes an arbitrary command of your choice within the app instance, establishes a TCP socket connection, or makes an HTTP request.
+
+Azure Spring Apps uses liveness probes to determine when to restart an application. For example, liveness probes could catch a deadlock, where an application is running but unable to make progress. Restarting the application in such a state can help to make the application more available despite bugs.
+
+Azure Spring Apps uses readiness probes to determine when an app instance is ready to start accepting traffic. One use of this signal is to control which app instances are used as backends for the application. When an app instance isn't ready, it's removed from Kubernetes Service Discovery. For more information, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+
+Azure Spring Apps uses startup probes to determine when an application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. You can use this behavior to adopt liveness checks on slow starting applications, preventing them from getting killed before they're up and running.
+
+Azure Spring Apps offers default health probe rules for every application. This article shows you how to customize your application with three kinds of health probes.
+
+## Prerequisites
+
+- The [Azure Spring Apps extension](/cli/azure/azure-cli-extensions-overview) for the Azure CLI (an installation sketch follows below).
+
+## Configure health probes and graceful termination for applications
+
+The following sections describe the properties available for configuration and how to set the properties using the Azure CLI.
+
+### Graceful termination
+
+The following table describes the property available for configuring graceful termination.
+
+| Property name | Description |
+|-||
+| terminationGracePeriodSeconds | The grace period is the duration in seconds after the processes running in the app instance are sent a termination signal and before the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. The value must be a non-negative integer. The value zero indicates to stop immediately via the kill signal (with no opportunity to shut down). If this value is nil, the default grace period will be used instead. The default value is 90 seconds. |
+
+### Health probe properties
+
+The following table describes the properties available for configuring health probes.
+
+| Property name | Description |
+||-|
+| initialDelaySeconds | The number of seconds after the app instance has started before probes are initiated. The default value is 0 seconds. The minimum value is 0. |
+| periodSeconds | How often (in seconds) to perform the probe. The default value is 10 seconds. The minimum value is 1 second. |
+| timeoutSeconds | The number of seconds after which the probe times out. The default value is 1 second. The minimum value is 1 second. |
+| failureThreshold | The minimum number of consecutive failures for the probe to be considered failed after having succeeded. The default value is 3. The minimum value is 1. |
+| successThreshold | The minimum number of consecutive successes for the probe to be considered successful after having failed. The default value is 1. The value must be 1 for liveness and startup. The minimum value is 1. |
+
+### Probe action properties
+
+There are three different ways to check an app instance using a probe. Each probe must define exactly one of these three probe actions:
+
+- `HTTPGetAction`
+
+ Performs an HTTP GET request against the app instance on a specified path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
+
+ | Property name | Description |
+ ||--|
+ | scheme | The scheme to use for connecting to the host. Defaults to HTTP. |
+ | path | The path to access on the HTTP server of the app instance, such as `/healthz`. |
+
+- `ExecAction`
+
+ Executes a specified command inside the app instance. The diagnostic is considered successful if the command exits with a status code of 0.
+
+ | Property name | Description |
+ ||-|
+ | command | The command line to execute inside the app instance. The working directory for the command is root ('/') in the app instance's filesystem. The command is run using `exec`, not inside a shell, so traditional shell instructions won't work. To use a shell, you need to explicitly call out to that shell. An exit status of 0 is treated as live/healthy and non-zero is unhealthy. |
+
+- `TCPSocketAction`
+
+ Performs a TCP check against the app instance.
+
+ No properties are currently available for customization.
+
+### Customize your application by using the Azure CLI
+
+The following steps show you how to customize your application.
+
+1. Use the following command to create an application with a liveness probe and a readiness probe:
+
+ ```azurecli
+ az spring app create \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Cloud-instance-name> \
+ --name <application-name> \
+ --enable-liveness-probe true \
+ --liveness-probe-config <path-to-liveness-probe-json-file> \
+ --enable-readiness-probe true \
+ --readiness-probe-config <path-to-readiness-probe-json-file>
+ ```
+
+ The following example shows the contents of a sample JSON file passed to the `--liveness-probe-config` parameter in the create command:
+
+ ```json
+ {
+ "probe": {
+ "initialDelaySeconds": 30,
+ "periodSeconds": 10,
+ "timeoutSeconds": 1,
+ "failureThreshold": 30,
+ "successThreshold": 1,
+ "probeAction": {
+ "type": "TCPSocketAction",
+ }
+ }
+ }
+ ```
+
+ > [!NOTE]
+ > Azure Spring Apps also supports two more kinds of probe actions, as shown in the following JSON file examples:
+ >
+ > ```json
+ > "probeAction": {
+ > "type": "HTTPGetAction",
+ > "scheme": "HTTP",
+ > "path": "/anyPath"
+ > }
+ > ```
+ >
+ > and
+ >
+ > ```json
+ > "probeAction": {
+ > "type": "ExecAction",
+ > "command": ["cat", "/tmp/healthy"]
+ > }
+ > ```
+
+1. Optionally, protect slow starting containers with a startup probe by using the following command:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Cloud-instance-name> \
+ --name <application-name> \
+ --enable-startup-probe true \
+ --startup-probe-config <path-to-startup-probe-json-file>
+ ```
+
+1. Optionally, disable any specific health probe using the following command:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Cloud-instance-name> \
+ --name <application-name> \
+ --enable-liveness-probe false
+ ```
+
+1. Optionally, set the termination grace period seconds using the following command:
+
+ ```azurecli
+ az spring app update \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Cloud-instance-name> \
+ --name <application-name> \
+ --grace-period <termination-grace-period-seconds>
+ ```
+
+## Use best practices
+
+Use the following best practices when configuring health probes and graceful termination for apps hosted in Azure Spring Apps.
+
+- Use the liveness and readiness probes together. The reason for this recommendation is that Azure Spring Apps provides two approaches for service discovery at the same time. When the readiness probe fails, the app instance is removed only from Kubernetes Service Discovery. A properly configured liveness probe can remove the failing app instance from Eureka Service Discovery to avoid unexpected cases. For more information about service discovery, see [Discover and register your Spring Boot applications](how-to-service-registration.md).
+- When an app instance starts, the first check occurs after the delay specified by `initialDelaySeconds`, and subsequent checks happen periodically, with the period length specified by `periodSeconds`. If the app fails to respond to the requests the number of consecutive times defined by `failureThreshold`, the app instance is restarted. Be sure your application can start fast enough, or update these parameters, so that the total timeout `initialDelaySeconds + periodSeconds * failureThreshold` is longer than the start time of your application. For example, with the default liveness settings shown in the FAQ below, the total timeout is 60 + 10 * 24 = 300 seconds.
+- For Spring Boot applications, Spring Boot ships with [Health Groups](https://docs.spring.io/spring-boot/docs/2.2.x/reference/html/production-ready-features.html#health-groups) support, allowing developers to select a subset of health indicators and group them under a single, correlated health status. For more information, see [Liveness and Readiness Probes with Spring Boot](https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot) on the Spring Blog. A configuration sketch for exposing these endpoints follows the examples below.
+
+ The following examples show Liveness and Readiness probes with Spring Boot:
+
+ - Liveness probe:
+
+ ```json
+ "probe": {
+ "initialDelaySeconds": 30,
+ "periodSeconds": 10,
+ "timeoutSeconds": 1,
+ "failureThreshold": 30,
+ "successThreshold": 1,
+ "probeAction": {
+ "type": "HTTPGetAction",
+ "scheme": "HTTP",
+ "path": "/actuator/health/liveness"
+ }
+ }
+ ```
+
+ - Readiness probe:
+
+ ```json
+ "probe": {
+ "initialDelaySeconds": 0,
+ "periodSeconds": 10,
+ "timeoutSeconds": 1,
+ "failureThreshold": 3,
+ "successThreshold": 1,
+ "probeAction": {
+ "type": "HTTPGetAction",
+ "scheme": "HTTP",
+ "path": "/actuator/health/readiness"
+ }
+ }
+ ```
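+
+ As a companion to the examples above, here's a minimal Spring Boot configuration sketch (assuming Spring Boot 2.3 or later; these are standard Spring Boot Actuator properties, not settings defined by Azure Spring Apps) that exposes the `/actuator/health/liveness` and `/actuator/health/readiness` endpoints the probes point at:
+
+ ```properties
+ # application.properties: enable the liveness and readiness health groups
+ management.endpoint.health.probes.enabled=true
+ management.health.livenessstate.enabled=true
+ management.health.readinessstate.enabled=true
+ ```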
+
+## FAQs
+
+The following list shows frequently asked questions (FAQ) about using health probes with Azure Spring Apps.
+
+- I received a 400 response when I created an application with customized health probes. What does this mean?
+
+ *The error message will point out which probe is responsible for the provision failure. Be sure the health probe rules are correct and the timeout is long enough for the application to be in the running state.*
+
+- What are the default probe settings for an existing application?
+
+ *The following example shows the default settings:*
+
+ ```json
+ "startupProbe": null,
+ "livenessProbe": {
+ "disableProbe": false,
+ "failureThreshold": 24,
+ "initialDelaySeconds": 60,
+ "periodSeconds": 10,
+ "probeAction": {
+ "type": "TCPSocketAction"
+ },
+ "successThreshold": 1,
+ "timeoutSeconds": 1
+ },
+ "readinessProbe": {
+ "disableProbe": false,
+ "failureThreshold": 3,
+ "initialDelaySeconds": 0,
+ "periodSeconds": 10,
+ "probeAction": {
+ "type": "TCPSocketAction"
+ },
+ "successThreshold": 1,
+ "timeoutSeconds": 1
+ }
+ ```
+
+## Next steps
+
+- [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
spring-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Storage description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Here's an example screenshot of metrics showing input and output events.
Now you know how to use the Stream Analytics no code editor to create a job that captures Event Hubs data to Azure Data Lake Storage Gen2 in Parquet format. Next, you can learn more about Azure Stream Analytics and how to monitor the job that you created. * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Data Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/data-errors.md
There are several data errors that can only be detected after making a call to t
* [Troubleshoot Azure Stream Analytics by using diagnostics logs](stream-analytics-job-diagnostic-logs.md)
-* [Understand Stream Analytics job monitoring and how to monitor queries](stream-analytics-monitoring.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Debug Locally Using Job Diagram Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/debug-locally-using-job-diagram-vs-code.md
In this section, you explore the metrics available for each part of the diagram.
> [!div class="mx-imgBorder"] > ![Job diagram metrics](./media/debug-locally-using-job-diagram-vs-code/job-metrics.png)
-3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Understand Stream Analytics job monitoring and how to monitor queries](stream-analytics-monitoring.md).
+3. Select the name of the input data source from the dropdown to see input metrics. The input source in the screenshot below is called *quotes*. For more information about input metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
> [!div class="mx-imgBorder"] > ![Job diagram input metrics](./media/debug-locally-using-job-diagram-vs-code/input-metrics.png)
In this section, you explore the metrics available for each part of the diagram.
> [!div class="mx-imgBorder"] > ![Step metrics](./media/debug-locally-using-job-diagram-vs-code/step-metrics.png)
-5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Understand Stream Analytics job monitoring and how to monitor queries](stream-analytics-monitoring.md). Live output sinks aren't supported.
+5. Select an output in the diagram or from the dropdown to see output-related metrics. For more information about output metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). Live output sinks aren't supported.
> [!div class="mx-imgBorder"] > ![Output metrics](./media/debug-locally-using-job-diagram-vs-code/output-metrics.png)
stream-analytics Event Ordering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/event-ordering.md
Last updated 08/06/2020
# Configuring event ordering policies for Azure Stream Analytics
-This article describes how to setup and use late arrival and out-of-order event policies in Azure Stream Analytics. These policies are applied only when you use the [TIMESTAMP BY](/stream-analytics-query/timestamp-by-azure-stream-analytics) clause in your query, and they are only applied for cloud input sources.
+This article describes how to set up and use late arrival and out-of-order event policies in Azure Stream Analytics. These policies are applied only when you use the [TIMESTAMP BY](/stream-analytics-query/timestamp-by-azure-stream-analytics) clause in your query, and they're only applied for cloud input sources.
## Event time and Arrival Time
Event may arrive out of order as well. After event time is adjusted based on lat
## Adjust or Drop events
-If events arrive late or out-of-order based on the policies you have configured, you can either drop such events (not processed by Stream Analytics) or have their event time adjusted.
+If events arrive late or out-of-order based on the policies you've configured, you can either drop such events (not processed by Stream Analytics) or have their event time adjusted.
Let us see an example of these policies in action. <br> **Late arrival policy:** 15 seconds
Let us see an example of these policies in action.
| Event No. | Event Time | Arrival Time | System.Timestamp | Explanation | | | | | | | | **1** | 00:10:00 | 00:10:40 | 00:10:25 | Event arrived late and outside tolerance level. So event time gets adjusted to maximum late arrival tolerance. |
-| **2** | 00:10:30 | 00:10:41 | 00:10:30 | Event arrived late but within tolerance level. So event time does not get adjusted. |
+| **2** | 00:10:30 | 00:10:41 | 00:10:30 | Event arrived late but within tolerance level. So event time doesn't get adjusted. |
| **3** | 00:10:42 | 00:10:42 | 00:10:42 | Event arrived on time. No adjustment needed. |
-| **4** | 00:10:38 | 00:10:43 | 00:10:38 | Event arrived out-of-order but within the tolerance of 8 seconds. So, event time does not get adjusted. For analytics purposes, this event will be considered as preceding event number 4. |
+| **4** | 00:10:38 | 00:10:43 | 00:10:38 | Event arrived out-of-order but within the tolerance of 8 seconds. So, event time doesn't get adjusted. For analytics purposes, this event will be considered as preceding event number 3. |
| **5** | 00:10:35 | 00:10:45 | 00:10:37 | Event arrived out-of-order and outside tolerance of 8 seconds. So, event time is adjusted to maximum of out-of-order tolerance. | ## Can these settings delay output of my job?
Example of this message is: <br>
Your input source (Event Hub/IoT Hub) likely has multiple partitions. Azure Stream Analytics produces output for time stamp t1 only after all the partitions that are combined are at least at time t1. For example, assume that the query reads from an event hub that has two partitions. One of the partitions, P1, has events until time t1. The other partition, P2, has events until time t1 + x. Output is then produced until time t1. But if there's an explicit Partition by PartitionId clause, both the partitions progress independently.
-When multiple partitions from the same input stream are combined, the late arrival tolerance is the maximum amount of time that every partition waits for new data. If there is one partition in your Event Hub, or if IoT Hub doesn't receive inputs, the timeline for that partition doesn't progress until it reaches the late arrival tolerance threshold. This delays your output by the late arrival tolerance threshold. In such cases, you may see the following message:
+When multiple partitions from the same input stream are combined, the late arrival tolerance is the maximum amount of time that every partition waits for new data. If there's one partition in your event hub, or if IoT Hub doesn't receive inputs, the timeline for that partition doesn't progress until it reaches the late arrival tolerance threshold. This delays your output by the late arrival tolerance threshold. In such cases, you may see the following message:
<br><code> {"message Time":"2/3/2019 8:54:16 PM UTC","message":"Input Partition [2] does not have additional data for more than [5] minute(s). Partition will not progress until either events arrive or late arrival threshold is met.","type":"InputPartitionNotProgressing","correlation ID":"2328d411-52c7-4100-ba01-1e860c757fc2"} </code><br><br>
-This message to inform you that at least one partition in your input is empty and will delay your output by the late arrival threshold. To overcome this, it is recommended you either:
+This message informs you that at least one partition in your input is empty and will delay your output by the late arrival threshold. To overcome this, it's recommended that you either:
1. Ensure all partitions of your Event Hub/IoT Hub receive input. 2. Use Partition by PartitionID clause in your query. ## Why do I see a delay of 5 seconds even when my late arrival policy is set to 0?
-This happens when there is an input partition that has never received any input. You can verify the input metrics by partition to validate this behavior.
+This happens when there's an input partition that has never received any input. You can verify the input metrics by partition to validate this behavior.
-When a partition does not have any data for more than the configured late arrival threshold, stream analytics advances application timestamp as explained in event ordering considerations section. This requires estimated arrival time. If the partition never had any data, stream analytics estimates the arrival time as *local time - 5 seconds*. Due to this partitions that never had any data could show a watermark delay of 5 seconds.
+When a partition doesn't have any data for more than the configured late arrival threshold, stream analytics advances application timestamp as explained in event ordering considerations section. This requires estimated arrival time. If the partition never had any data, stream analytics estimates the arrival time as *local time - 5 seconds*. Due to this, partitions that never had any data could show a watermark delay of 5 seconds.
## Next steps * [Time handling considerations](stream-analytics-time-handling.md)
-* [Metrics available in Stream Analytics](./stream-analytics-monitoring.md#metrics-available-for-stream-analytics)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
stream-analytics Filter Ingest Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-data-lake-storage-gen2.md
Here's a sample **Metrics** page:
Learn more about Azure Stream Analytics and how to monitor the job you've created. - [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)-- [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
+- [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Filter Ingest Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-synapse-sql.md
You can see the job under the Process Data section on the **Stream Analytics job
Learn more about Azure Stream Analytics and how to monitor the job you've created. * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
+* [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Job States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/job-states.md
A Stream Analytics job could be in one of four states at any given time: running
## Next steps * [Setup alerts for Azure Stream Analytics jobs](stream-analytics-set-up-alerts.md)
-* [Metrics available in Stream Analytics](./stream-analytics-monitoring.md#metrics-available-for-stream-analytics)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
* [Troubleshoot using activity and resource logs](./stream-analytics-job-diagnostic-logs.md)
stream-analytics No Code Materialize Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-materialize-cosmos-db.md
To start the job, you must specify:
Now you know how to use the Stream Analytics no code editor to develop a job that reads from Event Hubs, calculates aggregates such as counts and averages, and writes them to your Azure Cosmos DB resource. * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
-* [Monitor Stream Analytics jobs](stream-analytics-monitoring.md)
+* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
You can:
- Perform data preparation operations like joins and filters - Tackle advanced scenarios such as time-window aggregations (tumbling, hopping, and session windows) for group-by operations
-After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](stream-analytics-monitoring.md) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running.
+After you create and run your Stream Analytics jobs, you can easily operationalize production workloads. Use the right set of [built-in metrics](stream-analytics-job-metrics.md) for monitoring and troubleshooting purposes. Stream Analytics jobs are billed according to the [pricing model](https://azure.microsoft.com/pricing/details/stream-analytics/) when they're running.
## Prerequisites
You can see the list of all Stream Analytics jobs created by no-code drag and dr
- Status – The status of the job. Select Refresh on top of the list to see the latest status. - Streaming units – The number of Streaming units selected when you started the job. - Output watermark - An indicator of liveliness for the data produced by the job. All events before the timestamp are already computed.-- Job monitoring – Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Metrics available for Stream Analytics](stream-analytics-monitoring.md#metrics-available-for-stream-analytics).
+- Job monitoring – Select **Open metrics** to see the metrics related to this Stream Analytics job. For more information about the metrics you can use to monitor your Stream Analytics job, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
- Operations – Start, stop, or delete the job. ## Next steps
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
stream-analytics Stream Analytics Job Analysis With Metric Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-analysis-with-metric-dimensions.md
+
+ Title: Analyze Azure Stream Analytics job performance by using metric dimensions
+description: This article describes how to analyze a Stream Analytics job's performance by using metric dimensions.
+++++ Last updated : 07/07/2022+
+# Analyze Stream Analytics job performance with metrics dimensions
+
+To understand a Stream Analytics job's health, it's important to know how to use the job's metrics and dimensions. You can use the Azure portal, the VS Code ASA extension, or the SDK to get and view the metrics and dimensions you're interested in.
+
+This article demonstrates how to use Stream Analytics job metrics and dimensions to analyze the job's performance through the Azure portal.
+
+Watermark delay and backlogged input events are the main metrics for determining the performance of your Stream Analytics job. If your job's watermark delay is continuously increasing and input events are backlogged, your job can't keep up with the rate of input events and produce outputs in a timely manner. Let's look at several examples that analyze a job's performance, starting from the watermark delay metric data.
+
+## No input for a certain partition causes increasing job watermark delay
+
+If your embarrassingly parallel job's watermark delay is steadily increasing, go to **Metrics** and follow these steps to find out whether the root cause is a lack of data in some partitions of your input source.
+1. First, check which partition has increasing watermark delay by selecting the watermark delay metric and splitting it by the "Partition ID" dimension. For example, suppose you identify that partition #465 has a high watermark delay.
+
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png" alt-text="Diagram that show the watermark delay splitting with Partition ID for the case of no input in certain partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/01-watermark-delay-splitting-with-partition-id.png":::
+
+2. You can then check whether any input data is missing for this partition. To do so, select the Input Events metric and filter it to this specific partition ID.
+
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png" alt-text="Diagram that shows the Input Events splitting with Partition ID for the case of no input in certain partition." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/02-input-events-splitting-with-partition-id.png":::
++
+What further action could you take?
+
+- As you can see, the watermark delay for this partition is increasing because no input events are flowing into this partition. If your job's late arrival tolerance window is several hours and no input data is flowing into a partition, the watermark delay for that partition is expected to keep increasing until the late arrival window is reached. For example, if your late arrival tolerance is 6 hours and input data isn't flowing into input partition 1, the watermark delay for output partition 1 will increase until it reaches 6 hours. Check whether your input source is producing data as expected.
++
+## Input data-skew causes high watermark delay
+
+As mentioned in the case above, when your embarrassingly parallel job has high watermark delay, the first thing to do is split the watermark delay metric by the "Partition ID" dimension to identify whether all the partitions have high watermark delay, or just a few of them.
+
+For this example, you can start by splitting the watermark delay metric by the **Partition ID** dimension.
++
+As you can see, partition #0 and partition #1 have a higher watermark delay (20 to 30 seconds) than the other eight partitions, whose watermark delays are steady at about 8 to 10 seconds. Then, let's check what the input data looks like for all these partitions by splitting the "Input Events" metric by "Partition ID":
+++
+What further action could you take?
+
+As shown in the screenshot above, partition #0 and partition #1, which have high watermark delay, are receiving significantly more input data than the other partitions. We call this "data skew". The streaming nodes that process the partitions with data skew need to consume more resources (CPU and memory) than the others, as shown below.
+++
+Streaming nodes that process partitions with higher data skew exhibit higher CPU and/or SU (memory) utilization, which affects the job's performance and results in increasing watermark delay. To mitigate this, repartition your input data more evenly.
+
+## Overloaded CPU or memory causes increasing watermark delay
+
+When an embarrassingly parallel job has increasing watermark delay, it may happen not just on one or several partitions, but on all of them. How can you confirm that your job falls into this case?
+1. First, split the watermark delay by the "Partition ID" dimension, the same as in the case above. For example:
+
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png" alt-text="Diagram that shows the watermark delay splitting with Partition ID for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/06-watermark-delay-splitting-with-partition-id-all-increasing.png":::
++
+2. Split the "Input Events" metric by "Partition ID" to confirm whether there's data skew in the input data per partition.
+3. Then, check the CPU and SU utilization to see if the utilization in all streaming nodes is too high.
+
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png" alt-text="Diagram that show the CPU and memory utilization splitting by Node name for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/07-cpu-and-memory-utilization-splitting-with-node-name.png":::
++
+4. If the CPU and SU utilization is very high (>80%) in all streaming nodes, you can conclude that a large amount of data is being processed within each streaming node. You can further check how many partitions are allocated to one streaming node by filtering the "Input Events" metric to a streaming node ID with the "Node Name" dimension and splitting by "Partition ID". See the screenshot below:
+
+ :::image type="content" source="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png" alt-text="Diagram that shows the partition count on one streaming node for the case of overloaded cpu and memory." lightbox="./media/stream-analytics-job-analysis-with-metric-dimensions/08-partition-count-on-one-streaming-node.png":::
+
+5. From the above screenshot, you can see that four partitions are allocated to one streaming node, which occupies nearly 90% to 100% of the streaming node's resources. You can use a similar approach to check the remaining streaming nodes and confirm that they're also processing data from four partitions.
+
+What further action could you take?
+
+1. Naturally, you'd want to reduce the partition count per streaming node so that each node processes less input data. To achieve this, you can double the SUs so that each streaming node handles data from two partitions, or quadruple the SUs so that each streaming node handles data from one partition. Refer to [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md) for the relationship between SU assignment and streaming node count.
+2. What should you do if the watermark delay is still increasing when one streaming node handles the data from one partition? Repartition your input with more partitions to reduce the amount of data in each partition. For details, see [Use repartitioning to optimize Azure Stream Analytics jobs](./repartition.md).
+++
+## Next steps
+
+* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Metrics Dimensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics-dimensions.md
+
+ Title: Azure Stream Analytics metrics dimensions
+description: This article describes the Azure Stream Analytics metric dimensions.
+++++ Last updated : 06/30/2022+
+# Azure Stream Analytics metrics dimensions
+
+Stream Analytics provides a serverless, distributed stream-processing service. Jobs can run on one or more distributed streaming nodes, which the service automatically manages. The input data is partitioned and allocated to different streaming nodes for processing. Azure Stream Analytics has many metrics available to monitor a job's health. Metrics can be split by dimensions, such as Partition ID or Node Name, which helps you troubleshoot performance issues with your job. For the full list of metrics, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
+
+## Stream Analytics metrics dimensions
+
+Azure Stream Analytics provides three important dimensions for splitting and filtering metrics: "Logic Name", "Partition ID", and "Node Name".
+
+| Dimension | Definition |
+| - | - |
+| Logic Name | The input or output name for a given Azure Stream Analytics (ASA) job. |
+| Partition ID | The ID of the input data partition from the input source. For example, if the input source is an event hub, the partition ID is the event hub's partition ID. For an embarrassingly parallel job, the "Partition ID" in the output is the same as the input partition ID. |
+| Node Name | The identifier of a streaming node that's provisioned when your job runs. A streaming node represents the amount of compute and memory resources allocated to your job. |
++++
+## "Logic Name" dimension
+
+The "Logic Name" is the input or output name for a given Azure Stream Analytics (ASA) job. For example, if an ASA job has four inputs and five outputs, you'll see the four individual logical inputs and five individual logical outputs when splitting input- and output-related metrics (for example, Input Events and Output Events) by this dimension.
++
+++
+The "Logic Name" dimension is available for filtering and splitting the following metrics:
+- Backlogged Input Events
+- Data Conversion Errors
+- Early Input Events
+- Input Deserialization Errors
+- Input Event Bytes
+- Input Events
+- Input Source Received
+- Late Input Events
+- Out of order Events
+- Output Events
+- Watermark delay
+
+## "Node Name" dimension
+
+A streaming node represents a set of compute resources that's used to process your input data. Every six Streaming Units (SUs) translate to one node, which the service automatically manages on your behalf; for example, a job with 18 SUs runs on three streaming nodes. For more information on the relationship between streaming units and streaming nodes, see [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md).
+
+The "Node Name" is a streaming-node-level dimension that can help you drill down certain metrics to the specific streaming node level. For example, the CPU utilization metric can be split by streaming node to check the CPU utilization of an individual streaming node.
++
+The "Node Name" dimension is available for filtering and splitting the following metrics:
+- CPU % Utilization (Preview)
+- SU % Utilization
+- Input Events
+
+## "Partition ID" dimension
+
+When streaming data is ingested into the Azure Stream Analytics service for processing, the input data is distributed to streaming nodes according to the partitions in the input source. The "Partition ID" is the ID of the input data partition from the input source. For example, if the input source is an event hub, the partition ID is the event hub's partition ID. The "Partition ID" in the output is the same as in the input.
+++
+The "Partition ID" dimension is available for filtering and splitting the following metrics:
+- Backlogged Input Events
+- Data Conversion Errors
+- Early Input Events
+- Input Deserialization Errors
+- Input Event Bytes
+- Input Events
+- Input Source Received
+- Late Input Events
+- Output Events
+- Watermark delay
++
+## Next steps
+
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
+* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
stream-analytics Stream Analytics Job Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-job-metrics.md
+
+ Title: Azure Stream Analytics job metrics
+description: This article describes Azure Stream Analytics job metrics.
++++ Last updated : 07/07/2022+++
+# Azure Stream Analytics job metrics
+
+Azure Stream Analytics provides many metrics that you can use to monitor and troubleshoot your query and job performance. You can view data from these metrics in the **Monitoring** section of the Azure portal, on the **Overview** page.
++
+You can also navigate to the **Monitoring** section and select **Metrics**. The metrics page opens so you can add the specific metric you'd like to check.
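+
+If you prefer the command line, here's a hedged Azure CLI sketch for pulling one of these metrics (it assumes the metric ID `OutputWatermarkDelaySeconds` for the watermark delay metric; substitute your own subscription, resource group, and job names):
+
+```azurecli
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.StreamAnalytics/streamingjobs/<job-name>" \
+  --metric "OutputWatermarkDelaySeconds" \
+  --aggregation Maximum \
+  --interval PT5M
+```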
++
+## Metrics available for Stream Analytics
+
+Azure Stream Analytics provides the following metrics for you to monitor your job's health.
+
+| Metric | Definition |
+| - | - |
+| Backlogged Input Events | Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job. You can learn more by visiting [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md). |
+| Data Conversion Errors | Number of output events that couldn't be converted to the expected output schema. Error policy can be changed to 'Drop' to drop events that encounter this scenario. |
+| CPU % Utilization (preview) | The percentage of CPU utilized by your job. Even if this value is very high (90% or above), you shouldn't increase the number of SUs based on this metric alone. If the number of backlogged input events or watermark delay increases, you can then use this CPU % utilization metric to determine whether CPU is the bottleneck. This metric may have intermittent spikes. It's recommended to do scale tests to determine the upper bound of your job, after which inputs get backlogged or watermark delay increases because of a CPU bottleneck. |
+| Early Input Events | Events whose application timestamp is earlier than their arrival time by more than 5 minutes. |
+| Failed Function Requests | Number of failed Azure Machine Learning function calls (if present). |
+| Function Events | Number of events sent to the Azure Machine Learning function (if present). |
+| Function Requests | Number of calls to the Azure Machine Learning function (if present). |
+| Input Deserialization Errors | Number of input events that couldn't be deserialized. |
+| Input Event Bytes | Amount of data received by the Stream Analytics job, in bytes. This can be used to validate that events are being sent to the input source. |
+| Input Events | Number of records deserialized from the input events. This count doesn't include incoming events that result in deserialization errors. The same events can be ingested by Stream Analytics multiple times in scenarios such as internal recoveries and self joins. Therefore it is recommended not to expect Input Events and Output Events metrics to match if your job has a simple 'pass through' query. |
+| Input Sources Received | Number of messages received by the job. For Event Hub, a message is a single EventData. For Blob, a message is a single blob. Please note that Input Sources are counted before deserialization. If there are deserialization errors, input sources can be greater than input events. Otherwise, it can be less than or equal to input events since each message can contain multiple events. |
+| Late Input Events | Events that arrived later than the configured late arrival tolerance window. Learn more about [Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md) . |
+| Out-of-Order Events | Number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This can be impacted by the configuration of the Out of Order Tolerance Window setting. |
+| Output Events | Amount of data sent by the Stream Analytics job to the output target, in number of events. |
+| Runtime Errors | Total number of errors related to query processing (excluding errors found while ingesting events or outputting results) |
+| SU (Memory) % Utilization | The percentage of memory utilized by your job. If SU % utilization is consistently over 80%, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units. High utilization indicates that the job is using close to the maximum allocated resources. |
+| Watermark Delay | The maximum watermark delay across all partitions of all outputs in the job. |
+
+You can use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-set-up-alerts.md#scenarios-to-monitor).
++
+## Get help
+For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html).
+
+## Next steps
+* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md)
+* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
+* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
+* [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
+
stream-analytics Stream Analytics Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-monitoring.md
Title: Understand job monitoring in Azure Stream Analytics
+ Title: Monitor Stream Analytics job with Azure portal
description: This article describes how to monitor Azure Stream Analytics jobs in the Azure portal.
Last updated 03/08/2021
-# Understand Stream Analytics job monitoring and how to monitor queries
+# Monitor Stream Analytics job with Azure portal
-## Introduction: The monitor page
-The Azure portal surfaces key performance metrics that can be used to monitor and troubleshoot your query and job performance. To see these metrics, browse to the Stream Analytics job you are interested in seeing metrics for and view the **Monitoring** section on the Overview page.
+The Azure portal surfaces key performance metrics that can be used to monitor and troubleshoot your query and job performance. This article demonstrates how to monitor your Stream Analytics job in the Azure portal with these metrics.
-![Stream Analytics job monitoring link](./media/stream-analytics-monitoring/02-stream-analytics-monitoring-block.png)
+## Azure portal monitor page
+To see Azure Stream Analytics job metrics, browse to the Stream Analytics job you're interested in seeing metrics for and view the **Monitoring** section on the **Overview** page.
-The window will appear as shown:
-![Stream Analytics job monitoring dashboard](./media/stream-analytics-monitoring/01-stream-analytics-monitoring.png)
+Alternatively, browse to the **Monitoring** blade in the left panel and select **Metrics**. The metrics page appears, where you can add the specific metric you'd like to check:
-## Metrics available for Stream Analytics
-| Metric | Definition |
-| - | - |
-| Backlogged Input Events | Number of input events that are backlogged. A non-zero value for this metric implies that your job isn't able to keep up with the number of incoming events. If this value is slowly increasing or consistently non-zero, you should scale out your job. You can learn more by visiting [Understand and adjust Streaming Units](stream-analytics-streaming-unit-consumption.md). |
-| Data Conversion Errors | Number of output events that could not be converted to the expected output schema. Error policy can be changed to 'Drop' to drop events that encounter this scenario. |
-| CPU % Utilization (preview) | The percentage of CPU utilized by your job. Even if this value is very high (90% or above), you should not increase number of SUs based on this metric alone. If number of backlogged input events or watermark delay increases, you can then use this CPU% utilization metric to determine if CPU is the bottleneck. It is possible that this metric has spikes intermittently. It is recommended to do scale tests to determine upper bound of your job after which inputs get backlogged or watermark delay increases due to CPU bottleneck. |
-| Early Input Events | Events whose application timestamp is earlier than their arrival time by more than 5 minutes. |
-| Failed Function Requests | Number of failed Azure Machine Learning function calls (if present). |
-| Function Events | Number of events sent to the Azure Machine Learning function (if present). |
-| Function Requests | Number of calls to the Azure Machine Learning function (if present). |
-| Input Deserialization Errors | Number of input events that could not be deserialized. |
-| Input Event Bytes | Amount of data received by the Stream Analytics job, in bytes. This can be used to validate that events are being sent to the input source. |
-| Input Events | Number of records deserialized from the input events. This count does not include incoming events that result in deserialization errors. The same events can be ingested by Stream Analytics multiple times in scenarios such as internal recoveries and self joins. Therefore it is recommended not to expect Input Events and Output Events metrics to match if your job has a simple 'pass through' query. |
-| Input Sources Received | Number of messages received by the job. For Event Hub, a message is a single EventData. For Blob, a message is a single blob. Please note that Input Sources are counted before deserialization. If there are deserialization errors, input sources can be greater than input events. Otherwise, it can be less than or equal to input events since each message can contain multiple events. |
-| Late Input Events | Events that arrived later than the configured late arrival tolerance window. Learn more about [Azure Stream Analytics event order considerations](./stream-analytics-time-handling.md) . |
-| Out-of-Order Events | Number of events received out of order that were either dropped or given an adjusted timestamp, based on the Event Ordering Policy. This can be impacted by the configuration of the Out of Order Tolerance Window setting. |
-| Output Events | Amount of data sent by the Stream Analytics job to the output target, in number of events. |
-| Runtime Errors | Total number of errors related to query processing (excluding errors found while ingesting events or outputting results) |
-| SU % Utilization | The percentage of memory utilized by your job. If SU % utilization is consistently over 80%, the watermark delay is rising, and the number of backlogged events is rising, consider increasing streaming units. High utilization indicates that the job is using close to the maximum allocated resources. |
-| Watermark Delay | The maximum watermark delay across all partitions of all outputs in the job. |
-You can use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-set-up-alerts.md#scenarios-to-monitor).
+The Azure Stream Analytics service provides 17 metrics. To learn about them in detail, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md).
-## Customizing Monitoring in the Azure portal
-You can adjust the type of chart, metrics shown, and time range in the Edit Chart settings. For details, see [How to Customize Monitoring](../azure-monitor/data-platform.md).
+You can also use these metrics to [monitor the performance of your Stream Analytics job](./stream-analytics-set-up-alerts.md#scenarios-to-monitor).
- ![Stream Analytics query monitor time graph](./media/stream-analytics-monitoring/08-stream-analytics-monitoring.png)
+## Operate on and aggregate metrics in the portal monitor
+
+Several options are available for you to operate on and aggregate the metrics in the portal monitoring page.
+
+To check the metrics data for a specific dimension, you can use **Add filter**. Three important metrics dimensions are available. To learn more about the metric dimensions, see [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md).
++
+To check the metrics data per dimension, you can use **Apply splitting**.
++
+You can also specify the time range to view the metrics you are interested in.
++
+For details, see [How to Customize Monitoring](../azure-monitor/data-platform.md).
-## Latest output
-Another interesting data point to monitor your job is the time of the last output, shown in the Overview page.
-This time is the application time (i.e. the time using the timestamp from the event data) of the latest output of your job.
## Get help For further assistance, try our [Microsoft Q&A question page for Azure Stream Analytics](/answers/topics/azure-stream-analytics.html)
For further assistance, try our [Microsoft Q&A question page for Azure Stream An
## Next steps * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) * [Get started using Azure Stream Analytics](stream-analytics-real-time-fraud-detection.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
* [Scale Azure Stream Analytics jobs](stream-analytics-scale-jobs.md) * [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference) * [Azure Stream Analytics Management REST API Reference](/rest/api/streamanalytics/)
stream-analytics Stream Analytics Real Time Fraud Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-real-time-fraud-detection.md
Title: Tutorial - Analyze fraudulent call data with Azure Stream Analytics and visualize results in Power BI dashboard description: This tutorial provides an end-to-end demonstration of how to use Azure Stream Analytics to analyze fraudulent calls in a phone call stream.--++ Previously updated : 12/17/2020 Last updated : 07/08/2022 #Customer intent: As an IT admin/developer, I want to run a Stream Analytics job to analyze phone call data and visualize results in a Power BI dashboard. # Tutorial: Analyze fraudulent call data with Stream Analytics and visualize results in Power BI dashboard
-This tutorial shows you how to analyze phone call data using Azure Stream Analytics. The phone call data, generated by a client application, contains fraudulent calls, which are filtered by the Stream Analytics job. You can use the techniques from this tutorial for other types of fraud detection, such as credit card fraud or identity theft.
+This tutorial shows you how to analyze phone call data using Azure Stream Analytics. The phone call data, generated by a client application, contains fraudulent calls, which are detected by the Stream Analytics job. You can use the techniques from this tutorial for other types of fraud detection, such as credit card fraud or identity theft.
In this tutorial, you learn how to:
Use the following steps to create an event hub and send call data to that event
![Event hub configuration in the Azure portal](media/stream-analytics-real-time-fraud-detection/create-event-hub-portal.png)
+7. After the deployment is complete, select **Configuration** under **Settings** in your Event Hubs namespace and change the Minimum TLS version to **Version 1.0**.
+ ![Screenshot of Event hub TLS configuration version 1.0 in the Azure portal.](media/stream-analytics-real-time-fraud-detection/event-hubs-tls-version.png)
+ ### Grant access to the event hub and get a connection string Before an application can send data to Azure Event Hubs, the event hub must have a policy that allows access. The access policy produces a connection string that includes authorization information.
For this transformation, you want a sequence of temporal windows that don't over
GROUP BY TUMBLINGWINDOW(s, 5), SwitchNum ```
- This query uses the `Timestamp By` keyword in the `FROM` clause to specify which timestamp field in the input stream to use to define the Tumbling window. In this case, the window divides the data into segments by the `CallRecTime` field in each record. (If no field is specified, the windowing operation uses the time that each event arrives at the event hub. See "Arrival Time Vs Application Time" in [Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference).
+ This query uses the `Timestamp By` keyword in the `FROM` clause to specify which timestamp field in the input stream to use to define the Tumbling window. In this case, the window divides the data into segments by the `CallRecTime` field in each record. (If no field is specified, the windowing operation uses the time that each event arrives at the event hub. See "Arrival Time vs Application Time" in [Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference).)
The projection includes `System.Timestamp`, which returns a timestamp for the end of each window.
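+
+ Assembled, the query for this step might look like the following sketch; the `CallStream` input alias is an assumption based on the input defined earlier in the tutorial:
+
+ ```sql
+ SELECT System.Timestamp AS WindowEnd, SwitchNum, COUNT(*) AS CallCount
+ FROM CallStream TIMESTAMP BY CallRecTime
+ GROUP BY TUMBLINGWINDOW(s, 5), SwitchNum
+ ```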
When you use a join with streaming data, the join must provide some limits on ho
6. Follow the step 5 again with the following options: * When you get to Visualization Type, select Line chart. * Add an axis and select **windowend**.
- * Add a value and select **fraudulentcalls**.
+ * Add a value and select **fraudulent calls**.
* For **Time window to display**, select the last 10 minutes. 7. Your dashboard should look like the example below once both tiles are added. Notice that, if your event hub sender application and Streaming Analytics application are running, your Power BI dashboard periodically updates as new data arrives.
stream-analytics Stream Analytics Solution Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-solution-patterns.md
If you combine the offline analytics pattern with the near real-time application
## How to monitor ASA jobs
-An Azure Stream Analytics job can be run 24/7 to process incoming events continuously in real time. Its uptime guarantee is crucial to the health of the overall application. While Stream Analytics is the only streaming analytics service in the industry that offers a [99.9% availability guarantee](https://azure.microsoft.com/support/legal/sl).
+An Azure Stream Analytics job can be run 24/7 to process incoming events continuously in real time. Its uptime guarantee is crucial to the health of the overall application. Stream Analytics is the only streaming analytics service in the industry that offers a [99.9% availability guarantee](https://azure.microsoft.com/support/legal/sl).
![ASA monitoring](media/stream-analytics-solution-patterns/monitoring.png)
stream-analytics Stream Analytics Streaming Unit Consumption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-streaming-unit-consumption.md
Title: Streaming Units in Azure Stream Analytics
-description: This article describes the Streaming Units setting and other factors that impact performance in Azure Stream Analytics.
-
+ Title: Understand and adjust Azure Stream Analytics streaming units
+description: This article describes the streaming units setting and other factors that affect performance in Azure Stream Analytics.
Previously updated : 08/28/2020 Last updated : 07/07/2022
+# Understand and adjust Stream Analytics streaming units
-# Understand and adjust Streaming Units
+## Understand streaming units and streaming nodes
Streaming Units (SUs) represent the computing resources that are allocated to execute a Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated for your job. This capacity lets you focus on the query logic and abstracts the need to manage the hardware to run your Stream Analytics job in a timely manner.
-To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage, and make sure there is enough resource allocated to keep the jobs running 24/7.
+Every 6 SUs correspond to one streaming node for your job. Jobs with 1 or 3 SUs also have only one streaming node, but with a fraction of the computing resources compared to 6 SUs. The 1 and 3 SU jobs provide a cost-effective option for workloads that require smaller scale. Your job can scale beyond 6 SUs to 12, 18, 24, and more by adding streaming nodes that provide more distributed computing resources, allowing your job to process higher data volumes.
+
+To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage, and make sure there are enough resources allocated to keep the job running 24/7.
+
+The SU % utilization metric, which ranges from 0% to 100%, describes the memory consumption of your workload. For a streaming job with minimal footprint, this metric is usually between 10% to 20%. If SU% utilization is high (above 80%), or if input events get backlogged (even with a low SU% utilization since it doesn't show CPU usage), your workload likely requires more compute resources, which requires you to increase the number of SUs. It's best to keep the SU metric below 80% to account for occasional spikes. To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU Utilization metric. Also, you can use watermark delay and backlogged events metrics to see if there's an impact.
+
-The SU % utilization metric, which ranges from 0% to 100%, describes the memory consumption of your workload. For a streaming job with minimal footprint, this metric is usually between 10% to 20%. If SU% utilization is high (above 80%), or if input events get backlogged (even with a low SU% utilization since it does not show CPU usage), your workload likely requires more compute resources, which requires you to increase the number of SUs. It's best to keep the SU metric below 80% to account for occasional spikes. To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU Utilization metric. Also, you can use watermark delay and backlogged events metrics to see if there is an impact.
-## Configure Stream Analytics Streaming Units (SUs)
+## Configure Stream Analytics streaming units (SUs)
1. Sign in to [Azure portal](https://portal.azure.com/) 2. In the list of resources, find the Stream Analytics job that you want to scale and then open it.  3. In the job page, under the **Configure** heading, select **Scale**. Default number of SUs is 3 when creating a job.
- ![Azure portal Stream Analytics job configuration][img.stream.analytics.preview.portal.settings.scale]
+ :::image type="content" source="./media/stream-analytics-scale-jobs/stream-analytics-preview-portal-job-settings-new-portal.png" alt-text="Diagram to show Azure portal Stream Analytics job configuration." lightbox="./media/stream-analytics-scale-jobs/stream-analytics-preview-portal-job-settings-new-portal.png":::
-4. Use the slider to set the SUs for the job. Notice that you are limited to specific SU settings. 
-5. You can change the number of SUs assigned to your job even when it is running. This is not possible if your job uses a [non-partitioned output](./stream-analytics-parallelization.md#query-using-non-partitioned-output) or has [a multi-step query with different PARTITION BY values](./stream-analytics-parallelization.md#multi-step-query-with-different-partition-by-values). You maybe restricted to choosing from a set of SU values when the job is running.
+4. Choose the SU option in drop-down list to set the SUs for the job. Notice that you're limited to specific SU settings. 
+5. You can change the number of SUs assigned to your job even when it's running. This isn't possible if your job uses a [non-partitioned output](./stream-analytics-parallelization.md#query-using-non-partitioned-output) or has [a multi-step query with different PARTITION BY values](./stream-analytics-parallelization.md#multi-step-query-with-different-partition-by-values). You may be restricted to choosing from a set of SU values when the job is running.
## Monitor job performance
-Using the Azure portal, you can track the throughput of a job:
+Using the Azure portal, you can track the performance-related metrics of a job. To learn about the metrics definitions, see [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md). To learn more about monitoring metrics in the portal, see [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md).
-![Azure Stream Analytics monitor jobs][img.stream.analytics.monitor.job]
Calculate the expected throughput of the workload. If the throughput is less than expected, tune the input partition, tune the query, and add SUs to your job. ## How many SUs are required for a job?
-Choosing the number of required SUs for a particular job depends on the partition configuration for the inputs and the query that's defined within the job. The **Scale** page allows you to set the right number of SUs. It is a best practice to allocate more SUs than needed. The Stream Analytics processing engine optimizes for latency and throughput at the cost of allocating additional memory.
+Choosing the number of required SUs for a particular job depends on the partition configuration for the inputs and the query that's defined within the job. The **Scale** page allows you to set the right number of SUs. It's a best practice to allocate more SUs than needed. The Stream Analytics processing engine optimizes for latency and throughput at the cost of allocating additional memory.
In general, the best practice is to start with 6 SUs for queries that don't use **PARTITION BY**. Then determine the sweet spot by using a trial and error method in which you modify the number of SUs after you pass representative amounts of data and examine the SU% Utilization metric. The maximum number of streaming units that can be used by a Stream Analytics job depends on the number of steps in the query defined for the job and the number of partitions in each step. You can learn more about the limits [here](./stream-analytics-parallelization.md#calculate-the-maximum-streaming-units-of-a-job).
For more information about choosing the right number of SUs, see this page: [Sca
## Factors that increase SU% utilization 
-Temporal (time-oriented) query elements are the core set of stateful operators provided by Stream Analytics. Stream Analytics manages the state of these operations internally on user's behalf, by managing memory consumption, checkpointing for resiliency, and state recovery during service upgrades. Even though Stream Analytics fully manages the states, there are a number of best practice recommendations that users should consider.
+Temporal (time-oriented) query elements are the core set of stateful operators provided by Stream Analytics. Stream Analytics manages the state of these operations internally on the user's behalf, by managing memory consumption, checkpointing for resiliency, and state recovery during service upgrades. Even though Stream Analytics fully manages the states, there are many best practice recommendations that users should consider.
-Note that a job with complex query logic could have high SU% utilization even when it is not continuously receiving input events. This can happen after a sudden spike in input and output events. The job might continue to maintain state in memory if the query is complex.
+Note that a job with complex query logic could have high SU% utilization even when it isn't continuously receiving input events. This can happen after a sudden spike in input and output events. The job might continue to maintain state in memory if the query is complex.
-SU% utilization may suddenly drop to 0 for a short period before coming back to expected levels. This happens due to transient errors or system initiated upgrades. Increasing number of streaming units for a job might not reduce SU% Utilization if your query is not [fully parallel](./stream-analytics-parallelization.md).
+SU% utilization may suddenly drop to 0 for a short period before coming back to expected levels. This happens due to transient errors or system initiated upgrades. Increasing number of streaming units for a job might not reduce SU% Utilization if your query isn't [fully parallel](./stream-analytics-parallelization.md).
-While comparing utilization over a period of time, use [event rate metrics](stream-analytics-monitoring.md). InputEvents and OutputEvents metrics show how many events were read and processed. There are metrics that indicate number of error events as well, such as deserialization errors. When the number of events per time unit increases, SU% increases in most cases.
+While comparing utilization over a period of time, use [event rate metrics](stream-analytics-job-metrics.md). InputEvents and OutputEvents metrics show how many events were read and processed. There are metrics that indicate number of error events as well, such as deserialization errors. When the number of events per time unit increases, SU% increases in most cases.
## Stateful query logic in temporal elements One of the unique capability of Azure Stream Analytics job is to perform stateful processing, such as windowed aggregates, temporal joins, and temporal analytic functions. Each of these operators keeps state information. The maximum window size for these query elements is seven days.
The temporal window concept appears in several Stream Analytics query elements:
The following factors influence the memory used (part of streaming units metric) by Stream Analytics jobs: ## Windowed aggregates
-The memory consumed (state size) for a windowed aggregate is not always directly proportional to the window size. Instead, the memory consumed is proportional to the cardinality of the data, or the number of groups in each time window.
+The memory consumed (state size) for a windowed aggregate isn't always directly proportional to the window size. Instead, the memory consumed is proportional to the cardinality of the data, or the number of groups in each time window.
For example, in the following query, the number associated with `clusterid` is the cardinality of the query. 
For example, in the following query, the number associated with `clusterid` is t
GROUP BY clusterid, tumblingwindow (minutes, 5) ```
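+For reference, the complete query might read as follows, where `input` is a hypothetical input alias:
+
+```sql
+SELECT count(*)
+FROM input
+GROUP BY clusterid, tumblingwindow (minutes, 5)
+```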
-In order to mitigate any issues caused by high cardinality in the previous query, you can send events to Event Hub partitioned by `clusterid`, and scale out the query by allowing the system to process each input partition separately using **PARTITION BY** as shown in the example below:
+In order to mitigate any issues caused by high cardinality in the previous query, you can send events to Event Hubs partitioned by `clusterid`, and scale out the query by allowing the system to process each input partition separately using **PARTITION BY** as shown in the example below:
```sql SELECT count(*)
In order to mitigate any issues caused by high cardinality in the previous query
GROUP BY PartitionId, clusterid, tumblingwindow (minutes, 5) ```
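+The partitioned version, completed with the same hypothetical `input` alias, might read:
+
+```sql
+SELECT count(*)
+FROM input PARTITION BY PartitionId
+GROUP BY PartitionId, clusterid, tumblingwindow (minutes, 5)
+```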
-Once the query is partitioned out, it is spread out over multiple nodes. As a result, the number of `clusterid` values coming into each node is reduced thereby reducing the cardinality of the group by operator. 
+Once the query is partitioned out, it's spread out over multiple nodes. As a result, the number of `clusterid` values coming into each node is reduced thereby reducing the cardinality of the group by operator. 
-Event Hub partitions should be partitioned by the grouping key to avoid the need for a reduce step. For more information, see [Event Hubs overview](../event-hubs/event-hubs-about.md). 
+Event Hubs partitions should be partitioned by the grouping key to avoid the need for a reduce step. For more information, see [Event Hubs overview](../event-hubs/event-hubs-about.md). 
## Temporal joins The memory consumed (state size) of a temporal join is proportional to the number of events in the temporal wiggle room of the join, which is event input rate multiplied by the wiggle room size. In other words, the memory consumed by joins is proportional to the DateDiff time range multiplied by average event rate.
The number of unmatched events in the join affect the memory utilization for the
INNER JOIN impressions ON impressions.id = clicks.id AND DATEDIFF(hour, impressions, clicks) between 0 AND 10. ```
-In this example, it is possible that lots of ads are shown and few people click on it and it is required to keep all the events in the time window. Memory consumed is proportional to the window size and event rate. 
+In this example, it's possible that lots of ads are shown and few people click on them, and the job is required to keep all the events in the time window. Memory consumed is proportional to the window size and event rate. 
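+A complete version of the join above might look like the following sketch; the stream aliases and the `eventTime` timestamp field are assumptions:
+
+```sql
+SELECT clicks.id
+FROM clicks TIMESTAMP BY eventTime
+INNER JOIN impressions TIMESTAMP BY eventTime
+    ON impressions.id = clicks.id
+    AND DATEDIFF(hour, impressions, clicks) BETWEEN 0 AND 10
+```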
-To remediate this, send events to Event Hub partitioned by the join keys (ID in this case), and scale out the query by allowing the system to process each input partition separately using **PARTITION BY** as shown:
+To remediate this, send events to Event Hubs partitioned by the join keys (ID in this case), and scale out the query by allowing the system to process each input partition separately using **PARTITION BY** as shown:
```sql SELECT clicks.id
To remediate this, send events to Event Hub partitioned by the join keys (ID in
ON impression.PartitionId = clicks.PartitionId AND impressions.id = clicks.id AND DATEDIFF(hour, impressions, clicks) between 0 AND 10  ```
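+A complete version of the partitioned join might read as follows; the `eventTime` timestamp field is an assumption:
+
+```sql
+SELECT clicks.id
+FROM clicks TIMESTAMP BY eventTime PARTITION BY PartitionId
+INNER JOIN impressions TIMESTAMP BY eventTime PARTITION BY PartitionId
+    ON impressions.PartitionId = clicks.PartitionId
+    AND impressions.id = clicks.id
+    AND DATEDIFF(hour, impressions, clicks) BETWEEN 0 AND 10
+```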
-Once the query is partitioned out, it is spread out over multiple nodes. As a result the number of events coming into each node is reduced thereby reducing the size of the state kept in the join window. 
+Once the query is partitioned out, it's spread out over multiple nodes. As a result the number of events coming into each node is reduced thereby reducing the size of the state kept in the join window. 
## Temporal analytic functions
-The memory consumed (state size) of a temporal analytic function is proportional to the event rate multiply by the duration. The memory consumed by analytic functions is not proportional to the window size, but rather partition count in each time window.
+The memory consumed (state size) of a temporal analytic function is proportional to the event rate multiplied by the duration. The memory consumed by analytic functions isn't proportional to the window size, but rather to the partition count in each time window.
The remediation is similar to temporal join. You can scale out the query using **PARTITION BY**.  ## Out of order buffer  Users can configure the out of order buffer size in the Event Ordering configuration pane. The buffer is used to hold inputs for the duration of the window, and reorder them. The size of the buffer is proportional to the event input rate multiplied by the out of order window size. The default window size is 0. 
-To remediate overflow of the out of order buffer, scale out query using **PARTITION BY**. Once the query is partitioned out, it is spread out over multiple nodes. As a result, the number of events coming into each node is reduced thereby reducing the number of events in each reorder buffer. 
+To remediate overflow of the out of order buffer, scale out query using **PARTITION BY**. Once the query is partitioned out, it's spread out over multiple nodes. As a result, the number of events coming into each node is reduced thereby reducing the number of events in each reorder buffer. 
## Input partition count 
+Each input partition of a job input has a buffer. The larger the number of input partitions, the more resources the job consumes. For each streaming unit, Azure Stream Analytics can process roughly 1 MB/s of input. Therefore, you can optimize by matching the number of Stream Analytics streaming units with the number of partitions in your event hub.
+Each input partition of a job input has a buffer. The larger number of input partitions, the more resource the job consumes. For each streaming unit, Azure Stream Analytics can process roughly 1 MB/s of input. Therefore, you can optimize by matching the number of Stream Analytics streaming units with the number of partitions in your event hub.
-Typically, a job configured with one streaming unit is sufficient for an Event Hub with two partitions (which is the minimum for Event Hub). If the Event Hub has more partitions, your Stream Analytics job consumes more resources, but not necessarily uses the extra throughput provided by Event Hub.
+Typically, a job configured with one streaming unit is sufficient for an event hub with two partitions (which is the minimum for an event hub). If the event hub has more partitions, your Stream Analytics job consumes more resources, but doesn't necessarily use the extra throughput provided by Event Hubs.
-For a job with 6 streaming units, you may need 4 or 8 partitions from the Event Hub. However, avoid too many unnecessary partitions since that causes excessive resource usage. For example, an Event Hub with 16 partitions or larger in a Stream Analytics job that has 1 streaming unit.
+For a job with 6 streaming units, you may need 4 or 8 partitions from the event hub. However, avoid too many unnecessary partitions since that causes excessive resource usage, for example, an event hub with 16 or more partitions for a Stream Analytics job that has 1 streaming unit.
## Reference data  Reference data in ASA are loaded into memory for fast lookup. With the current implementation, each join operation with reference data keeps a copy of the reference data in memory, even if you join with the same reference data multiple times. For queries with **PARTITION BY**, each partition has a copy of the reference data, so the partitions are fully decoupled. With the multiplier effect, memory usage can quickly get very high if you join with reference data multiple times with multiple partitions.  
When you add a UDF function, Azure Stream Analytics loads the JavaScript runtime
## Next steps * [Create parallelizable queries in Azure Stream Analytics](stream-analytics-parallelization.md) * [Scale Azure Stream Analytics jobs to increase throughput](stream-analytics-scale-jobs.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
+* [Azure Stream Analytics job metrics dimensions](./stream-analytics-job-metrics-dimensions.md)
+* [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
+* [Analyze Stream Analytics job performance with metrics dimensions](./stream-analytics-job-analysis-with-metric-dimensions.md)
+* [Understand and adjust Streaming Units](./stream-analytics-streaming-unit-consumption.md)
<!--Image references-->
-[img.stream.analytics.monitor.job]: ./media/stream-analytics-scale-jobs/StreamAnalytics.job.monitor-NewPortal.png
[img.stream.analytics.configure.scale]: ./media/stream-analytics-scale-jobs/StreamAnalytics.configure.scale.png [img.stream.analytics.perfgraph]: ./media/stream-analytics-scale-jobs/perf.png [img.stream.analytics.streaming.units.scale]: ./media/stream-analytics-scale-jobs/StreamAnalyticsStreamingUnitsExample.jpg
-[img.stream.analytics.preview.portal.settings.scale]: ./media/stream-analytics-scale-jobs/StreamAnalyticsPreviewPortalJobSettings-NewPortal.png
stream-analytics Stream Analytics Time Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-time-handling.md
You may have noticed another concept called early arrival window that looks like
Because Azure Stream Analytics guarantees complete results, you can only specify **job start time** as the first output time of the job, not the input time. The job start time is required so that the complete window is processed, not just from the middle of the window.
-Stream Analytics derives the start time from the query specification. However, because the input event broker is only indexed by arrival time, the system has to translate the starting event time to arrival time. The system can start processing events from that point in the input event broker. With the early arriving window limit, the translation is straightforward: starting event time minus the 5-minute early arriving window. This calculation also means that the system drops all events that are seen as having an event time 5 minutes earlier than the arrival time. The [early input events metric](stream-analytics-monitoring.md) is incremented when the events are dropped.
+Stream Analytics derives the start time from the query specification. However, because the input event broker is only indexed by arrival time, the system has to translate the starting event time to arrival time. The system can start processing events from that point in the input event broker. With the early arriving window limit, the translation is straightforward: starting event time minus the 5-minute early arriving window. This calculation also means that the system drops all events that are seen as having an event time 5 minutes earlier than the arrival time. The [early input events metric](stream-analytics-job-metrics.md) is incremented when the events are dropped.
This concept is used to ensure the processing is repeatable no matter where you start to output from. Without such a mechanism, it would not be possible to guarantee repeatability, as many other streaming systems claim they do.
Stream Analytics jobs have several **Event ordering** options. Two can be config
## Metrics to observe
-You can observe a number of the Event ordering time tolerance effects through [Stream Analytics job metrics](stream-analytics-monitoring.md). The following metrics are relevant:
+You can observe a number of the Event ordering time tolerance effects through [Azure Stream Analytics job metrics](stream-analytics-job-metrics.md). The following metrics are relevant:
|Metric | Description | |||
In this illustration, the following tolerances are used:
## Next steps - [Azure Stream Analytics event order considerations]()-- [Stream Analytics job metrics](stream-analytics-monitoring.md)
+* [Azure Stream Analytics job metrics](./stream-analytics-job-metrics.md)
stream-analytics Stream Analytics Troubleshoot Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-troubleshoot-output.md
This article describes common issues with Azure Stream Analytics output connecti
## The job doesn't produce output 1. Verify connectivity to outputs by using the **Test Connection** button for each output.
-1. Look at [Monitoring metrics](stream-analytics-monitoring.md) on the **Monitor** tab. Because the values are aggregated, the metrics are delayed by a few minutes.
+1. Look at the job metrics on the **Monitor** tab, as described in [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md). Because the values are aggregated, the metrics are delayed by a few minutes.
 * If the **Input Events** value is greater than zero, the job can read the input data. If the **Input Events** value isn't greater than zero, there's an issue with the job's input. See [Troubleshoot input connections](stream-analytics-troubleshoot-input.md) for more information. If your job has reference data input, apply splitting by logical name when looking at the **Input Events** metric. If there are no input events from your reference data alone, it likely means that this input source hasn't been configured properly to fetch the right reference dataset. * If the **Data Conversion Errors** value is greater than zero and climbing, see [Azure Stream Analytics data errors](data-errors.md) for detailed information about data conversion errors.
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
High-Availability Linux (also called *Linux-HA*) provides the failover capabilit
Each service is a resource.
-Netezza groups the Netezza-specific services into the nps resource group. When Heartbeat detects problems that imply a host failure condition or loss of service to the Netezza users, Heartbeat can initiate a failover to the standby host. For details about Linux-HA and its terms and operations, see the documentation at [http://www.linux-ha.org](http://www.linux-ha.org/).
+Netezza groups the Netezza-specific services into the nps resource group. When Heartbeat detects problems that imply a host failure condition or loss of service to the Netezza users, Heartbeat can initiate a failover to the standby host.
Distributed Replicated Block Device (DRBD) is a block device driver that mirrors the content of block devices (hard disks, partitions, and logical volumes) between the hosts. Netezza uses the DRBD replication only on the **/nz** and **/export/home** partitions. As new data is written to the **/nz** partition and the **/export/home** partition on the primary host, the DRBD software automatically makes the same changes to the **/nz** and **/export/home** partition of the standby host.
Adding more compute nodes adds more compute power and ability to leverage more p
## Next steps
-To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Netezza migrations](4-visualization-reporting.md).
+To learn more about visualization and reporting, see the next article in this series: [Visualization and reporting for Netezza migrations](4-visualization-reporting.md).
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
synapse-analytics Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new-archive.md
Last updated 05/20/2022
This article describes previous month updates to Azure Synapse Analytics. For the most current month's release, check out [Azure Synapse Analytics latest updates](whats-new.md). Each update links to the Azure Synapse Analytics blog and an article that provides more information.
+## May 2022 update
+
+The following updates are new to Azure Synapse Analytics this month.
+
+### General
+
+**Get connected with the new Azure Synapse Influencer program!** [Join a community of Azure Synapse Influencers](https://aka.ms/synapseinfluencers) who are helping each other achieve more with cloud analytics! The Azure Synapse Influencer program recognizes Azure Synapse Analytics users and advocates who actively support the community by sharing Synapse-related content, announcements, and product news via social media.
+
+### SQL
+
+* **Data Warehouse Migration guide for Dedicated SQL Pools in Azure Synapse Analytics** - With the benefits that cloud migration offers, we hear that you often look for steps, processes, or guidelines to follow for quick and easy migrations from existing data warehouse environments. We just released a set of [Data Warehouse migration guides](/azure/synapse-analytics/migration-guides/) to make your transition to dedicated SQL Pools in Azure Synapse Analytics easier.
+
+* **Automatic character column length calculation** - It's no longer necessary to define character column lengths! Serverless SQL pools let you query files in the data lake without knowing the schema upfront. The best practice was to specify the lengths of character columns to get optimal performance. Not anymore! With this new feature, you can get optimal query performance without having to define the schema. The serverless SQL pool will calculate the average column length for each inferred character column or character column defined as larger than 100 bytes. The schema will stay the same, while the serverless SQL pool will use the calculated average column lengths internally. It will also automatically calculate the cardinality estimation in case there was no previously created statistic.
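+
+ As a minimal sketch, a serverless SQL pool query over Parquet files now performs well without any explicit character column lengths; the storage URL below is a placeholder:
+
+ ```sql
+ SELECT TOP 10 *
+ FROM OPENROWSET(
+     BULK 'https://<storage-account>.dfs.core.windows.net/<container>/*.parquet',
+     FORMAT = 'PARQUET'
+ ) AS rows
+ ```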
+
+### Apache Spark for Synapse
+
+* **Azure Synapse Dedicated SQL Pool Connector for Apache Spark Now Available in Python** - Previously, the Azure Synapse Dedicated SQL Pool connector was only available using Scala. Now, it can be used with Python on Spark 3. The only difference between the Scala and Python implementations is the optional Scala callback handle, which allows you to receive post-write metrics.
+
+ The following are now supported in Python on Spark 3:
+
+ * Read using Azure Active Directory (AD) Authentication or Basic Authentication
+ * Write to Internal Table using Azure AD Authentication or Basic Authentication
+ * Write to External Table using Azure AD Authentication or Basic Authentication
+
+ To learn more about the connector in Python, read [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md).
+
+* **Manage Azure Synapse Apache Spark configuration** - Apache Spark configuration management is always a challenging task because Spark has hundreds of properties. It is also challenging for you to know the optimal value for Spark configurations. With the new Spark configuration management feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. To learn more about the new Spark configuration management feature, read [Manage Apache Spark configuration](./spark/apache-spark-azure-create-spark-configuration.md).
+
+### Synapse Data Explorer
+
+* **Synapse Data Explorer live query in Excel** - Using the new Data Explorer web experience Open in Excel feature, you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members.  You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To learn more about Excel live query, read [Open live query in Excel](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500).
+
+* **Use Managed Identities for External SQL Server Tables** - One of the key benefits of Azure Synapse is the ability to bring together data integration, enterprise data warehousing, and big data analytics. With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now use managed identities instead of entering your credentials.
+
+ An external SQL table is a schema entity that references data stored outside the Synapse Data Explorer database. Using the Create and alter SQL Server external tables command, External SQL tables can easily be added to the Synapse Data Explorer database schema.
+
+ To learn more about managed identities, read [Managed identities overview](/azure/data-explorer/managed-identities-overview).
+
+ To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).
+
+* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/learn/modules/gain-insights-data-kusto-query-language/).
+
+ KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
+
+ Check out the newest [KQL Learn Model](/learn/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
+
+ To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
+
+* **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps [Generally Available]** - The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage).
+
+* **Dynamic events routing from event hub to multiple databases** - Routing events from Event Hub/IoT Hub/Event Grid is an activity commonly performed by Azure Data Explorer (ADX) users. Previously, you could route events only to a single database per defined connection. If you wanted to route the events to multiple databases, you needed to create multiple ADX cluster connections.
+
+ To simplify the experience, we now support routing events data to multiple databases hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing).
+
+* **Configure a database using a KQL inline script as part of JSON ARM deployment template** - Previously, Azure Data Explorer supported running a Kusto Query Language (KQL) script to configure your database during Azure Resource Management (ARM) template deployment. Now, this can be done using an inline script provided inline as a parameter to a JSON ARM template. To learn more about using a KQL inline script, read [Configure a database using a Kusto Query Language script](/azure/data-explorer/database-script).
+
+### Data Integration
+
+* **Export pipeline monitoring as a CSV** - The ability to export pipeline monitoring to CSV has been added after receiving many community requests for the feature. Simply filter the Pipeline runs screen to the data you want and click 'Export to CSV'. To learn more about exporting pipeline monitoring and other monitoring improvements, read [Azure Data Factory monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531).
+
+* **Incremental data loading made easy for Synapse and Azure Database for PostgreSQL and MySQL** - In a data integration solution, incrementally loading data after an initial full data load is a widely used scenario. Automatic incremental source data loading is now natively available for Synapse SQL and Azure Database for PostgreSQL and MySQL. With a simple click, users can "enable incremental extract" and only inserted or updated rows will be read by the pipeline. To learn more about incremental data loading, read [Incrementally copy data from a source data store to a destination data store](../data-factory/tutorial-incremental-copy-overview.md).
+
+* **User-Defined Functions for Mapping Data Flows [Public Preview]** - We hear you that you can find yourself doing the same string manipulation, math calculations, or other complex logic several times. Now, with the new user-defined function feature, you can create customized expressions that can be reused across multiple mapping data flows. User-defined functions will be grouped in libraries to help developers group common sets of functions. Once you've created a data flow library, you can add in your user-defined functions. You can even add in multiple arguments to make your function more reusable. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
+
+* **Assert Error Handling** - Error handling has now been added to sinks following an assert transformation. Assert transformations enable you to build custom rules for data quality and data validation. You can now choose whether to output the failed rows to the selected sink or to a separate file. To learn more about error handling, read [Assert data transformation in mapping data flow](../data-factory/data-flow-assert.md).
+
+* **Mapping data flows projection editing** - New UI updates have been made to source projection editing in mapping data flows. You can now update source projection column names and column types with the click of a button. To learn more about source projection editing, read [Source transformation in mapping data flow](../data-factory/data-flow-source.md).
+
+### Synapse Link
+
+**Azure Synapse Link for SQL [Public Preview]** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive, and the speed to achieve those insights can make all the difference. Traditional ETL and ELT pipelines are costly and time-consuming, and they no longer suffice. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and watch our YouTube video.
+
+> [!VIDEO https://www.youtube.com/embed/pgusZy34-Ek]
+ ## Apr 2022 update
The following updates are new to Azure Synapse Analytics this month.
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
Last updated 04/15/2022
This article lists updates to Azure Synapse Analytics that are published in April 2022. Each update links to the Azure Synapse Analytics blog and an article that provides more information. For previous months' releases, check out [Azure Synapse Analytics - updates archive](whats-new-archive.md).
-The following updates are new to Azure Synapse Analytics this month.
- ## General
-**Get connected with the new Azure Synapse Influencer program!** [Join a community of Azure Synapse Influencers](https://aka.ms/synapseinfluencers) who are helping each other achieve more with cloud analytics! The Azure Synapse Influencer program recognizes Azure Synapse Analytics users and advocates who actively support the community by sharing Synapse-related content, announcements, and product news via social media.
+* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](/cognitive-services/) models, AI models from partners, and bring-your-own-data models.
+* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Azure Synapse Success by Design playbooks are now available on Microsoft Docs. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md).
## SQL
-* **Data Warehouse Migration guide for Dedicated SQL Pools in Azure Synapse Analytics** - With the benefits that cloud migration offers, we hear that you often look for steps, processes, or guidelines to follow for quick and easy migrations from existing data warehouse environments. We just released a set of [Data Warehouse migration guides](/azure/synapse-analytics/migration-guides/) to make your transition to dedicated SQL Pools in Azure Synapse Analytics easier.
-
-* **Automatic character column length calculation** - It's no longer necessary to define character column lengths! Serverless SQL pools let you query files in the data lake without knowing the schema upfront. The best practice was to specify the lengths of character columns to get optimal performance. Not anymore! With this new feature, you can get optimal query performance without having to define the schema. The serverless SQL pool will calculate the average column length for each inferred character column or character column defined as larger than 100 bytes. The schema will stay the same, while the serverless SQL pool will use the calculated average column lengths internally. It will also automatically calculate the cardinality estimation in case there was no previously created statistic.
-
-## Apache Spark for Synapse
-
-* **Azure Synapse Dedicated SQL Pool Connector for Apache Spark Now Available in Python** - Previously, the Azure Synapse Dedicated SQL Pool connector was only available using Scala. Now, it can be used with Python on Spark 3. The only difference between the Scala and Python implementations is the optional Scala callback handle, which allows you to receive post-write metrics.
-
- The following are now supported in Python on Spark 3:
-
- * Read using Azure Active Directory (AD) Authentication or Basic Authentication
- * Write to Internal Table using Azure AD Authentication or Basic Authentication
- * Write to External Table using Azure AD Authentication or Basic Authentication
-
- To learn more about the connector in Python, read [Azure Synapse Dedicated SQL Pool Connector for Apache Spark](./spark/synapse-spark-sql-pool-import-export.md).
-
-* **Manage Azure Synapse Apache Spark configuration** - Apache Spark configuration management is always a challenging task because Spark has hundreds of properties. It is also challenging for you to know the optimal value for Spark configurations. With the new Spark configuration management feature, you can create a standalone Spark configuration artifact with auto-suggestions and built-in validation rules. The Spark configuration artifact allows you to share your Spark configuration within and across Azure Synapse workspaces. You can also easily associate your Spark configuration with a Spark pool, a Notebook, and a Spark job definition for reuse and minimize the need to copy the Spark configuration in multiple places. To learn more about the new Spark configuration management feature, read [Manage Apache Spark configuration](./spark/apache-spark-azure-create-spark-configuration.md).
-
-## Synapse Data Explorer
-
-* **Synapse Data Explorer live query in Excel** - Using the new Data Explorer web experience Open in Excel feature, you can now provide access to live results of your query by sharing the connected Excel Workbook with colleagues and team members.  You can open the live query in an Excel Workbook and refresh it directly from Excel to get the most up to date query results. To learn more about Excel live query, read [Open live query in Excel](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/open-live-kusto-query-in-excel/ba-p/3198500).
-
-* **Use Managed Identities for External SQL Server Tables** - One of the key benefits of Azure Synapse is the ability to bring together data integration, enterprise data warehousing, and big data analytics. With Managed Identity support, Synapse Data Explorer table definition is now simpler and more secure. You can now use managed identities instead of entering in your credentials.
-
- An external SQL table is a schema entity that references data stored outside the Synapse Data Explorer database. Using the Create and alter SQL Server external tables command, External SQL tables can easily be added to the Synapse Data Explorer database schema.
-
- To learn more about managed identities, read [Managed identities overview](/azure/data-explorer/managed-identities-overview).
-
- To learn more about external tables, read [Create and alter SQL Server external tables](/azure/data-explorer/kusto/management/external-sql-tables).
-
-* **New KQL Learn module (2 out of 3) is live!** - The power of Kusto Query Language (KQL) is its simplicity to query structured, semi-structured, and unstructured data together. To make it easier for you to learn KQL, we are releasing Learn modules. Previously, we released [Write your first query with Kusto Query Language](/learn/modules/write-first-query-kusto-query-language/). New this month is [Gain insights from your data by using Kusto Query Language](/learn/modules/gain-insights-data-kusto-query-language/).
-
- KQL is the query language used to query Synapse Data Explorer big data. KQL has a fast-growing user community, with hundreds of thousands of developers, data engineers, data analysts, and students.
-
- Check out the newest [KQL Learn Model](/learn/modules/gain-insights-data-kusto-query-language/) and see for yourself how easy it is to become a KQL master.
-
- To learn more about KQL, read [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
+**Result set size limit increase** - We know that you turn to Azure Synapse Analytics to work with large amounts of data. With that in mind, the maximum size of query result sets in serverless SQL pools has been increased from 200 GB to 400 GB. This limit is shared between concurrent queries. To learn more about this size limit increase and other constraints, read [Self-help for serverless SQL pool](./sql/resources-self-help-sql-on-demand.md?tabs=x80070002#constraints).
-* **Azure Synapse Data Explorer connector for Microsoft Power Automate, Logic Apps, and Power Apps [Generally Available]** - The Azure Data Explorer connector for Power Automate enables you to orchestrate and schedule flows, send notifications, and alerts, as part of a scheduled or triggered task. To learn more, read [Azure Data Explorer connector for Microsoft Power Automate](/azure/data-explorer/flow) and [Usage examples for Azure Data Explorer connector to Power Automate](/azure/data-explorer/flow-usage).
+## Synapse data explorer
-* **Dynamic events routing from event hub to multiple databases** - Routing events from Event Hub/IOT Hub/Event Grid is an activity commonly performed by Azure Data Explorer (ADX) users. Previously, you could route events only to a single database per defined connection. If you wanted to route the events to multiple databases, you needed to create multiple ADX cluster connections.
+* **Web Explorer new homepage** - The new Synapse Web Explorer homepage makes it even easier to get started with Synapse Web Explorer. The [Web Explorer homepage](https://dataexplorer.azure.com/home) now includes the following sections:
- To simplify the experience, we now support routing events data to multiple databases hosted in a single ADX cluster. To learn more about dynamic routing, read [Ingest from event hub](/azure/data-explorer/ingest-data-event-hub-overview#events-routing).
+ * Get started - Sample gallery offering example queries and dashboards for popular Synapse Data Explorer use cases.
+ * Recommended - Popular learning modules designed to help you master Synapse Web Explorer and KQL.
+ * Documentation - Synapse Web Explorer basic and advanced documentation.
-* **Configure a database using a KQL inline script as part of JSON ARM deployment template** - Previously, Azure Data Explorer supported running a Kusto Query Language (KQL) script to configure your database during Azure Resource Management (ARM) template deployment. Now, this can be done using an inline script provided inline as a parameter to a JSON ARM template. To learn more about using a KQL inline script, read [Configure a database using a Kusto Query Language script](/azure/data-explorer/database-script).
+* **Web Explorer sample gallery** - A great way to learn about a product is to see how it is being used by others. The Web Explorer sample gallery provides end-to-end samples of how customers use Synapse Data Explorer for popular use cases such as logs data, metrics data, IoT data, and basic big data. Each sample includes the dataset, well-documented queries, and a sample dashboard. To learn more about the sample gallery, read [Azure Data Explorer in 60 minutes with the new samples gallery](https://techcommunity.microsoft.com/t5/azure-data-explorer-blog/azure-data-explorer-in-60-minutes-with-the-new-samples-gallery/ba-p/3447552).
-## Data Integration
+* **Web Explorer dashboards drill through capabilities** - You can now add drill through capabilities to your Synapse Web Explorer dashboards, which let you easily jump back and forth between dashboard pages. This is made possible by using a contextual filter to connect your dashboards. You define these contextual drill throughs by editing the visual interactions of the selected tile in your dashboard. To learn more about drill through capabilities, read [Use drillthroughs as dashboard parameters](/azure/data-explorer/dashboard-parameters#use-drillthroughs-as-dashboard-parameters).
-* **Export pipeline monitoring as a CSV** - The ability to export pipeline monitoring to CSV has been added after receiving many community requests for the feature. Simply filter the Pipeline runs screen to the data you want and click 'Export to CSV'. To learn more about exporting pipeline monitoring and other monitoring improvements, read [Azure Data Factory monitoring improvements](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/adf-monitoring-improvements/ba-p/3295531).
+* **Time Zone settings for Web Explorer** - Being able to display data in different time zones is powerful. You can now choose to view data in UTC, your local time zone, or the time zone of the monitored device/machine. The Time Zone settings of the Web Explorer now apply to both the query results and the dashboards. When you change the time zone, dashboards are automatically refreshed to present the data in the selected time zone. For more information on time zone settings, read [Change datetime to specific time zone](/azure/data-explorer/web-query-data#change-datetime-to-specific-time-zone).
-* **Incremental data loading made easy for Synapse and Azure Database for PostgreSQL and MySQL** - In a data integration solution, incrementally loading data after an initial full data load is a widely used scenario. Automatic incremental source data loading is now natively available for Synapse SQL and Azure Database for PostgreSQL and MySQL. With a simple click, users can "enable incremental extract" and only inserted or updated rows will be read by the pipeline. To learn more about incremental data loading, read [Incrementally copy data from a source data store to a destination data store](../data-factory/tutorial-incremental-copy-overview.md).
+## Data integration
-* **User-Defined Functions for Mapping Data Flows [Public Preview]** - We hear you that you can find yourself doing the same string manipulation, math calculations, or other complex logic several times. Now, with the new user-defined function feature, you can create customized expressions that can be reused across multiple mapping data flows. User-defined functions will be grouped in libraries to help developers group common sets of functions. Once you've created a data flow library, you can add in your user-defined functions. You can even add in multiple arguments to make your function more reusable. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
+* **Fuzzy Join option in Join Transformation** - Fuzzy matching with a sliding similarity score option has been added to the Join transformation in Mapping Data Flows. You can create inner and outer joins on data values that are similar rather than exact matches! Previously, you would have had to use an exact match. The sliding scale value goes from 60% to 100%, making it easy to adjust the similarity threshold of the match. To learn more about fuzzy joins, read [Join transformation in mapping data flow](../data-factory/data-flow-join.md).
-* **Assert Error Handling** - Error handling has now been added to sinks following an assert transformation. Assert transformations enable you to build custom rules for data quality and data validation. You can now choose whether to output the failed rows to the selected sink or to a separate file. To learn more about error handling, read [Assert data transformation in mapping data flow](../data-factory/data-flow-assert.md).
+* **Map Data [Generally Available]** - We're excited to announce that the Map Data tool is now Generally Available. The Map Data tool is a guided process to help you create ETL mappings and mapping data flows from your source data to Synapse without writing code. To learn more about Map Data, read [Map Data in Azure Synapse Analytics](./database-designer/overview-map-data.md).
-* **Mapping data flows projection editing** - New UI updates have been made to source projection editing in mapping data flows. You can now update source projection column names and column types with the click of a button. To learn more about source projection editing, read [Source transformation in mapping data flow](../data-factory/data-flow-source.md).
+* **Rerun pipeline with new parameters** - You can now change pipeline parameters when re-running a pipeline from the Monitoring page without having to return to the pipeline editor. After running a pipeline with new parameters, you can easily monitor the new run against the old ones without having to toggle between pages. To learn more about rerunning pipelines with new parameters, read [Rerun pipelines and activities](../data-factory/monitor-visually.md#rerun-pipelines-and-activities).
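The rerun experience above is a Monitoring-page (UI) feature. As a loose command-line analogue for starting a run with overridden parameter values, hedged because the workspace, pipeline, and parameter names below are hypothetical:

```azurecli
# Start a new pipeline run, overriding a parameter value for this run only.
az synapse pipeline create-run \
  --workspace-name myWorkspace \
  --name myPipeline \
  --parameters '{"batchDate":"2022-04-01"}'
```
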
-## Synapse Link
+* **User-Defined Functions [Generally Available]** - We're excited to announce that user-defined functions (UDFs) are now Generally Available. With user-defined functions, you can create customized expressions that can be reused across multiple mapping data flows. You no longer have to use the same string manipulation, math calculations, or other complex logic several times. User-defined functions are grouped in libraries to help developers group common sets of functions. To learn more about user-defined functions, read [User defined functions in mapping data flows](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-user-defined-functions-preview-for-mapping-data/ba-p/3414628).
+## Machine learning
-**Azure Synapse Link for SQL [Public Preview]** - At Microsoft Build 2022, we announced the Public Preview availability of Azure Synapse Link for SQL, for both SQL Server 2022 and Azure SQL Database. Data-driven, quality insights are critical for companies to stay competitive. The speed to achieve those insights can make all the difference. The costly and time-consuming nature of traditional ETL and ELT pipelines is no longer enough. With this release, you can now take advantage of low- and no-code, near real-time data replication from your SQL-based operational stores into Azure Synapse Analytics. This makes it easier to run BI reporting on operational data in near real-time, with minimal impact on your operational store. To learn more, read [Announcing the Public Preview of Azure Synapse Link for SQL](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/announcing-the-public-preview-of-azure-synapse-link-for-sql/ba-p/3372986) and watch our YouTube video.
+**Distributed Deep Neural Network Training with Horovod and Petastorm [Public Preview]** - To simplify the process for creating and managing GPU-accelerated pools, Azure Synapse takes care of pre-installing low-level libraries and setting up all the complex networking requirements between compute nodes. This integration allows users to get started with GPU-accelerated pools within just a few minutes.
-> [!VIDEO https://www.youtube.com/embed/pgusZy34-Ek]
+Now, Azure Synapse Analytics provides built-in support for deep learning infrastructure. The Azure Synapse Analytics runtime for Apache Spark 3.1 and 3.2 now includes support for the most common deep learning libraries like TensorFlow and PyTorch. The Azure Synapse runtime also includes supporting libraries like Petastorm and Horovod, which are commonly used for distributed training. This feature is currently available in Public Preview.
+To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md).
## Next steps

[Get started with Azure Synapse Analytics](get-started.md)
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Title: Built-in policy definitions for Azure virtual machine scale sets description: Lists Azure Policy built-in policy definitions for Azure virtual machine scale sets. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The following platform SKUs are currently supported (and more are added periodically):
| Publisher | OS Offer | Sku |
|---|---|---|
-| Canonical | UbuntuServer | 16.04-LTS |
-| Canonical | UbuntuServer | 18.04-LTS |
-| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gensecond |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gs |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-containers |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-containers-gs |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core-with-Containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-core |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-core-with-containers |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gs |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-containers-gs |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-azure-edition |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-core |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-core-smalldisk |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-g2 |
+| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk-g2 |
+| Canonical | UbuntuServer | 20.04-LTS |
+| Canonical | UbuntuServer | 20.04-LTS-Gen2 |
+| Canonical | UbuntuServer | 18.04-LTS |
+| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
+| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
## Requirements for configuring automatic OS image upgrade
virtual-machines Disks Upload Vhd To Managed Disk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure CLI, via direct upload.
Previously updated : 09/07/2021 Last updated : 07/07/2022
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSD managed disks. It isn't yet supported for ultra disks.
-## Prerequisites
+If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs.
++
+## Secure uploads with Azure AD (preview)
+
+> [!IMPORTANT]
+> If Azure AD is being used to enforce upload restrictions, you must use the Azure PowerShell module's [Add-AzVHD command](../windows/disks-upload-vhd-to-managed-disk-powershell.md#secure-uploads-with-azure-ad-preview) to upload a disk. Azure CLI doesn't currently support uploading a disk if Azure AD is being used to enforce upload restrictions.
+
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that all disks and snapshots must use Azure AD for uploading.
+
+## Get started
+
+If you'd prefer to upload disks through a GUI, you can do so using Azure Storage Explorer. For details, see [Use Azure Storage Explorer to manage Azure managed disks](../disks-use-storage-explorer-managed-disks.md)
+
+### Prerequisites
- Download the latest [version of AzCopy v10](../../storage/common/storage-use-azcopy-v10.md#download-and-install-azcopy).
- [Install the Azure CLI](/cli/azure/install-azure-cli).
- If you intend to upload a VHD from on-premises: A fixed size VHD that [has been prepared for Azure](../windows/prepare-for-upload-vhd-image.md), stored locally.
- Or, a managed disk in Azure, if you intend to perform a copy action.
-## Getting started
-
-If you'd prefer to upload disks through a GUI, you can do so using Azure Storage Explorer. For details refer to: [Use Azure Storage Explorer to manage Azure managed disks](../disks-use-storage-explorer-managed-disks.md)
- To upload your VHD to Azure, you'll need to create an empty managed disk that is configured for this upload process. Before you create one, there's some additional information you should know about these disks. This kind of managed disk has two unique states:
Create an empty standard HDD for uploading by specifying both the **--for-upload** parameter and the **--upload-size-bytes** parameter:
Replace `<yourdiskname>`, `<yourresourcegroupname>`, `<yourregion>` with values of your choosing. The `--upload-size-bytes` parameter contains an example value of `34359738880`; replace it with a value appropriate for you.

> [!TIP]
-> If you are creating an OS disk, add `--hyper-v-generation <yourGeneration>` to `az disk create`.
+> If you're creating an OS disk, add `--hyper-v-generation <yourGeneration>` to `az disk create`.
+>
+> If you're using Azure AD to secure disk uploads, add `--data-access-auth-mode 'AzureActiveDirectory'`.
```azurecli
az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os-type Linux --for-upload --upload-size-bytes 34359738880 --sku standard_lrs
az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os
If you would like to upload either a premium SSD or a standard SSD, replace **standard_lrs** with either **premium_LRS** or **standardssd_lrs**. Ultra disks aren't currently supported.
+### Generate writeable SAS
+ Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you'll need a writable SAS, so that you can reference it as the destination for your upload. To generate a writable SAS for your empty managed disk, replace `<yourdiskname>` and `<yourresourcegroupname>`, then use the following command:
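The command itself is elided in this change digest. A minimal sketch of the grant-and-upload flow, assuming the same `az disk grant-access` syntax shown elsewhere in this digest and AzCopy v10 for the copy (the SAS URI placeholder is hypothetical):

```azurecli
# Request a writable SAS on the empty managed disk (86400 seconds is an example duration).
az disk grant-access -n <yourdiskname> -g <yourresourcegroupname> --access-level Write --duration-in-seconds 86400

# Upload the local VHD to the returned SAS; managed disks require page blobs.
azcopy copy "<yourlocalpath>.vhd" "<sas-uri-from-grant-access>" --blob-type PageBlob
```

When the upload completes, revoke access with the command that follows.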
az disk revoke-access -n <yourdiskname> -g <yourresourcegroupname>

Direct upload also simplifies the process of copying a managed disk. You can either copy within the same region or cross-region (to another region).
Direct upload also simplifies the process of copying a managed disk. You can either copy within the same region or cross-region (to another region).
-The follow script will do this for you, the process is similar to the steps described earlier, with some differences since you're working with an existing disk.
+The following script does this for you. The process is similar to the steps described earlier, with some differences because you're working with an existing disk.
> [!IMPORTANT]
> You need to add an offset of 512 when you're providing the disk size in bytes of a managed disk from Azure. This is because Azure omits the footer when returning the disk size. The copy will fail if you don't do this. The following script already does this for you.
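The full copy script is elided in this digest; a hedged illustration of just the 512-byte offset logic (resource names are placeholders):

```azurecli
# Read the source disk's size in bytes; Azure omits the 512-byte VHD footer.
sourceDiskSizeBytes=$(az disk show -g <sourceRG> -n <sourceDiskName> --query diskSizeBytes -o tsv)

# Create the destination disk for upload, adding the footer back to avoid a failed copy.
az disk create -g <targetRG> -n <targetDiskName> -l <targetRegion> \
  --for-upload --upload-size-bytes $((sourceDiskSizeBytes + 512)) --sku standard_lrs
```
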
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/download-vhd.md
Title: Download a Linux VHD from Azure description: Download a Linux VHD using the Azure CLI and the Azure portal.--+++ Previously updated : 08/03/2020- Last updated : 07/07/2022 # Download a Linux VHD from Azure
Your snapshot will be created shortly, and can then be used to download or create another VM.
> > This method is only recommended for VMs with a single OS disk. VMs with one or more data disks should be stopped before download or before creating a snapshot for the OS disk and each data disk.
+## Secure downloads and uploads with Azure AD (preview)
++ ## Generate SAS URL
To download the VHD file, you need to generate a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md?toc=/azure/virtual-machines/windows/toc.json) URL. When the URL is generated, an expiration time is assigned to the URL.
+# [Portal](#tab/azure-portal)
1. On the menu of the page for the VM, select **Disks**.
2. Select the operating system disk for the VM, and then select **Disk Export**.
3. If required, update the value of **URL expires in (seconds)** to give you enough time to complete the download. The default is 3600 seconds (one hour).
4. Select **Generate URL**.
-
+
+# [PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+$diskSas = Grant-AzDiskAccess -ResourceGroupName "yourRGName" -DiskName "yourDiskName" -DurationInSecond 86400 -Access 'Read'
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az disk grant-access --duration-in-seconds 86400 --access-level Read --name yourDiskName --resource-group yourRGName
+```
++ ## Download VHD
+> [!NOTE]
+> If you're using Azure AD to secure managed disk downloads, the user downloading the VHD must have the appropriate [RBAC permissions](#assign-rbac-role).
+
+# [Portal](#tab/azure-portal)
+ 1. Under the URL that was generated, select **Download the VHD file**.

   :::image type="content" source="./media/download-vhd/export-download.PNG" alt-text="Shows the button to download the VHD.":::

2. You may need to select **Save** in the browser to start the download. The default name for the VHD file is *abcd*.
+# [PowerShell](#tab/azure-powershell)
+
+Use the following script to download your VHD:
+
+```azurepowershell
+Connect-AzAccount
+#Set localFolder to your desired download location
+$localFolder = "yourPathHere"
+$blob = Get-AzStorageBlobContent -Uri $diskSas.AccessSAS -Destination $localFolder -Force
+```
+
+When the download finishes, revoke access to your disk using `Revoke-AzDiskAccess -ResourceGroupName "yourRGName" -DiskName "yourDiskName"`.
+
+# [Azure CLI](#tab/azure-cli)
+
+Replace `yourPathHere` and `sas-URI` with your values, then use the following script to download your VHD:
+
+> [!NOTE]
+> If you're using Azure AD to secure your managed disk uploads and downloads, add `--auth-mode login` to `az storage blob download`.
+
+```azurecli
+
+#set localFolder to your desired download location
+localFolder=yourPathHere
+#If you're using Azure AD to secure your managed disk uploads and downloads, add --auth-mode login to the following command.
+az storage blob download -f $localFolder --blob-url "sas-URI"
+```
+
+When the download finishes, revoke access to your disk using `az disk revoke-access --name diskName --resource-group yourRGName`.
+++ ## Next steps
- Learn how to [upload and create a Linux VM from custom disk with the Azure CLI](upload-vhd.md).
virtual-machines Redhat Create Upload Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/redhat-create-upload-vhd.md
This section shows you how to prepare a RHEL 7 distro from an ISO using a kickstart file.
# Disable the root account
usermod root -p '!!'
- # Disable swap in WALinuxAgent
- ResourceDisk.Format=n
- ResourceDisk.EnableSwap=n
- # Configure swap using cloud-init echo 'DefaultEnvironment="CLOUD_CFG=/etc/cloud/cloud.cfg.d/00-azure-swap.cfg"' >> /etc/systemd/system.conf cat > /etc/cloud/cloud.cfg.d/00-azure-swap.cfg << EOF
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
virtual-machines Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Virtual Machines description: Sample Azure Resource Graph queries for Azure Virtual Machines showing use of resource types and tables to access Azure Virtual Machines related resources and properties. Previously updated : 06/16/2022 Last updated : 07/07/2022
virtual-machines Disks Upload Vhd To Managed Disk Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md
Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell
description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 02/01/2022 Last updated : 07/07/2022 linux -+ # Upload a VHD to Azure or copy a managed disk to another region - Azure PowerShell
This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region.
If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs.
-## Getting started
+## Secure uploads with Azure AD (preview)
+
+If you're using [Azure Active Directory (Azure AD)](../../active-directory/fundamentals/active-directory-whatis.md) to control resource access, you can now use it to restrict uploading of Azure managed disks. This feature is currently in preview. When a user attempts to upload a disk, Azure validates the identity of the requesting user in Azure AD and confirms that the user has the required permissions. At a higher level, a system administrator could set a policy at the Azure account or subscription level to ensure that all disks and snapshots must use Azure AD for uploading.
+
+### Prerequisites
+
+### Restrictions
+
+### Assign RBAC role
+
+To access managed disks secured with Azure AD, the requesting user must have either the [Data Operator for Managed Disks](../../role-based-access-control/built-in-roles.md#data-operator-for-managed-disks) role, or a [custom role](../../role-based-access-control/custom-roles-powershell.md) with the following permissions:
+
+- **Microsoft.Compute/disks/download/action**
+- **Microsoft.Compute/disks/upload/action**
+- **Microsoft.Compute/snapshots/download/action**
+- **Microsoft.Compute/snapshots/upload/action**
+
+For detailed steps on assigning a role, see [Assign Azure roles using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md). To create or update a custom role, see [Create or update Azure custom roles using Azure PowerShell](../../role-based-access-control/custom-roles-powershell.md).
+
+## Get started
There are two ways you can upload a VHD with the Azure PowerShell module: You can either use the [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) command, which will automate most of the process for you, or you can perform the upload manually with AzCopy.
-Generally, you should use [Add-AzVHD](#use-add-azvhd). However, if you need to upload a VHD that is larger than 50 GiB, consider [uploading the VHD manually with AzCopy](#manual-upload). VHDs 50 GiB and larger will upload faster using AzCopy.
+Generally, you should use [Add-AzVHD](#use-add-azvhd). However, if you need to upload a VHD that is larger than 50 GiB, consider [uploading the VHD manually with AzCopy](#manual-upload). VHDs 50 GiB and larger upload faster using AzCopy.
For guidance on how to copy a managed disk from one region to another, see [Copy a managed disk](#copy-a-managed-disk).
For guidance on how to copy a managed disk from one region to another, see [Copy
### Upload a VHD
+ > [!IMPORTANT]
+> If Azure AD is being used to enforce upload restrictions, you must use Add-AzVHD to upload a disk. The manual upload process isn't currently supported.
+
+### (Optional) Grant access to the disk
+
+If Azure AD is used to enforce upload restrictions on a subscription or at the account level, [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) only succeeds if attempted by a user that has the [appropriate RBAC role or necessary permissions](#assign-rbac-role). You'll need to [assign RBAC permissions](../../role-based-access-control/role-assignments-powershell.md) to grant access to the disk and generate a writeable SAS.
+
+```azurepowershell
+New-AzRoleAssignment -SignInName <emailOrUserprincipalname> `
+-RoleDefinitionName "Data Operator for Managed Disks" `
+-Scope /subscriptions/<subscriptionId>
+```
+
+### Use Add-AzVHD
+ The following example uploads a VHD from your local machine to a new Azure managed disk using [Add-AzVHD](/powershell/module/az.compute/add-azvhd?view=azps-7.1.0&viewFallbackFrom=azps-5.4.0&preserve-view=true). Replace `<your-filepath-here>`, `<your-resource-group-name>`,`<desired-region>`, and `<desired-managed-disk-name>` with your parameters:
+> [!NOTE]
+> If you're using Auzre AD to enforce upload restrictions, add `DataAccessAuthMode 'AzureActiveDirectory'` to the end of your `Add-AzVhd` command.
+ ```azurepowershell # Required parameters $path = <your-filepath-here>.vhd
$name = <desired-managed-disk-name>
# Optional parameters # $Zone = <desired-zone> # $sku=<desired-SKU>
+# -DataAccessAuthMode 'AzureActiveDirectory'
# To use $Zone or #sku, add -Zone or -DiskSKU parameters to the command Add-AzVhd -LocalFilePath $path -ResourceGroupName $resourceGroup -Location $location -DiskName $name
Now, on your local shell, create an empty standard HDD for uploading by specifyi
Replace `<yourdiskname>`, `<yourresourcegroupname>`, and `<yourregion>` then run the following commands: > [!TIP]
-> If you are creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`.
+> If you're creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`.
```powershell $vhdSizeBytes = (Get-Item "<fullFilePathHere>").length
$diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -Upload
New-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -Disk $diskconfig ```
-If you would like to upload either a premium SSD or a standard SSD, replace **Standard_LRS** with either **Premium_LRS** or **StandardSSD_LRS**. Ultra disks are not yet supported.
+If you would like to upload either a premium SSD or a standard SSD, replace **Standard_LRS** with either **Premium_LRS** or **StandardSSD_LRS**. Ultra disks aren't currently supported.
+
+### Generate writeable SAS
Now that you've created an empty managed disk that is configured for the upload process, you can upload a VHD to it. To upload a VHD to the disk, you'll need a writeable SAS, so that you can reference it as the destination for your upload.
virtual-machines Download Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/download-vhd.md
Title: Download a Windows VHD from Azure description: Download a Windows VHD using the Azure portal.---+++ Previously updated : 01/13/2019- Last updated : 07/07/2022 # Download a Windows VHD from Azure
Your snapshot will be created shortly, and can then be used to download or creat
> > This method is only recommended for VMs with a single OS disk. VMs with one or more data disks should be stopped before download or before creating a snapshot for the OS disk and each data disk. +
+## Secure downloads and uploads with Azure AD (preview)
++ ## Generate download URL To download the VHD file, you need to generate a [shared access signature (SAS)](../../storage/common/storage-sas-overview.md?toc=/azure/virtual-machines/windows/toc.json) URL. When the URL is generated, an expiration time is assigned to the URL.
+# [Portal](#tab/azure-portal)
+ 1. On the page for the VM, click **Disks** in the left menu. 1. Select the operating system disk for the VM. 1. On the page for the disk, select **Disk Export** from the left menu. 1. The default expiration time of the URL is *3600* seconds (one hour). You may need to increase this for Windows OS disks or large data disks. **36000** seconds (10 hours) is usually sufficient. 1. Click **Generate URL**.
+# [PowerShell](#tab/azure-powershell)
+
+Replace `yourRGName` and `yourDiskName` with your values, then run the following command to get your SAS.
+
+```azurepowershell
+$diskSas = Grant-AzDiskAccess -ResourceGroupName "yourRGName" -DiskName "yourDiskName" -DurationInSecond 86400 -Access 'Read'
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Replace `yourRGName` and `yourDiskName` with your values, then run the following command to get your SAS.
+
+```azurecli
+az disk grant-access --duration-in-seconds 86400 --access-level Read --name yourDiskName --resource-group yourRGName
+```
++++ > [!NOTE] > The expiration time is increased from the default to provide enough time to download the large VHD file for a Windows Server operating system. Large VHDs can take up to several hours to download depending on your connection and the size of the VM.
->
->
## Download VHD
+> [!NOTE]
+> If you're using Azure AD to secure managed disk downloads, the user downloading the VHD must have the appropriate [RBAC permissions](#assign-rbac-role).
+
+# [Portal](#tab/azure-portal)
+ 1. Under the URL that was generated, click Download the VHD file. 1. You may need to click **Save** in your browser to start the download. The default name for the VHD file is *abcd*.
+# [PowerShell](#tab/azure-powershell)
+
+Use the following script to download your VHD:
+
+```azurepowershell
+Connect-AzAccount
+#Set localFolder to your desired download location
+$localFolder = "c:\tempfiles"
+$blob = Get-AzStorageBlobContent -Uri $diskSas.AccessSAS -Destination $localFolder -Force
+```
+
+When the download finishes, revoke access to your disk using `Revoke-AzDiskAccess -ResourceGroupName "yourRGName" -DiskName "yourDiskName"`.
+
+# [Azure CLI](#tab/azure-cli)
+
+Replace `yourPathhere` and `sas-URI` with your values, then use the following script to download your VHD:
+
+> [!NOTE]
+> If you're using Azure AD to secure your managed disk uploads and downloads, add `--auth-mode login` to `az storage blob download`.
+
+```azurecli
+
+#set localFolder to your desired download location
+localFolder=yourPathHere
+#If you're using Azure AD to secure your managed disk uploads and downloads, add --auth-mode login to the following command.
+az storage blob download -f $localFolder --blob-url "sas-URI"
+```
+
+When the download finishes, revoke access to your disk using `az disk revoke-access --name diskName --resource-group yourRGName`.
+++ ## Next steps - Learn how to [upload a VHD file to Azure](upload-generalized-managed.md).
virtual-network-manager Concept Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-use-cases.md
Learn about use cases for Azure Virtual Network Manager including managing conne
Connectivity configuration allows you to create different network topologies based on your network needs. You create a connectivity configuration by adding new or existing virtual networks into [network groups](concept-network-groups.md) and creating a topology that meets your needs. The connectivity configuration offers three topology options: mesh, hub and spoke, or hub and spoke with direct connectivity between spoke virtual networks.

### Mesh topology
-When a mesh topology is deployed, all virtual networks have direct connectivity with each other. They don't need to go through other hops on the network to communicate. Mesh topology is useful when all the virtual networks need to communicate directly with each other.
+When a [mesh topology](concept-connectivity-configuration.md#mesh-network-topology) is deployed, all virtual networks have direct connectivity with each other. They don't need to go through other hops on the network to communicate. Mesh topology is useful when all the virtual networks need to communicate directly with each other.
### Hub and spoke topology
-Hub and spoke topology is recommended when you're deploying central infrastructure services in a hub virtual network that are shared by spoke virtual networks. This topology can be more efficient than having these common components in all spoke virtual networks.
+[Hub and spoke topology](concept-connectivity-configuration.md#hub-and-spoke-topology) is recommended when you're deploying central infrastructure services in a hub virtual network that are shared by spoke virtual networks. This topology can be more efficient than having these common components in all spoke virtual networks.
-### Hub and spoke topology with direct connectivity between spoke virtual networks
-This topology combines the two above topologies. It's recommended when you have common central infrastructure in the hub, and you want direct communication between all spokes. Direct connectivity helps you reduce the latency caused by extra network hops when going through a hub.
-
-### Maintaining topology
-AVNM automatically maintains the desired topology you defined in the connectivity configuration when changes are made to your infrastructure. For example, when you add a new spoke to the topology, AVNM can handle the changes necessary to create the connectivity to the spoke and its virtual networks.
+### Hub and spoke topology with direct connectivity
+This topology combines the two above topologies. It's recommended when you have common central infrastructure in the hub, and you want direct communication between all spokes. [Direct connectivity](concept-connectivity-configuration.md#direct-connectivity) helps you reduce the latency caused by extra network hops when going through a hub.
+### Maintaining virtual network topology
+AVNM automatically maintains the desired topology you defined in the connectivity configuration when changes are made to your infrastructure. For example, when you add a new spoke to the topology, AVNM can handle the changes necessary to create the connectivity to the spoke and its virtual networks.
## Security
With Azure Virtual Network Manager, you create [security admin rules](concept-security-admins.md) to enforce security policies across virtual networks in your organization. Security admin rules take precedence over rules defined by network security groups, and they're applied first when analyzing traffic as seen in the following diagram:

Common uses include:
- Create standard rules that must be applied and enforced on all existing VNets and newly created VNets.
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/troubleshoot-nat.md
Refer to the table below for which tools to use to validate NAT gateway connectivity.
| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific |
-To analyze outbound traffic from NAT gateway, use NSG flow logs.
+To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide information on when a connection from your virtual network takes place, from where (source IP and port) to which destination (destination IP and port), along with the state of the connection, the traffic flow direction, and the size of the traffic (packets and bytes sent). A minimal CLI sketch for enabling flow logs follows the links below.
* To learn more about NSG flow logs, see [NSG flow log overview](../../network-watcher/network-watcher-nsg-flow-logging-overview.md).
* For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#enabling-nsg-flow-logs).
* For guides on how to read NSG flow logs, see [Working with NSG flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md#working-with-flow-logs).
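As a minimal sketch of enabling NSG flow logs from the Azure CLI (all resource names here are placeholders; see the guides above for the documented setup):

```azurecli
# Enable NSG flow logs through Network Watcher, writing to a storage account.
az network watcher flow-log create \
  --location eastus \
  --resource-group myResourceGroup \
  --name myNatFlowLog \
  --nsg myNsg \
  --storage-account myStorageAccount
```
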
NAT gateway cannot be deployed in a gateway subnet. VPN gateway uses gateway sub
[Virtual Network NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated with an IPv6 public IP address or IPv6 public IP prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
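For reference, a hedged sketch of deploying a NAT gateway with an IPv4 public IP and attaching it to a subnet (names are placeholders; the subnet may be dual stack, but outbound traffic still uses IPv4):

```azurecli
# Create a NAT gateway that uses an existing IPv4 public IP.
az network nat gateway create \
  --resource-group myResourceGroup \
  --name myNatGateway \
  --public-ip-addresses myIPv4PublicIP \
  --idle-timeout 4

# Associate the NAT gateway with a subnet.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name mySubnet \
  --nat-gateway myNatGateway
```
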
+### Cannot attach NAT gateway to a subnet that contains a VM NIC in a failed state
+
+When you try to associate a NAT gateway with a subnet that contains a virtual machine network interface (NIC) in a failed state, you'll receive an error message indicating that this action can't be performed. You must first get the VM NIC out of the failed state before you can attach the NAT gateway to the subnet.
+
+To troubleshoot NICs in a failed state, follow these steps:
+1. Determine the provisioning state of your NICs using the [Get-AzNetworkInterface PowerShell command](/powershell/module/az.network/get-aznetworkinterface#example-2-get-all-network-interfaces-with-a-specific-provisioning-state) and checking whether the value of "provisioningState" is "Succeeded".
+2. Perform [GET/SET PowerShell commands](/powershell/module/az.network/set-aznetworkinterface#example-1-configure-a-network-interface) on the network interface to update the provisioning state (a rough CLI equivalent is sketched after these steps).
+3. Check the results of this operation by checking the provisioning state of your NICs again (follow the commands from step 1).
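A rough Azure CLI equivalent of that GET/SET cycle, hedged because the documented path above is PowerShell and the resource names here are placeholders:

```azurecli
# GET: check the NIC's provisioning state.
az network nic show -g myResourceGroup -n myNic --query provisioningState -o tsv

# SET: write the NIC back (a harmless tag change forces the GET/PUT cycle).
az network nic update -g myResourceGroup -n myNic --set tags.refreshed=true
```
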
+ ## SNAT exhaustion due to NAT gateway configuration
Common SNAT exhaustion issues with NAT gateway typically have to do with the configuration of the NAT gateway. Common SNAT exhaustion issues include:
The table below describes two common scenarios in which outbound connectivity ma
### TCP idle timeout timers set higher than the default value
-The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If this setting is changed to a higher value than the default, NAT gateway will hold on to flows longer and can create [additional pressure on SNAT port inventory](nat-gateway-resource.md#timers). The table below describes a common scenarion in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
+The NAT gateway TCP idle timeout timer is set to 4 minutes by default but is configurable up to 120 minutes. If this setting is changed to a higher value than the default, NAT gateway will hold on to flows longer and can create [additional pressure on SNAT port inventory](nat-gateway-resource.md#timers). The table below describes a common scenario in which a high TCP idle timeout may be causing SNAT exhaustion and provides possible mitigation steps to take:
| Scenario | Evidence | Mitigation |
|---|---|---|
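Where the mitigation is to shorten the idle timeout, a hedged CLI sketch (gateway name is a placeholder):

```azurecli
# Reset the NAT gateway's TCP idle timeout to the 4-minute default.
az network nat gateway update \
  --resource-group myResourceGroup \
  --name myNatGateway \
  --idle-timeout 4
```
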
A couple of important notes about the NAT gateway and Azure App Services integration:
### Port 25 cannot be used for regional VNet integration with NAT gateway
-Port 25 is an SMTP port that is used to send email. Azure app services regional Virtual network integration cannot use port 25 by design. In a scenario where regional Virtual network integration is enabled for NAT gateway to connect an application to an email SMTP server, traffic will be blocked on port 25 despite NAT gateway working with all other ports for outbound traffic. This block on port 25 cannot be removed.
+Port 25 is an SMTP port that is used to send email. Azure App Services regional virtual network integration can't use port 25 by design. Even if the block on port 25 is removed, you still can't use port 25 with Azure App Services regional virtual network integration.
+
+If NAT gateway is enabled on the integration subnet with your Azure App Services, NAT gateway can still be used to connect outbound to the internet on all ports other than port 25.
**Workaround solution:**
* Set up port forwarding to a Windows VM to route traffic to port 25.
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 06/29/2022 Last updated : 07/06/2022
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
The traffic between virtual machines in peered virtual networks is routed directly through the Microsoft backbone infrastructure, not through a gateway or over the public internet.
You can apply network security groups in either virtual network to block access to other virtual networks or subnets. When configuring virtual network peering, either open or close the network security group rules between the virtual networks. If you open full connectivity between peered virtual networks, you can apply network security groups to block or deny specific access. Full connectivity is the default option. To learn more about network security groups, see [Security groups](./network-security-groups-overview.md).
+## Resize the address space of Azure virtual networks that are peered
+
+You can resize the address space of Azure virtual networks that are peered without incurring any downtime. This feature is useful when you need to grow or resize your virtual networks in Azure after scaling your workloads. Existing peerings on the virtual network don't need to be deleted before you add or delete an address prefix on the virtual network. This feature works for both IPv4 and IPv6 address spaces; a CLI sketch follows the note below.
+
+Note:
+
+This feature does not support the following scenarios if the virtual network to be updated is peered with:
+
+* A classic virtual network
+* A managed virtual network such as the Azure VWAN hub
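A hedged sketch of such a resize with the Azure CLI (names and prefixes are placeholders; pass the complete, updated prefix list in one call):

```azurecli
# Add a second address prefix to a peered virtual network without deleting the peering.
az network vnet update \
  --resource-group myResourceGroup \
  --name myVNet \
  --address-prefixes 10.0.0.0/16 10.1.0.0/16
```
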
++ ## Service chaining
Service chaining enables you to direct traffic from one virtual network to a virtual appliance or gateway in a peered network through user-defined routes.
virtual-wan Certificates Point To Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/certificates-point-to-site.md
Title: 'Generate and export certificates for User VPN connections | Azure Virtual WAN'
+ Title: 'Generate and export certificates for User VPN P2S connections: PowerShell'
+ description: Learn how to create a self-signed root certificate, export a public key, and generate client certificates for Virtual WAN User VPN (point-to-site) connections using PowerShell. Previously updated : 04/27/2021 Last updated : 07/06/2022
-# Generate and export certificates for User VPN connections
+# Generate and export certificates for User VPN connections using PowerShell
+
+User VPN (point-to-site) connections can be configured to require certificate authentication. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 (or later) or Windows Server 2016 (or later).
+
+The PowerShell cmdlets that you use to generate certificates are part of the operating system and don't work on other versions of Windows. The host operating system is only used to generate the certificates. Once the certificates are generated, you can upload them or install them on any supported client operating system.
+
+If you don't have a computer that meets the operating system requirement, you can use [MakeCert](../vpn-gateway/vpn-gateway-certificates-point-to-site-makecert.md) to generate certificates. The certificates that you generate using either method can be installed on any supported client operating system.
-User VPN (point-to-site) connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or Windows Server 2016.
-You must perform the steps in this article on a computer running Windows 10 or Windows Server 2016. The PowerShell cmdlets that you use to generate certificates are part of the operating system and do not work on other versions of Windows. The Windows 10 or Windows Server 2016 computer is only needed to generate the certificates. Once the certificates are generated, you can upload them, or install them on any supported client operating system.
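For orientation, a minimal sketch of the generation steps this article describes (certificate names such as `P2SRootCert` and `P2SChildCert` are conventional placeholders):

```powershell
# Create a self-signed root certificate in the current user's certificate store.
$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject 'CN=P2SRootCert' -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation 'Cert:\CurrentUser\My' -KeyUsageProperty Signing -KeyUsage CertSign

# Generate a client certificate chained to that root (EKU 1.3.6.1.5.5.7.3.2 = client auth).
New-SelfSignedCertificate -Type Custom -DnsName 'P2SChildCert' -KeySpec Signature `
    -Subject 'CN=P2SChildCert' -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation 'Cert:\CurrentUser\My' `
    -Signer $rootCert -TextExtension @('2.5.29.37={text}1.3.6.1.5.5.7.3.2')
```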
+## Install an exported client certificate
+Each client that connects over a P2S connection requires a client certificate to be installed locally. For steps to install a certificate, see [Install client certificates](install-client-certificates.md).
## Next steps
-Continue with the [Virtual WAN steps for user VPN connection](virtual-wan-point-to-site-portal.md#p2sconfig).
+Continue with the [Virtual WAN steps for user VPN connections](virtual-wan-point-to-site-portal.md#p2sconfig).
virtual-wan Install Client Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/install-client-certificates.md
+
+ Title: 'Install a User VPN P2S client certificate'
+
+description: Learn how to install client certificates for User VPN P2S certificate authentication - Windows, Mac, Linux.
+Last updated : 07/06/2022
+# Install client certificates for User VPN connections
+
+When a Virtual WAN User VPN P2S configuration is configured for certificate authentication, each client computer must have a client certificate installed locally. This article helps you install a client certificate locally on a client computer. You can also use [Intune](/mem/intune/configuration/vpn-settings-configure) to install certain VPN client profiles and certificates.
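As a hedged sketch of the Windows case (the file path is a placeholder), an exported `.pfx` client certificate can also be installed from PowerShell:

```powershell
# Prompt for the password that was set when the .pfx was exported (hypothetical path shown).
$pwd = Read-Host -Prompt 'PFX password' -AsSecureString
Import-PfxCertificate -FilePath 'C:\certs\P2SChildCert.pfx' `
    -CertStoreLocation 'Cert:\CurrentUser\My' -Password $pwd
```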
+
+If you want to generate a client certificate, see [Generate and export certificates for User VPN connections](certificates-point-to-site.md).
+
+## <a name="installwin"></a>Windows
++
+## <a name="installmac"></a>macOS
++
+## <a name="installlinux"></a>Linux
+
+The Linux client certificate is installed on the client as part of the client configuration. Use the VPN Gateway [Client configuration - Linux](../vpn-gateway/point-to-site-vpn-client-cert-linux.md) instructions.
+
+## Next steps
+
+Continue with the [Virtual WAN User VPN](virtual-wan-point-to-site-portal.md#p2sconfig) configuration steps.
vpn-gateway Vpn Gateway Certificates Point To Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-certificates-point-to-site.md
Title: 'Generate and export certificates for P2S: PowerShell'
-description: Learn how to create a self-signed root certificate, export a public key, and generate client certificates for VPN Gateway Point-to-Site connections.
-
+description: Learn how to create a self-signed root certificate, export a public key, and generate client certificates for VPN Gateway point-to-site connections.
- Previously updated : 06/03/2021 Last updated : 07/06/2022
-# Generate and export certificates for Point-to-Site using PowerShell
+# Generate and export certificates for point-to-site using PowerShell
-Point-to-Site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or later, or Windows Server 2016. If you are looking for different certificate instructions, see [Certificates - Linux](vpn-gateway-certificates-point-to-site-linux.md) or [Certificates - MakeCert](vpn-gateway-certificates-point-to-site-makecert.md).
+Point-to-site connections use certificates to authenticate. This article shows you how to create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or later, or Windows Server 2016 or later.
-The steps in this article apply to Windows 10 or later, or Windows Server 2016. The PowerShell cmdlets that you use to generate certificates are part of the operating system and do not work on other versions of Windows. The Windows 10 or later, or Windows Server 2016 computer is only needed to generate the certificates. Once the certificates are generated, you can upload them, or install them on any supported client operating system.
+The PowerShell cmdlets that you use to generate certificates are part of the operating system and don't work on other versions of Windows. The host operating system is only used to generate the certificates. Once the certificates are generated, you can upload them or install them on any supported client operating system.
-If you do not have access to a Windows 10 or later, or Windows Server 2016 computer, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) to generate certificates. The certificates that you generate using either method can be installed on any [supported](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq) client operating system.
+If you don't have a computer that meets the operating system requirement, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md) to generate certificates. The certificates that you generate using either method can be installed on any [supported](vpn-gateway-howto-point-to-site-resource-manager-portal.md#faq) client operating system.
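As a minimal sketch of the export step (the subject name is a placeholder), the root certificate's public key can be captured as Base64 text, the format Azure expects when you upload root certificate data:

```powershell
# Find the root certificate and write its public key as Base64 (hypothetical subject name).
$cert = Get-ChildItem -Path 'Cert:\CurrentUser\My' |
    Where-Object { $_.Subject -eq 'CN=P2SRootCert' } | Select-Object -First 1
[System.Convert]::ToBase64String($cert.RawData) | Out-File -FilePath '.\P2SRootCert.txt'
```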
## <a name="install"></a>Install an exported client certificate
-Each client that connects to the VNet over a P2S connection requires a client certificate to be installed locally.
-
-To install a client certificate, see [Install a client certificate for Point-to-Site connections](point-to-site-how-to-vpn-client-install-azure-cert.md).
+Each client that connects over a P2S connection requires a client certificate to be installed locally. To install a client certificate, see [Install a client certificate for point-to-site connections](point-to-site-how-to-vpn-client-install-azure-cert.md).
## Next steps
-Continue with your Point-to-Site configuration.
+Continue with your point-to-site configuration.
* For **Resource Manager** deployment model steps, see [Configure P2S using native Azure certificate authentication](vpn-gateway-howto-point-to-site-resource-manager-portal.md).
-* For **classic** deployment model steps, see [Configure a Point-to-Site VPN connection to a VNet (classic)](vpn-gateway-howto-point-to-site-classic-azure-portal.md).
+* For **classic** deployment model steps, see [Configure a point-to-site VPN connection to a VNet (classic)](vpn-gateway-howto-point-to-site-classic-azure-portal.md).
web-application-firewall Application Gateway Crs Rulegroups Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md
The following rule groups and rules are available when using Web Application Firewall on Application Gateway.
### <a name="general-32"></a> General

|RuleId|Description|
|---|---|
+|200002|Failed to Parse Request Body.|
+|200003|Multipart Request Body Strict Validation.|
|200004|Possible Multipart Unmatched Boundary.|

### <a name="crs800-32"></a> KNOWN-CVES